\section{Introduction}
Conformal field theory has proved to be a very efficient tool in
describing second order phase transitions in two-dimensional systems
and is a commonly used language of string theory. Correlation
functions in CFT can be expressed as sums (or integrals) of
three-point coupling constants and the conformal blocks, fully
determined by the symmetry alone \cite{Belavin:1984vu}.
An explicit calculation of conformal blocks
is still one of the most difficult problems in CFT.
Not only is the form of a general conformal block unknown, but its analytic
properties are still conjectures rather than theorems.
On the other hand,
there exist very efficient recursive methods of an approximate, analytic determination
of a general 4-point conformal block
\cite{Zamolodchikov:ie,Zamolodchikov:2,Zamolodchikov:3}.
They were used for instance in checking the conformal bootstrap in the Liouville theory with the DOZZ
coupling constants \cite{Zamolodchikov:1995aa}, in the study of the $c\to 1$ limit of minimal models
\cite{Runkel:2001ng}, or in obtaining new results in the classical geometry of hyperbolic surfaces
\cite{Hadasz:2005gk}. In the more general context of an arbitrary CFT model these methods
allow for efficient numerical calculations of any 4-point function once the structure constants are known.
In the last year recursion representations have been worked out
for the super-conformal blocks related to the Neveu-Schwarz algebra
\cite{Hadasz:2006sb,Belavin:2006,Belavin:2007,Belavin:2007eq}. The so-called
elliptic recursion was conjectured in \cite{Belavin:2007} for one type of NS blocks
and applied in the numerical
verification of the consistency of $N =1$ super-Liouville theory.
The extension of this method to another type of NS blocks was proposed
in \cite{Belavin:2007eq} where also further numerical support for the consistency of the
$N =1$ super-Liouville theory was given.
The aim of the present paper is to provide a comprehensive derivation
of the elliptic recursion for all types of NS blocks. This is done by
an appropriate extension of the method originally developed in
\cite{Zamolodchikov:2,Zamolodchikov:3} for the Virasoro case.
In our derivation we also use exact analytic expressions
for certain $N=1$ NS superconformal blocks of the $ c =\frac32$
theory obtained in \cite{hjs}.
The organization of the paper is as follows. In Section 2 we present our
notation and basic properties of NS blocks derived in \cite{Hadasz:2006sb}.
Section 3 is devoted to the analysis of the classical limit of
$N=1$ NS supersymmetric Liouville theory. Using the path integral representation
we show that in the classical limit of the supersymmetric
Liouville correlators the leading terms are described by the classical bosonic Liouville action. This
implies in particular that the exponential part of the classical limit
of all NS blocks is given by the classical conformal block of the Virasoro theory.
One of the main points of the method proposed in \cite{Zamolodchikov:2,Zamolodchikov:3}
is that the dependence of the first two terms of the ${1\over \Delta}$ expansion of quantum conformal block
on the external weights and the central charge
can be read off from the ${1\over \delta}$ expansion of the classical block.
Since the extension of this reasoning to the $N=1$ NS case is rather straightforward,
we present it in Section 4 mainly for completeness.
At this point one could use the results of
\cite{Zamolodchikov:2,Zamolodchikov:3} to derive the large $\Delta$ asymptotics
of NS blocks. On the other hand one can follow the general strategy
of \cite{Zamolodchikov:2,Zamolodchikov:3} within the NS theory. This is done
in Section 5. The fact that the null vector decoupling equations of NS theory imply
exactly the same equation for the classical conformal block that one gets in the Virasoro case
can be seen as a consistency check of the path integral arguments
used in Section~3.
Finally in Section 6, using the explicit analytic expressions for $c=\frac32$
superconformal blocks \cite{hjs} we derive the recurrence relations
for all types of NS superconformal blocks.
Some technical details of the elliptic Ansatz
used in \cite{Zamolodchikov:2,Zamolodchikov:3} are given in Appendix A.
In Appendix B it is shown that the recurrence formulae are in
perfect agreement with an exact analytical form
of the conformal blocks obtained in \cite{hjs}.
\section{NS $N=1$ superconformal blocks}
The 4-point NS superconformal blocks are conveniently defined in
terms of the 3-point block
$$
\rho{^{\Delta_3}_\infty}{^{\Delta_2}_{\:x}}{^{\Delta_1}_{\;0}}
: {\cal V}_{\Delta_3}\times {\cal V}_{\Delta_2} \times {\cal V}_{\Delta_1} \ \to \ \mathbb{C},
$$
normalized by the condition
\[
\rho^{\Delta_3\ \Delta_2 \ \Delta_1}_{\infty \ \ x \ \ \ 0}(\nu_3,\nu_2,\nu_1)
\; = \;
\rho^{\Delta_3\ \Delta_2 \ \Delta_1}_{\infty \ \ x \ \ \ 0}(\nu_3,*\nu_2,\nu_1) \; = \; 1,
\]
where $\nu_3,\nu_2,\nu_1$ are super-primary states in NS superconformal Verma
modules
${\cal V}_{\Delta_3}, {\cal V}_{\Delta_2}, {\cal V}_{\Delta_1}$
and $*\nu_2=S_{-{1\over 2}}\nu_2$.\footnote{We shall follow the notation of \cite{Hadasz:2006sb}.}
Then the even,
\begin{eqnarray*}
\mathcal{F}^1_{\Delta}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3 \;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4 \;\hspace{3pt}\, \Delta_1} \right]\!(x)
&=&
x^{\Delta - \underline{\hspace{3pt}}\,\Delta_2 - \Delta_1}
\left(
1 + \sum_{m\in \mathbb{N}} x^m\, F^m_{c, \Delta}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4\;\hspace{3pt}\, \Delta_1} \right]
\right),
\end{eqnarray*}
and the odd,
\begin{eqnarray}
\label{odd_block_def}
\mathcal{F}^{\frac{1}{2}}_{\Delta}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4\;\hspace{3pt}\, \Delta_1} \right]\!(x)
&=&
x^{\Delta - \underline{\hspace{3pt}}\,\Delta_2 - \Delta_1 }
\sum_{k\in \mathbb{N}- \frac{1}{2}}
\hskip -5pt
x^k\,
F^k_{c, \Delta}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3 \;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4\;\hspace{3pt}\, \Delta_1} \right],
\end{eqnarray}
superconformal blocks are determined by their
coefficients:
\begin{eqnarray}
\label{block:definition} && F^{f}_{c, \Delta}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right] \; \\
\nonumber && = \hspace*{-20pt}
\begin{array}[t]{c}
{\displaystyle\sum} \\[2pt]
{\scriptstyle |K|+|M| = |L|+|N| = f }
\end{array}
\hspace*{-20pt} \rho^{\Delta_4\ \Delta_3 \ \Delta}_{\infty \ \ 1 \
\ \ 0} (\nu_4, \underline{\hspace{4pt}}\,\nu_3 , \nu_{\Delta,KM} )
\ \left[B^{f}_{c, \Delta}\right]^{KM,LN} \rho^{\Delta\ \Delta_2 \
\Delta_1}_{\infty \ \ 1 \ \ \ 0}
(\nu_{\Delta,LN}, \underline{\hspace{4pt}}\,\nu_2 , \nu_1 ),
\end{eqnarray}
where {\small $\left[B^{f}_{c, \Delta}\right]^{KM,LN}$} is the
matrix inverse to the Gram matrix of the superconformal Verma module at level $f$
with respect to the basis $\{\nu_{\Delta,LN}\}$,
$\underline{\hspace{4pt}}\,\Delta_i$ and
$\underline{\hspace{4pt}}\,\nu_i$ stand for $\Delta_i$ or
$*\Delta_i$, and $\nu_i$ or $*\nu_i$, respectively, and
$x^{\Delta -*\Delta_2 - \Delta_1} =x^{\Delta - \Delta_2 - \Delta_1-{1\over 2}}. $
It follows from (\ref{block:definition}) that the blocks' coefficients
are polynomials in the external weights $\Delta_i$
and rational functions of the intermediate weight $\Delta$ and the central charge $c.$ They
can be expressed as a sum over the poles in~$\Delta:$
\begin{equation}
\label{first:expansion:Delta}
F^{f}_{c, \Delta}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]
\; = \;
{\rm h}^{f}_{c,\Delta}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]
+
\begin{array}[t]{c}
{\displaystyle\sum} \\[-6pt]
{\scriptscriptstyle
1 < rs \leq 2{f}}
\\[-8pt]
{\scriptscriptstyle
r + s\in 2{\mathbb N}
}
\end{array}
\frac{
{\mathcal R}^{{f}}_{c,\,rs}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]
}
{
\Delta-\Delta_{rs}(c)
}\,,
\end{equation}
with $\Delta_{rs}(c)$ given by the Kac determinant formula for NS Verma modules:
\begin{eqnarray}
\label{delta:rs}
\Delta_{rs}(c)
& = &
-\frac{rs-1}{4} + \frac{1-r^2}{8}b^2 + \frac{1-s^2}{8}\frac{1}{b^2}\,,\hskip 10mm
c=\frac{3}{2} +3\left(b+{1\over b}\right)^2.
\end{eqnarray}
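For orientation, the degenerate weights (\ref{delta:rs}) are elementary to evaluate. The following minimal Python sketch (an illustration, not part of the derivation; the function names are ours) also exhibits the fact, used in Appendix B, that $\Delta_{rr}$ vanishes at $c=\frac32$, i.e.\ at $b^2=-1$:
\begin{verbatim}
# Illustrative sketch: NS Kac weights Delta_{rs}(c), c = 3/2 + 3(b + 1/b)^2.
def c_of_b(b):
    return 1.5 + 3.0 * (b + 1.0 / b)**2

def delta_rs(r, s, b):
    assert (r + s) % 2 == 0          # NS sector: r + s even
    return -(r*s - 1)/4.0 + (1 - r**2)/8.0 * b**2 + (1 - s**2)/8.0 / b**2

b = 1j                               # b^2 = -1, so c = 3/2
print(c_of_b(b))                     # (1.5+0j)
print(delta_rs(3, 3, b))             # ~ 0: Delta_{rr} = 0 at c = 3/2
\end{verbatim}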
It was shown in \cite{Hadasz:2006sb} that
the residue at $\Delta = \Delta_{rs}$ takes the form
\begin{eqnarray}
\label{res:evenf} {\mathcal R}^{m}_{c,\,rs}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right] &=&
S_{rs}(\underline{\hspace{3pt}}\Delta_3)
A_{rs}^c\hspace{-3pt}\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right] F^{m-\frac{rs}{2}}_{c,
\Delta_{rs} + \frac{rs}{2}}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]
\end{eqnarray}
for $m \in {\mathbb N}\cup \{0\}$ and
\begin{eqnarray}
\label{res:oddf} {\mathcal R}^{k}_{c,\,rs}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right] &=&
S_{rs}(\underline{\hspace{3pt}}\Delta_3)A_{rs}^c\hspace{-3pt}\left[^{\widetilde{\underline{\hspace{3pt}}\,\Delta_3}
\;\widetilde{\underline{\hspace{3pt}}\,\Delta_2}}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right] F^{k-\frac{rs}{2}}_{c,
\Delta_{rs} + \frac{rs}{2}}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]
\end{eqnarray}
for $k \in {\mathbb N}- \frac12.$ Here
\(
\widetilde{*\Delta}=\Delta,
\)
\(
\widetilde{\Delta}=*\Delta,
\)
\(
S_{rs}(\Delta)=1,
\)
\(
S_{rs}(*\Delta)=(-1)^{rs}
\)
and
\begin{eqnarray*}
A_{rs}^c\hspace{-3pt}\left[^{{\underline{\hspace{3pt}}\,\Delta_3}
\;{\underline{\hspace{3pt}}\,\Delta_2}}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right] =
A_{rs}(c)\,
P^{rs}_{c}\!\left[^{\underline{\hspace{3pt}}\,\Delta_3}_{\hspace{5pt}\Delta_4}\right]
P^{rs}_{c}\!\left[^{\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{5pt}\Delta_1}\right],
\end{eqnarray*}
where
\begin{equation}
\label{P:1} P^{rs}_{c}\!\left[^{\Delta_2}_{\Delta_1}\right] \; =
\; \prod_{p=1-r}^{r-1} \prod_{q=1-s}^{s-1}
\left(\frac{2a_1-2a_2 -p b - q b^{-1}}{2\sqrt2}\right) \left(\frac{2a_1 + 2a_2 +p b + q b^{-1}}{2\sqrt2}\right)
\end{equation}
with $p+q -(r+s) \in 4{\mathbb Z} + 2,$
\begin{equation}
\label{P:2}
P^{rs}_{c}\!\left[^{*\Delta_2}_{\hspace{4pt}\Delta_1}\right] \; =
\; \prod_{p=1-r}^{r-1} \prod_{q=1-s}^{s-1}
\left(\frac{2a_1-2a_2 -p b - q b^{-1}}{2\sqrt2}\right) \left(\frac{2a_1 + 2a_2 +p b + q b^{-1}}{2\sqrt2}\right)
\end{equation}
with $p+q -(r+s) \in 4{\mathbb Z}$ and
\begin{equation}
\label{A:rs:2}
A_{rs}(c)
\; = \;
\frac12
\prod_{p=1-r}^r
\prod_{q=1-s}^s
\left(\frac{1}{\sqrt{2}}\left(p b + \frac{q}{b}\right)\right)^{-1}
\hskip -3mm,
\hskip .5cm
p+q \in 2{\mathbb Z}, \; (p,q) \neq (0,0),(r,s).
\end{equation}
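The product formulae (\ref{P:1})--(\ref{A:rs:2}) are equally mechanical to evaluate. A minimal sketch of (\ref{A:rs:2}) for generic real $b$ follows (illustration only; note that for $(r,s)=(1,1)$ the product is empty, so $A_{11}(c)=\frac12$):
\begin{verbatim}
# Illustrative sketch: the coefficient A_{rs}(c) of the product formula above.
import math

def A_rs(r, s, b):
    prod = 1.0
    for p in range(1 - r, r + 1):
        for q in range(1 - s, s + 1):
            if (p + q) % 2 != 0:                 # keep p + q even
                continue
            if (p, q) in ((0, 0), (r, s)):       # excluded factors
                continue
            prod *= (p * b + q / b) / math.sqrt(2.0)
    return 0.5 / prod

print(A_rs(1, 1, 0.7))    # 0.5 (empty product)
print(A_rs(2, 2, 0.7))    # a generic nonzero value, ~ 0.416
\end{verbatim}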
\section{Supersymmetric Liouville theory and classical limit of superconformal blocks}
\setcounter{equation}{0}
Within a path integral approach the $N=1$ super-Liouville theory
is defined by the action:
\begin{equation}
\label{L:SLFT}
{\cal S}_{\rm\scriptscriptstyle SLFT}=
\int\!d^2z
\left(\frac{1}{2\pi}\left|\partial\phi\right|^2
+
\frac{1}{2\pi}\left(\psi\bar\partial\psi + \bar\psi\partial\bar\psi\right)
+
2i\mu b^2\bar\psi\psi {\rm e}^{b\phi}
+
2\pi b^2\mu^2{\rm e}^{2b\phi}\right).
\end{equation}
Each super-primary field $V_a$ with conformal dimension $
\Delta_a=\bar\Delta_a={a(Q-a)\over 2} $ and all its descendant
Virasoro primaries are represented by exponentials:
$$
\begin{array}{lllllllllll}
V_a &=& {\rm e}^{a\phi},
\\[5pt]
\Lambda _a &=& \left[S_{-1/2},V_a\right]
=
-ia\psi{\rm e}^{a\phi},
\\[5pt]
\bar\Lambda _a &=& \left[\bar S_{-1/2},V_a\right]
=
-i a\bar \psi{\rm
e}^{a \phi},
\\[5pt]
\widetilde V_a &=& \left\{S_{-1/2},\left[\bar S_{-1/2},V_a\right]
\right\}
=
a^2\psi\bar\psi{\rm e}^{a\phi} -2i\pi\mu b a {\rm e}^{(a
+b)\phi}.
\end{array}
$$
One has for instance
\begin{equation}
\label{cor} \langle V_{4} V_{3}\tilde V_{2}V_{1} \rangle=
\int\limits {\cal D}\phi {\cal D}\psi {\cal D}\bar\psi\;
{\rm e}^{-{\cal S}_{\rm\scriptscriptstyle SLFT}[\phi,\psi]}
{\rm e}^{a_4\phi} {\rm e}^{a_3\phi}
\left( a^2_2\psi\bar\psi{\rm e}^{a_2\phi} -2i\pi\mu b a_2 {\rm e}^{(a_2 +b)\phi} \right)
{\rm e}^{a_1\phi}.
\end{equation}
In order to analyze the classical limit ($b\to 0$, $2\pi \mu b^2 \to m={\rm const}$)
of this correlator
one may integrate fermions out.
Since the integration is Gaussian and the operator
${\rm e}^{b\phi}$ is light, one can expect
in the case of
heavy weights
\[
a = \textstyle \frac{Q}{2}\left(1-\lambda\right),
\hskip 5mm
ba\to {1-\lambda\over 2},
\hskip 5mm
2{b^2}\Delta\to \delta={1-\lambda^2\over 4},
\]
the following
asymptotic behavior
$$
\langle V_{4} V_{3}\tilde V_{2}V_{1}\rangle
\sim
{\textstyle {1\over b^2}}\,
{\rm e}^{-{1\over 2 b^2}S_{\rm\scriptscriptstyle cl}[\delta_4,\delta_3,\delta_2,\delta_1]},
$$
where $S_{\rm\scriptscriptstyle cl}[\delta_4,\delta_3,\delta_2,\delta_1]$ is the bosonic Liouville
action\footnote{The factor ${1\over 2}$ in front of the Liouville classical action comes
from the different parameterizations of the central charge in the NS and in the bosonic Liouville theory.}
$$
S[\phi]=
\frac{1}{2\pi}
\int\!d^2z
\left(\left|\partial\phi\right|^2
+
m^2{\rm e}^{2\phi}\right)
$$
calculated on the classical configuration $\varphi$ satisfying the Liouville equation
$$
\partial\bar\partial \varphi - {m^2}{\rm e}^{2\varphi}=\sum\limits_{i=1}^4 {1-\lambda_i\over 4}\delta(z-z_i).
$$
On the other hand in $N=1$ supersymmetric Liouville theory the
correlator $\langle V_{4}V_{3}\tilde V_{2}V_{1} \rangle $
can be expressed as an integral over the spectrum. In the case of
standard locations $z_4=\infty, z_3=1, z_2=x, z_1=0$ one has
\begin{equation}
\label{ql}
\langle V_{4} V_{3}\tilde V_{2}V_{1} \rangle
\;\ =
\int\limits_{\frac{Q}{2} + i{\mathbb R}_+}
\hskip -10pt
\frac{da}{2\pi i}
\Big(
C_{43s} \tilde C_{s21}
\left|\mathcal{F}_{\Delta}^{1}\!\left[^{\Delta_3 \;*\Delta_2}_{\Delta_4 \;\hspace{4.5pt}\Delta_1} \right]\!(x)\right|^2
+
\tilde{C}_{43a} {C}_{a 21}
\left| \mathcal{F}_{\Delta}^{1\over 2}\!\left[^{\Delta_3\;*\Delta_2}_{\Delta_4\;\hspace{4.5pt}\Delta_1} \right]\!(x) \right|^2
\Big),
\end{equation}
where $C$ and $\tilde C$ are the two independent supersymmetric Liouville structure constants
\[
C_{a 21}=
\langle
V_{a}(\infty,\infty)V_{a_2}(1,1)V_{a_1}(0,0)
\rangle ,
\hskip 1cm
\tilde C_{a 21}
=\langle
V_{a}(\infty,\infty)\tilde V_{a_2}(1,1)V_{a_1}(0,0)
\rangle.
\]
As in the case of 4-point functions the path integral representation yields
the asymptotic behavior
\begin{equation}
\label{3as}
\begin{array}{llll}
C_{a 21}
&\sim &
{\rm e}^{-{1\over 2 b^2}S_{\rm\scriptscriptstyle cl}[\delta,\delta_2,\delta_1]},
\\
\tilde C_{a 21}
& \sim &
{\textstyle{1\over b^2}}\,
{\rm e}^{-{1\over 2 b^2}S_{\rm\scriptscriptstyle cl}[\delta,\delta_2,\delta_1]},
\end{array}
\end{equation}
where $S_{\rm\scriptscriptstyle cl}[\delta,\delta_2,\delta_1]$ is the 3-point classical bosonic Liouville action.
Following the reasoning of \cite{Zamolodchikov:3,Zamolodchikov:1995aa} one can apply the path integral arguments
to
the correlator (\ref{cor}) projected on the even and on the odd subspaces of
the $\Delta$ superconformal family of intermediate states.
This leads to the following $b\to 0$ asymptotic behavior:
\begin{equation}
\label{4as}
\begin{array}{lll}
\langle V_{4} V_{3}\mid_{\Delta}^{\rm \scriptscriptstyle even}\tilde V_{2}V_{1}\rangle
& \sim &
{\textstyle {1\over b^2}}\,
{\rm e}^{-{1\over 2 b^2}S_{\rm\scriptscriptstyle cl}[\delta_4,\delta_3,\delta_2,\delta_1|\delta]},
\\[5pt]
\langle V_{4} V_{3}\mid_{\Delta}^{\rm \scriptscriptstyle odd}\tilde V_{2}V_{1}\rangle
& \sim &
{\textstyle {1\over b^2}}\,
{\rm e}^{-{1\over 2 b^2}S_{\rm\scriptscriptstyle cl}[\delta_4,\delta_3,\delta_2,\delta_1|\delta]},
\end{array}
\end{equation}
where the ``$\Delta$-projected'' classical action is given by
$$
S_{\rm\scriptscriptstyle cl}[\delta_4,\delta_3,\delta_2,\delta_1|\delta]
=
S_{\rm\scriptscriptstyle cl}[\delta_4,\delta_3,\delta]
+S_{\rm\scriptscriptstyle cl}[\delta,\delta_2,\delta_1]
-
f_{\delta}
\!\left[_{\delta_{4}\;\delta_{1}}^{\delta_{3}\;\delta_{2}}\right]
\!(x)
- \bar f_{\delta}
\!\left[_{\delta_{4}\;\delta_{1}}^{\delta_{3}\;\delta_{2}}\right]
\!(\bar x)
$$
and
\(
f_{\delta}\!\left[_{\delta_{4}\;\delta_{1}}^{\delta_{3}\;\delta_{2}}\right]\!(x)
\)
is the classical conformal block, defined in terms of
the $\tilde Q\to \infty$ limit of the quantum conformal block
in the Virasoro $c=1+6\tilde Q^2$ CFT:
\begin{equation}
\label{defccb}
{\cal F}_{\!1+6\tilde Q^2,\Delta}
\!\left[_{\Delta_{4}\;\Delta_{1}}^{\Delta_{3}\;\Delta_{2}}\right]\!(x)
\; \stackrel{\tilde Q \to \infty}{\sim} \;
\exp \left\{
\tilde Q^2\,f_{\delta}
\!\left[_{\delta_{4}\;\delta_{1}}^{\delta_{3}\;\delta_{2}}\right]\!(x)
\right\}.
\end{equation}
Equations (\ref{ql}) -- (\ref{4as}) then
imply:
\begin{eqnarray}
\label{claslim1}
\mathcal{F}_{\Delta}^{1}\!
\left[^{\Delta_3 \;*\Delta_2}_{\Delta_4 \;\hspace{4.5pt}\Delta_1} \right]\!(x)
&\sim &
{\rm e}^{{1\over 2 b^2} f_{\delta}
\!\left[_{\delta_{4}\;\delta_{1}}^{\delta_{3}\;\delta_{2}}\right](x)},
\hskip 1cm
\mathcal{F}_{\Delta}^{1\over 2}\!
\left[^{\Delta_3\;*\Delta_2}_{\Delta_4\;\hspace{4.5pt}\Delta_1} \right]\!(x)
\;\sim \;
{\rm e}^{{1\over 2b^2} f_{\delta}
\!\left[_{\delta_{4}\;\delta_{1}}^{\delta_{3}\;\delta_{2}}\right](x)}.
\end{eqnarray}
Using representations analogous to (\ref{ql}) and the same
reasoning for the other 4-point correlators of primary fields $V_a,
\Lambda_a, \bar \Lambda_a, \tilde V_a$ one gets
\begin{equation}
\label{claslim2}
\begin{array}{lllllllll}
\mathcal{F}_{\Delta}^{1}\!
\left[^{\Delta_3\;\Delta_2}_{\Delta_4\;\Delta_1} \right]\!(x)
& \sim &
{\rm e}^{{1\over 2b^2} f_{\delta}
\!\left[_{\delta_{4}\;\delta_{1}}^{\delta_{3}\;\delta_{2}}\right](x)},
&&
\mathcal{F}_{\Delta}^{1\over 2}\!
\left[^{\Delta_3\;\Delta_2}_{\Delta_4\;\Delta_1} \right]\!(x)
& \sim &
b^2\,
{\rm e}^{{1\over 2b^2} f_{\delta}
\!\left[_{\delta_{4}\;\delta_{1}}^{\delta_{3}\;\delta_{2}}\right](x)},
\\[6pt]
\mathcal{F}_{\Delta}^{1}\!
\left[^{*\Delta_3\;*\Delta_2}_{\hspace{4.5pt}\Delta_4\;\hspace{4.5pt}\Delta_1} \right]\!(x)
& \sim &
{\rm e}^{{1\over 2b^2} f_{\delta}
\!\left[_{\delta_{4}\;\delta_{1}}^{\delta_{3}\;\delta_{2}}\right](x)},
&&
\mathcal{F}_{\Delta}^{1\over 2}\!
\left[^{*\Delta_3\;*\Delta_2}_{\hspace{4.5pt}\Delta_4\;\hspace{4.5pt}\Delta_1} \right]\!(x)
& \sim &
{\textstyle {1\over b^2}}\,
{\rm e}^{{1\over 2b^2} f_{\delta}
\!\left[_{\delta_{4}\;\delta_{1}}^{\delta_{3}\;\delta_{2}}\right](x)}.
\end{array}
\end{equation}
The properties of the classical conformal block relevant for
the elliptic recurrence relations
were already derived by Al.~B.~Zamolodchikov
in the Virasoro CFT \cite{Zamolodchikov:3}.
In the next two sections we shall nevertheless present a step by step derivation of
these properties in NS SCFT. This can be seen as a nontrivial consistency
check of heuristic path integral arguments of this section.
\section{Large $\Delta$ vs.\ classical asymptotics of superconformal blocks}
\setcounter{equation}{0}
As in the bosonic case the first step in the derivation of the elliptic recurrence
is to find the large $\Delta$ asymptotic of the conformal block.
The method of calculations proposed in \cite{Zamolodchikov:3} is based
on the observation that the full dependence of the first two terms in the large $\Delta$ expansion
on the variables $\Delta_i, c$
can be read off from the first two terms of the ${1\over \delta}$ expansion
of the classical block. While in the case of even NS blocks the reasoning is essentially the same
as in \cite{Zamolodchikov:3}, the odd case is slightly more complicated.
Let us first note that on each level of the NS Verma module the
determinant of the Gram matrix is proportional to $\Delta$
and each matrix element of its inverse contains a
factor $\Delta^{-1}.$ On the other hand it follows from the
properties of the 3-point superconformal blocks
\cite{Hadasz:2006sb} that in a generic case $\rho^{\Delta_4\
\Delta_3 \ \Delta}_{\infty \ \ 1 \ \ \ 0} (\nu_4, \nu_3 ,
\nu_{\Delta,KM} )$ does not contain the factor $\Delta$, while for
all odd levels $|K|\in \mathbb{N}-{1\over 2}$, $\rho^{\Delta_4\
\Delta_3 \ \Delta}_{\infty \ \ 1 \ \ \ 0} (\nu_4, *\nu_3 ,
\nu_{\Delta,KM} )$ is proportional to $\Delta$. Since
the power series defining the odd
blocks (\ref{odd_block_def}) do not contain zeroth order terms it
follows from the definition (\ref{block:definition}) that the
functions
$$
\mathcal{G}_{\Delta}^{1\over 2}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4\;\hspace{3pt}\, \Delta_1} \right]\!(x)
\; = \;
\ln
\mathcal{F}_{\Delta}^{1\over 2}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4\;\hspace{3pt}\, \Delta_1} \right]\!(x)
$$
admit power series expansions of the form
\begin{eqnarray*}
\mathcal{G}_{\Delta}^{1\over 2}\!
\left[^{\Delta_3
\;\Delta_2}_{\Delta_4
\;\Delta_1} \right]\!(x)
&=&
(\Delta - \Delta_2 - \Delta_1)\ln x -\ln \Delta + \sum\limits_{n=0}^\infty
G_n x^n,
\\
\mathcal{G}_{\Delta}^{1\over 2}\!
\left[^{\Delta_3
\;*\Delta_2}_{\Delta_4
\;\hspace{4.5pt}\Delta_1} \right]\!(x)
&=&
(\Delta - *\Delta_2 - \Delta_1)\ln x + \sum\limits_{n=0}^\infty
G^*_n x^n,
\\
\mathcal{G}_{\Delta}^{1\over 2}\!
\left[^{*\Delta_3
\;*\Delta_2}_{\hspace{4.5pt}\Delta_4
\;\hspace{4.5pt}\Delta_1} \right]\!(x)
&=&
(\Delta - *\Delta_2 - \Delta_1)\ln x + \ln \Delta+\sum\limits_{n=0}^\infty
G^{**}_n x^n,
\end{eqnarray*}
where the coefficients $G_n, G^*_n, G^{**}_n $ are rational functions of $\Delta, \Delta_i, c$.
We shall consider the first case in more detail. One has:
\begin{eqnarray*}
G_n&=&{P_n(\Delta,\Delta_i,c)\over Q_n(\Delta, c)},
\end{eqnarray*}
where $P_n(\Delta,\Delta_i,c)$ and $ Q_n(\Delta, c)$ are polynomials in all their arguments. The existence of the semiclassical limit
(\ref{claslim2}) implies that the maximal homogeneous degree of $P_n(\Delta,\Delta_i,c),$
$$
P^{N_n+1}_n(\Delta,\Delta_i,c)= \Delta^{N_n} \left(A_n \Delta + \sum\limits_{i=1}^4
B^i_n \Delta_i + C_n c\right) + \Delta^{N_n-1}\left( \dots \right.,
$$
is greater by 1 than the maximal homogeneous degree of $Q_n(\Delta, c),$
$$
Q^{N_n}_n(\Delta, c) = a_n\Delta^{N_n} + b_n \Delta^{N_n-1} c + c_n \Delta^{N_n-2} c^2 +\dots
$$
The coefficients $A_n,\ldots, c_n$ are by definition independent of $c,\Delta$ and $\Delta_i.$
It follows from (\ref{block:definition}) and the Kac determinant formula for NS supermodules
that $a_n\neq 0$,
hence
$$
f_{\delta}^n
\!\left[_{\delta_{4}\;\delta_{1}}^{\delta_{3}\;\delta_{2}}\right]
={\delta^{N_n} \left(A_n \delta + \sum\limits_{i=1}^4
B^i_n \delta_i + 6 C_n \right) + \delta^{N_n-1}\left( \dots \right.\over
a_n\delta^{N_n} + 6 b_n \delta^{N_n-1} + 36 c_n \delta^{N_n-2} +\dots}.
$$
Expanding in reciprocal powers of $\delta$ one gets
$$
f_{\delta}^n
\!\left[_{\delta_{4}\;\delta_{1}}^{\delta_{3}\;\delta_{2}}\right]
={A_n\over a_n} \delta
+ \sum\limits_{i=1}^4{ B^i_n \over a_n}\delta_i + {6( C_n +A_n b_n)\over a_n}
+ {\cal O}\left({1\over \delta}\right).
$$
On the other hand the ${1\over \Delta}$ expansion of
$G_n$ takes the form
\begin{equation}
\label{G_eq}
G_n={A_n\over a_n} \Delta
+ \sum\limits_{i=1}^4 { B^i_n \over a_n} \Delta_i + {C_n + A_n b_n \over a_n} c+
{D_n\over a_n} +{\cal O} \left({1\over \Delta}\right),
\end{equation}
where $D_n$ is the coefficient in front of $\Delta^{N_n}$ in the polynomial $P_n.$
\section{Null vector decoupling equation}
\setcounter{equation}{0}
We consider 5-point correlators of primary fields $V_{i} =
V_{a_i}(z_i, \bar{z}_i) $ or $\Lambda_{i} = \Lambda_{a_i}(z_i,
\bar{z}_i) $
in the limit $a_5 \to -b.$ The field $V_{-b}\left(z_5,\bar z_5\right)$
is degenerate and satisfies the null vector decoupling equation:
\begin{eqnarray*}
V_0(z_5,\bar z_5) \equiv \left(L_{-1}S_{-\frac12} + b^2
S_{-\frac32}\right)V_{-b}\left(z_5,\bar z_5\right) = 0.
\end{eqnarray*}
Applying the conformal Ward identities
to the correlators $\langle V_{4}
\Lambda_{3} V_{0} V_{2}V_{1}\rangle$, $\langle V_{4} V_{3} V_{0}
\Lambda_{2}V_{1}\rangle$, $\langle V_{4} V_{3} V_{0}
V_{2}\Lambda_{1}\rangle$, and $\langle V_{4} \Lambda_{3} V_{0}
\Lambda_{2}\Lambda_{1}\rangle$
one can obtain the following differential
equations ($ z_4 \to \infty $):
\begin{eqnarray}
\nonumber
&& \hskip -1cm \left[
\partial^2_{5} + b^2 \left(
\frac{1}{z_{53}} \partial_{3} +
\frac{1}{z_{52}} \partial_{2} +
\frac{1}{z_{51}} \partial_{1} +
\frac{2\Delta_1}{z_{51}^2} \right) \right]
\langle V_{4} V_{3} V_5 V_2 V_1\rangle
\\[3pt]
\nonumber
& = &
\left( \partial_{z_5} - \frac{b^2}{z_{52}} \right)
\langle V_{ 4} V_3 \Lambda_5 \Lambda_{2} V_1\rangle - \left(
\partial_{z_5} - \frac{b^2}{z_{53}} \right)
\langle V_4 \Lambda_{3} \Lambda_5 V_2V_1\rangle
\\ [3pt]
\nonumber
&+& b^2
\left( \frac{1}{z_{53}} - \frac{1}{z_{52}} \right)
\langle V_4 \Lambda_{3} V_5 \Lambda_{2}V_1 \rangle,
\\[10pt]
\label{5:point:after:limit}
&& \hskip -1cm
b^2 \left[
\left( \frac{1}{z_{15}} + \frac{1}{z_{52}} \right) \partial_{2} + \frac{2\Delta_2}{z_{52}^2} \right]
\langle V_4 V_3 V_5 V_2V_1\rangle
\\ [3pt]
\nonumber
& = &
b^2 \left( \frac{1}{z_{35}} + \frac{1}{z_{51}} \right)
\langle V_4 \Lambda_{3} V_5 \Lambda_{2}V_1\rangle
-\left( \partial_{z_5} + \frac{b^2}{z_{15}} \right)
\langle V_4 V_3 \Lambda_5 \Lambda_{2}V_1\rangle,
\\[10pt]
\nonumber
&& \hskip -1cm
b^2 \left[
\left( \frac{1}{z_{15}} + \frac{1}{z_{53}} \right)
\partial_{3} + \frac{2\Delta_3}{z_{53}^2} \right]
\langle V_4 V_3 V_5 V_2V_1\rangle
\\ [3pt]
\nonumber
& = &
- b^2 \left( \frac{1}{z_{25}} + \frac{1}{z_{51}} \right)
\langle V_4 \Lambda_{3} V_5 \Lambda_{2}V_1\rangle
+ \left( \partial_{z_5} + \frac{b^2}{z_{15}} \right)
\langle V_4 \Lambda_{3} \Lambda_5 V_2V_1\rangle .
\end{eqnarray}
Adding the second equation to the first one and subtracting from
the result the third equation we obtain:
\begin{eqnarray}
&& \hskip -1cm \left[
\partial^2_{z_5}\! + b^2 \left(
\frac{1}{z_{51}} \partial_{1} +
\left( \frac{1}{z_{15}} + \frac{2}{z_{52}} \right) \partial_{2}
+\left( \frac{1}{z_{15}} + \frac{2}{z_{53}} \right) \partial_{3}
+ \frac{2\Delta_1}{z_{51}^2}
+ \frac{2\Delta_2}{z_{52}^2}
+ \frac{2\Delta_3}{z_{53}^2}
\right) \right]
\langle V_4 V_3 V_{5} V_2V_1\rangle
\nonumber
\\[3pt]
\label{suma}
&=&
b^2
\left( \frac{1}{z_{51}} - \frac{1}{z_{52}} \right)
\langle V_4 V_3 \Lambda_5
\Lambda_{2} V_1\rangle - b^2\left(
\frac{1}{z_{51}} - \frac{1}{z_{53}} \right)
\langle V_4 \Lambda_{3} \Lambda_5 V_2V_1\rangle
\end{eqnarray}
Since $V_{-b}$ and $\Lambda_{-b}$ are ``light'' fields, in the classical limit
all the correlators in (\ref{suma}) have the form
$$
\chi(z_5)\,
{\rm e}^{-\frac{1}{2 b^2}S_{\rm\scriptscriptstyle
cl}[\delta_4,\delta_3,\delta_2,\delta_1]}\ .
$$
Therefore, for $b \to 0$ and $ \Delta_1, \Delta_2, \Delta_3, \Delta_4$ of order $b^{-2},$
we have:
\begin{eqnarray*}
\partial_{1},\; \partial_{2},\; \partial_{3} = {\cal O}\left(b^{-2}\right),
\qquad \Delta_5,\; \partial_{z_5} = {\cal O}\left(1\right).
\end{eqnarray*}
Keeping only the leading terms in (\ref{suma}) we thus get a
closed equation for the classical limit of $ \langle V_4 V_3 V_5
V_2V_1\rangle$. In the standard locations $z_1= 0, z_3 = 1,$ $z_5
= z,\ z_2 = x,$ it takes the form:
\begin{eqnarray}
\label{eigen:2}
\nonumber
\Bigg\lbrace \partial^2_z
&+& 2b^2\left[
\frac{\Delta_4-\Delta_3-\Delta_2-\Delta_1}{z(z-1)}
+
\frac{\Delta_3}{(z-1)^2}
+
\frac{\Delta_2}{(z-x)^2}
+
\frac{\Delta_1}{z^2}
\right] \Bigg\rbrace
\langle V_4 V_3 V_5 V_2 V_1\rangle
\\[4pt]
&+& 2b^2\frac{x(x-1)}{z(z-1)(z-x)}
\frac{\partial}{\partial x} \,
\langle V_4 V_3 V_5 V_2 V_1\rangle
= 0.
\end{eqnarray}
Let us consider the contribution to this correlation function
from an even subspace of a single NS Verma module
${\cal V}_{\Delta}\!\!\otimes \overline{{\cal V}}_{\Delta}$.
In the classical limit one gets:
\begin{eqnarray*}
\Big\langle
V_4\left(\infty\right) V_3\left(1,1\right) V_{-b}\left(z,\bar
z\right) \mid_{\Delta}^{\rm \scriptscriptstyle even}
V_2\left(x,\bar x\right) V_1\left(0,0\right) \Big\rangle \sim
\chi_\Delta(z)\ {\rm e}^{
\frac{1}{2b^2}f_{\delta\!}\left[^{\delta_3\:\delta_2}_{\delta_4\:\delta_1}\right]\!(x)},
\end{eqnarray*}
where \(
f_{\delta\!}\left[^{\delta_3\:\delta_2}_{\delta_4\:\delta_1}\right]\!(x)
\) is the classical conformal block (\ref{defccb}).
Substituting into (\ref{eigen:2}) one obtains a Fuchsian equation:
\begin{eqnarray}
\label{eigen:3}
\frac{d^2\chi_\Delta(z)}{dz^2}
+
\left(
\frac{\delta_4 - \delta_3 -\delta_2 - \delta_1}{z(z-1)}
+
\frac{\delta_1}{z^2} + \frac{\delta_2}{(z-x)^2} + \frac{\delta_3}{(z-1)^2}
\right)
\chi_\Delta(z)
&&
\\[10pt]
\nonumber
+
\frac{x(x-1){\cal C}(x)}{z(z-x)(z-1)}
\chi_\Delta(z)
& = &
0,
\end{eqnarray}
with the
accessory parameter ${\cal C}(x)$ given by:
\begin{eqnarray}
\label{accessory:parameter}
{\cal C}(x)
& = &
\frac{\partial}{\partial x}
f_{\delta}\!\!\left[^{\delta_3\:\delta_2}_{\delta_4\:\delta_1}\right]\!(x).
\end{eqnarray}
We shall now calculate the monodromy properties of $\chi_\Delta(z)$ along
the contour encircling the points $0$ and $x$. There are only
three conformal families in the OPE of the degenerate field
$V_{-b}$ with a super-primary field $V_{a}$:
\begin{eqnarray}
\nonumber \label{OPE:degenerate} V_{-b}(z, \bar
z)V_{a}(0, 0)
&=& C_{a_+,-b,a}\,(z \bar z)^{\frac{bQ}{2}(1+\lambda)}\,V_{a_+}(0,0) +
C_{a_-,-b,a}\,(z \bar z)^{\frac{bQ}{2}(1-\lambda)}\,V_{a_-}(0,0)
\\ [6pt]
&+& \tilde C_{a,-b,a}\,(z \bar z)^{1+b^2}\, \frac{1}{(2
\Delta_{a})^2} \, \tilde V_{a}(0,0)
\, + \, {\rm descendants},
\end{eqnarray}
where
$
a_{\pm} = a \pm b.
$
In the classical limit the third term in (\ref{OPE:degenerate})
is sub-leading with respect to the first two. Hence in the space
of solutions of (\ref{eigen:3}) there is a basis $\chi_{\Delta}^\pm(z)$
such that:
\begin{eqnarray}
\label{basis:continuation}
\chi_{\Delta}^\pm\left({\rm e}^{2\pi i}z\right)
& = &
-{\rm e}^{\pm i\pi \lambda}\,\chi_{\Delta}^\pm\left(z\right).
\end{eqnarray}
The problem of calculating ${\cal C}$ can then be formulated as follows:
adjust ${\cal C}$ in such a way that the equation
admits solutions with the monodromy around $0$ and $x$ given by
(\ref{basis:continuation}). This is exactly the monodromy problem
obtained and solved in the context of Virasoro theory in
\cite{Zamolodchikov:2,Zamolodchikov:3}. Details of these
calculations are presented in Appendix A.
Taking into account the ${1\over \delta}$ expansion of
the classical block
(\ref{clas_block}) and (\ref{G_eq})
one gets:
\begin{eqnarray}
\label{asymptoticG}
\nonumber
&& \hskip -1cm \mathcal{G}_{\Delta}^{1\over 2}\!
\left[^{\Delta_3 \;\Delta_2}_{\Delta_4 \;\Delta_1} \right]\!(x)
= -\ln \Delta
+ i\pi \tau \left( \Delta - \frac{c}{24} \right)
+ \left( \frac{c}{8} - \Delta_1 - \Delta_2 - \Delta_3 - \Delta_4\right) \, \ln{K^2(x)} \\
[6pt] &&
+ \left( \frac{c}{24} - \Delta_2 - \Delta_3\right) \, \ln(1-x)
+ \left( \frac{c}{24} - \Delta_1 - \Delta_2\right) \, \ln(x) + f^{1\over 2}(x) +{\cal O}\left({1\over \Delta}\right),
\end{eqnarray}
where
\(
K(x)
\)
is the complete elliptic integral of the first kind,
\(
\tau \equiv \tau(x) = i\frac{K(1-x)}{K(x)}
\)
is the half-period ratio and
$f^{1\over 2}(x)$ is a function of $x$ specific to each type of block and independent of $\Delta_i$ and $c$.
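All elliptic ingredients appearing here are easily computed numerically. A short Python sketch (illustration only; SciPy's \texttt{ellipk(m)} uses the parameter $m=x$, the same convention as $K(x)$ defined in Appendix A) evaluates $K$, $\tau$ and the nome $q={\rm e}^{i\pi\tau}$, and checks the standard relation $x=\theta_2^4(q)/\theta_3^4(q)$ used below:
\begin{verbatim}
# Illustrative sketch: K(x), tau(x), the elliptic nome q(x), and
# theta constants computed from their q-series.
import numpy as np
from scipy.special import ellipk          # ellipk(m), with m = x here

def nome(x):
    return np.exp(-np.pi * ellipk(1 - x) / ellipk(x))   # q = e^{i pi tau}

def theta2(q, N=40):
    n = np.arange(0, N)
    return 2.0 * np.sum(q ** ((n + 0.5) ** 2))

def theta3(q, N=40):
    n = np.arange(1, N)
    return 1.0 + 2.0 * np.sum(q ** (n ** 2))

x = 0.3
q = nome(x)
print(q)                                  # ~ 0.0223
print(theta2(q)**4 / theta3(q)**4)        # ~ 0.3 = x
\end{verbatim}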
One can obtain corresponding formulae for other types of blocks in a similar way.
The exact form of the functions $f^{1,\frac12}(x)$ can be derived from analytic expressions for $c={3\over 2}$ NS superconformal blocks
with external weights $\Delta_i= {1\over 8}$ \cite{hjs}.
\section{Elliptic recursion relations }
\setcounter{equation}{0}
The large $\Delta$ asymptotic suggests the following form of superconformal blocks:
\begin{eqnarray}
\label{Hblock}
\mathcal{F}^{1, \frac12}_{\Delta}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]\!(x)
& =&
(16q)^{\Delta - \frac{c-3/2}{24}}\ x^{\frac{c-3/2}{24} - \Delta_1 - \underline{\hspace{3pt}}\, \Delta_2} \
(1- x)^{\frac{c-3/2}{24} - \underline{\hspace{3pt}}\,\Delta_2 - \underline{\hspace{3pt}}\,\Delta_3}\
\\
\nonumber
& \times &\theta_3^{\frac{c - 3/2}{2}
- 4 (\Delta_1 + \underline{\hspace{3pt}}\,\Delta_2 +\underline{\hspace{3pt}}\,\Delta_3 + \Delta_4) } \
\mathcal{H}^{1, \frac12}_{\Delta}\! \left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]\!(x),
\end{eqnarray}
where
\(
q \equiv q(x) = {\rm e}^{i\pi\tau}
\)
is the elliptic nome. The
elliptic blocks
\(\mathcal{H}^{1, \frac12}_{\Delta}\! \left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]\!(x)
\)
have the same analytic structure as
superconformal ones:
\begin{eqnarray*}
\mathcal{H}^{1, \frac12}_{\Delta}\! \left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]\!(x)=
g^{1,\frac12\,}_{\underline{\hspace{3pt}}\,\underline{\hspace{3pt}}}(x)
+ \sum_{m,n}
\frac{
h^{1, \frac12}_{mn}
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]\!(x)
}{
\Delta - \Delta_{mn}}.
\end{eqnarray*}
The functions $ g^{1,\frac12\,}_{\underline{\hspace{3pt}}\,\underline{\hspace{3pt}}}(x)$
depend on the type of block and are independent
of the external weights $\Delta_i$ and the central charge $c$.
They have no singularities in $\Delta$ and are directly related to the functions
$f^{1,\frac12}(x)$ in (\ref{asymptoticG}). For instance, in the case of the odd block
$\mathcal{F}_{\Delta}^{1\over 2}\!
\left[^{\Delta_3
\;\Delta_2}_{\Delta_4
\;\Delta_1} \right]\!(x):
$
$$
\exp f^{1\over 2}(x)
\; = \;
(16 q)^{1\over 16} \, \left[ x(1-x)\right] ^{-{1\over 16}} \, \theta_3(q)^{-{3\over 4}} \,
g^{1\over 2}(x).
$$
The analytic form of these functions can be read off from the form of the elliptic blocks
related to the $c = \frac32$ conformal ones with $\Delta_i= \Delta_0 = \frac18$ \cite{hjs}:
\begin{equation}
\label{H0blocks1}
\begin{array}{rlllrlllllllllll}
\mathcal{H}^{1}_{\Delta}\! \left[^{\Delta_0 \; \Delta_0}_{\Delta_0 \; \Delta_0} \right]\!(q)
&=& \theta_3(q^2),
&\hspace{10pt}&
\mathcal{H}^{\frac12}_{\Delta}\! \left[^{\Delta_0 \; \Delta_0}_{\Delta_0 \; \Delta_0} \right]\!(q)
&=&\frac{\textstyle 1}{\textstyle\Delta}\, \theta_2(q^2),
\\ [6pt]
\mathcal{H}^{1}_{\Delta}\! \left[^{\Delta_0 \;*\Delta_0}_{\Delta_0 \;\hspace{4pt}\Delta_0} \right]\!(q)
&=& \theta_3(q^2),
&\hspace{10pt}&
\mathcal{H}^{\frac12}_{\Delta}\! \left[^{\Delta_0 \;*\Delta_0}_{\Delta_0 \;\hspace{4pt}\Delta_0} \right]\!(q)
&=& \theta_2(q^2),
\end{array}
\end{equation}
\vspace*{-10pt}
\begin{eqnarray}
\label{H0blocks2}
\nonumber
\mathcal{H}^{1}_{\Delta}\! \left[^{*\Delta_0 \;*\Delta_0}_{\hspace{4pt} \Delta_0 \;\hspace{4pt}\Delta_0} \right]\!(q)
&=&
\theta_3(q^2) \left( 1 - \frac{q}{\Delta} \theta_3^{-1}(q) \frac{\partial}{\partial q} \theta_3(q) + \frac{\theta_2^{4}(q)}{4 \Delta} \right) ,
\\[-6pt]
\\[-6pt]
\nonumber
\mathcal{H}^{\frac12}_{\Delta}\! \left[^{*\Delta_0 \;*\Delta_0}_{\hspace{4pt} \Delta_0 \;\hspace{4pt}\Delta_0} \right] \!(q)
&=&
- \theta_2(q^2) \left( \Delta - q\, \theta_3^{-1}(q) \frac{\partial}{\partial q} \theta_3(q) + \frac{\theta_2^{4}(q)}{4 } \right) .
\end{eqnarray}
Indeed the functions $ g^{1,\frac12\,}_{\underline{\hspace{3pt}}\,\underline{\hspace{3pt}}}(x)$
are just given by terms regular for $\Delta\to 0:$
\begin{equation}
\label{gfunctions}
\begin{array}{rlllrllllllllllllll}
g^{1}(x)
&=& \theta_3(q^2),
&\qquad &g^{1\over 2}(x)&=& 0,
\\ [4pt]
g^{1}_*(x)
&=& \theta_3(q^2),
&\qquad &
g^{\frac12}_{*} (x)
&=& \theta_2(q^2),
\\
g^{1}_{**}(x)
&=& \theta_3(q^2),
&\quad &
g^{\frac12\,}_{**} (x)
&=& - \theta_2(q^2) \left( \Delta - q\, \theta_3^{-1}(q)\,
\textstyle\frac{\textstyle\partial}{\textstyle\partial q} \theta_3(q) + \frac{\textstyle\theta_2^{4}(q)}{\textstyle 4} \right).
\end{array}
\end{equation}
Taking into account the form of the residue at $\Delta_{mn}$
(equations (\ref{res:evenf}), (\ref{res:oddf}))
one gets the general elliptic recursion
relations:
\begin{eqnarray}
\nonumber
\mathcal{H}^{1,\frac12}_{\Delta}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]\!(x)
&=&
g^{1,\frac12\,}_{\underline{\hspace{3pt}}\,\underline{\hspace{3pt}}}(x)
\\
\label{Hrek}
&+&
\hspace{-15pt}\begin{array}[t]{c}
{\displaystyle\sum} \\[-6pt]
{\scriptscriptstyle
m,n>0}
\\[-8pt]
{\scriptscriptstyle
m,n\in 2{\mathbb N}
}
\end{array}
(16q)^{\frac{mn}{2}}
\frac{A_{mn}^c\hspace{-3pt}\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]}{\Delta - \Delta_{mn}} \,
\mathcal{H}^{1,\frac12}_{\Delta_{mn}+\frac{mn}{2}}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]\!(x)
\\
\nonumber&+&
\hspace{-15pt}\begin{array}[t]{c}
{\displaystyle\sum} \\[-6pt]
{\scriptscriptstyle
m,n>0}
\\[-8pt]
{\scriptscriptstyle
m,n\in 2{\mathbb N}+1
}
\end{array}(16q)^{\frac{mn}{2}}
\frac{S_{mn}(\underline{\hspace{3pt}}\Delta_3)A_{mn}^c\hspace{-3pt}\left[^{\widetilde{\underline{\hspace{3pt}}\,\Delta_3}
\;\widetilde{\underline{\hspace{3pt}}\,\Delta_2}}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]}{\Delta - \Delta_{mn}} \,
\mathcal{H}^{\frac12,1}_{\Delta_{mn}+\frac{mn}{2}}\!
\left[^{\underline{\hspace{3pt}}\,\Delta_3
\;\underline{\hspace{3pt}}\,\Delta_2}_{\hspace{3pt}\,\Delta_4
\;\hspace{3pt}\, \Delta_1} \right]\!(x).
\end{eqnarray}
Formula (\ref{Hrek}) is the main result of the present paper.
As a nontrivial consistency check of (\ref{Hrek}) one can verify
that each pair of elliptic blocks in (\ref{H0blocks1}), (\ref{H0blocks2})
satisfies the recursion relations (\ref{Hrek}) with the corresponding
functions (\ref{gfunctions}).
This is done in Appendix B.
\setcounter{equation}{0}
\section*{Acknowledgements}
The work was partially supported by the Polish State Research
Committee (KBN) grant no. N N202 0859 33.
\vskip 1mm
\noindent
The research of L.H.\ was supported by the Alexander von Humboldt Foundation.
\vskip 1mm
\noindent
P.S.\ is grateful to the faculty of the Institute of Theoretical Physics, University of Wroc\l{}aw,
for the hospitality.
\section*{Appendix A}
\setcounter{equation}{0}
\renewcommand{\theequation}{A.\arabic{equation}}
Consider the equation
\begin{eqnarray}
\label{class:1}
\frac{d^2\chi(z)}{dz^2}
+
U(z) \chi (z)
+
\frac{x(x-1){\cal C}(x)}{z(z-x)(z-1)}
\chi(z)
& = &
0,
\end{eqnarray}
with
\begin{equation}
\label{U}
U(z) = \frac14\left(
\frac{\lambda_1^2 + \lambda_2^2 + \lambda_3^2 -\lambda_4^2 - 2}{z(z-1)}
+
\frac{1-\lambda_1^2}{z^2} + \frac{1-\lambda_2^2}{(z-x)^2} + \frac{1-\lambda_3^2}{(z-1)^2}
\right).
\end{equation}
We want to choose ${\cal C}(x)$ such that (\ref{class:1}) admits a pair of solutions $\chi^{\pm}(z)$ satisfying
the monodromy condition
\begin{equation}
\label{monodromy:condition}
\chi^{\pm}\left({\rm e}^{2\pi i}z\right) = -{\rm e}^{\pm i\pi\lambda}\chi^{\pm}(z),
\end{equation}
where
\(
\chi\left({\rm e}^{2\pi i}z\right)
\)
denotes a function analytically continued in $z$ along a closed path encircling points $z = 0$ and $z = x.$
Following \cite{Zamolodchikov:3} we perform an elliptic change of variables:
\begin{equation}
\label{elliptic:substitution}
\xi(z) = \frac12\int\limits^{\frac{z}{x}}
\frac{dt}{\sqrt{t(1-t)(1-xt)}},
\qquad
\tilde\chi(\xi) = \left(\frac{dz(\xi)}{d\xi}\right)^{-\frac12}\chi\Big(z(\xi)\Big).
\end{equation}
This gives
\begin{eqnarray}
\label{second:derivative}
\frac{d^2}{dz^2} \chi(z)
& = &
-\frac12 \left(\xi'\right)^{-\frac12}\left\{\xi(z),z\right\}\tilde\chi(\xi)
+
\left(\xi'\right)^{+\frac32}\left.\frac{d^2\tilde\chi(\xi)}{d\xi^2}\right|_{\xi = \xi(z)},
\end{eqnarray}
where
\(
\left\{\xi(z),z\right\}
\)
is the Schwarzian derivative of the map (\ref{elliptic:substitution}):
\begin{equation}
\label{Schwarz}
\left\{\xi(z),z\right\}=
\frac38\left[\frac{1}{z^2} + \frac{1}{(z-x)^2} + \frac{1}{(z-1)^2}\right]
-
\frac14\left[\frac{1}{z(z-x)} + \frac{1}{z(z-1)} + \frac{1}{(z-x)(z-1)}\right].
\end{equation}
Using (\ref{elliptic:substitution}) and (\ref{second:derivative}) we can rewrite equation (\ref{class:1})
in the form of a Schr\"odinger equation
\begin{eqnarray}
\label{class:2}
\left[-\frac{d^2}{d\xi^2}
-
\tilde U(\xi)
\right]
\tilde\chi(\xi)
=
4x(x-1){\cal C}(x)\tilde\chi(\xi)
\end{eqnarray}
with the (doubly periodic in $\xi$) potential
\begin{eqnarray}
\nonumber
\label{tilde:U}
\tilde U(\xi)
& = &
\left.
\Big(\xi'(z)\Big)^{-2}
\left[
U(z) - \frac12\{\xi(z),z\}
\right]
\right|_{z = z(\xi)}
\\[6pt]
& = &
\left(\frac14- \lambda_1^2\right)\left(\frac{x}{z(\xi)} -1\right)
+
\left(\frac14- \lambda_2^2\right)\left[\frac{x(x-1)}{z(\xi)-x} + 2x -1\right]
\\
\nonumber
& + &
\left(\frac14- \lambda_3^2\right)\left(1-\frac{1-x}{1-z(\xi)}\right)
+
\left(\frac14- \lambda_4^2\right)(z(\xi)-x)
+x - \frac12.
\end{eqnarray}
Continuing analytically the function $\xi(z)$ along the closed path encircling the points $0$ and $x$ one gets:
\begin{eqnarray*}
\xi\left({\rm e}^{2\pi i}z\right)
& = &
\xi(z) +\int\limits_0^1 \frac{dt}{\sqrt{t(1-t)(1-xt)}}
= \xi(z) + 2K(x)
\end{eqnarray*}
where $K(x)$ is the complete elliptic integral of the first kind:
\begin{eqnarray*}
K(x) \equiv \int\limits_0^1 \frac{dt}{\sqrt{(1-t^2)(1-xt^2)}}.
\end{eqnarray*}
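Numerically, the substitution $t=u^2$ reduces the period integral above to this standard form; a quick check (illustration only; SciPy's \texttt{ellipk(m)} uses $m=x$, matching the convention here):
\begin{verbatim}
# Illustrative check: the monodromy shift of xi equals 2 K(x).
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

x = 0.3
period, _ = quad(lambda t: 1.0 / np.sqrt(t * (1 - t) * (1 - x * t)), 0, 1)
print(period, 2 * ellipk(x))     # both ~ 3.4278
\end{verbatim}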
The monodromy condition (\ref{monodromy:condition}) thus takes the form
\begin{equation}
\label{monodromy:2}
\tilde\chi^{\pm}\Big(\xi+2K(x)\Big) = {\rm e}^{\pm i \pi \lambda} \tilde\chi^{\pm}(\xi).
\end{equation}
We shall solve (\ref{class:2}) in the large $\lambda $ limit using a standard perturbative method.
First, assume that
\begin{equation}
\label{zeroord:assumpt}
\tilde U(\xi) = o\big({\cal C}(x)\big)
\end{equation}
so that in the leading order we can neglect the potential term in (\ref{class:2}) and
the solutions are just plane waves:
\begin{equation}
\label{zeroth:order}
\tilde\chi^{\pm}_0(\xi) = {\rm e}^{\pm i p \xi},
\qquad
p^2 = 4x(x-1){\cal C}(x).
\end{equation}
On the other hand the monodromy condition (\ref{monodromy:2}) implies
\begin{eqnarray*}
{\rm e}^{\pm 2i p K(x)} = {\rm e}^{\pm i \pi \lambda}
\qquad \Rightarrow \qquad
p = - \frac{\pi\lambda}{2K(x)},
\end{eqnarray*}
which also proves the consistency of (\ref{zeroord:assumpt}). Hence, in the leading order one obtains:
\begin{equation}
\label{C:zero}
{\cal C}^{(0)}(x) = \frac{\pi^2\lambda^2}{16 x(x-1)K^2(x)}.
\end{equation}
The first correction ${\cal C}^{(1)}(x)$ is given by:
\begin{eqnarray}
\label{first:order:2}
{\cal C}^{(1)}(x)&=&
\frac{-1}{8x(x-1)K(x)}\int\limits_{\xi_0}^{\xi_0+2K(x)}\!\!d\xi\ \tilde\chi_0^{-}(\xi)\tilde U(\xi)\tilde\chi_0^+(\xi)
\\
\nonumber
&=&
\frac{-1}{16x(x-1)K(x)}\int_{[0,x]} \frac{ \tilde U\big(\xi(z)\big) dz}{\sqrt{z(1-z)(x-z)}},
\end{eqnarray}
where in the first line $\Im\,\xi_0 > 0$ while in the second line $[0,x]$ denotes a positively oriented, closed
contour in the complex $z$ plane, surrounding the points $0$ and $x.$
Integrating one gets:
\begin{eqnarray*}
&& \hskip -1cm \int_{[0,x]} \frac{\tilde U\big(\xi(z)\big)dz}{\sqrt{z(1-z)(x-z)}}
= \Bigg\lbrace
\left( 1-4 \lambda^2_1\right) (I_1 - K(x)) + \left( 1-4 \lambda^2_2\right)
\left( I_2 + (2x -1) K(x)\right)
\\ [4pt]
&& +\left( 1-4 \lambda^2_3 \right) ( K(x) - I_3) + \left( 1-4 \lambda^2_4\right) (I_4 - x K(x))
+ 4 \left( x - \frac12\right) K(x)
\Bigg\rbrace,
\end{eqnarray*}
where
\begin{eqnarray*}
I_1 &=& \frac14 \int_{[0,x]} \frac{x dz}{z \sqrt{z(1-z)(x-z)}} = K(x) - E(x), \\ [4pt]
I_2 &=& \frac14 \int_{[0,x]} \frac{x (1-x) dz}{(z-x) \sqrt{z(1-z)(x-z)}} =(1-x) K(x) - E(x), \\ [4pt]
I_3 &=& \frac14 \int_{[0,x]} \frac{(1-x) dz}{(1-z) \sqrt{z(1-z)(x-z)}} = E(x),\\ [4pt]
I_4 &=& \frac14 \int_{[0,x]} \frac{z dz}{\sqrt{z(1-z)(x-z)}} = K(x) - E(x).
\end{eqnarray*}
Here $E(x)$ is the complete elliptic integral of the second kind:
\begin{eqnarray*}
E(x)
\; = \;
\int_0^1 \frac{(1-xt^2)\, dt}{\sqrt{(1-t^2)(1-xt^2)}}.
\end{eqnarray*}
The correction to the accessory parameter takes the form:
\begin{eqnarray*}
{ \cal C}^{(1)}(x) = \frac{-1}{4 x (x-1)}
\left\lbrace
\frac{E(x)}{K(x)}\left( -1 + \lambda^2_1 + \lambda^2_2 + \lambda^2_3 + \lambda^2_4 \right)
+ x \left( 1- \lambda^2_2 + \lambda^2_4\right ) - \left( \lambda^2_3 + \lambda^2_4\right)
\right\rbrace
\end{eqnarray*}
Since $ {\cal C}(x) = \partial_x f_{\delta\!}\left[^{\delta_3\:\delta_2}_{\delta_4\:\delta_1}\right]\!(x)$ one can calculate the classical block:
\begin{eqnarray*}
f_{\delta\!}\left[^{\delta_3\:\delta_2}_{\delta_4\:\delta_1}\right]\!(x)
&=& \int \frac{dx}{4 x (x-1)}
\left\lbrace
\frac{\left( \pi \lambda\right)^2}{4 K^2(x)} \right.
\\
&&
\hskip -10mm
+ \;
\left.
\frac{E(x)}{K(x)}\left( 1 - \lambda^2_1 - \lambda^2_2 - \lambda^2_3 - \lambda^2_4\right)
- x \left( 1- \lambda^2_2 + \lambda^2_4\right) + \lambda^2_3 + \lambda^2_4
\right\rbrace
+
{\cal O} \left({1\over \lambda^2}\right).
\end{eqnarray*}
Using
\begin{eqnarray*}
\int \frac{dx}{ x (x-1)} \frac{1}{4 K^2(x)}
&=& \frac{1}{\pi} \frac{K(1-x)}{K(x)} \equiv \frac{\tau}{i \pi}, \\ [6pt]
\int \frac{dx}{ x (x-1)} \frac{E(x)}{ K(x)} &=& - \frac12 \ln{K^4(x)} - \ln{x},
\end{eqnarray*}
one gets:
\begin{eqnarray*}
f_{\delta\!}\left[^{\delta_3\:\delta_2}_{\delta_4\:\delta_1}\right]\!(x) &=& \frac14 \, \Bigg\lbrace -i \pi \tau \lambda^2
- \frac12 \left( 1 - \lambda^2_1 - \lambda^2_2 - \lambda^2_3 - \lambda^2_4\right) \, \ln{K^4(x)} \\
[6pt] && \ \
- (1 - \lambda^2_2 - \lambda^2_3) \, \ln(1-x) - (1 - \lambda^2_1 - \lambda^2_2) \, \ln(x)
\Bigg\rbrace + {\cal O} \left({1\over \lambda^2}\right)
\end{eqnarray*}
or, in terms of $
\delta = \frac{1-\lambda^2}{4},
\delta_i = \frac{1-\lambda^2_i}{4},
$
\begin{eqnarray}
\label{clas_block}
f_{\delta\!}\left[^{\delta_3\:\delta_2}_{\delta_4\:\delta_1}\right]\!(x) &=& i \pi \tau \left( \delta - \frac{1}{4} \right)
+ \frac12 \left( \frac{3 }{4} - \delta_1 - \delta_2 - \delta_3 - \delta_4\right) \, \ln{K^4(x)}
\\ [6pt]
\nonumber
& +& \left( \frac{1}{4} - \delta_2 - \delta_3\right) \, \ln(1-x)
+ \left( \frac{1}{4} - \delta_1 - \ \delta_2\right) \, \ln(x)
+ {\cal O} \left({1\over \delta}\right) .
\end{eqnarray}
The absence of $x$-independent integration constants in the last two formulae follows from the
normalization condition of the block
\(
{\cal F}_{c,\Delta\!}\left[^{\Delta_3\:\Delta_2}_{\Delta_4\:\Delta_1}\right]\!(x).
\)
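For the record, the two antiderivative identities used above are easy to confirm numerically, e.g.\ by differentiating their right-hand sides (a quick sketch, illustration only):
\begin{verbatim}
# Illustrative check of the two integration formulae used above.
import numpy as np
from scipy.special import ellipk, ellipe

def d(f, x, h=1.0e-6):                     # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.3
K, E = ellipk(x), ellipe(x)

lhs1 = d(lambda y: ellipk(1 - y) / ellipk(y) / np.pi, x)
print(lhs1, 1.0 / (x * (x - 1) * 4 * K**2))      # both ~ -0.4053

lhs2 = d(lambda y: -2.0 * np.log(ellipk(y)) - np.log(y), x)
print(lhs2, E / (x * (x - 1) * K))               # both ~ -4.016
\end{verbatim}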
\section*{Appendix B}
\setcounter{equation}{0}
\renewcommand{\theequation}{B.\arabic{equation}}
Consider the $c =\frac32$ theory with external weights $\Delta_0 = \frac18$.
For $r \ne s$ all coefficients $A_{rs}^c\hspace{-3pt}\left[^{\underline{\hspace{3pt}}\,\Delta_0
\;\underline{\hspace{3pt}}\,\Delta_0}_{\hspace{3pt}\,\Delta_0 \;\hspace{3pt}\, \Delta_0} \right] $ vanish.
There are, however, some non-zero terms for $r=s$:
\begin{eqnarray} \label{Arr}
(16)^{\frac{r^2}{2}}\ A_{rr}^{c}\left[^{*\Delta_0\: *\Delta_0}_{\hspace{4pt} \Delta_0\: \hspace{4pt}\Delta_0}\right]
= \, \left\{
\begin{array}{lcl}
- r^2 \qquad & \mathrm{if} & \quad r \in 2\mathbb{N},
\\[4pt]
2 \qquad & \mathrm{if} & \quad r \in 2\mathbb{N}+1.
\end{array}
\right.
\end{eqnarray}
Moreover, $\Delta_{rr}=0$.
One can show that all elliptic blocks (\ref{H0blocks1}), (\ref{H0blocks2}) satisfy recursion relations (\ref{Hrek}) with corresponding
$g^{1,\frac12}$ functions (\ref{gfunctions}).
Indeed, for the blocks:
\begin{eqnarray*}
&& \mathcal{H}^{1}_{\Delta}\! \left[^{\Delta_0 \; \Delta_0}_{\Delta_0 \; \Delta_0} \right]\!(x)
= g^{1}(x),
\qquad \ \
\mathcal{H}^{1}_{\Delta}\! \left[^{\Delta_0 \; *\Delta_0}_{\Delta_0 \;\hspace{4pt}\Delta_0} \right]\!(x)
= g^{1}_{*}(x),
\\[6pt]
&&
\mathcal{H}^{\frac12}_{\Delta}\! \left[^{\Delta_0 \;*\Delta_0}_{\Delta_0 \;\hspace{4pt}\Delta_0} \right]\!(x)
= g^{\frac12}_{*}(x),
\qquad
\mathcal{H}^{\frac12}_{\Delta}\! \left[^{*\Delta_0 \;*\Delta_0}_{\hspace{4pt} \Delta_0 \;\hspace{4pt}\Delta_0} \right] \!(x)
= g^{\frac12}_{**}(x),
\end{eqnarray*}
the relation (\ref{Hrek}) holds because all the residues at $\Delta = \Delta_{rs}$ are zero. In the other
cases the formula (\ref{Arr}) becomes helpful:
\begin{eqnarray*}
\mathcal{H}^{\frac12}_{\Delta}\!
\left[^{\Delta_0\:\Delta_0}_{\Delta_0\:\Delta_0}\right]\!(x) &=&
\sum_{r\in 2{\mathbb N}}
(16q)^{\frac{r^2}{2}}
\frac{A_{rr}^c\hspace{-3pt}\left[^{*\Delta_0 \;*\Delta_0}_{\hspace{3pt}\,\Delta_0 \;\hspace{3pt}\, \Delta_0} \right]}{\Delta }
\mathcal{H}^{\frac{1}{2}}_{\frac{r^2}{2}}\!
\left[^{\Delta_0\:\Delta_0}_{\Delta_0\:\Delta_0}\right]\!(x) \\
[4pt]
&+& \sum_{r\in 2{\mathbb N}+1} (16q)^{\frac{r^2}{2}}
\frac{A_{rr}^c\hspace{-3pt}\left[^{*\Delta_0 \; *\Delta_0}_{\hspace{3pt}\,\Delta_0
\;\hspace{3pt}\, \Delta_0} \right]}{\Delta }
\mathcal{H}^{1}_{\frac{r^2}{2}}\!
\left[^{\Delta_0\:\Delta_0}_{\Delta_0\:\Delta_0}\right]\!(x) \\[6pt]
&=& \frac{1}{\Delta} \sum_{r\in 2{\mathbb N}} q^{\frac{r^2}{2}} \ (-r^2) \ \mathcal{H}^{\frac12}_{\frac{r^2}{2}}\!
\left[^{\Delta_0\:\Delta_0}_{\Delta_0\:\Delta_0}\right]\!(x)
+ \frac{2}{\Delta} \sum_{r\in 2{\mathbb N}+1} q^{\frac{r^2}{2}}\ \theta_3(q^2).
\end{eqnarray*}
Substituting $\mathcal{H}^{\frac12}_{\frac{r^2}{2}}\!
\left[^{\Delta_0\:\Delta_0}_{\Delta_0\:\Delta_0}\right]\!(x) = \frac{2}{r^2}\ \theta_2(q^2) $
and using the definitions of the theta functions:
\begin{eqnarray*}
\theta_2(q^2) &=& \sum_{n=- \infty}^{\infty} q^{\frac{(2n+1)^2}{2}} = 2 \sum_{n=0}^{\infty} q^{\frac{(2n+1)^2}{2}},
\qquad
\theta_3(q^2) = \sum_{n=- \infty}^{\infty} q^{2n^2} = 1+ 2 \sum_{n=1}^{\infty} q^{2n^2},
\end{eqnarray*}
one gets:
\begin{eqnarray*}
\mathcal{H}^{\frac12}_{\Delta}\!
\left[^{\Delta_0\:\Delta_0}_{\Delta_0\:\Delta_0}\right]\!(x)
&=&
- \frac{2}{\Delta}\sum_{r\in 2{\mathbb N}} q^{\frac{r^2}{2}} \theta_2(q^2)
+ \frac{2}{\Delta} \sum_{r\in 2{\mathbb N}+1} q^{\frac{r^2}{2}} \theta_3(q^2) \\
& = &
-
\frac{1}{\Delta}\ (\theta_3(q^2) - 1)\ \theta_2(q^2) + \frac{1}{\Delta} \theta_2(q^2)\ \theta_3(q^2) = \frac{1}{\Delta}\ \theta_2(q^2).
\end{eqnarray*}
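The two theta sums used in this computation, $2\sum_{r\in 2{\mathbb N}} q^{r^2/2} = \theta_3(q^2)-1$ and $2\sum_{r\in 2{\mathbb N}+1} q^{r^2/2} = \theta_2(q^2)$, can be confirmed numerically (a quick sketch, illustration only):
\begin{verbatim}
# Illustrative check of the theta sums entering the computation above.
import numpy as np

q = 0.05
r_even = np.arange(2, 60, 2)
r_odd = np.arange(1, 60, 2)
n = np.arange(0, 40)

theta2_q2 = 2.0 * np.sum((q**2) ** ((n + 0.5) ** 2))
theta3_q2 = 1.0 + 2.0 * np.sum((q**2) ** (n[1:] ** 2))

print(2 * np.sum(q ** (r_even**2 / 2.0)), theta3_q2 - 1)   # agree
print(2 * np.sum(q ** (r_odd**2 / 2.0)), theta2_q2)        # agree
\end{verbatim}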
The last block in (\ref{H0blocks2}), $\mathcal{H}^{1}_{\Delta}\!
\left[^{*\Delta_0\: *\Delta_0}_{\hspace{4pt} \Delta_0\: \hspace{4pt} \Delta_0}\right]\!(x),$
also satisfies the recursion relation:
\begin{eqnarray}
\label{H**}
\nonumber
&& \hskip -1cm \mathcal{H}^{1}_{\Delta}\!
\left[^{*\Delta_0\: *\Delta_0}_{\hspace{4pt} \Delta_0\: \hspace{4pt} \Delta_0}\right]\!(x)
=
\theta_3(q^2)
+ \sum_{r\in 2{\mathbb N}} \left( \frac{-r^2}{\Delta} \right) q^{\frac{r^2}{2}} \,
\mathcal{H}^{1}_{\frac{r^2}{2}}\!
\left[^{*\Delta_0\: *\Delta_0}_{\hspace{4pt} \Delta_0\: \hspace{4pt} \Delta_0}\right]\!(x)
\\ [6pt]
\nonumber
&& + \sum_{r\in 2{\mathbb N}+1} S_{rr}(*\Delta_0) \left( \frac{2}{\Delta}\right) q^{\frac{r^2}{2}} \,
\mathcal{H}^{\frac12}_{\frac{r^2}{2}}\!
\left[^{*\Delta_0\: *\Delta_0}_{\hspace{4pt} \Delta_0\: \hspace{4pt} \Delta_0}\right]\!(x)
\\ [6pt]
\nonumber
&& = \,
\theta_3(q^2)
- \frac{2}{\Delta} \left(
\sum_{r\in 2{\mathbb N}} q^{\frac{r^2}{2}}\, \theta_3(q^2)
- \sum_{r\in 2{\mathbb N}+1} q^{\frac{r^2}{2}} \, \theta_2(q^2)
\right)
\left(
- q\, \theta_3^{-1}(q) \frac{\partial}{\partial q} \theta_3(q) + \frac{\theta_2^{4}(q)}{4 }
\right)
\\
\nonumber
&& - \frac{2}{\Delta} \sum_{r\in 2{\mathbb N}} \frac{r^2}{2}\, q^{\frac{r^2}{2}}\, \theta_3(q^2)
+ \frac{2}{\Delta} \sum_{r\in 2{\mathbb N}+1} \frac{r^2}{2}\, q^{\frac{r^2}{2}} \, \theta_2(q^2)
\\ [6pt]
&& = \,
\theta_3(q^2)\left(1 - \frac{q}{\Delta} \theta_3^{-1}(q) \frac{\partial\theta_3(q)}{\partial q} + \frac{\theta_2^{4}(q)}{4\Delta}
\right)
\\ [6pt]
\nonumber
&& + \frac{1}{\Delta} \left(
q \theta_3^{-1}(q) \frac{\partial\theta_3(q)}{\partial q} - \frac{\theta_2^{4}(q)}{4 }
- \frac{q}{2 } \frac{\partial}{\partial q} \right)
\left(
\theta_3^{2}(q^2) - \theta_2^{2}(q^2)
\right).
\end{eqnarray}
From the identities
\begin{eqnarray*}
&& \theta_3(q^2) = \theta_3(q) \left( \frac{1 + \sqrt{1-x}}{2} \right)^{\frac12} , \qquad
\theta_2(q^2) = \theta_3(q) \left(\frac{1 - \sqrt{1-x}}{2} \right)^{\frac12},
\end{eqnarray*}
it follows that
\begin{eqnarray*}
\theta_3^{2}(q^2) - \theta_2^{2}(q^2) = \sqrt{1-x}\, \theta^2_3(q).
\end{eqnarray*}
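This combination, together with the Landen-type relations above, can be checked numerically (a quick sketch, illustration only):
\begin{verbatim}
# Illustrative check: theta_3^2(q^2) - theta_2^2(q^2) = sqrt(1-x) theta_3^2(q).
import numpy as np
from scipy.special import ellipk

def theta2(q, N=40):
    n = np.arange(0, N)
    return 2.0 * np.sum(q ** ((n + 0.5) ** 2))

def theta3(q, N=40):
    n = np.arange(1, N)
    return 1.0 + 2.0 * np.sum(q ** (n ** 2))

x = 0.3
q = np.exp(-np.pi * ellipk(1 - x) / ellipk(x))
print(theta3(q**2)**2 - theta2(q**2)**2)   # ~ 0.9129
print(np.sqrt(1 - x) * theta3(q)**2)       # ~ 0.9129
\end{verbatim}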
Since
\[
\frac{dq(x)}{dx} = \frac{\pi^2 q(x)}{4x(1-x) K^2(x)} = \frac{q(x)}{x(1-x)\theta_3^4(q)}
\]
we have
\begin{eqnarray*}
q\, \frac{\partial}{\partial q} = x (1-x)\, \theta_3^{4}(q) \frac{\partial}{\partial x}
\end{eqnarray*}
and with the help of the relation
\(
x = \frac{\theta_2^{4}(q) }{\theta_3^{4}(q) }
\)
we finally get
\begin{eqnarray*}
\frac{q}{2}\, \frac{\partial}{\partial q}
\left(
\sqrt{1-x}\, \theta^2_3(q)
\right)
= \left(
q\theta_3^{-1}(q)\frac{\partial\theta_3(q)}{\partial q}
- \frac{\theta_2^{4}(q)}{4 }
\right) \,
\sqrt{1-x} \, \theta^2_3(q),
\end{eqnarray*}
which demonstrates that the last line in (\ref{H**}) indeed vanishes.
\section{Introduction}
\label{Sec-Intro}
\IEEEPARstart{W}{ireless} multi-access networks are increasingly emerging as an integral part of many communication and control systems with a central data processing or decision making unit, such as the uplink of wireless cellular communications, sensor networks, and machine-to-machine (M2M) communication systems. Such multi-access communication networks usually have low latency requirements, due to their nature or application. These delay requirements, along with the desire for low-complexity system designs, call for schemes and protocols that employ finite blocklengths, even on the order of several hundred symbols, and achieve high levels of reliability at the same time.
A mathematical analysis and design of multi-access networks with such stringent latency requirements, however, cannot rely on conventional information theoretic results, which assume asymptotically large blocklengths and vanishingly small error probability. It is therefore critical to develop rigorous non-asymptotic results that are tight for finite blocklengths. Although this was a strong trend in the early days of information theory~\cite{Shannon,Strassen}, there has been renewed interest in this direction since the landmark works of~\cite{PPV,Hayashi}. The main theme of these works is treating mutual information as a random variable (RV)~\cite{Fano, JNL}, which has a stochastic behavior based on the transmitted input and the channel noise and interference. This idea is mainly developed in the information spectrum approach of Verd\'u and Han~\cite{Verdu-Han,Han}, which suggests that the cumulative distribution function (CDF) of this RV characterizes performance in terms of the probability that the channel cannot support the communication rate and causes an ``outage'' for the actual codeword to be correctly detected at the receiver. The highest coding rates arise if the error probability is dominated by the outage probability, and the probability of ``confusion'', i.e., that the observation is wrongly decoded to some incorrect codeword, decays to zero.
Although tight non-asymptotic bounds obtained by the information spectrum approach can help with precise analysis and design of communication systems, their numerical computation is usually cumbersome. It is therefore of high practical interest to come up with accurate approximations of the coding rates that are still valid for moderately short blocklengths. Capacity (region), as a first-order statistic of the channel, is already a first-order approximation of the coding rates, but it is only useful for very long blocklengths. The error exponent~\cite{Gallager,Csiszar} is one conventional tool for this purpose, which applies Large Deviation Theory (LDT) to the mutual information RV and studies the exponential decay in error probability of a fixed-rate coding scheme as the blocklength grows larger. Although error exponent analysis can provide a rough estimate for finite-blocklength analysis, a method for finding sharper approximations applies the Central Limit Theorem (CLT) to the mutual information RV, specifically for rates close to capacity. This way, one can investigate the increase in coding rate of a scheme with fixed error probability as the blocklength grows larger and obtain second (and higher) order approximations. In particular, it has been demonstrated~\cite{Strassen,PPV,Hayashi} that second-order approximations involving the fundamental quantity of \emph{channel dispersion}, as a second order statistic of the channel, provide good estimates of the channel coding rates for moderate to short blocklengths.
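To illustrate what such a normal approximation looks like in the simplest case, the following Python sketch evaluates the second-order approximation $R(n,\epsilon)\approx C - \sqrt{V/n}\,Q^{-1}(\epsilon)$ for the point-to-point AWGN channel, with the capacity and dispersion of \cite{PPV} (the numbers are purely illustrative):
\begin{verbatim}
# Illustrative sketch: normal approximation for the point-to-point
# AWGN channel with SNR P (rates in nats per channel use).
import numpy as np
from scipy.stats import norm

def normal_approx_rate(n, eps, P):
    C = 0.5 * np.log(1.0 + P)                  # AWGN capacity
    V = P * (P + 2.0) / (2.0 * (P + 1.0)**2)   # AWGN dispersion [PPV]
    return C - np.sqrt(V / n) * norm.isf(eps)  # norm.isf = Q^{-1}

print(normal_approx_rate(500, 1e-3, 1.0))      # ~ 0.262 (vs C ~ 0.347)
\end{verbatim}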
In this paper, we show how similar ideas can be extended to a multi-user setting in which multiple users communicate several independent messages to a single receiver over a Gaussian multiple access channel (MAC). In particular, we present several non-asymptotic achievability bounds on the channel coding rates of a Gaussian MAC as a function of the finite blocklength, the fixed average error probability, and the users' power constraints. Our bounds suggest that the \emph{joint outage} event, in which either user's mutual information is not large enough to support its target rate, is the fundamental quantity that governs the performance over the Gaussian MAC. Since this joint outage event is in general complex, we also give a slightly looser, but simpler to analyze, non-asymptotic achievable region based on an \emph{outage-splitting} idea~\cite{ML-ISIT12}, in which the joint outage event is split into individual outage events via the union bound.
Applying the CLT to our finite-blocklength results, we also obtain corresponding achievable second-order coding rate regions for the Gaussian MAC. In particular, we give explicit expressions for the achievable dispersion matrices of the Gaussian MAC in terms of the users' power constraints.
A critical ingredient of our analysis is the choice of input distribution for improving the second-order performance.
In particular, consider the 2-user Gaussian MAC with maximal power constraints $P_1$ and $P_2$. The common choice for random coding is independent and identically distributed (i.i.d.) Gaussian inputs $X^n_1\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,P_1)$ and $X^n_2\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,P_2)$~\cite{Rice,Cover}, which achieve the capacity region and are therefore optimal to first order; another choice is their truncated versions lying in the thin shells $nP_1-\delta\leq||x_1^n||^2\leq nP_1$ and $nP_2-\delta\leq||x_2^n||^2\leq nP_2$ for an arbitrarily small $\delta>0$, which Gallager used for his error exponent analysis~\cite{Gallager,Gallager-MAC}. Inspired by Shannon~\cite{Shannon}, we instead focus on inputs having \emph{independent uniform distributions on the respective power shells}, namely, the $n$-dimensional spheres $||x_1^n||^2=nP_1$ and $||x_2^n||^2=nP_2$.
Consider a symmetric Gaussian MAC with blocklength~$n=500$, average error probability $\epsilon=10^{-3}$, and powers $P_1\!=\!P_2=0$~dB. Figure~\ref{fig-low} compares\footnote{This is a corrected and updated version of a similar plot which was presented in the conference version of this work~\cite{ML-Allerton12}.} the approximate achievable rate regions for all of the aforementioned input distributions: independent power-shell inputs with both joint-outage and outage-splitting versions; independent i.i.d. Gaussian inputs; independent truncated Gaussian inputs; and also the rate region achievable via time division multiple access (TDMA) with power control; along with the asymptotic Cover-Wyner capacity region~\cite{Cover-GMAC,Wyner}. We also depict a \textit{hypothetical} rate region which would be achievable if the sum of independent power shell inputs fell on the sum-power shell.\footnote{We conjecture this to be an outer bound for the Gaussian MAC.} To show the tightness of the achievable rate regions, we also depict two straightforward second-order single-user (SU) outer bounds and a conjectured second-order sum-rate outer bound. The details of all of these regions are given in Section~\ref{sec-MAC}. We note that all of the approximation results are computed only up to second order.
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{TIT-GMAC-Fig-b}
\end{center}
\caption{Symmetric Gaussian MAC with blocklength~$n=500$, average error probability $\epsilon=10^{-3}$, and powers $P_1\!=\!P_2=0$~dB.}
\label{fig-low}
\end{figure}
Unlike the pentagon shape of the capacity region of the Gaussian MAC in the infinite blocklength regime, we observe that its second-order approximation has a curved shape with no sharp corners. Moreover, the region resulting from independent power shell inputs lies roughly halfway between that of the i.i.d. Gaussian inputs and that which would be achievable by the hypothetical ``sum-power shell'' input. This phenomenon is similar to one observed by Gallager in his analysis of error exponents for the Gaussian MAC~\cite{Gallager-MAC}. It is also interesting that Gaussian (and truncated Gaussian) random codebooks, although optimal for achieving capacity, are not second-order optimal: their finite blocklength achievable rate region falls well inside that of power shell inputs. Another interesting observation is that, contrary to the infinite blocklength case, the TDMA strategy with power control is not even sum-rate optimal. Last but not least, the outage-splitting region of the power-shell input closely resembles its joint-outage counterpart, so its simplicity comes at little cost in accuracy.
Of course, the improved performance of the independent power shell inputs comes at the price of additional complexity in the analysis. First, although the variance of the sum of two independent Gaussians is the sum of their variances, the sum of two independent power shell inputs does not lie on the power shell corresponding to the sum of the powers, i.e., $||x_1^n+x_2^n||^2\neq||x_1^n||^2+||x_2^n||^2$ in general. Second, classical CLT and LDT analyses do not apply directly to such non-i.i.d. inputs. To overcome these difficulties, we develop new techniques: since power shell inputs can be constructed by normalizing i.i.d. Gaussian RVs, we rely on a \textit{CLT for functions} to develop the outage probability approximation; additionally, we introduce a \textit{change of measure} technique for the confusion probability, so that classical LDT can be applied to prove its decay to zero.
Another main emphasis of our work is on utilizing standard and transparent methods to highlight the proof steps for finite blocklength analysis, especially when input cost constraints are involved. We are specifically focused on the method of encoding and decoding. Although \emph{random coding and typicality decoding} have proven to be powerful tools in information theory and the standard method for proving most source and channel coding theorems~\cite{ElGamal-Kim}, all non-asymptotic achievability bounds for the Gaussian channel either use random coding but with maximum likelihood (ML) decoding~\cite{Shannon,Gallager}, or employ typicality decoding but with non-random sequential encoding~\cite{PPV,Thomasian}. In addition, for the tightest bounds, the cost constraint is handled either through sophisticated geometric arguments~\cite{Shannon} or via a relatively complicated introduction and analysis of composite hypothesis testing~\cite{PPV}. In this paper, we start by proving tight finite-blocklength achievability results for point-to-point (P2P) Gaussian channels, which are at least second-order optimal, using the standard arguments of random coding and typicality decoding with some slight modifications. As we will see, this approach appears easier to generalize than those in~\cite{Shannon,PPV} to multi-user settings, specifically the Gaussian MAC, for which we obtain rather tight achievable second-order approximations using the random coding and modified typicality decoding method.
The rest of this paper is organized as follows. In Section~\ref{Sec-BG}, we review the tightest achievability methods in the finite blocklength regime. Then, in Section~\ref{sec-P2P}, to highlight the key elements of our proof techniques, we revisit the problem of the P2P Gaussian channel, develop new non-asymptotic achievability bounds, and re-derive the second-order approximation of~\cite{PPV,Hayashi}. In Section~\ref{sec-MAC}, we turn to our problem of interest, prove finite-blocklength inner bounds for the Gaussian MAC, and then apply them to establish achievable second-order coding rates. We conclude the paper in Section~\ref{sec-con} and relegate some of the technical proofs to the Appendices.
\section{Background on Tight Achievability Methods}
\label{Sec-BG}
To highlight the conciseness and simplicity of our approach in proving accurate non-asymptotic and approximate achievability results for cost-constrained channel models, specifically the P2P Gaussian channel and the Gaussian MAC, in this section we review the sharpest and most well-known achievability methods in the literature.
We first review the details of random coding and typicality decoding for proving the achievability side of the coding theorems, both because of their simplicity and because we slightly modify these methods to prove sharp achievability bounds for Gaussian (and other cost-constrained) channels. To emphasize the transparency and conciseness of this approach, we will then review the details of two of the sharpest bounds for the Gaussian channel, namely Polyanskiy et al.'s $\kappa\beta$ method based on composite hypothesis testing~\cite{PPV} and Shannon's geometric method~\cite{Shannon}, and point out the complexities of these methods and the difficulties in generalizing them to multi-user settings. We explain these methods in some detail to highlight the key tools and concepts that we leverage in our later analysis. Note that, in this section, we are concerned with non-asymptotic achievability bounds that are valid for any finite blocklength without requiring convergence conditions.
\subsection{Random Coding and Typicality Decoding}
\label{sub-RC-TD}
The basic idea in an argument based upon random coding and typicality decoding can be reviewed most clearly for a P2P channel~$P_{Y^n|X^n}(y^n|x^n)$. The channel encoder randomly generates $M$~codewords $\{x^n(j)\}_{j=1}^M$ of the codebook independently according to some given $n$-letter distribution $P_{X^n}(x^n)$, where $n$ is the designated blocklength. Observing the output $y^n$, the decoder then chooses the first codeword $x^n(\hat{m})$ of the codebook which looks ``typical'' with $y^n$ in a one-sided sense\footnote{ The use of ``typicality'' nomenclature for this threshold decoding is inspired by the \textit{two-sided} threshold decoding in conventional typicality definition, e.g. by Cover and Thomas~\cite[Section 8.6]{Cover}: $(x^n,y^n)$ are jointly typical if
\[
\left|\frac{1}{n}\log P_{X^n}(x^n)-\mathbb{H}(X)\right|<\epsilon, \quad \left|\frac{1}{n}\log P_{Y^n}(y^n)-\mathbb{H}(Y)\right|<\epsilon, \quad \left|\frac{1}{n}\log P_{X^nY^n}(x^n,y^n)-\mathbb{H}(X,Y)\right|<\epsilon,
\]
and Han~\cite[Section 3.1]{Han}: $(x^n,y^n)$ are jointly typical if
\[
\left|\frac{1}{n}\log \frac{P_{Y^n|X^n}(y^n|x^n)}{P_{Y^n}(y^n)}-\mathbb{I}(X;Y)\right|<\gamma,
\]
where $\mathbb{H}(X)$ and $\mathbb{I}(X;Y)$ denote the average entropy and the average mutual information, respectively. Note that the latter condition of Han is implied by the former set of conditions of Cover and Thomas.
}
\begin{equation}
i(x^n(\hat{m});y^n)>\log \gamma(x^n(\hat{m})),
\end{equation}
where $\gamma(x^n)$ is a (possibly) codeword-dependent threshold and $i(x^n;y^n)$ is the corresponding realization of the \emph{mutual information} RV
\begin{equation}
i(X^n;Y^n):=\log \frac{P_{Y^n|X^n}(Y^n|X^n)}{P_{Y^n}(Y^n)}.
\label{inf-RV}
\end{equation}
Here, the \emph{reference} distribution $P_{Y^n}$ is the marginal output distribution induced by the input distribution~$P_{X^n}$, i.e.,
\begin{equation}
P_{Y^n}(y^n)=\sum_{x^n\in\mathcal{X}^n} P_{X^n}(x^n) P_{Y^n|X^n}(y^n|x^n).
\end{equation}
Using one realization of such a code $\{x^n(j)\}_{j=1}^M$, the average error probability can be bounded as\footnote{Throughout this paper, we use a non-standard notation of the form~$P_XP_YP_{Z|X}[f(X,Y,Z)\in\mathcal{A}]$ to explicitly indicate that $(X,Y,Z)$ follow the joint distribution~$P_X\times P_Y\times P_{Z|X}$ in determining the probability $\Pr[f(X,Y,Z)\in\mathcal{A}]$.}
\begin{align}
\epsilon &\leq \frac{1}{M}\sum_{k=1}^M P_{Y^n|X^n=x^n(k)}[i(x^n(k);Y^n)\leq \log \gamma(x^n(k))] \notag \\
& \quad + \frac{1}{M}\sum_{k=1}^M P_{Y^n|X^n=x^n(k)}\left[\bigcup_{j=1}^{k-1}i(x^n(j);Y^n)>\log \gamma(x^n(j))\right], \label{typ-1}
\end{align}
that is, the sum of an \emph{outage probability}, that the correct codeword does not look typical, and a \emph{confusion probability}, that a preceding codeword incorrectly looks typical.
The error probability averaged over all possible realizations of the codebook can then be bounded as
\allowdisplaybreaks{
\begin{align}
\epsilon &\leq \prod_{l=1}^M\left[ \sum_{x^n(l)} P_{X^n}(x^n(l))\right] \frac{1}{M}\sum_{k=1}^M P_{Y^n|X^n=x^n(k)}[i(x^n(k);Y^n)\leq \log \gamma(x^n(k))] \notag \\
& \;\;\; +\prod_{l=1}^M\left[ \sum_{x^n(l)} P_{X^n}(x^n(l))\right] \frac{1}{M}\sum_{k=1}^M P_{Y^n|X^n=x^n(k)}\left[\bigcup_{j=1}^{k-1}i(x^n(j);Y^n)>\log \gamma(x^n(j))\right] \label{ave} \\
&\leq \frac{1}{M}\sum_{k=1}^M \sum_{x^n(k)} P_{X^n}(x^n(k)) P_{Y^n|X^n=x^n(k)}[i(x^n(k);Y^n)\leq \log \gamma(x^n(k))] \prod_{l\neq k}\left[ \sum_{x^n(l)} P_{X^n}(x^n(l))\right] \notag \\
& \;\;\; + \frac{1}{M}\sum_{k=1}^M \sum_{j=1}^{k-1} \sum_{x^n(j)} \sum_{x^n(k)} P_{X^n}(x^n(j)) P_{X^n}(x^n(k)) P_{Y^n|X^n=x^n(k)}\left[i(x^n(j);Y^n)>\log \gamma(x^n(j))\right] \prod_{l\neq j,k}\left[ \sum_{x^n(l)} P_{X^n}(x^n(l))\right] \label{uni} \\
&= \frac{1}{M}\sum_{k=1}^M P_{X^n}P_{Y^n|X^n}[i(X^n;Y^n)\leq \log \gamma(X^n)] + \frac{1}{M}\sum_{k=1}^M (k-1) P_{X^n}P_{Y^n}[i(X^n;Y^n)>\log \gamma(X^n)] \\
&\leq P_{X^n}P_{Y^n|X^n}[i(X^n;Y^n)\leq \log \gamma(X^n)] + \frac{M-1}{2} P_{X^n}P_{Y^n}[i(X^n;Y^n)>\log \gamma(X^n)], \label{typ-2}
\end{align}}
where~\eqref{ave} follows from averaging over the random codebook, and~\eqref{uni} follows from the union bound.
The final result is that there exists a deterministic codebook consisting of $M$ codewords whose average error probability~$\epsilon$ satisfies~\eqref{typ-2}. It is worth mentioning that, in the standard asymptotic analysis of memoryless channels $P_{Y^n|X^n}(y^n|x^n)=\prod_{t=1}^n P_{Y|X}(y_t|x_t)$, the input distribution is selected i.i.d. $P_{X^n}(x^n)=\prod_{t=1}^n P_X(x_t)$, and the threshold is selected as a function of the \emph{average mutual information}, $\log\gamma(x^n)=\log \gamma_n=n\mathbb{I}(X;Y)-o(n)=n\mathbb{E}_{P_XP_{Y|X}}[i(X;Y)]-o(n)$. This leads to the proof of achievability for rates $\frac{\log M}{n}<\mathbb{I}(X;Y)$. In this paper, however, we preserve the general $n$-letter form of the input distribution, since we will use non-i.i.d. input distributions to deal with input cost constraints.
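As a concrete illustration of~\eqref{typ-2} (a numerical sketch with illustrative parameters, not part of the formal development), the following Python snippet estimates both terms for a binary symmetric channel with crossover probability $p$ and uniform i.i.d. inputs, for which the per-letter information density takes the value $\log(2(1-p))$ if $y_t=x_t$ and $\log(2p)$ otherwise; the confusion term is evaluated through the change-of-measure identity derived later in~\eqref{change of measure}, which avoids simulating rare events directly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# Illustrative parameters: BSC(p) with uniform i.i.d. inputs.
n, p, trials = 200, 0.11, 100000
log_M = 80 * np.log(2)              # rate 0.4 bit/use; capacity is about 0.5
log_gamma = log_M - np.log(2)       # threshold gamma_n = (M-1)/2 (approx.)

# i(x^n;Y^n) under the joint law P_{X^n} x P_{Y^n|X^n}.
flips = rng.random((trials, n)) < p
i_joint = np.where(flips, np.log(2*p), np.log(2*(1-p))).sum(axis=1)

outage = np.mean(i_joint <= log_gamma)
# Confusion term via change of measure:
# (M-1)/2 * P_{X^n}P_{Y^n}[i > log g] = E[exp(log g - i) 1{i > log g}].
confusion = np.mean(np.exp(log_gamma - i_joint) * (i_joint > log_gamma))
print(outage + confusion)           # Monte Carlo estimate of the RHS of the bound
\end{verbatim}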
The result~\eqref{typ-2} can be readily extended to input cost constrained settings~\cite{Thomasian} requiring~$X^n\in\mathcal{F}_n$, where $\mathcal{F}_n\subset \mathcal{X}^n$ is the set of feasible input sequences: upon selecting the decoding threshold
\begin{align}
\gamma(x^n)=
\begin{cases}
\gamma_n & x^n\in\mathcal{F}_n\; , \\
\infty & x^n\notin\mathcal{F}_n\; ,
\end{cases} \label{inf-th}
\end{align}
where $\gamma_n$ is a prescribed threshold, we obtain
\begin{align}
\epsilon &\leq P_{X^n}P_{Y^n|X^n}\left[{i}(X^n;Y^n)\leq \log \gamma(X^n) \bigcap X^n\notin\mathcal{F}_n\right] + P_{X^n}P_{Y^n|X^n}\left[{i}(X^n;Y^n)\leq \log \gamma(X^n) \bigcap X^n\in\mathcal{F}_n\right] \notag \\
& \quad + \frac{M-1}{2} P_{X^n}P_{Y^n}\left[{i}(X^n;Y^n)>\log \gamma(X^n)\bigcap X^n\notin\mathcal{F}_n\right] \notag \\
& \quad + \frac{M-1}{2} P_{X^n}P_{Y^n}\left[{i}(X^n;Y^n)>\log \gamma(X^n)\bigcap X^n\in\mathcal{F}_n\right] \label{DT-cost-1} \\
&\leq P_{X^n}[X^n\notin\mathcal{F}_n] + P_{X^n}P_{Y^n|X^n}[i(X^n;Y^n)\leq \log \gamma_n] + \frac{M-1}{2} P_{X^n}P_{Y^n}[i(X^n;Y^n)>\log \gamma_n]. \label{DT-cost-PPV}
\end{align}
Upon remapping all non-feasible codewords to an arbitrary sequence~$x^n(0)\in\mathcal{F}_n$ and without touching the decoding regions, we conclude that there exists a deterministic codebook with $M$ codewords all belonging to the feasible set~$\mathcal{F}_n$ and whose average error probability~$\epsilon$ satisfies~\eqref{DT-cost-PPV}, cf.~\cite[p. 2314]{PPV}.
Considering an i.i.d. Gaussian input $P_{X^n}\sim\mathcal{N}(\mathbf{0},(P-\delta)I_n)$, with $\delta$ being an arbitrarily small positive constant\footnote{The power margin $\delta$ can vanish with $n$, provided that it decays strictly more slowly than $\frac{1}{\sqrt{n}}$, so that the cost-violation probability does not dominate.}, and applying the conventional CLT to~\eqref{DT-cost-PPV} results in the approximate achievability bound~\cite{Rice}:
\begin{align}
\frac{\log M}{n}\leq C(P-\delta)-\frac{\log e}{\sqrt{n}}\sqrt{\frac{P-\delta}{1+P-\delta}}Q^{-1}(\epsilon)+O\left(\frac{1}{n}\right), \label{Gaussian-input}
\end{align}
where, as usual, $Q^{-1}(\cdot)$ is the functional inverse of the Gaussian complementary CDF $Q(x)=\frac{1}{\sqrt{2\pi}}\int_x^\infty e^{-t^2/2}dt$, and
\begin{equation}
C(P)=\frac{1}{2}\log(1+P).
\label{capacity}
\end{equation}
As will be seen shortly, this second-order performance is not optimal: the i.i.d. Gaussian input distribution achieves capacity but is not second-order optimal, since a considerable portion of the Gaussian codewords does not utilize the maximum available power budget~$P$, which degrades the performance. In Shannon's words~\cite{Shannon}, ``it is evidently necessary to avoid having too many of the codepoints interior to the $\sqrt{nP}$ sphere.'' It will be shown that more refined input distributions, which force all codewords to use the maximum power~$P$, are required for this purpose.
\subsection{Polyanskiy et al.'s $\kappa\beta$ Bound}
\label{sub-kap-beta}
A tighter achievability result for the P2P Gaussian channel is provided in the recent $\kappa\beta$~bound of Polyanskiy et al.~\cite{PPV}. Using a slightly different language from that in~\cite{PPV}, this bound fixes an arbitrary \emph{output} distribution $Q_{Y^n}$, similar to~\cite{Fano,Wolfowitz}, and employs this as the reference distribution for the definition of a \emph{modified} mutual information random variable:
\begin{equation}
\tilde{i}(X^n;Y^n):=\log \frac{P_{Y^n|X^n}(Y^n|X^n)}{Q_{Y^n}(Y^n)}.
\end{equation}
Building upon the maximal coding idea~\cite{Feinstein,Ash}, deterministic sequences are arbitrarily chosen as codewords one by one, and the sequential codeword generation process stops after selecting $M$ codewords $\{x^n(j)\}_{j=1}^M$ if the error probability for any choice of the $(M+1)$-th sequence exceeds the target maximal error probability~$\epsilon$, i.e.,
\begin{align}
\epsilon &< P_{Y^n|X^n=x^n}[\,\tilde{i}(x^n;Y^n)\leq\log \gamma_n] + P_{Y^n|X^n=x^n}\left[\bigcup_{j=1}^{M} \tilde{i}(x^n(j);Y^n)>\log \gamma_n\right] \label{kappa-1}
\end{align}
for all sequences $x^n\in\mathcal{F}_n$, where $\mathcal{F}_n$ is the feasible set of codewords according to the input cost constraint. Rearranging~\eqref{kappa-1} then yields
\begin{align}
&P_{Y^n|X^n=x^n}\left[\bigcup_{j=1}^{M} \tilde{i}(x^n(j);Y^n)>\log \gamma_n\right] > \epsilon -P_{Y^n|X^n=x^n}[\,\tilde{i}(x^n;Y^n)\leq\log \gamma_n] \geq \tau^* \label{union}
\end{align}
again for all sequences $x^n\in\mathcal{F}_n$, where
\begin{align}
\tau^*=\epsilon- \sup_{x^n\in \mathcal{F}_n}P_{Y^n|X^n=x^n}[\,\tilde{i}(x^n;Y^n)\leq \log \gamma_n]. \label{tau*}
\end{align}
Now, thinking of the union in the brackets in~\eqref{union} as a binary test on $Y^n$, one can cast the problem into the framework of the following composite hypothesis test, which is used to treat the input cost constraint:
\begin{align}
&\kappa_{\tau}\left(\{P_{Y^n|X^n=x^n}\}_{x^n\in \mathcal{F}_n}, Q_{Y^n}\right) := \min_{Z:\, P_{Y^n|X^n=x^n}[Z(Y^n)=1]>\tau,\; \forall x^n\in \mathcal{F}_n} Q_{Y^n}[Z(Y^n)=1],
\end{align}
where $Z(Y^n)$ is a binary test choosing either the class of conditional channel laws~$\{P_{Y^n|X^n=x^n}\}_{x^n\in \mathcal{F}_n}$ if $Z=1$, or the unconditional output distribution~$Q_{Y^n}$ if $Z=0$.
The $\kappa\beta$ bound of~\cite{PPV} for maximal error probability can then be stated as
\begin{align}
\kappa_{\tau^*}\left(\{P_{Y^n|X^n=x^n}\}_{x^n\in \mathcal{F}_n}, Q_{Y^n}\right) &\leq Q_{Y^n}\left[\bigcup_{j=1}^{M}\tilde{i}(x^n(j);Y^n)>\log \gamma_n\right] \\
& \leq M \sup_{x^n\in \mathcal{F}_n} Q_{Y^n}[\,\tilde{i}(x^n;Y^n)>\log \gamma_n]. \label{kappa-2}
\end{align}
The interpretation of the composite hypothesis test~$\kappa_\tau$, and accordingly its evaluation for the P2P Gaussian channel, is quite involved. Polyanskiy \emph{et al.}~\cite{PPV} invoke arguments from abstract algebra to analyze the performance of this test for the feasible set $\mathcal{F}_n=\{x^n\in\mathbb{R}^n: ||x^n||=\sqrt{nP}\}$ being the ``power shell'' and the special choice $Q_{Y^n}\sim\mathcal{N}(\mathbf{0},(1+P)I_n)$ with the selection~$\tau^*=1/\sqrt{n}$ in~\eqref{tau*}, finally concluding that
\begin{align}
\log\kappa_{\tau^*}\geq -\frac{1}{2}\log n + O(1), \label{kappa-bound}
\end{align}
which with application of the CLT results in the following second-order optimal achievable rate for the P2P Gaussian channel
\begin{align}
\frac{\log M}{n}\leq C(P) - \sqrt{\frac{V(P)}{n}}Q^{-1}(\epsilon) + O\left(\frac{1}{n}\right),
\end{align}
where~$V(P)$ is the dispersion of the Gaussian P2P channel
\begin{align}
V(P)&=\frac{\log^2e}{2}\frac{P(P+2)}{(1+P)^2}.
\label{dispersion}
\end{align}
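To make this comparison concrete, the following sketch (with illustrative parameters, and taking $\delta\to0$ in~\eqref{Gaussian-input}) evaluates the two second-order approximations; the power-shell rate is strictly larger for every $P>0$, since $V(P)<\log^2e\,\frac{P}{1+P}$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

log2e = np.log2(np.e)
C = lambda P: 0.5 * np.log2(1 + P)                       # bits per channel use
V = lambda P: (log2e**2 / 2) * P * (P + 2) / (1 + P)**2  # dispersion, bits^2

n, eps, P = 500, 1e-3, 1.0           # illustrative parameters
Qinv = norm.isf(eps)                 # Q^{-1}(eps)

r_shell = C(P) - np.sqrt(V(P) / n) * Qinv   # power-shell (second-order optimal)
r_iid = C(P) - (log2e / np.sqrt(n)) * np.sqrt(P / (1 + P)) * Qinv
print(r_shell, r_iid)                # the power-shell rate is strictly larger
\end{verbatim}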
Comparing the $\kappa\beta$ bound of~\cite{PPV} with the random coding and typicality decoding method discussed earlier suggests an important insight. Introducing the composite hypothesis bound $\kappa_\tau$ in~\cite{PPV} enables a \emph{change of measure} from $P_{Y^n|X^n=x^n}$ in~\eqref{kappa-1} to $Q_{Y^n}$ in~\eqref{kappa-2} in computing the confusion probability. A similar process occurs in the random coding argument with typicality decoding, as the random generation of the codebook makes it possible to change the measure for computation of the confusion probability from $P_{Y^n|X^n=x^n}$ in~\eqref{typ-1} to its average $P_{Y^n}$ in~\eqref{typ-2}. We suspect the reason why the composite $\kappa_\tau$ test is introduced in~\cite{PPV} is to enable such a change of measure argument which is required for the evaluation of the confusion probability, but is not directly available in the sequential generation of maximal coding, which does not incorporate any random generation process. This insight is one of the main ideas we will use in this paper for the analysis of the P2P Gaussian channel and the Gaussian MAC with a random coding and typicality decoding method.
\subsection{Shannon's Geometric Bound}
\label{sub-Shannon}
As mentioned before, the best known achievable rate for the P2P Gaussian channel is due to Shannon~\cite{Shannon}, who starts with random codebook generation according to the uniform distribution on the $n$-dimensional sphere of radius~$\sqrt{nP}$, i.e., the power shell
\begin{align}
||x^n||^2=nP, \label{pshell}
\end{align}
but considers the optimal ML decoding method. Since this rule is equivalent to minimum Euclidean distance in~$\mathbb{R}^n$, Shannon employs geometric arguments to evaluate and bound the code-ensemble-average probability that the i.i.d. Gaussian channel noise moves the output closer to some incorrect codeword than to the originally transmitted codeword:
\begin{align}
\epsilon
&=-\int_0^\pi \left\{1-\left[1-\frac{S_n(1;\theta)}{S_n(1)}\right]^{M-1}\right\}dQ(\theta) \\
&\leq Q(\theta^*)-\frac{M}{S_n(1)}\int_0^{\theta^*} S_n(1;\theta)dQ(\theta),
\end{align}
where: $S_n(1;\theta)$ is the surface area of a unit-radius $n$-dimensional spherical cap with half-angle $\theta$; $S_n(1)=S_n(1;\pi)$ is the surface area of a unit-radius $n$-dimensional sphere; $Q(\theta)$ is the probability with respect to $\mathcal{N}(\mathbf{0},I_n)$ that a point $x^n\in\mathbb{R}^n$ with $||x^n||=\sqrt{nP}$ is moved outside a circular cone of half-angle $\theta$ with vertex at the origin and axis passing through $x^n$; and $\theta^*$ is a characteristic of the rate, defined as the cone half-angle satisfying $S_n(1;\theta^*)=S_n(1)/M$. Shannon then expresses this geometric bound as an error exponent result in terms of the rate and SNR:
\begin{align}
\epsilon \leq \frac{\alpha(P,\theta^*)}{\sqrt{n}} e^{-nE(P,\theta^*)},
\end{align}
where $\alpha(P,\theta^*)$ and $E(P,\theta^*)$ are positive functions of the power~$P$ and the rate characteristic~$\theta^*$.
A key observation in Shannon's work is his use of the uniform distribution on the power shell, which enables him to develop sharp non-asymptotic bounds. In this paper, we will follow Shannon in this respect, but rely on the more familiar and less complex method of typicality decoding, which we show is still capable of achieving sharp non-asymptotic bounds for the Gaussian channel, at least up to the second order.
Having reviewed the basic elements of the different procedures for handling cost constraints, especially in Gaussian settings, we now move on to the formal statement of our problems and results.
\section{P2P Gaussian Channel}
\label{sec-P2P}
In this section, we re-derive the fundamental communication limits over the P2P Gaussian channel, which are well known from classical and recent studies, e.g.,~\cite{Shannon, PPV, Hayashi}, and were summarized in Section~\ref{Sec-BG}. Building upon the standard random coding and typicality decoding method with slight modifications, our aims are to 1) clarify our proof techniques in this simpler setting before exploring the more complex Gaussian MAC model, and 2) provide a more transparent alternative achievability proof for the P2P Gaussian problem, which is at least second-order optimal.
\subsection{System Model and Known Result}
\label{sub-P2P-model}
A general P2P channel with input cost constraint and without feedback consists of an input alphabet $\mathcal{X}$, an output alphabet $\mathcal{Y}$, and an $n$-letter channel transition probability $P_{Y^n|X^n}(y^n|x^n)\colon\mathcal{F}_n\to\mathcal{Y}^n$, where $\mathcal{F}_n\subseteq\mathcal{X}^n$ is the feasible set of $n$-letter input sequences.
For such a P2P channel, an $(n,M,\epsilon)$ code is composed of a message set~$\mathcal{M}=\{1,...,M\}$ and a corresponding set of codewords and mutually exclusive decoding regions $\{(x^n(j),D_{j})\}$ with $j\in\mathcal{M}$, such that the average error probability satisfies
\begin{align}
P_e^{(n)}:=\frac{1}{M}\sum_{j=1}^{M} \Pr[Y^n\notin D_{j}|X^n(j)\;\text{sent}]\leq\epsilon. \label{err-p2p}
\end{align}
Accordingly, a rate $\frac{\log M}{n}$ is \emph{achievable} for the P2P channel with finite blocklength~$n$ and average error probability~$\epsilon$ if such an $(n,M,\epsilon)$ code exists.
In particular, a P2P memoryless Gaussian channel without feedback consists of an input and an output taking values on the real line~$\mathbb{R}$ and a channel transition probability density $P_{Y|X}(y|x)\colon\mathbb{R}\to\mathbb{R}$ whose $n$-th extension follows $\mathcal{N}(y^n; x^n,I_n)$, i.e.,
\begin{align}
P_{Y^n|X^n}(y^n|x^n)\!=\!\prod_{t=1}^n\!P_{Y|X}(y_t|x_t)\!=\!(2\pi)^{-n/2} e^{-||y^n-x^n||^2/2}\!.
\end{align}
For such a P2P Gaussian channel, an $(n,M,\epsilon,P)$ code is an $(n,M,\epsilon)$ code as defined above, in which each codeword also satisfies a maximal power constraint:
\begin{align}
\frac{1}{n}\sum_{t=1}^n x_t^2(j)=\frac{1}{n}||x^n(j)||^2\leq P, \qquad \forall j\in\mathcal{M}.
\end{align}
Accordingly, a rate $\frac{\log M}{n}$ is \emph{achievable} for the P2P Gaussian channel with finite blocklength~$n$, average error probability $\epsilon$, and maximal power~$P$ if such an $(n,M,\epsilon,P)$ code exists.
The set of all achievable second-order coding rates for the P2P Gaussian channel is characterized as~\cite{PPV,Hayashi}
\begin{align}
\frac{\log M}{n}\leq C(P) - \sqrt{\frac{V(P)}{n}}Q^{-1}(\epsilon) + O\left(\frac{1}{n}\right), \label{2nd-P2P}
\end{align}
where $C(P)$ and $V(P)$ are the capacity~\eqref{capacity} and dispersion~\eqref{dispersion} of the P2P Gaussian channel, respectively. This section presents a relatively straightforward achievability proof of this result based upon random coding and typicality decoding.
\subsection{Key Elements of the Proof}
\label{sub-key-p2p}
In this section, we summarize the main ingredients of the proof of the main result~\eqref{2nd-P2P} for P2P Gaussian channels. The formal proof will be given in Sections~\ref{sub-P2P-Ach} and~\ref{sub-2nd-p2p}. The three main ingredients are: modified random coding and typicality decoding; a CLT for functions of random vectors; and change of measure with uniform bounding.
\subsubsection{Modified Random Coding and Typicality Decoding}
\label{sub-P2P-RC-TD}
The random coding and typicality decoding bounds~\eqref{typ-2} and~\eqref{DT-cost-PPV} are by now the most standard method for proving the achievability side of the channel coding theorems~\cite{ElGamal-Kim}. If the input distribution were chosen to be i.i.d., such as the i.i.d. Gaussian distribution, then an evaluation of these achievability bounds would be straightforward, using a CLT for the outage probability, and an LDT bound for the confusion probability, but the cost-violation probability would be non-zero $P_{X^n}[X^n\notin\mathcal{F}_n]\neq0$. However, as discussed in Section I, for the P2P Gaussian channel (and potentially other cost-constrained channels), no single-letter i.i.d. input distribution exists which can achieve the second-order optimal performance~\eqref{2nd-P2P}, and more complicated $n$-letter input distributions must be considered. A non-i.i.d. input distribution~$P_{X^n}$ that is second-order optimal and leads to a zero cost-violation probability~$P_{X^n}[X^n\notin\mathcal{F}_n]=0$, such as the uniform distribution on the power shell~\eqref{pshell}, induces a non-i.i.d. output distribution~$P_{Y^n}$, and this in turn prevents the mutual information RV~$i(X^n;Y^n)$ from being a sum of independent random variables, i.e. $i(X^n;Y^n)\neq\sum_{t=1}^n i(X_t;Y_t)$, a form which is convenient for CLT and LDT analyses. It is therefore appealing for typicality decoding to change the reference of the mutual information RV from the actual output distribution~$P_{Y^n}$ to an arbitrary \textit{product} distribution~$Q_{Y^n}$ and work with a \emph{modified} mutual information RV~$\tilde{i}(X^n;Y^n)$ which is defined as
\begin{equation}
\tilde{i}(X^n;Y^n):=\log \frac{P_{Y^n|X^n}(Y^n|X^n)}{Q_{Y^n}(Y^n)}, \label{mod-i}
\end{equation}
and, for a memoryless channel and product $Q_{Y^n}$, can be written as a summation $\tilde{i}(X^n;Y^n)=\sum_{t=1}^n \tilde{i}(X_t;Y_t)$, although the summands are not independent.
\subsubsection{CLT for Functions of Random Vectors}
\label{sub-CLT-Taylor}
The second-order approximations of channel coding rates mainly result from approximating the mutual information RV in the outage probability with a Gaussian distribution via the CLT. In the conventional setting, the CLT applies to a summation of independent RV's. However, due to the use of non-i.i.d. input distribution to handle the cost constraint, in this paper, we deal with mutual information densities that are sums of non-independent random variables or vectors. Rather, they can be expressed as (vector-) \emph{functions of sums} of i.i.d. random vectors. To facilitate the CLT for these situations, we rely on a simplified version of a technical result of Hoeffding and Robbins~\cite[Theorem 4]{Hoef}, for which they also credit Anderson and Rubin~\cite{Rubin}. Since these references do not specify the rate of convergence to Gaussianity, we slightly extend the analysis to prove a Berry-Esseen version of their result. The basic idea of the proof, which is relegated to Appendix A, is the application of Taylor's Theorem around the mean to a (vector-) function whose arguments are normalized sums of i.i.d. random vectors.
\begin{proposition} \label{CLT-func}
Let $\{\mathbf{U}_t:=(U_{1t},...,U_{Kt})\}_{t=1}^\infty$ be zero-mean i.i.d. random vectors in $\mathbb{R}^K$ with $\mathbb{E}[||\mathbf{U}_{1}||_2^3]<\infty$, and denoting $\mathbf{u}:=(u_1,...,u_K)$, let $\mathbf{f}(\mathbf{u}):\mathbb{R}^K\to\mathbb{R}^L$ be an $L$-component vector-function~$\mathbf{f}(\mathbf{u})=(f_1(\mathbf{u}),...,f_L(\mathbf{u}))$ which has continuous second-order partial derivatives in a $K$-hypercube neighborhood of $\mathbf{u}=\mathbf{0}$ of side length at least~$\frac{1}{\sqrt[4]{n}}$, and whose corresponding Jacobian matrix~$\mathbf{J}$ at $\mathbf{u}=\mathbf{0}$ consists of the following first-order partial derivatives
\begin{align}
J_{lk}:=\left.\frac{\partial f_l(\mathbf{u})}{\partial u_k}\right|_{\mathbf{u}=\mathbf{0}} \qquad l=1,...,L, \quad k=1,...,K.
\end{align}
Then, for any convex Borel-measurable set $\mathcal{D}$ in $\mathbb{R}^L$, there exists a finite positive constant~$B$ such that\footnote{The gap to Gaussianity in this form of the CLT, similar to other Berry-Esseen type bounds, is on the order of $\frac{1}{\sqrt{n}}$. Although this is enough for second-order proofs, it may make the reader doubt whether this slowly decaying gap leads to accurate approximations for short blocklengths. This is indeed a valid concern, but one should note that empirical evidence, such as the results of~\cite{PPV} for discrete and Gaussian P2P channels, suggests that the CLT is actually a highly accurate estimate and that the Berry-Esseen bound may be (much) looser than reality; cf.~\cite[p. 135]{P-thesis}.}
\begin{align}
\left| \Pr\left[\mathbf{f}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right) \in \mathcal{D}\right] - \Pr\left[\mathcal{N}\left(\mathbf{f}\left(\mathbf{0}\right), \mathbf{V}\right) \in \mathcal{D}\right] \right| \leq \frac{B}{\sqrt{n}},
\end{align}
where the covariance matrix~$\mathbf{V}$ is given by $\mathbf{V}=\frac{1}{n}\mathbf{J}\text{Cov}(\mathbf{U}_1)\mathbf{J}^T$, that is, its entries are defined as
\begin{align}
V_{ls}:= \frac{1}{n}\sum_{k=1}^K \sum_{p=1}^K J_{lk} J_{sp} \mathbb{E}[U_{k1}U_{p1}], \qquad l,s=1,...,L.
\end{align}
\end{proposition}
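As a quick numerical sanity check of Proposition~\ref{CLT-func} (a sketch with an arbitrary choice of $\mathbf{U}_t$ and $f$, not tied to the channel model), the empirical variance of $f\left(\frac{1}{n}\sum_t\mathbf{U}_t\right)$ should match $\frac{1}{n}\mathbf{J}\text{Cov}(\mathbf{U}_1)\mathbf{J}^T$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

n, trials = 400, 200000
# U_t = (A_t, A_t^2 - 1) with A_t ~ N(0,1): zero mean, dependent components.
A = rng.standard_normal((trials, n))
u1, u2 = A.mean(axis=1), (A**2 - 1).mean(axis=1)

f = np.exp(u1) * np.sqrt(1 + u2)    # f(0,0) = 1, Jacobian J = [1, 1/2] at 0
Sigma = np.array([[1.0, 0.0],       # Cov(U_1): E[A^3] = 0, E[(A^2-1)^2] = 2
                  [0.0, 2.0]])
J = np.array([1.0, 0.5])
print(f.var(), J @ Sigma @ J / n)   # empirical vs. predicted variance
\end{verbatim}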
We would like to mention that references~\cite{PPV,Hayashi} take an indirect approach based on symmetry to handle this problem for the P2P Gaussian channel, thus reducing the problem to the evaluation of the conditional outage probability for a fixed input sequence, for which the conventional CLT is applicable. However, this approach does not generalize to multi-user settings. Our approach of using a CLT for functions of random vectors, although more complicated, provides a direct analysis of the outage probability without exploiting the symmetry property, and will be seen to generalize to the Gaussian MAC.
\subsubsection{Change of Measure and Uniform Bounding}
\label{sub-P2P-change-measure}
For non-i.i.d. input distributions~$P_{X^n}$, such as the uniform distribution on the power shell~\eqref{pshell}, an LDT analysis of the confusion probability is also challenging due to the non-product nature of the output distribution~$P_{Y^n}$ induced by the input distribution~$P_{X^n}$. We are therefore interested in changing the measure with respect to which the confusion probability is analyzed, as follows:
\begin{align}
&P_{X^n}P_{Y^n}[\,\tilde{i}(X^n;Y^n)>\log \gamma_n] \label{chng-beg} \\
&= \int\int 1\left\{\tilde{i}(x^n;y^n)>\log \gamma_n\right\} dP_{Y^n}(y^n) dP_{X^n}(x^n) \\
&=\int\int1\left\{\tilde{i}(x^n;y^n)>\log\gamma_n\right\}\frac{dP_{Y^n}(y^n)}{dQ_{Y^n}(y^n)}dQ_{Y^n}(y^n)dP_{X^n}(x^n) \\
&\leq \sup_{y^n\in\mathcal{Y}^n} \frac{dP_{Y^n}(y^n)}{dQ_{Y^n}(y^n)}\, P_{X^n}Q_{Y^n}\!\!\left[\,\tilde{i}(X^n;Y^n)>\log\gamma_n\right]. \label{change-measure}
\end{align}
The final expression~\eqref{change-measure} enables us to compute the confusion probability with respect to the more convenient measure~$Q_{Y^n}$, but at the expense of the additional Radon-Nikodym (R-N) derivative~$\frac{dP_{Y^n}(y^n)}{dQ_{Y^n}(y^n)}$.\footnote{To be precise, the definition of this R-N derivative requires an absolute continuity condition $P_{Y^n}\ll Q_{Y^n}$~\cite{R-N-der}, which is assumed to hold for our general arguments in Theorem~\ref{corol-modified-DT}, and can be easily seen to hold in our concrete example of the P2P Gaussian channel.} This bound is particularly useful if the extra coefficient is uniformly bounded by a positive constant~$K$ or a slowly growing function~$K_n$, such that the resulting rate loss does not affect the second-order behavior.
A close examination of~\cite{PPV} shows that the $\kappa_\tau$ performance characteristic in the $\kappa\beta$ bound is also mainly concerned with the R-N derivative~$\frac{dP_{Y^n}(Y^n)}{dQ_{Y^n}(Y^n)}$ introduced above, and the bound~\eqref{kappa-bound} is analogous to the uniform bounding by~$K_n$ in the analysis above~\eqref{change-measure}. We believe the difference is that our analysis using random coding, typicality decoding, and change of measure is a more transparent procedure and more closely follows conventional lines of argument.
\vspace{3mm}
We are now ready to provide the formal proof in the next two subsections.
\subsection{Non-Asymptotic Achievability for Cost-Constrained Channels}
\label{sub-P2P-Ach}
In the following, we state a result based upon modified random coding and typicality decoding for achievability on general P2P channels with input cost constraints, valid for any blocklength. The result expresses the error probability in terms of the outage, confusion, and constraint-violation probabilities, and is based on the dependence testing (DT) bound of~\cite{PPV}.
\begin{theorem}
\label{corol-modified-DT}
For a general P2P channel $(\mathcal{X},P_{Y^n|X^n}(y^n|x^n),\mathcal{Y})$, any input distribution $P_{X^n}$, and any output distribution~$Q_{Y^n}$, there exists an $(n,M,\epsilon)$ code that satisfies the input cost constraint~$\mathcal{F}_n$ and
\begin{align}
\epsilon \leq P_{X^n}P_{Y^n|X^n}[\,\tilde{i}(X^n;Y^n)\leq \log \gamma_n] + K_n\frac{M-1}{2} P_{X^n}Q_{Y^n}[\,\tilde{i}(X^n;Y^n)>\log \gamma_n] + P_{X^n}[X^n\!\!\notin\mathcal{F}_n], \label{DT-simple}
\end{align}
where the coefficient $K_n$ is defined as
\begin{align}
K_n:=\sup_{y^n\in\mathcal{Y}^n} \frac{dP_{Y^n}(y^n)}{dQ_{Y^n}(y^n)}, \label{unif-bound}
\end{align}
and $\gamma_n$ is an arbitrary positive threshold whose optimal choice to give the highest rates is $\gamma_n\equiv K_n\frac{M-1}{2}$.
\end{theorem}
\textit{Remark.} The bound~\eqref{DT-simple} reduces to a standard one with random coding, typicality decoding, and $K_n=1$ if the auxiliary distribution~$Q_{Y^n}(y^n)$ is identical to the actual output distribution~$P_{Y^n}(y^n)$ induced by the input~$P_{X^n}(x^n)$.
\begin{proof}
The channel encoder randomly generates $M$~codewords of the codebook independently according to some given $n$-letter distribution $P_{X^n}$, where $n$ is the designated blocklength. Observing the output $y^n$, the decoder chooses the first codeword $x^n(\hat{m})$ of the codebook which looks ``typical'' with $y^n$ in a \emph{modified} one-sided sense
\begin{equation}
\tilde{i}(x^n(\hat{m});y^n)>\log \gamma(x^n(\hat{m})), \label{mod-decod}
\end{equation}
where $\gamma(x^n)$ is a codeword-dependent threshold and $\tilde{i}(x^n;y^n)$ is the corresponding realization of the \emph{modified} mutual information random variable~$\tilde{i}(X^n;Y^n)$.
The error probability, averaged over all possible realizations of the codebook, can then be bounded, similar to~\eqref{typ-1}-\eqref{typ-2}, as the sum of an outage probability and a confusion probability:
\begin{align}
\epsilon &\leq P_{X^n}P_{Y^n|X^n}[\,\tilde{i}(X^n;Y^n)\leq \log \gamma(X^n)] + \frac{M-1}{2} P_{X^n}P_{Y^n}[\,\tilde{i}(X^n;Y^n)>\log \gamma(X^n)] .
\end{align}
Applying the change of measure technique of~\eqref{change-measure} with the definition~\eqref{unif-bound} yields
\begin{align}
\epsilon &\leq P_{X^n}P_{Y^n|X^n}[\,\tilde{i}(X^n;Y^n)\leq \log \gamma(X^n)] + K_n\frac{M-1}{2} P_{X^n}Q_{Y^n}[\,\tilde{i}(X^n;Y^n)>\log \gamma(X^n)] .
\end{align}
Upon selecting the threshold~$\log\gamma(x^n)$ as in~\eqref{inf-th} and following the reasoning leading to~\eqref{DT-cost-1}-\eqref{DT-cost-PPV} to handle the cost constraint, we infer that there exists a deterministic codebook, consisting of $M$ codewords all belonging to the feasible set~$\mathcal{F}_n$, whose average error probability~$\epsilon$ satisfies
\begin{align}
\epsilon \leq P_{X^n}[X^n\notin\mathcal{F}_n] + P_{X^n}P_{Y^n|X^n}[\,\tilde{i}(X^n;Y^n)\leq \log \gamma_n] + K_n\frac{M-1}{2} P_{X^n}Q_{Y^n}[\,\tilde{i}(X^n;Y^n)>\log \gamma_n] . \label{typ-2-mod}
\end{align}
To conclude the final assertion of Theorem~\ref{corol-modified-DT}, it is sufficient to observe that the last two summands on the RHS of~\eqref{typ-2-mod} form a weighted sum of the two types of error in a Bayesian binary hypothesis test, and therefore correspond to the average error probability of that test. It is then known from the Neyman-Pearson Theorem that the optimal test is a likelihood-ratio test (LRT), as used in~\eqref{mod-decod}, with the optimal threshold equal to the ratio of the priors, or simply the ratio of the coefficients of the two error probabilities of the test, namely $\gamma_n\equiv K_n\frac{M-1}{2}$.
\end{proof}
\subsection{Second-Order Characterization for P2P Gaussian Channels}
\label{sub-2nd-p2p}
So far, we have stated and proved Theorem 1 which holds for any arbitrary cost-constrained P2P channel. In the following, we specialize this achievability bound to the P2P Gaussian channel.
\subsubsection{Coding on the Power Shell}
First, we choose the input distribution as the uniform distribution on the power shell:
\begin{align}
P_{X^n}(x^n)=\frac{\delta(||x^n||-\sqrt{nP})}{S_n(\sqrt{nP})}, \label{unif-shell}
\end{align}
where $\delta(\cdot)$ is the Dirac delta function and~$S_n(r)=\frac{2\pi^{n/2}}{\Gamma(n/2)}r^{n-1}$ is the surface area of an~$n$-dimensional sphere of radius~$r$. Notice that this distribution satisfies the input power constraint with probability one, so that
\begin{align}
P_{X^n}[X^n\notin\mathcal{F}_n]=P_{X^n}[||X^n||^2>nP]=0.
\label{cost=0}
\end{align}
Moreover, the output distribution induced by this input is
\begin{align}
P_{Y^n}(y^n) & =\int_{\mathbb{R}^n} P_{X^n}(x^n) P_{Y^n|X^n}(y^n|x^n) dx^n \\
& = \int_{\mathbb{R}^n} \frac{\delta(||x^n||-\sqrt{nP})}{S_n(\sqrt{nP})} (2\pi)^{-n/2} e^{-||y^n-x^n||^2/2} dx^n \\
& = \int_0^\pi \int_{0}^\infty \frac{\delta(r-\sqrt{nP})}{S_n(\sqrt{nP})} (2\pi)^{-n/2}e^{-r^2/2} e^{-||y^n||^2/2}e^{||y^n||r\cos\theta} S_{n-1}(r\sin\theta) r dr d\theta \label{decomp} \\
& = \frac{(2\pi)^{-n/2}\Gamma\left(n/2\right)}{\pi^{1/2}\Gamma\left(\frac{n-1}{2}\right)} e^{-nP/2} e^{-||y^n||^2/2} \int_0^\pi e^{||y^n||\sqrt{nP}\cos\theta} \sin^{n-2}\theta d\theta \\
& = \frac{(2\pi)^{-n/2}\Gamma\left(n/2\right)}{\pi^{1/2}\Gamma\left(\frac{n-1}{2}\right)} e^{-nP/2} e^{-||y^n||^2/2} \frac{\pi^{1/2}2^{n/2-1}\Gamma\left(\frac{n-1}{2}\right)}{(||y^n||\sqrt{nP})^{n/2-1}}I_{n/2-1}(||y^n||\sqrt{nP}) \label{bessel} \\
& = \frac{1}{2}\pi^{-n/2}\Gamma\left(\frac{n}{2}\right) e^{-nP/2} e^{-||y^n||^2/2} \frac{I_{n/2-1}(||y^n||\sqrt{nP})}{(||y^n||\sqrt{nP})^{n/2-1}}, \label{shell-output}
\end{align}
where~\eqref{decomp} follows from decomposing the space~$\mathbb{R}^n$ into a continuum of $(n-1)$-dimensional ring elements of radius $r\sin\theta$, with $0\leq r<\infty$ being the distance of ring points from the origin and $0\leq\theta\leq\pi$ being the angle of ring points with the line connecting the origin and the point~$y^n$, and where~\eqref{bessel} follows from the definition of the modified Bessel function~$I_v(\cdot)$ of the first kind and~$v$-th order. It is worth mentioning that the general form of the above marginal distribution is obtained in~\cite{Bessel}.
Next, we select the reference output distribution for the P2P Gaussian channel as
\begin{align}
Q_{Y^n}(y^n)=\mathcal{N}(y^n;\mathbf{0},(1+P)I_n), \label{Q-p2p}
\end{align}
that is, the capacity-achieving output distribution.
The following proposition will then bound the R-N derivative term introduced in~\eqref{unif-bound}. The proof, which is a slight generalization of that in~\cite[p.~2347]{PPV}, is given in Appendix~\ref{proof-Prop-P2P}.
\begin{proposition}
\label{Prop-P2P-unif-bound}
Let $P_{Y^n}$ be the distribution~\eqref{shell-output} induced on the output of the P2P Gaussian channel by the uniform input distribution~\eqref{unif-shell} on the power shell, and let $Q_{Y^n}$ be the capacity-achieving output distribution~\eqref{Q-p2p} for the P2P Gaussian channel. There exists a positive constant~$K$ such that, for sufficiently large~$n$,
\begin{align}
\frac{dP_{Y^n}(y^n)}{dQ_{Y^n}(y^n)}\leq K, \qquad \forall\; y^n\in\mathbb{R}^n;
\end{align}
In fact, $K$ can be taken to be a constant independent of the power constraint~$P$.
\end{proposition}
\textit{Remark.} Using some more complicated manipulations, this proposition can be shown to be valid for any finite~$n$, but the above statement is enough for our second-order analysis.
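Although the formal proof is relegated to Appendix~\ref{proof-Prop-P2P}, the bound is easy to check numerically. The following sketch (illustrative parameters; both densities depend on $y^n$ only through $r=||y^n||$) evaluates the log-ratio of~\eqref{shell-output} to~\eqref{Q-p2p} over the typical range of $r$, using the exponentially scaled Bessel function to avoid overflow:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln, ive

n, P = 500, 1.0                     # illustrative parameters
v, a = n / 2 - 1, np.sqrt(n * P)

def log_P_shell(r):                 # log of the shell-induced output density
    log_bessel = np.log(ive(v, r * a)) + r * a   # log I_v(x) = log ive + x
    return (np.log(0.5) - (n / 2) * np.log(np.pi) + gammaln(n / 2)
            - n * P / 2 - r**2 / 2 + log_bessel - v * np.log(r * a))

def log_Q(r):                       # log of the CAOD N(0, (1+P) I_n)
    return -(n / 2) * np.log(2 * np.pi * (1 + P)) - r**2 / (2 * (1 + P))

r = np.linspace(0.5, 2.0, 400) * np.sqrt(n * (1 + P))
print((log_P_shell(r) - log_Q(r)).max())   # stays below a small constant
\end{verbatim}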
Proposition~\ref{Prop-P2P-unif-bound} facilitates the use of Theorem~\ref{corol-modified-DT} with the aforementioned choices for the input distribution~$P_{X^n}$ and the reference output distribution~$Q_{Y^n}$. Substituting~\eqref{cost=0} into the achievability bound~\eqref{DT-simple} of Theorem~\ref{corol-modified-DT} with the optimal threshold~$\gamma_n\equiv K\frac{M-1}{2}$ only leaves the outage and confusion probabilities.
In the following, we evaluate the outage and confusion probabilities for sufficiently large blocklength to obtain second-order achievable rates.
\subsubsection{Evaluation of the Outage Probability}
\label{sub-P2P-outage}
In this subsection, we bound the outage probability
\begin{align}
P_{X^n}P_{Y^n|X^n}\left[\,\tilde{i}(X^n;Y^n)\leq \log\gamma_n\right]
\end{align}
where the input distribution $P_{X^n}$ is the uniform distribution on the power shell~\eqref{unif-shell}. Note that, since the input distribution is non-i.i.d., the summands in $\tilde{i}(X^n;Y^n)=\sum_{t=1}^n \tilde{i}(X_t;Y_t)$ are not independent so that direct application of the conventional CLT is not possible. Unlike the indirect symmetry-based approach of~\cite{PPV,Hayashi}, we here give a direct, although more complicated, analysis of the outage probability which does not rely on the conditional mutual information RV and instead makes use of the structure of the uniform distribution on the power shell.
Under the $P_{Y^n|X^n}$ law, the output~$Y^n$ can be written in the form
\begin{align}
Y^n=X^n+Z^n,
\end{align}
where $Z^n\sim \mathcal{N}(\mathbf{0},I_n)$ is the i.i.d. unit-variance channel noise. With the choice~\eqref{Q-p2p} for $Q_{Y^n}(y^n)$, the modified mutual information random variable simplifies as
\begin{align}
\tilde{i}(X^n;Y^n)&\equiv \log\frac{(2\pi)^{-n/2}e^{-||Y^n-X^n||^2/2}}{(2\pi(1+P))^{-n/2}e^{-||Y^n||^2/2(1+P)}} \\
&=\frac{n}{2}\log(1+P)+\frac{\log e}{2}\left[\frac{||Y^n||^2}{1+P}-||Y^n-X^n||^2\right] \\
&=nC(P)+\frac{\log e}{2(1+P)}\left[||X^n+Z^n||^2-(1+P)||Z^n||^2\right] \\
&=nC(P) +\frac{\log e}{2(1+P)}\left[P(n-||Z^n||^2)+2 \langle X^n,Z^n \rangle\right], \label{mi-p2p}
\end{align}
where~\eqref{mi-p2p} uses the inner-product notation $\langle a^n,b^n\rangle:=\sum_{t=1}^n a_t b_t$ and the fact that $||X^n||^2=nP$ with probability one.
Although this random variable is written in the form of a summation, the summands are not independent, since the input $X^n$ is not independent across time. However, recall that uniform RVs on the power shell are functions of i.i.d. Gaussian RVs. More precisely, let $W^n\sim\mathcal{N}(\mathbf{0},I_n)$ be an i.i.d. Gaussian RV independent of the noise RV~$Z^n$. The elements $X_t$, $t=1,...,n$, of the uniformly distributed input RV $X^n$ on the power shell can then be expressed as follows:
\begin{align}
X_t=\sqrt{nP}\frac{W_t}{||W^n||}.
\end{align}
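For instance, a minimal sampling sketch of this construction (illustrative parameters):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

n, P, trials = 500, 1.0, 10000      # illustrative parameters
W = rng.standard_normal((trials, n))
X = np.sqrt(n * P) * W / np.linalg.norm(W, axis=1, keepdims=True)
print(np.allclose((X**2).sum(axis=1), n * P))   # True: all samples on shell
\end{verbatim}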
To apply the CLT for functions of Proposition~\ref{CLT-func}, consider the sequence $\{\mathbf{U}_t:=(U_{1t},U_{2t},U_{3t})\}_{t=1}^\infty$ whose elements are defined as
\begin{align}
U_{1t}&=1-Z_t^2, \\
U_{2t}&=\sqrt{P}W_tZ_{t}, \\
U_{3t}&=W_t^2-1.
\end{align}
Note that this sequence of random vectors is i.i.d. across time $t=1,...,n$, and its moments can be easily verified to satisfy $\mathbb{E}[\mathbf{U}_1]=0$ and $\mathbb{E}[||\mathbf{U}_1||_2^3]<\infty$. Moreover, the covariance matrix of this vector is given by
\begin{align}
\text{Cov}(\mathbf{U}_1)=
\begin{bmatrix}
2 & 0 & 0 \\
0 & P & 0 \\
0 & 0 & 2
\end{bmatrix}.
\end{align}
Next, define the function $f$ as
\begin{align}
f(\mathbf{u})&=Pu_1+ \frac{2u_2}{\sqrt{1+u_3}}.
\end{align}
Notice that, $f(\mathbf{0})=0$ and all the first- and second-order partial derivatives of $f$ are continuous in a neighborhood of $\mathbf{u}=\mathbf{0}$. Moreover, the Jacobian matrix $\{\frac{\partial f(\mathbf{u})}{\partial u_j}\}_{1\times3}$ at $\mathbf{u}=\mathbf{0}$ can be readily verified to be
\begin{align}
\left.J\right|_{\mathbf{u}=\mathbf{0}}=
\begin{bmatrix}
P & 2 & 0
\end{bmatrix}.
\end{align}
Evaluating $f$ at the normalized sum, we obtain
\begin{align}
f\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right)
&=\frac{P}{n}\sum_{t=1}^n (1-Z_t^2)+ \frac{2\frac{1}{n}\sum_{t=1}^n \sqrt{P}W_tZ_t}{\sqrt{1+\frac{1}{n}\sum_{t=1}^n (W_t^2-1)}} \label{f-sum-1} \\
&= \frac{1}{n}\sum_{t=1}^n P(1-Z_t^2)+ \frac{2}{n}\sum_{t=1}^n \frac{\sqrt{nP}W_t}{||W^n||}Z_t \label{f-sum-2} \\
&= \frac{1}{n}\left[P(n-||Z^n||^2)+2 \langle X^n,Z^n \rangle \right]. \label{f-sum-3}
\end{align}
We now conclude from Proposition~\ref{CLT-func} that, up to a gap of order $O(1/\sqrt{n})$ in distribution, the modified mutual information RV~\eqref{mi-p2p} is Gaussian with mean $nC(P)$ and variance given by
\begin{align}
\left(\frac{n\log e}{2(1+P)}\right)^2\frac{1}{n}
\begin{bmatrix}
P & 2 & 0
\end{bmatrix}
\begin{bmatrix}
2 & 0 & 0 \\
0 & P & 0 \\
0 & 0 & 2
\end{bmatrix}
\begin{bmatrix}
P \\ 2 \\ 0
\end{bmatrix}
= n \frac{\log^2 e}{2}\frac{P(P+2)}{(1+P)^2}=nV(P).
\end{align}
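This conclusion can be checked directly by Monte Carlo simulation. The following sketch (illustrative parameters, working in nats so that $\log e=1$) samples the modified mutual information RV~\eqref{mi-p2p} and compares its empirical mean and variance with $nC(P)$ and $nV(P)$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(3)

n, P, trials = 500, 1.0, 100000     # illustrative parameters
W = rng.standard_normal((trials, n))
Z = rng.standard_normal((trials, n))
X = np.sqrt(n * P) * W / np.linalg.norm(W, axis=1, keepdims=True)

T = P * (n - (Z**2).sum(axis=1)) + 2 * (X * Z).sum(axis=1)
i_mod = 0.5 * n * np.log(1 + P) + T / (2 * (1 + P))   # eq. (mi-p2p), in nats

V_nats = P * (P + 2) / (2 * (1 + P)**2)
print(i_mod.mean() / n, 0.5 * np.log(1 + P))   # empirical mean vs. C(P)
print(i_mod.var() / n, V_nats)                 # empirical variance vs. V(P)
\end{verbatim}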
In particular, the outage probability is bounded as
\begin{align}
P_{X^n}P_{Y^n|X^n}\left[\,\tilde{i}(X^n;Y^n) \leq \log\left(K\frac{M-1}{2}\right)\right]
&\leq \Pr\left[\mathcal{N}(nC(P),nV(P)) \leq \log\left(K\frac{M-1}{2}\right)\right] +\frac{B_1}{\sqrt{n}} \\
&= Q\left(\frac{nC(P)-\log\left(K\frac{M-1}{2}\right)}{\sqrt{nV(P)}}\right) + \frac{B_1}{\sqrt{n}}, \label{outage-P2P}
\end{align}
where $B_1$ is the constant introduced in Proposition~\ref{CLT-func}.
\subsubsection{Evaluation of the Confusion Probability}
\label{sub-2nd-Ach}
In this subsection, we bound the confusion probability
\begin{align}
K\frac{M-1}{2}P_{X^n}Q_{Y^n}\left[\,\tilde{i}(X^n;Y^n)> \log\gamma_n\right]
\end{align}
where the input distribution $P_{X^n}$ is the uniform distribution on the power shell~\eqref{unif-shell}, and $Q_{Y^n}$ is the capacity-achieving output distribution~\eqref{Q-p2p}. We first need a change-of-measure identity, as in~\cite{PPV}:
\begin{align}
Q\left[\frac{dP}{dQ}>\gamma\right]&=\int1\left\{\frac{dP}{dQ}>\gamma\right\}dQ \\
&=\int \left(\frac{dP}{dQ}\right)^{-1}1\left\{\frac{dP}{dQ}>\gamma\right\}dP \\
&=\mathbb{E}_{P}\left[\left(\frac{dP}{dQ}\right)^{-1}1\left\{\frac{dP}{dQ}>\gamma\right\}\right]. \label{change of measure}
\end{align}
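As a toy numerical check of this identity (a sketch with arbitrarily chosen measures $P=\mathcal{N}(0,1)$ and $Q=\mathcal{N}(0,4)$, not tied to the channel model):
\begin{verbatim}
import numpy as np
from scipy.stats import norm
rng = np.random.default_rng(4)

g = 1.5                                    # an arbitrary threshold
llr = lambda x: norm.logpdf(x, 0, 1) - norm.logpdf(x, 0, 2)  # log dP/dQ

xq = rng.normal(0, 2, 10**6)               # samples from Q
lhs = np.mean(llr(xq) > np.log(g))

xp = rng.normal(0, 1, 10**6)               # samples from P
r = llr(xp)
rhs = np.mean(np.exp(-r) * (r > np.log(g)))
print(lhs, rhs)                            # agree up to Monte Carlo error
\end{verbatim}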
Using~\eqref{change of measure} with $P_{Y^n|X^n=x^n}$ in the role of $P$ and~$Q_{Y^n}$ in the role of~$Q$, we can bound the conditional confusion probability, conditioned on the input sequence~$x^n$ on the power shell, as follows:
\begin{align}
&Q_{Y^n}\left[\tilde{i}(x^n;Y^n)\!>\!\log\left(K\frac{M-1}{2}\right)\right] \notag \\
&=\mathbb{E}_{P_{Y^n|X^n=x^n}}\left[\exp\{-\tilde{i}(x^n;Y^n)\}1\left\{\tilde{i}(x^n;Y^n)\!>\!\log\left(K\frac{M-1}{2}\right)\right\}\right] \\
&=\mathbb{E}_{P_{Y^n|X^n=x^n}}\left[\exp\left\{-\sum_{t=1}^n \tilde{i}(x_t;Y_t)\right\}1\left\{\sum_{t=1}^n \tilde{i}(x_t;Y_t)>\log\left(K\frac{M-1}{2}\right)\right\}\right] \\
&\leq \frac{B_{2}}{\sqrt{n}}\left(K\frac{M-1}{2}\right)^{-1}, \label{PPV-lemma20}
\end{align}
where \eqref{PPV-lemma20} follows from a refined large-deviation bound, cf.~\cite[Lemma~47]{PPV}. The specific expression for the finite constant~$B_2$ can be readily computed in terms of the power constraint~$P$, but is not necessary here. Since the bound~\eqref{PPV-lemma20} is uniform with respect to the actual input sequence~$x^n$, the unconditional confusion probability can be bounded as
\begin{align}
K\frac{M-1}{2} P_{X^n}Q_{Y^n}\left[\tilde{i}(X^n;Y^n)\!>\!\log\left(K\frac{M-1}{2}\right)\right] \leq \frac{B_{2}}{\sqrt{n}}.
\label{conf-P2P}
\end{align}
\subsubsection{Completion}
Substituting~\eqref{cost=0},~\eqref{outage-P2P}, and~\eqref{conf-P2P} into the achievability bound~\eqref{DT-simple} of Theorem~\ref{corol-modified-DT}, and requiring the resulting upper bound to be no larger than the target error probability~$\epsilon$ of~\eqref{err-p2p} (with a slight abuse of notation, cf.~\cite[Eq. (186)]{PPV}), yields the sufficient condition
\begin{align}
\epsilon \geq Q\left(\frac{nC(P)-\log\left(K\frac{M-1}{2}\right)}{\sqrt{nV(P)}}\right) + \frac{{B}}{\sqrt{n}},
\end{align}
where~${B}=B_1+B_2$. Upon rearranging we obtain
\begin{align}
\log M &\leq nC(P)-\sqrt{nV(P)}Q^{-1}\left(\epsilon- \frac{{B}}{\sqrt{n}}\right) - \underbrace{\log K}_{= O(1)} \\
&= nC(P)-\sqrt{nV(P)}Q^{-1}(\epsilon) + \sqrt{nV(P)} O\left(\frac{1}{\sqrt{n}}\right) + O(1) \label{Taylor} \\
&= nC(P)-\sqrt{nV(P)}Q^{-1}(\epsilon) + O(1),
\end{align}
where~\eqref{Taylor} follows from the Taylor expansion for the $Q^{-1}$ function
\begin{align}
Q^{-1}\left(\epsilon- \frac{{B}}{\sqrt{n}}\right)=Q^{-1}(\epsilon) + \underbrace{\left.\frac{dQ^{-1}(x)}{dx}\right|_{x=\epsilon}}_{= O(1)} \frac{{B}}{\sqrt{n}} + o(\frac{1}{\sqrt{n}})
= Q^{-1}(\epsilon) + O\left(\frac{1}{\sqrt{n}}\right).
\end{align}
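As a quick numerical check of this expansion (a sketch with illustrative values; note that $\epsilon$ must exceed $B/\sqrt{n}$ for the left-hand side to be defined):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

eps, B, n = 0.05, 0.2, 400          # illustrative values
delta = B / np.sqrt(n)
lhs = norm.isf(eps - delta)         # Q^{-1}(eps - B/sqrt(n))
# First-order term: d Q^{-1}(x)/dx = -1/phi(Q^{-1}(x)).
rhs = norm.isf(eps) + delta / norm.pdf(norm.isf(eps))
print(lhs, rhs)                     # agree up to the o(1/sqrt(n)) remainder
\end{verbatim}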
Thus, we have proved that an $(n,M,\epsilon,P)$ code exists if the rate satisfies
\begin{align}
\frac{\log M}{n} &\leq C(P)-\sqrt{\frac{V(P)}{n}} Q^{-1}(\epsilon) + O\left(\frac{1}{n}\right),
\label{P2P-ach-approx}
\end{align}
where $C(P)$ and $V(P)$ are the capacity and dispersion of the P2P Gaussian channel, respectively.
As discussed in Section I, we have observed that such high rates arise from coding schemes in which outages dominate confusions.
\section{Gaussian MAC}
\label{sec-MAC}
In this section, we study our main problem of interest, namely the fundamental communication limits over the Gaussian MAC in the finite blocklength regime. We first state our main result on achievable second-order coding rate regions for the Gaussian MAC, overview the key elements of the proof, and then develop the results in detail.
\subsection{System Model and Main Results}
\label{sub-MAC-model}
A general 2-user multiple access channel (MAC) with input cost constraints and without feedback consists of two input alphabets $\mathcal{X}_1$ and $\mathcal{X}_2$, an output alphabet~$\mathcal{Y}$, and an $n$-letter channel transition probability given by $P_{Y^n|X_1^nX_2^n}(y^n|x_1^n,x_2^n)\colon\mathcal{F}_{1n}\times\mathcal{F}_{2n}\to\mathcal{Y}^n$, where $\mathcal{F}_{1n}\subseteq\mathcal{X}_1^n$ and $\mathcal{F}_{2n}\subseteq\mathcal{X}_2^n$ are the feasible sets of $n$-letter input sequences for the two users, respectively.
For such a MAC, an $(n,M_1,M_2,\epsilon)$ code is composed of two message sets $\mathcal{M}_1=\{1,...,M_1\}$ and $\mathcal{M}_2=\{1,...,M_2\}$, and a corresponding set of codeword pairs and mutually exclusive decoding regions $\{(x_1^n(j),x_2^n(k),D_{j,k})\}$, with $j\in\mathcal{M}_1$ and $k\in\mathcal{M}_2$, such that the average error probability satisfies
\begin{align}
P_e^{(n)}:=\frac{1}{M_1M_2}\sum_{j=1}^{M_1}\sum_{k=1}^{M_2} \Pr[Y^n\notin D_{j,k}|X_1^n(j),X_2^n(k)\;\text{sent}]\leq\epsilon. \label{err-mac}
\end{align}
Accordingly, a $\left(\frac{\log M_1}{n},\frac{\log M_2}{n}\right)$ rate pair is \emph{achievable} for this MAC with finite blocklength $n$ and average error probability $\epsilon$ if such an $(n,M_1,M_2,\epsilon)$ code exists.
In particular, a memoryless 2-user Gaussian MAC without feedback consists of two inputs and an output all taking values on the real line~$\mathbb{R}$ and a channel transition probability density $P_{Y|X_1X_2}(y|x_1,x_2)\colon\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ whose $n$-th extension follows $\mathcal{N}(y^n; x_1^n+x_2^n,I_n)$, i.e.,
\begin{align}
P_{Y^n|X_1^nX_2^n}(y^n|x_1^n,x_2^n)&=\prod_{t=1}^n P_{Y|X_1X_2}(y_t|x_{1t},x_{2t})=(2\pi)^{-n/2} e^{-||y^n-x_1^n-x_2^n||^2/2}.
\end{align}
For such a Gaussian MAC, an $(n,M_1,M_2,\epsilon,P_1,P_2)$ code is an $(n,M_1,M_2,\epsilon)$ code as defined above, in which each codeword also satisfies a maximal power constraint:
\begin{align}
\frac{1}{n}\sum_{t=1}^n x_{1t}^2(j)=\frac{1}{n}||x_1^n(j)||^2&\leq P_1, \qquad \forall j\in\mathcal{M}_1,\\
\frac{1}{n}\sum_{t=1}^n x_{2t}^2(k)=\frac{1}{n}||x_2^n(k)||^2&\leq P_2, \qquad \forall k\in\mathcal{M}_2.
\end{align}
Accordingly, a rate pair $\left(\frac{\log M_1}{n},\frac{\log M_2}{n}\right)$ is \emph{achievable} for the Gaussian MAC with finite blocklength $n$, average error probability $\epsilon$, and maximal power constraints~$P_1$ and $P_2$ if such an $(n,M_1,M_2,\epsilon,P_1,P_2)$ code exists.
From classical results by Cover~\cite{Cover-GMAC} and Wyner~\cite{Wyner}, we know that the capacity region of the Gaussian MAC in the infinite blocklength regime, that is when $n\to\infty$, is given by the pentagonal region
\begin{align}
\frac{\log M_1}{n} &\leq C(P_1) + o(1), \\
\frac{\log M_2}{n} &\leq C(P_2) + o(1), \\
\frac{\log M_1}{n} + \frac{\log M_2}{n} &\leq C(P_1+P_2) + o(1).
\end{align}
Some old and new bounds on the error exponent of the Gaussian MAC are also known, e.g.~\cite{Gallager-MAC,Pradhan,Haim-EE}.
In the following, we state finite-blocklength achievability results that lead to the following achievable second-order rate regions for the Gaussian MAC. As mentioned in Section I, the following regions are achieved using codebooks that are randomly generated according to independent uniform distributions on the respective power shells.
\begin{theorem}
\label{GMAC-joint}
\textit{(Joint Outage)} An achievable region for the 2-user Gaussian MAC with maximal power constraints~$P_1$ and~$P_2$ is given by the union of all rate pairs $(\frac{\log M_1}{n},\frac{\log M_2}{n})$ satisfying
\begin{align}
\begin{bmatrix}
\frac{\log M_1}{n}\\
\frac{\log M_2}{n}\\
\frac{\log M_1}{n}\!+\!\frac{\log M_2}{n}
\end{bmatrix}
\in\mathbf{C}(P_1,P_2)-\frac{1}{\sqrt{n}} Q^{-1}(\epsilon;\mathbf{V}(P_1,P_2))+O\left(\frac{1}{n}\right)\mathbf{1},
\end{align}
where: $\mathbf{1}=\begin{bmatrix} 1 & 1 & 1\end{bmatrix}^T$ denotes the all-one vector; $Q^{-1}(\epsilon;\mathbf{\Sigma})$ is the inverse complementary CDF of a 3-dimensional Gaussian random variable defined as the set
\begin{align}
Q^{-1}(\epsilon;\mathbf{\Sigma})\!:=\!\left\{\mathbf{z}\in\mathbb{R}^3: \Pr\left(\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\leq\mathbf{z}\right)\geq1-\epsilon\right\},
\label{Q-inv}
\end{align}
with vector-inequality understood element-wise; the capacity vector $\mathbf{C}(P_1,P_2)$ and dispersion matrix $\mathbf{V}(P_1,P_2)$ are defined as
\begin{align}
\mathbf{C}(P_1,P_2):=
\begin{bmatrix}
C(P_1)\\
C(P_2)\\
C(P_1+P_2)
\end{bmatrix},
\label{capacity vector}
\end{align}
and
\begin{align}
\mathbf{V}(P_1,P_2):=
\begin{bmatrix}
V(P_1)&V_{1,2}(P_1,P_2)&V_{1,3}(P_1,P_2) \\
V_{1,2}(P_1,P_2)&V(P_2)&V_{2,3}(P_1,P_2) \\
V_{1,3}(P_1,P_2)&V_{2,3}(P_1,P_2)&V(P_1+P_2)+ V_{3}(P_1,P_2)
\end{bmatrix}\!\!,
\label{dispersion matrix}
\end{align}
in which $C(P)$ and $V(P)$ are the capacity and dispersion of the P2P Gaussian channel, respectively,
\begin{align}
C(P)&=\frac{1}{2}\log(1+P), \\
V(P)&=\frac{\log^2e}{2}\frac{P(P+2)}{(1+P)^2},
\end{align}
and we have employed the shorthands
\begin{align}
V_{1,2}(P_1,P_2) &= \frac{\log^2 e}{2}\frac{P_1P_2}{(1+P_1)(1+P_2)}, \label{cross-disp-1} \\
V_{u,3}(P_1,P_2) &= \frac{\log^2 e}{2}\frac{P_u(2+P_1+P_2)}{(1+P_u)(1+P_1+P_2)}, \quad u\in\{1,2\}\label{cross-disp-2} \\
V_{3}(P_1,P_2) & = \log^2 e\frac{P_1P_2}{(1+P_1+P_2)^2}. \label{disp-3}
\end{align}
\end{theorem}
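For the two-user case, the region of Theorem~\ref{GMAC-joint} can be evaluated numerically in a routine fashion. The following Python sketch is an illustration of ours (all function names are our own, rates are measured in bits, and the third-order $O(1/n)$ term is dropped): it assembles $\mathbf{C}(P_1,P_2)$ and $\mathbf{V}(P_1,P_2)$ and tests membership of a candidate rate pair directly via the definition~\eqref{Q-inv}.
\begin{verbatim}
# Sketch (ours): membership test for the joint-outage region of Theorem 2.
# Rates in bits; the O(1/n) term is ignored; function names are ours.
import numpy as np
from scipy.stats import multivariate_normal

LOG2E = np.log2(np.e)

def C(P):                                  # P2P capacity (bits)
    return 0.5 * np.log2(1 + P)

def V(P):                                  # P2P dispersion (bits^2)
    return (LOG2E**2 / 2) * P*(P+2) / (1+P)**2

def V12(P1, P2):
    return (LOG2E**2 / 2) * P1*P2 / ((1+P1)*(1+P2))

def Vu3(Pu, P1, P2):
    return (LOG2E**2 / 2) * Pu*(2+P1+P2) / ((1+Pu)*(1+P1+P2))

def V3(P1, P2):
    return LOG2E**2 * P1*P2 / (1+P1+P2)**2

def cap_vec(P1, P2):
    return np.array([C(P1), C(P2), C(P1+P2)])

def disp_mat(P1, P2):
    return np.array(
        [[V(P1),           V12(P1, P2),      Vu3(P1, P1, P2)],
         [V12(P1, P2),     V(P2),            Vu3(P2, P1, P2)],
         [Vu3(P1, P1, P2), Vu3(P2, P1, P2),  V(P1+P2) + V3(P1, P2)]])

def in_joint_region(R1, R2, n, eps, P1, P2):
    # (R1,R2) lies in the region iff Pr[N(0,V) <= sqrt(n)(C - r)] >= 1-eps,
    # by the definition (Q-inv) of the inverse complementary CDF.
    z = np.sqrt(n) * (cap_vec(P1, P2) - np.array([R1, R2, R1 + R2]))
    mvn = multivariate_normal(mean=np.zeros(3), cov=disp_mat(P1, P2))
    return mvn.cdf(z) >= 1 - eps

print(in_joint_region(0.35, 0.35, n=500, eps=0.05, P1=1.0, P2=1.0))
\end{verbatim}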
The evaluation of the above region, especially when extended to a large number of users, may be cumbersome. In the following, we present another second-order achievable rate region, which is easier to compute even for a large number of users and, as we have already seen in Figure~1, still provides a very good estimate of the joint-outage region for the Gaussian MAC.
\begin{theorem}
\label{GMAC-splitting}
\textit{(Outage Splitting)} An achievable region for the 2-user Gaussian MAC with maximal power constraints~$P_1$ and~$P_2$ is given by the union of all $(\frac{\log M_1}{n},\frac{\log M_2}{n})$ pairs satisfying
\begin{align}
&\frac{\log M_1}{n}\leq C(P_1)-\sqrt{\frac{V(P_1)}{n}}Q^{-1}(\lambda_1\epsilon)\!+\!O\left(\frac{1}{n}\right), \notag\\
&\frac{\log M_2}{n}\leq C(P_2) -\sqrt{\frac{V(P_2)}{n}}Q^{-1}(\lambda_2\epsilon)\!+\!O\left(\frac{1}{n}\right), \notag\\
&\frac{\log M_1}{n}+\frac{\log M_2}{n}\leq C(P_1+P_2)-\sqrt{\frac{V(P_1+P_2)+ V_{3}(P_1,P_2)}{n}}Q^{-1}(\lambda_3\epsilon)+O\left(\frac{1}{n}\right), \label{b-3}
\end{align}
for some choice of positive constants $\lambda_1,\lambda_2,\lambda_3$ satisfying $\lambda_1+\lambda_2+\lambda_3=1$.
\end{theorem}
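As an aside, the outage-splitting bounds of Theorem~\ref{GMAC-splitting} require only three one-dimensional $Q^{-1}$ evaluations; the following self-contained Python sketch of ours illustrates this, with \texttt{norm.isf} playing the role of $Q^{-1}$ and the $O(1/n)$ terms again dropped.
\begin{verbatim}
# Sketch (ours): the three bounds of Theorem 3 for one error split.
import numpy as np
from scipy.stats import norm

LOG2E = np.log2(np.e)
C  = lambda P: 0.5 * np.log2(1 + P)                   # bits
V  = lambda P: (LOG2E**2 / 2) * P*(P+2) / (1+P)**2    # bits^2
V3 = lambda P1, P2: LOG2E**2 * P1*P2 / (1+P1+P2)**2

def splitting_bounds(n, eps, P1, P2, lam1, lam2, lam3):
    # norm.isf is the one-dimensional Q^{-1}; O(1/n) terms are dropped.
    b1 = C(P1) - np.sqrt(V(P1) / n) * norm.isf(lam1 * eps)
    b2 = C(P2) - np.sqrt(V(P2) / n) * norm.isf(lam2 * eps)
    b3 = (C(P1 + P2)
          - np.sqrt((V(P1+P2) + V3(P1, P2)) / n) * norm.isf(lam3 * eps))
    return b1, b2, b3      # bounds on R1, R2, and R1 + R2

print(splitting_bounds(500, 1e-3, 1.0, 1.0, 1/3, 1/3, 1/3))
\end{verbatim}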
Both of the achievable second-order rate regions in Theorems~\ref{GMAC-joint} and~\ref{GMAC-splitting} suggest that taking finite blocklength into account introduces a rate penalty (for the interesting case of $\epsilon<\frac{1}{2}$) that depends on blocklength, error probability, and Gaussian MAC dispersions.
However, the main difference between the two theorems is that, in Theorem~\ref{GMAC-splitting}, the average error probability~$\epsilon$ is basically split among the three outage events of a 2-user Gaussian MAC according to some $(\lambda_1,\lambda_2,\lambda_3)$ partitioning and a one-dimensional CLT is applied. A similar approach was taken in~\cite{ML-ISIT12,Verdu-Allerton12} for the MAC in the discrete setting. On the other hand, in Theorem~\ref{GMAC-joint}, essentially all the average error probability~$\epsilon$ is assigned to the joint outage event and a multi-dimensional CLT is applied. This latter approach, which leads to a relatively larger region, is similar to that in~\cite{Tan,HM, Albert} for the discrete MAC. Finally, we would like to point out that the statements of Theorems 2 and 3 correct a slight error in the corresponding result in our conference version of this work~\cite{ML-Allerton12}, in which the term $V_3(P_1,P_2)$ defined in~\eqref{disp-3} was missing in~\eqref{dispersion matrix} and~\eqref{b-3}.
To observe the tightness of the region achieved by random codebooks with independent power shell input distribution, we compare it with several other second-order inner and outer rate regions relying on simple and common structures. First, consider the second-order rate region achieved by a pair of random codebooks which are, as usual~\cite{Cover}, generated according to independent i.i.d. Gaussian distributions. One can easily show an extension of~\eqref{Gaussian-input} to the Gaussian MAC, so that all rate pairs satisfying
\begin{align}
\begin{bmatrix}
\frac{\log M_1}{n}\\
\frac{\log M_2}{n}\\
\frac{\log M_1}{n}\!+\!\frac{\log M_2}{n}
\end{bmatrix}&\in\mathbf{C}(\bar{P}_1,\bar{P}_2)-\frac{1}{\sqrt{n}}Q^{-1}\left(\epsilon; \mathbf{V}_{\text{G}}(\bar{P}_1,\bar{P}_2)\right)+O\left(\frac{1}{n}\right)\mathbf{1},
\label{Gaussian-input-MAC}
\end{align}
are achievable, where $\bar{P}_1\!=\!P_1\!-\!\delta$ and $\bar{P}_2\!=\!P_2\!-\!\delta$ for an arbitrarily small positive constant\footnote{Again, the margin~$\delta$ can be vanishing with blocklength provided that it decays strictly slower than $O\left(\frac{1}{\sqrt{n}}\right)$.}~$\delta$, and
\begin{align}
\mathbf{V}_{\text{G}}(P_1,P_2)=\log^2 e
\begin{bmatrix}
\frac{P_1}{1+P_1}\!&\!\frac{P_1P_2}{2(1+P_1)(1+P_2)}\!\!&\!\!\frac{P_1(2+2P_1+P_2)}{2(1+P_1)(1+P_1+P_2)} \\
\frac{P_1P_2}{2(1+P_1)(1+P_2)}\!&\!\frac{P_2}{1+P_2}\!\!&\!\!\frac{P_2(2+P_1+2P_2)}{2(1+P_2)(1+P_1+P_2)} \\
\frac{P_1(2+2P_1+P_2)}{2(1+P_1)(1+P_1+P_2)}\!&\!\frac{P_2(2+P_1+2P_2)}{2(1+P_2)(1+P_1+P_2)}\!\!&\!\!\frac{P_1+P_2}{1+P_1+P_2}
\end{bmatrix}.
\end{align}
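For a numerical comparison with the power-shell region, the following sketch of ours builds $\mathbf{V}_{\text{G}}$; it can be plugged into the same membership test as in the earlier snippet, with the capacity vector evaluated at the backed-off powers~$\bar{P}_1,\bar{P}_2$.
\begin{verbatim}
# Sketch (ours): the i.i.d.-Gaussian dispersion matrix V_G (bits^2).
import numpy as np

LOG2E = np.log2(np.e)

def disp_mat_iid(P1, P2):
    # Usable in the same region-membership test as the power-shell case,
    # with the powers backed off to P - delta per the display above.
    Ps = P1 + P2
    return LOG2E**2 * np.array(
        [[P1/(1+P1),
          P1*P2/(2*(1+P1)*(1+P2)),
          P1*(2+2*P1+P2)/(2*(1+P1)*(1+Ps))],
         [P1*P2/(2*(1+P1)*(1+P2)),
          P2/(1+P2),
          P2*(2+P1+2*P2)/(2*(1+P2)*(1+Ps))],
         [P1*(2+2*P1+P2)/(2*(1+P1)*(1+Ps)),
          P2*(2+P1+2*P2)/(2*(1+P2)*(1+Ps)),
          Ps/(1+Ps)]])
\end{verbatim}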
Another important comparison is with the rate region achieved by a pair of independent truncated Gaussian random codebooks, as employed by Gallager for the error exponent analysis of the Gaussian MAC~\cite{Gallager-MAC}. This rate region is given by the set of all rate pairs $(R_1:=\frac{\log M_1}{n},R_2:=\frac{\log M_2}{n})$ satisfying
\begin{align}
\epsilon \leq a n 2^{-nE_1\left(R_1\right)} +a n 2^{-nE_2\left(R_2\right)} + a n^2 2^{-nE_3\left({R_1+R_2}\right)},
\end{align}
where $a$ is a constant, and the error exponent term for the individual rates is defined as
\begin{align}
E_l(R_l):=
\begin{dcases}
\frac{(P_l-\alpha_l)\log e}{2^{2R_l+1}}+ \frac{1}{2}\log\left(2^{2R_l}-\alpha_l\right) & \text{if} \quad \frac{1}{2}\log\left(\frac{2+P_l+\sqrt{4+P_l^2}}{4}\right) \leq R_l \leq C(P_l) \\
\left(1-\beta_l+\frac{P_l}{2}\right)\log e+\frac{1}{2}\log\left(\beta_l\left[\beta_l-\frac{P_l}{2}\right]\right)-R_l & \text{if} \quad 0 \leq R_l < \frac{1}{2}\log\left(\frac{2+P_l+\sqrt{4+P_l^2}}{4}\right)
\end{dcases}
\end{align}
with $l=1,2$ and the shorthands
\begin{align}
\alpha_l &:= \frac{P_l(2^{2R_l}-1)}{2}\left[\sqrt{1+\frac{2^{2R_l+2}}{P_l(2^{2R_l}-1)}}-1\right], \\
\beta_l &:= \frac{1}{2} \left[1+\frac{P_l}{2}+\sqrt{1+\frac{P_l^2}{2}}\,\right];
\end{align}
and the error exponent term for the sum-rate is defined as
\begin{align}
E_3(R_s):=
\begin{dcases}
(1+\rho-\theta_1)\log e + \log\left(\frac{\theta_1}{1+\rho}\right) \\
\qquad \qquad \qquad \qquad \qquad \text{if} \;\; \frac{1}{2}\log\left(\frac{1}{2}\left[1-\frac{P_s}{4}+\sqrt{1-\frac{P_s}{2}+\frac{P_s^2}{4}}\,\right]\right) \leq R_s \leq C(P_s) \\
2\left[\log\left(\frac{\theta_2}{2}\right)-(\theta_2+1)\log e\right]+\frac{1}{2}\log\left(1+\frac{P_s}{\theta_2}\right)-R_s \\
\qquad \qquad \qquad \qquad \qquad \text{if} \;\; 0 \leq R_s < \frac{1}{2}\log\left(\frac{1}{2}\left[1-\frac{P_s}{4}+\sqrt{1-\frac{P_s}{2}+\frac{P_s^2}{4}}\,\right]\right)
\end{dcases}
\end{align}
with the shorthands~$R_s:=R_1+R_2$, $P_s:=P_1+P_2$ and
\begin{align}
\rho&:= \left[\frac{1}{2}+\frac{2^{2R_s+1}}{P_s}-\frac{1}{2}\sqrt{1+\frac{2^{2R_s+3}}{P_s}+\frac{2^{2R_s+4}}{P_s^2}}\,\right]^{-1/2}-1, \\
\theta_1 &:= \frac{1+\rho-P_s}{2} +\frac{1}{2}\sqrt{P_s^2+2P_s+(1+\rho)^2}, \\
\theta_2 &:=1-\frac{P_s}{2}+\frac{1}{2} \log\left(P_s^2+2P_s+4\right).
\end{align}
It is also interesting to compare with the second-order achievable region via time-division multiple access~(TDMA). For TDMA with power control, the two users can share the~$n$ channel uses, use single-user coding strategies, and average the error probability~$\epsilon$. Specifically, user 1 transmits in the first~$\alpha n$ channel uses with power~$P_1/\alpha$ and rate such that an average error probability~$\beta\epsilon$ is achieved, and user 2 transmits in the remaining~$\bar{\alpha}n:=(1-\alpha)n$ channel uses with power~$P_2/\bar{\alpha}$ and rate such that an average error probability~$\tilde{\beta}\epsilon$ is achieved. Since the average error probability of this scheme is $\epsilon=\beta\epsilon+\tilde{\beta}\epsilon-\beta\tilde{\beta}\epsilon^2$, we choose $\tilde{\beta}=(1-\beta)/(1-\beta\epsilon)$. Using the power shell uniform input distribution for each user and relying on the Gaussian P2P results~\cite{PPV,Hayashi}, the TDMA strategy achieves the following set of rate pairs:
\begin{align}
\frac{\log M_1}{n}&\leq\alpha C\left(\frac{P_1}{\alpha}\right)\!-\!\sqrt{\frac{\alpha}{n} V\left(\frac{P_1}{\alpha}\right)}Q^{-1}(\beta\epsilon)+O\left(\frac{1}{n}\right), \notag\\
\frac{\log M_2}{n}&\leq\bar{\alpha} C\left(\frac{P_2}{\bar{\alpha}}\right)-\sqrt{\frac{\bar{\alpha}}{n}V\left(\frac{P_2}{\bar{\alpha}}\right)}Q^{-1}\left(\frac{(1-\beta)\epsilon}{1-\beta\epsilon}\right)+O\left(\frac{1}{n}\right),
\end{align}
for some $0\leq\alpha\leq1$ and $0\leq\beta\leq1$.
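The TDMA boundary can be traced numerically by sweeping the time-sharing and error-splitting parameters; a self-contained sketch of ours follows (the grid of sweep points is arbitrary).
\begin{verbatim}
# Sketch (ours): TDMA rate pairs of the display above.
import numpy as np
from scipy.stats import norm

LOG2E = np.log2(np.e)
C = lambda P: 0.5 * np.log2(1 + P)
V = lambda P: (LOG2E**2 / 2) * P*(P+2) / (1+P)**2

def tdma_rates(n, eps, P1, P2, alpha, beta):
    abar = 1.0 - alpha
    bt = (1 - beta) / (1 - beta * eps)    # so the total error equals eps
    R1 = (alpha * C(P1/alpha)
          - np.sqrt(alpha * V(P1/alpha) / n) * norm.isf(beta * eps))
    R2 = (abar * C(P2/abar)
          - np.sqrt(abar * V(P2/abar) / n) * norm.isf(bt * eps))
    return R1, R2

# Trace the boundary by sweeping alpha for a fixed error split beta:
pts = [tdma_rates(500, 1e-3, 1.0, 1.0, a, 0.5)
       for a in np.linspace(0.05, 0.95, 19)]
\end{verbatim}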
Further comparison can be made using single-user outer bounds. Since the achievable rate for each user cannot exceed that when the other user is silent, similar to~\cite{Tan}, two simple outer bounds can be developed using single-user results~\cite{PPV,Hayashi} by assigning the total error probability~$\epsilon$ to only one of the outage events. Hence
\begin{align}
&\frac{\log M_1}{n}\leq C(P_1)-\sqrt{\frac{V(P_1)}{n}}Q^{-1}(\epsilon)\!+\!O\left(\frac{\log n}{n}\right), \\
&\frac{\log M_2}{n}\leq C(P_2) -\sqrt{\frac{V(P_2)}{n}}Q^{-1}(\epsilon)\!+\!O\left(\frac{\log n}{n}\right),
\end{align}
presents a simple outer bound for the Gaussian MAC. Note that, since the two power constraints $||x_1^n||^2\leq nP_1$ and $||x_2^n||^2\leq nP_2$ do not imply the sum-power constraint $||x_1^n+x_2^n||^2\leq n(P_1+P_2)$, a similar conclusion cannot be readily made for the sum-rate, that is, the inequality
\begin{align}
\frac{\log M_1}{n}+\frac{\log M_2}{n}\leq C(P_1+P_2)-\sqrt{\frac{V(P_1+P_2)}{n}}Q^{-1}(\epsilon)+O\left(\frac{\log n}{n}\right)
\end{align}
is not a trivial outer bound, as was mistakenly claimed in~\cite{ML-Allerton12}; however, it is conjectured to be a valid outer bound.
Finally, we would like to mention a \textit{hypothetical} second-order rate region which would be achievable if the sum of power shell inputs fell on the \textit{sum-power shell}, i.e., if the two users' codebooks were independent and distributed uniformly on the respective power shells \textit{and} the equality $||x_1^n(j)+x_2^n(k)||^2=||x_1^n(j)||^2+||x_2^n(k)||^2$ hypothetically held for (almost) all codeword pairs. Following the lines of the proof of Theorem 2, such a hypothetical codebook pair would then achieve the following second-order rate region:
\begin{align}
\begin{bmatrix}
\frac{\log M_1}{n}\\
\frac{\log M_2}{n}\\
\frac{\log M_1}{n}+\frac{\log M_2}{n}
\end{bmatrix}
\in\mathbf{C}(P_1,P_2)-\frac{1}{\sqrt{n}} Q^{-1}(\epsilon;\mathbf{V}_{\text{sum}}(P_1,P_2))+O\left(\frac{1}{n}\right)\mathbf{1}, \label{sum-power-shell}
\end{align}
where $\mathbf{V}_{\text{sum}}(P_1,P_2)$ is defined as the rank-2 matrix
\begin{align}
\mathbf{V}_{\text{sum}}(P_1,P_2)=
\begin{bmatrix}
V(P_1)&V_{1,2}(P_1,P_2)&V_{1,3}(P_1,P_2) \\
V_{1,2}(P_1,P_2)&V(P_2)&V_{2,3}(P_1,P_2) \\
V_{1,3}(P_1,P_2)&V_{2,3}(P_1,P_2)&V(P_1+P_2)
\end{bmatrix}\!\!,
\label{Vout}
\end{align}
and all other notations are defined as in Theorem 2. Note that the only difference between the above region and that stated in Theorem 2 is the term $V_3(P_1,P_2)$ in the last diagonal element of the dispersion matrix~$\mathbf{V}(P_1,P_2)$ in~\eqref{dispersion matrix} which is dropped in $\mathbf{V}_{\text{sum}}(P_1,P_2)$. The term~$V_3(P_1,P_2)$ captures the variance of the inner product $\langle X_1^n,X_2^n \rangle$, which is the remainder term in $||x_1^n+x_2^n||^2-||x_1^n||^2-||x_2^n||^2=2\langle x_1^n,x_2^n \rangle$. Since $V_3(P_1,P_2)$ is a positive term, its removal leads to a ``smaller'' dispersion matrix and hence a lower rate penalty. In fact, as was illustrated in Figure~1, one can show that the power-shell rate region of Theorem 2 is roughly \textit{halfway} between the i.i.d. Gaussian rate region and this hypothetical sum-power shell region. A similar observation has been made by Gallager in the study of error exponents for the Gaussian MAC~\cite{Gallager-MAC}. We conjecture that the hypothetical sum-power shell rate region provides a second-order \textit{outer} region for the Gaussian MAC, with the third-order term in~\eqref{sum-power-shell} replaced by~$O\left(\frac{\log n}{n}\right)\mathbf{1}$.
A numerical comparison of these different rate regions was already presented in Figure 1 of Section I. The remainder of this section presents a relatively straightforward proof of Theorems 2 and 3 based upon random coding and typicality decoding, as well as the application of the CLT for functions.
\subsection{Key Elements of the Proof}
\label{sub-key-mac}
In this section, we comment on the main ingredients of the proof of Theorems 2 and 3 for the Gaussian MAC. The details of the proofs will be given in Sections~\ref{sub-MAC-Ach} and~\ref{sub-2nd-mac}.
\subsubsection{Modified Random Coding and Typicality Decoding}
Analogously to the P2P Gaussian channel, we show that i.i.d. input distributions are not sufficient to achieve the second-order optimal performance over the Gaussian MAC. However, the analysis of the conventional achievability result based upon random coding and typicality decoding for the MAC is difficult for non-i.i.d. inputs, particularly because 1) the induced (conditional) output distributions $P_{Y^n|X_2^n}, P_{Y^n|X_1^n}, P_{Y^n}$ are not i.i.d., and 2) the corresponding mutual information RVs cannot be written as sums. Hence, we use \textit{modified} mutual information RVs defined in terms of arbitrary reference output distributions~$Q^{(1)}_{Y^n|X_2^n}, Q^{(2)}_{Y^n|X_1^n}, Q^{(3)}_{Y^n}$, instead of the actual output distributions:
\begin{subequations}
\begin{align}
\tilde{i}(X_1^n;Y^n|X_2^n)&:=\log\frac{P_{Y^n|X_1^nX_2^n}(Y^n|X_1^n,X_2^n)}{Q^{(1)}_{Y^n|X_2^n}(Y^n|X_2^n)}, \\
\tilde{i}(X_2^n;Y^n|X_1^n)&:=\log\frac{P_{Y^n|X_1^nX_2^n}(Y^n|X_1^n,X_2^n)}{Q^{(2)}_{Y^n|X_1^n}(Y^n|X_1^n)}, \\
\tilde{i}(X_1^nX_2^n;Y^n)&:=\log\frac{P_{Y^n|X_1^nX_2^n}(Y^n|X_1^n,X_2^n)}{Q^{(3)}_{Y^n}(Y^n)};
\end{align}
\label{mod-mi-MAC}%
\end{subequations}
Selecting the reference distributions to be in product form, e.g., $Q^{(1)}_{Y^n|X_2^n}(y^n|x_2^n)=\prod_{t=1}^n Q^{(1)}_{Y_t|X_{2t}}(y_t|x_{2t})$, enables us to write these RVs as sums of random variables, e.g., $\tilde{i}(X_1^n;Y^n|X_2^n)=\sum_{t=1}^n \tilde{i}(X_{1t};Y_t|X_{2t})$, a form which is convenient for the later application of the CLT and LDT. However, we note that these summands are not independent.
\subsubsection{CLT for Functions of Random Vectors}
As mentioned earlier, the summands of the modified mutual information random variables of a Gaussian MAC are not independent. Moreover, the interaction of the two users' codebooks through the inner product~$\langle X_1^n,X_2^n \rangle$ prevents the application of symmetry arguments as used in~\cite{PPV,Hayashi} for the P2P Gaussian channel. However, these mutual information RVs can be expressed as (vector-) functions of i.i.d. random vectors, to which the CLT for functions in Proposition~\ref{CLT-func} can be applied. Specifically, this proposition with $L=3$ will be used in the proof of the joint-outage region in Theorem 2 and with $L=1$ in the proof of the outage-splitting region in Theorem 3.
\subsubsection{Change of Measure and Uniform Bounding}
Similar to the P2P case, an LDT analysis of the three confusion probabilities in the modified random coding and typicality decoding bounds is challenging due to the non-product nature of the (conditional) output distributions induced by a pair of non-i.i.d. input distributions. Thus, we again apply a change of measure argument for computing these confusion probabilities. Analogously to~\eqref{chng-beg}-\eqref{change-measure}, we can show that the following inequalities hold:
\allowdisplaybreaks{
\begin{subequations}
\begin{align}
&P_{X_1^n}P_{X_2^n}P_{Y^n|X_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)> \log\gamma_1(X_1^n,X_2^n)\right] \notag \\
& \qquad \qquad \leq \sup_{x_2^n\in\mathcal{X}_2^n,\,y^n\in\mathcal{Y}^n}\frac{dP_{Y^n|X_2^n}(y^n|x_2^n)}{dQ^{(1)}_{Y^n|X_2^n}(y^n|x_2^n)} \, P_{X_1^n}P_{X_2^n}Q^{(1)}_{Y^n|X_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)> \log\gamma_1(X_1^n,X_2^n)\right]\, \\
&P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)> \log\gamma_2(X_1^n,X_2^n)\right] \notag \\
&\qquad \qquad \leq \sup_{x_1^n\in\mathcal{X}_1^n, \,y^n\in\mathcal{Y}^n}\frac{dP_{Y^n|X_1^n}(y^n|x_1^n)}{dQ^{(2)}_{Y^n|X_1^n}(y^n|x_1^n)} \, P_{X_1^n}P_{X_2^n}Q^{(2)}_{Y^n|X_1^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)> \log\gamma_2(X_1^n,X_2^n)\right]\, \\
&P_{X_1^n}P_{X_2^n}P_{Y^n}\left[\,\tilde{i}(X_1^nX_2^n;Y^n)> \log\gamma_3(X_1^n,X_2^n)\right] \notag \\
&\qquad \qquad \leq \sup_{y^n\in\mathcal{Y}^n}\frac{dP_{Y^n}(y^n)}{dQ^{(3)}_{Y^n}(y^n)} \, P_{X_1^n}P_{X_2^n}Q^{(3)}_{Y^n}\left[\,\tilde{i}(X_1^nX_2^n;Y^n)> \log\gamma_3(X_1^n,X_2^n)\right].
\end{align}
\label{change-measure-MAC}%
\end{subequations}}%
Therefore, we may compute the confusion probabilities with respect to the more convenient measures~$Q^{(1)}_{Y^n|X_2^n}$, $Q^{(2)}_{Y^n|X_1^n}$, $Q^{(3)}_{Y^n}$, but at the expense of the additional R-N derivatives.\footnote{Again, the absolute continuity conditions for the above R-N derivatives are implicitly assumed to hold in the general bounds of Theorem~\ref{corol-modified-DT-MAC} and can be easily verified for the Gaussian MAC.} The bounds in~\eqref{change-measure-MAC} are particularly useful if these extra coefficients can be bounded by positive constants~$K_1,K_2,K_3$ or slowly growing functions~$K_{1n},K_{2n},K_{3n}$, such that the resulting rate loss does not affect the second-order behavior.
\vspace{3mm}
We are now ready to provide the formal proof in the next two subsections.
\subsection{Non-Asymptotic Achievability for Cost-Constrained MAC}
\label{sub-MAC-Ach}
In the following, we state a result based upon random coding and modified typicality decoding for achievability on a general MAC with input cost constraints valid for any blocklength. The result basically describes the error probability in terms of the outage, confusion, and constraint-violation probabilities.
\begin{theorem}
\label{corol-modified-DT-MAC}
For a MAC $(\mathcal{X}_1,\mathcal{X}_2,P_{Y^n|X_1^nX_2^n}(y^n|x_1^n,x_2^n),\mathcal{Y})$, for any pair of independent input distributions~$P_{X_1^n}$ and~$P_{X_2^n}$, and any triple of (conditional) output distributions $Q^{(1)}_{Y^n|X_2^n}, Q^{(2)}_{Y^n|X_1^n}, Q^{(3)}_{Y^n}$, there exists an $(n,M_1,M_2,\epsilon)$ code satisfying input cost constraints~$\mathcal{F}_{1n}$ and~$\mathcal{F}_{2n}$ with
\begin{align}
\epsilon \leq\; & P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)\leq \log\gamma_{1n} \right. \notag \\
& \qquad \qquad \qquad \qquad \quad \cup\left. \tilde{i}(X_2^n;Y^n|X_1^n)\leq \log\gamma_{2n} \right. \notag \\
& \qquad \qquad \qquad \qquad \quad \cup\left. \tilde{i}(X_1^nX_2^n;Y^n)\leq \log\gamma_{3n}\,\right] \notag\\
& +K_{1n}\frac{M_1-1}{2}P_{X_1^n}P_{X_2^n}Q^{(1)}_{Y^n|X_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)>\log\gamma_{1n}\right] \notag\\
& +K_{2n}\frac{M_2-1}{2}P_{X_1^n}P_{X_2^n}Q^{(2)}_{Y^n|X_1^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)>\log\gamma_{2n}\right] \notag\\
& +K_{3n}\frac{(M_1-1)(M_2-1)}{2}P_{X_1^n}P_{X_2^n}Q^{(3)}_{Y^n}\left[\,\tilde{i}(X_1^nX_2^n;Y^n)>\log\gamma_{3n}\right] \notag \\
& + P_{X_1^n}P_{X_2^n}[X_1^n\notin\mathcal{F}_{1n} \cup X_2^n\notin\mathcal{F}_{2n}],
\label{DT-joint}
\end{align}
or
\begin{align}
\epsilon \leq\; & P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)\leq \log\gamma_{1n}\right] \notag\\
&+K_{1n}\frac{M_1-1}{2}P_{X_1^n}P_{X_2^n}Q^{(1)}_{Y^n|X_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)>\log\gamma_{1n}\right] \notag\\
&+ P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)\leq \log\gamma_{2n}\right] \notag\\
&+K_{2n}\frac{M_2-1}{2}P_{X_1^n}P_{X_2^n}Q^{(2)}_{Y^n|X_1^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)>\log\gamma_{2n}\right] \notag\\
&+ P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_1^nX_2^n;Y^n)\leq \log\gamma_{3n}\right] \notag\\
&+K_{3n}\frac{(M_1-1)(M_2-1)}{2}P_{X_1^n}P_{X_2^n}Q^{(3)}_{Y^n}\left[\,\tilde{i}(X_1^nX_2^n;Y^n)>\log\gamma_{3n}\right] \notag \\
&+ P_{X_1^n}P_{X_2^n}[X_1^n\notin\mathcal{F}_{1n} \cup X_2^n\notin\mathcal{F}_{2n}],
\label{DT-splitting}
\end{align}
where: the modified mutual information random variables for the MAC are defined in~\eqref{mod-mi-MAC}; the coefficients~$K_{1n}$, $K_{2n}$, $K_{3n}$ are defined as\footnote{The bounds~\eqref{DT-joint} and~\eqref{DT-splitting} can be further improved by replacing the $\sup_{x_2^n\in\mathcal{X}_2^n,\,y^n\in\mathcal{Y}^n}$ in~\eqref{u-1} with $\sup_{x_2^n\in\text{supp}(P_{X_2^n}),\,y^n\in\mathcal{Y}^n}$, where $\text{supp}(P_{X_2^n})$ denotes the support of the distribution~$P_{X_2^n}$, and analogously in~\eqref{u-2}, but is not necessary in this work.}
\begin{subequations}
\begin{align}
K_{1n}&:=\sup_{x_2^n\in\mathcal{X}_2^n,\,y^n\in\mathcal{Y}^n}\frac{dP_{Y^n|X_2^n}(y^n|x_2^n)}{dQ^{(1)}_{Y^n|X_2^n}(y^n|x_2^n)}, \label{u-1}\\
K_{2n}&:=\sup_{x_1^n\in\mathcal{X}_1^n, \,y^n\in\mathcal{Y}^n}\frac{dP_{Y^n|X_1^n}(y^n|x_1^n)}{dQ^{(2)}_{Y^n|X_1^n}(y^n|x_1^n)}, \label{u-2} \\
K_{3n}&:=\sup_{y^n\in\mathcal{Y}^n}\frac{dP_{Y^n}(y^n)}{dQ^{(3)}_{Y^n}(y^n)};
\end{align}
\label{unif-bound-MAC}%
\end{subequations}
and $\gamma_{1n},\gamma_{2n},\gamma_{3n}$ are arbitrary positive thresholds whose optimal choices to give the highest rates in~\eqref{DT-splitting} are
\begin{align}
\gamma_{1n}\equiv K_{1n}\frac{M_1-1}{2}, \qquad \gamma_{2n}\equiv K_{2n}\frac{M_2-1}{2}, \qquad \gamma_{3n}\equiv K_{3n}\frac{(M_1-1)(M_2-1)}{2}\;. \label{th-MAC}
\end{align}
\end{theorem}
The achievable bounds~\eqref{DT-joint} and~\eqref{DT-splitting} in Theorem~\ref{corol-modified-DT-MAC} are inspired by the joint dependence-testing and splitting dependence-testing (DT) bounds for the discrete MAC, respectively~\cite{ML-ISIT12}. The latter is a loosening of the former that results from \emph{splitting} the joint outage event via a union bound. The joint-outage result~\eqref{DT-joint} provides a tighter bound on the ultimate performance, and the splitting-outage result~\eqref{DT-splitting} enables simpler evaluation. These two forms will be used in proving the second-order regions presented in Theorems~\ref{GMAC-joint} and~\ref{GMAC-splitting}, respectively.
\begin{proof}
The two channel encoders randomly generate $M_1$ and $M_2$~codewords independently according to some given $n$-letter distributions $P_{X_1^n}$ and~$P_{X_2^n}$, respectively, where $n$ is the designated blocklength. Observing the output~$y^n$, the decoder chooses the first codeword pair $x_1^n(\hat{m}_1)$ and~$x_2^n(\hat{m}_2)$ that look ``jointly typical'' with $y^n$ in a \emph{modified} one-sided sense, specifically satisfying all three of the following conditions:
\begin{subequations}
\begin{align}
\tilde{i}(x_1^n(\hat{m}_1);y^n|x_2^n(\hat{m}_2))>\log \gamma_1(x_1^n(\hat{m}_1),x_2^n(\hat{m}_2)), \\
\tilde{i}(x_2^n(\hat{m}_2);y^n|x_1^n(\hat{m}_1))>\log \gamma_2(x_1^n(\hat{m}_1),x_2^n(\hat{m}_2)), \\
\tilde{i}(x_1^n(\hat{m}_1),x_2^n(\hat{m}_2);y^n)>\log \gamma_3(x_1^n(\hat{m}_1),x_2^n(\hat{m}_2)),
\end{align}
\label{mod-decod-MAC}%
\end{subequations}
where $\gamma_1(x_1^n,x_2^n)$, $\gamma_2(x_1^n,x_2^n)$, $\gamma_3(x_1^n,x_2^n)$ are codeword-dependent thresholds and $\tilde{i}(x_1^n;y^n|x_2^n)$, $\tilde{i}(x_2^n;y^n|x_1^n)$, $\tilde{i}(x_1^n,x_2^n;y^n)$ are the corresponding realizations of the \emph{modified} mutual information random variables~$\tilde{i}(X_1^n;Y^n|X_2^n)$, $\tilde{i}(X_2^n;Y^n|X_1^n)$, $\tilde{i}(X_1^nX_2^n;Y^n)$, respectively.
The error probability, averaged over all possible realizations of the two independently generated codebooks of sizes $M_1$ and $M_2$, can then be bounded, similar to~\eqref{typ-1}-\eqref{typ-2}, as the sum of a joint-outage probability and three confusion probabilities as follows:
\allowdisplaybreaks{
\begin{align}
&\epsilon\!\leq\!P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)\leq \log\gamma_1(X_1^n,X_2^n) \right. \notag \\
& \qquad \qquad \qquad \qquad \qquad \cup\left. \tilde{i}(X_2^n;Y^n|X_1^n)\leq \log\gamma_2(X_1^n,X_2^n) \right. \notag \\
& \qquad \qquad \qquad \qquad \qquad \cup\left. \tilde{i}(X_1^nX_2^n;Y^n)\leq \log\gamma_3(X_1^n,X_2^n)\right] \notag\\
&+\frac{M_1-1}{2}P_{X_1^n}P_{X_2^n}P_{Y^n|X_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)>\log\gamma_1(X_1^n,X_2^n)\right] \notag\\
&+\frac{M_2-1}{2}P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)>\log\gamma_2(X_1^n,X_2^n)\right] \notag\\
&+\frac{(M_1-1)(M_2-1)}{2}P_{X_1^n}P_{X_2^n}P_{Y^n}\left[\,\tilde{i}(X_1^nX_2^n;Y^n)>\log\gamma_3(X_1^n,X_2^n)\right].
\end{align}}
Applying the change of measure technique of~\eqref{change-measure-MAC} with the definitions~\eqref{unif-bound-MAC} yields
\begin{align}
\epsilon \leq\; &P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)\leq \log\gamma_1(X_1^n,X_2^n) \right. \notag \\
& \qquad \qquad \qquad \qquad \quad \cup\left. \tilde{i}(X_2^n;Y^n|X_1^n)\leq \log\gamma_2(X_1^n,X_2^n) \right. \notag \\
& \qquad \qquad \qquad \qquad \quad \cup\left. \tilde{i}(X_1^nX_2^n;Y^n)\leq \log\gamma_3(X_1^n,X_2^n)\right] \notag\\
&+K_{1n}\frac{M_1-1}{2}P_{X_1^n}P_{X_2^n}Q^{(1)}_{Y^n|X_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)>\log\gamma_1(X_1^n,X_2^n)\right] \notag\\
&+K_{2n}\frac{M_2-1}{2}P_{X_1^n}P_{X_2^n}Q^{(2)}_{Y^n|X_1^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)>\log\gamma_2(X_1^n,X_2^n)\right] \notag\\
&+K_{3n}\frac{(M_1-1)(M_2-1)}{2}P_{X_1^n}P_{X_2^n}Q^{(3)}_{Y^n}\left[\,\tilde{i}(X_1^nX_2^n;Y^n)>\log\gamma_3(X_1^n,X_2^n)\right].
\end{align}
The input cost constraints can be handled, analogously to the P2P case, by selecting all the decoding thresholds to be infinite, $\gamma_1(x_1^n,x_2^n)=\infty$, $\gamma_2(x_1^n,x_2^n)=\infty$ and $\gamma_3(x_1^n,x_2^n)=\infty$, if either $x_1^n\notin\mathcal{F}_{1n}$ or $x_2^n\notin\mathcal{F}_{2n}$, and selecting $\gamma_1(x_1^n,x_2^n)=\gamma_{1n}$, $\gamma_2(x_1^n,x_2^n)=\gamma_{2n}$ and $\gamma_3(x_1^n,x_2^n)=\gamma_{3n}$, otherwise. With this choice, analogously to~\eqref{DT-cost-1}-\eqref{DT-cost-PPV}, we obtain
\begin{align}
\epsilon \leq\; &P_{X_1^n}P_{X_2^n}\left[X_1^n\notin\mathcal{F}_{1n} \cup X_2^n\notin\mathcal{F}_{2n}\right] \notag \\
&+ P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)\leq \log\gamma_{1n} \right. \notag \\
& \qquad \qquad \qquad \qquad \qquad \cup\left. \tilde{i}(X_2^n;Y^n|X_1^n)\leq \log\gamma_{2n} \right. \notag \\
& \qquad \qquad \qquad \qquad \qquad \cup\left. \tilde{i}(X_1^nX_2^n;Y^n)\leq \log\gamma_{3n}\;\right] \notag\\
&+K_{1n}\frac{M_1-1}{2}P_{X_1^n}P_{X_2^n}Q^{(1)}_{Y^n|X_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)>\log\gamma_{1n}\right] \notag\\
&+K_{2n}\frac{M_2-1}{2}P_{X_1^n}P_{X_2^n}Q^{(2)}_{Y^n|X_1^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)>\log\gamma_{2n}\right] \notag\\
&+K_{3n}\frac{(M_1-1)(M_2-1)}{2}P_{X_1^n}P_{X_2^n}Q^{(3)}_{Y^n}\left[\,\tilde{i}(X_1^nX_2^n;Y^n)>\log\gamma_{3n}\right].
\label{joint DT for MAC}
\end{align}
To simplify the analysis, one could apply a union bound to the second term on the RHS of~\eqref{joint DT for MAC}, the joint outage event, to obtain the following potentially looser, but simpler, outage-splitting bound.
\begin{align}
\epsilon \leq\; & P_{X_1^n}P_{X_2^n}\left[X_1^n\notin\mathcal{F}_{1n} \cup X_2^n\notin\mathcal{F}_{2n}\right] \notag \\
&+P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)\leq \log\gamma_{1n}\right] \notag\\
&+K_{1n}\frac{M_1-1}{2}P_{X_1^n}P_{X_2^n}Q^{(1)}_{Y^n|X_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)>\log\gamma_{1n}\right] \notag\\
&+ P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)\leq \log\gamma_{2n}\right] \notag\\
&+K_{2n}\frac{M_2-1}{2}P_{X_1^n}P_{X_2^n}Q^{(2)}_{Y^n|X_1^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)>\log\gamma_{2n}\right] \notag\\
&+ P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_1^nX_2^n;Y^n)\leq \log\gamma_{3n}\right] \notag\\
&+K_{3n}\frac{(M_1-1)(M_2-1)}{2}P_{X_1^n}P_{X_2^n}Q^{(3)}_{Y^n}\left[\,\tilde{i}(X_1^nX_2^n;Y^n)>\log\gamma_{3n}\right].
\label{Splitting DT for MAC}
\end{align}
Upon remapping all of the non-feasible codewords of each codebook to arbitrary sequences~$x_1^n(0)\in\mathcal{F}_{1n}$ or $x_2^n(0)\in\mathcal{F}_{2n}$, respectively, without modifying the decoding regions, we infer that there exists a pair of deterministic codebooks with $M_1$ codewords belonging to the feasible set~$\mathcal{F}_{1n}$ and $M_2$ codewords belonging to the feasible set~$\mathcal{F}_{2n}$, respectively, whose average error probability~$\epsilon$ satisfies~\eqref{joint DT for MAC} or~\eqref{Splitting DT for MAC}.
To establish the final assertion~\eqref{th-MAC} of Theorem~\ref{corol-modified-DT-MAC}, it is sufficient to observe that the last six summands on the RHS of \eqref{Splitting DT for MAC} are three weighted sums of the two types of error in three Bayesian binary hypothesis tests, and therefore correspond to the average error probabilities of these tests. The optimal test for each case is an LRT, as reflected in the decoding conditions~\eqref{mod-decod-MAC}, with the optimal threshold equal to the ratio of priors, or simply the ratio of the coefficients of the two error probabilities of the test, as given in~\eqref{th-MAC}.
\end{proof}
\subsection{Second-Order Characterization for the Gaussian MAC}
\label{sub-2nd-mac}
In this section, we specialize the achievability bound of Theorem 4 to the Gaussian MAC and prove Theorems~2 and 3. Our approach is analogous to that taken for the P2P Gaussian channel.
\subsubsection{Coding on Independent Power Shells}
First, we choose the pair of input distributions to be independent uniform distributions on the respective power shells
\begin{align}
P_{X_1^nX_2^n}(x_1^n,x_2^n)&=\frac{\delta(||x_1^n||-\sqrt{nP_1})}{S_n(\sqrt{nP_1})}\cdot\frac{\delta(||x_2^n||-\sqrt{nP_2})}{S_n(\sqrt{nP_2})}, \label{unif-shell-mac}
\end{align}
with the same notations as in~\eqref{unif-shell}. Note that this pair of distributions satisfies the input power constraint with probability one, that is,
\begin{align}
P_{X_1^n}P_{X_2^n}\left[X_1^n\notin\mathcal{F}_{1n} \cup X_2^n\notin\mathcal{F}_{2n}\right]=P_{X_1^n}P_{X_2^n}\left[||X_1^n||^2>nP_1 \cup ||X_2^n||^2>nP_2\right]=0.
\label{cost=0-MAC}
\end{align}
Moreover, analogous to~\eqref{shell-output} for the P2P Gaussian channel, the conditional output distributions induced by this input pair are
\begin{align}
P_{Y^n|X_2^n}(y^n|x_2^n) &= \frac{1}{2}\pi^{-n/2}\Gamma\left(\frac{n}{2}\right) e^{-nP_1/2} e^{-||y^n-x_2^n||^2/2} \frac{I_{n/2-1}(||y^n-x_2^n||\sqrt{nP_1})}{(||y^n-x_2^n||\sqrt{nP_1})^{n/2-1}}, \label{shell-output-1} \\
P_{Y^n|X_1^n}(y^n|x_1^n) &= \frac{1}{2}\pi^{-n/2}\Gamma\left(\frac{n}{2}\right) e^{-nP_2/2} e^{-||y^n-x_1^n||^2/2} \frac{I_{n/2-1}(||y^n-x_1^n||\sqrt{nP_2})}{(||y^n-x_1^n||\sqrt{nP_2})^{n/2-1}}, \label{shell-output-2}
\end{align}
where $I_v(\cdot)$ is again the modified Bessel function of the first kind and~$v$-th order.
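We remark that, for numerical work, these densities are best evaluated in the log domain using exponentially scaled Bessel functions; the following sketch of ours for~\eqref{shell-output-1} assumes SciPy's \texttt{ive} and \texttt{gammaln} and requires $||y^n-x_2^n||>0$.
\begin{verbatim}
# Sketch (ours): log of the induced density (shell-output-1), in nats.
import numpy as np
from scipy.special import ive, gammaln

def log_p_y_given_x2(y, x2, P1):
    # ive(v,z) = I_v(z)*exp(-z), hence log I_v(z) = log(ive(v,z)) + z,
    # which avoids overflow for the large arguments arising here.
    n = y.size
    d = np.linalg.norm(y - x2)            # assumes d > 0
    z = d * np.sqrt(n * P1)
    return (np.log(0.5) - 0.5*n*np.log(np.pi) + gammaln(n/2)
            - n*P1/2 - d**2/2
            + np.log(ive(n/2 - 1, z)) + z - (n/2 - 1)*np.log(z))
\end{verbatim}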
The analysis of the unconditional output distribution~$P_{Y^n}$ for such an input pair is more complicated, and appears unlikely to be expressed in closed form.\footnote{In fact, our former expression~\cite[Eq. (39)]{ML-Allerton12} for this induced output distribution appears to be incorrect, and we have not been able to obtain a closed form expression for this distribution, even using Bessel functions and the like.} However, we can fully characterize the distribution $U^n:=X_1^n+X_2^n$ of the superimposed input to the channel, and use this distribution for our later analysis. In particular, under the independent uniform distribution~\eqref{unif-shell-mac} for $X_1^n$ and $X_2^n$ on the respective power shells, we have $P_{U^n}(u^n)=0$ for any $u^n\in\mathbb{R}^n$ that satisfies $||u^n||<|\sqrt{nP_1}-\sqrt{nP_2}|$ or $||u^n||>\sqrt{nP_1}+\sqrt{nP_2}$. Moreover, we have $P_{U^n}(u^n)=0$ for those $u^n\in\mathbb{R}^n$ satisfying $||u^n||=|\sqrt{nP_1}-\sqrt{nP_2}|$, since
\begin{align}
\Pr\left[||U^n||<\left|\sqrt{nP_1}-\sqrt{nP_2}\right|\right]&=\Pr\left[||X_1^n+X_2^n||<\left|\sqrt{nP_1}-\sqrt{nP_2}\right|\right]=0,
\end{align}
and
\begin{align}
\Pr\left[||U^n||\leq\left|\sqrt{nP_1}-\sqrt{nP_2}\right|\right] &= \Pr\left[||U^n||=\left|\sqrt{nP_1}-\sqrt{nP_2}\right|\right] \\
&=\Pr\left[||X_1^n+X_2^n||=\left|\sqrt{nP_1}-\sqrt{nP_2}\right|\right] \\
&=\mathbb{E}_{X_2^n}\left[\Pr\left[||X_1^n+x_2^n||=\left|\sqrt{nP_1}-\sqrt{nP_2}\right|\right]\right] \\
&=\mathbb{E}_{X_2^n}\left[\Pr\left[X_1^n=-\sqrt{nP_1}\frac{x_2^n}{||x_2^n||}\right]\right] \\
&=\mathbb{E}_{X_2^n}[0] =0.
\end{align}
Analogously, we have $P_{U^n}(u^n)=0$ for those $u^n\in\mathbb{R}^n$ satisfying $||u^n||=\sqrt{nP_1}+\sqrt{nP_2}$, since
\begin{align}
\Pr\left[||U^n||\leq\sqrt{nP_1}+\sqrt{nP_2}\right]=1 \quad \text{and} \quad \Pr\left[||U^n||<\sqrt{nP_1}+\sqrt{nP_2}\right] = 1.
\end{align}
However, for any $u^n\in\mathbb{R}^n$ belonging to the hollow sphere $|\sqrt{nP_1}-\sqrt{nP_2}|<||u^n||<\sqrt{nP_1}+\sqrt{nP_2}$, we have
\begin{align}
P_{U^n}(u^n) & =\int_{\mathbb{R}^n} P_{X_1^n}(x_1^n) P_{X_2^n}(u^n-x_1^n) dx_1^n \\
& = \int_{\mathbb{R}^n} \frac{\delta(||x_1^n||-\sqrt{nP_1})}{S_n(\sqrt{nP_1})} \cdot \frac{\delta(||u^n-x_1^n||-\sqrt{nP_2})}{S_n(\sqrt{nP_2})} dx_1^n \\
& = \int_0^\pi \int_0^\infty \frac{\delta(r-\sqrt{nP_1})}{S_n(\sqrt{nP_1})} \cdot \frac{\delta(\sqrt{||u^n||^2-2||u^n||r\cos\theta+r^2}-\sqrt{nP_2})}{S_n(\sqrt{nP_2})} S_{n-1}(r\sin\theta) r dr d\theta \label{decomp-mac} \\
& = \frac{\sqrt{nP_1}S_{n-1}(\sqrt{nP_1})}{S_n(\sqrt{nP_1})} \int_0^\pi \frac{\delta\left(\sqrt{||u^n||^2-2||u^n||\sqrt{nP_1}\cos\theta+nP_1}-\sqrt{nP_2}\right)}{S_n(\sqrt{nP_2})} \left(\sin\theta\right)^{n-2} d\theta \label{th0} \\
& = \frac{1}{\sqrt{\pi}} \frac{\Gamma\left(\frac{n}{2}\right)}{\Gamma\left(\frac{n-1}{2}\right)} \int_0^\pi \frac{1}{S_n(\sqrt{nP_2})} \sqrt{\frac{P_2}{P_1}} \frac{\delta(\theta-\theta_0)}{||u^n||\sin\theta_0} \left(\sin\theta\right)^{n-2} d\theta \\
& = \sqrt{\frac{P_2}{\pi P_1}} \frac{\Gamma\left(\frac{n}{2}\right)}{\Gamma\left(\frac{n-1}{2}\right)} \frac{\Gamma\left(\frac{n}{2}\right)}{2\pi^{n/2}(nP_2)^{(n-1)/2}} \frac{1}{||u^n||} \left(1-\left(\frac{||u^n||^2+n(P_1-P_2)}{2\sqrt{nP_1}||u^n||}\right)^2\right)^{(n-3)/2},
\end{align}
where~\eqref{decomp-mac} follows from a decomposition of the space~$\mathbb{R}^n$ into a continuum of ring elements as in~\eqref{decomp}, and~\eqref{th0} follows from the identity $\delta(g(x))=\frac{\delta(x-x_0)}{|g'(x_0)|}$ with $x_0$ being the real root of $g(x)$, so that
\begin{align}
\delta\left(\sqrt{||u^n||^2-2||u^n||\sqrt{nP_1}\cos\theta+nP_1}-\sqrt{nP_2}\right) &= \frac{\delta(\theta-\theta_0)}{\left|\frac{2||u^n||\sqrt{nP_1}\sin\theta_0}{2\sqrt{nP_2}}\right|} = \sqrt{\frac{P_2}{P_1}} \frac{\delta(\theta-\theta_0)}{||u^n||\sin\theta_0},
\end{align}
in which $\theta_0\in(0,\pi)$ is defined as the solution to
\begin{align}
\cos\theta_0= \frac{||u^n||^2+n(P_1-P_2)}{2\sqrt{nP_1}||u^n||}.
\end{align}
The unconditional output distribution $P_{Y^n}$ is now given by
\begin{align}
P_{Y^n}(y^n) =\int_{\mathbb{R}^n} P_{U^n}(u^n) P_{Y^n|U^n}(y^n|u^n) du^n, \label{shell-output-3}
\end{align}
where $P_{Y^n|U^n}(y^n|u^n)$ is the i.i.d. Gaussian distribution~$\mathcal{N}(y^n; u^n, I_n)$ of the channel noise.
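A quick Monte Carlo check (a sketch of ours) confirms the support statements above: sampling the two power shells via normalized Gaussian vectors, the norm $||U^n||$ always falls strictly inside the hollow sphere and concentrates around $\sqrt{n(P_1+P_2)}$.
\begin{verbatim}
# Sketch (ours): Monte Carlo check of the support of ||U^n||.
import numpy as np
rng = np.random.default_rng(0)
n, P1, P2, trials = 200, 1.0, 2.0, 10000
W1 = rng.standard_normal((trials, n))
W2 = rng.standard_normal((trials, n))
X1 = np.sqrt(n*P1) * W1 / np.linalg.norm(W1, axis=1, keepdims=True)
X2 = np.sqrt(n*P2) * W2 / np.linalg.norm(W2, axis=1, keepdims=True)
r = np.linalg.norm(X1 + X2, axis=1)
lo = abs(np.sqrt(n*P1) - np.sqrt(n*P2))
hi = np.sqrt(n*P1) + np.sqrt(n*P2)
print(((r > lo) & (r < hi)).all())     # support of P_{U^n}
print(r.mean(), np.sqrt(n*(P1+P2)))    # concentration of ||U^n||
\end{verbatim}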
Next, we choose the triple of (conditional) output distributions to be the capacity-achieving output distributions with respect to each case, that is,
\begin{align}
Q^{(1)}_{Y^n|X_2^n}(y^n|x_2^n)&\sim\mathcal{N}(y^n; x_2^n,(1+P_1)I_n), \label{Q1}\\
Q^{(2)}_{Y^n|X_1^n}(y^n|x_1^n)&\sim\mathcal{N}(y^n; x_1^n,(1+P_2)I_n), \label{Q2}\\
Q^{(3)}_{Y^n}(y^n)&\sim\mathcal{N}(y^n; \mathbf{0},(1+P_1+P_2)I_n). \label{Q3}
\end{align}
The following proposition will then bound the R-N derivatives introduced in~\eqref{unif-bound-MAC}. The proof, which is a slight generalization of the one for the P2P case, is given in Appendix~\ref{proof-Prop-MAC}.
\begin{proposition}
\label{divergence-bound-MAC}
Let $P_{Y^n|X_2^n},P_{Y^n|X_1^n},P_{Y^n}$ be the (conditional) distributions~\eqref{shell-output-1}, \eqref{shell-output-2}, \eqref{shell-output-3} induced on the output of the Gaussian MAC by a pair of independent uniform input distributions on the respective power shells~\eqref{unif-shell-mac}, and let $Q^{(1)}_{Y^n|X_2^n},Q^{(2)}_{Y^n|X_1^n},Q^{(3)}_{Y^n}$ be the (conditional) capacity-achieving output distributions~\eqref{Q1}, \eqref{Q2}, \eqref{Q3}. There exist positive constants~$K_1,K_2,K_3$ such that, for any $x_1^n,x_2^n,y^n\in\mathbb{R}^n$, and for sufficiently large~$n$,
\begin{subequations}
\begin{align}
\frac{dP_{Y^n|X_2^n}(y^n|x_2^n)}{dQ^{(1)}_{Y^n|X_2^n}(y^n|x_2^n)}&\leq K_1, \label{unif-mac-1}\\
\frac{dP_{Y^n|X_1^n}(y^n|x_1^n)}{dQ^{(2)}_{Y^n|X_1^n}(y^n|x_1^n)}&\leq K_2, \label{unif-mac-2} \\
\frac{dP_{Y^n}(y^n)}{dQ^{(3)}_{Y^n}(y^n)} & \leq K_3. \label{unif-mac-3}
\end{align}
\end{subequations}
\end{proposition}
\textit{Remark.} Using some more complicated manipulations, the proposition can be shown to be valid for any finite~$n$, but the above statement is enough for our second-order analysis.
Proposition~\ref{divergence-bound-MAC} facilitates the use of Theorem~\ref{corol-modified-DT-MAC} with the aforementioned choices for the input distributions and the reference output distributions. Substituting~\eqref{cost=0-MAC} into the achievability bounds~\eqref{DT-joint} and~\eqref{DT-splitting} of Theorem~\ref{corol-modified-DT-MAC} leaves only the confusion and joint/individual outage probabilities. Note that, for simplicity of analysis, we will use the choice of thresholds as indicated in~\eqref{th-MAC} for both of the joint-outage and outage-splitting bounds above, although it need not be the optimal choice for the joint case.
In the following, we evaluate the outage and confusion probabilities for sufficiently large blocklength to obtain the second-order achievable bounds.
\subsubsection{Evaluation of the Outage Probability}
\label{sub-MAC-outage}
The joint outage probability for the Gaussian MAC can be written in the following generic form
\begin{align}
P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}&\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)\leq \log\gamma_1 \right. \notag \\
& \quad \cup\left. \tilde{i}(X_2^n;Y^n|X_1^n)\leq \log\gamma_2 \right. \notag \\
& \quad \cup\left. \tilde{i}(X_1^nX_2^n;Y^n)\leq \log\gamma_3\;\right] \\
&=1 - P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{\mathbf{i}}(X_1^nX_2^n;Y^n) >
\begin{pmatrix}
\log\gamma_1 \\
\log\gamma_2 \\
\log\gamma_3
\end{pmatrix}
\right] ,
\label{outage-pr-joint}
\end{align}
in which the modified mutual information random \textit{vector} is defined as
\begin{align}
\tilde{\mathbf{i}}(X_1^nX_2^n;Y^n):=
\begin{pmatrix}
\tilde{i}(X_1^n;Y^n|X_2^n) \\
\tilde{i}(X_2^n;Y^n|X_1^n) \\
\tilde{i}(X_1^nX_2^n;Y^n)
\end{pmatrix}, \label{mi-mac}
\end{align}
and the input distribution~$P_{X_1^n}P_{X_2^n}$ used in the outage formulation above is the independent uniform distribution on the respective power shells~\eqref{unif-shell-mac}, with the vector inequality in~\eqref{outage-pr-joint} understood element-wise.
Under the $P_{Y^n|X_1^nX_2^n}$ channel law, the output~$Y^n$ can be written in the form
\begin{align}
Y^n=X_1^n+X_2^n+Z^n,
\end{align}
where $Z^n\sim \mathcal{N}(\mathbf{0},I_n)$ is the i.i.d. unit-variance channel noise. With the choices~\eqref{Q1} and \eqref{Q2} for $Q^{(1)}_{Y^n|X_2^n}$ and $Q^{(2)}_{Y^n|X_1^n}$, respectively, the first two elements of this random vector simplify analogously to~\eqref{mi-p2p} as follows:
\begin{align}
\tilde{i}(X_1^n;Y^n|X_2^n)&\equiv nC(P_1)+\frac{\log e}{2(1+P_1)}\left[P_1(n-||Z^n||^2)+2 \langle X_1^n,Z^n \rangle\right], \label{mi-mac-1}\\
\tilde{i}(X_2^n;Y^n|X_1^n)&\equiv nC(P_2)+\frac{\log e}{2(1+P_2)}\left[P_2(n-||Z^n||^2)+2 \langle X_2^n,Z^n \rangle\right]. \label{mi-mac-2}
\end{align}
Moreover, with the choice~\eqref{Q3} for $Q^{(3)}_{Y^n}$, the third element of the modified mutual information random vector also simplifies to
\begin{align}
\tilde{i}(X_1^n,X_2^n;Y^n)&\equiv \log\frac{(2\pi)^{-n/2}e^{-||Y^n-X_1^n-X_2^n||^2/2}}{(2\pi(1+P_1+P_2))^{-n/2}e^{-||Y^n||^2/2(1+P_1+P_2)}} \\
&=\frac{n}{2}\log(1+P_1+P_2)+\frac{\log e}{2}\left[\frac{||Y^n||^2}{1+P_1+P_2}-||Y^n-X_1^n-X_2^n||^2\right] \\
&=nC(P_1+P_2)+\frac{\log e}{2(1+P_1+P_2)}\left[||X_1^n+X_2^n+Z^n||^2-(1+P_1+P_2)||Z^n||^2\right] \\
&=nC(P_1+P_2) +\frac{\log e}{2(1+P_1+P_2)}\left[(P_1+P_2)(n-||Z^n||^2) + 2 \langle X_1^n, X_2^n \rangle+2 \langle X_1^n,Z^n \rangle+2 \langle X_2^n,Z^n \rangle\right], \label{mi-mac-3}
\end{align}
since $||X_1^n||^2=nP_1$ and $||X_2^n||^2=nP_2$ with probability one.
Note that, although these random variables are written in the form of summations, the summands are not independent, since neither of the inputs $X_1^n$ and $X_2^n$ is independent across time. Therefore, a direct application of the conventional CLT is not possible. Moreover, the symmetry arguments used in the Gaussian P2P case \cite{PPV,Hayashi} do not apply, since the realization of the inner product RV~$\langle X_1^n, X_2^n \rangle$ varies with different pairs of codewords~$(x_1^n,x_2^n)$ on the power shells.
However, recall that independent uniform RVs on the power shells can be viewed as functions of i.i.d. Gaussian RVs. More precisely, let $W_1^n\sim\mathcal{N}(\mathbf{0},I_n)$ and $W_2^n\sim \mathcal{N}(\mathbf{0},I_n)$ be i.i.d. Gaussian RVs independent of each other and the channel noise~$Z^n\sim \mathcal{N}(\mathbf{0},I_n)$. The elements $X_{1t}$ and $X_{2t}$, $t=1,...,n$, of the independent uniformly distributed RVs $X_1^n,X_2^n$ on the power shells~\eqref{unif-shell-mac} can be expressed as
\begin{align}
X_{1t}=\sqrt{nP_1}\frac{W_{1t}}{||W_1^n||}, \qquad
X_{2t}=\sqrt{nP_2}\frac{W_{2t}}{||W_2^n||}.
\end{align}
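This representation also makes it easy to verify empirically (a self-contained sketch of ours) that the inner product term responsible for $V_3(P_1,P_2)$ behaves as claimed: $\frac{1}{n}\langle X_1^n,X_2^n\rangle$ has mean zero and variance close to $P_1P_2/n$.
\begin{verbatim}
# Sketch (ours): <X1,X2>/n has mean ~0 and variance ~P1*P2/n,
# which is the source of the extra dispersion term V_3(P1,P2).
import numpy as np
rng = np.random.default_rng(0)
n, P1, P2, trials = 200, 1.0, 2.0, 10000
W1 = rng.standard_normal((trials, n))
W2 = rng.standard_normal((trials, n))
X1 = np.sqrt(n*P1) * W1 / np.linalg.norm(W1, axis=1, keepdims=True)
X2 = np.sqrt(n*P2) * W2 / np.linalg.norm(W2, axis=1, keepdims=True)
inner = (X1 * X2).sum(axis=1) / n      # <X1,X2>/n
print(inner.mean())                    # ~ 0
print(n * inner.var(), P1 * P2)        # ~ P1*P2
\end{verbatim}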
Hence, we can apply the CLT for functions of Proposition~\ref{CLT-func} as follows. Consider the sequence of random vectors $\{\mathbf{U}_t=(U_{1t},...,U_{6t})\}_{t=1}^\infty$ whose elements are
\begin{align}
U_{1t}&=1-Z_t^2, \\
U_{2t}&=\sqrt{P_1}W_{1t}Z_{t}, \\
U_{3t}&=\sqrt{P_2}W_{2t}Z_{t}, \\
U_{4t}&=\sqrt{P_1P_2}W_{1t}W_{2t}, \\
U_{5t}&=W_{1t}^2-1, \\
U_{6t}&=W_{2t}^2-1.
\end{align}
Note that this random vector has an i.i.d. distribution across time $t=1,...,n$, and its moments can be easily verified to satisfy $\mathbb{E}[\mathbf{U}_1]=0$ and $\mathbb{E}[||\mathbf{U}_1||_2^3]<\infty$. Moreover, the covariance matrix of this vector is given by
\begin{align}
\text{Cov}(\mathbf{U}_1)=
\begin{pmatrix}
2 & 0 & 0 & 0 & 0 & 0 \\
0 & P_1 & 0 & 0 & 0 & 0 \\
0 & 0 & P_2 & 0 & 0 & 0 \\
0 & 0 & 0 & P_1P_2 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & 0 \\
0 & 0 & 0 & 0 & 0 & 2
\end{pmatrix}.
\end{align}
Next, define the vector function $\mathbf{f}(\mathbf{u})=(f_1(\mathbf{u}),f_2(\mathbf{u}),f_3(\mathbf{u}))$ whose three components are
\allowdisplaybreaks{
\begin{align}
f_1(\mathbf{u})&=P_1u_1+ \frac{2u_2}{\sqrt{1+u_5}}, \\
f_2(\mathbf{u})&=P_2u_1+ \frac{2u_3}{\sqrt{1+u_6}}, \\
f_3(\mathbf{u})&=(P_1+P_2)u_1+ \frac{2u_2}{\sqrt{1+u_5}} + \frac{2u_3}{\sqrt{1+u_6}} + \frac{2u_4}{\sqrt{1+u_5}\sqrt{1+u_6}}.
\end{align}}%
Again, $\mathbf{f}(\mathbf{0})= \mathbf{0}$ and all the first- and second-order partial derivatives of all three components of $\mathbf{f}$ are continuous in a neighborhood of $\mathbf{u}=\mathbf{0}$. Moreover, the Jacobian matrix $\{\frac{\partial f_l(\mathbf{u})}{\partial u_j}\}_{3\times6}$ at $\mathbf{u}=\mathbf{0}$ can be readily verified to be
\begin{align}
\left.J\right|_{\mathbf{u}=\mathbf{0}}=
\begin{pmatrix}
P_1 & 2 & 0 & 0 & 0 & 0 \\
P_2 & 0 & 2 & 0 & 0 & 0 \\
P_1+P_2 & 2 & 2 & 2 & 0 & 0
\end{pmatrix}.
\end{align}
Furthermore, similarly to the P2P case~\eqref{f-sum-1}-\eqref{f-sum-3}, the first two components give
\begin{align}
f_1\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right) &= \frac{1}{n}\left[P_1(n-||Z^n||^2)+2 \langle X_1^n,Z^n \rangle\right], \\
f_2\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right) &= \frac{1}{n}\left[P_2(n-||Z^n||^2)+2 \langle X_2^n,Z^n \rangle\right],
\end{align}
and the third component yields
\begin{align}
f_3\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right)
&=\frac{P_1+P_2}{n}\sum_{t=1}^n (1-Z_t^2)+ \frac{2\frac{1}{n}\sum_{t=1}^n \sqrt{P_1}W_{1t}Z_{t}}{\sqrt{1+\frac{1}{n}\sum_{t=1}^n (W_{1t}^2-1)}}
+ \frac{2\frac{1}{n}\sum_{t=1}^n \sqrt{P_2}W_{2t}Z_{t}}{\sqrt{1+\frac{1}{n}\sum_{t=1}^n (W_{2t}^2-1)}} \notag \\
& \qquad \qquad + \frac{2\frac{1}{n}\sum_{t=1}^n \sqrt{P_1P_2}W_{1t}W_{2t}}{\sqrt{1+\frac{1}{n}\sum_{t=1}^n (W_{1t}^2-1)}\sqrt{1+\frac{1}{n}\sum_{t=1}^n (W_{2t}^2-1)}} \\
&= \frac{1}{n}\sum_{t=1}^n (P_1+P_2)(1-Z_t^2)+ \frac{2}{n}\sum_{t=1}^n \frac{\sqrt{nP_1}W_{1t}}{||W_1^n||}Z_{t} + \frac{2}{n}\sum_{t=1}^n \frac{\sqrt{nP_2}W_{2t}}{||W_2^n||}Z_{t} \notag \\
& \qquad \qquad + \frac{2}{n}\sum_{t=1}^n \frac{\sqrt{nP_1}W_{1t}}{||W_1^n||} \frac{\sqrt{nP_2}W_{2t}}{||W_2^n||} \\
&= \frac{1}{n}\left[(P_1+P_2)(n-||Z^n||^2) + 2 \langle X_1^n, X_2^n \rangle +2 \langle X_1^n,Z^n \rangle +2 \langle X_2^n,Z^n \rangle\right].
\end{align}
Recalling~\eqref{mi-mac-1}, \eqref{mi-mac-2}, \eqref{mi-mac-3}, we now conclude from Proposition~\ref{CLT-func} that the modified mutual information random vector~\eqref{mi-mac} converges in distribution to a 3-dimensional Gaussian random vector with mean vector $n\mathbf{C}(P_1,P_2)$ and covariance matrix given by
\begin{align}
&\frac{1}{n}\left(\frac{n\log e}{2}\right)^2
\begin{pmatrix}
\frac{1}{1+P_1}&\!\!0 &\!\! 0 \\
0&\!\!\frac{1}{1+P_2} &\!\! 0 \\
0&\!\!0 & \!\!\frac{1}{1+P_1+P_2}
\end{pmatrix} \times \\
&\qquad \begin{pmatrix}
P_1 & \!\!2 & 0 & 0 & 0 & 0 \\
P_2 & \!\!0 & 2 & 0 & 0 & 0 \\
P_1+P_2 & \!\!2 & 2 & 2 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
2 & 0 & 0 & 0 & 0 & 0 \\
0 & P_1 & 0 & 0 & 0 & 0 \\
0 & 0 & P_2 & 0 & 0 & 0 \\
0 & 0 & 0 & P_1P_2 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & 0 \\
0 & 0 & 0 & 0 & 0 & 2
\end{pmatrix}
\begin{pmatrix}
P_1 & P_2 & \!\!P_1+P_2 \\
2 & 0 & \!\!2 \\
0 & 2 & \!\!2 \\
0 & 0 & \!\!2 \\
0 & 0 & \!\!0 \\
0 & 0 & \!\!0
\end{pmatrix}
\begin{pmatrix}
\frac{1}{1+P_1} & \!\!0 & \!\!0 \\
0 & \!\!\frac{1}{1+P_2} & \!\!0 \\
0 & \!\!0 & \!\!\frac{1}{1+P_1+P_2}
\end{pmatrix} \\
&=\frac{n\log^2 e}{2}
\begin{pmatrix}
\frac{P_1(P_1+2)}{(P_1+1)^2} & \frac{P_1P_2}{(P_1+1)(P_2+1)} & \frac{P_1(P_1+P_2+2)}{(P_1+1)(P_1+P_2+1)} \\
\frac{P_1P_2}{(P_1+1)(P_2+1)} & \frac{P_2(P_2+2)}{(P_2+1)^2} & \frac{P_2(P_1+P_2+2)}{(P_2+1)(P_1+P_2+1)} \\
\frac{P_1(P_1+P_2+2)}{(P_1+1)(P_1+P_2+1)} & \frac{P_2(P_1+P_2+2)}{(P_2+1)(P_1+P_2+1)} & \frac{(P_1+P_2)(P_1+P_2+2)+2P_1P_2}{(P_1+P_2+1)^2}
\end{pmatrix} = n\mathbf{V}(P_1,P_2).
\end{align}
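The matrix identity above can be verified numerically; the following sketch of ours works in nats, i.e., with $\log e=1$, so that the per-letter dispersion matrix equals $\frac{1}{4}\,\mathbf{D}\,\mathbf{J}\,\text{Cov}(\mathbf{U}_1)\,\mathbf{J}^T\mathbf{D}$ with $\mathbf{D}:=\text{diag}\big(\frac{1}{1+P_1},\frac{1}{1+P_2},\frac{1}{1+P_1+P_2}\big)$.
\begin{verbatim}
# Sketch (ours): check D J Cov(U_1) J^T D / 4 = V(P1,P2) in nats^2.
import numpy as np
P1, P2 = 1.0, 2.0
D = np.diag([1/(1+P1), 1/(1+P2), 1/(1+P1+P2)])
J = np.array([[P1,    2., 0., 0., 0., 0.],
              [P2,    0., 2., 0., 0., 0.],
              [P1+P2, 2., 2., 2., 0., 0.]])
Cov = np.diag([2., P1, P2, P1*P2, 2., 2.])
num = 0.25 * D @ J @ Cov @ J.T @ D       # per-letter dispersion, nats^2
Vn   = lambda P: P*(P+2) / (2*(1+P)**2)
V3n  = P1*P2 / (1+P1+P2)**2
V12n = P1*P2 / (2*(1+P1)*(1+P2))
print(np.allclose(num[0, 0], Vn(P1)), np.allclose(num[1, 1], Vn(P2)),
      np.allclose(num[0, 1], V12n),
      np.allclose(num[2, 2], Vn(P1+P2) + V3n))
\end{verbatim}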
In particular, the joint outage probability is bounded as
\begin{align}
P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}&\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)\leq \log\left(K_1\frac{M_1-1}{2}\right) \right. \notag \\
& \quad \cup\left. \tilde{i}(X_2^n;Y^n|X_1^n)\leq \log\left(K_2\frac{M_2-1}{2}\right) \right. \notag \\
& \quad \cup\left. \tilde{i}(X_1^nX_2^n;Y^n)\leq \log\left(K_3\frac{(M_1-1)(M_2-1)}{2}\right)\right] \\
&\leq 1- \Pr\left[\mathcal{N}(n\mathbf{C}(P_1,P_2),n\mathbf{V}(P_1,P_2))>\begin{pmatrix}
\log\left(K_1\frac{M_1-1}{2}\right) \\
\log\left(K_2\frac{M_2-1}{2}\right) \\
\log\left(K_3\frac{(M_1-1)(M_2-1)}{2}\right)
\end{pmatrix}\right] + \frac{B_1}{\sqrt{n}},
\label{outage-mac-joint}%
\end{align}
where $B_1$ is the constant introduced in Proposition~\ref{CLT-func}.
Moreover, the individual outage probabilities are bounded as
\begin{subequations}
\begin{align}
&P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)\leq \log\left(K_1\frac{M_1-1}{2}\right) \right] \leq Q\left(\frac{nC(P_1)-\log\left(K_1\frac{M_1-1}{2}\right)}{\sqrt{nV(P_1)}}\right) + \frac{B_{11}}{\sqrt{n}}, \label{outage-mac-1} \\
&P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)\leq \log\left(K_2\frac{M_2-1}{2}\right) \right] \leq Q\left(\frac{nC(P_2)-\log\left(K_2\frac{M_2-1}{2}\right)}{\sqrt{nV(P_2)}}\right) + \frac{B_{12}}{\sqrt{n}}, \label{outage-mac-2} \\
&P_{X_1^n}P_{X_2^n}P_{Y^n|X_1^nX_2^n}\left[\,\tilde{i}(X_1^nX_2^n;Y^n)\leq \log\left(K_3\frac{(M_1-1)(M_2-1)}{2}\right) \right] \notag \\
&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \leq Q\left(\frac{nC(P_1+P_2)-\log\left(K_3\frac{(M_1-1)(M_2-1)}{2}\right)}{\sqrt{n[V(P_1+P_2)+V_3(P_1,P_2)]}}\right) + \frac{B_{13}}{\sqrt{n}}, \label{outage-mac-3}
\end{align}
\label{outage-mac-splitting}%
\end{subequations}
where $B_{11}, B_{12}, B_{13}$ are also the constants introduced in Proposition~\ref{CLT-func}.
\subsubsection{Evaluation of the Confusion Probability}
\label{sub-2nd-Ach-MAC}
The confusion probabilities for the Gaussian MAC can be written in the following generic form
\begin{subequations}
\begin{align}
&K_1\frac{M_1-1}{2}P_{X_1^n}P_{X_2^n}Q^{(1)}_{Y^n|X_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)> \log\left(K_1\frac{M_1-1}{2}\right)\right], \\
&K_2\frac{M_2-1}{2}P_{X_1^n}P_{X_2^n}Q^{(2)}_{Y^n|X_1^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)>\log\left(K_2\frac{M_2-1}{2}\right)\right], \\
&K_3\frac{(M_1-1)(M_2-1)}{2}P_{X_1^n}P_{X_2^n}Q^{(3)}_{Y^n}\!\left[\,\tilde{i}(X_1^nX_2^n;Y^n)\!\!>\!\log\left(K_3\frac{(M_1-1)(M_2-1)}{2}\right)\right],
\end{align}
\end{subequations}
where $P_{X_1^n}P_{X_2^n}$ is the independent uniform input distribution on the respective power shells~\eqref{unif-shell-mac}, and $Q^{(1)}_{Y^n|X_2^n}$, $Q^{(2)}_{Y^n|X_1^n}$, and $Q^{(3)}_{Y^n}$ are the (conditional) capacity-achieving output distributions~\eqref{Q1}, \eqref{Q2}, \eqref{Q3} for the Gaussian MAC.
Focusing on the conditional confusion probabilities for fixed input sequences~$x_1^n$ and~$x_2^n$ on the respective power shells, we employ the change of measure technique of~\eqref{change of measure} with $P_{Y^n|X_1^n=x_1^n,X_2^n=x_2^n}$ in the role of~$P$, and with $Q^{(1)}_{Y^n|X_2^n=x_2^n}$, $Q^{(2)}_{Y^n|X_1^n=x_1^n}$, and $Q^{(3)}_{Y^n}$, respectively, in the role of~$Q$, to obtain the following refined large deviation bounds:
\begin{align}
Q^{(1)}_{Y^n|X_2^n=x_2^n}\left[\,\tilde{i}(x_1^n;Y^n|x_2^n)> \log\gamma_1\right] &\leq \frac{B_{21}}{\sqrt{n}}\gamma_1^{-1}, \\
Q^{(2)}_{Y^n|X_1^n=x_1^n}\left[\,\tilde{i}(x_2^n;Y^n|x_1^n)> \log\gamma_2\right] &\leq \frac{B_{22}}{\sqrt{n}}\gamma_2^{-1},\\
Q^{(3)}_{Y^n}\!\left[\,\tilde{i}(x_1^n,x_2^n;Y^n)\!\!>\!\log\gamma_3\right] &\leq \frac{B_{23}}{\sqrt{n}}\gamma_3^{-1},
\end{align}
which follow from~\cite[Lemma 47]{PPV}. Specific expressions for the finite constants $B_{21},B_{22},B_{23}$ can be readily obtained in terms of the power constraints~$P_1$ and $P_2$, but are not our main interest. Since these bounds are uniform with respect to the location of the input sequences~$x_1^n$ and~$x_2^n$ on the respective power shells, we can bound the unconditional confusion probabilities as
\begin{subequations}
\begin{align}
K_1\frac{M_1-1}{2}P_{X_1^n}P_{X_2^n}Q^{(1)}_{Y^n|X_2^n}\left[\,\tilde{i}(X_1^n;Y^n|X_2^n)> \log\left(K_1\frac{M_1-1}{2}\right)\right] &\leq \frac{B_{21}}{\sqrt{n}}, \\
K_2\frac{M_2-1}{2}P_{X_1^n}P_{X_2^n}Q^{(2)}_{Y^n|X_1^n}\left[\,\tilde{i}(X_2^n;Y^n|X_1^n)>\log\left(K_2\frac{M_2-1}{2}\right)\right] &\leq \frac{B_{22}}{\sqrt{n}},\\
K_3\frac{(M_1-1)(M_2-1)}{2}P_{X_1^n}P_{X_2^n}Q^{(3)}_{Y^n}\!\left[\,\tilde{i}(X_1^nX_2^n;Y^n)\!\!>\!\log\left(K_3\frac{(M_1-1)(M_2-1)}{2}\right)\right] &\leq \frac{B_{23}}{\sqrt{n}}.
\end{align}
\label{conf-MAC}
\end{subequations}
\subsubsection{Completion}
Substituting~\eqref{cost=0-MAC},~\eqref{outage-mac-joint}, and~\eqref{conf-MAC} into the achievability bound~\eqref{DT-joint} of Theorem~\ref{corol-modified-DT-MAC}, and recalling from~\eqref{err-mac} that, with a slight abuse of notation (cf.~\cite[Eq. (186)]{PPV}), $\epsilon$ is the target error probability, yields
\begin{align}
&\epsilon
\geq 1- \Pr\left[\mathcal{N}(n\mathbf{C}(P_1,P_2),n\mathbf{V}(P_1,P_2))>\begin{pmatrix}
\log\left(K_1\frac{M_1-1}{2}\right) \\
\log\left(K_2\frac{M_2-1}{2}\right) \\
\log\left(K_3\frac{(M_1-1)(M_2-1)}{2}\right)
\end{pmatrix}\right] + \frac{B}{\sqrt{n}},
\end{align}
where~$B=B_1+B_{21}+B_{22}+B_{23}$. Rearranging and using the symmetry property of the Gaussian distribution $\Pr[\mathcal{N}>z]=\Pr[\mathcal{N}<-z]$, we obtain
\begin{align}
\Pr\left[\mathcal{N}(\mathbf{0},n\mathbf{V}(P_1,P_2))<n\mathbf{C}(P_1,P_2)-
\begin{pmatrix}
\log\left(K_1\frac{M_1-1}{2}\right) \\
\log\left(K_2\frac{M_2-1}{2}\right) \\
\log\left(K_3\frac{(M_1-1)(M_2-1)}{2}\right)
\end{pmatrix}\right] \geq 1- \left(\epsilon - \frac{B}{\sqrt{n}}\right).
\end{align}
Recalling the definition~\eqref{Q-inv} of the inverse complementary CDF of the multi-dimensional Gaussian RV, we find
\begin{align}
\begin{pmatrix}
\log\left(\frac{M_1-1}{2}\right) \\
\log\left(\frac{M_2-1}{2}\right) \\
\log\left(\frac{(M_1-1)(M_2-1)}{2}\right) \\
\end{pmatrix} &\in n\mathbf{C}(P_1,P_2)-\sqrt{n}Q^{-1}\left(\epsilon- \frac{B}{\sqrt{n}}; \mathbf{V}(P_1,P_2)\right) -
\begin{pmatrix}
\log K_1 \\
\log K_2 \\
\log K_3 \\
\end{pmatrix} \\
&\subseteq n\mathbf{C}(P_1,P_2)-\sqrt{n} Q^{-1}\left(\epsilon; \mathbf{V}(P_1,P_2)\right) + O(1)\mathbf{1}, \label{Taylor-MAC-1}
\end{align}
where~\eqref{Taylor-MAC-1} follows from the Taylor expansion for the multi-dimensional $Q^{-1}$ function.
Thus, we have proved that an $(n,M_1,M_2,\epsilon,P_1,P_2)$ code exists if the rate pair satisfies
\begin{align}
\frac{1}{n}
\begin{pmatrix}
\log M_1 \\
\log M_2 \\
\log\left(M_1M_2\right) \\
\end{pmatrix} &\in \mathbf{C}(P_1,P_2)-\frac{1}{\sqrt{n}}Q^{-1}\left(\epsilon; \mathbf{V}(P_1,P_2)\right)+ O\left(\frac{1}{n}\right)\mathbf{1}.
\label{MAC-ach-approx}
\end{align}
This concludes the proof of achievability for the joint-outage rate region of Theorem~\ref{GMAC-joint}.
Next, we turn to the proof of Theorem 3. Substituting~\eqref{cost=0-MAC},~\eqref{outage-mac-splitting}, and~\eqref{conf-MAC} into the achievability bound~\eqref{DT-splitting} of Theorem~\ref{corol-modified-DT-MAC} and again recalling~\eqref{err-mac} that $\epsilon$ is the target error probability leads to
\begin{align}
\epsilon
&\geq Q\left(\frac{nC(P_1)-\log\left(K_1\frac{M_1-1}{2}\right)}{\sqrt{nV(P_1)}}\right) + \frac{\tilde{B}_1}{\sqrt{n}} \notag\\
&+ Q\left(\frac{nC(P_2)-\log\left(K_2\frac{M_2-1}{2}\right)}{\sqrt{nV(P_2)}}\right) + \frac{\tilde{B}_2}{\sqrt{n}} \notag\\
&+ Q\left(\frac{nC(P_1+P_2)-\log\left(K_3\frac{(M_1-1)(M_2-1)}{2}\right)}{\sqrt{n[V(P_1+P_2)+V_3(P_1,P_2)]}}\right) + \frac{\tilde{B}_3}{\sqrt{n}},
\end{align}
where~$\tilde{B}_1:=B_{11}+B_{21}$, $\tilde{B}_2:=B_{12}+B_{22}$, and $\tilde{B}_3:=B_{13}+B_{23}$. Now, splitting $\epsilon$ among the three outage terms, one per line, gives
\begin{align}
&\log M_1\leq nC(P_1)-\sqrt{nV(P_1)}Q^{-1}\left(\lambda_1\epsilon-\frac{\tilde{B}_1}{\sqrt{n}}\right)-\log K_1\notag\\
&\log M_2\leq nC(P_2)-\sqrt{nV(P_2)}Q^{-1}\left(\lambda_2\epsilon-\frac{\tilde{B}_2}{\sqrt{n}}\right)-\log K_2\notag\\
&\log M_1+\log M_2\leq nC(P_1+P_2)-\sqrt{n[V(P_1+P_2)+V_3(P_1,P_2)]}Q^{-1}\left(\!\lambda_3\epsilon-\!\frac{\tilde{B}_3}{\sqrt{n}}\!\right)-\log K_3
\label{bounds on CLT}
\end{align}
where the positive constants $\lambda_1,\lambda_2,\lambda_3$ such that $\lambda_1+\lambda_2+\lambda_3=1$ can be arbitrarily chosen to represent the weight of each of the three types of outage. We can further simplify the bounds in \eqref{bounds on CLT} using the Taylor expansion $Q^{-1}(\lambda\epsilon-\frac{\tilde{B}}{\sqrt{n}})= Q^{-1}(\lambda\epsilon)+O(1/\sqrt{n})$ to obtain
\begin{align}
&\log M_1\leq nC(P_1)-\sqrt{nV(P_1)}Q^{-1}\left(\lambda_1\epsilon\right)+O(1),\notag\\
&\log M_2\leq nC(P_2)-\sqrt{nV(P_2)}Q^{-1}\left(\lambda_2\epsilon\right)+O(1),\notag\\
&\log M_1+\log M_2\leq nC(P_1+P_2)-\sqrt{n[V(P_1+P_2)+V_3(P_1,P_2)]}Q^{-1}\left(\lambda_3\epsilon\right)+O(1).
\end{align}
This concludes the proof of achievability for the outage-splitting rate region of Theorem~\ref{GMAC-splitting}.
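To make the outage-splitting region concrete, the following minimal Python sketch (not part of the proofs) evaluates the right-hand sides of the three bounds above for a given $(n,\epsilon,P_1,P_2)$; the functions \texttt{C} and \texttt{V} implement the standard P2P Gaussian capacity and dispersion formulas in nats, and the cross term $V_3(P_1,P_2)$ is left as a generic placeholder, since neither expression is displayed in this section.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def C(P):   # Gaussian capacity, in nats
    return 0.5 * np.log1p(P)

def V(P):   # assumed standard P2P Gaussian dispersion, in nats^2
    return P * (P + 2) / (2.0 * (1 + P) ** 2)

def splitting_region(n, eps, P1, P2, lams=(1/3, 1/3, 1/3), V3=0.0):
    """Normal-approximation bounds on (log M1, log M2, log M1*M2)."""
    l1, l2, l3 = lams
    R1 = n * C(P1) - np.sqrt(n * V(P1)) * norm.isf(l1 * eps)
    R2 = n * C(P2) - np.sqrt(n * V(P2)) * norm.isf(l2 * eps)
    R12 = n * C(P1 + P2) - np.sqrt(n * (V(P1 + P2) + V3)) * norm.isf(l3 * eps)
    return R1, R2, R12

print(splitting_region(n=500, eps=1e-3, P1=1.0, P2=1.0))
\end{verbatim}
Here \texttt{norm.isf} is the scalar $Q^{-1}$, and the $O(1)$ terms are simply dropped, as is customary when plotting normal approximations.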
\newpage
\section{Conclusion}
\label{sec-con}
We have proved several inner bounds for the Gaussian MAC in the finite blocklength regime, and used them to establish second-order achievable rate regions. As a consequence of our study, we observe that codebooks that are randomly generated according to independent uniform distributions on the users' power shells result in rather tight second-order rate regions for the Gaussian MAC, and they outperform coding schemes induced by the (first-order-optimal) Gaussian input distribution and those via TDMA.
To obtain these main results, we have developed simple and transparent methods for proving non-asymptotic achievability results for Gaussian settings. Our achievability methods rely on conventional random coding and typicality decoding, but employ modified mutual information random variables, a new change of measure technique, and the application of a CLT for functions. We believe that our methods provide valuable insights for handling other channel models involving input cost constraints, and they may also be generalized to other multi-user settings.
\appendices
\section{Proof of Proposition~\ref{CLT-func}: CLT for Functions}
\label{proof-Prop-P2P}
Since the vector-valued function~$\mathbf{f}(\mathbf{u})$ has continuous second-order partial derivatives in a neighborhood of $\mathbf{0}$, we have from Taylor's Theorem that
\begin{align}
\mathbf{f}\left(\mathbf{u}\right)=\mathbf{f}(\mathbf{0}) + \mathbf{J} \mathbf{u}^T + \mathbf{R}(\mathbf{u}),
\end{align}
where $\mathbf{R}(\mathbf{u})$ is the vanishing remainder term in the Taylor expansion. In particular, for those $\mathbf{u}$ belonging to the $K$-hypercube neighborhood~$N(r_0)$ of $\mathbf{0}$ with side length~$r_0>\frac{1}{\sqrt[4]{n}}$, the Lagrange (mean-value) form of Taylor's Theorem provides the following uniform bound on the remainder term
\begin{align}
|\mathbf{R}(\mathbf{u})| \leq \frac{1}{2}
\begin{bmatrix}
\max_{1\leq k,p\leq K} \max_{\mathbf{u}_0\in N(r_0)} \left|\frac{\partial^2 f_1(\mathbf{u}_0)}{\partial u_k \partial u_p}\right| \\
\vdots \\
\max_{1\leq k,p\leq K} \max_{\mathbf{u}_0\in N(r_0)} \left|\frac{\partial^2 f_L(\mathbf{u}_0)}{\partial u_k \partial u_p}\right|
\end{bmatrix}
(|u_1|+\ldots+|u_K|)^2, \label{lag-bound}
\end{align}
where $|\mathbf{u}|:=(|u_1|,...,|u_K|)$ denotes the element-wise absolute value, and the vector inequality in~\eqref{lag-bound} is also element wise.
Now, we apply the normalized sum $\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t$ as the argument of function~$\mathbf{f}$ to obtain
\begin{align}
\mathbf{f}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right)=\mathbf{f}(\mathbf{0}) + \mathbf{J} \frac{1}{n}\sum_{t=1}^n \mathbf{U}_t^T + \mathbf{R}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right) \label{taylor}
\end{align}
almost surely. Since the random vector~$\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t$ concentrates around $\mathbf{0}$, we conclude that the corresponding remainder term also concentrates around~$0$:
\begin{align}
&\Pr\left[\left|\mathbf{R}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right)\right|>\frac{1}{\sqrt{n}}\mathbf{1}\right] \notag \\
&\; \leq \Pr\left[\left|\mathbf{R}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right)\right|>\frac{1}{\sqrt{n}}\mathbf{1} \bigcap \frac{1}{n}\sum_{t=1}^n \mathbf{U}_t \in N(r_0)\right]
+ \Pr\left[\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t \notin N(r_0)\right] \label{neigh} \\
&\; \leq \Pr\left[\frac{1}{2}
\begin{bmatrix}
\max_{1\leq k,p\leq K} \max_{\mathbf{u}_0\in N(r_0)} \left|\frac{\partial^2 f_1(\mathbf{u}_0)}{\partial u_k \partial u_p}\right| \\
\vdots \\
\max_{1\leq k,p\leq K} \max_{\mathbf{u}_0\in N(r_0)} \left|\frac{\partial^2 f_L(\mathbf{u}_0)}{\partial u_k \partial u_p}\right|
\end{bmatrix}
\left(\frac{1}{n}\left(\left|\sum_{t=1}^n U_{1t}\right|+\ldots+\left|\sum_{t=1}^n U_{Kt}\right|\right)\right)^2>\frac{1}{\sqrt{n}}\mathbf{1}\right]
+\sum_{k=1}^K \Pr\left[\left|\frac{1}{n}\sum_{t=1}^n U_{kt}\right| >r_0\right] \label{Lagrange} \\
&\; = \Pr\left[ \left(\frac{1}{n}\left(\left|\sum_{t=1}^n U_{1t}\right|+\ldots+\left|\sum_{t=1}^n U_{Kt}\right|\right)\right)^2>\frac{2}{\sqrt{n}} \left(\max_{1\leq l\leq L} \max_{1\leq k,p\leq K} \max_{\mathbf{u}_0\in N(r_0)} \left|\frac{\partial^2 f_l(\mathbf{u}_0)}{\partial u_k \partial u_p}\right|\right)^{-1}\right] +\sum_{k=1}^K \Pr\left[\left|\frac{1}{n}\sum_{t=1}^n U_{kt}\right| >r_0\right] \\
&\; \leq \frac{\mathbb{E}\left[\left(\frac{1}{n}\left(\left|\sum_{t=1}^n U_{1t}\right|+\ldots+\left|\sum_{t=1}^n U_{Kt}\right|\right)\right)^2\right]}{\frac{2}{\sqrt{n}} \left(\max_{1\leq l\leq L} \max_{1\leq k,p\leq K} \max_{\mathbf{u}_0\in N(r_0)} \left|\frac{\partial^2 f_l(\mathbf{u}_0)}{\partial u_k \partial u_p}\right|\right)^{-1}} +\sum_{k=1}^K \frac{\text{Var}\left[\frac{1}{n}\sum_{t=1}^n U_{kt}\right]}{r_0^2} \label{cheb} \\
&\; \leq \frac{\frac{K}{n}\left(\text{Var}[U_{11}]+...+\text{Var}[U_{K1}]\right)}{\frac{2}{\sqrt{n}} \left(\max_{1\leq l\leq L} \max_{1\leq k,p\leq K} \max_{\mathbf{u}_0\in N(r_0)} \left|\frac{\partial^2 f_l(\mathbf{u}_0)}{\partial u_k \partial u_p}\right|\right)^{-1}} + \frac{\text{Var}[U_{11}]+...+\text{Var}[U_{K1}]}{nr_0^2} \label{sum-var} \\
&\; = \frac{c_1}{\sqrt{n}}, \label{c1}
\end{align}
where~\eqref{neigh} follows from the simple bound~$\Pr[A]\leq \Pr[A\cap B]+\Pr[B^c]$,~\eqref{Lagrange} from the Lagrange bound~\eqref{lag-bound} and the union bound,~\eqref{cheb} from Markov's inequality applied to the squared sum (and from the Chebyshev inequality for the second term),~\eqref{sum-var} from the Cauchy--Schwarz bound~$\mathbb{E}[(|X_1|+\ldots+|X_K|)^2]\leq K(\mathbb{E}[X_1^2]+\ldots+\mathbb{E}[X_K^2])$ for generic dependent random variables together with the zero-mean, i.i.d. nature of the~$\mathbf{U}_t$, and~\eqref{c1} from the constraint~$r_0>\frac{1}{\sqrt[4]{n}}$ on the side length of the neighborhood and the definition of the constant $c_1$ as
\begin{align}
c_1:=\left(\text{Var}[U_{11}]+...+\text{Var}[U_{K1}]\right) \left[1+\frac{K}{2}\max_{1\leq l\leq L} \max_{1\leq k,p\leq K} \max_{\mathbf{u}_0\in N(r_0)} \left|\frac{\partial^2 f_l(\mathbf{u}_0)}{\partial u_k \partial u_p}\right|\right]. \label{c1-def}
\end{align}
In the derivation above, we have assumed that~$\max_{1\leq l\leq L} \max_{1\leq k,p\leq K} \max_{\mathbf{u}_0\in N(r_0)} \left|\frac{\partial^2 f_l(\mathbf{u}_0)}{\partial u_k \partial u_p}\right|>0$; in case this maximum equals zero, all the components of~$\mathbf{f}$ are affine on~$N(r_0)$ and the remainder term vanishes there, so the final result~\eqref{c1} trivially holds with $c_1=\text{Var}[U_{11}]+...+\text{Var}[U_{K1}]$, which is consistent with~\eqref{c1-def}.
Now, we obtain
\begin{align}
&\Pr\left[\mathbf{f}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right) \in \mathcal{D}\right] \notag \\
&\qquad \leq \Pr\left[\mathbf{f}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right) \in \mathcal{D} \; \bigcap \; \left|\mathbf{R}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right)\right|\leq\frac{1}{\sqrt{n}}\mathbf{1} \right] + \Pr\left[\left|\mathbf{R}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right)\right|>\frac{1}{\sqrt{n}}\mathbf{1} \right] \label{step-0} \\
&\qquad \leq \Pr\left[ \mathbf{f}(\mathbf{0})+\frac{1}{n}\sum_{t=1}^n \mathbf{J}\mathbf{U}_t^T \in \mathcal{D}\oplus\frac{1}{\sqrt{n}}\mathbf{1} \right] + \frac{c_1}{\sqrt{n}} \label{step-1} \\
&\qquad \leq \Pr\left[ \mathcal{N}\left( \mathbf{f}(\mathbf{0})+\mathbb{E}[\mathbf{J}\mathbf{U}_1^T] , \frac{1}{n}\text{Cov}[\mathbf{J}\mathbf{U}_1^T]\right) \in \mathcal{D}\oplus\frac{1}{\sqrt{n}}\mathbf{1} \right] + \frac{c_2}{\sqrt{n}} + \frac{c_1}{\sqrt{n}} \label{step-2} \\
&\qquad \leq \Pr\left[ \mathcal{N}\left(\mathbf{f}(\mathbf{0}),\frac{1}{n}\mathbf{J}\text{Cov}[\mathbf{U}_1]\mathbf{J}^T\right) \in \mathcal{D} \right] + \frac{c_3}{\sqrt{n}} + \frac{c_2}{\sqrt{n}} + \frac{c_1}{\sqrt{n}} \label{step-3}
\end{align}
where inequality~\eqref{step-0} follows from the simple bound~$\Pr[A]\leq \Pr[A\cap B]+\Pr[B^c]$,~\eqref{step-1} from~\eqref{taylor} and~\eqref{c1} as well as the definition of a ``linear outward set-expansion''~$\mathcal{D}\oplus\frac{1}{\sqrt{n}}\mathbf{1}$, which is closely related to the formal definition of set expansion in~\cite{set-expansion} and basically means an enlargement of the set~$\mathcal{D}$ with an ``addition in all directions'' of~$\frac{1}{\sqrt{n}}$, and~\eqref{step-2} from the multi-dimensional CLT~\cite{3D-CLT,Tan} with the constant~$c_2$ defined via
\begin{align}
\tilde{c}_2:=\frac{400\,L^{1/4} \mathbb{E}[||\mathbf{J}\mathbf{U}_1^T||_2^3]}{\lambda_{\text{min}}\left(\text{Cov}[\mathbf{J}\mathbf{U}_1^T]\right)^{3/2}}\leq \frac{400\,L^{1/4} \lambda_{\text{max}}\left(\mathbf{J}\mathbf{J}^T\right)^{3/2}\mathbb{E}[||\mathbf{U}_1^T||_2^3]}{\lambda_{\text{min}}\left(\text{Cov}[\mathbf{J}\mathbf{U}_1^T]\right)^{3/2}}:=c_2,
\end{align}
where $\lambda_{\text{min}}(\Sigma)$ and $\lambda_{\text{max}}(\Sigma)$ denote the smallest and largest eigenvalues of the matrix~$\Sigma$, respectively, and finally \eqref{step-3} from the Taylor expansion for the probability at hand with a proper positive finite constant~$c_3$ depending upon the set~$\mathcal{D}$.
Analogously, we have
\begin{align}
\Pr\left[\mathbf{f}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right) \in \mathcal{D}\right]
& \geq \Pr\left[\mathbf{f}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right) \in \mathcal{D} \; \bigcap \; \left|\mathbf{R}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right)\right|\leq\frac{1}{\sqrt{n}}\mathbf{1} \right] \\
&\geq \Pr\left[\mathbf{f}(\mathbf{0})+\mathbf{J} \frac{1}{n}\sum_{t=1}^n \mathbf{U}_t^T \in \mathcal{D}\ominus\frac{1}{\sqrt{n}}\mathbf{1} \; \bigcap \; \left|\mathbf{R}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right)\right|\leq\frac{1}{\sqrt{n}}\mathbf{1} \right] \label{set-cont} \\
& \geq \Pr\left[\mathbf{f}(\mathbf{0})+ \frac{1}{n}\sum_{t=1}^n \mathbf{J}\mathbf{U}_t^T \in \mathcal{D}\ominus\frac{1}{\sqrt{n}}\mathbf{1} \right] - \Pr\left[\left|\mathbf{R}\left(\frac{1}{n}\sum_{t=1}^n \mathbf{U}_t\right)\right|>\frac{1}{\sqrt{n}}\mathbf{1} \right] \label{step-11} \\
& \geq \Pr\left[ \mathbf{f}(\mathbf{0})+\frac{1}{n}\sum_{t=1}^n \mathbf{J}\mathbf{U}_t^T \in \mathcal{D}\ominus\frac{1}{\sqrt{n}}\mathbf{1} \right] - \frac{c_1}{\sqrt{n}} \\
& \geq \Pr\left[ \mathcal{N}\left( \mathbf{f}(\mathbf{0})+\mathbb{E}[\mathbf{J}\mathbf{U}_1^T] , \frac{1}{n}\text{Cov}[\mathbf{J}\mathbf{U}_1^T]\right) \in \mathcal{D}\ominus\frac{1}{\sqrt{n}}\mathbf{1} \right] - \frac{c_2}{\sqrt{n}} - \frac{c_1}{\sqrt{n}} \\
& \geq \Pr\left[ \mathcal{N}\left(\mathbf{f}(\mathbf{0}),\frac{1}{n}\mathbf{J}\text{Cov}[\mathbf{U}_1]\mathbf{J}^T\right) \in \mathcal{D} \right] - \frac{c_3}{\sqrt{n}} - \frac{c_2}{\sqrt{n}} - \frac{c_1}{\sqrt{n}} \label{step-13}
\end{align}
where inequality~\eqref{set-cont} follows from the definition of a ``linear inward set-contraction''~$\mathcal{D}\ominus\frac{1}{\sqrt{n}}\mathbf{1}$, which is closely related to the formal definition of set contraction in~\cite{set-expansion} and basically means a shrinkage of the set~$\mathcal{D}$ with a ``deduction in all directions'' of~$\frac{1}{\sqrt{n}}$,~\eqref{step-11} follows from the bound~$\Pr[A\cap B]\geq \Pr[A]-\Pr[B^c]$, and all the other steps are as in the previous case.
Combining inequalities~\eqref{step-3} and~\eqref{step-13} establishes Proposition~\ref{CLT-func} with the constant $B:=c_1+c_2+c_3$.
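As a quick illustration (a sketch only, with an arbitrarily chosen smooth $\mathbf{f}$ and input law, neither of which appears in the proposition), one can check numerically that $\mathbf{f}\big(\frac{1}{n}\sum_t \mathbf{U}_t\big)$ is well approximated by $\mathcal{N}\big(\mathbf{f}(\mathbf{0}),\frac{1}{n}\mathbf{J}\text{Cov}[\mathbf{U}_1]\mathbf{J}^T\big)$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, trials = 400, 20000
f = lambda u: np.log1p(u[..., 0]) + np.sin(u[..., 1])  # smooth, f(0) = 0
J = np.array([1.0, 1.0])                               # gradient of f at 0

U = rng.uniform(-0.5, 0.5, size=(trials, n, 2))        # i.i.d., zero mean
vals = f(U.mean(axis=1))                               # f of the sample mean

z = 0.02
emp = np.mean(vals > z)                                # empirical tail probability
var = J @ np.cov(U.reshape(-1, 2).T) @ J / n           # (1/n) J Cov[U] J^T
gauss = norm.sf(z, loc=0.0, scale=np.sqrt(var))        # Gaussian tail
print(f"empirical {emp:.4f} vs Gaussian {gauss:.4f}")  # close, up to O(1/sqrt(n))
\end{verbatim}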
\section{Proof of Proposition~\ref{Prop-P2P-unif-bound} for P2P Gaussian Channels}
\label{proof-Prop-P2P-unif}
Define $D_{P,Q}(y^n):=\frac{P_{Y^n}(y^n)}{Q_{Y^n}(y^n)}$. Recalling the output distribution~\eqref{shell-output} induced by the uniform input distribution on the power shell~\eqref{unif-shell}, we can simplify $D_{P,Q}(y^n)$ as
\begin{align}
D_{P,Q}(y^n)&=\frac{1}{2}\left(2e^{-P}(1+P)\right)^{n/2}\Gamma\left(\frac{n}{2}\right)e^{-P||y^n||^2/2(1+P)} \frac{I_{n/2-1}(||y^n||\sqrt{nP})}{(||y^n||\sqrt{nP})^{n/2-1}}.
\end{align}
To bound this divergence, we first notice that
\begin{align}
\ln \Gamma\left(\frac{n}{2}\right)\leq\frac{n-1}{2}\ln\left(\frac{n}{2}\right)-\frac{n}{2}+c_\Gamma, \label{Gamma}
\end{align}
where $c_\Gamma\leq 2$; in fact, for asymptotically large $n$, the above inequality tends to equality with $c_\Gamma=\ln(\sqrt{2\pi})$ due to Stirling's approximation.
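The bound~\eqref{Gamma} with $c_\Gamma=2$ is also easy to verify numerically; the following sketch checks it over a wide range of $n$:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

n = np.arange(2, 10001)
gap = (n - 1) / 2 * np.log(n / 2) - n / 2 + 2 - gammaln(n / 2)
print(gap.min() >= 0)   # True; the gap tends to 2 - ln(sqrt(2*pi)) > 0
\end{verbatim}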
Moreover, $z^{-k}I_{k}(z)$ is nonincreasing in the order~$k$, and so it is sufficient to bound the above divergence only for even values of~$n$, such that $k=n/2-1$ is an integer. For such an integer, we have~\cite{PPV}
\begin{align}
z^{-k}I_{k}(z)\leq \sqrt{\frac{\pi}{8}}\left(k^2+z^2\right)^{-1/4}\left(k+\sqrt{k^2+z^2}\right)^{-k}e^{\sqrt{k^2+z^2}}. \label{bessel-bound}
\end{align}
Using the above inequality along with the shorthand $a=\frac{n/2-1}{n/2}$, we obtain
\begin{align}
\ln D_{P,Q}(y^n)&\leq c+\frac{n}{2}f_{a,P}\left(\frac{||y^n||^2}{n}\right),
\end{align}
where $c=\ln(1/2)+c_\Gamma+\ln(\sqrt{\pi/8})=O(1)$, and for $t\in\mathbb{R}^+$
\begin{align}
f_{a,P}(t)&:=\ln\left(2e^{-(1+P)}(1+P)\right)-\frac{Pt}{1+P}+\sqrt{a^2+4Pt} -a\ln\left(a+\sqrt{a^2+4Pt}\right)-\frac{1-a}{2}\ln\left(\sqrt{a^2+4Pt}\right).
\end{align}
To prove the proposition for any finite~$n$, one needs to show that the above function~$f_{a,P}(t)$ is non-positive for all~$t\in\mathbb{R}^+$, for any fixed~$P$. For simplicity, however, we only focus on sufficiently large values of~$n$, such that $a\to1$. In such a case, the above function simplifies to
\begin{align}
f_P(t)&:=\ln\left(2e^{-(1+P)}(1+P)\right)-\frac{Pt}{1+P} +\sqrt{1+4Pt}-\ln\left(1+\sqrt{1+4Pt}\right).
\end{align}
It is easy to show that the function~$f_P(t)$ has only one local (and also global) maximum which occurs at $t=1+P$ leading to~$f_P(1+P)=0$. Therefore,~$f_P(t)\leq0$ for all~$t\in\mathbb{R}^+$, concluding that for all $y^n\in\mathbb{R}^n$
\begin{align}
D_{P,Q}(y^n)
&\leq \text{exp}\left(c+\frac{n}{2}f_P\left(\frac{||y^n||^2}{n}\right)\right) \leq K,
\end{align}
where $K:=e^c\leq 1$; indeed, for large $n$, $c\to\ln(1/2)+\ln(\sqrt{2\pi})+\ln(\sqrt{\pi/8})=\ln(\pi/4)<0$, so that $K\to\pi/4$. Notice that, interestingly, the constant~$K$ on the RHS of this inequality is independent of the power constraint~$P$.
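For completeness, the value of $f_P$ at the maximizer can be checked directly: since $1+4P(1+P)=(1+2P)^2$,
\begin{align*}
f_P(1+P)&=\ln\left(2e^{-(1+P)}(1+P)\right)-P+(1+2P)-\ln\left(2+2P\right)\\
&=\ln 2+\ln(1+P)-(1+P)-P+1+2P-\ln 2-\ln(1+P)=0.
\end{align*}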
\section{Proof of Proposition~\ref{divergence-bound-MAC} for the Gaussian MAC}
\label{proof-Prop-MAC}
The proof is similar to that of Proposition~\ref{Prop-P2P-unif-bound}. The first two inequalities~\eqref{unif-mac-1} and~\eqref{unif-mac-2} indeed directly follow from Proposition~\ref{Prop-P2P-unif-bound}, since the conditional outputs~$P_{Y^n|X_2^n}(y^n|x_2^n),P_{Y^n|X_1^n}(y^n|x_1^n)$ induced by the power shell distribution and the per-user capacity-achieving distributions~$Q^{(1)}_{Y^n|X_2^n}(y^n|x_2^n),Q^{(2)}_{Y^n|X_1^n}(y^n|x_1^n)$ have the same expressions as the output distribution~\eqref{shell-output} induced by the P2P shell distribution and the capacity-achieving output distribution of a P2P channel, respectively, with the only modification that $y^n\in\mathbb{R}^n$ is replaced by $(y^n-x_2^n)\in\mathbb{R}^n$ and $(y^n-x_1^n)\in\mathbb{R}^n$, and $P$ by $P_1$ and $P_2$, respectively.
Therefore, it only remains to prove the third inequality~\eqref{unif-mac-3} on the unconditional R-N derivative $\frac{P_{Y^n}(y^n)}{Q^{(3)}_{Y^n}(y^n)}$. Since the output distribution~$P_{Y^n}$ is not explicitly available, we take an indirect approach. We show that the corresponding \emph{input} distributions satisfy the desired property and thus conclude that their resulting output distributions do as well. In particular, let $Q_{U^n}(u^n)\sim\mathcal{N}(u^n; \mathbf{0},(P_1+P_2)I_n)$ be the distribution of the superimposed input~$U^n=X_1^n+X_2^n$ when the two inputs~$X_1^n$ and~$X_2^n$ are independent with i.i.d. Gaussian entries. Note that feeding this distribution to the channel $Y^n=U^n+Z^n$ recovers the capacity-achieving output distribution~$Q^{(3)}_{Y^n}(y^n)\sim\mathcal{N}(y^n; \mathbf{0},(1+P_1+P_2)I_n)$:
\begin{align}
Q^{(3)}_{Y^n}(y^n) =\int_{\mathbb{R}^n} Q_{U^n}(u^n) P_{Y^n|U^n}(y^n|u^n) du^n.
\end{align}
Therefore, if we can show that
\begin{align}
D_{P,Q}(u^n):=\frac{P_{U^n}(u^n)}{Q_{U^n}(u^n)} \leq K_3, \qquad \forall u^n\in\mathbb{R}^n \label{unif-mac-input}
\end{align}
then it immediately follows for any $y^n\in\mathbb{R}^n$ that
\begin{align}
P_{Y^n}(y^n) &= \int_{\mathbb{R}^n} P_{U^n}(u^n) P_{Y^n|U^n}(y^n|u^n) du^n \\
&\leq \int_{\mathbb{R}^n} K_3\, Q_{U^n}(u^n) P_{Y^n|U^n}(y^n|u^n) du^n= K_3\, Q^{(3)}_{Y^n}(y^n) .
\end{align}
Hence, we are only left with the proof of~\eqref{unif-mac-input}. Note that the claim is trivial for those $u^n\in\mathbb{R}^n$ not belonging to the hollow sphere $|\sqrt{nP_1}-\sqrt{nP_2}|< ||u^n|| < \sqrt{nP_1}+\sqrt{nP_2}$, since they satisfy $P_{U^n}(u^n)=0$. Thus, focusing on those~$u^n$ belonging to this hollow sphere, we have
\begin{align}
D_{P,Q}(u^n)
&= \sqrt{\frac{P_2}{\pi P_1}} \frac{\Gamma\left(\frac{n}{2}\right)}{||u^n||\Gamma\left(\frac{n-1}{2}\right)} \frac{\Gamma\left(\frac{n}{2}\right)(2\pi)^{n/2}(P_1+P_2)^{n/2} e^{||u^n||^2/2(P_1+P_2)}}{2\pi^{n/2}(nP_2)^{(n-1)/2}} \left(1-\left(\frac{||u^n||^2+n(P_1-P_2)}{2\sqrt{nP_1}||u^n||}\right)^2\right)^{(n-3)/2}
\end{align}
Using~\eqref{Gamma} and the crude bound $\Gamma\left(\frac{n}{2}\right)/\Gamma\left(\frac{n-1}{2}\right)\leq \sqrt{n}$, we obtain
\begin{align}
\ln D_{P,Q}(u^n) &\leq \ln\left(\frac{P_2}{\sqrt{\pi P_1}}\right) + \ln \left(\frac{n}{2}\right) - \ln(||u^n||) + \frac{n-1}{2}\ln\left(\frac{n}{2}\right)-\frac{n}{2}+c_\Gamma + \frac{n}{2}\ln\left(\frac{2(P_1+P_2)}{P_2}\right) - \frac{n}{2}\ln(n) \notag \\
& \qquad \qquad + \frac{||u^n||^2}{2(P_1+P_2)} + \frac{n-3}{2}\ln \left(1-\left(\frac{||u^n||^2+n(P_1-P_2)}{2\sqrt{nP_1}||u^n||}\right)^2\right) \\
&= c + \frac{n}{2} f_{n,P_1,P_2}\left(\frac{||u^n||^2}{n}\right)
\end{align}
where $c:=c_\Gamma + \ln\left(\frac{P_2}{\sqrt{2\pi P_1}}\right)$ and
\begin{align}
f_{n,P_1,P_2}(t)=& - \frac{\ln(t)}{n} + \ln\left(\frac{P_1+P_2}{e\,P_2}\right) + \frac{t}{P_1+P_2} + \frac{n-3}{n}\ln \left(1-\frac{(t+P_1-P_2)^2}{4P_1t}\right),
\end{align}
with $(\sqrt{P_1}-\sqrt{P_2})^2 < t < (\sqrt{P_1}+\sqrt{P_2})^2$.
To prove the proposition for any finite~$n$, one needs to show that the above function~$f_{n,P_1,P_2}(t)$ is non-positive for all~$t$ in the aforementioned range, for any fixed~$P_1,P_2$. For simplicity, however, we only focus on sufficiently large values of~$n$. In such a case, the above function simplifies to
\begin{align}
f_{P_1,P_2}(t)=& \ln\left(\frac{P_1+P_2}{e\,P_2}\right) + \frac{t}{P_1+P_2} +\ln \left(1-\frac{(t+P_1-P_2)^2}{4P_1t}\right).
\end{align}
It is then easy to show that, in this range of values for $t$, the function~$f_{P_1,P_2}(t)$ has only one local (and also global) maximum which occurs at $t=P_1+P_2$ leading to~$f_{P_1,P_2}(P_1+P_2)=0$. Therefore,~$f_{P_1,P_2}(t)\leq0$ for all~$t\in((\sqrt{P_1}-\sqrt{P_2})^2 , (\sqrt{P_1}+\sqrt{P_2})^2)$, concluding that, for any $u^n$ satisfying $|\sqrt{nP_1}-\sqrt{nP_2}|< ||u^n|| < \sqrt{nP_1}+\sqrt{nP_2}$,
\begin{align}
D_{P,Q}(u^n) &\leq \text{exp}\left(c+\frac{n}{2}f_{P_1,P_2}\left(\frac{||u^n||^2}{n}\right)\right) \leq K_3,
\end{align}
where $K_3:=e^c = \frac{e^{c_\Gamma}\,P_2}{\sqrt{2\pi P_1}}=O(1)$. This concludes the proof of Proposition~\ref{divergence-bound-MAC}. Note that, in the case of the Gaussian MAC, the constant~$K_3$ \textit{depends} upon the power constraints~$P_1$ and~$P_2$, at least as indicated by our bounding techniques.
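As in the P2P case, the value of $f_{P_1,P_2}$ at the maximizer can be verified directly: at $t=P_1+P_2$ the last logarithm becomes
$$\ln\left(1-\frac{(2P_1)^2}{4P_1(P_1+P_2)}\right)=\ln\left(\frac{P_2}{P_1+P_2}\right),$$
so that
$$f_{P_1,P_2}(P_1+P_2)=\ln\left(\frac{P_1+P_2}{e\,P_2}\right)+1+\ln\left(\frac{P_2}{P_1+P_2}\right)=0.$$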
\section*{Acknowledgment}
This work has been supported in part by the NSF grants CCF05-46618 and CPS12-39222.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
Several non-standard models in the literature can be obtained by considering
random time-changes of some standard processes; in fact, in this way, it is
possible to construct more realistic models in different fields. Subordinators,
i.e. nondecreasing L\'evy processes, are among the most common examples of random
time-changes in the literature; see e.g. \cite{Bertoin} and \cite{Sato} as
references on L\'evy processes. However, several recent references study random
time-changes given by inverses of subordinators.
An attractive example of this kind of process is the positive stable subordinator;
in such a case the distribution of its random variable at time $t=1$ (it is
well-known that, in general, this distribution governs the random evolution of a
L\'evy process) is the positive stable distribution. On the other hand the
positive stable subordinator may not provide a realistic model because it does
not have finite moments. This explains the increasing popularity of its tempered
versions; they are similar to the positive stable subordinators but they have
lighter tails, and they possess all finite moments. There are several references
in the literature on tempered stable processes, tempered stable subordinators and,
in some cases, on inverse of stable subordinators; here we recall
\cite{Rosinski}, \cite{GajdaWylomanska}, \cite{Wylomanska2012},
\cite{Wylomanska2013}, \cite{KuchlerTappe}, \cite{KumarVellaisamy} and, among the
more recent papers, we recall \cite{GajdaKumarWylomanska},
\cite{KumarUpadhyeWylomanskaGajda}, \cite{KumarGajdaWylomanskaPoloczanski} and
\cite{GuptaKumarLeonenko}.
The interest of the processes studied in this paper is motivated by their connections
with important research fields as, for instance, the theory of differential fractional
equations (see e.g. \cite{KilbasSrivastavaTrujillo}; see also \cite{Beghin} for the
tempered case) and the theory of processes with long-range dependence (see e.g.
\cite{Samorodnitsky}).
In this paper we consider a 4-parameter family of infinitely divisible
distributions introduced in \cite{BCCP} (Section 3), which is inspired by some
ideas in \cite{KlebanovSlamova}. This family, which generalizes the Tweedie
distribution (case $\delta=0$) and the positive Linnik distribution (case
$\theta=0$), is constructed by considering the randomization of the parameter
$\lambda$ with a Gamma distributed random variable. Actually in our results we
often have to restrict the analysis to the case $\delta=0$.
The asymptotic results presented in this paper concern the theory of large
deviations (see e.g. \cite{DemboZeitouni} as a reference on this topic;
here we also recall \cite{GulinskyVeretennikov} as a reference having links with
the averaging theory). This theory gives asymptotic computations of small
probabilities on an exponential scale.
We start with some preliminaries in Section \ref{sec:preliminaries}. Section
\ref{sec:MD} is devoted to the main contribution in this paper, i.e. a class
of large deviation principles that can be seen as a result of
\emph{non-central moderate deviations} with respect to $\theta$ (see Proposition
\ref{prop:MD-theta-to-infinity} as $\theta\to\infty$; see also Remark
\ref{rem:case-theta-to-zero} for the case $\theta\to 0$). We use this terminology
to say that these large deviation principles fill the gap between two asymptotic
regimes:
\begin{itemize}
\item a weak convergence to a non-Gaussian distribution (we use the term
\lq\lq non-central\rq\rq because of the non-Gaussianity); actually we have a
sequence of identically distributed random variables, which are trivially weakly
convergent;
\item the convergence of some other related random variables that converge to
a constant (and this follows from a large deviation principle).
\end{itemize}
This is illustrated in detail in Remark \ref{rem:MD-typical-features}.
In Section \ref{sec:LD-inverse-processes} we present some other minor
large deviation results for the inverse of the subordinators treated in
this paper. These results can be derived by applying the results in
\cite{DuffieldWhitt}. Some other minor results are presented in Section
\ref{sec:LD-time-changes}, where we consider suitable processes
$\{X(t):t\geq 0\}$, independent of inverse of subordinators
$\{T(t):t\geq 0\}$, say, and we discuss the possibility to obtain results
for the composition processes $\{X(T(t)):t\geq 0\}$ (a reference with
this kind of composition processes is \cite{KumarLeonenkoPichler}, where
those processes can be seen as insurance models called fractional risk
processes).
\section{Preliminaries and some remarks}\label{sec:preliminaries}
In this section we present some preliminaries on large deviations
and on the family of tempered distributions introduced in \cite{BCCP}.
\subsection{Preliminaries on large deviations}\label{sec:preliminaries-LD}
Here we recall some preliminaries on the theory of large deviations. Let
$\mathcal{Y}$ be a topological space, and let $\{Y_r\}_r$ be a family of
$\mathcal{Y}$-valued random variables defined on the same probability space
$(\Omega,\mathcal{F},P)$; then $\{Y_r\}_r$ satisfies the large deviation
principle (LDP from now on), as $r\to r_0$ (possibly $r_0=\infty$), with
speed $v_r$ and rate function $I$ if: $v_r\to\infty$ as $r\to r_0$,
$I:\mathcal{Y}\to [0,\infty]$ is a lower semicontinuous function, and the
inequalities
$$\liminf_{r\to r_0}\frac{1}{v_r}\log P(Y_r\in O)\geq-\inf_{y\in O}I(y)\ \mbox{for all open sets}\ O$$
and
$$\limsup_{r\to r_0}\frac{1}{v_r}\log P(Y_r\in C)\leq-\inf_{y\in C}I(y)\ \mbox{for all closed sets}\ C$$
hold. A rate function is said to be good if
$\{\{y\in\mathcal{Y}:I(y)\leq\eta\}:\eta\geq 0\}$ is a family of compact sets.
We essentially deal with cases where $\mathcal{Y}=\mathbb{R}^h$ for some integer
$h\geq 1$, and we often use the G\"artner Ellis Theorem (see e.g. Theorem 2.3.6
in \cite{DemboZeitouni}). Throughout this paper we use the notation
$\langle\cdot,\cdot\rangle$ for the inner product in $\mathbb{R}^h$. The G\"artner
Ellis Theorem allows us to say that, if there exists
$$\lim_{r\to r_0}\frac{1}{v_r}\log\mathbb{E}[e^{v_r\langle y,Y_r\rangle}]=\Lambda(y)\ (\mbox{for all}\ y\in\mathbb{R}^h)$$
as an extended real number, then, under suitable hypotheses ($\Lambda$ is finite in a
neighborhood of the origin $y=0$, where $0\in\mathbb{R}^h$ is the null vector, and it
is lower semicontinuous and essentially smooth according to Definition 2.3.5 in
\cite{DemboZeitouni}), the LDP holds with speed $v_r$ and good rate function
$\Lambda^*$ defined by
$$\Lambda^*(x):=\sup_{y\in\mathbb{R}^h}\{\langle x,y\rangle-\Lambda(y)\}.$$
The function $\Lambda^*$ is called Legendre transform of the function $\Lambda$.
\begin{remark}\label{rem:convergence-under-GET}
Let us consider the above setting of G\"artner Ellis Theorem and, for simplicity, we
consider the case $r_0=\infty$. Moreover we consider the closed set
$C_\delta:=\{x\in\mathbb{R}^h:\|x-\nabla\Lambda(0)\|\geq\delta\}$ for some $\delta>0$;
then, since $\Lambda^*(x)=0$ if and only if $x=\nabla\Lambda(0)$, we have
$\Lambda^*(C_\delta):=\inf_{y\in C_\delta}\Lambda^*(y)>0$. We want to consider the LDP
upper bound for the closed set $C_\delta$. Then, for all $\varepsilon>0$ small enough,
there exists $r_\varepsilon$ such that
$$P(\|Y_r-\nabla\Lambda(0)\|\geq\delta)\leq e^{-v_r(\Lambda^*(C_\delta)-\varepsilon)}
\ \mbox{for all}\ r>r_\varepsilon.$$
Thus $Y_r$ converges to $\nabla\Lambda(0)$ in probability. Moreover it is possible
to check the almost sure convergence along a sequence $\{r_n:n\geq 1\}$ such that
$r_n\to\infty$; in fact, by a standard application of Borel Cantelli Lemma, we can
say that $Y_{r_n}$ converges to $\nabla\Lambda(0)$ almost surely if
\begin{equation}\label{eq:BC-lemma-convergence}
\sum_{n\geq 1}e^{-v_{r_n}(\Lambda^*(C_\delta)-\varepsilon)}<\infty;
\end{equation}
for instance, when $v_r=r$ (we have this situation in Sections
\ref{sec:LD-inverse-processes} and \ref{sec:LD-time-changes}), condition
\eqref{eq:BC-lemma-convergence} holds with the sequence $r_n=n$.
\end{remark}
Another standard large deviation result used in this paper (more precisely in
Remark \ref{rem:increments-vs-marginals}) is the contraction principle (see e.g.
Theorem 4.2.1 in \cite{DemboZeitouni}). As above let $\{Y_r\}_r$ be a family of
$\mathcal{Y}$-valued random variables (defined on the same probability space), and
assume that $\{Y_r\}_r$ satisfies the LDP, as $r\to r_0$, with speed $v_r$ and
good rate function $I$. Then, if we consider a continuous function
$f:\mathcal{Y}\to\mathcal{Z}$, where $\mathcal{Z}$ is another topological space,
the family of $\mathcal{Z}$-valued random variables $\{f(Y_r)\}_r$ satisfies the
LDP, as $r\to r_0$, with speed $v_r$ and good rate function $J$ defined by
$$J(z):=\inf\{I(y):y\in\mathcal{Y},\ f(y)=z\}.$$
\subsection{Preliminaries on the tempered distributions in \cite{BCCP} (Section 3)}\label{sec:preliminaries-BCCP}
We consider a family of subordinators $\{S_{(\gamma,\lambda,\theta,\delta)}(t):t\geq 0\}$,
where the parameters $(\gamma,\lambda,\theta,\delta)$ belong to a suitable set
$\mathcal{P}:=\mathcal{P}_1\cup\mathcal{P}_2$, i.e.
$$\mathcal{P}_1=(-\infty,0)\times(0,\infty)\times(0,\infty)\times[0,\infty)\
\mbox{and}\ \mathcal{P}_2=(0,1)\times(0,\infty)\times [0,\infty)\times[0,\infty);$$
actually other cases could be allowed ($\gamma=0$ when
$(\gamma,\lambda,\theta,\delta)\in\mathcal{P}_1$
and $\gamma=1$ when $(\gamma,\lambda,\theta,\delta)\in\mathcal{P}_2$) but they will
be neglected (because they give rise to deterministic random variables). By well-known
properties of L\'evy processes, the random evolution of the subordinator
$\{S_{(\gamma,\lambda,\theta,\delta)}(t):t\geq 0\}$ is governed by the infinitely divisible
distribution of the positive random variable $S_{(\gamma,\lambda,\theta,\delta)}(1)$.
Here, for each value of $(\gamma,\lambda,\theta,\delta)\in\mathcal{P}$, we refer to the
moment generating function of the random variable $S_{(\gamma,\lambda,\theta,\delta)}(1)$
with argument $y\in\mathbb{R}$; in particular we refer to suitable functions
$\kappa_{(\gamma,\lambda,\theta,\delta)}$ specified below, such that
$$\mathbb{E}[e^{yS_{(\gamma,\lambda,\theta,\delta)}(1)}]=\exp(\kappa_{(\gamma,\lambda,\theta,\delta)}(y))$$
(note that we are setting $y=-s$, where $s>0$ is the argument of the Laplace transforms
in \cite{BCCP}) and, obviously, we have
$\mathbb{E}[e^{yS_{(\gamma,\lambda,\theta,\delta)}(1)}]=\infty$ for some $y>0$.
Moreover, in view of the applications of G\"artner Ellis Theorem, it is useful to
introduce the notation $\kappa_{(\gamma,\lambda,\theta,\delta)}^*$ for the Legendre
transform of the function $\kappa_{(\gamma,\lambda,\theta,\delta)}$, i.e.
\begin{equation}\label{eq:Legendre-transform-kappa}
\kappa_{(\gamma,\lambda,\theta,\delta)}^*(x):=\sup_{y\in\mathbb{R}}\{xy-\kappa_{(\gamma,\lambda,\theta,\delta)}(y)\}.
\end{equation}
We remark that, when we deal with $\{S_{(\gamma,\lambda,\theta,\delta)}(t):t\geq 0\}$,
G\"artner Ellis Theorem can be applied only when $\theta>0$; in fact in this case the
function $\kappa_{(\gamma,\lambda,\theta,\delta)}$ is finite in a neighborhood of the
origin ($y=0$).
\paragraph{Case $\delta=0$.}
This is the case of Tweedie distribution (see Section 2.2 in \cite{BCCP} and the
references cited therein). We have
$$\kappa_{(\gamma,\lambda,\theta,0)}(y):=\log\mathbb{E}[e^{yS_{(\gamma,\lambda,\theta,0)}(1)}]
=\lambda\hspace{1.0pt}\sgn{\gamma}(\theta^\gamma-(\theta-y)^\gamma)$$
if $y\leq\theta$, and equal to infinity otherwise. Note that
$$\kappa_{(\gamma,\lambda,\theta,0)}(y)=\lambda\kappa_{(\gamma,1,\theta,0)}(y).$$
Thus, if we specialize the cases for the sign of $\gamma$, we have the following cases:
$$\mbox{if}\ \gamma\in(-\infty,0),\quad\kappa_{(\gamma,\lambda,\theta,0)}(y):=\log\mathbb{E}[e^{yS_{(\gamma,\lambda,\theta,0)}(1)}]
=\left\{\begin{array}{ll}
\frac{\lambda}{\theta^{-\gamma}}\left(\left(\frac{\theta}{\theta-y}\right)^{-\gamma}-1\right)&\ \mbox{if}\ y<\theta\\
\infty&\ \mbox{otherwise}
\end{array}
\right.$$
(that is a compound Poisson distribution with Gamma distributed jumps);
$$\mbox{if}\ \gamma\in(0,1),\quad\kappa_{(\gamma,\lambda,\theta,0)}(y):=\log\mathbb{E}[e^{yS_{(\gamma,\lambda,\theta,0)}(1)}]
=\left\{\begin{array}{ll}
\lambda(\theta^\gamma-(\theta-y)^\gamma)&\ \mbox{if}\ y\leq\theta\\
\infty&\ \mbox{otherwise}
\end{array}
\right.$$
(that is the possibly tempered positive Linnik distribution; we have the tempered
case when $\theta>0$). In view of the application of the G\"artner Ellis
Theorem, we can get the full LDP if the function
$\kappa_{(\gamma,\lambda,\theta,0)}$ is steep; then, in both cases $\gamma\in(0,1)$
and $\gamma\in(-\infty,0)$, we need to check the condition
$\lim_{y\to\theta^-}\kappa_{(\gamma,\lambda,\theta,0)}^\prime(y)=\infty$
and this can be easily done (the details are omitted).
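For completeness we record this simple computation: for $y<\theta$,
$$\kappa_{(\gamma,\lambda,\theta,0)}^\prime(y)=\lambda\hspace{1.0pt}\sgn{\gamma}\,\gamma(\theta-y)^{\gamma-1}=\lambda|\gamma|(\theta-y)^{\gamma-1},$$
and, since $\gamma-1<0$ in both cases, the right-hand side diverges as $y\to\theta^-$.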
\paragraph{Case $\delta>0$.}
We construct this case starting from the case $\delta=0$ and by considering a Gamma
subordination, i.e. a randomization of the parameter $\lambda$ with a Gamma
distributed random variable $G_{\delta,\lambda}$ such that
$$\mathbb{E}[e^{yG_{\delta,\lambda}}]=(1-\lambda\delta y)^{-1/\delta}=\left(\frac{(\lambda\delta)^{-1}}{(\lambda\delta)^{-1}-y}\right)^{1/\delta}$$
if $y<(\lambda\delta)^{-1}$, and equal to infinity otherwise. Then, by referring
to the moment generating function of the random variable $G_{\delta,\lambda}$
above, we have
$$\mathbb{E}[e^{yS_{(\gamma,\lambda,\theta,\delta)}(1)}]:=\mathbb{E}[e^{\kappa_{(\gamma,1,\theta,0)}(y)G_{\delta,\lambda}}]
=(1-\delta\kappa_{(\gamma,\lambda,\theta,0)}(y))^{-1/\delta}
=(1-\lambda\delta\hspace{1.0pt}\sgn{\gamma}(\theta^\gamma-(\theta-y)^\gamma))^{-1/\delta}$$
if $y\leq\theta$ and $\sgn{\gamma}(\theta^\gamma-(\theta-y)^\gamma)<(\lambda\delta)^{-1}$,
and equal to infinity otherwise; thus, for the same values of $y$, the function
$\kappa_{(\gamma,\lambda,\theta,\delta)}$ is defined by
$$\kappa_{(\gamma,\lambda,\theta,\delta)}(y):=-\frac{1}{\delta}\log(1-\lambda\delta\hspace{1.0pt}\sgn{\gamma}(\theta^\gamma-(\theta-y)^\gamma)).$$
Moreover, if we refer to the abscissa of convergence $y_0$, say, of the function
$\kappa_{(\gamma,\lambda,\theta,\delta)}$ (note that $y_0\in[0,\theta]$), this function is
steep if
$$\kappa_{(\gamma,\lambda,\theta,\delta)}^\prime(y)
=\frac{\kappa_{(\gamma,\lambda,\theta,0)}^\prime(y)}{1-\delta\kappa_{(\gamma,\lambda,\theta,0)}(y)}
\to\infty\ \mbox{as}\ y\to y_0^-,$$
and one can check that this condition holds.
As pointed out in \cite{BCCP} (see just after equation (11) in that reference)
we note that
$$\lim_{\delta\to 0^+}\mathbb{E}[e^{yS_{(\gamma,\lambda,\theta,\delta)}(1)}]
=\exp(\lambda\hspace{1.0pt}\sgn{\gamma}(\theta^\gamma-(\theta-y)^\gamma))$$
for all $y\leq\theta$, and equal to infinity otherwise; so we recover the case
$\delta=0$ by taking the limit as $\delta\to 0^+$. In fact the random variable
$G_{\delta,\lambda}$ converges weakly to the constant $\lambda$ as $\delta\to 0^+$.
\paragraph{Some further comments on both cases $\delta=0$ and $\delta>0$ (for $\theta>0$).}
Let $\theta>0$ be arbitrarily fixed. We can say that, for all $\delta\geq 0$,
$\frac{S_{(\gamma,\lambda,\theta,\delta)}(t)}{t}\to\kappa_{(\gamma,\lambda,\theta,\delta)}^\prime(0)=\lambda\hspace{1.0pt}\sgn{\gamma}\gamma\theta^{\gamma-1}$
as $t\to\infty$; in fact the rate function $\kappa_{(\gamma,\lambda,\theta,\delta)}^*$ uniquely vanishes
at $x=\kappa_{(\gamma,\lambda,\theta,\delta)}^\prime(0)$. Then, since the limit value
$\kappa_{(\gamma,\lambda,\theta,\delta)}^\prime(0)$ does not depend on $\delta$, it is interesting to
see how the rate function $\kappa_{(\gamma,\lambda,\theta,\delta)}^*(x)$ varies with $\delta$ around the
limit value; in fact a locally larger rate function $\kappa_{(\gamma,\lambda,\theta,\delta)}^*(x)$
(for $x$ in a neighborhood of $\kappa_{(\gamma,\lambda,\theta,\delta)}^\prime(0)$, except this value)
yields a faster convergence. In order to study this problem it is useful to refer to
$\kappa_{(\gamma,\lambda,\theta,\delta)}^{\prime\prime}(0)$; so, for all $\delta>0$, we get
$$\kappa_{(\gamma,\lambda,\theta,\delta)}^{\prime\prime}(0)
=\frac{\kappa_{(\gamma,\lambda,\theta,0)}^{\prime\prime}(0)(1-\delta\kappa_{(\gamma,\lambda,\theta,0)}(0))
+\delta(\kappa_{(\gamma,\lambda,\theta,0)}^\prime(0))^2}{(1-\delta\kappa_{(\gamma,\lambda,\theta,0)}(0))^2}
=\kappa_{(\gamma,\lambda,\theta,0)}^{\prime\prime}(0)+\delta(\kappa_{(\gamma,\lambda,\theta,0)}^\prime(0))^2,$$
and this equality trivially holds also for $\delta=0$. Then, by some properties of Legendre transforms, we
can say that the rate function is locally larger as $\kappa_{(\gamma,\lambda,\theta,\delta)}^{\prime\prime}(0)$
decreases, and therefore as $\delta$ decreases. We remark that, in some sense, this agrees with the monotonicity
of $\mathrm{Var}[G_{\delta,\lambda}]=\lambda^2\delta$ with respect to $\delta$. Furthermore the inequality $\kappa_{(\gamma,\lambda,\theta,\delta)}^*(x)\leq\kappa_{(\gamma,\lambda,\theta,0)}^*(x)$
holds for all $x$ (and not only around $\kappa_{(\gamma,\lambda,\theta,\delta)}^\prime(0)$); in fact we have
$$\kappa_{(\gamma,\lambda,\theta,\delta)}(y)=-\frac{1}{\delta}\log(1-\delta\kappa_{(\gamma,\lambda,\theta,0)}(y))
\geq\kappa_{(\gamma,\lambda,\theta,0)}(y)$$
for all $y$ such that $\kappa_{(\gamma,\lambda,\theta,0)}(y)<\frac{1}{\delta}$; this follows from the elementary bound $\log(1-u)\leq-u$ for $u<1$.
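Since the Legendre transform \eqref{eq:Legendre-transform-kappa} plays a central role in what follows, we also note that it is straightforward to evaluate numerically; the following minimal sketch (with arbitrarily chosen parameter values) computes $\kappa_{(\gamma,\lambda,\theta,0)}^*$ for $\gamma\in(0,1)$ and checks that it vanishes at $\kappa_{(\gamma,\lambda,\theta,0)}^\prime(0)$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def kappa(y, gam, lam, theta):   # tempered positive Linnik case, gamma in (0,1)
    return lam * (theta**gam - (theta - y)**gam) if y <= theta else np.inf

def kappa_star(x, gam, lam, theta):
    # sup_y { x*y - kappa(y) }, computed by minimizing the negative
    res = minimize_scalar(lambda y: kappa(y, gam, lam, theta) - x * y,
                          bounds=(-50.0, theta - 1e-12), method="bounded")
    return -res.fun

gam, lam, theta = 0.5, 1.0, 1.0
x0 = lam * gam * theta**(gam - 1)        # = kappa'(0)
print(kappa_star(x0, gam, lam, theta))   # ~ 0, as expected
\end{verbatim}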
\subsection{Some remarks}\label{sub:remarks-in-preliminaries}
The following remarks explain how some processes can be seen as members of the
family of subordinators studied in this paper.
\begin{remark}[Composition of independent processes]\label{rem:composition-of-independent-subordinators}
It is well-known (and it is easy to check) that, if we consider $h$ independent
subordinators
$\{\{S_{(\gamma_i,\lambda_i,\theta_i,\delta_i)}(t):t\geq 0\}:i\in\{1,\ldots,h\}\}$,
the process $\{S(t):t\geq 0\}$ defined by
$$S(t):=S_{(\gamma_1,\lambda_1,\theta_1,\delta_1)}\circ\cdots\circ
S_{(\gamma_h,\lambda_h,\theta_h,\delta_h)}(t)$$
is a subordinator and, moreover, for all $t\geq 0$ we have
$$\mathbb{E}[e^{yS(t)}]=e^{t\kappa_S(y)},\ \mbox{where}\
\kappa_S(y):=\kappa_{(\gamma_h,\lambda_h,\theta_h,\delta_h)}\circ\cdots\circ\kappa_{(\gamma_1,\lambda_1,\theta_1,\delta_1)}(y).$$
So one can wonder if, in some cases, the composition of independent processes
in this family still belongs to this family. It seems that this is possible only
in a very particular case, i.e.
$$(\gamma_i,\lambda_i,\theta_i,\delta_i)=(\gamma_i,1,0,0)\ \mbox{with}\ \gamma_i\in(0,1),\ \mbox{for all}\ i\in\{1,\ldots,h\},$$
and we have
$$\kappa_{(\gamma_h,1,0,0)}\circ\cdots\circ\kappa_{(\gamma_1,1,0,0)}(y)=\kappa_{(\gamma_1\cdots \gamma_h,1,0,0)}(y).$$
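This identity is also easy to check numerically; the following minimal sketch (for $h=2$, with arbitrarily chosen $\gamma_1,\gamma_2$) uses the explicit form $\kappa_{(\gamma,1,0,0)}(y)=-(-y)^\gamma$ for $y\leq 0$:
\begin{verbatim}
import numpy as np

def kappa_stable(y, gam):        # kappa_{(gam,1,0,0)}(y) = -(-y)^gam, y <= 0
    return -np.power(-y, gam)

g1, g2 = 0.3, 0.7
y = -np.linspace(0.1, 5.0, 50)
lhs = kappa_stable(kappa_stable(y, g1), g2)
rhs = kappa_stable(y, g1 * g2)
print(np.allclose(lhs, rhs))     # True
\end{verbatim}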
\end{remark}
\begin{remark}[Generalization of the mixtures in \cite{GuptaKumarLeonenko}]\label{rem:generalization-of-mixtures-in-the-literature}
We consider $h$ independent subordinators
$\{\{S_{(\gamma_i,\lambda_i,\theta_i,\delta_i)}(t):t\geq 0\}:i\in\{1,\ldots,h\}\}$
and, for some $c_1,\ldots,c_h>0$, let $\{S(t):t\geq 0\}$ be the
process defined by
$$S(t):=\sum_{i=1}^hS_{(\gamma_i,\lambda_i,\theta_i,\delta_i)}(c_it).$$
Note that this kind of process is a generalization of the mixtures
studied in \cite{GuptaKumarLeonenko}; actually in that reference the
authors require some restrictions on the parameters that here can be
neglected (in particular they require that $c_1+\cdots+c_h=1$ and this
explains the term \emph{mixture} used in \cite{GuptaKumarLeonenko}).
For all $t\geq 0$ we have
$$\mathbb{E}[e^{yS(t)}]=e^{t\kappa_S(y)},\ \mbox{where}\ \kappa_S(y):=\sum_{i=1}^hc_i\kappa_{(\gamma_i,\lambda_i,\theta_i,\delta_i)}(y).$$
One can wonder if, in some cases, the generalized mixture of processes
in this family (according to the terminology here) still belongs to
this family. It seems that this is possible in a very particular case,
i.e.
$$(\gamma_i,\lambda_i,\theta_i,\delta_i)=(\gamma,\lambda_i,0,0)\ \mbox{for all}\ i\in\{1,\ldots,h\},\ \mbox{for some}\ \gamma\in(0,1),$$
and we have
$$\kappa_S(y)=\sum_{i=1}^hc_i\kappa_{(\gamma,\lambda_i,0,0)}(y)=\sum_{i=1}^hc_i\lambda_i\kappa_{(\gamma,1,0,0)}(y)=\kappa_{(\gamma,\sum_{i=1}^hc_i\lambda_i,0,0)}(y).$$
\end{remark}
\section{Non-central moderate deviations (for $\delta=0$)}\label{sec:MD}
The term moderate deviations is used in the literature for a suitable class
of LDPs governed by the same rate function; moreover, in some sense, moderate
deviations fill the gap between a convergence to a constant and a weak
convergence to a (possibly multivariate) Gaussian distribution.
In this section we study a non-central moderate deviation regime for
$\{S_{(\gamma,\lambda,\theta,0)}(t):t\geq 0\}$ with respect to
$\theta$; we use the term non-central because in our case the limit in the
weak convergence is not a Gaussian distribution. As we see, we deal with finite
families of increments of the subordinator; however, as we shall explain in
Remark \ref{rem:increments-vs-marginals}, it is also possible to present similar
results for finite families of marginal random variables of the subordinator.
We start with the following simple result.
\begin{proposition}\label{prop:identically-distributed-rvs}
Let $m\geq 1$ and $0=t_0<t_1<t_2<\cdots<t_m$ be arbitrarily fixed. Then, for
every $\theta>0$, the random vectors
$$\{(\theta S_{(\gamma,\lambda,\theta,0)}(t_i/\theta^\gamma)
-\theta S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/\theta^\gamma))_{i=1,\ldots,m}:\theta>0\}$$
are equally distributed; thus, in particular, they are distributed as
$(S_{(\gamma,\lambda,1,0)}(t_i)-S_{(\gamma,\lambda,1,0)}(t_{i-1}))_{i=1,\ldots,m}$.
\end{proposition}
\begin{proof}
We prove this result with some computations for (the logarithm of) the moment
generating functions. In fact, by taking into account the independence and
the distribution of the increments, for all $\theta>0$ we have
\begin{multline*}
\log\mathbb{E}\left[\exp\left(\sum_{i=1}^my_i(\theta S_{(\gamma,\lambda,\theta,0)}(t_i/\theta^\gamma)
-\theta S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/\theta^\gamma))\right)\right]\\
=\sum_{i=1}^m\log\mathbb{E}\left[e^{\theta y_i(S_{(\gamma,\lambda,\theta,0)}(t_i/\theta^\gamma)
-S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/\theta^\gamma))}\right]
=\sum_{i=1}^m\frac{t_i-t_{i-1}}{\theta^\gamma}\kappa_{(\gamma,\lambda,\theta,0)}(\theta y_i)\\
=\left\{\begin{array}{ll}
\sum_{i=1}^m\frac{t_i-t_{i-1}}{\theta^\gamma}\lambda\hspace{1.0pt}\sgn{\gamma}(\theta^\gamma-(\theta-\theta y_i)^\gamma)
&\ \mbox{if}\ \theta y_1,\ldots,\theta y_m\leq\theta\\
\infty&\ \mbox{otherwise}
\end{array}\right.\\
=\left\{\begin{array}{ll}
\sum_{i=1}^m(t_i-t_{i-1})\lambda\hspace{1.0pt}\sgn{\gamma}(1-(1-y_i)^\gamma)&\ \mbox{if}\ y_1,\ldots,y_m\leq 1\\
\infty&\ \mbox{otherwise}
\end{array}\right.
=\sum_{i=1}^m(t_i-t_{i-1})\kappa_{(\gamma,\lambda,1,0)}(y_i).
\end{multline*}
This completes the proof.
\end{proof}
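The scale invariance stated in Proposition \ref{prop:identically-distributed-rvs} can also be observed by simulation. For $\gamma\in(-\infty,0)$ the subordinator is compound Poisson with Gamma distributed jumps (see Section \ref{sec:preliminaries-BCCP}); one convenient sampling representation uses Poisson rate $\lambda\theta^\gamma$ and $\mathrm{Gamma}(-\gamma,1/\theta)$ jumps. The following minimal sketch (with $m=1$ and arbitrarily chosen parameter values) compares empirical moments of $\theta S_{(\gamma,\lambda,\theta,0)}(t/\theta^\gamma)$ across different values of $\theta$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_S(t, gam, lam, theta, size):
    # gamma < 0: compound Poisson with rate lam*theta^gam and
    # Gamma(-gam, scale 1/theta) distributed jumps (shapes add up)
    N = rng.poisson(lam * theta**gam * t, size=size)
    G = rng.gamma(np.maximum(-gam * N, 1e-12), 1.0 / theta)
    return np.where(N > 0, G, 0.0)

gam, lam, t = -0.5, 1.0, 2.0
for theta in (0.5, 1.0, 4.0):
    X = theta * sample_S(t / theta**gam, gam, lam, theta, size=100000)
    print(f"theta={theta}: mean={X.mean():.4f}, var={X.var():.4f}")
# the empirical mean and variance are (up to noise) the same for all theta
\end{verbatim}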
The identical distribution result stated in Proposition \ref{prop:identically-distributed-rvs}
would allow to consider different kind of weak convergence. Here we mainly consider
the case $\theta\to\infty$; the case $\theta\to 0$ will be briefly discussed in Remark
\ref{rem:case-theta-to-zero}.
\begin{proposition}\label{prop:LD-theta-to-infinity}
Let $m\geq 1$ and $0=t_0<t_1<t_2<\cdots<t_m$ be arbitrarily fixed. Moreover
let $g(\gamma),h(\gamma)\in\mathbb{R}$ be such that $\gamma-h(\gamma)=1-g(\gamma)>0$. Then the family
of random vectors
$$\{(\theta^{g(\gamma)}S_{(\gamma,\lambda,\theta,0)}(t_i/\theta^{h(\gamma)})
-\theta^{g(\gamma)}S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/\theta^{h(\gamma)}))_{i=1,\ldots,m}:\theta>0\}$$
satisfies the LDP with speed $\theta^{\gamma-h(\gamma)}$, or equivalently $\theta^{1-g(\gamma)}$, and good
rate function $I_{t_1,\ldots,t_m}$ defined by
$$I_{t_1,\ldots,t_m}(x_1,\ldots,x_m)=\sum_{i=1}^m(t_i-t_{i-1})\kappa_{(\gamma,\lambda,1,0)}^*\left(\frac{x_i}{t_i-t_{i-1}}\right).$$
\end{proposition}
\begin{proof}
We want to apply the G\"artner Ellis Theorem. Firstly we have
\begin{multline*}
\frac{1}{\theta^{\gamma-h(\gamma)}}\log\mathbb{E}\left[\exp\left(\theta^{\gamma-h(\gamma)}
\sum_{i=1}^m y_i(\theta^{g(\gamma)}S_{(\gamma,\lambda,\theta,0)}(t_i/\theta^{h(\gamma)})
-\theta^{g(\gamma)}S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/\theta^{h(\gamma)}))\right)\right]\\
=\frac{1}{\theta^{\gamma-h(\gamma)}}\sum_{i=1}^m\log\mathbb{E}\left[e^{\theta^{\gamma-h(\gamma)+g(\gamma)}
y_i(S_{(\gamma,\lambda,\theta,0)}(t_i/\theta^{h(\gamma)})
-S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/\theta^{h(\gamma)}))}\right]\\
=\sum_{i=1}^m\frac{t_i-t_{i-1}}{\theta^{\gamma-h(\gamma)+h(\gamma)}}\kappa_{(\gamma,\lambda,\theta,0)}(\theta^{\gamma-h(\gamma)+g(\gamma)}y_i)
=\sum_{i=1}^m\frac{t_i-t_{i-1}}{\theta^\gamma}\kappa_{(\gamma,\lambda,\theta,0)}(\theta y_i).
\end{multline*}
Moreover, as a consequence of some computations in the proof of Proposition
\ref{prop:identically-distributed-rvs}, the final expression does not depend
on $\theta$, and we have
\begin{equation}\label{eq:theta-invariance}
\sum_{i=1}^m\frac{t_i-t_{i-1}}{\theta^\gamma}\kappa_{(\gamma,\lambda,\theta,0)}(\theta y_i)
=\sum_{i=1}^m(t_i-t_{i-1})\kappa_{(\gamma,\lambda,1,0)}(y_i)\ \mbox{for all}\ \theta>0.
\end{equation}
Then, for all $(y_1,\ldots,y_m)\in\mathbb{R}^m$, we have
\begin{multline*}
\lim_{\theta\to\infty}\frac{1}{\theta^{\gamma-h(\gamma)}}\log\mathbb{E}\left[\exp\left(\theta^{\gamma-h(\gamma)}
\sum_{i=1}^m y_i(\theta^{g(\gamma)}S_{(\gamma,\lambda,\theta,0)}(t_i/\theta^{h(\gamma)})
-\theta^{g(\gamma)}S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/\theta^{h(\gamma)}))\right)\right]\\
=\sum_{i=1}^m(t_i-t_{i-1})\kappa_{(\gamma,\lambda,1,0)}(y_i).
\end{multline*}
So we can apply the G\"artner Ellis Theorem and the desired LDP holds with good
rate function $I_{t_1,\ldots,t_m}$ defined by
$$I_{t_1,\ldots,t_m}(x_1,\ldots,x_m)=\sup_{(y_1,\ldots,y_m)\in\mathbb{R}^m}
\left\{\sum_{i=1}^my_ix_i-\sum_{i=1}^m(t_i-t_{i-1})\kappa_{(\gamma,\lambda,1,0)}(y_i)\right\}.$$
Finally one can check with some standard computations that the rate function
$I_{t_1,\ldots,t_m}$ defined here coincides with the one in the statement of
the proposition (for instance one could adapt some computations in the proof of
Lemma 5.1.8 in \cite{DemboZeitouni}).
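Concretely, the supremum decouples over the coordinates: setting $y=(y_1,\ldots,y_m)$,
$$\sup_{y\in\mathbb{R}^m}\sum_{i=1}^m\left\{y_ix_i-(t_i-t_{i-1})\kappa_{(\gamma,\lambda,1,0)}(y_i)\right\}
=\sum_{i=1}^m(t_i-t_{i-1})\sup_{y_i\in\mathbb{R}}\left\{\frac{x_i}{t_i-t_{i-1}}\,y_i-\kappa_{(\gamma,\lambda,1,0)}(y_i)\right\},$$
and each summand on the right-hand side equals $(t_i-t_{i-1})\kappa_{(\gamma,\lambda,1,0)}^*\left(\frac{x_i}{t_i-t_{i-1}}\right)$.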
\end{proof}
For completeness we recall that (the equalities below hold for $\theta$ large
enough, depending on $y_1,\ldots,y_m$; otherwise the moment generating
function is equal to infinity)
\begin{multline*}
\log\mathbb{E}\left[\exp\left(\sum_{i=1}^m y_i(\theta^{g(\gamma)}
S_{(\gamma,\lambda,\theta,0)}(t_i/\theta^{h(\gamma)})
-\theta^{g(\gamma)}S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/\theta^{h(\gamma)}))\right)\right]\\
=\sum_{i=1}^m\frac{t_i-t_{i-1}}{\theta^{h(\gamma)}}\kappa_{(\gamma,\lambda,\theta,0)}(\theta^{g(\gamma)}y_i)
=\sum_{i=1}^m\frac{t_i-t_{i-1}}{\theta^{h(\gamma)}}\lambda\hspace{1.0pt}\sgn{\gamma}(\theta^\gamma-(\theta-\theta^{g(\gamma)}y_i)^\gamma)\\
=\sum_{i=1}^m(t_i-t_{i-1})\lambda\hspace{1.0pt}\sgn{\gamma}\theta^{\gamma-h(\gamma)}\left(1-\left(1-\frac{y_i}{\theta^{1-g(\gamma)}}\right)^\gamma\right),
\end{multline*}
whence we obtain
\begin{multline*}
\lim_{\theta\to\infty}\log\mathbb{E}\left[\exp\left(\sum_{i=1}^m y_i(\theta^{g(\gamma)}
S_{(\gamma,\lambda,\theta,0)}(t_i/\theta^{h(\gamma)})
-\theta^{g(\gamma)}S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/\theta^{h(\gamma)}))\right)\right]\\
=\sum_{i=1}^my_i(t_i-t_{i-1})\lambda\hspace{1.0pt}\sgn{\gamma}\gamma.
\end{multline*}
Thus the random variables in Proposition \ref{prop:LD-theta-to-infinity} converge (as
$\theta\to\infty$) to the vector $(x_1(\gamma),\ldots,x_m(\gamma))$ defined by
$$x_i(\gamma)=(t_i-t_{i-1})\lambda\hspace{1.0pt}\sgn{\gamma}\gamma\ (\mbox{for all}\ i\in\{1,\ldots,m\}).$$
Moreover, as one can expect, $I_{t_1,\ldots,t_m}(x_1,\ldots,x_m)=0$ if and only if
\begin{multline*}
(x_1,\ldots,x_m)=\left(\left.\frac{\partial}{\partial y_i}\sum_{j=1}^m(t_j-t_{j-1})
\kappa_{(\gamma,\lambda,1,0)}(y_j)\right|_{(y_1,\ldots,y_m)=(0,\ldots,0)}\right)_{i=1,\ldots,m}\\
=((t_i-t_{i-1})\kappa_{(\gamma,\lambda,1,0)}^\prime(0))_{i=1,\ldots,m}=
(x_1(\gamma),\ldots,x_m(\gamma)).
\end{multline*}
Now we are ready to present the non-central moderate deviation
result (as $\theta\to\infty$); see also Remark
\ref{rem:MD-typical-features} for this interpretation.
\begin{proposition}\label{prop:MD-theta-to-infinity}
Let $m\geq 1$ and $0=t_0<t_1<t_2<\cdots<t_m$ be arbitrarily fixed. Moreover
let $g(\gamma),h(\gamma)$ be such that $\gamma-h(\gamma)=1-g(\gamma)>0$ (as in
Proposition \ref{prop:LD-theta-to-infinity}). Then, for all families of positive
numbers $\{a_\theta:\theta>0\}$ such that
\begin{equation}\label{eq:MD-conditions-theta-to-infinity}
a_\theta\to 0\ \mbox{and}\ \theta^{\gamma-h(\gamma)}a_\theta=\theta^{1-g(\gamma)}a_\theta\to\infty\ (\mbox{as}\ \theta\to\infty),
\end{equation}
the family of random vectors
$$\{(a_\theta\theta S_{(\gamma,\lambda,\theta,0)}(t_i/(a_\theta\theta^\gamma))-a_\theta\theta S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/(a_\theta\theta^\gamma)))_{i=1,\ldots,m}:\theta>0\}$$
satisfies the LDP with speed $1/a_\theta$ and good rate function $I_{t_1,\ldots,t_m}$
presented in Proposition \ref{prop:LD-theta-to-infinity}.
\end{proposition}
\begin{proof}
We want to apply the G\"artner Ellis Theorem. For all $\theta>0$, by taking into account
equation \eqref{eq:theta-invariance} for the last equality below, we get
\begin{multline*}
\frac{1}{1/a_\theta}\log\mathbb{E}\left[\exp\left(\frac{1}{a_\theta}\sum_{i=1}^m y_i
(a_\theta\theta S_{(\gamma,\lambda,\theta,0)}(t_i/(a_\theta\theta^\gamma))-a_\theta\theta S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/(a_\theta\theta^\gamma)))\right)\right]\\
=a_\theta\sum_{i=1}^m\log\mathbb{E}\left[e^{\theta y_i\{S_{(\gamma,\lambda,\theta,0)}(t_i/(a_\theta\theta^\gamma))
-S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/(a_\theta\theta^\gamma))\}}\right]\\
=a_\theta\sum_{i=1}^m\frac{t_i-t_{i-1}}{a_\theta\theta^\gamma}\kappa_{(\gamma,\lambda,\theta,0)}(\theta y_i)
=\sum_{i=1}^m\frac{t_i-t_{i-1}}{\theta^\gamma}\kappa_{(\gamma,\lambda,\theta,0)}(\theta y_i)
=\sum_{i=1}^m(t_i-t_{i-1})\kappa_{(\gamma,\lambda,1,0)}(y_i);
\end{multline*}
so, for all $(y_1,\ldots,y_m)\in\mathbb{R}^m$, we have
\begin{multline*}
\lim_{\theta\to\infty}\frac{1}{1/a_\theta}\log\mathbb{E}\left[\exp\left(\frac{1}{a_\theta}\sum_{i=1}^m y_i
(a_\theta\theta S_{(\gamma,\lambda,\theta,0)}(t_i/(a_\theta\theta^\gamma))-a_\theta\theta S_{(\gamma,\lambda,\theta,0)}(t_{i-1}/(a_\theta\theta^\gamma)))\right)\right]\\
=\sum_{i=1}^m(t_i-t_{i-1})\kappa_{(\gamma,\lambda,1,0)}(y_i).
\end{multline*}
Then we can conclude the proof by considering the same application of the
G\"artner Ellis Theorem presented in the proof of Proposition
\ref{prop:LD-theta-to-infinity}.
\end{proof}
We conclude with some remarks.
\begin{remark}\label{rem:MD-typical-features}
The class of LDPs in Proposition \ref{prop:MD-theta-to-infinity} fills the
gap between the following asymptotic regimes:
\begin{itemize}
\item the convergence of the random variables in Proposition
\ref{prop:LD-theta-to-infinity} to $(x_1(\gamma),\ldots,x_m(\gamma))$;
\item the weak convergence of the random variables in Proposition
\ref{prop:identically-distributed-rvs} that trivially converge to
their common law, and therefore the law of the random vector
$(S_{(\gamma,\lambda,1,0)}(t_i)-S_{(\gamma,\lambda,1,0)}(t_{i-1}))_{i=1,\ldots,m}$.
\end{itemize}
In some sense these two asymptotic regimes can be recovered by considering
two extremal choices for $a_\theta$ in Proposition
\ref{prop:MD-theta-to-infinity}, i.e.
$a_\theta=\frac{1}{\theta^{\gamma-h(\gamma)}}=\frac{1}{\theta^{1-g(\gamma)}}$
and $a_\theta=1$ (in both cases one condition in
\eqref{eq:MD-conditions-theta-to-infinity} holds and the other one fails),
respectively.
\end{remark}
\begin{remark}\label{rem:MD-comments-on-rf}
The rate function $I_{t_1,\ldots,t_m}$ in Proposition \ref{prop:MD-theta-to-infinity},
which coincides with the one in Proposition \ref{prop:LD-theta-to-infinity}, has some
connections with the two asymptotic regimes as $\theta\to\infty$ presented in Remark
\ref{rem:MD-typical-features}:
\begin{itemize}
\item the rate function $I_{t_1,\ldots,t_m}$ uniquely vanishes at
$(x_1(\gamma),\ldots,x_m(\gamma))$, which is the limit of the random variables in
Proposition \ref{prop:LD-theta-to-infinity} (this was already remarked);
\item the Hessian matrix $\left(\left.\frac{\partial^2}{\partial x_i\partial x_j}I_{t_1,\ldots,t_m}
(x_1,\ldots,x_m)\right|_{(x_1,\ldots,x_m)=(x_1(\gamma),\ldots,x_m(\gamma))}\right)_{i,j=1,\ldots,m}$ has some
connections with the law of the random vector
$(S_{(\gamma,\lambda,1,0)}(t_i)-S_{(\gamma,\lambda,1,0)}(t_{i-1}))_{i=1,\ldots,m}$ that
appears in Proposition \ref{prop:identically-distributed-rvs}; in fact it is
a diagonal matrix (because of the independence of the increments) and, for all
$i\in\{1,\ldots,m\}$, the $i$-th diagonal element is
\begin{multline*}
\left.\frac{\partial^2}{\partial x_i^2}I_{t_1,\ldots,t_m}
(x_1,\ldots,x_m)\right|_{(x_1,\ldots,x_m)=(x_1(\gamma),\ldots,x_m(\gamma))}\\
=\frac{1}{(t_i-t_{i-1})\mathrm{Var}[S_{(\gamma,\lambda,1,0)}(1)]}
=\frac{1}{\mathrm{Var}[S_{(\gamma,\lambda,1,0)}(t_i)-S_{(\gamma,\lambda,1,0)}(t_{i-1})]}.
\end{multline*}
\end{itemize}
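For completeness we recall the Legendre duality used in the second item: differentiating the identity $(\kappa_{(\gamma,\lambda,1,0)}^*)^\prime(\kappa_{(\gamma,\lambda,1,0)}^\prime(y))=y$ at $y=0$ gives
$$(\kappa_{(\gamma,\lambda,1,0)}^*)^{\prime\prime}(\kappa_{(\gamma,\lambda,1,0)}^\prime(0))=\frac{1}{\kappa_{(\gamma,\lambda,1,0)}^{\prime\prime}(0)}
=\frac{1}{\mathrm{Var}[S_{(\gamma,\lambda,1,0)}(1)]}.$$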
\end{remark}
\begin{remark}\label{rem:increments-vs-marginals}
The results presented in this section concern the increments of the process.
However analogue results can be presented for the marginal random variables
of the process at the times $t_1,\ldots,t_m$. The idea is to combine the
above propositions and a suitable transformation of the involved random
vectors with the continuous function
$$(x_1,\ldots,x_m)\mapsto f(x_1,\ldots,x_m):=\left(x_1,x_1+x_2,\ldots,\sum_{i=1}^m x_i\right).$$
In the case of Propositions \ref{prop:LD-theta-to-infinity} and
\ref{prop:MD-theta-to-infinity} we have to apply the contraction
principle recalled in Section \ref{sec:preliminaries-LD}. Then we have the
following statements.
\begin{itemize}
\item the random vectors
$\{(\theta S_{(\gamma,\lambda,\theta,0)}(t_i/\theta^\gamma))_{i=1,\ldots,m}:\theta>0\}$
are equally distributed and, moreover, they are distributed as
$(S_{(\gamma,\lambda,1,0)}(t_i))_{i=1,\ldots,m}$ (case $\theta=1$).
\item the family of random vectors
$\{(\theta^{g(\gamma)}S_{(\gamma,\lambda,\theta,0)}(t_i/\theta^{h(\gamma)}))_{i=1,\ldots,m}:\theta>0\}$
satisfies the LDP with speed $\theta^{\gamma-h(\gamma)}$, or equivalently
$\theta^{1-g(\gamma)}$, and good rate function $J_{t_1,\ldots,t_m}$
defined by
\begin{multline*}
J_{t_1,\ldots,t_m}(z_1,\ldots,z_m)=\inf\{I_{t_1,\ldots,t_m}(x_1,\ldots,x_m):
f(x_1,\ldots,x_m)=(z_1,\ldots,z_m)\}\\
=I_{t_1,\ldots,t_m}(z_1,z_2-z_1,\ldots,z_m-z_{m-1})=
\sum_{i=1}^m(t_i-t_{i-1})\kappa_{(\gamma,\lambda,1,0)}^*\left(\frac{z_i-z_{i-1}}{t_i-t_{i-1}}\right),
\end{multline*}
where $z_0=0$ in the last equality.
\item if condition \eqref{eq:MD-conditions-theta-to-infinity} holds, the family of
random vectors
$\{(a_\theta\theta S_{(\gamma,\lambda,\theta,0)}(t_i/(a_\theta\theta^\gamma)))_{i=1,\ldots,m}:\theta>0\}$
satisfies the LDP with speed $1/a_\theta$ and good rate function $J_{t_1,\ldots,t_m}$
defined in the item above.
\end{itemize}
\end{remark}
\begin{remark}\label{rem:case-theta-to-zero}
All the results above (together with Remark \ref{rem:increments-vs-marginals})
concern the case $\theta\to\infty$. Here we briefly discuss the required changes
in order to obtain versions for the case $\theta\to 0$. We do not have
any changes for Proposition \ref{prop:identically-distributed-rvs}. The
condition $\gamma-h(\gamma)=1-g(\gamma)>0$ in Propositions
\ref{prop:LD-theta-to-infinity} and \ref{prop:MD-theta-to-infinity} (and in
Remark \ref{rem:increments-vs-marginals}) has to be replaced with $\gamma-h(\gamma)
=1-g(\gamma)<0$. The speed function in the version of Proposition
\ref{prop:LD-theta-to-infinity} for the case $\theta\to 0$ has to be
$\theta^{\gamma-h(\gamma)}=\theta^{1-g(\gamma)}$ (in fact it tends to infinity as
$\theta\to 0$). The condition \eqref{eq:MD-conditions-theta-to-infinity} in
Proposition \ref{prop:MD-theta-to-infinity} for the case $\theta\to 0$ has to be
\begin{equation}\label{eq:MD-conditions-theta-to-0}
a_\theta\to 0\ \mbox{and}\ \theta^{\gamma-h(\gamma)}a_\theta=\theta^{1-g(\gamma)}a_\theta\to\infty\ (\mbox{as}\ \theta\to 0);
\end{equation}
note that, in both conditions \eqref{eq:MD-conditions-theta-to-infinity} and
\eqref{eq:MD-conditions-theta-to-0}, one requires that $a_\theta$ tends to zero
slowly.
We conclude by adapting what we say in Remark \ref{rem:MD-typical-features}.
Proposition \ref{prop:MD-theta-to-infinity} for the case $\theta\to 0$ provides a
class of LDPs which fill the gap between a convergence to
$(x_1(\gamma),\ldots,x_m(\gamma))$ (which is a consequence of Proposition
\ref{prop:LD-theta-to-infinity} for the case $\theta\to 0$) and the weak
convergence as $\theta\to 0$ (which is a consequence of Proposition
\ref{prop:identically-distributed-rvs}). These two asymptotic regimes can be
recovered by setting
$a_\theta=\theta^{-(\gamma-h(\gamma))}=\theta^{-(1-g(\gamma))}$ and $a_\theta=1$
(in both cases one condition in \eqref{eq:MD-conditions-theta-to-0} holds and the
other one fails). The rate functions that appear in Propositions
\ref{prop:LD-theta-to-infinity} and \ref{prop:MD-theta-to-infinity} (and in Remark
\ref{rem:increments-vs-marginals}) do not change in the analogous statements for
the case $\theta\to 0$, and therefore we can repeat the comments in Remark
\ref{rem:MD-comments-on-rf} without any changes.
\end{remark}
\section{Large deviations for inverse processes}\label{sec:LD-inverse-processes}
In this section we consider the inverse process of
$\{S_{(\gamma,\lambda,\theta,\delta)}(t):t\geq 0\}$, i.e. the process
$\{T_{(\gamma,\lambda,\theta,\delta)}(t):t\geq 0\}$ defined by
$$T_{(\gamma,\lambda,\theta,\delta)}(t):=\inf\{u>0:S_{(\gamma,\lambda,\theta,\delta)}(u)>t\}.$$
\begin{remark}\label{rem:lambda-scale-parameter-for-delta=0}
Assume that $\delta=0$. Then the processes
$\{S_{(\gamma,\lambda,\theta,0)}(t):t\geq 0\}$ and
$\{S_{(\gamma,1,\theta,0)}(\lambda t):t\geq 0\}$ are
identically distributed, and therefore
$$\inf\{u>0:S_{(\gamma,\lambda,\theta,0)}(u)>t\}
=\inf\{u>0:S_{(\gamma,1,\theta,0)}(\lambda u)>t\}$$
is distributed as $\frac{T_{(\gamma,1,\theta,0)}(t)}{\lambda}$.
\end{remark}
Our aim is to illustrate an application of the results for inverse processes in
\cite{DuffieldWhitt}; actually we always consider the simple case in which the
functions $u,v,w$ in that reference are all equal to the identity function. We remark
that all the LDPs stated in this section hold with speed $t$; therefore we always
omit this detail.
A naive approach is to try to apply the G\"artner Ellis Theorem to
$\{T_{(\gamma,\lambda,\theta,\delta)}(t)/t:t>0\}$ as $t\to\infty$; in other words, if there
exists (for all $y\in\mathbb{R}$)
\begin{equation}\label{eq:GE-limit-naive-approach}
\lim_{t\to\infty}\frac{1}{t}\log\mathbb{E}[e^{yT_{(\gamma,\lambda,\theta,\delta)}(t)}]
=\Lambda_{(\gamma,\lambda,\theta,\delta)}(y)
\end{equation}
as an extended real number, and the function $\Lambda_{(\gamma,\lambda,\theta,\delta)}$
satisfies some conditions, we can say that $\{T_{(\gamma,\lambda,\theta,\delta)}(t)/t:t>0\}$
satisfies the LDP with good rate function
$\Lambda_{(\gamma,\lambda,\theta,\delta)}^*$ defined by
$$\Lambda_{(\gamma,\lambda,\theta,\delta)}^*(x):=\sup_{y\in\mathbb{R}}\{xy-\Lambda_{(\gamma,\lambda,\theta,\delta)}(y)\}.$$
Unfortunately, in general, the moment generating function
$\mathbb{E}[e^{yT_{(\gamma,\lambda,\theta,\delta)}(t)}]$ is not available.
The approach based on the results in \cite{DuffieldWhitt} allows us to overcome
this problem. In order to do so, we have to consider the LDP of
$\{S_{(\gamma,\lambda,\theta,\delta)}(t)/t:t>0\}$ as $t\to\infty$, and this can be done by
considering an application of the G\"artner Ellis Theorem because the moment generating function
$\mathbb{E}[e^{yS_{(\gamma,\lambda,\theta,\delta)}(t)}]$ is available. In fact we have
\begin{equation}\label{eq:GE-limit-subordinator}
\lim_{t\to\infty}\frac{1}{t}\log\mathbb{E}[e^{yS_{(\gamma,\lambda,\theta,\delta)}(t)}]
=\kappa_{(\gamma,\lambda,\theta,\delta)}(y)\ (\mbox{for all}\ y\in\mathbb{R}),
\end{equation}
where the function $\kappa_{(\gamma,\lambda,\theta,\delta)}$ has been introduced in Section
\ref{sec:preliminaries-BCCP}; moreover, if the function $\kappa_{(\gamma,\lambda,\theta,\delta)}$
satisfies some conditions (see the case $\theta>0$ below), the LDP holds with good rate function
$\kappa_{(\gamma,\lambda,\theta,\delta)}^*$ defined by \eqref{eq:Legendre-transform-kappa}.
Then we can apply the results in \cite{DuffieldWhitt} and we have the following claims.
\begin{claim}\label{claim:DW1}
By Theorem 1(i) in \cite{DuffieldWhitt}, $\{T_{(\gamma,\lambda,\theta,\delta)}(t)/t:t>0\}$
satisfies the LDP with good rate function $\Psi_{(\gamma,\lambda,\theta,\delta)}$ defined by
$$\Psi_{(\gamma,\lambda,\theta,\delta)}(x)=x\kappa_{(\gamma,\lambda,\theta,\delta)}^*(1/x)$$
for $x>0$, $\Psi_{(\gamma,\lambda,\theta,\delta)}(0)=\lim_{x\to 0^+}\Psi_{(\gamma,\lambda,\theta,\delta)}(x)$,
and $\Psi_{(\gamma,\lambda,\theta,\delta)}(x)=\infty$ for $x<0$.
\end{claim}
\begin{claim}\label{claim:DW2}
By Theorem 3(ii) in \cite{DuffieldWhitt} (note that the function $I$ in that reference
coincides with $\kappa_{(\gamma,\lambda,\theta,\delta)}^*$ in this paper) condition
\eqref{eq:GE-limit-naive-approach} holds for $y<\kappa_{(\gamma,\lambda,\theta,\delta)}^*(0)$
and we have
\begin{equation}\label{eq:*}
\Lambda_{(\gamma,\lambda,\theta,\delta)}(y)=\sup_{x\in\mathbb{R}}\{xy-\Psi_{(\gamma,\lambda,\theta,\delta)}(x)\};
\end{equation}
moreover we also have
$$\kappa_{(\gamma,\lambda,\theta,\delta)}^*(0)=-\lim_{y\to-\infty}\kappa_{(\gamma,\lambda,\theta,\delta)}(y)=
\left\{\begin{array}{ll}
\infty&\ \mbox{if}\ \gamma\in(0,1)\ \mbox{and}\ \delta\geq 0\\
\frac{1}{\delta}\log(1+\lambda\delta\theta^\gamma)&\ \mbox{if}\ \gamma\in(-\infty,0)\ \mbox{and}\ \delta>0\\
\lambda\theta^\gamma&\ \mbox{if}\ \gamma\in(-\infty,0)\ \mbox{and}\ \delta=0.
\end{array}\right.$$
\end{claim}
We also recall that, for all $\delta\geq 0$, we have $\kappa_{(\gamma,\lambda,\theta,\delta)}^*(x)=0$ if and
only if $x=\kappa_{(\gamma,\lambda,\theta,\delta)}^\prime(0)=\lambda\hspace{1.0pt}\sgn{\gamma}\gamma\theta^{\gamma-1}$;
thus, by the definition of $\Psi_{(\gamma,\lambda,\theta,\delta)}$, we have
$\Psi_{(\gamma,\lambda,\theta,\delta)}(x)=0$ if and only if
$x=(\kappa_{(\gamma,\lambda,\theta,\delta)}^\prime(0))^{-1}$.
We shall discuss the case $\theta=0$ in Section \ref{sub:theta=0} and the case $\theta>0$ in Section
\ref{sub:theta>0}. Finally, in Section \ref{sub:GW}, we shall compute the function
$\Lambda_{(\gamma,\lambda,\theta,\delta)}$ in \eqref{eq:*} when $\delta=0$.
\subsection{Case $\theta=0$}\label{sub:theta=0}
In this case we cannot apply the G\"artner Ellis Theorem to obtain the LDP of $\{S_{(\gamma,\lambda,0,\delta)}(t)/t:t>0\}$
as $t\to\infty$; in fact $\kappa_{(\gamma,\lambda,0,\delta)}$ is not finite in a neighborhood of the
origin. We recall that we only have $\gamma\in(0,1)$ when $\theta=0$. We can obtain the LDP of
$\{T_{(\gamma,\lambda,0,\delta)}(t)/t:t>0\}$ as $t\to\infty$ only if $\delta=0$. In fact, if we
consider the Mittag-Leffler function $E_\gamma(x):=\sum_{k=0}^\infty\frac{x^k}{\Gamma(\gamma k+1)}$,
we have
$$\mathbb{E}[e^{yT_{(\gamma,\lambda,0,0)}(t)}]=\mathbb{E}[e^{yT_{(\gamma,1,0,0)}(t)/\lambda}]=E_\gamma\left(\frac{y}{\lambda}t^\gamma\right)$$
(the first equality holds by Remark \ref{rem:lambda-scale-parameter-for-delta=0}; the second
follows by combining Remark \ref{rem:lambda-scale-parameter-for-delta=0} with a well-known
result in the literature for $\lambda=1$, see e.g. eq. (24) in \cite{MainardiMuraPagnini}
and, for the case $y\leq 0$, eq. (16) in \cite{Bingham}). Then, by taking
into account the asymptotic behavior of the Mittag-Leffler function as its argument tends to infinity
(see e.g. eq. (1.8.27) in \cite{KilbasSrivastavaTrujillo}), we have
\begin{equation}\label{eq:GE-limit-for-inverse-particular-case}
\lim_{t\to\infty}\frac{1}{t}\log\mathbb{E}[e^{yT_{(\gamma,\lambda,0,0)}(t)}]
=\left\{\begin{array}{ll}
(y/\lambda)^{1/\gamma}&\ \mbox{if}\ y\geq 0\\
0&\ \mbox{if}\ y<0
\end{array}\right.=:\Lambda_{(\gamma,\lambda,0,0)}(y).
\end{equation}
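More precisely, one can use the standard expansions $E_\gamma(x)\sim\frac{1}{\gamma}\mathrm{e}^{x^{1/\gamma}}$ as $x\to+\infty$ and $E_\gamma(x)\sim\frac{1}{|x|\Gamma(1-\gamma)}$ as $x\to-\infty$ (valid for $\gamma\in(0,1)$): for $y>0$ one gets
$$\frac{1}{t}\log E_\gamma\left(\frac{y}{\lambda}t^\gamma\right)=\frac{1}{t}\left(\left(\frac{y}{\lambda}\right)^{1/\gamma}t+O(1)\right)\to(y/\lambda)^{1/\gamma},$$
while for $y<0$ the quantity $E_\gamma\left(\frac{y}{\lambda}t^\gamma\right)$ decays only algebraically in $t$, so that the limit is zero; the case $y=0$ is trivial because $E_\gamma(0)=1$.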
Thus we are in the situation described after the limit in \eqref{eq:GE-limit-naive-approach}. Then
the G\"artner Ellis Theorem yields the LDP of $\{T_{(\gamma,\lambda,0,0)}(t)/t:t>0\}$ as $t\to\infty$ with good
rate function $\Psi_{(\gamma,\lambda,0,0)}:=\Lambda_{(\gamma,\lambda,0,0)}^*$, i.e.
\begin{equation}\label{eq:Psi-for-theta=delta=0}
\Psi_{(\gamma,\lambda,0,0)}(x):=\sup_{y\in\mathbb{R}}\{xy-\Lambda_{(\gamma,\lambda,0,0)}(y)\}
=\left\{\begin{array}{ll}
\lambda^{1/(1-\gamma)}(\gamma^{\gamma/(1-\gamma)}-\gamma^{1/(1-\gamma)})x^{1/(1-\gamma)}&\ \mbox{if}\ x\geq 0\\
\infty&\ \mbox{if}\ x<0;
\end{array}\right.
\end{equation}
moreover, as one would expect (noting that $\Lambda_{(\gamma,\lambda,0,0)}^\prime(0)=0$), we
have $\Lambda_{(\gamma,\lambda,0,0)}^*(x)=0$ if and only if $x=0$.
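For completeness we indicate the computation behind \eqref{eq:Psi-for-theta=delta=0}. For $x>0$ the supremum can be restricted to $y\geq 0$ (for $y<0$ the argument of the supremum is $xy<0$), and the derivative of $y\mapsto xy-(y/\lambda)^{1/\gamma}$ vanishes at $y^*=\lambda(\gamma\lambda x)^{\gamma/(1-\gamma)}$; substituting $y^*$ gives
$$xy^*-\left(\frac{y^*}{\lambda}\right)^{1/\gamma}=\left(\gamma^{\gamma/(1-\gamma)}-\gamma^{1/(1-\gamma)}\right)(\lambda x)^{1/(1-\gamma)},$$
which is the expression in \eqref{eq:Psi-for-theta=delta=0}; for $x<0$ one lets $y\to-\infty$ to see that the supremum is infinite.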
\subsection{Case $\theta>0$}\label{sub:theta>0}
In this case we can apply the G\"artner Ellis Theorem by considering the limit in
\eqref{eq:GE-limit-subordinator}; in fact the function $\kappa_{(\gamma,\lambda,\theta,\delta)}(y)$
is finite in a neighborhood of the origin. However we cannot provide an explicit expression of
$\kappa_{(\gamma,\lambda,\theta,\delta)}^*$ and $\Psi_{(\gamma,\lambda,\theta,\delta)}$ when $\delta>0$.
By contrast, this is feasible when $\delta=0$. In fact, after some straightforward computations, we get:
$$\kappa_{(\gamma,\lambda,\theta,0)}^*(x)=x\left(\theta-\left(\frac{x}{\lambda\hspace{1.0pt}\sgn{\gamma}\gamma}\right)^{1/(\gamma-1)}\right)
-\lambda\hspace{1.0pt}\sgn{\gamma}\left(\theta^\gamma-
\left(\frac{x}{\lambda\hspace{1.0pt}\sgn{\gamma}\gamma}\right)^{\gamma/(\gamma-1)}\right)\ (\mbox{for all}\ x>0)$$
and $\kappa_{(\gamma,\lambda,\theta,\delta)}^*(0)=\infty$;
\begin{equation}\label{eq:Psi-for-delta=0}
\Psi_{(\gamma,\lambda,\theta,0)}(x)=\theta-(\lambda\hspace{1.0pt}\sgn{\gamma}\gamma x)^{1/(1-\gamma)}
+\lambda\hspace{1.0pt}\sgn{\gamma}x\left((\lambda\hspace{1.0pt}\sgn{\gamma}\gamma x)^{\gamma/(1-\gamma)}
-\theta^\gamma\right)\ (\mbox{for all}\ x\geq 0).
\end{equation}
Moreover, in particular, the right derivative of $\Psi_{(\gamma,\lambda,\theta,0)}$ at $y=0$ is
\begin{equation}\label{eq:right-derivative-of-Psi-at-zero}
\Psi_{(\gamma,\lambda,\theta,0)}^\prime(0)=\left\{\begin{array}{ll}
-\lambda\theta^\gamma&\ \mbox{if}\ \gamma\in(0,1)\\
-\infty&\ \mbox{if}\ \gamma\in(-\infty,0).
\end{array}\right.
\end{equation}
We also remark that, when $\gamma\in(0,1)$, the expression of $\Psi_{(\gamma,\lambda,\theta,0)}$ in
\eqref{eq:Psi-for-delta=0} yields
\begin{equation}\label{eq:psi-theta=0-and-theta>0}
\Psi_{(\gamma,\lambda,\theta,0)}(x)=\theta+\Psi_{(\gamma,\lambda,0,0)}(x)-\lambda\theta^\gamma x\ (\mbox{for all}\ x\geq 0),
\end{equation}
where $\Psi_{(\gamma,\lambda,0,0)}$ is the function computed for the case $\theta=0$ (see
\eqref{eq:Psi-for-theta=delta=0}).
\subsection{The function $\Lambda_{(\gamma,\lambda,\theta,\delta)}$ in \eqref{eq:*} for $\delta=0$}\label{sub:GW}
We restrict the attention to the case $\delta=0$ because we have an explicit expression
of $\kappa_{(\gamma,\lambda,\theta,0)}^*$ and therefore of
$\Psi_{(\gamma,\lambda,\theta,0)}$. Thus we refer to $\Psi_{(\gamma,\lambda,\theta,0)}$
in \eqref{eq:Psi-for-delta=0}; that formula is stated for $\theta>0$ but, by
\eqref{eq:psi-theta=0-and-theta>0}, it holds even if $\theta=0$ (and we recall that only
$\gamma\in(0,1)$ is allowed in this case).
So we have
$$\Lambda_{(\gamma,\lambda,\theta,0)}(y)=\sup_{x\geq 0}\{xy-\Psi_{(\gamma,\lambda,\theta,0)}(x)\}$$
where, if we consider the positive constant $c_\gamma$ defined by
$$c_\gamma:=\left\{\begin{array}{ll}
\gamma^{\gamma/(1-\gamma)}-\gamma^{1/(1-\gamma)}&\ \mbox{if}\ \gamma\in(0,1)\\
(-\gamma)^{\gamma/(1-\gamma)}+(-\gamma)^{1/(1-\gamma)}&\ \mbox{if}\ \gamma\in(-\infty,0),
\end{array}\right.$$
for all $x\geq 0$ we have
$$\Psi_{(\gamma,\lambda,\theta,0)}(x)=\left\{\begin{array}{ll}
\theta-\lambda\theta^\gamma x+\lambda^{1/(1-\gamma)}c_\gamma x^{1/(1-\gamma)}&\ \mbox{if}\ \gamma\in(0,1)\\
\theta+\lambda\theta^\gamma x-\lambda^{1/(1-\gamma)}c_\gamma x^{1/(1-\gamma)}&\ \mbox{if}\ \gamma\in(-\infty,0).
\end{array}\right.$$
Then we can state some results in the following lemma. Note that formula \eqref{eq:Lambda-gamma-positivo}
below with $\theta=0$ agrees with the expression of the limit in \eqref{eq:GE-limit-for-inverse-particular-case}.
\begin{lemma}\label{lem:GW}
We have:
\begin{equation}\label{eq:Lambda-gamma-positivo}
\Lambda_{(\gamma,\lambda,\theta,0)}(y)=\left\{\begin{array}{ll}
-\theta&\ \mbox{if}\ y<-\lambda\theta^\gamma\\
\left(\theta^\gamma+\frac{y}{\lambda}\right)^{1/\gamma}-\theta&\ \mbox{if}\ y\geq-\lambda\theta^\gamma
\end{array}\right.\ \mbox{for}\ \gamma\in(0,1),
\end{equation}
\begin{equation}\label{eq:Lambda-gamma-negativo}
\Lambda_{(\gamma,\lambda,\theta,0)}(y)=\left\{\begin{array}{ll}
\left(\theta^\gamma-\frac{y}{\lambda}\right)^{1/\gamma}-\theta&\ \mbox{if}\ y<\lambda\theta^\gamma\\
\infty&\ \mbox{if}\ y\geq\lambda\theta^\gamma
\end{array}\right.\ \mbox{for}\ \gamma\in(-\infty,0),
\end{equation}
and, in both cases,
$$\Psi_{(\gamma,\lambda,\theta,0)}(x)=\sup_{y\in\mathbb{R}}\{xy-\Lambda_{(\gamma,\lambda,\theta,0)}(y)\}.$$
\end{lemma}
\begin{proof}
All the results can be proved with some standard computations. The details are omitted.
\end{proof}
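To give the flavor of these computations, we verify the last claim for $\gamma\in(0,1)$ and $x\geq 0$: by \eqref{eq:Lambda-gamma-positivo}, the derivative of $y\mapsto xy-\Lambda_{(\gamma,\lambda,\theta,0)}(y)$ vanishes at $y^*=\lambda\left((\gamma\lambda x)^{\gamma/(1-\gamma)}-\theta^\gamma\right)\geq-\lambda\theta^\gamma$, and
$$xy^*-\left(\theta^\gamma+\frac{y^*}{\lambda}\right)^{1/\gamma}+\theta=\theta-\lambda\theta^\gamma x+\lambda^{1/(1-\gamma)}c_\gamma x^{1/(1-\gamma)}=\Psi_{(\gamma,\lambda,\theta,0)}(x);$$
for $x<0$ one lets $y\to-\infty$ (where $\Lambda_{(\gamma,\lambda,\theta,0)}(y)=-\theta$) and obtains the value $\infty$, in agreement with Claim \ref{claim:DW1}.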
We conclude with another remark concerning both cases $\gamma\in(0,1)$ and $\gamma\in(-\infty,0)$.
Note that equation \eqref{eq:GW} in the following remark has some analogies with equations (12)-(13)
in \cite{GlynnWhitt}, where the authors deal with counting processes (which are particular
non-decreasing processes).
\begin{remark}\label{rem:GW}
For all $x\geq 0$ we have
$$x\kappa_{(\gamma,\lambda,\theta,0)}^*(1/x)=x\sup_{y\leq\theta}\{y/x-\kappa_{(\gamma,\lambda,\theta,0)}(y)\}
=\sup_{y\leq\theta}\{y-x\kappa_{(\gamma,\lambda,\theta,0)}(y)\};$$
moreover, with the change of variable $z=-\kappa_{(\gamma,\lambda,\theta,0)}(y)$
and setting
$$\mathcal{I}:=(-\kappa_{(\gamma,\lambda,\theta,0)}(\theta),-\kappa_{(\gamma,\lambda,\theta,0)}(-\infty)),$$
then
$$x\kappa_{(\gamma,\lambda,\theta,0)}^*(1/x)=
\sup_{z\in\mathcal{I}}\left\{\kappa_{(\gamma,\lambda,\theta,0)}^{-1}(-z)+xz\right\}=
\sup_{z\in\mathcal{I}}\left\{xz-(-\kappa_{(\gamma,\lambda,\theta,0)}^{-1}(-z))\right\}.$$
Thus $\Psi_{(\gamma,\lambda,\theta,0)}$ can be seen as the Legendre transform of
the function
\begin{equation}\label{eq:GW}
z\mapsto\tilde{\Psi}(z):=-\kappa_{(\gamma,\lambda,\theta,0)}^{-1}(-z),
\end{equation}
where $z$ ranges over a suitable set on which the inverse function is well defined.
In fact we have
$$\mathcal{I}=\left\{\begin{array}{ll}
(-\lambda\theta^\gamma,\infty)&\ \mbox{if}\ \gamma\in(0,1)\\
(-\infty,\lambda\theta^\gamma)&\ \mbox{if}\ \gamma\in(-\infty,0).
\end{array}\right.$$
and, in both cases, $\mathcal{I}$ is the interval on which the function $\Lambda_{(\gamma,\lambda,\theta,0)}$
is finite and strictly increasing (see \eqref{eq:Lambda-gamma-positivo} and
\eqref{eq:Lambda-gamma-negativo}).
\end{remark}
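As an illustration of Remark \ref{rem:GW}, assume that for $\gamma\in(0,1)$ and $\delta=0$ the function $\kappa_{(\gamma,\lambda,\theta,0)}$ introduced in Section \ref{sec:preliminaries-BCCP} has the explicit form $\kappa_{(\gamma,\lambda,\theta,0)}(y)=\lambda\left(\theta^\gamma-(\theta-y)^\gamma\right)$ for $y\leq\theta$ (this form is consistent with the values of $\kappa_{(\gamma,\lambda,\theta,0)}(\theta)$, $\kappa_{(\gamma,\lambda,\theta,0)}^\prime(0)$ and $\lim_{y\to-\infty}\kappa_{(\gamma,\lambda,\theta,0)}(y)$ used above). Then solving $\kappa_{(\gamma,\lambda,\theta,0)}(y)=-z$ gives
$$\tilde{\Psi}(z)=-\kappa_{(\gamma,\lambda,\theta,0)}^{-1}(-z)=\left(\theta^\gamma+\frac{z}{\lambda}\right)^{1/\gamma}-\theta\quad\mbox{for}\ z\in\mathcal{I}=(-\lambda\theta^\gamma,\infty),$$
which coincides with $\Lambda_{(\gamma,\lambda,\theta,0)}$ in \eqref{eq:Lambda-gamma-positivo} on $\mathcal{I}$, as the duality in Remark \ref{rem:GW} suggests.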
\section{Large deviations for time-changes with inverse processes}\label{sec:LD-time-changes}
The aim of this section is to present some possible applications of the G\"artner Ellis Theorem
in order to obtain LDPs for $\{X(T_{(\gamma,\lambda,\theta,\delta)}(t))/t:t>0\}$, when
$\{X(t):t\geq 0\}$ is an $\mathbb{R}^h$-valued process (for some integer
$h\geq 1$) which satisfies some suitable hypotheses and is independent of
$\{T_{(\gamma,\lambda,\theta,\delta)}(t):t\geq 0\}$. Actually, since we want to refer to the
contents of Sections \ref{sub:theta=0} and \ref{sub:theta>0}, in this section we always
restrict the attention to the case $\delta=0$. Moreover, all the LDPs stated in this section
hold with speed $t$; therefore we always omit this detail (as we did in Section
\ref{sec:LD-inverse-processes}).
The simplest case is when $\{X(t):t\geq 0\}$ is a L\'evy process; in fact we have
$$\mathbb{E}[e^{\langle\eta,X(t)\rangle}]=e^{t\Lambda_X(\eta)}\ (\mbox{for all}
\ \eta\in\mathbb{R}^h),\ \mbox{where}\ \Lambda_X(\eta):=\log\mathbb{E}[e^{\langle\eta,X(1)\rangle}].$$
In this case the application of the G\"artner Ellis Theorem works well when $\{X(t):t\geq 0\}$ is
a light tailed process, i.e. the function $\Lambda_X$ is finite in a neighborhood of the origin.
A more general situation concerns additive functionals of Markov processes (here we recall
\cite{Veretennikov} as a reference with results based on the G\"artner Ellis Theorem); however,
for simplicity, here we refer to the case of Markov additive processes (see e.g.
\cite{AsmussenAlbrecher}, Chapter 3, Section 4; actually the presentation in that reference
concerns the case $h=1$). We have a Markov additive process $\{(J(t),X(t)):t\geq 0\}$ if, for
some set $E$, it is an $E\times\mathbb{R}^h$-valued Markov process with suitable properties; in
particular $\{J(t):t\geq 0\}$ is a Markov process. We refer to the continuous time case with a
finite state space $E$ for $\{J(t):t\geq 0\}$; see e.g. \cite{AsmussenAlbrecher}, page 55. We
also assume that $\{J(t):t\geq 0\}$ is irreducible and, for simplicity, that
$\mathbb{E}[e^{\langle\eta,X(t)\rangle}]<\infty$ for all $\eta\in\mathbb{R}^h$.
Then, as a consequence of Proposition 4.4 in Chapter 3 in \cite{AsmussenAlbrecher}, we have
$$\min_{i\in E}h_i(\eta)e^{t\Lambda_X(\eta)}\leq\mathbb{E}[e^{\langle\eta,X(t)\rangle}]\leq\max_{i\in E}h_i(\eta)e^{t\Lambda_X(\eta)}$$
where $e^{t\Lambda_X(\eta)}$ is a suitable simple and positive eigenvalue and
$(h_i(\eta))_{i\in E}$ is a positive eigenvector (these items can be found by a suitable
application of the Perron Frobenius Theorem).
Now we are ready to illustrate the applications of the G\"artner Ellis Theorem which
provide the LDP for $\{X(T_{(\gamma,\lambda,\theta,0)}(t))/t:t>0\}$ with rate function
$H_{(\gamma,\lambda,\theta,0)}$, say. In particular we can have a trapping and delaying
effect for $\theta=0$ (see Remark \ref{rem:trapping-delaying}), and a possible rushing
effect for $\theta>0$; we recall that a recent reference with this kind of analysis for
time-changed processes is \cite{CapitanelliDovidio}, even though the approach in the
present paper differs from the one in that reference. We also give some comments on the
behavior of $H_{(\gamma,\lambda,\theta,0)}(x)$ around the origin $x=0$ for $h=1$; this
will be done for both cases $\theta=0$ and $\theta>0$, and we will see that the left and
right derivatives at $x=0$ (denoted by $D_-H_{(\gamma,\lambda,\theta,0)}(0)$ and
$D_+H_{(\gamma,\lambda,\theta,0)}(0)$, respectively) can differ.
\subsection{Case $\theta=0$}\label{sub:theta=0-tc}
Here we refer to the content of Section \ref{sub:theta=0}. We also recall that we only have
$\gamma\in(0,1)$ when $\theta=0$. Then, after some standard computations (with a conditional
expectation with respect to the independent random time-change), we get
\begin{multline*}
\lim_{t\to\infty}\frac{1}{t}\log\mathbb{E}[e^{\langle\eta,X(T_{(\gamma,\lambda,0,0)}(t))\rangle}]=
\lim_{t\to\infty}\frac{1}{t}\log E_\gamma\left(\frac{\Lambda_X(\eta)}{\lambda}t^\gamma\right)\\
=\left\{\begin{array}{ll}
(\Lambda_X(\eta)/\lambda)^{1/\gamma}&\ \mbox{if}\ \Lambda_X(\eta)\geq 0\\
0&\ \mbox{if}\ \Lambda_X(\eta)<0
\end{array}\right.=\Lambda_{(\gamma,\lambda,0,0)}(\Lambda_X(\eta)),
\end{multline*}
where $\Lambda_{(\gamma,\lambda,0,0)}(\cdot)$ is the function in \eqref{eq:GE-limit-for-inverse-particular-case}
(see also \eqref{eq:Lambda-gamma-positivo} with $\theta=0$).
Then, under suitable hypotheses, by the G\"artner Ellis Theorem, $\{X(T_{(\gamma,\lambda,0,0)}(t))/t:t>0\}$
satisfies the LDP with good rate function $H_{(\gamma,\lambda,0,0)}$ defined by
$$H_{(\gamma,\lambda,0,0)}(x):=\sup_{\eta\in\mathbb{R}^h}\{\langle\eta,x\rangle-\Lambda_{(\gamma,\lambda,0,0)}(\Lambda_X(\eta))\}.$$
We can say that $H_{(\gamma,\lambda,0,0)}(x)=0$ if and only if $x=\Lambda_{(\gamma,\lambda,0,0)}^\prime(\Lambda_X(0))\nabla\Lambda_X(0)$;
thus, since $\Lambda_X(0)=0$ and $\Lambda_{(\gamma,\lambda,0,0)}^\prime(0)=0$,
we have $H_{(\gamma,\lambda,0,0)}(x)=0$ if and only if $x=0$, whatever the value of
$\nabla\Lambda_X(0)$.
\begin{remark}\label{rem:trapping-delaying}
We can say that $\frac{X(T_{(\gamma,\lambda,0,0)}(t))}{t}$ converges to zero as $t\to\infty$
(at least in probability; see Remark \ref{rem:convergence-under-GET} for a discussion of the
almost sure convergence of $\frac{X(T_{(\gamma,\lambda,0,0)}(t_n))}{t_n}$ along a sequence
$\{t_n:n\geq 1\}$ such that $t_n\to\infty$), and this happens whatever the limit
$\nabla\Lambda_X(0)$ of $\frac{X(t)}{t}$ may be. This fact is not surprising because random
time-changes with $\{T_{(\gamma,\lambda,0,0)}(t):t\geq 0\}$ typically give rise to a sort of
trapping and delaying effect.
\end{remark}
We conclude with some statements for the case $h=1$. In what follows we consider certain inequalities;
however, similar statements hold if the inequalities are reversed. We assume that $\Lambda_X^\prime(0)>0$.
\begin{itemize}
\item If there exists $\eta_0<0$ such that $\Lambda_X(\eta_0)=0$ (note that this condition can occur because
$\Lambda_X$ is convex and $\Lambda_X(0)=0$), we can say that $D_-H_{(\gamma,\lambda,0,0)}(0)=\eta_0$ and
$D_+H_{(\gamma,\lambda,0,0)}(0)=0$.
\item By contrast, if $\Lambda_X$ is strictly increasing (and therefore vanishes only at $\eta=0$), we have
$H_{(\gamma,\lambda,0,0)}(x)=\infty$ for all $x<0$ instead of $D_-H_{(\gamma,\lambda,0,0)}(0)=\eta_0$.
\end{itemize}
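As a simple example for the first item, consider a Brownian motion with positive drift, $X(t)=\mu t+B(t)$ with $\mu>0$, for which
$$\Lambda_X(\eta)=\mu\eta+\frac{\eta^2}{2};$$
then $\Lambda_X^\prime(0)=\mu>0$ and $\Lambda_X$ vanishes at $\eta=0$ and at $\eta_0=-2\mu<0$, so that $D_-H_{(\gamma,\lambda,0,0)}(0)=-2\mu$ and $D_+H_{(\gamma,\lambda,0,0)}(0)=0$.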
\subsection{Case $\theta>0$}\label{sub:theta>0-tc}
Here we refer to the content of Section \ref{sub:theta>0}. We start with the same standard computations
considered in Section \ref{sub:theta=0}, but here we cannot refer to \eqref{eq:GE-limit-for-inverse-particular-case}.
In fact in this case we refer to Claim \ref{claim:DW2} in order to have the limit \eqref{eq:GE-limit-naive-approach}
for all $y\in\mathbb{R}$; so, as stated in Claim \ref{claim:DW2}, we take $\gamma\in(0,1)$ in order to
have $\kappa_{(\gamma,\lambda,\theta,\delta)}^*(0)=\infty$. Then we get
$$\lim_{t\to\infty}\frac{1}{t}\log\mathbb{E}[e^{\langle\eta,X(T_{(\gamma,\lambda,\theta,0)}(t))\rangle}]
=\Lambda_{(\gamma,\lambda,\theta,0)}(\Lambda_X(\eta));$$
moreover $\Lambda_{(\gamma,\lambda,\theta,0)}(\cdot)$ is given by \eqref{eq:Lambda-gamma-positivo}.
Then, under suitable hypotheses, by the G\"artner Ellis Theorem,
$\{X(T_{(\gamma,\lambda,\theta,0)}(t))/t:t>0\}$ satisfies the LDP with good rate function
$H_{(\gamma,\lambda,\theta,0)}$ defined by
$$H_{(\gamma,\lambda,\theta,0)}(x):=\sup_{\eta\in\mathbb{R}^h}\{\langle\eta,x\rangle-\Lambda_{(\gamma,\lambda,\theta,0)}(\Lambda_X(\eta))\}.$$
We can say that $H_{(\gamma,\lambda,\theta,0)}(x)=0$ if and only if $x=\Lambda_{(\gamma,\lambda,\theta,0)}^\prime(\Lambda_X(0))\nabla\Lambda_X(0)$;
thus, since $\Lambda_X(0)=0$ and $\Lambda_{(\gamma,\lambda,\theta,0)}^\prime(0)=\frac{\theta^{1-\gamma}}{\lambda\gamma}$ (note that
$\Lambda_{(\gamma,\lambda,\theta,0)}^\prime(0)=(\kappa_{(\gamma,\lambda,\theta,0)}^\prime(0))^{-1}$ as one would expect), we have
$H_{(\gamma,\lambda,\theta,0)}(x)=0$ if and only if $x=\frac{\theta^{1-\gamma}}{\lambda\gamma}\nabla\Lambda_X(0)$.
So $X(T_{(\gamma,\lambda,\theta,0)}(t))/t$ converges to a limit that depends on $\nabla\Lambda_X(0)$, and we have a possible rushing effect.
We conclude with some statements for the case $h=1$. In what follows we consider certain inequalities;
however, similar statements hold if the inequalities are reversed.
\begin{itemize}
\item If there exist $\eta_1<\eta_2<0$ such that $\Lambda_X(\eta_1)=\Lambda_X(\eta_2)=-\lambda\theta^\gamma$
(and this happens if $\Lambda_X^\prime(0)>0$), then $D_-H_{(\gamma,\lambda,\theta,0)}(0)=\eta_1$ and
$D_+H_{(\gamma,\lambda,\theta,0)}(0)=\eta_2$.
\item By contrast, if there exists a unique $\eta_0<0$ such that $\Lambda_X(\eta_0)=-\lambda\theta^\gamma$
(and this could happen if $\Lambda_X$ is strictly increasing) we have $D_+H_{(\gamma,\lambda,\theta,0)}(0)=\eta_0$
and $H_{(\gamma,\lambda,\theta,0)}(x)=\infty$ for $x<0$.
\end{itemize}
\begin{remark}\label{rem:HversusPsi}
Note that $H_{(\gamma,\lambda,\theta,0)}$ coincides with $\Psi_{(\gamma,\lambda,\theta,0)}$ when we have
$X(t)=t$ for all $t\geq 0$. In such a case $\Lambda_X(\eta)=\eta$ for all $\eta\in\mathbb{R}$ and
therefore we have $\Lambda_X(\eta_0)=-\lambda\theta^\gamma$ for $\eta_0=-\lambda\theta^\gamma<0$. Thus we get $D_+H_{(\gamma,\lambda,\theta,0)}(0)=-\lambda\theta^\gamma$ and this agrees with the right derivative of
$\Psi_{(\gamma,\lambda,\theta,0)}(y)$ at $y=0$ in \eqref{eq:right-derivative-of-Psi-at-zero} for
$\gamma\in(0,1)$.
\end{remark}
\section{Introduction}
Boundary element method (BEM) are obtained as the discretizations of
boundary boundary integral equations. These arise, for example, when
elliptic partial differential equations are reformulated as integral equations on the
boundary $\Gamma:= \partial\Omega$ of a domain $\Omega \subset \R^d$. A particular strength
of these methods is that they can deal with unbounded exterior domains. Reformulating
an equation posed in a volume as one on its boundary brings about a significant reduction
in complexity. However, the boundary integral operators are nonlocal, so that the resulting
system matrices are fully populated, and
this has sparked the development of various matrix compression techniques. One possibility,
which we will not pursue here,
are wavelet compression techniques,
\cite{rathsfeld98,rathsfeld01,schneider98,petersdorff-schwab-schneider97,tausch03,tausch-white03},
where sparsity of the system matrices results from the choice of basis.
In the present work, we will consider data-sparse matrix formats that are based on blockwise
low-rank matrices. These formats can be traced back to
multipole expansions, \cite{rokhlin85,greengard-rokhlin97},
panel clustering, \cite{hackbusch-nowak88,hackbusch-nowak89,hackbusch-sauter93,sauter92},
and were then further developed in the mosaic-skeleton method, \cite{tyrtyshnikov00},
the adaptive cross approximation (ACA)
method, \cite{bebendorf00}, and the hybrid cross approximation (HCA), \cite{boerm-grasedyck05}.
A fairly general framework for these techniques is given by
the ${\mathcal H}$-matrices, introduced
in \cite{Hackbusch99,GrasedyckHackbusch,GrasedyckDissertation,HackbuschBuch}
and the $\H^2$-matrices, \cite{HackbuschKhoromskijSauter,Boerm,BoermBuch}. Both $\H$- and
$\H^2$-matrices come with
an (approximate) arithmetic and thus provide the possibility of
(approximately) inverting or factorizing a BEM matrix; also, algebraic approaches
to the design of preconditioners for boundary element discretizations, both
for operators of positive and of negative order, are available within this framework.
Empirically, it has already been observed in \cite{GrasedyckDissertation,Bebendorf05} that such
an approach works well in practice.
Mathematically, the fundamental question in connection with the $\H$-matrix arithmetic is whether the desired
result, i.e., the inverse (or a factorization such as an $LU$- or Cholesky factorization), can
be represented accurately in near optimal complexity in this format. This question is
answered in the affirmative in the present work for discretizations of the hyper-singular integral
operator associated with the Laplace operator. In previous work, we showed similar existence results for
FEM discretizations \cite{FMPFEM} and the discretization of the
single layer operator, \cite{FMPBEM}. Compared to the symmetric positive definite case of the
single layer operator studied in \cite{FMPBEM}, the hyper-singular operator on closed surfaces has
a one-dimensional kernel and is naturally treated as a (simple) saddle point problem.
We show
in Theorem~\ref{th:Happrox} (cf. also Remark~\ref{rem:saddle-point})
that the inverse of the discretization of this saddle point formulation can be approximated
by blockwise low-rank matrices at an exponential rate in the block rank. A corresponding
approximation result for the discretized version of the stabilized hyper-singular operator
follows then fairly easily in Corollary~\ref{cor:stabGalerkin}. The approximation result
Theorem~\ref{th:Happrox} also underlies our proof that the hierarchical Cholesky factorization
of the stabilized hyper-singular operator admits an efficient representation
in the $\H$-matrix format (Theorem~\ref{th:HLU}).
The approximability problem for the inverses of Galerkin BEM-matrices has previously only
been studied in \cite{FMPBEM} for the single layer operator. In a FEM context, works prior
to \cite{FMPFEM} include \cite{Bebendorf,Bebendorf4,Bebendorf05}, \cite{schreittmiller06}, and
\cite{Boerm}. These works differ from \cite{FMPFEM,FMPBEM} and the present paper in an important
technical aspect: while \cite{FMPFEM,FMPBEM} and the present analysis analyze the discretized
operators and show exponential convergence in the block rank, the above mentioned works
study first low-rank approximations on the continuous level and transfer these to the discrete
level in a final projection step. Therefore, they achieve exponential convergence in the block rank
up to this projection error, which is related to the discretization error.
The paper is structured as follows. In the interest of readability, we have collected
the main result concerning the approximability of the inverse of the discretization of the
saddle point formulation in Section~\ref{sec:main-results}. The mathematical core is
found in Section~\ref{sec:Approximation-solution}, where we study how well solutions of the
(discretized) hyper-singular integral equation can be approximated from low-dimensional spaces
(Theorem~\ref{thm:function-approximationHypSing}). In contrast to \cite{FMPBEM}, which considered
only lowest-order discretization, we consider here arbitrary fixed-order discretizations.
The approximation result of Section~\ref{sec:Approximation-solution} can be translated
to the matrix level, which is done in Section~\ref{sec:H-matrix-approximation}.
Section~\ref{sec:stabGalerkin} shows how the results for the saddle point formulation imply
corresponding ones for the stabilized hyper-singular operator.
Finally, Section~\ref{sec:LU-decomposition} provides the existence of an
approximate $\H$-Cholesky decomposition. We close with numerical examples in
Section~\ref{sec:numerics}.
\medskip
We use standard integer order Sobolev spaces as well as the fractional order Sobolev space $H^{1/2}(\Gamma)$
and its dual $H^{-1/2}(\Gamma)$, as defined in, e.g., \cite{SauterSchwab}.
The notation $\lesssim$ abbreviates $\leq$ up to a
constant $C>0$ that depends only on the domain $\Omega$, the spatial dimension $d$,
the polynomial degree $p$, and the $\gamma$-shape regularity of $\mathcal{T}_h$. It does not, however,
depend on critical parameters such as the mesh size $h$, the dimension of the finite dimensional BEM
space, or the block rank employed.
Moreover, we use $\simeq$ to indicate that both estimates
$\lesssim$ and $\gtrsim$ hold.
\section{Main Result}
\label{sec:main-results}
\subsection{Notation and setting}
Throughout this paper, we assume that $\Omega \subset \R^{d}$, $d \in \{2,3\}$ is a
bounded Lipschitz domain such that $\Gamma:=\partial \Omega$
is polygonal (for $d = 2$) or polyhedral (for $d = 3$).
We assume that $\Gamma$ is connected.
We consider the hyper-singular integral operator $W \in L(H^{1/2}(\Gamma),H^{-1/2}(\Gamma))$ given by
\begin{equation*}
W v(x) = -\gamma_{1,x}^{\text{int}}(\widetilde{K}v)(x) = -\gamma_{1,x}^{\text{int}}\int_{\Gamma}(\gamma_{1,y}^{\text{int}}G(x-y))v(y) ds_y, \quad x \in \Gamma,
\end{equation*}
where $G(x) = -\frac{1}{2\pi} \log\abs{x}$ for $d=2$ and $G(x) = \frac{1}{4\pi}\frac{1}{\abs{x}}$ for $d=3$
is the fundamental solution associated with the Laplacian. Here, the double layer potential
$\widetilde{K} \in L(H^{1/2}(\Gamma),H^{1}(\Omega))$
is given by $\widetilde{K}v(x) := \int_{\Gamma}(\gamma_{1,y}^{\text{int}}G(x-y))v(y) ds_y$,
where $\gamma_{1,z}^{\text{int}}$ denotes the interior conormal derivative at the point $z \in \Gamma$, i.e.,
for the normal vector $n(z)$ at $z \in \Gamma$ pointing into $\Omega^c$ and a sufficiently smooth function $u$
defined in $\Omega$, one has $\gamma_{1,z}^{\text{int}} u = \nabla u(z) \cdot n(z)$.
The hyper-singular integral operator $W$ is symmetric, positive semidefinite on $H^{1/2}(\Gamma)$.
Since $\Gamma$ is connected, $W$ has a one-dimensional kernel given by the constant functions.
In order to deal with this kernel, we can either use factor spaces, stabilize the operator,
or study a saddle point formulation.
In the following, we will employ the latter by adding the side constraint of vanishing mean.
In Section~\ref{sec:stabGalerkin} we will very briefly study the case of the stabilized operator, and
our analysis of Cholesky factorizations in Section~\ref{sec:LU-decomposition} will be performed
for the stabilized operator.
With the bilinear form $b(v,\mu) := \mu \int_{\Gamma}v ds_x$,
we get the saddle point formulation of the boundary integral equation
\begin{equation*}
W\phi = f \quad \text{on} \;\Gamma
\end{equation*}
with arbitrary $f \in H^{-1/2}(\Gamma)$ as finding $(\phi,\lambda) \in H^{1/2}(\Gamma) \times \R$
such that
\begin{subequations}\label{eq:modelHScont}
\begin{align}
\skp{W\phi,\psi} + b(\psi,\lambda) &= \skp{f,\psi} \qquad \forall \psi \in H^{1/2}(\Gamma), \\
b(\phi,\mu) &= 0 \qquad \forall \mu \in \R.
\end{align}
\end{subequations}
By classical saddle-point theory, this problem has a unique solution $(\phi,\lambda) \in H^{1/2}(\Gamma) \times \R$,
since the bilinear form $b$ satisfies an inf-sup condition, and the bilinear form $\skp{W\phi,\psi}$ is coercive
on the kernel of $b(\cdot,\lambda)$, which is just the one-dimensional space of constant functions
(see, e.g., \cite{SauterSchwab}).
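For instance, the inf-sup condition for $b$ can be checked directly with constant test functions: choosing $\psi=\pm 1$ yields
\begin{equation*}
\sup_{\psi\in H^{1/2}(\Gamma)\setminus\{0\}}\frac{b(\psi,\mu)}{\norm{\psi}_{H^{1/2}(\Gamma)}}\geq\frac{\abs{\Gamma}}{\norm{1}_{H^{1/2}(\Gamma)}}\,\abs{\mu}\qquad\forall\mu\in\R.
\end{equation*}
Since the constant function $1$ also belongs to the discrete spaces introduced below, the same choice provides a discrete inf-sup condition that is uniform in the mesh size.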
For the discretization, we assume that $\Gamma$ is triangulated by a (globally) {\it quasiuniform} mesh
${\mathcal T}_h=\{T_1,\dots,T_M\}$ of mesh width $h := \max_{T_j\in \mathcal{T}_h}{\rm diam}(T_j)$.
The elements $T_j \in \mathcal{T}_h$ are open line segments ($d=2$) or triangles ($d=3$).
Additionally, we assume that the mesh $\mathcal{T}_h$ is regular in the sense of Ciarlet and
$\gamma$-shape regular in the sense that for $d=2$ the quotient of the diameters of neighboring elements
is bounded by $\gamma$ and for $d=3$ we have
${\rm diam}(T_j) \le \gamma\,|T_j|^{1/2}$ for all $T_j\in\mathcal{T}_h$, where $|T_j| = \operatorname*{area}(T_j)$
denotes the length/area of the element $T_j$.
We consider the Galerkin discretization of $W$ by continuous, piecewise polynomial functions of
fixed degree $p \geq 1$ in
$S^{p,1}({\mathcal T}_h) := \{u \in C(\Gamma)\, :\, u|_T \in P_p(T) \, \forall T \in \mathcal{T}_h \}$, where
$P_p(T)$ denotes the space of polynomials of maximal degree $p$ on the triangle $T$.
We choose a basis of $S^{p,1}({\mathcal T}_h)$, which is denoted by
${\mathcal B}_h:= \{\psi_j\, :\, j = 1,\dots, N\}$.
Given that our results are formulated for matrices, assumptions on the basis ${\mathcal B}_h$
need to be imposed. For the
isomorphism $\Phi:\R^N\ra S^{p,1}({\mathcal T}_h)$, $\mathbf{x} \mapsto \sum_{j=1}^Nx_j\psi_j$,
we require
\begin{equation}\label{eq:basisisomorphism}
h^{(d-1)/2}\norm{\mathbf{x}}_2 \lesssim \norm{\Phi(\mathbf{x})}_{L^2(\Gamma)} \lesssim h^{(d-1)/2}\norm{\mathbf{x}}_2
\quad \forall\, \mathbf{x} \in \R^N.
\end{equation}
\begin{remark}
{\rm
The standard basis for $p=1$ consists of the classical hat functions satisfying
$\psi_j(x_i) = \delta_{ij}$ and for $p\geq 2$
we refer to, e.g., \cite{SchwabBuch,karniadakis-sherwin99,demkowicz-kurtz-pardo-paszynski-rachowicz-zdunek08}.
These bases satisfy assumption \eqref{eq:basisisomorphism}.}
\eex
\end{remark}
The discrete variational problem is given by finding $(\phi_h,\lambda_h) \in S^{p,1}(\mathcal{T}_h) \times \R$
such that
\begin{eqnarray}\label{eq:modelHS}
\skp{W\phi_h,\psi_h} + b(\psi_h,\lambda_h) &=& \skp{f,\psi_h} \qquad \forall \psi_h \in S^{p,1}(\mathcal{T}_h), \\
b(\phi_h,\mu) &=& 0 \qquad \forall \mu \in \R. \nonumber
\end{eqnarray}
Since the bilinear form $b$ trivially satisfies a discrete inf-sup condition, the discrete problem
is uniquely solvable as well, and one has the stability bounds
\begin{equation}
\label{eq:discrete-stability-1}
\norm{\phi_h}_{H^{1/2}(\Gamma)} + |\lambda_h| \leq C \norm{f}_{H^{-1/2}(\Gamma)},
\end{equation}
for a constant $C > 0$ which depends only on $\Gamma$. For $f \in L^2(\Gamma)$ and the $L^2$-projection
$\Pi^{L^2}:L^2(\Gamma) \rightarrow S^{p,1}(\mathcal{T}_h)$, one even has the following estimate
\begin{equation}
\label{eq:discrete-stability}
\norm{\phi_h}_{H^{1/2}(\Gamma)} + |\lambda_h| \leq C \norm{\Pi^{L^2} f}_{L^{2}(\Gamma)}
\leq C \norm{f}_{L^2(\Gamma)}.
\end{equation}
With the basis $\mathcal{B}_h$, the left-hand side of \eqref{eq:modelHS} leads to the invertible block matrix
\begin{equation}
\label{eq:Wtilde}
\boldsymbol{\mathcal{W}} := \begin{pmatrix} \mathbf{W} & \mathbf{B} \\ \mathbf{B}^T & 0 \end{pmatrix},
\end{equation}
where the matrix $\mathbf{W} \in \R^{N\times N}$ and the vector $\mathbf{B} \in \R^{N\times 1}$ are given by
\begin{equation}\label{eq:matrixHypsing}
\mathbf{W}_{jk} = \skp{W\psi_k,\psi_j}, \quad \mathbf{B}_j = \skp{\psi_j,1}, \quad \psi_k,\psi_j \in \mathcal{B}_h.
\end{equation}
\subsection{Approximation of $\boldsymbol{\mathcal{W}}^{-1}$ by blockwise low-rank matrices}
Our goal is to approximate the inverse matrix $\boldsymbol{\mathcal{W}}^{-1}$ by $\H$-matrices, which
are based on the concept that certain 'admissible' blocks can be approximated by low-rank factorizations.
The following definition specifies for which blocks such a factorization can be derived.
\begin{definition}[bounding boxes and $\eta$-admissibility]
\label{def:admissibility}
A \emph{cluster} $\tau$ is a subset of the index set $\mathcal{I} = \{1,\ldots,N\}$. For a cluster $\tau \subset \mathcal{I}$,
we say that $B_{R_{\tau}} \subset \R^d$
is a \emph{bounding box} if:
\begin{enumerate}[(i)]
\item
\label{item:def:admissibility-i}
$B_{R_{\tau}}$ is a hypercube with side length $R_{\tau}$,
\item
\label{item:def:admissibility-ii}
$ \supp \psi_i \subset B_{R_{\tau}}$ for all $ i \in \tau $.
\end{enumerate}
For an admissibility parameter $\eta > 0$,
a pair of clusters $(\tau,\sigma)$ with $\tau,\sigma \subset \mathcal{I}$
is $\eta$-\emph{admissible} if
there exist bounding boxes $B_{R_{\tau}}$, $B_{R_{\sigma}}$ satisfying
(\ref{item:def:admissibility-i})--(\ref{item:def:admissibility-ii})
such that
\begin{equation}\label{eq:admissibility}
\min\{{\rm diam}(B_{R_{\tau}}),{\rm diam}(B_{R_{\sigma}})\} \leq \eta \; {\rm dist}(B_{R_{\tau}},B_{R_{\sigma}}).
\end{equation}
\end{definition}
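As a concrete example: for $d=3$ and two clusters $\tau,\sigma$ whose bounding boxes are cubes of side length $R_\tau=R_\sigma=1$ with ${\rm dist}(B_{R_{\tau}},B_{R_{\sigma}})=2$, the admissibility condition \eqref{eq:admissibility} reads $\sqrt{3}\leq 2\eta$, so the pair $(\tau,\sigma)$ is $\eta$-admissible for every $\eta\geq\sqrt{3}/2$.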
\begin{definition}[blockwise rank-$r$ matrices]
Let $P$ be a partition of ${\mathcal I} \times {\mathcal I}$ and $\eta>0$.
A matrix ${\mathbf W}_{\mathcal{H}} \in \mathbb{R}^{N \times N}$
is said to be a \emph{blockwise rank-$r$ matrix}, if for every $\eta$-admissible cluster pair $(\tau,\sigma) \in P$,
the block ${\mathbf W}_{\mathcal{H}}|_{\tau \times \sigma}$ is a rank-$r$ matrix, i.e., it has the form
${\mathbf W}_{\mathcal{H}}|_{\tau \times \sigma} = {\mathbf X}_{\tau \sigma} {\mathbf Y}^T_{\tau \sigma}$ with
$\mathbf{X}_{\tau\sigma} \in \mathbb{R}^{\abs{\tau}\times r}$
and $\mathbf{Y}_{\tau\sigma} \in \mathbb{R}^{\abs{\sigma}\times r}$.
Here and below, $\abs{\sigma}$ denotes the cardinality of a finite set $\sigma$.
\end{definition}
\begin{definition}[cluster tree]
A \emph{cluster tree} with \emph{leaf size} $n_{\rm leaf} \in \mathbb{N}$ is a binary tree $\mathbb{T}_{\mathcal{I}}$ with root $\mathcal{I}$
such that for each cluster $\tau \in \mathbb{T}_{\mathcal{I}}$ the following dichotomy holds: either $\tau$ is a leaf of the tree and
$\abs{\tau} \leq n_{\rm leaf}$, or there exist sons $\tau'$, $\tau'' \in \mathbb{T}_{\mathcal{I}}$, which are disjoint subsets of $\tau$ with
$\tau = \tau' \cup \tau''$. The \emph{level function} ${\rm level}: \mathbb{T}_{\mathcal{I}} \rightarrow \mathbb{N}_0$ is inductively defined by
${\rm level}(\mathcal{I}) = 0$ and ${\rm level}(\tau') := {\rm level}(\tau) + 1$ for $\tau'$ a son of $\tau$. The \emph{depth} of a cluster tree
is ${\rm depth}(\mathbb{T}_{\mathcal{I}}) := \max_{\tau \in \mathbb{T}_{\mathcal{I}}}{\rm level}(\tau)$.
\end{definition}
\begin{definition}[far field, near field, and sparsity constant]
A partition $P$ of $\mathcal{I} \times \mathcal{I}$ is said to be based on the cluster tree $\mathbb{T}_{\mathcal{I}}$,
if $P \subset \mathbb{T}_{\mathcal{I}}\times\mathbb{T}_{\mathcal{I}}$. For such a partition $P$
and a fixed admissibility parameter $\eta > 0$, we define the \emph{far field} and the \emph{near field}
as
\begin{equation}\label{eq:farfield}
P_{\rm far} := \{(\tau,\sigma) \in P \; : \; (\tau,\sigma) \; \text{is $\eta$-admissible}\}, \quad P_{\rm near} := P\setminus P_{\rm far}.
\end{equation}
The \emph{sparsity constant} $C_{\rm sp}$ of such a partition
was introduced in \cite{GrasedyckDissertation} as
\begin{equation}\label{eq:sparsityConstant}
C_{\rm sp} := \max\left\{\max_{\tau \in \mathbb{T}_{\mathcal{I}}}\abs{\{\sigma \in \mathbb{T}_{\mathcal{I}} \, : \, \tau \times \sigma \in P_{\rm far}\}},\max_{\sigma \in \mathbb{T}_{\mathcal{I}}}\abs{\{\tau \in \mathbb{T}_{\mathcal{I}} \, : \, \tau \times \sigma \in P_{\rm far}\}}\right\}.
\end{equation}
\end{definition}
The following theorem is the main result of this paper. It states that
the inverse matrix $\boldsymbol{\mathcal{W}}^{-1}$ can be approximated by an $\H$-matrix,
where the approximation error in the spectral norm converges exponentially in the block rank.
\begin{theorem}\label{th:Happrox}
Fix an admissibility parameter $\eta >0$. Let a partition $P$ of $\mathcal{I}\times\mathcal{I}$ be based
on the cluster tree $\mathbb{T}_{\mathcal{I}}$.
Then, there exists a blockwise rank-$r$ matrix $\mathbf{V}_{\H}$ such that
\begin{equation*}
\norm{\boldsymbol{\mathcal{W}}^{-1}|_{N\times N} - \mathbf{V}_{\H}}_2 \leq C_{\rm apx}C_{\rm sp}
{\rm depth}(\mathbb{T}_{\mathcal{I}})
N^{(2d-1)/(2d-2)} e^{-br^{1/(d+1)}}.
\end{equation*}
The constant $C_{\rm apx}$ depends only on $\Omega$, $d$, $p$, and the $\gamma$-shape
regularity of the quasiuniform triangulation $\mathcal{T}_h$, while
the constant $b>0$ additionally depends on $\eta$.
\end{theorem}
\begin{remark}[approximation of inverse of full system]
\label{rem:saddle-point}
{\rm
The previous theorem provides an approximation $\mathbf{V}_{\H}$ to the first
$N\times N$-subblock $\mathbf{V}$ of the matrix
$\boldsymbol{\mathcal{W}}^{-1} = \begin{pmatrix} \mathbf{V} & \mathbf{P} \\ \mathbf{P}^T & 0\end{pmatrix}$.
Since $\mathbf{P} \in \R^{N\times 1}$ is a vector, the matrix
$\widehat{\mathbf{V}}_{\H} = \begin{pmatrix} \mathbf{V}_{\H} & \mathbf{P} \\ \mathbf{P}^T & 0\end{pmatrix}$ is a
blockwise rank-$r$ approximation to the matrix $\boldsymbol{\mathcal{W}}^{-1}$ satisfying
\begin{equation*}
\norm{\boldsymbol{\mathcal{W}}^{-1} - \widehat{\mathbf{V}}_{\H}}_2 \leq
C_{\rm apx}C_{\rm sp} {\rm depth}(\mathbb{T}_{\mathcal{I}})
N^{(2d-1)/(2d-2)} e^{-br^{1/(d+1)}}.
\end{equation*}
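Indeed,
\begin{equation*}
\boldsymbol{\mathcal{W}}^{-1}-\widehat{\mathbf{V}}_{\H}=\begin{pmatrix} \mathbf{V}-\mathbf{V}_{\H} & 0 \\ 0 & 0\end{pmatrix},
\end{equation*}
so that the spectral norm of the difference coincides with $\norm{\mathbf{V}-\mathbf{V}_{\H}}_2$, and the bound is inherited directly from Theorem~\ref{th:Happrox}.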
}\eex
\end{remark}
\begin{remark}[relative errors]
{\rm
In order to derive a bound for the relative error, we need an estimate on
$\norm{\boldsymbol{\mathcal{W}}}_2$, since
$\frac{1}{\norm{{\boldsymbol{\mathcal{W}}}^{-1}}_2} \leq \norm{\boldsymbol{\mathcal{W}}}_2$.
Since $\boldsymbol{\mathcal{W}}$ is symmetric, it suffices to estimate the Rayleigh quotient.
The continuity of the hyper-singular integral operator as well as an inverse inequality, see
Lemma~\ref{lem:inverseinequality} below, and \eqref{eq:basisisomorphism} imply
\begin{eqnarray*}
\skp{\boldsymbol{\mathcal{W}}\begin{pmatrix} \mathbf{v} \\ \lambda \end{pmatrix},\begin{pmatrix}\mathbf{v} \\ \lambda \end{pmatrix}}
&\lesssim& \norm{v}_{H^{1/2}(\Gamma)}^2 + \abs{\lambda \skp{v,1}} \\
&\lesssim& h^{-1}\norm{v}_{L^{2}(\Gamma)}^2 + \abs{\lambda}\norm{v}_{L^2(\Gamma)}
\lesssim h^{d-2}\norm{\begin{pmatrix} \mathbf{v} \\ \lambda \end{pmatrix}}_2^2.
\end{eqnarray*}
Using $h \simeq N^{-1/(d-1)}$, we get a bound for the relative error
\begin{equation}
\frac{\norm{{\boldsymbol{\mathcal{W}}}^{-1} - \widehat{\mathbf{V}}_{\H}}_2}{\norm{\boldsymbol{\mathcal{W}}^{-1}}_2 }
\lesssim C_{\rm apx} C_{\rm sp} N^{(d+1)/(2d-2)} {\rm depth}(\mathbb{T}_{\mathcal{I}}) e^{-br^{1/(d+1)}}.
\end{equation}
}\eex
\end{remark}
\section{Approximation of the potential}
\label{sec:Approximation-solution}
In order to approximate the inverse matrix $\boldsymbol{\mathcal{W}}^{-1}$ by a blockwise low-rank matrix,
we will analyze how well the solution of \eqref{eq:modelHS} can be approximated from low dimensional spaces.
Solving the problem \eqref{eq:modelHS} is equivalent to solving the linear system
\begin{equation}\label{eq:linearsystem}
\begin{pmatrix} \mathbf{W} & \mathbf{B} \\ \mathbf{B}^T & 0 \end{pmatrix} \begin{pmatrix}\mathbf{x} \\ \lambda \end{pmatrix} = \begin{pmatrix} \mathbf{b} \\0\end{pmatrix}
\end{equation}
with $\mathbf{W}$, $\mathbf{B}$ from \eqref{eq:matrixHypsing} and $\mathbf{b} \in \R^N$
defined by $\mathbf b_j = \skp{f,\psi_j}$.
The solution vector $\mathbf{x}$ is linked to the Galerkin solution $\phi_h$ from \eqref{eq:modelHS} via
$\phi_h = \sum_{j=1}^N \mathbf{x}_j\psi_j$.
In this section, we will repeatedly use the $L^2(\Gamma)$-orthogonal projection
$\Pi^{L^2}:L^2(\Gamma)\ra S^{p,1}(\mathcal{T}_h)$ onto $S^{p,1}(\mathcal{T}_h)$, which, we recall, is defined by
\begin{equation}\label{L2projection}
\skp{\Pi^{L^2}v,\psi_h} = \skp{v,\psi_h} \quad \forall \psi_h \in S^{p,1}(\mathcal{T}_h).
\end{equation}
The following theorem is the main result of this section; it states that
for an admissible block $(\tau,\sigma)$, there exists a low dimensional approximation space such that the
restriction to $B_{R_{\tau}}\cap\Gamma$ of the Galerkin solution $\phi_h$ can be approximated well
from it as soon as the right-hand side $f$ has support in $B_{R_{\sigma}}\cap\Gamma$.
\begin{theorem}\label{thm:function-approximationHypSing}
Let $(\tau,\sigma)$ be a cluster pair with bounding boxes $B_{R_\tau}$, $B_{R_\sigma}$
(cf. Definition~\ref{def:admissibility}).
Assume $ \eta\, {\rm dist}(B_{R_\tau},B_{R_\sigma}) \geq {\rm diam}(B_{R_\tau})$
for some admissibility parameter $\eta > 0$.
Fix $q \in (0,1)$.
Then, for each $k\in\mathbb{N}$ there exists a space
$W_k\subset S^{p,1}({\mathcal T}_h)$ with $\dim W_k\leq C_{\rm dim} (2+\eta)^d q^{-d}k^{d+1}$
such that for arbitrary $f \in L^2(\Gamma)$ with
$\supp f \subset B_{R_\sigma}\cap\Gamma$, the solution $\phi_h$ of \eqref{eq:modelHS}
satisfies
\begin{equation}
\label{eq:thm:function-approximation-1HS}
\min_{w \in W_k} \|\phi_h - w\|_{L^2(B_{R_{\tau}}\cap\Gamma)}
\leq C_{\rm box} h^{-1/2} q^k \|\Pi^{L^2} f\|_{L^2(\Gamma)}
\leq C_{\rm box} h^{-1/2} q^k \|f\|_{L^2(\Gamma)}.
\end{equation}
The constants $C_{\rm dim}$, $C_{\rm box}>0$ depend only on $\Omega$, $d$, $p$,
and the $\gamma$-shape regularity of the quasiuniform triangulation $\mathcal{T}_h$.
\end{theorem}
The proof of Theorem~\ref{thm:function-approximationHypSing} will be given at the end
of this section. Its main ingredients can be summarized as follows:
First, the double-layer potential
$$
u(x) := \widetilde{K}\phi_h(x) = \int_{\Gamma}\gamma_{1,y}^{\text{int}} G(x-y)\phi_h(y)ds_y,
\qquad x \in \R^d \setminus \Gamma,
$$
generated by the solution $\phi_h$ of \eqref{eq:modelHS}
is harmonic on $\Omega$ as well as on $\Omega^c := \R^d \setminus \overline{\Omega}$ and
satisfies the jump conditions
\begin{eqnarray}\label{eq:jumpconditions}
[\gamma_0 u] &:=& \gamma_0^{\text{ext}}u - \gamma_0^{\text{int}}u = \phi_h \in H^{1/2}(\Gamma),\nonumber \\
[\partial_{n}u] &:=& \gamma_1^{\text{ext}}u - \gamma_1^{\text{int}}u = 0 \in H^{-1/2}(\Gamma).
\end{eqnarray}
Here, $\gamma_0^{\text{ext}},\gamma_0^{\text{int}}$ denote the exterior and interior trace operator and
$\gamma_1^{\text{ext}},\gamma_1^{\text{int}}$ the exterior and interior conormal derivative,
see, e.g., \cite{SauterSchwab}.
Hence, the potential $u$ is in a space of piecewise harmonic functions, where
the jump across the boundary is a continuous piecewise polynomial of degree $p$,
and the jump of the normal derivative vanishes.
These properties will characterize the spaces ${\mathcal H}_h(D)$ to be introduced below.
The second observation is an orthogonality condition on admissible blocks $(\tau,\sigma)$.
For right-hand sides $f$ with $\supp f \subset B_{R_{\sigma}}\cap \Gamma$,
equation \eqref{eq:modelHS}, the admissibility condition, and $W = - \gamma_1^{\text{int}}\widetilde{K}$ imply
\begin{equation}
\label{eq:orthogonalityHS}
-\skp{\gamma_1^{\text{int}} u,\psi_{h}} + \lambda_h\skp{\psi_{h},1} = 0
\quad \forall \psi_{h} \in S^{p,1}({\mathcal T}_{h}) \, \text{with} \, \supp\psi_{h} \subset B_{R_{\tau}}\cap\Gamma.
\end{equation}
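In detail: since the admissibility condition implies ${\rm dist}(B_{R_{\tau}},B_{R_{\sigma}})>0$, every such $\psi_h$ satisfies $\supp f\cap\supp\psi_h=\emptyset$, and hence \eqref{eq:modelHS} together with $W\phi_h=-\gamma_1^{\text{int}}\widetilde{K}\phi_h=-\gamma_1^{\text{int}}u$ gives
\begin{equation*}
0=\skp{f,\psi_h}=\skp{W\phi_h,\psi_h}+\lambda_h\skp{\psi_h,1}=-\skp{\gamma_1^{\text{int}}u,\psi_h}+\lambda_h\skp{\psi_h,1}.
\end{equation*}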
For a cluster $\rho \subset \mathcal{I}$, we define $\Gamma_{\rho} \subset \Gamma$ as an open polygonal
manifold given by
\begin{equation}\label{eq:screen}\Gamma_{\rho} := {\rm interior} \left(\bigcup_{j\in\rho}\supp \psi_{j}\right).\end{equation}
Let $D$ be an open set and $D^- := D\cap\Omega$, $D^+ := D\cap\overline{\Omega}^c$. A function
$v \in H^1(D\setminus\Gamma)$
is called piecewise harmonic, if
$$
\int_{D\setminus\Gamma}\nabla v \cdot \nabla \varphi\, dx = 0 \quad \forall\varphi \in C_0^{\infty}(D^{\pm}).
$$
\begin{definition}
Let $D \subset \R^{d}$ be open. The restrictions
of the interior and exterior trace operators $\gamma_0^{\text{int}}$, $\gamma_0^{\text{ext}}$ to
$D \cap \Gamma$ are operators $\gamma_0^{\text{int}}|_{D \cap \Gamma}:H^1(D^-) \rightarrow L^2_{loc} (D \cap \Gamma)$
and $\gamma_0^{\text{ext}}|_{D \cap \Gamma}:H^1(D^+) \rightarrow L^2_{loc} (D \cap \Gamma)$
defined in the following way:
For any relatively compact $U \subset D \cap \Gamma$, one selects a cut-off function $\eta \in C^\infty_0(D)$
with $\eta \equiv 1$ on $U$.
Since $u \in H^1(D^-)$ implies $\eta u \in H^1(\Omega)$, we have
$\gamma_0^{\text{int}} \eta u \in H^{1/2}(\Gamma)$ and thus its restriction to $U$ is a well-defined
function in $L^2(U)$. It is easy to see that the values on $U$ do not depend on the choice of $\eta$.
The operator $\gamma_0^{\text{ext}} |_{D\cap\Gamma}$ is defined
completely analogously.
In order to define the restriction of the normal derivative of a piecewise harmonic function
$v \in H^1(D\setminus\Gamma)$, let $\eta \in C^{\infty}(\R^d)$
with $\supp \eta \subset D$ and $\eta \equiv 1$ on a compact set
$U\subset D$. Then, the exterior normal derivative $\partial_n(\eta v)$ is well defined
as a functional in $H^{-1/2}(\Gamma)$, and
we define $\partial_n v|_{U}$ as the functional
\begin{equation*}
\skp{\partial_n v|_{U},\varphi} = \skp{\partial_n(\eta v),\varphi}, \quad \forall \varphi \in H^{1/2}(\Gamma),
\supp \varphi \subset U.
\end{equation*}
Again, this definition does not depend on the choice of $\eta$ as long as $\eta \equiv 1$ on $U$.
\end{definition}
\begin{definition}
For a piecewise harmonic function $v \in H^1(D\setminus\Gamma)$,
we define the jump of the normal derivative $[\partial_{n}v]|_{D\cap\Gamma}$ on $D\cap\Gamma$ as the functional
\begin{equation}\label{eq:unjumpdef}
\skp{[\partial_{n}v]|_{D\cap\Gamma},\varphi} := \int_{D^+\cup D^-}\nabla v \cdot \nabla \varphi\, dx \quad \forall \varphi \in H^1_0(D).
\end{equation}
\end{definition}
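This definition is motivated by elementwise integration by parts: if $v$ is piecewise harmonic and smooth up to $\Gamma$ from both sides, then for all $\varphi\in H_0^1(D)$
\begin{equation*}
\int_{D^+\cup D^-}\nabla v\cdot\nabla\varphi\,dx=\int_{D\cap\Gamma}\left(\gamma_1^{\text{int}}v-\gamma_1^{\text{ext}}v\right)\varphi\,ds,
\end{equation*}
so that the functional in \eqref{eq:unjumpdef} vanishes precisely when the interior and exterior conormal derivatives coincide, which is the situation relevant for the potentials considered here.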
We note that the value $\skp{[\partial_{n}v]|_{D\cap\Gamma},\varphi}$ depends only on $\varphi|_{D\cap \Gamma}$
in the sense that $\skp{[\partial_{n}v]|_{D\cap\Gamma},\varphi} = 0$ for all
$\varphi \in C_0^{\infty}(D)$ with $\varphi|_{D\cap\Gamma} = 0$.
Moreover, if $[\partial_{n}v]|_{D\cap\Gamma}$ is a function in $L^2(D\cap\Gamma)$, then it is unique.
The definition \eqref{eq:unjumpdef} is consistent with \eqref{eq:jumpconditions} in the following sense:
For a potential $\widetilde{K}\phi_h$ with $\phi_h \in S^{p,1}(\mathcal{T}_h)$, we have the jump condition
$[\partial_n \widetilde{K} \phi_h]|_{D\cap\Gamma} = 0$.
With these observations, we can define the space
\begin{eqnarray*}
\mathcal{H}_h(D) &:=& \{v\in H^1(D\setminus\Gamma) \colon
\text{$v$ is piecewise harmonic}, [\partial_n v]|_{D\cap\Gamma} = 0, \\
& &\phantom{v\in H^1(D\setminus\Gamma) \colon \;\,} \exists \widetilde{v} \in S^{p,1}({\mathcal T}_h) \;
\mbox{s.t.} \
[\gamma_0 v]|_{D\cap\Gamma} = \widetilde{v}|_{D\cap\Gamma}\}.
\end{eqnarray*}
The potential $u = \widetilde{K}\phi_h$ indeed satisfies $u \in \H_h(D)$ for any domain $D$;
we will later take $D$ to be a box $B_R$.
For a box $B_R$ with side length $R$, we introduce the following norm on $H^1(B_R\setminus\Gamma)$
\begin{equation*}
\triplenorm{v}_{h,R}^2 := \left(\frac{h}{R}\right)^2 \norm{\nabla v}^2_{L^2(B_R\setminus\Gamma)} +
\frac{1}{R^2}\norm{v}_{L^2(B_R\setminus\Gamma)}^2,
\end{equation*}
which is, for fixed $h$, equivalent to the $H^1(B_R\setminus\Gamma)$-norm. \\
A main tool in our proofs is the nodal interpolation operator
$I_h: C(\Gamma) \rightarrow S^{p,1}({\mathcal T}_h)$.
Since $p+1>\frac{d-1}{2}$ (recall: $d \in \{2,3\}$), the interpolation operator $I_h$ has
the following local approximation property for continuous, $\mathcal{T}_h$-piecewise $H^{p+1}$-functions
$u \in C(\Gamma)\cap H^{p+1}_{\text{pw}}(\mathcal{T}_h) :=
\{u\in C(\Gamma) : u|_{T} \in H^{p+1}(T)\, \forall \, T \in \mathcal{T}_h\}$:
\begin{equation}\label{eq:NIapprox}
\norm{u-I_h u}_{H^m(T)}^2 \leq C h^{2(p+1-m)} \left\vert{u}\right\vert_{H^{p+1}(T)}^2,
\; 0 \leq m \leq p+1.
\end{equation}
The constant $C>0$ depends only on $\gamma$-shape regularity of the quasiuniform triangulation $\mathcal{T}_h$,
the dimension $d$, and the polynomial degree $p$.
\begin{comment}
In the following, we will repeatedly employ the Scott-Zhang projection
\begin{equation*} I_h: H^1(\Gamma) \rightarrow S^{p,1}({\mathcal T}_h) \end{equation*}
introduced in \cite{ScottZhang}.
By $\omega_T := \bigcup\left\{T' \in \mathcal{T}_h \;:\; T \cap T' \neq \emptyset\right\}$, we denote the element patch of $T$,
which contains $T$ and all elements $T' \in \mathcal{T}_h$ that share a node with $T$.
For $\mathcal{T}_h$-piecewise $H^{\ell}$-functions
$v \in H^{\ell}_{\text{pw}}(\Gamma) := \left\{u \in L^2(\Gamma): u|_T \in H^{\ell}(T)\, \forall T\in \mathcal{T}_h \right\}$,
the operator $I_h$ has the well-known
local approximation properties
\begin{equation}\label{eq:SZapprox}
\norm{v-I_h v}_{H^m(T)}^2 \leq C h^{2(\ell-m)}\! \sum_{T' \subset \omega_T}\abs{v}_{H^{\ell}(T')}^2,
\quad 0 \leq m \leq 1, \ m \leq \ell \leq p+1.
\end{equation}
The constant $C>0$ depends only on the $\gamma$-shape regularity of the quasiuniform triangulation $\mathcal{T}_h$, the dimension $d$,
and the polynomial degree $p$.
Moreover, the approximation property from above together with the triangle inequality imply the $L^2$-stability
estimate
\begin{equation}\label{eq:L2stabilitySZ}
\norm{I_h v}_{L^2(T)} \leq C \norm{v}_{L^2(\omega_T)} \quad \forall v \in H^1(\Gamma).
\end{equation}
\end{comment}
In the following, we will approximate the Galerkin solution on
certain nested boxes, which are concentric according to the following definition.
\begin{definition}
Two (open) boxes $B_R$, $B_{R^\prime}$
are said to be concentric boxes with side lengths $R$ and $R^\prime$, if
they have the same barycenter and $B_R$ can be obtained by a stretching
of $B_{R^\prime}$ by the factor $R/R^\prime$ taking their common barycenter
as the origin.
\end{definition}
The following lemma states two classical inverse inequalities for functions in $S^{p,1}(\mathcal{T}_h)$,
which will repeatedly be used in this section. For a proof we refer to {\cite[Theorem 3.2]{GHS}}
and \cite[Theorem 4.4.2]{SauterSchwab}.
\begin{lemma}\label{lem:inverseinequality}
There is a constant $C > 0$ depending only on $\Omega,d,p$, and the $\gamma$-shape
regularity of the quasiuniform triangulation $\mathcal{T}_h$ such that for all $s \in [0,1]$ the inverse inequality
\begin{equation}\label{eq:inverse}
\norm{v}_{H^{s}(\Gamma)}\leq C h^{-1/2}\norm{v}_{L^2(\Gamma)} \quad \forall v \in S^{p,1}(\mathcal{T}_h)
\end{equation}
holds.
Furthermore, for $0\leq m\leq \ell$ the inverse estimate
\begin{equation}\label{eq:inverse2}
\norm{v}_{H^{\ell}(T)} \leq C h^{m-\ell} \norm{v}_{H^{m}(T)}, \quad \forall v \in P_p(T)
\end{equation}
holds for all $T \in \mathcal{T}_h$, where the constant
$C>0$ depends only on $\Omega,p,\ell$ and the $\gamma$-shape
regularity of the quasiuniform triangulation $\mathcal{T}_h$.
\end{lemma}
The following lemma shows that for piecewise harmonic functions, the restriction of the
normal derivative is a function in $L^2$ on a smaller box, and provides an estimate
of the $L^2$-norm of the normal derivative.
\begin{lemma}
\label{lem:estimateun}
Let $\delta \in (0,1)$, $R \in(0,2\operatorname{diam}(\Omega))$ be such that
$\frac{h}{R}\leq \frac{\delta}{4}$, and let $\mu \in \R$.
Let $B_R$, $B_{(1+\delta)R}$ be two concentric boxes of
side lengths $R$ and $(1+\delta)R$.
Then, there exists a constant $C> 0$ depending only on $\Omega$, $d,p$, and the $\gamma$-shape regularity
of the quasiuniform triangulation $\mathcal{T}_h$, such that for
all $v \in \H_{h}(B_{(1+\delta)R})$ we have
\begin{eqnarray}
\label{eq:lem:estimateunjump-ii}
\norm{\partial_n v}_{L^2(B_{R}\cap \Gamma)} \
\leq C h^{-1/2}\left(\norm{\nabla v}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}+
\frac{1}{\delta R}\norm{v}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}\right).
\end{eqnarray}
\end{lemma}
\begin{proof}
{\em 1.~step:}
Let $\eta\in W^{1,\infty}({\mathbb R}^d)$ satisfy $0\leq\eta\leq 1$, $\eta \equiv 1$ on
$B_{(1+\delta/2)R}$, $\supp \eta \subset B_{(1+\delta)R}$,
and $\norm{\nabla \eta}_{L^{\infty}(B_{(1+\delta)R})} \lesssim \frac{1}{\delta R}$.
In order to shorten the proof, we assume
$\gamma_0^{\rm int} \eta = \gamma_0^{\rm ext} \eta \in S^{1,1}(\mathcal{T}_h)$ so that inverse inequalities
are applicable. We mention in passing that this simplification could be avoided by using
``super-approximation'', a technique that goes back to \cite{nitsche-schatz74}
(cf., e.g., \cite[Assumption~{7.1}]{wahlbin91}). Let us briefly indicate how the assumption
$\eta \in S^{1,1}(\mathcal{T}_h)$ can be ensured: Start from a smooth cut-off function
$\widetilde \eta \in C^\infty_0({\mathbb R}^d)$ with the desired support properties. Then, the
piecewise linear interpolant $I^1_h \widetilde\eta \in S^{1,1}(\mathcal{T}_h)$ has the
desired properties on $\Gamma$. It therefore suffices to construct a suitable lifting.
This is achieved with the lifting operator described in \cite[Chap.~{VI}, Thm.~3]{stein70},
followed by multiplication with a suitable cut-off function.
{\em 2.~step:}
Let $z := \widetilde{K}(\gamma_0^{\rm int}\eta [v])$. Then with the jump conditions
\begin{equation*}
[\partial_n z] = 0, \quad [z] = \gamma_0^{\rm int}\eta [v]
\end{equation*}
and the fact that $v$ is piecewise harmonic, we get that the function $v-z$ is harmonic
in the box $B_{(1+\delta/2)R}$. Thus, the function $w:= \nabla(v-z)$ is harmonic in $B_{(1+\delta/2)R}$ as well.
It therefore satisfies the interior regularity (Caccioppoli) estimate
\begin{equation}\label{eq:intreg}
\norm{\nabla w}_{L^2(B_{(1+\delta/4)R}\setminus\Gamma)} \lesssim \frac{1}{\delta R}
\norm{w}_{L^2(B_{(1+\delta/2)R}\setminus\Gamma)};
\end{equation}
a short proof of this Caccioppoli inequality can be found, for example, in \cite{Bebendorf}.
We will need a second smooth cut-off function $\widetilde{\eta}$ with $0\leq\widetilde{\eta}\leq 1$,
$\widetilde{\eta} \equiv 1$ on
$B_{R}$, and $\supp \widetilde{\eta} \subset B_{(1+\delta/4)R}$ and
$\norm{\nabla \widetilde{\eta}}_{L^{\infty}(B_{(1+\delta/4)R})} \lesssim \frac{1}{\delta R}$.
The multiplicative trace inequality (see, e.g., \cite{BrennerScott}) together with \eqref{eq:intreg}
and $\delta R \leq 2 {\rm diam}(\Omega)$, which follows from the assumptions on $\delta,R$, implies
\begin{eqnarray*}
\norm{\widetilde{\eta}w}_{L^2(B_R \cap \Gamma)}^2 &\lesssim&
\norm{\widetilde{\eta}w}_{L^2(B_{(1+\delta/4)R}\setminus\Gamma)}^2
+ \norm{\widetilde{\eta}w}_{L^2(B_{(1+\delta/4)R}\setminus\Gamma)}
\norm{\nabla(\widetilde{\eta}w)}_{L^2(B_{(1+\delta/4)R}\setminus\Gamma)} \\
&\lesssim& \norm{\widetilde{\eta}w}_{L^2(B_{(1+\delta/4)R}\setminus\Gamma)}^2
+ \norm{\widetilde{\eta}w}_{L^2(B_{(1+\delta/4)R}\setminus\Gamma)}
\left(\frac{1}{\delta R}\norm{w}_{L^2(B_{(1+\delta/4)R}\setminus\Gamma)}
+\norm{\nabla w}_{L^2(B_{(1+\delta/4)R}\setminus\Gamma)}\right) \\
&\lesssim& \frac{1}{\delta R}\norm{w}_{L^2(B_{(1+\delta/2)R}\setminus\Gamma)}^2.
\end{eqnarray*}
Therefore and with $\partial_n z = \partial_n\widetilde{K}(\gamma_0^{\rm int}\eta [v])
= -W(\gamma_0^{\rm int}\eta [v])$, we can estimate the normal derivative of $v$ by
\begin{eqnarray*}
\norm{\partial_n v}_{L^2(B_R\cap\Gamma)} &\leq& \norm{w\cdot n}_{L^2(B_R\cap\Gamma)} +
\norm{\partial_n z}_{L^2(B_R\cap\Gamma)} \\
&\lesssim& \frac{1}{\sqrt{\delta R}}\norm{w}_{L^2(B_{(1+\delta/2)R}\setminus\Gamma)} +
\norm{W(\gamma_0^{\rm int}\eta[v])}_{L^2(B_R\cap\Gamma)}.
\end{eqnarray*}
Since the hyper-singular integral operator is a continuous mapping from $H^{1}(\Gamma)$ to $L^2(\Gamma)$ and the
double layer potential is continuous from $H^{1/2}(\Gamma)$ to $H^{1}(\Omega)$ (see, e.g.,
\cite[Remark 3.1.18.]{SauterSchwab}), we get
with $h<\delta R$, the inverse inequality \eqref{eq:inverse} (note that
$(\gamma_0^{\rm int}\eta) [v]$ is a piecewise polynomial), and the trace inequality
\begin{eqnarray*}
\norm{\partial_n v}_{L^2(B_R\cap\Gamma)} &\lesssim&
\frac{1}{\sqrt{\delta R}}\norm{w}_{L^2(B_{(1+\delta/2)R}\setminus\Gamma)} +
\norm{(\gamma_0^{\rm int}\eta)[v]}_{H^1(\Gamma)} \\
&\lesssim& \frac{1}{\sqrt{\delta R}}\left(\norm{\nabla v}_{L^2(B_{(1+\delta/2)R}\setminus\Gamma)}+
\norm{\nabla z}_{L^2(B_{(1+\delta/2)R}\setminus\Gamma)}\right)
+h^{-1/2}\norm{(\gamma_0^{\rm int}\eta)[v]}_{H^{1/2}(\Gamma)} \\
&\lesssim& h^{-1/2}\left(\norm{\nabla v}_{L^2(B_{(1+\delta/2)R}\setminus\Gamma)}+
\norm{(\gamma_0^{\rm int}\eta)[v]}_{H^{1/2}(\Gamma)}\right) \\
&\lesssim& h^{-1/2}\left(\norm{\nabla v}_{L^2(B_{(1+\delta/2)R}\setminus\Gamma)}+
\norm{\eta v}_{H^{1}(B_{(1+\delta)R}\setminus\Gamma)}\right) \\
&\lesssim& h^{-1/2}\left(\norm{\nabla v}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}+
\frac{1}{\delta R}\norm{v}_{L^{2}(B_{(1+\delta)R}\setminus\Gamma)}\right),
\end{eqnarray*}
which finishes the proof.
\end{proof}
The previous lemma implies that for functions in $\H_h(B_{(1+\delta)R})$, the normal derivative
is a function in $L^2(B_R\cap\Gamma)$.
Together with the orthogonality properties that we have identified
in \eqref{eq:orthogonalityHS}, this is captured by the following affine space
$\H_{h,0}(D,\Gamma_{\rho},\mu)$:
\begin{eqnarray}
\H_{h,0}(D,\Gamma_{\rho},\mu)&:=& {\mathcal H}_h(D) \cap
\{v \in H^1(D\setminus\Gamma)\colon \supp[\gamma_0 v]|_{D\cap\Gamma} \subset \overline{\Gamma_{\rho}}, \\
& & \phantom{{\mathcal H}^1_h(D) \cap \{} \langle \partial_n v|_{D\cap\Gamma},\psi_h\rangle -
\mu\skp{\psi_h,1} = 0
\, \forall \psi_h \in S^{p,1}({\mathcal T}_h) \, \text{with} \,
\supp \psi_h \subset D\cap \overline{\Gamma_{\rho}}\}. \nonumber
\end{eqnarray}
\begin{lemma}\label{lem:closedsubspaceHS}
The spaces $\H_h(D)$ and $\H_{h,0}(D,\Gamma_{\rho},\mu)$ are closed subspaces of $H^1(D\setminus \Gamma)$.
\end{lemma}
\begin{proof}
Let $(v^j)_{j\in \N} \subset \H_h(D)$ be a sequence converging to $v \in H^1(D\setminus \Gamma)$.
With the definition of the jump $[\gamma_0 v^j]|_{D\cap\Gamma}$ and the continuity of the trace operator from
$H^1(\Omega)$ to $L^2(\Gamma)$, we get that the sequence $[\gamma_0 v^j]|_{D\cap\Gamma}$ converges in
$L^2_{\rm loc}(D\cap\Gamma)$ to $[\gamma_0 v]|_{D\cap\Gamma}$,
and since $S^{p,1}(\mathcal{T}_h)$ is finite dimensional, we get that
$[\gamma_0 v]|_{D\cap\Gamma} = \widetilde{v}|_{D\cap\Gamma}$ with a function $\widetilde{v} \in S^{p,1}(\mathcal{T}_h)$.
Moreover, for $\varphi \in C_0^{\infty}(D^{\pm})$ we have
\begin{equation*}
\skp{\nabla v,\nabla \varphi}_{L^2(D\setminus\Gamma)} =
\lim_{j\ra \infty} \skp{\nabla v^j, \nabla \varphi}_{L^2(D\setminus\Gamma)} = 0,
\end{equation*}
so $v$ is piecewise harmonic on $D\setminus\Gamma$. By definition \eqref{eq:unjumpdef} and the same argument,
we get
$[\partial_n v]|_{D\cap\Gamma} = 0$, and therefore $\H_h(D)$ is closed.
The space $\H_{h,0}(D,\Gamma_{\rho},\mu)$ is closed, since the intersection of closed
spaces is closed.
\end{proof}
A key ingredient of the proof of Theorem~\ref{thm:function-approximationHypSing} is
a Caccioppoli-type interior regularity estimate, which is proved by use of the orthogonality
property \eqref{eq:orthogonalityHS}.
\begin{lemma}\label{lem:CaccioppoliHS}
Let $\delta \in (0,1)$ and $R\in(0,2{\rm diam}(\Omega))$ be such that
$\frac{h}{R} \leq \frac{\delta}{8}$
and let $\Gamma_{\rho}\subset \Gamma$ be of the form \eqref{eq:screen}.
Let $B_R$, $B_{(1+\delta)R}$ be two concentric boxes and let $\mu \in \R$.
Then, there exists a constant $C > 0$ depending only on
$\Omega$, $d$, $p$, and the $\gamma$-shape regularity of the quasiuniform triangulation $\mathcal{T}_h$ such that for all
$v \in \H_{h,0}(B_{(1+\delta)R},\Gamma_{\rho},\mu)$
\begin{equation}\label{eq:caccioppoliHS}
\norm{\nabla v}_{L^2(B_{R}\setminus \Gamma)} \leq C\left(\frac{1+\delta}{\delta} \triplenorm{v}_{h,(1+\delta)R}
+((1+\delta)R)^{(d-1)/2}\abs{\mu}\right).
\end{equation}
\end{lemma}
\begin{proof}
Let $\eta \in H^1(\R^d)$ be a cut-off function with $\supp \eta \subset B_{(1+\delta/2)R}$,
$\eta \equiv 1$ on $B_{R}$, and $\norm{\nabla \eta}_{L^{\infty}(B_{(1+\delta)R})} \lesssim \frac{1}{\delta R}$.
As in the proof of Lemma~\ref{lem:estimateun}, we may additionally assume that
$\gamma_0^{\rm int} \eta = \gamma_0^{\rm ext} \eta$ is a piecewise polynomial of degree 1 on
each connected component of $\Gamma\cap B_{(1+\delta)R}$.
Since $h$ is the maximal element diameter, $8h \leq \delta R$ implies $T \subset B_{(1+\delta)R}$
for all $T \in \mathcal{T}_h$ with $T \cap \supp \eta \neq \emptyset$.
Because $v$ is piecewise harmonic and $[\partial_n v]|_{B_{(1+\delta)R}\cap\Gamma}=0$, we get
\begin{eqnarray}\label{eq:CaccHS1}
\norm{\nabla(\eta v)}_{L^2(B_{(1+\delta)R}\setminus \Gamma)}^2 &=&
\int_{B_{(1+\delta)R}\setminus\Gamma}\nabla v \cdot \nabla(\eta^2 v)+v^2 \abs{\nabla \eta}^2 dx \nonumber \\
&=&\langle\partial_n v, \eta^2 [\gamma_0 v]\rangle + \int_{B_{(1+\delta)R}\setminus\Gamma}{v^2\abs{\nabla \eta}^2 dx}.
\end{eqnarray}
We first focus on the surface integral.
With the nodal interpolation operator $I_h$ from \eqref{eq:NIapprox}
and the orthogonality \eqref{eq:orthogonalityHS},
we get
\begin{eqnarray}
\langle\partial_n v, \eta^2 [\gamma_0 v]\rangle &=&
\langle \partial_n v, \eta^2 [\gamma_0 v] - I_h (\eta^2 [\gamma_0 v])\rangle +
\mu\skp{I_h(\eta^2 [\gamma_0 v]),1}.
\label{eq:lem:Caccioppoli-10HS}
\end{eqnarray}
The approximation property \eqref{eq:NIapprox} leads to
\begin{equation}\label{eq:cacctemp1}
\norm{\eta^2[\gamma_0 v] - I_h(\eta^2[\gamma_0 v])}_{L^2(\Gamma)}^2 \lesssim h^{2(p+1)}
\sum_{T \in \mathcal{T}_h}\abs{\eta^2[\gamma_0 v]}_{H^{p+1}(T)}^2.
\end{equation}
Since for each $T \in \mathcal{T}_h$ we have $[\gamma_0 v]|_T \in {\mathcal P}_p$,
we get $D^{k}[\gamma_0 v]|_T = 0$ for all multiindices
$k \in \N_0^{d}$ with $\abs{k}:=\sum_{i=1}^d k_i = p+1$ and
$\eta|_T \in \mathcal{P}_1$ implies $D^j \eta|_T = 0$ for $j \in \N_0^{d}$ with $\abs{j} \geq 2$.
With the Leibniz product rule, a direct calculation (see \cite[Lemma 2]{FMPFEM} for details) leads to
\begin{eqnarray*}
\abs{\eta^2 [\gamma_0 v]}^2_{H^{p+1}(T)} &\lesssim&
\frac{1}{(\delta R)^2} \abs{\eta [\gamma_0 v]}_{H^{p}(T)}^2+\frac{1}{(\delta R)^4} \abs{[\gamma_0 v]}_{H^{p-1}(T)}^2,
\end{eqnarray*}
where the suppressed constant depends on $p$.
The inverse inequalities \eqref{eq:inverse2} given in
Lemma~\ref{lem:inverseinequality} imply
\begin{eqnarray}\label{eq:NIjumpest}
\norm{\eta^2[\gamma_0 v] - I_h(\eta^2[\gamma_0 v])}_{L^2(\Gamma)}^2
&\lesssim& h^{2(p+1)}\sum_{T \in \mathcal{T}_h}\left(
\frac{1}{(\delta R)^2} \abs{\eta [\gamma_0 v]}_{H^{p}(T)}^2+\frac{1}{(\delta R)^4} \abs{[\gamma_0 v]}_{H^{p-1}(T)}^2\right)\nonumber\\
&\lesssim& \frac{h^{3}}{(\delta R)^2}
\norm{\eta[\gamma_0 v]}_{H^{1/2}(\Gamma)}^2
+ \frac{h^{4}}{(\delta R)^4} \norm{\eta[\gamma_0 v]}_{L^{2}(B_{(1+\delta)R}\cap\Gamma)}^2.
\end{eqnarray}
With the trace inequality, we obtain
\begin{eqnarray}\label{eq:traceestimate}
\norm{\eta [\gamma_0 v]}_{H^{1/2}(\Gamma)}^2 &=&
\norm{\gamma_0^{\text{ext}}(\eta v) - \gamma_0^{\text{int}}(\eta v)}_{H^{1/2}(\Gamma)}^2 \nonumber \\
&\lesssim& \norm{\eta v}_{L^2(\Omega)}^2 + \norm{\nabla(\eta v)}_{L^2(\Omega)}^2 +
\norm{\eta v}_{L^2(\Omega^c)}^2 +\norm{\nabla(\eta v)}_{L^2(\Omega^c)}^2 \nonumber \\
&\leq&\norm{v}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}^2 +
\norm{\nabla(\eta v)}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}^2.
\end{eqnarray}
In the same way, the multiplicative trace inequality implies
\begin{eqnarray}\label{eq:tempmultrace}
\norm{\eta [\gamma_0 v]}_{L^{2}(\Gamma)}^2 \lesssim \frac{1}{\delta R}\norm{\eta v}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}^2 +
\norm{\eta v}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}\norm{\eta \nabla v}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}.
\end{eqnarray}
We apply Lemma~\ref{lem:estimateun} with $\widetilde{R}=(1+\delta/2)R$ and
$\widetilde{\delta}=\frac{\delta}{2+\delta}$ such that $(1+\widetilde{\delta})\widetilde{R}=(1+\delta)R$.
Together with \eqref{eq:NIjumpest} -- \eqref{eq:tempmultrace}, we get
\begin{align*}
&\abs{\langle \partial_n v, \eta^2 [\gamma_0 v] - I_h (\eta^2 [\gamma_0 v])\rangle} \leq
\norm{\partial_n v}_{L^{2}(B_{(1+\delta/2)R}\cap\Gamma)}
\norm{\eta^2[\gamma_0 v] - I_h(\eta^2[\gamma_0 v])}_{L^2(\Gamma)} \\
&\qquad\leq C \left(\norm{\nabla v}_{L^2(B_{(1+\delta) R}\setminus\Gamma)}+
\frac{1}{\delta R}\norm{v}_{L^2(B_{(1+\delta) R}\setminus\Gamma)}\right)
\bigg\{ \frac{h}{\delta R}\left(\norm{v}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}+
\norm{\nabla(\eta v)}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}\right) \\
&\qquad\qquad +
\frac{h^{3/2}}{(\delta R)^2}\left(\frac{1}{(\delta R)^{1/2}}\norm{v}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}+
\norm{\eta v}^{1/2}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}\norm{\eta\nabla v}^{1/2}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}\right)
\bigg\} \\
&\qquad\leq C\frac{h^2}{(\delta R)^2}\norm{\nabla v}_{L^2(B_{(1+\delta)R}\setminus \Gamma)}^2 +
C\frac{1}{(\delta R)^2}\norm{v}_{L^2(B_{(1+\delta)R}\setminus \Gamma)}^2+
\frac{1}{4} \norm{\nabla(\eta v)}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}^2,
\end{align*}
where, in the last step, we applied Young's inequality
as well as the assumptions $\frac{h}{R} \leq \frac{\delta}{8}$ and $\delta R \leq 2 {\rm diam} (\Omega)$ multiple times.
The last term in \eqref{eq:lem:Caccioppoli-10HS} can
be estimated with \eqref{eq:NIapprox}, $\eta \leq 1$,
the previous estimates \eqref{eq:NIjumpest} -- \eqref{eq:traceestimate},
and the assumption
$\frac{h}{R} \leq \frac{\delta}{8}$, as well as $\delta R \leq 2 {\rm diam} (\Omega)$ by
\begin{eqnarray*}
\abs{\mu\skp{I_h(\eta^2 [\gamma_0 v]),1}}&\lesssim& \abs{\mu\skp{\eta^2 [\gamma_0 v],1}} +
\abs{\mu\skp{\eta^2[\gamma_0 v] - I_h(\eta^2 [\gamma_0 v]),1}} \\
&\lesssim&\abs{\mu}\abs{B_{(1+\delta)R}\cap\Gamma}^{1/2}
\left(\norm{\eta^2[\gamma_0 v]}_{L^2(\Gamma)}+\norm{\eta^2[\gamma_0 v] - I_h(\eta^2[\gamma_0 v])}_{L^2(\Gamma)}\right) \\
&\lesssim&
\abs{\mu}((1+\delta)R)^{(d-1)/2}\left(\norm{\eta^2[\gamma_0 v]}_{L^{2}(\Gamma)}+
h^{1/2}\norm{\eta^2[\gamma_0 v]}_{H^{1/2}(\Gamma)}\right) \\
&\lesssim&
\abs{\mu}((1+\delta)R)^{(d-1)/2}\left(\norm{v}_{L^2(B_{(1+\delta)R}\setminus\Gamma)} +
\norm{\nabla(\eta v)}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}\right).
\end{eqnarray*}
Applying Young's inequality, we obtain
\begin{equation*}
\abs{\mu\skp{I_h(\eta^2 [\gamma_0 v]),1}}\leq
C((1+\delta)R)^{d-1}\abs{\mu}^2 + C\frac{1}{(\delta R)^2}\norm{v}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}^2
+\frac{1}{4}\norm{\nabla(\eta v)}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}^2.
\end{equation*}
Inserting the previous estimates in \eqref{eq:lem:Caccioppoli-10HS},
Lemma~\ref{lem:estimateun},
Young's inequality, and the assumption $\frac{h}{R} \leq \frac{\delta}{8}$ lead to
\begin{eqnarray*}
\abs{\langle \partial_n v, \eta^2 [\gamma_0 v]\rangle} &\leq&
\abs{\langle \partial_n v, \eta^2 [\gamma_0 v] - I_h (\eta^2 [\gamma_0 v])\rangle}+\abs{\mu\skp{I_h(\eta^2 [\gamma_0 v]),1}}\\
&\leq&C\frac{h^2}{(\delta R)^2}\norm{\nabla v}_{L^2(B_{(1+\delta)R}\setminus \Gamma)}^2 +
C\frac{1}{(\delta R)^2}\norm{v}_{L^2(B_{(1+\delta)R}\setminus \Gamma)}^2 +C((1+\delta)R)^{d-1}\abs{\mu}^2 \\
& & + \frac{1}{2} \norm{\nabla(\eta v)}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}^2.
\end{eqnarray*}
Inserting this in \eqref{eq:CaccHS1} and subtracting the term
$\frac{1}{2} \norm{\nabla(\eta v)}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}^2$ from both sides
finally leads to
\begin{eqnarray*}
\norm{\nabla(\eta v)}_{L^2(B_{(1+\delta)R}\setminus \Gamma)}^2 \lesssim \frac{h^2}{(\delta R)^2} \norm{\nabla v}^2_{L^2(B_{(1+\delta)R} \setminus \Gamma)} +
\frac{1}{(\delta R)^2}\norm{v}_{L^2(B_{(1+\delta)R} \setminus \Gamma)}^2+((1+\delta)R)^{d-1}\abs{\mu}^2,
\end{eqnarray*}
which finishes the proof.
\end{proof}
We consider $\gamma$-shape regular triangulations ${\mathcal{E}_H}$ of $\R^d$ that conform to
$\Omega$. More precisely, we will assume that every $E \in {\mathcal{E}_H}$ satisfies either
$E \subset \ol{\Omega}$ or $E\subset \Omega^c$ and that the restrictions
${\mathcal E}_H|_{\Omega}$ and ${\mathcal E}_H|_{\Omega^c}$ are $\gamma$-shape regular, regular
triangulations of $\Omega$ and $\Omega^c$ of mesh size $H$, respectively.
On the piecewise regular mesh ${\mathcal E}_H$, we define the Scott-Zhang projection
$J_H:H^1(\R^d\setminus \Gamma) \rightarrow
S^{1,1}_{pw}:= \{v\,:\, v|_\Omega \in S^{1,1}({\mathcal E}_H|_{\Omega})\ \mbox{ and } \
v|_{\Omega^c} \in S^{1,1}({\mathcal E}_H|_{\Omega^c})\}$ in a piecewise fashion by
\begin{equation}\label{eq:pwInterpolation}
J_H v = \left\{
\begin{array}{l}
\widetilde J_H^{\rm int} v \quad \text{for} \, x \in \overline{\Omega}, \\
\widetilde J_H^{\rm ext} v \quad \textrm{otherwise};
\end{array}
\right.
\end{equation}
here, $\widetilde J_H^{\rm int}$, $\widetilde J_H^{\rm ext}$ denote the Scott-Zhang projections
for the grids $\mathcal{E}_H|_{\Omega}$ and $\mathcal{E}_H|_{{\Omega}^c}$. Since $J_H$ is a piecewise Scott-Zhang projection
the approximation properties proved in \cite{ScottZhang} apply and result in the following estimates:
\begin{equation}
\label{eq:SZapprox}
\norm{v-J_H v}_{H^m(E)}^2 \leq C H^{2(\ell-m)}
\begin{cases}
\abs{v}_{H^{\ell}(\omega_E^\Omega)}^2 &\mbox{ if $E \subset \Omega$} \\
\abs{v}_{H^{\ell}(\omega_E^{\Omega^c})}^2 &\mbox{ if $E \subset \Omega^c$}
\end{cases}
\quad 0 \leq m \leq \ell \leq 1;
\end{equation}
here,
\begin{align*}
\omega_E^\Omega =
\bigcup\left\{E' \in \mathcal{E}_H|_{\Omega} \;:\; E \cap E' \neq \emptyset \right\},
\qquad
\omega_E^{\Omega^c} =
\bigcup\left\{E' \in \mathcal{E}_H|_{\Omega^c} \;:\; E \cap E' \neq \emptyset \right\}.
\end{align*}
The constant $C>0$ in (\ref{eq:SZapprox}) depends only on the $\gamma$-shape regularity of the
quasiuniform triangulation $\mathcal{E}_H$ and the dimension $d$.
Let $\Pi_{h,R,\mu} : (H^1(B_R\setminus\Gamma),\triplenorm{\cdot}_{h,R}) \rightarrow
(\H_{h,0}(B_R,\Gamma_{\rho},\mu),
\triplenorm{\cdot}_{h,R})$
be the orthogonal projection,
which is well-defined since $\H_{h,0}(B_R,\Gamma_{\rho},\mu)\subset H^1(B_R\setminus\Gamma)$ is
a closed subspace by Lemma~\ref{lem:closedsubspaceHS}.
\begin{lemma}\label{lem:lowdimappHS}
Let $\delta \in (0,1)$, $R\in (0,2 \operatorname*{diam} (\Omega))$
be such that $\frac{h}{R}\leq \frac{\delta}{8}$. Let
$B_R$, $B_{(1+\delta)R}$, $B_{(1+2\delta)R}$ be concentric boxes.
Let $\Gamma_{\rho}\subset \Gamma$ be of the form \eqref{eq:screen}
and $\mu \in\R$.
Let $\mathcal{E}_H $ be an (infinite) $\gamma$-shape regular triangulation of $\mathbb{R}^d$
of mesh width $H$ that conforms to $\Omega$ as described above. Assume $\frac{H}{R} \leq \frac{\delta}{4}$.
Let $J_H: H^1(\mathbb{R}^d\setminus\Gamma) \rightarrow S^{1,1}_{\rm pw}$ be the piecewise Scott-Zhang projection
defined in \eqref{eq:pwInterpolation}.
Then, there exists a constant $C_{\rm app} > 0$ that depends only on $\Omega$, $d,p$, and $\gamma$, such that for
$v\in\H_{h,0}(B_{(1+2\delta)R},\Gamma_{\rho},\mu)$
\begin{enumerate}[(i)]
\item
\label{item:lem:lowdimapp-ii}
$\big(v-\Pi_{h,R,\mu}J_H v\big)|_{B_{R}} \in \H_{h,0}(B_{R},\Gamma_{\rho},0)$; \\[-2mm]
\item
\label{item:lem:lowdimapp-i}
$\triplenorm{v-\Pi_{h,R,\mu}J_H v}_{h,R} \leq C_{\rm app}
\left(\frac{h}{R}+\frac{H}{R}\right)\left(\frac{1+2\delta}{\delta}\triplenorm{v}_{h,(1+2\delta)R} +
((1+2\delta)R)^{(d-1)/2}\abs{\mu}\right)$;
\item
\label{item:lem:lowdimapp-iii}
$\dim W\leq C_{\rm app}\left(\frac{(1+2\delta)R}{H}\right)^d$, where
$W:=\Pi_{h,R,\mu}J_H \H_{h,0}(B_{(1+2\delta)R},\Gamma_{\rho},\mu) $.
\end{enumerate}
\end{lemma}
\begin{proof}
For $u \in \mathcal{H}_{h,0}(B_{(1+2\delta)R},\Gamma_{\rho},\mu)$, we have
$u \in \mathcal{H}_{h,0}(B_{R},\Gamma_{\rho},\mu)$ as well and hence
$\Pi_{h,R,\mu}\left(u|_{B_{R}}\right) = u|_{B_{R}}$. Since both $u|_{B_{R}}$ and
$\Pi_{h,R,\mu}J_H u$ belong to the affine space $\mathcal{H}_{h,0}(B_{R},\Gamma_{\rho},\mu)$,
their difference satisfies the orthogonality in the definition of $\mathcal{H}_{h,0}$ with
$\mu = 0$, which gives (\ref{item:lem:lowdimapp-ii}).
The assumption $\frac{H}{R} \leq \frac{\delta}{4}$ implies
$\bigcup\{E \in \mathcal{E}_H \;:\; \omega_E \cap B_{R} \neq \emptyset\} \subseteq B_{(1+\delta)R}$.
The locality and the approximation properties \eqref{eq:SZapprox} of $J_H$ yield
\begin{eqnarray*}
\frac{1}{H} \norm{u - J_Hu}_{L^2(B_{R}\setminus\Gamma)} +
\norm{\nabla(u - J_Hu)}_{L^2(B_{R}\setminus\Gamma)} &\lesssim&
\norm{\nabla u}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}.
\end{eqnarray*}
We apply Lemma~\ref{lem:CaccioppoliHS} with
$\widetilde{R} = (1+\delta)R$ and $\widetilde{\delta} = \frac{\delta}{1+\delta}$.
Note that $(1+\widetilde{\delta})\widetilde{R} = (1+2\delta)R$, and
$\frac{h}{\widetilde{R}}\leq \frac{\widetilde{\delta}}{8}$
follows from $8h \leq \delta R = \widetilde{\delta}\widetilde{R}$. Hence, we obtain
\begin{align*}
&\triplenorm{u-\Pi_{h,R,\mu}J_H u}_{h,R}^2 =\triplenorm{\Pi_{h,R,\mu}\left(u-J_H u\right)}^2_{h,R} \leq
\triplenorm{u-J_H u}_{h,R}^2 \\
& \qquad= \left(\frac{h}{R}\right)^{2}\norm{\nabla (u-J_H u)}_{L^2(B_{R}\setminus\Gamma)}^2 +
\frac{1}{R^2} \norm{u-J_H u}_{L^2(B_{R}\setminus\Gamma)}^2\\
&\qquad\lesssim\frac{h^2}{R^2}\norm{\nabla u}_{L^{2}(B_{(1+\delta) R}\setminus\Gamma)}^2 +
\frac{H^2}{R^2}\norm{\nabla u}_{L^2(B_{(1+\delta)R}\setminus\Gamma)}^2\\
&\qquad\lesssim \left(\frac{h}{R}+\frac{H}{R}\right)^2
\left(\frac{(1+2\delta)^2}{\delta^2}\triplenorm{u}^2_{h,(1+2\delta)R}+((1+2\delta)R)^{d-1}\abs{\mu}^2\right),
\end{align*}
which concludes the proof of (\ref{item:lem:lowdimapp-i}).
The statement (\ref{item:lem:lowdimapp-iii}) follows from the fact that
$\dim J_H\mathcal{H}_{h,0}(B_{(1+2\delta)R},\Gamma_{\rho},\mu) \lesssim ((1+2\delta)R/H)^d$.
\end{proof}
\begin{lemma}\label{cor:lowdimappHS}
Let $C_{\rm app}$ be the constant of Lemma~\ref{lem:lowdimappHS}.
Let $q,\kappa \in (0,1)$, $R \in (0,2\operatorname*{diam}(\Omega))$, $k \in \mathbb{N}$, $\mu \in \R$,
and let $\Gamma_{\rho}\subset \Gamma$ be of the form \eqref{eq:screen}.
Assume
\begin{equation}
\label{eq:cor:lowdimapp-1HS}
\frac{h}{R} \leq \frac{\kappa q} {32 k \max\{C_{\rm app},1\}}.
\end{equation}
Then, there exists a finite dimensional subspace $\widehat{W}_k$ of
$\mathcal{H}_{h,0}(B_{(1+\kappa)R},\Gamma_{\rho},\mu)$
with dimension
$$
\dim \widehat{W}_k \leq C_{\rm dim} \left(\frac{1 + \kappa^{-1}}{q}\right)^dk^{d+1},$$
such that for every $v \in \mathcal{H}_{h,0}(B_{(1+\kappa)R},\Gamma_{\rho},\mu)$ we have
\begin{align}
\label{eq:lowdimappHS}
&\min_{\widehat{w} \in \widehat{W}_k} \norm{[\gamma_0 v]- [\gamma_0\widehat{w}]}_{L^2(B_R\cap\Gamma_{\rho})} \\
&
\qquad \leq C_{\rm low}R(1+\kappa) h^{-1/2} \min_{\widehat{w}\in \widehat{W}_k}
\triplenorm{v-\widehat{w}}_{h,(1+\kappa/2)R} \nonumber\\
&
\qquad \leq
C_{\rm low}R(1+\kappa) h^{-1/2} q^{k} \left(\triplenorm{v}_{h,(1+\kappa)R}+
((1+\kappa)R)^{(d-1)/2}\abs{\mu}\right).
\nonumber
\end{align}
The constants $C_{\rm dim}$, $C_{\rm low}>0$ depend only on $\Omega$, $d$, $p$, and the
$\gamma$-shape regularity of the quasiuniform triangulation $\mathcal{T}_h$.
\end{lemma}
\begin{proof}
Let $B_R$ and $B_{(1+\delta_j)R}$
with $\delta_j := \kappa(1-\frac{j}{2k})$ for $j=0,\dots,k$ be concentric boxes.
We note that $\kappa = \delta_0>\delta_1 > \dots >\delta_k = \frac{\kappa}{2}$.
We choose $H = \frac{\kappa q R}{32k\max\{C_{\rm app},1\}}$, where $C_{\rm app}$ is the constant
in Lemma~\ref{lem:lowdimappHS}.
By the choice of $H$, we have $h \leq H$. We apply Lemma~\ref{lem:lowdimappHS} with
$\widetilde{R}\! =\! (1+\delta_j)R$ and
$\widetilde{\delta}_j\! =\! \frac{\kappa}{4k(1+\delta_j)}\!<\!\frac{1}{4}$.
Note that $\delta_{j-1} = \delta_j +\frac{\kappa}{2k}$ gives
$(1+\delta_{j-1})R=(1+2\widetilde{\delta}_j)\widetilde{R}$.
Our choice of $H$ implies
$\frac{H}{\widetilde{R}}\leq \frac{\widetilde{\delta}_j}{4}$.
Hence, for $j=1$, Lemma~\ref{lem:lowdimappHS} provides a
subspace $W_1$ of $\mathcal{H}_{h,0}(B_{(1+\delta_1)R},\Gamma_{\rho},\mu)$
with $\dim W_1 \leq C\left(\frac{(1+\kappa)R}{H}\right)^d$
and a $w_1 \in W_1$ such that
\begin{eqnarray*}
\triplenorm{v-w_1}_{h,(1+\delta_1)R} &\leq&
2C_{\rm app}\frac{H}{(1+\delta_1)R}\left({\frac{1+2\widetilde{\delta}_1}{\widetilde{\delta}_1}}
\triplenorm{v}_{h,(1+\delta_0)R}+((1+\delta_0)R)^{(d-1)/2}\abs{\mu}\right)\\
&=& 8C_{\rm app}\frac{k H}{\kappa R}(1+2\widetilde{\delta}_1)\left(\triplenorm{v}_{h,(1+\kappa)R} +
\frac{\widetilde{\delta}_1}{1+2\widetilde{\delta}_1}((1+\delta_0)R)^{(d-1)/2}\abs{\mu}\right) \\
&\leq& q\left(\triplenorm{v}_{h,(1+\kappa)R}+((1+\kappa)R)^{(d-1)/2}\abs{\mu}\right).
\end{eqnarray*}
Since $v-w_1 \in \mathcal{H}_{h,0}(B_{(1+\delta_1)R},\Gamma_{\rho},0)$, we can use Lemma~\ref{lem:lowdimappHS} again
(this time with $\mu = 0$)
and get an approximation $w_2$ of $v-w_1$ in a subspace $W_2$
of $\mathcal{H}_{h,0}(B_{(1+\delta_1)R},\Gamma_{\rho},0)$ with
$\dim W_2\leq C\left(\frac{(1+\kappa)R}{H}\right)^d$. Arguing as for $j=1$, we get
\begin{equation*}
\triplenorm{v-w_1-w_2}_{h,(1+\delta_2)R} \leq q \triplenorm{v-w_1}_{h,(1+\delta_1)R} \leq
q^2 \left(\triplenorm{v}_{h,(1+\kappa)R}+((1+\kappa)R)^{(d-1)/2}\abs{\mu}\right).
\end{equation*}
Continuing this process $k-2$ times leads to an approximation $\widehat{w} := \sum_{j=1}^kw_j$
in the space $\widehat{W}_k := \sum_{j=1}^{k}W_j$
of dimension $\dim \widehat{W}_k \leq Ck\left(\frac{(1+\kappa)R}{H}\right)^d=
C_{\rm dim} ((1+\kappa^{-1}) q^{-1})^dk^{d+1}$ such that
\begin{equation} \label{eq:tmp:iterationargument}
\triplenorm{v-\widehat{w}}_{h,(1+\kappa/2)R}=\triplenorm{v-\widehat{w}}_{h,(1+\delta_k)R}
\leq q^k\left(\triplenorm{v}_{h,(1+\kappa)R}+((1+\kappa)R)^{(d-1)/2}\abs{\mu}\right).
\end{equation}
The last step of the argument is to use the multiplicative trace inequality.
With a suitable cut-off function $\eta$ supported by $B_{(1+\kappa/2)R}$ and
$\|\nabla \eta\|_{L^\infty} \lesssim (\kappa R)^{-1}$ as well as $\eta \equiv 1$ on $B_R$, we get
for $z \in H^1(B_{(1+\kappa/2)R}\setminus\Gamma)$
\begin{align*}
\|[\gamma_0 z]\|^2_{L^2(B_R\cap\Gamma)} &\leq \|[\gamma_0 (\eta z)]\|^2_{L^2(\Gamma)}
\lesssim \|\eta z\|_{L^2(\R^d\setminus\Gamma)}
\|\eta z\|_{H^1(\R^d\setminus\Gamma)} \\
& \lesssim \frac{1}{\kappa R}\|z\|^2_{L^2(B_{(1+\kappa/2)R})} +
\|z\|_{L^2(B_{(1+\kappa/2)R})} \|\nabla z\|_{L^2(B_{(1+\kappa/2)R}\setminus\Gamma)} \\
& \lesssim \frac{1}{\kappa R}\|z\|^2_{L^2(B_{(1+\kappa/2)R})} +
h^{-1}\|z\|_{L^2(B_{(1+\kappa/2)R})}^2+ h\|\nabla z\|_{L^2(B_{(1+\kappa/2)R}\setminus\Gamma)}^2 \\
&\lesssim \left((1+\kappa/2)R\right)^2h^{-1} \triplenorm{z}^2_{h,(1+\kappa/2)R},
\end{align*}
where the last step follows from the assumption $\frac{h}{\kappa R}\leq 1$.
Using this estimate for $z=v-\widehat{w}$ together with \eqref{eq:tmp:iterationargument} gives
\begin{align*}
\min_{\widehat w \in \widehat W_k}
\|[\gamma_0 v] - [\gamma_0 \widehat w]\|_{L^2(B_R \cap \Gamma)}
&\leq C_{\rm low}
(1 + \kappa)R \, h^{-1/2} q^k \left[
\triplenorm{v}_{h,(1+\kappa)R} + \left((1+\kappa)R\right)^{(d-1)/2}|\mu|
\right].
\end{align*}
This concludes the proof.
\end{proof}
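\begin{remark}
{\rm
The mechanism of the preceding proof, namely that each sweep adds a fixed number of dimensions
and multiplies the error by $q$, has a transparent linear-algebra analogue. In the following
sketch (Python; the synthetic matrix with singular values $q_0^j$ is an ad hoc stand-in for which
one rank-$m$ sweep contracts the error by exactly $q = q_0^m$), $k$ sweeps yield the error $q^k$
at total dimension $km$, in analogy to \eqref{eq:tmp:iterationargument}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
n, m, q0 = 50, 3, 0.6
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (U * q0 ** np.arange(n)) @ V.T      # singular values q0^0, q0^1, ...

E, dim = A.copy(), 0
for k in range(1, 6):
    Uk, s, Vt = np.linalg.svd(E)
    E = E - (Uk[:, :m] * s[:m]) @ Vt[:m, :]  # one sweep: best rank-m update
    dim += m
    print(f"k={k}, dim={dim}, error={np.linalg.norm(E, 2):.2e}, "
          f"q^k={q0**dim:.2e}")
\end{verbatim}
}\eex
\end{remark}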
\begin{remark}
{\rm
The proof of Lemma~\ref{cor:lowdimappHS} shows that approximation results in $H^{1/2}$ can be achieved at
the expense of an additional factor $h^{-1/2}$: With the cut-off function $\eta$ that is used at the end of the
proof of Lemma~\ref{cor:lowdimappHS}, we can bound for $z \in H^1(B_{(1+\kappa/2)R}\setminus\Gamma)$
$$
\|[\gamma_0 (\eta z)]\|_{H^{1/2}(\Gamma)} \lesssim
\|\eta z\|_{H^{1}(\R^d\setminus\Gamma)} \lesssim
(1 +\kappa/2)R\, h^{-1} \triplenorm{z}_{h,(1+\kappa/2)R}.
$$
Hence, with the spaces $\widehat W_k$ of Lemma~\ref{cor:lowdimappHS} one gets
$$
\min_{\widehat w \in \widehat W_k}
\|[\gamma_0 (\eta (v - \widehat w))]\|_{H^{1/2}(\Gamma)}
\lesssim C_{\rm low}^\prime (1+\kappa)R\, h^{-1} q^k
\left[
\triplenorm{v}_{h,(1+\kappa)R} + \left((1+\kappa)R\right)^{(d-1)/2}|\mu|
\right].
$$
}\eex
\end{remark}
Now we are able to prove the main result of this section.
\begin{proof}[of Theorem~\ref{thm:function-approximationHypSing}]
Choose $\kappa = \frac{1}{1+\eta}$. By assumption, we have
${\rm dist}(B_{R_\tau},B_{R_\sigma})
\ge \eta^{-1} {\rm diam} B_{R_\tau} = \sqrt{d} \eta^{-1} R_{\tau}$.
In particular, this implies
\begin{equation*}{\rm dist}(B_{(1+\kappa) R_\tau},B_{R_\sigma}) \geq {\rm dist}(B_{R_\tau},B_{R_\sigma}) -
\kappa R_{\tau} \sqrt{d} \geq \sqrt{d}R_{\tau}(\eta^{-1}-\kappa) =
\sqrt{d}R_{\tau}\left(\frac{1}{\eta} - \frac{1}{1+\eta}\right) >0.\end{equation*}
Let $\phi_h \in S^{p,1}(\mathcal{T}_h)$ solve (\ref{eq:modelHS}). Recall from (\ref{eq:discrete-stability}) that
\begin{equation*}
\norm{\phi_h}_{H^{1/2}(\Gamma)} + |\lambda| \lesssim \norm{\Pi^{L^2} f}_{L^2(\Gamma)}.
\end{equation*}
The potential $u = \widetilde{K}\phi_h$ then satisfies
$u \in \H_{h,0}(B_{(1+\kappa) R_\tau},\Gamma,\lambda)$.
Furthermore, the boundedness of $\widetilde{K}: H^{1/2}(\Gamma) \ra H^1_{\text{loc}}(\R^d)$
and $\frac{h}{R_{\tau}}<1$ lead to
\begin{eqnarray*}
\triplenorm{\widetilde{K} \phi_h}_{h,R_{\tau}(1+\kappa)} &\leq&
2\left(1+\frac{1}{R_\tau}\right)\norm{\widetilde{K} \phi_h}_{H^1(B_{2R_{\tau}})} \\&\lesssim&
\left(1+\frac{1}{R_\tau}\right) \norm{\phi_h}_{H^{1/2}(\Gamma)}
\lesssim \left(1+\frac{1}{R_\tau}\right)\norm{\Pi^{L^2}f}_{L^{2}(\Gamma)}.
\end{eqnarray*}
We are now in a position to define the space $W_k$, for which we distinguish two cases.
\newline
{\bf Case 1:} The condition \eqref{eq:cor:lowdimapp-1HS} is satisfied with $R = R_{\tau}$.
With the space $\widehat{W}_k$ provided by Lemma~\ref{cor:lowdimappHS} we set
$W_k := \{[\gamma_0 \widehat{w}] : \widehat{w} \in \widehat{W}_k\}$.
Then, Lemma~\ref{cor:lowdimappHS} and $R_{\tau}\leq 2{\rm diam}(\Omega)$ as well as
$\kappa \leq 1$ lead to
\begin{eqnarray*}
\min_{w\in W_k}\norm{\phi_h-w}_{L^2(B_{R_{\tau}}\cap \Gamma)} &\lesssim&
(1+\kappa)R_{\tau}\, h^{-1/2} q^k
\left(\triplenorm{\widetilde{K}\phi_h}_{h,(1+\kappa)R_{\tau}}+\abs{\lambda} \right) \\
&\lesssim&
(1+\kappa)(R_{\tau}+1)h^{-1/2}q^k\norm{\Pi^{L^2}f}_{L^{2}(\Gamma)}
\lesssim h^{-1/2} q^k \norm{\Pi^{L^2}f}_{L^{2}(\Gamma)} ,
\end{eqnarray*}
and the dimension of $W_k$ is bounded by
$$
\dim W_k \leq C_{\rm dim} \left(\frac{1+\kappa^{-1}}{q}\right)^d k^{d+1} = C_{\rm dim} (2 + \eta)^d q^{-d} k^{d+1}.
$$
{\bf Case 2:} The condition \eqref{eq:cor:lowdimapp-1HS} is not satisfied with $R = R_{\tau}$.
Then, we select
$W_k:= \left\{w|_{B_{R_\tau}\cap \Gamma} : w \in S^{p,1}({\mathcal T}_h)\right\}$ and
the minimum in \eqref{eq:thm:function-approximation-1HS} is obviously zero.
By the choice of $\kappa$ and $\frac{h}{R_{\tau}}>\frac{\kappa q}{32k\max\{C_{\rm app},1\}}$,
the dimension of $W_k$ is bounded by
$$
\dim W_k \lesssim \left(\frac{R_{\tau}}{h}\right)^{d-1}
\lesssim \left(\frac{32 k \max\{C_{\rm app},1\}}{\kappa q}\right)^{d-1}
\simeq \left((1+\eta) q^{-1} k \right)^{d-1}
\lesssim (2+\eta)^d q^{-d} k^{d+1}.
$$
This concludes the proof of the first inequality in \eqref{eq:thm:function-approximation-1HS}.
The second inequality in \eqref{eq:thm:function-approximation-1HS} follows
from the $L^2(\Gamma)$-stability of the $L^2(\Gamma)$-orthogonal projection.
\end{proof}
\section{$\H$-matrix approximation}
\label{sec:H-matrix-approximation}
In order to obtain an $\H$-matrix approximating $\boldsymbol{\mathcal W}^{-1}$ (cf. (\ref{eq:Wtilde}))
we start with the construction of a low-rank approximation of an admissible matrix block.
\begin{theorem}\label{th:blockapprox}
Fix an admissibility parameter $\eta > 0$ and $q\in (0,1)$.
Let the cluster pair $(\tau,\sigma)$ be $\eta$-admissible.
Then, for every $k \in \mathbb{N}$, there are matrices
$\mathbf{X}_{\tau\sigma} \in \mathbb{R}^{\abs{\tau}\times r}$, $\mathbf{Y}_{\tau\sigma} \in \mathbb{R}^{\abs{\sigma}\times r}$ of rank $r \leq C_{\rm dim} (2+\eta)^d q^{-d}k^{d+1}$
such that
\begin{equation}
\norm{\boldsymbol{\mathcal W}^{-1}|_{\tau \times \sigma} - \mathbf{X}_{\tau\sigma}\mathbf{Y}_{\tau\sigma}^T}_2
\leq C_{\rm apx} N^{(2d-1)/(2d-2)} q^k.
\end{equation}
The constants $C_{\rm apx}$, $C_{\rm dim}>0$ depend only on $\Omega$, $d$,
the $\gamma$-shape regularity of the quasiuniform triangulation $\mathcal{T}_h$, and $p$.
\end{theorem}
\begin{proof}
If $C_{\rm dim} (2+\eta)^d q^{-d} k^{d+1} \geq \min (\abs{\tau},\abs{\sigma})$, we use the exact matrix block
$\mathbf{X}_{\tau\sigma}=\boldsymbol{\mathcal W}^{-1}|_{\tau \times \sigma}$ and
$\mathbf{Y}_{\tau\sigma} = I \in \mathbb{R}^{\abs{\sigma}\times\abs{\sigma}}$.
If $C_{\rm dim}(2+\eta)^d q^{-d} k^{d+1} < \min (\abs{\tau},\abs{\sigma})$, we employ the approximation
result of Theorem~\ref{thm:function-approximationHypSing} in the following way.
Let $\lambda_i:L^2(\Gamma) \rightarrow \mathbb{R}$ be continuous linear functionals on $L^2(\Gamma)$
satisfying $\lambda_i(\psi_j) = \delta_{ij}$, as well as the stability estimate
$\norm{\lambda_i(w)\psi_i}_{L^2(\Gamma)} \lesssim \norm{w}_{L^2(\supp \psi_i)}$ for $w \in L^2(\Gamma)$,
where the suppressed constant depends only on the shape-regularity of the quasiuniform mesh $\mathcal{T}_h$.
For the existence of such functionals, we refer to \cite{ScottZhang}.
We define $\R^{\tau}:= \{\mathbf{x} \in \R^N \; : \; x_i = 0 \; \forall \; i \notin \tau \}$ and the mappings
\begin{equation*}
\Lambda_{\tau} : L^2(\Gamma) \rightarrow \mathbb{R}^{\tau}, v \mapsto (\lambda_i(v))_{i \in \tau} \;\text{and}
\; \Phi_{\tau}: \mathbb{R}^{\tau} \rightarrow S^{p,1}({\mathcal T}_h),\; \mathbf{x} \mapsto \sum_{j\in\tau} x_j \psi_j.
\end{equation*}
The interpolation operator
$\Phi_{\tau}\Lambda_{\tau}$ is, due to our assumptions on the functionals $\lambda_i$,
stable in $L^2$ and for a piecewise polynomial function $\widetilde{\phi}\in S^{p,1}(\mathcal{T}_h)$ we get
$\Phi_{\tau}(\Lambda_{\tau} \widetilde{\phi}) = \widetilde{\phi}|_{\Gamma_{\tau}}$
with $\Gamma_{\tau}:=\bigcup_{i \in \tau} \supp \psi_i \subset B_{R_{\tau}}$.
For $\mathbf{x} \in \R^{\tau}$, \eqref{eq:basisisomorphism} implies
\begin{equation*}
Ch^{(d-1)/2}\norm{\mathbf{x}}_{2} \leq \norm{\Phi_\tau(\mathbf{x})}_{L^2(\Gamma)} \leq
\widetilde{C} h^{(d-1)/2}\norm{\mathbf{x}}_{2}, \quad \forall\mathbf{x} \in \R^{{\tau}}.
\end{equation*}
The adjoint $\Lambda_{\mathcal{I}}^* : \R^N \ra L^2(\Gamma)'\simeq L^2(\Gamma),
\mathbf{b}\mapsto\sum_{i\in \mathcal{I}}b_i\lambda_i$
of $\Lambda_{\mathcal{I}}$ satisfies,
because of \eqref{eq:basisisomorphism} and the $L^2$-stability of $\Phi_{\mathcal{I}} \Lambda_{\mathcal{I}}$,
\begin{equation*}
\norm{\Lambda_{\mathcal{I}}^* \mathbf{b}}_{L^2(\Gamma)} = \sup_{w \in L^2(\Gamma)}
\frac{\skp{\mathbf{b},\Lambda_{\mathcal{I}} w}_2}{\norm{w}_{L^2(\Gamma)}}
\lesssim \norm{\mathbf{b}}_2 \sup_{w \in L^2(\Gamma)}
\frac{h^{-(d-1)/2}\norm{\Phi_{\mathcal{I}}\Lambda_{\mathcal{I}} w}_{L^2(\Gamma)}}{\norm{w}_{L^2(\Gamma)}}
\leq h^{-(d-1)/2}\norm{\mathbf{b}}_2.
\end{equation*}
Let $\mathbf{b} \in \R^N$. Defining $f := \Lambda_{\mathcal{I}}^*\mathbf{b}|_{\sigma}$,
we get $b_i =\skp{f,\psi_i}$ for $i \in \sigma$ and $\supp f \subset B_{R_{\sigma}}\cap\Gamma$.
Theorem~\ref{thm:function-approximationHypSing} provides a
finite dimensional space $W_k$ and an element $w \in W_k$
that is a good approximation to the Galerkin solution $\phi_h|_{B_{R_{\tau}}\cap \Gamma}$.
It is important to note that the space $W_k$ is constructed independently of the function $f$;
it depends only on the cluster pair $(\tau,\sigma)$.
The estimate \eqref{eq:basisisomorphism}, the approximation result from
Theorem~\ref{thm:function-approximationHypSing},
and $\norm{\Pi^{L^2}f}_{L^2(\Gamma)} \leq \norm{f}_{L^2(\Gamma)}
\leq \norm{\Lambda_{\mathcal{I}}^*\mathbf{b}}_{L^2(\Gamma)}
\lesssim h^{-(d-1)/2}\norm{\mathbf{b}}_2$ imply
\begin{eqnarray*}
\norm{\Lambda_{\tau} \phi_h - \Lambda_{\tau} w}_2 &\lesssim&
h^{-(d-1)/2}\norm{\Phi_{\tau}(\Lambda_{\tau} \phi_h-\Lambda_{\tau} w)}_{L^2(\Gamma)}
\leq h^{-(d-1)/2}\norm{\phi_h-w}_{L^2(B_{R_{\tau}}\cap \Gamma)} \\
&\lesssim& h^{-(d-1)/2-1/2} \,
q^k\norm{\Pi^{L^2}f}_{L^2(\Gamma)} \lesssim
h^{-(2d-1)/2} \,q^k\norm{\mathbf{b}}_{2}.
\end{eqnarray*}
In order to translate this approximation result to the matrix level, let
\begin{equation*}
\mathcal{W} := \{\Lambda_{\tau} w \; : \; w \in W_k \}.
\end{equation*}
Let the columns of $\mathbf{X}_{\tau\sigma}$ be an orthonormal basis of the space $\mathcal{W}$.
Then, the rank of $\mathbf{X}_{\tau\sigma}$ is bounded by $\dim W_k \leq C_{\rm dim} (2+\eta)^d q^{-d}k^{d+1} $.
Since $\mathbf{X}_{\tau\sigma} \mathbf{X}_{\tau\sigma}^T$ is the orthogonal projection from
$\R^N$ onto $\mathcal{W}$,
we get that $z:=\mathbf{X}_{\tau\sigma} \mathbf{X}_{\tau\sigma}^T \Lambda_{\tau} \phi_h$
is the best approximation of $\Lambda_{\tau} \phi_h$ in $\mathcal{W}$ and arrive at
\begin{equation}
\label{eq:matrix-level-estimate-function}
\norm{\Lambda_{\tau} \phi_h-z}_2 \leq \norm{\Lambda_{\tau} \phi_h-\Lambda_{\tau} w}_2 \lesssim
h^{-(2d-1)/2} \, q^k\norm{\mathbf{b}}_2
\simeq \, N^{(2d-1)/(2d-2)} q^k\norm{\mathbf{b}}_2.
\end{equation}
Note that $\Lambda_{\tau} \phi_h = \boldsymbol{\mathcal W}^{-1}|_{\tau\times \sigma} \mathbf{b}|_\sigma$.
If we define $\mathbf{Y}_{\tau\sigma} := \boldsymbol{\mathcal W}^{-1}|_{\tau\times \sigma}^{T}\mathbf{X}_{\tau\sigma}$,
we thus get $z = \mathbf{X}_{\tau\sigma} \mathbf{Y}_{\tau\sigma}^T \mathbf{b}|_\sigma$. The bound
\eqref{eq:matrix-level-estimate-function} expresses
\begin{equation}
\label{eq:matrix-level-estimate-function-1}
\norm{\left(\boldsymbol{\mathcal W}^{-1}|_{\tau\times \sigma} - \mathbf{X}_{\tau\sigma} \mathbf{Y}_{\tau\sigma}^T \right)\mathbf{b}|_\sigma}_2
= \norm{\Lambda_{\tau} \phi_h-z}_2
\lesssim \, N^{(2d-1)/(2d-2)} q^k\norm{\mathbf{b}}_2.
\end{equation}
The space $W_k$ depends only on the cluster pair $(\tau,\sigma)$, and the estimate
\eqref{eq:matrix-level-estimate-function-1} is valid for any $\mathbf b$.
This concludes the proof.
\end{proof}
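\begin{remark}
{\rm
The construction in the preceding proof, an orthonormal basis $\mathbf{X}_{\tau\sigma}$ of a space
$\mathcal{W}$ that captures the relevant range, followed by
$\mathbf{Y}_{\tau\sigma} := (\boldsymbol{\mathcal W}^{-1}|_{\tau\times \sigma})^T\mathbf{X}_{\tau\sigma}$,
can be mimicked in a few lines. The sketch below (Python; a synthetic numerically low-rank matrix
replaces $\boldsymbol{\mathcal W}^{-1}|_{\tau\times \sigma}$, and $\mathcal{W}$ is generated by
random sampling, both ad hoc choices) illustrates that the factorization $\mathbf{X}\mathbf{Y}^T$
inherits the approximation quality of $\mathcal{W}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 200, 150, 12

# synthetic stand-in for an admissible block: exponentially
# decaying singular values, as Theorem th:blockapprox predicts
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.exp(-np.arange(n))
A = (U[:, :n] * s) @ V.T

# W: an r-dimensional space approximating the range of A
W = A @ rng.standard_normal((n, r))
X, _ = np.linalg.qr(W)   # orthonormal basis, as in the proof
Y = A.T @ X              # then X Y^T = (X X^T) A

err = np.linalg.norm(A - X @ Y.T, 2)
print(f"rank {r}: error = {err:.2e}, sigma_{r+1} = {s[r]:.2e}")
\end{verbatim}
}\eex
\end{remark}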
The following lemma bounds the global spectral norm in terms of the local spectral norms;
we will use it in combination with Theorem \ref{th:blockapprox} to derive our main result,
Theorem \ref{th:Happrox}.
\begin{lemma}[{\cite{GrasedyckDissertation},\,\cite[Lemma 6.5.8]{HackbuschBuch},\,\cite{BoermBuch}}]\label{lem:spectralnorm}
Let $\mathbf{M} \in \R^{N\times N}$ and $P$ be a partitioning of ${\mathcal{I}}\times {\mathcal{I}}$. Then,
\begin{equation*}
\norm{\mathbf{M}}_2 \leq C_{\rm sp} \left(\sum_{\ell=0}^{\infty}\max\{\norm{\mathbf{M}|_{\tau\times \sigma}}_2 : (\tau,\sigma) \in P, {\rm level}(\tau) = \ell\}\right),
\end{equation*}
where the sparsity constant $C_{\rm sp}$ is defined in \eqref{eq:sparsityConstant}.
\end{lemma}
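\begin{remark}
{\rm
For a single-level $2\times 2$ block partition the flavor of Lemma~\ref{lem:spectralnorm} can be
checked directly; in this situation the bound holds with the constant $2$, since the spectral norm
is dominated by the Frobenius norm of the $2\times2$ matrix of block norms (a minimal Python sketch):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
blocks = [[rng.standard_normal((4, 4)) for _ in range(2)]
          for _ in range(2)]
M = np.block(blocks)
local_max = max(np.linalg.norm(b, 2) for row in blocks for b in row)
print(np.linalg.norm(M, 2), "<=", 2 * local_max)
\end{verbatim}
}\eex
\end{remark}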
Now we are able to prove our main result, Theorem \ref{th:Happrox}.
\begin{proof}[of Theorem \ref{th:Happrox}]
Theorem \ref{th:blockapprox} provides matrices $\mathbf{X}_{\tau\sigma} \in \R^{\abs{\tau}\times r}$, $\mathbf{Y}_{\tau\sigma} \in \R^{\abs{\sigma}\times r}$,
so we can define the $\H$-matrix $\mathbf{W}_{\H}$ by
\begin{equation*}
\mathbf{W}_{\H}|_{\tau\times\sigma} = \left\{
\begin{array}{l}
\mathbf{X}_{\tau\sigma}\mathbf{Y}_{\tau\sigma}^T \quad \;\textrm{if}\hspace{2mm} (\tau,\sigma) \in P_{\text{far}}, \\
\boldsymbol{\mathcal W}^{-1}|_{\tau \times \sigma} \quad \textrm{otherwise}.
\end{array}
\right.
\end{equation*}
On each admissible block $(\tau,\sigma) \in P_{\rm far}$ we can use the blockwise estimate of Theorem \ref{th:blockapprox}
and get
\begin{equation*}
\norm{(\boldsymbol{\mathcal W}^{-1} - \mathbf{W}_{\H})|_{\tau \times \sigma}}_2 \leq
C_{\rm apx} N^{(2d-1)/(2d-2)} q^k.
\end{equation*}
On inadmissible blocks, the error is zero by definition.
Therefore, Lemma \ref{lem:spectralnorm} leads to
\begin{eqnarray*}
\norm{\boldsymbol{\mathcal W}^{-1} - \mathbf{W}_{\H}}_2 &\leq& C_{\rm sp}
\left(\sum_{\ell=0}^{\infty}\max\{\norm{(\boldsymbol{\mathcal W}^{-1} - \mathbf{W}_{\H})|_{\tau \times \sigma}}_2 : (\tau,\sigma) \in P, {\rm level}(\tau) = \ell\}\right) \\
&\leq& C_{\rm apx}C_{\rm sp} N^{(2d-1)/(2d-2)} q^k {\rm depth}(\mathbb{T}_{\mathcal{I}}).
\end{eqnarray*}
With $r = C_{\rm dim}(2+\eta)^dq^{-d}k^{d+1}$,
the definition $b := -\frac{\ln(q)}{C_{\rm dim}^{1/(d+1)}}q^{d/(d+1)}(2+\eta)^{-d/(d+1)} > 0$
leads to $q^k = e^{-br^{1/(d+1)}}$, and hence
\begin{equation*}
\norm{\boldsymbol{\mathcal W}^{-1} - \mathbf{W}_{\mathcal{H}}}_2 \leq C_{\rm apx}C_{\rm sp}
N^{(2d-1)/(2d-2)} {\rm depth}(\mathbb{T}_{\mathcal{I}})e^{-br^{1/(d+1)}},
\end{equation*}
which concludes the proof.
\end{proof}
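\begin{remark}
{\rm
The substitution $q^k = e^{-br^{1/(d+1)}}$ in the last step is a pure rewriting of
$r = C_{\rm dim}(2+\eta)^dq^{-d}k^{d+1}$; a minimal numerical check of the algebra
(the values of $C_{\rm dim}$, $\eta$, $q$, $d$ are arbitrary):
\begin{verbatim}
import numpy as np

C_dim, eta, q, d = 2.0, 1.5, 0.5, 2
b = -np.log(q) * C_dim**(-1/(d+1)) * q**(d/(d+1)) \
    * (2 + eta)**(-d/(d+1))
for k in [1, 3, 7]:
    r = C_dim * (2 + eta)**d * q**(-d) * k**(d+1)
    print(q**k, np.exp(-b * r**(1/(d+1))))  # the two numbers agree
\end{verbatim}
}\eex
\end{remark}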
\section{Stabilized Galerkin discretization}
\label{sec:stabGalerkin}
In the previous section, we studied a saddle point formulation of the hyper-singular integral operator.
It is possible to reformulate the hyper-singular integral equation as a positive definite system by
a rank-one correction that does not alter the solution. In numerical computations, this reformulation is
often preferred, and we therefore study it. Furthermore, it will be the starting point for the $\H$-matrix
Cholesky factorization studied in Section~\ref{sec:LU-decomposition} below.
The {\em stabilized Galerkin matrix} $\mathbf W^{\text{st}}\in\mathbb R^{N\times N}$
is obtained from the matrix $\mathbf W \in \mathbb R^{N \times N}$ as follows:
\begin{equation}\label{eq:MatrixHypsing}
\mathbf W^{\text{st}}_{jk} = \langle W\psi_k,\psi_j\rangle + \alpha\skp{\psi_k,1}\skp{\psi_j,1} =
\mathbf{W}_{jk}+ \alpha \mathbf{B}_k\mathbf{B}_j,
\quad \forall j,k = 1,\dots,N. \end{equation}
Here, $\alpha > 0$ is a fixed stabilization parameter.
The matrix $\mathbf W^{\text{st}}$ is symmetric and positive definite.
With the notation from \eqref{eq:matrixHypsing} the stabilized matrix $\mathbf W^{\text{st}}$ can be written as
\begin{equation*}
\mathbf W^{\text{st}} = \mathbf{W} + \alpha \mathbf{B}\mathbf{B}^T.
\end{equation*}
The interest in the stabilized matrix $\mathbf W^{\text{st}}$ arises from the fact that
solving the linear system
\begin{equation*}
\boldsymbol{\mathcal{W}}\begin{pmatrix} \mathbf{x} \\ \lambda \end{pmatrix} :=
\begin{pmatrix} \mathbf{W} & \mathbf{B} \\ \mathbf{B}^T & 0 \end{pmatrix} \begin{pmatrix}\mathbf{x} \\ \lambda \end{pmatrix}
= \begin{pmatrix} \mathbf{b} \\0\end{pmatrix}
\end{equation*}
is equivalent to solving the symmetric positive definite system
\begin{equation}\label{eq:stabGalerkinSystem}
\widehat{\boldsymbol{\mathcal W}}\begin{pmatrix} \mathbf{x} \\ \lambda \end{pmatrix}:=
\begin{pmatrix} \mathbf{W} +\alpha \mathbf{B}\mathbf{B}^T & \mathbf{B} \\ \mathbf{B}^T & 0 \end{pmatrix} \begin{pmatrix}\mathbf{x} \\ \lambda \end{pmatrix}
= \begin{pmatrix} \mathbf{b} \\0\end{pmatrix}.
\end{equation}
For more details about this stabilization, we refer to \cite[Ch. 6.6/12.2]{Steinbach}.
In order to see that the question of approximating $(\mathbf W^{\text{st}})^{-1}$
in the $\H$-matrix format is closely related to approximating ${\boldsymbol{\mathcal W}}^{-1}$
in the $\H$-matrix format, we partition
$$
\boldsymbol{\mathcal W}^{-1} = \begin{pmatrix} \mathbf{G} & \mathbf{P} \\ \mathbf{P}^T & z \end{pmatrix}
$$
and observe that the inverse $\left(\mathbf W^{\text{st}}\right)^{-1}$ can be computed explicitly:
\begin{equation*}
\left(\mathbf W^{\text{st}}\right)^{-1} =
\mathbf{G} + \left(\mathbf W^{\text{st}}\right)^{-1}\mathbf{B}\mathbf{P}^T.
\end{equation*}
Hence, the inverse $\left(\mathbf W^{\text{st}}\right)^{-1}$ can be
computed from $\mathbf{G}$, i.e., a subblock of $\boldsymbol{\mathcal W}^{-1}$, by a rank-one update. We immediately
get the following corollary to Theorem~\ref{th:Happrox}:
\begin{corollary}
\label{cor:stabGalerkin}
There exists a blockwise rank-$(r+1)$ approximation
$\mathbf W^{\text{st}}_\H$ to $({\mathbf W}^{\text{st}})^{-1}$ with
$$
\|(\mathbf W^{\text{st}})^{-1} - \mathbf W^{\text{st}}_\H\|_2 \leq
C_{\rm apx} C_{\rm sp} \operatorname*{depth}(\mathbb{T}_{\mathcal I})
N^{(2d-1)/(2d-2)} e^{-br^{1/(d+1)}}.
$$
\end{corollary}
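\begin{remark}
{\rm
The identity $(\mathbf W^{\text{st}})^{-1} = \mathbf{G} + (\mathbf W^{\text{st}})^{-1}\mathbf{B}\mathbf{P}^T$
underlying Corollary~\ref{cor:stabGalerkin} is easily checked numerically whenever the saddle point
matrix is invertible. In the following sketch (Python; a random symmetric positive definite
stand-in for $\mathbf{W}$ and a generic vector $\mathbf{B}$) the two sides agree to machine precision:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, alpha = 6, 0.7
M = rng.standard_normal((N, N))
W = M @ M.T + N * np.eye(N)          # SPD stand-in for W
B = rng.standard_normal((N, 1))
Wst = W + alpha * (B @ B.T)

calW = np.block([[W, B], [B.T, np.zeros((1, 1))]])
inv = np.linalg.inv(calW)
G, P = inv[:N, :N], inv[:N, N:]      # blocks of the inverse

lhs = np.linalg.inv(Wst)
rhs = G + lhs @ B @ P.T
print(np.max(np.abs(lhs - rhs)))     # ~ machine precision
\end{verbatim}
}\eex
\end{remark}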
\section{$\H$-Cholesky decomposition}
\label{sec:LU-decomposition}
In this section we are concerned with proving the existence of a hierarchical Cholesky-decomposition
of the form $\mathbf{W}^{\rm st}\approx\mathbf{C}_{\H}\mathbf{C}_{\H}^T$, where $\mathbf{C}_{\H}$
is a lower triangular $\H$-matrix.
The main results are summarized in Theorem~\ref{th:HLU}. It is shown by
approximating off-diagonal blocks of certain Schur complements by low-rank matrices.
Therefore, the main work is done in Section~\ref{sec:schur-complements};
the remaining steps follow the lines of \cite{Bebendorf07,GrasedyckKriemannLeBorne,FMPFEM}.
The advantage of studying the second system \eqref{eq:stabGalerkinSystem} is that the submatrix
$\mathbf{W}^{\rm st}=\mathbf{W} +\alpha \mathbf{B}\mathbf{B}^T$ is symmetric and positive definite and therefore has
a Cholesky-decomposition, which can be used to derive an $LU$-decomposition for the whole matrix.
Moreover, the existence of the Cholesky decomposition does not depend on the numbering of the degrees of freedom,
i.e., for every other numbering of the basis functions there is a Cholesky decomposition as well
(see, e.g., \cite[Cor.~{3.5.6}]{horn-johnson13}).
The existence of the Cholesky decomposition implies
the invertibility of the matrix $\mathbf{W}^{\rm st}|_{\rho \times \rho}$ for any $n \leq N$ and index set
$\rho := \{1,\ldots,n\}$ (see, e.g., \cite[Cor.~{3.5.6}]{horn-johnson13}).
For the $\H$-Cholesky decomposition of Theorem~\ref{th:HLU} below we assume that
the unknowns are organized in a binary cluster tree ${\mathbb T}_{\mathcal I}$. This induces
an ordering of the unknowns by requiring that the unknowns of one of the sons be numbered first and
those of the other son later; the precise numbering for the leaves is immaterial for our purposes.
This induced ordering of the unknowns allows us to speak of {\em block lower triangular} matrices, if the
block partition $P$ is based on the cluster tree ${\mathbb T}_{\mathcal I}$.
The following theorem states that the Cholesky factor $\mathbf{C}$ for the stabilized matrix
can be approximated by a block lower triangular $\H$-matrix and, as a consequence, there exists a
hierarchical $LU$-factorization of $\widehat{\boldsymbol{\mathcal W}}$.
\begin{theorem}\label{th:HLU}
Let $\mathbf{W}^{\rm st} = \mathbf{C}\mathbf{C}^T$ be the Cholesky decomposition.
Let a partition $P$ of $\mathcal{I}\times \mathcal{I}$ be based on a cluster tree
$\mathbb{T}_{\mathcal{I}}$.
Then for every $r\geq 3$, there exist block lower triangular, blockwise rank-$r$ matrices $\mathbf{C_{\H}},\mathbf{L_{\H}}$
and a block upper triangular, blockwise rank-$r$ matrix $\mathbf{U_{\H}}$
such that
\begin{enumerate}[(i)]
\item
\label{item:th:HLU-i}
$\displaystyle \frac{\norm{\mathbf{C}-\mathbf{C_{\H}}}_2}{\norm{\mathbf{C}}_2} \leq
C_{\rm chol} N^{\frac{2}{d-1}} {\rm depth}(\mathbb{T}_{\mathcal{I}})
e^{-br^{1/(d+1)}}$
\item
\label{item:th:HLU-ii}
$\displaystyle\frac{\norm{\mathbf{W}^{\rm st}-\mathbf{C_{\H}}\mathbf{C_{\H}}^T}_2}
{\norm{\mathbf{W}^{\rm st}}_2} \leq
2C_{\rm chol} N^{\frac{2}{d-1}} {\rm depth}(\mathbb{T}_{\mathcal{I}}) e^{-br^{1/(d+1)}}
\!+\! C_{\rm chol}^2 N^{\frac{4}{d-1}} {\rm depth}(\mathbb{T}_{\mathcal{I}})^2 e^{-2br^{1/(d+1)}}$,
\item
\label{item:th:HLU-iii}
$\displaystyle\frac{\norm{\widehat{\boldsymbol{\mathcal W}}-\mathbf{L_{\H}}\mathbf{U_{\H}}}_2}
{\norm{\widehat{\boldsymbol{\mathcal W}}}_2} \leq
2C_{\rm chol} N^{\frac{2}{d-1}} {\rm depth}(\mathbb{T}_{\mathcal{I}}) e^{-br^{1/(d+1)}}
\!+\! C_{\rm chol}^2 N^{\frac{4}{d-1}} {\rm depth}(\mathbb{T}_{\mathcal{I}})^2 e^{-2br^{1/(d+1)}}$,
\end{enumerate}
where $C_{\rm chol} = C_{\rm sp}C_{\rm sc}\sqrt{\kappa_2(\mathbf{W}^{\rm st})}$,
with the sparsity constant $C_{\rm sp}$ of (\ref{eq:sparsityConstant}),
the spectral condition number $\kappa_2(\mathbf{W}^{\rm st}) := \norm{\mathbf{W}^{\rm st}}_2
\norm{{\mathbf{W}^{\rm st}}^{-1}}_2$,
and a constant $C_{\rm sc}$
depending only on $\Omega$, $d$, $p$, the $\gamma$-shape regularity of the quasiuniform triangulation $\mathcal{T}_h$,
the admissibility parameter $\eta$ and the stabilization parameter $\alpha$.
\end{theorem}
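\begin{remark}
{\rm
Although Theorem~\ref{th:HLU} is a pure existence statement, the object it concerns is computed in
practice by a simple block recursion: factor the leading diagonal block, compress the off-diagonal
block of the factor to rank $r$, and recurse on the Schur complement. The following dense toy
version (Python; SVD truncation replaces genuine $\H$-arithmetic, and the smooth test matrix is an
ad hoc stand-in with numerically low-rank off-diagonal blocks) shows the factorization error
decreasing rapidly in $r$:
\begin{verbatim}
import numpy as np

def truncate(A, r):
    # best rank-r approximation via SVD
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def h_cholesky(A, r, nmin=4):
    # blockwise Cholesky with rank-r off-diagonal compression
    n = A.shape[0]
    if n <= nmin:
        return np.linalg.cholesky(A)
    m = n // 2
    L11 = h_cholesky(A[:m, :m], r, nmin)
    # L21 = A21 L11^{-T}, compressed to rank r
    L21 = truncate(np.linalg.solve(L11, A[:m, m:]).T, r)
    L22 = h_cholesky(A[m:, m:] - L21 @ L21.T, r, nmin)
    L = np.zeros_like(A)
    L[:m, :m], L[m:, :m], L[m:, m:] = L11, L21, L22
    return L

x = np.linspace(0.0, 1.0, 32)
A = 1.0 / (1.0 + (x[:, None] - x[None, :])**2) + 2.0 * np.eye(32)
for r in [2, 4, 6]:
    L = h_cholesky(A, r)
    print(r, np.linalg.norm(A - L @ L.T, 2))
\end{verbatim}
}\eex
\end{remark}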
\subsection{Schur complements}
\label{sec:schur-complements}
For a cluster pair $(\tau,\sigma)$ and $\rho := \{i\in \mathcal{I} : i < \min(\tau\cup\sigma)\}$,
we define the Schur complement
\begin{equation}\label{eq:defSchur}
\mathbf{S}(\tau,\sigma) = \mathbf{W}^{\rm st}|_{\tau\times\sigma} - \mathbf{W}^{\rm st}|_{\tau\times \rho}
(\mathbf{W}^{\rm st}|_{\rho\times \rho})^{-1}\mathbf{W}^{\rm st}|_{\rho\times\sigma}.
\end{equation}
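\begin{remark}
{\rm
For orientation: for a symmetric positive definite matrix, the Schur complement \eqref{eq:defSchur}
is exactly the block that remains to be factored once the unknowns in $\rho$ have been eliminated.
This is why low-rank approximability of $\mathbf{S}(\tau,\sigma)$ translates into an approximate
$\H$-Cholesky factorization. A minimal sketch (Python, with a generic symmetric positive definite
matrix in place of $\mathbf{W}^{\rm st}$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N, n_rho = 10, 4
M = rng.standard_normal((N, N))
A = M @ M.T + N * np.eye(N)      # SPD stand-in

rho = np.arange(n_rho)           # unknowns eliminated first
rest = np.arange(n_rho, N)       # contains the clusters tau, sigma

S = A[np.ix_(rest, rest)] - A[np.ix_(rest, rho)] @ np.linalg.solve(
        A[np.ix_(rho, rho)], A[np.ix_(rho, rest)])

L = np.linalg.cholesky(A)
trailing = L[n_rho:, n_rho:] @ L[n_rho:, n_rho:].T
print(np.max(np.abs(S - trailing)))   # ~ machine precision
\end{verbatim}
}\eex
\end{remark}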
As mentioned in \cite{FMPBEM}, such a Schur complement can be approximated by using $\H$-arithmetic;
however, this approach leads to worse estimates with respect to the rank needed for the approximation
than the procedure presented here.
Therefore, we revisit our approach from \cite{FMPBEM} that is based on interpreting Schur complements
as BEM matrices obtained from certain constrained spaces.
The main result in this section is Theorem~\ref{lem:Schur} below.
For its proof, we need a degenerate approximation
of the kernel function $\kappa(x,y) = G(x,y)$ of the single layer operator $V$ given by
$V\phi(x) := \int_{\Gamma}G(x,y)\phi(y)ds_y$.
This classical result, stated here as a degenerate approximation by Chebyshev interpolation,
is formulated in the following lemma. A proof can be found in \cite{FMPBEM}.
\begin{lemma}\label{lem:lowrankGreensfunction}
Let $\widetilde \eta>0$ and fix $\eta^\prime \in (0,2 \widetilde \eta)$. Then, for every hypercube
$B_Y \subset \R^d$, $d \in \{2,3\}$ and closed $D_X \subset \R^d$ with
${\rm dist}(B_Y,D_X)\geq \widetilde\eta {\rm diam}(B_Y)$ the following is true: For every
$r \in \N$ there exist functions $g_{1,i}$, $g_{2,i}$, $i=1,\ldots,r$ such that
\begin{equation}
\norm{\kappa(x,\cdot)-\sum_{i=1}^r g_{1,i}(x) g_{2,i}(\cdot)}_{L^{\infty}(B_{Y})}
\leq C \frac{(1+1/\widetilde \eta)}{{\rm dist}(\{x\},B_Y)^{d-2}} (1 + \eta^\prime)^{-r^{1/d}}
\qquad \forall x \in D_X,
\end{equation}
for a constant $C$ that depends solely on the choice of $\eta^\prime \in (0,2\widetilde\eta)$.
\end{lemma}
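\begin{remark}
{\rm
Lemma~\ref{lem:lowrankGreensfunction} is the continuous reason why admissible blocks are numerically
of low rank. The following sketch (Python; the two-dimensional kernel
$-\frac{1}{2\pi}\log\abs{x-y}$ sampled on two well-separated segments, all parameters ad hoc)
displays the corresponding exponential singular value decay:
\begin{verbatim}
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n)     # cluster D_X
y = np.linspace(3.0, 4.0, n)     # cluster B_Y, well separated from D_X
K = -np.log(np.abs(x[:, None] - y[None, :])) / (2.0 * np.pi)

sv = np.linalg.svd(K, compute_uv=False)
for r in [1, 5, 10, 15]:
    print(f"sigma_{r+1}/sigma_1 = {sv[r]/sv[0]:.2e}")
\end{verbatim}
}\eex
\end{remark}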
The following lemma gives a representation for the Schur complement by
interpreting it as a BEM matrix from a certain constrained space.
A main message of the following lemma is that by slightly modifying the Schur complement
$\mathbf{S}(\tau,\sigma)$, we can use an orthogonality without the stabilization term.
\begin{lemma}[Schur complement and orthogonality]
\label{rem:SchurRepresentation}
Let $(\tau,\sigma)$ be an admissible cluster pair,
$\rho := \{i\in \mathcal{I} : i < \min(\tau\cup\sigma)\}$,
and the Schur complement $\mathbf{S}(\tau,\sigma)$ defined by \eqref{eq:defSchur}.
Let the function
$\widetilde{\phi} \in S^{p,1}(\mathcal{T}_h)$
with $\widetilde{\phi} = \phi + \phi_{\rho}$, where
$\phi \in S^{p,1}(\mathcal{T}_h), \supp \phi \subset \overline{\Gamma_{\tau}}$ and
$\phi_{\rho} \in S^{p,1}(\mathcal{T}_h), \supp \phi_{\rho} \subset \overline{\Gamma_{\rho}}$, with
$\Gamma_{\tau},\Gamma_{\rho}$ of the form \eqref{eq:screen},
satisfy the orthogonality
\begin{equation}\label{eq:SchurOrthoNoStab}
\skp{W\widetilde{\phi},\widehat{\psi}}_{L^2(\Gamma)} = 0 \quad
\forall \widehat{\psi} \in S^{p,1}(\mathcal{T}_h) \; \text{with} \; \supp \widehat{\psi} \subset \overline{\Gamma_{\rho}}.
\end{equation}
Then, there exists a matrix $\mathbf{D}$ of rank $2$, which is independent of $\phi$ and $\psi$, such that
\begin{equation*}
\skp{W\widetilde{\phi},\psi}+\alpha\skp{\widetilde{\phi},1}\skp{\psi,1} =
\boldsymbol{\phi}^T\left(\mathbf{S}(\tau,\sigma)+\mathbf{D}\right)\boldsymbol{\psi}.
\end{equation*}
\end{lemma}
\begin{proof}
Given $\phi$, $\widetilde{\phi}$ is indeed uniquely defined:
By definition of $\widetilde{\phi}$, we get with the matrix $\mathbf{W}$ from \eqref{eq:matrixHypsing}
\begin{equation*}
0 = \skp{W\widetilde{\phi},\widehat{\psi}}_{L^2(\Gamma)} =
\skp{W(\phi + \phi_{\rho}),\widehat{\psi}}_{L^2(\Gamma)} =
(\boldsymbol{\phi}^T\mathbf{W}|_{\tau \times \rho} + \boldsymbol{\phi}_{\rho}^T\mathbf{W}|_{\rho \times \rho})
\widehat{\boldsymbol \psi},
\end{equation*}
for $\widehat{\psi} \in S^{p,1}(\mathcal{T}_h)$, $\supp \widehat{\psi} \subset \overline{\Gamma_{\rho}}$
and corresponding vector $\widehat{\boldsymbol\psi} \in \R^{\abs{\rho}}$.
Due to $\rho \subsetneq \mathcal{I}$, the hyper-singular integral operator is elliptic on the
screen $\Gamma_{\rho}$, so the matrix $\mathbf{W}|_{\rho \times \rho}$ is symmetric and positive
definite and therefore invertible.
This leads to
\begin{equation*}
\boldsymbol{\phi}_{\rho}^T = - \boldsymbol{\phi}^T\mathbf{W}|_{\tau \times \rho}\mathbf{W}|_{\rho \times \rho}^{-1}.
\end{equation*}
Thus, we get for $\psi$ with $\supp \psi \subset \overline{\Gamma_{\sigma}}$
and the vector $\mathbf{B}$ from \eqref{eq:matrixHypsing} that
\begin{eqnarray}\label{eq:Schur1}
\skp{W\widetilde{\phi},\psi}+\alpha\skp{\widetilde{\phi},1}\skp{\psi,1} &=&
\boldsymbol{\phi}^T \left(\mathbf{W}|_{\tau \times \sigma} +
\alpha\mathbf{B}\mathbf{B}^T|_{\tau \times \sigma} \right)\boldsymbol{\psi}+
\boldsymbol{\phi}_{\rho}^T \left(\mathbf{W}|_{\rho \times \sigma}+
\alpha\mathbf{B}\mathbf{B}^T|_{\rho \times \sigma}\right) \boldsymbol{\psi} \nonumber\\
&=& \boldsymbol{\phi}^T\left(\mathbf{W}^{\rm st}|_{\tau\times\sigma} -
\mathbf{W}|_{\tau \times \rho}\mathbf{W}|_{\rho \times \rho}^{-1}\mathbf{W}^{\rm st}|_{\rho\times\sigma} \right)
\boldsymbol{\psi}.
\end{eqnarray}
With the Sherman-Morrison-Woodbury formula (\cite[Ch.~{0.7.4}]{horn-johnson13}),
the Schur complement $\mathbf{S}(\tau,\sigma)$ can be written as
\begin{eqnarray}\label{eq:Schur2}
\mathbf{S}(\tau,\sigma) &=& \mathbf{W}^{\rm st}|_{\tau\times\sigma} - \mathbf{W}^{\rm st}|_{\tau\times \rho}
(\mathbf{W}^{\rm st}|_{\rho\times \rho})^{-1}\mathbf{W}^{\rm st}|_{\rho\times\sigma} \nonumber \\
&=& \mathbf{W}^{\rm st}|_{\tau\times\sigma} - \left(\mathbf{W}|_{\tau \times \rho} +
\alpha\mathbf{B}\mathbf{B}^T|_{\tau \times \rho} \right)\left(\mathbf{W}|_{\rho \times \rho}^{-1} + \mathbf{P}\right)
\mathbf{W}^{\rm st}|_{\rho\times\sigma},
\end{eqnarray}
where $\mathbf{P}$ is a rank-one matrix given by
$\mathbf{P} = -\alpha\mathbf{W}|_{\rho\times\rho}^{-1}\mathbf{B}|_{\rho}
\left(1+\alpha\mathbf{B}|_{\rho}^T\mathbf{W}|_{\rho\times\rho}^{-1}\mathbf{B}|_{\rho}\right)^{-1}
\mathbf{B}|_{\rho}^T\mathbf{W}|_{\rho\times\rho}^{-1}$.
Thus, comparing the matrices in \eqref{eq:Schur1} and \eqref{eq:Schur2}, we observe that
\begin{equation*}
\skp{W\widetilde{\phi},\psi}+\alpha\skp{\widetilde{\phi},1}\skp{\psi,1} =
\boldsymbol{\phi}^T\left(\mathbf{S}(\tau,\sigma)+\mathbf{D}\right)\boldsymbol{\psi}
\end{equation*}
with $\mathbf{D} := \left(\mathbf{W}|_{\tau \times \rho}\mathbf{P} +
\alpha\mathbf{B}\mathbf{B}^T|_{\tau \times \rho}\left(\mathbf{W}|_{\rho \times \rho}^{-1}+\mathbf{P}\right)\right)
\mathbf{W}^{\rm st}|_{\rho\times\sigma}$.
Since both $\mathbf{P}$ and $\mathbf{B}\mathbf{B}^T|_{\tau \times \rho}$ have rank one,
the matrix $\mathbf{D}$ has rank at most $2$.
\end{proof}
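\begin{remark}
{\rm
The Sherman-Morrison-Woodbury step in the preceding proof can be verified directly; a minimal
sketch (Python; a random symmetric positive definite stand-in for $\mathbf{W}|_{\rho\times\rho}$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, alpha = 5, 0.3
M = rng.standard_normal((n, n))
W = M @ M.T + n * np.eye(n)      # SPD stand-in
B = rng.standard_normal((n, 1))

Winv = np.linalg.inv(W)
scal = 1.0 + alpha * (B.T @ Winv @ B).item()
P = -alpha * Winv @ B @ B.T @ Winv / scal   # rank-one correction

lhs = np.linalg.inv(W + alpha * B @ B.T)
print(np.max(np.abs(lhs - (Winv + P))))     # ~ machine precision
\end{verbatim}
}\eex
\end{remark}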
Now, we are able to prove the main result of this subsection, an approximation result for the
Schur complement $\mathbf{S}(\tau,\sigma)$.
\begin{theorem}\label{lem:Schur}
Let $(\tau,\sigma)$ be an $\eta$-admissible cluster pair,
set $\rho := \{i\in \mathcal{I} : i < \min(\tau\cup\sigma)\}$,
and let the Schur complement $\mathbf{S}(\tau,\sigma)$ be defined in \eqref{eq:defSchur}.
Then for every $r \ge 3$, there exists a rank-$r$ matrix $\mathbf{S}_{r}(\tau,\sigma)$ such that
\begin{equation*}
\norm{\mathbf{S}(\tau,\sigma) - \mathbf{S}_{r}(\tau,\sigma)}_2 \leq C_{\rm sc}^\prime h^{d-3}
e^{-br^{1/(d+1)}},
\end{equation*}
where the constants $C_{\rm sc}^\prime$, $b >0$ depend only on $\Omega$,
$d,p$, the $\gamma$-shape regularity of the quasiuniform triangulation $\mathcal{T}_h$, and $\eta$.
Furthermore, there exists a constant $C_{\rm sc}$ depending additionally on the stabilization parameter $\alpha > 0$
such that
\begin{equation*}
\norm{\mathbf{S}(\tau,\sigma) - \mathbf{S}_{r}(\tau,\sigma)}_2 \leq C_{\rm sc} N^{2/(d-1)}
e^{-br^{1/(d+1)}} \norm{\mathbf{W}^{\rm st}}_2.
\end{equation*}
\end{theorem}
\begin{proof}
Let $B_{R_{\tau}},B_{R_{\sigma}}$ be
bounding boxes for the clusters $\tau$, $\sigma$ satisfying \eqref{eq:admissibility} and
$\Gamma_{\rho} \subset \Gamma$ defined by \eqref{eq:screen}.
Lemma~\ref{rem:SchurRepresentation} provides a representation for the Schur complement as
\begin{equation}\label{eq:SchurRepresentation}
\boldsymbol{\phi}^T\left(\mathbf{S}(\tau,\sigma)+\mathbf{D}\right)\boldsymbol{\psi} =
\skp{W\widetilde{\phi},\psi}_{L^2(\Gamma)}+\alpha\skp{\widetilde{\phi},1}_{L^2(\Gamma)}\skp{\psi,1}_{L^2(\Gamma)},
\end{equation}
with the following relation between the functions $\psi$, $\widetilde\phi$ and the vectors
$\boldsymbol{\psi}$, $\boldsymbol{\phi}$, respectively:
$\psi = \sum_{j=1}^{\abs{\sigma}}\boldsymbol{\psi}_j\chi_{j_{\sigma}}$,
where the index $j_{\sigma}$ denotes the $j$-th basis function corresponding to the cluster $\sigma$,
and the function $\widetilde{\phi} \in S^{p,1}(\mathcal{T}_h)$
is defined by $\widetilde{\phi} = \phi + \phi_{\rho}$
with $\phi = \sum_{j=1}^{\abs{\tau}}\boldsymbol{\phi}_j\chi_{j_{\tau}}$
and $\supp \phi_{\rho} \subset \overline{\Gamma_{\rho}}$ such that
\begin{equation}\label{eq:SchurOrthogonality}
\skp{W\widetilde{\phi},\widehat{\psi}}_{L^2(\Gamma)}
= 0 \quad \forall \widehat{\psi} \in S^{p,1}({\mathcal T}_h) \; \text{with}\;
\supp \widehat{\psi} \subset \overline{\Gamma_{\rho}}.
\end{equation}
Our low-rank approximation of the Schur complement matrix $\mathbf{S}(\tau,\sigma)$ will have two
ingredients: first,
based on the the techniques of Section~\ref{sec:Approximation-solution} we exploit the
orthogonality \eqref{eq:SchurOrthogonality} to
construct a low-dimensional space $\widehat W_k$ from which for any $\phi$, the corresponding
function $\widetilde \phi$ can be approximated well. Second,
we exploit that the function $\psi$ in (\ref{eq:SchurRepresentation}) is
supported by $\Gamma_{\sigma}$, and we will use Lemma~\ref{lem:lowrankGreensfunction}.
Let $\delta = \frac{1}{1+\eta}$ and $B_{R_{\sigma}}$, $B_{(1+\delta)R_{\sigma}}$ be concentric boxes.
The symmetry of $W$ leads to
\begin{align}
\label{eq:tmpSchur}
&\skp{W\widetilde{\phi},\psi}_{L^2(\Gamma)}+\alpha\skp{\widetilde{\phi},1}_{L^2(\Gamma)}\skp{\psi,1}_{L^2(\Gamma)}
= \skp{\widetilde{\phi},W\psi}_{L^2(\Gamma)}+\alpha\skp{\widetilde{\phi},1}_{L^2(\Gamma)}\skp{\psi,1}_{L^2(\Gamma)}
\nonumber \\
\qquad &=
\skp{\widetilde{\phi},W\psi}_{L^2( B_{(1+\delta)R_{\sigma}}\cap\Gamma_{\rho} )} +
\skp{\widetilde{\phi},W\psi}_{L^2(\Gamma \setminus B_{(1+\delta)R_{\sigma}})}
+\alpha\skp{\widetilde{\phi},1}_{L^2(\Gamma)}\skp{\psi,1}_{L^2(\Gamma)}.
\end{align}
First, we treat the first term on the right-hand side of \eqref{eq:tmpSchur}.
In view of the symmetry property
$\mathbf{S}(\tau,\sigma) = \mathbf{S}(\sigma,\tau)^T$, we may assume for approximation
purposes that $\operatorname*{diam} B_{R_\sigma} \leq \operatorname*{diam} B_{R_\tau}$, i.e.,
$\min\{{\rm diam}(B_{R_{\tau}}),{\rm diam}(B_{R_{\sigma}})\} = \sqrt{d}R_{\sigma}$. Next, the choice of $\delta$
and the admissibility condition \eqref{eq:admissibility} imply
\begin{equation*}
{\rm dist}(B_{(1+2\delta)R_{\sigma}},B_{R_{\tau}}) \geq {\rm dist}(B_{R_{\sigma}},B_{R_{\tau}})-\sqrt{d}\delta R_{\sigma}
\geq \sqrt{d}R_{\sigma}(\eta^{-1}-\delta) > 0.
\end{equation*}
Therefore, we have $\widetilde{\phi}|_{B_{(1+2\delta)R_{\sigma}}\cap\Gamma_{\rho}} = \phi_{\rho}|_{B_{(1+2\delta)R_{\sigma}}\cap\Gamma_{\rho}}$
and the orthogonality \eqref{eq:SchurOrthogonality} holds
on the box $B_{(1+2\delta)R_{\sigma}}$.
Thus, by definition of $\H_{h,0}$, we have
$\widetilde{K}\widetilde{\phi} \in \H_{h,0}(B_{(1+2\delta)R_{\sigma}},\Gamma_{\rho},0)$.
As a consequence, Lemma~\ref{cor:lowdimappHS} can be applied to the potential
$\widetilde{K}\widetilde{\phi}$
with $R := (1+\delta)R_{\sigma}$ and $\kappa := \frac{1}{2+\eta} = \frac{\delta}{1+\delta}$. Note that
$(1+\kappa)(1+\delta) = 1+2\delta$ and $1+\kappa^{-1} = 3+\eta$.
Hence, we get a low dimensional space $\widehat{W}_k$ of dimension
$\dim \widehat{W}_k \leq C_{\rm dim}(3+\eta)^dq^{-d}k^{d+1} =: r$, and
the best approximation $\widehat{\phi} = \Pi_{\widehat{W}_k}\widetilde{\phi}$
to $\widetilde{\phi}$ from the space $\widehat{W}_k$ satisfies
\begin{equation*}
\norm{\widetilde{\phi}-\widehat{\phi}}_{L^{2}(B_{(1+\delta)R_{\sigma}}\cap\Gamma_{\rho})} \lesssim
R_{\sigma} h^{-1/2} q^k \triplenorm{\widetilde{K}\widetilde{\phi}}_{h,(1+2\delta)R_{\sigma}}
\lesssim h^{-1/2} e^{-b_1r^{1/(d+1)}}\norm{\widetilde{\phi}}_{H^{1/2}(\Gamma)},
\end{equation*}
where we defined $b_1 := -\frac{\ln(q)}{C_{\rm dim}^{1/(d+1)}}q^{d/(d+1)}(3+\eta)^{-d/(1+d)} > 0$
to obtain $q^k = e^{-b_1r^{1/(d+1)}}$.
Therefore, we get
\begin{eqnarray}\label{eq:Schurtemp1}
\abs{\skp{\widetilde{\phi}-\widehat{\phi},W\psi}_{L^2(B_{(1+\delta)R_{\sigma}}\cap\Gamma_{\rho})}} \lesssim
h^{-1/2} e^{-b_1r^{1/(d+1)}}\norm{\widetilde{\phi}}_{H^{1/2}(\Gamma)}\norm{W\psi}_{L^{2}(\Gamma)}.
\end{eqnarray}
The ellipticity of the hyper-singular integral operator on the screen $\Gamma_{\rho} \subsetneq \Gamma$,
$\supp (\widetilde{\phi} - \phi) = \supp \phi_{\rho} \subset \overline{\Gamma_{\rho}}$,
and the orthogonality \eqref{eq:SchurOrthogonality} lead to
\begin{eqnarray}\label{eq:Schurtemp2}
\norm{\widetilde{\phi}-\phi}_{H^{1/2}(\Gamma)}^2&\lesssim&
\skp{W(\widetilde{\phi}-\phi),\widetilde{\phi}-\phi}_{L^2(\Gamma)}
= -\skp{W\phi,\widetilde{\phi}-\phi}_{L^2(\Gamma)} \nonumber \\
&\lesssim&
\norm{W\phi}_{H^{-1/2}(\Gamma)}\norm{\widetilde{\phi}-\phi}_{H^{1/2}(\Gamma)}
\lesssim \norm{\phi}_{H^{1/2}(\Gamma)}\norm{\widetilde{\phi}-\phi}_{H^{1/2}(\Gamma)}.
\end{eqnarray}
Thus, with the triangle inequality, \eqref{eq:Schurtemp2},
the stability of $W:H^{1}(\Gamma)\ra L^{2}(\Gamma)$, and the inverse estimate \eqref{eq:inverse}, we
can estimate \eqref{eq:Schurtemp1} by
\begin{eqnarray*}
\abs{\skp{\widetilde{\phi}-\widehat{\phi},W\psi}_{L^2(B_{(1+\delta)R_{\sigma}}\cap\Gamma_{\rho})}}
&\lesssim&
h^{-1/2} e^{-br^{1/(d+1)}}\left(\norm{\widetilde{\phi}-\phi}_{H^{1/2}(\Gamma)}+
\norm{\phi}_{H^{1/2}(\Gamma)}\right)\norm{W\psi}_{L^{2}(\Gamma)} \\
&\lesssim&
h^{-2} e^{-br^{1/(d+1)}}\norm{\phi}_{L^{2}(\Gamma)}\norm{\psi}_{L^2(\Gamma)}.
\end{eqnarray*}
For the second term in \eqref{eq:tmpSchur},
we exploit the asymptotic smoothness of the Green's function $G(\cdot,\cdot)$.
First, we mention a standard device in connection with the hyper-singular integral operator, namely,
it can be represented in terms of the simple-layer operator (see, e.g., \cite[Sec.~6]{Steinbach}):
\begin{equation}
\label{eq:representation-of-W-in-terms-of-V}
\skp{\widetilde{\phi},W\psi}
=
\skp{{{\rm curl}}_{\Gamma}\widetilde{\phi},V{{\rm curl}}_{\Gamma}\psi},
\end{equation}
where for a scalar function $v$ defined on $\Gamma$, a lifting operator $\mathcal{L}$, and the outer normal
vector $n$, the surface curl is defined as
\begin{align*}
{{\rm curl}}_\Gamma v &= n \times \gamma_0^{\rm int} (\nabla \mathcal{L} v), \qquad \mbox{ for $d = 3$}, \\
{{\rm curl}}_\Gamma v &= n \cdot \gamma_0^{\rm int} (\nabla^T \mathcal{L} v), \quad
\nabla^T v = (\partial_2 v,-\partial_1 v)^T \qquad \mbox{ for $d = 2$}.
\end{align*}
The representation (\ref{eq:representation-of-W-in-terms-of-V})
is necessary here, since the kernel of the hyper-singular integral operator is not
asymptotically smooth on non-smooth surfaces $\Gamma$.
Now, Lemma~\ref{lem:lowrankGreensfunction}
can be applied with $B_Y = B_{R_{\sigma}}$ and $D_X = \Gamma \setminus B_{(1+\delta)R_{\sigma}}$,
where the choice of $\delta$ implies
\begin{equation}
\label{eq:degenerate-approximation-admissibility}
{\rm dist}(B_Y,D_X)\geq \frac{1}{2\sqrt{d}(1+\eta)} {\rm diam}(B_Y).
\end{equation}
Therefore, we get an approximation $G_r(x,y) = \sum_{i=1}^r g_{1,i}(x) g_{2,i}(y)$ such that
\begin{eqnarray}
\label{eq:degenerate-approximation-error}
\norm{G(x,\cdot)-{G}_r(x,\cdot)}_{L^{\infty}(B_{R_{\sigma}})}
\!&\lesssim&\! \frac{1}{{\rm dist}(\{x\},B_{R_\sigma})^{d-2}}e^{-b_2r^{1/d}}
\quad\! \! \forall x \in \Gamma\setminus B_{(1+\delta)R_{\sigma}};
\end{eqnarray}
here, the constant $b_2>0$ depends only on $d$ and $\eta$.
As a consequence of \eqref{eq:degenerate-approximation-admissibility} and
\eqref{eq:degenerate-approximation-error},
the rank-$r$ operator $W_r$ given by
\begin{equation*}
\skp{\widetilde{\phi},W_r\psi}_{L^2(\Gamma\setminus B_{(1+\delta)R_{\sigma}})}:=
\int_{\Gamma\setminus B_{(1+\delta)R_{\sigma}}} {\rm curl}_{\Gamma}\widetilde{\phi}(x)
\int_{B_{R_{\sigma}}\cap\Gamma}{G}_r(x,y){\rm curl}_{\Gamma}\psi(y)ds_yds_x \end{equation*}
satisfies with $B:= (\Gamma \setminus B_{(1+\delta)R_{\sigma}}) \times (B_{R_{\sigma}}\cap\Gamma)$
\begin{eqnarray*}
\abs{\skp{\widetilde{\phi},(W-W_r)\psi}_{L^2(\Gamma \setminus B_{(1+\delta)R_{\sigma}})}}
&\lesssim&
\norm{{\rm curl}_{\Gamma}\widetilde{\phi}}_{L^2(\Gamma)}
\sqrt{\operatorname*{meas}(\Gamma \cap B_{R_\sigma})}
\norm{G-{G}_r}_{L^{\infty}\left(B\right)}
\norm{{\rm curl}_{\Gamma}\psi}_{L^2(\Gamma)} \\
&\lesssim&
h^{-3/2}\delta^{2-d} R_\sigma^{(3-d)/2} e^{-b_2r^{1/d}}
\norm{\widetilde{\phi}}_{H^{1/2}(\Gamma)} \norm{\psi}_{L^2(\Gamma)} \\
&\lesssim&
h^{-2}e^{-b_2r^{1/d}} \norm{\phi}_{L^2(\Gamma)}
\norm{\psi}_{L^2(\Gamma)},
\end{eqnarray*}
where the last two inequalities follow from the inverse estimate Lemma~\ref{lem:inverseinequality},
the stability estimate \eqref{eq:Schurtemp2} for the mapping $\phi \mapsto \widetilde \phi$, the assumption
$d \leq 3$ as well as $R_{\sigma} \leq \eta{\rm diam}(\Omega)$, and the choice $\delta = \frac{1}{1+\eta}$.
Here, the hidden constant additionally depends on $\eta$.
Since the mapping
\begin{equation*}
(\phi,\psi)\! \mapsto\! \skp{\widehat{\phi},W\psi}_{L^2(B_{(1+\delta)R_{\sigma}}\cap\Gamma_{\rho})} +
\skp{\widetilde{\phi},W_r\psi}_{L^2(\Gamma \setminus B_{(1+\delta)R_{\sigma}})}
\end{equation*}
defines a bounded bilinear form on $L^2(\Gamma)$,
there exists a linear operator $\widehat{W}_r:L^2(\Gamma)\ra L^2(\Gamma)$ such that
\begin{equation*}
\skp{\widehat{\phi},W\psi}_{L^2(B_{(1+\delta)R_{\sigma}}\cap\Gamma_{\rho})} +
\skp{\widetilde{\phi},W_r\psi}_{L^2(\Gamma \setminus B_{(1+\delta)R_{\sigma}})}
= \skp{\widehat{W}_r\phi,\psi}_{L^2(\Gamma)},
\end{equation*}
and the dimension of the range of $\widehat{W}_r$ is bounded by $2r$.
Therefore, we get
\begin{eqnarray*}
\abs{\skp{W\widetilde{\phi},\psi}_{L^2(\Gamma)}
- \skp{\widehat{W}_r\phi,\psi}_{L^2(\Gamma)}}
\lesssim h^{-2}e^{-br^{1/(d+1)}}\norm{\phi}_{L^2(\Gamma)}
\norm{\psi}_{L^2(\Gamma)},
\end{eqnarray*}
with $b := \min\{b_1,b_2\}$.
This leads to a
matrix $\widehat{\mathbf{S}_{r}}(\tau,\sigma)$ of rank $2r+1$ such that
\begin{equation*}
\norm{\mathbf{S}(\tau,\sigma)+\mathbf{D}-\widehat{\mathbf{S}_{r}}(\tau,\sigma)}_2 =
\sup_{\boldsymbol{\phi}\in\R^{\abs{\tau}},\boldsymbol{\psi}\in \R^{\abs{\sigma}}}
\frac{\abs{\boldsymbol{\phi}^T(\mathbf{S}(\tau,\sigma)+\mathbf{D}-\widehat{\mathbf{S}_{r}}(\tau,\sigma))\boldsymbol{\psi}}}
{\norm{\boldsymbol{\phi}}_2\norm{\boldsymbol{\psi}}_2}
\leq C h^{d-3} e^{-br^{1/(d+1)}},
\end{equation*}
where we have used \eqref{eq:basisisomorphism}.
Consequently we can find a matrix $\mathbf{S}_{r}(\tau,\sigma) := \widehat{\mathbf{S}_{r}}(\tau,\sigma)-\mathbf{D}$
of rank $2r+3$ such that
\begin{equation*}
\norm{\mathbf{S}(\tau,\sigma)-\mathbf{S}_{r}(\tau,\sigma)}_2
\leq C h^{d-3} e^{-br^{1/(d+1)}}.
\end{equation*}
The estimate $\frac{1}{\norm{\mathbf{W}^{\rm st}}_2}\lesssim h^{-d+1}$
(with implied constant depending on $\alpha$) from \cite[Lemma~12.9]{Steinbach}
and $h \simeq N^{-1/(d-1)}$ finish the proof.
\end{proof}
\begin{comment}
As a direct consequence of the representation \eqref{eq:SchurRepresentation} and the results from
Section~\ref{sec:Approximation-solution}, we can get a blockwise rank-$r$ approximation of the inverse of the Schur complement
$\mathbf{S}(\tau,\tau)$. For the existence of the inverse $\mathbf{S}(\tau,\tau)^{-1}$, we refer to the next subsection.
For a given right-hand side $f \in L^2(\Gamma)$, \eqref{eq:SchurRepresentation} implies that
solving $\mathbf{S}(\tau,\tau)\boldsymbol{\phi} = \mathbf{f}$
with $\mathbf{f} \in\R^{\abs{\tau}}$ defined by $\mathbf{f}_i = \skp{f,\chi_{i_{\tau}}}$,
is equivalent to solving
$a(\widetilde{\phi},\psi) = \skp{f,\psi}$
for all $\psi \in S^{p,1}(\mathcal{T}_h)$ with $\supp \psi \subset \overline{\Gamma_{\tau}}$.
Let $\tau_1\times\sigma_1 \subset \tau\times\tau$ be an $\eta$-admissible subblock.
For $f \in L^2(\Gamma)$ with $\supp f \subset B_{R_{\sigma_1}}\cap\Gamma$, the support properties
as well as the admissibility condition \eqref{eq:admissibility}
for the cluster pair $(\tau_1,\sigma_1)$ imply the orthogonality
\begin{equation*}
a(\widetilde{\phi},\psi) = 0 \quad \forall \psi \in S^{p,1}(\mathcal{T}_h) \, \text{with} \,
\supp \psi \subset B_{R_{\tau_1}}\cap\overline{\Gamma_{\tau}}.
\end{equation*}
Therefore, we have $\widetilde{W^{\rm st}}\widetilde{\phi} \in \H_{h,0}(B_{R_{\tau_1}},\Gamma_{\tau})$, and
Lemma~\ref{cor:lowdimappHS} provides an approximation to $\widetilde{\phi}$
on $B_{R_{\tau_1}}\cap\Gamma_{\tau}$. Then, a rank-$r$ factorization of the subblock
$S(\tau,\tau)^{-1}|_{\tau_1\times\sigma_1}$ can be constructed as in Section~\ref{sec:H-matrix-approximation},
which is summarized in the following theorem.
\begin{theorem}
Let $\tau \subset \mathcal{I}$, $\rho := \{i\in \mathcal{I} : i < \min(\tau)\}$,
$\tau_1\times\sigma_1 \subset \tau\times\tau$ be $\eta$-admissible, and let
the Schur complement $\mathbf{S}(\tau,\tau)$ be defined in \eqref{eq:defSchur}.
Then, there exist rank-$r$ matrices
$\mathbf{X}_{\tau_1\sigma_1} \in \R^{\abs{\tau_1}\times r}$, $\mathbf{Y}_{\tau_1\sigma_1} \in \R^{\abs{\sigma_1}\times r}$
such that
\begin{equation}
\norm{\mathbf{S}(\tau,\tau)^{-1}|_{\tau_1\times\sigma_1} - \mathbf{X}_{\tau_1\sigma_1}\mathbf{Y}_{\tau_1\sigma_1}^T}_2
\leq C_{\rm apx} N^{(d+2)/(d-1)} e^{-br^{1/(d+1)}}.
\end{equation}
The constant $C_{\rm apx}$ depends only on
$\Omega$, $d$, and the $\gamma$-shape regularity of $\mathcal{T}_h$,
and the constant $b>0$ additionally depends on $\eta$.
\end{theorem}
\end{comment}
\subsection{Existence of $\H$-Cholesky decomposition}
In this subsection, we will use the approximation of the Schur complement from the previous section
to prove the existence of an (approximate) $\H$-Cholesky decomposition.
We start with a hierarchical relation of the Schur complements $\mathbf{S}(\tau,\tau)$.
The Schur complements $\mathbf{S}(\tau,\tau)$ for a block $\tau \in \mathbb{T}_{\mathcal{I}}$
can be derived from the Schur complements of its sons $\tau_1$, $\tau_2$ by
\begin{equation*}
\mathbf{S}(\tau,\tau) = \begin{pmatrix} \mathbf{S}(\tau_1,\tau_1) & \mathbf{S}(\tau_1,\tau_2) \\
\mathbf{S}(\tau_2,\tau_1) & \mathbf{S}(\tau_2,\tau_2) + \mathbf{S}(\tau_2,\tau_1)\mathbf{S}(\tau_1,\tau_1)^{-1}\mathbf{S}(\tau_1,\tau_2) \end{pmatrix}.
\end{equation*}
A proof of this relation can be found in \cite[Lemma 3.1]{Bebendorf07}. One should note that
the proof does not use any properties of the matrix $\mathbf{W}^{\rm st}$ other than invertibility and
existence of a Cholesky decomposition.
Moreover, we have by definition of $\mathbf{S}(\tau,\tau)$ that $\mathbf{S}(\mathcal{I},\mathcal{I}) = \mathbf{W}^{\rm st}$.
If $\tau$ is a leaf, we get the Cholesky decomposition of $\mathbf{S}(\tau,\tau)$ by the classical
Cholesky decomposition,
which exists since $\mathbf{W}^{\rm st}$ has a Cholesky decomposition.
If $\tau$ is not a leaf, we use the hierarchical relation of the Schur complements to
define a Cholesky decomposition of the Schur complement
$\mathbf{S}(\tau,\tau)$ by
\begin{equation}\label{eq:LUdefinition}
\mathbf{C}(\tau) := \begin{pmatrix} \mathbf{C}(\tau_1) & 0 \\ \mathbf{S}(\tau_2,\tau_1)(\mathbf{C}(\tau_1)^T)^{-1} & \mathbf{C}(\tau_2) \end{pmatrix}, \quad
\end{equation}
with $\mathbf{S}(\tau_1,\tau_1) = \mathbf{C}(\tau_1)\mathbf{C}(\tau_1)^T$,
$\mathbf{S}(\tau_2,\tau_2) = \mathbf{C}(\tau_2)\mathbf{C}(\tau_2)^T$ and indeed get
$\mathbf{S}(\tau,\tau) = \mathbf{C}(\tau) \mathbf{C}(\tau)^T$.
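For concreteness, the recursion \eqref{eq:LUdefinition} can be illustrated by the following dense toy implementation in Python/NumPy (a sketch only: the plain index bisection is a stand-in for the cluster tree, and no low-rank truncation of the off-diagonal factor is performed):
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_triangular

def schur_cholesky(S, n_leaf=2):
    # Recursive Cholesky C(tau) with S = C C^T, built from the
    # Schur complements of the two sons as in the relation above.
    n = S.shape[0]
    if n <= n_leaf:                  # leaf: classical Cholesky
        return np.linalg.cholesky(S)
    k = n // 2                       # stand-in for the cluster tree
    C1 = schur_cholesky(S[:k, :k], n_leaf)
    # off-diagonal factor  S(tau_2,tau_1) (C(tau_1)^T)^{-1}
    L21 = solve_triangular(C1, S[k:, :k].T, lower=True).T
    # Schur complement of the second son: S_22 - S_21 S_11^{-1} S_12
    C2 = schur_cholesky(S[k:, k:] - L21 @ L21.T, n_leaf)
    C = np.zeros_like(S)
    C[:k, :k], C[k:, :k], C[k:, k:] = C1, L21, C2
    return C

A = np.random.rand(8, 8)
S = A @ A.T + 8 * np.eye(8)          # random SPD test matrix
C = schur_cholesky(S)
assert np.allclose(C @ C.T, S)
\end{verbatim}
The off-diagonal factor \texttt{L21} is precisely the block $\mathbf{S}(\tau_2,\tau_1)(\mathbf{C}(\tau_1)^T)^{-1}$ of \eqref{eq:LUdefinition}; in the $\H$-matrix version it is this block that is replaced by the low-rank approximation studied below.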
Moreover, the uniqueness of the Cholesky decomposition of $\mathbf{W}^{\rm st}$ implies that due to
$\mathbf{C}\mathbf{C}^T = \mathbf{W}^{\rm st} = \mathbf{S}(\mathcal{I},\mathcal{I}) = \mathbf{C}(\mathcal{I})\mathbf{C}(\mathcal{I})^T$, we have
$\mathbf{C} = \mathbf{C}(\mathcal{I})$.
The existence of the inverse $\mathbf{C}(\tau_1)^{-1}$
follows from the representation \eqref{eq:LUdefinition}
by induction over the levels, since on a leaf the existence is clear and the matrices
$\mathbf{C}(\tau)$ are block triangular matrices. Consequently, the inverse of
$\mathbf{S}(\tau,\tau)$ exists.
Moreover, as shown in \cite[Lemma~{22}]{GrasedyckKriemannLeBorne} in the context of $LU$-factorizations
instead of Cholesky decompositions, the restriction of the lower triangular part
$\mathbf{S}(\tau_2,\tau_1)(\mathbf{C}(\tau_1)^T)^{-1}$
of the matrix $\mathbf{C}(\tau)$ to a subblock $\tau_2'\times\tau_1'$
with $\tau_i'$ a son of $\tau_i$ satisfies
\begin{equation}
\label{eq:foo}
\left(\mathbf{S}(\tau_2,\tau_1)(\mathbf{C}(\tau_1)^T)^{-1}\right)|_{\tau_2'\times\tau_1'} =
\mathbf{S}(\tau_2',\tau_1')(\mathbf{C}(\tau_1')^T)^{-1}.
\end{equation}
The following lemma shows that the spectral norm of the inverse
$\mathbf{C}(\tau)^{-1}$ can be bounded by the norm of the inverse
$\mathbf{C}(\mathcal{I})^{-1}$.
\begin{lemma}\label{lem:LUnorm}
For $\tau\in \mathbb{T}_{\mathcal{I}}$,
let $\mathbf{C}(\tau)$ be given by \eqref{eq:LUdefinition}. Then,
\begin{eqnarray*}
\max_{\tau\in\mathbb{T}_{\mathcal{I}}}\norm{\mathbf{C}(\tau)^{-1}}_2 &=& \norm{\mathbf{C}(\mathcal{I})^{-1}}_2.
\end{eqnarray*}
\end{lemma}
\begin{proof}
With the block structure of \eqref{eq:LUdefinition}, we get the inverse
\begin{equation*}
\mathbf{C}(\tau)^{-1} = \begin{pmatrix} \mathbf{C}(\tau_1)^{-1} & 0 \\ -\mathbf{C}(\tau_2)^{-1} \mathbf{S}(\tau_2,\tau_1)(\mathbf{C}(\tau_1)^T)^{-1}\mathbf{C}(\tau_1)^{-1} & \mathbf{C}(\tau_2)^{-1} \end{pmatrix}.
\end{equation*}
So, we get by choosing $\mathbf{x}$ such that $\mathbf{x}_i = 0$ for $i \in \tau_1$ that
\begin{eqnarray*}
\norm{\mathbf{C}(\tau)^{-1}}_2 = \sup_{\mathbf{x}\in \R^{\abs{\tau}},\norm{x}_2=1}\norm{\mathbf{C}(\tau)^{-1}\mathbf{x}}_2
\geq \sup_{\mathbf{x}\in \R^{\abs{\tau_2}},\norm{x}_2=1}\norm{\mathbf{C}(\tau_2)^{-1}\mathbf{x}}_2
= \norm{\mathbf{C}(\tau_2)^{-1}}_2.
\end{eqnarray*}
The same argument for $\left(\mathbf{C}(\tau)^{-1}\right)^T$ leads to
\begin{eqnarray*}
\norm{\mathbf{C}(\tau)^{-1}}_2 = \norm{\left(\mathbf{C}(\tau)^{-1}\right)^T}_2 \geq \norm{\mathbf{C}(\tau_1)^{-1}}_2.
\end{eqnarray*}
Thus, we have $\norm{\mathbf{C}(\tau)^{-1}}_2 \geq \max_{i=1,2}\norm{\mathbf{C}(\tau_i)^{-1}}_2$ and as a consequence
$\max_{\tau\in\mathbb{T}_{\mathcal{I}}}\norm{\mathbf{C}(\tau)^{-1}}_2 = \norm{\mathbf{C}(\mathcal{I})^{-1}}_2$.
\end{proof}
We are now in a position to prove Theorem~\ref{th:HLU}:
\begin{proof}[of Theorem~\ref{th:HLU}]
{\em Proof of (\ref{item:th:HLU-i}):}
In the following, we show that every admissible subblock $\tau\times\sigma$ of $\mathbf{C}(\mathcal{I})$,
recursively defined by \eqref{eq:LUdefinition}, has a rank-$r$ approximation. Since an admissible block
of the lower triangular part of $\mathbf{C}(\mathcal{I})$
has to be a subblock of a matrix $\mathbf{C}(\tau')$ for some
$\tau' \in \mathbb{T}_{\mathcal{I}}$, we get in view of \eqref{eq:foo} that
$\mathbf{C}(\mathcal{I})|_{\tau\times\sigma} = \mathbf{S}(\tau,\sigma)(\mathbf{C}(\sigma)^T)^{-1}$.
Theorem~\ref{lem:Schur} provides a rank-$r$ approximation
${\mathbf S}_{r}(\tau,\sigma)$ to ${\mathbf S}(\tau,\sigma)$. Therefore, we can estimate
\begin{eqnarray*}
\norm{\mathbf{C}(\mathcal{I})|_{\tau\times\sigma}\! - \! \mathbf{S}_{r}(\tau,\sigma)(\mathbf{C}(\sigma)^T)^{-1}}_2 \!\! &=& \!\!
\norm{\left(\mathbf{S}(\tau,\sigma)-\mathbf{S}_{r}(\tau,\sigma)\right)(\mathbf{C}(\sigma)^T)^{-1}}_2 \\
&\leq&\!\! C_{\rm sc} N^{2/(d-1)} e^{-br^{1/(d+1)}}\norm{(\mathbf{C}(\sigma)^T)^{-1}}_2\norm{\mathbf{W}^{\rm st}}_2.
\end{eqnarray*}
Since $\mathbf{S}_{r}(\tau,\sigma)(\mathbf{C}(\sigma)^T)^{-1}$ is a rank-$r$ matrix for each $\eta$-admissible
cluster pair $(\tau,\sigma)$, we immediately get an $\H$-matrix approximation $\mathbf{C}_{\H}$
of the Cholesky factor $\mathbf{C}(\mathcal{I}) = \mathbf{C}$. With Lemma~\ref{lem:spectralnorm}
and Lemma~\ref{lem:LUnorm}, we get
\begin{equation*}
\norm{\mathbf{C}-\mathbf{C}_{\H}}_2\leq C_{\rm sc} C_{\rm sp} N^{2/(d-1)}
{\rm depth}(\mathbb{T}_{\mathcal{I}})e^{-br^{1/(d+1)}}
\norm{\mathbf{C}^{-1}}_2\norm{\mathbf{W}^{\rm st}}_2,
\end{equation*}
and with $\norm{\mathbf{W}^{\rm st}}_2 = \norm{\mathbf{C}}_2^2$, we conclude the proof of
(\ref{item:th:HLU-i}).
{\em Proof of (\ref{item:th:HLU-ii}):}
Since $\mathbf{W}^{\rm st}=\mathbf{C}\mathbf{C}^T$, the triangle inequality leads to
\begin{eqnarray*}
\norm{\mathbf{W}^{\rm st}-\mathbf{C}_{\H}\mathbf{C}_{\H}^T}_2 &\leq&
\norm{\mathbf{C}-\mathbf{C}_{\H}}_2\norm{\mathbf{C}^T}_2 +
\norm{\mathbf{C}^T-\mathbf{C}_{\H}^T}_2 \norm{\mathbf{C}}_2 +
\norm{\mathbf{C}-\mathbf{C}_{\H}}_2\norm{\mathbf{C}^T-\mathbf{C}_{\H}^T}_2 \\
&\leq& 2C_{\rm sc}C_{\rm sp}\kappa_2(\mathbf{C}){\rm depth}(\mathbb{T}_{\mathcal{I}})
N^{2/(d-1)} e^{-br^{1/(d+1)}}\norm{\mathbf{W}^{\rm st}}_2\\
& & +\kappa_2(\mathbf{C})^2C_{\rm sc}^2C_{\rm sp}^2 {\rm depth}(\mathbb{T}_{\mathcal{I}})^2
N^{4/(d-1)} e^{-2br^{1/(d+1)}}\frac{\norm{\mathbf{W}^{\rm st}}_2^2}{\norm{\mathbf{C}}_2^2},
\end{eqnarray*}
and the equality $\kappa_2(\mathbf{W}^{\rm st}) = \kappa_2(\mathbf{C})^2$ finishes the proof of
(\ref{item:th:HLU-ii}).
{\em Proof of (\ref{item:th:HLU-iii}):}
The approximate $LU$-factors $\mathbf{L_{\H}}, \mathbf{U_{\H}}$ can be constructed from $\mathbf{C_{\H}}$ by
\begin{equation}
\mathbf{L}_{\H}\mathbf{U}_{\H} = \begin{pmatrix} \mathbf{C_{\H}} & 0 \\ \boldsymbol{\ell}^T & -\abs{\boldsymbol{\ell}}^2 \end{pmatrix}
\begin{pmatrix} \mathbf{C_{\H}}^T & \boldsymbol{\ell} \\ 0 & 1 \end{pmatrix} =
\begin{pmatrix} \mathbf{C_{\H}}\mathbf{C_{\H}}^T & \mathbf{B} \\ \mathbf{B}^T & 0 \end{pmatrix},
\end{equation}
where $\boldsymbol{\ell} \in \R^N$ solves $\mathbf{C}_{\H}\boldsymbol{\ell} = \mathbf{B}$, and the error estimate
follows from (\ref{item:th:HLU-ii}).
\end{proof}
\section{Numerical Examples}
\label{sec:numerics}
In this section, we present some numerical examples in dimension $d = 3$ to illustrate
the theoretical estimates derived in the previous sections.
Further numerical examples about $\H$-matrix approximation of inverse BEM matrices and black-box
preconditioning with an $\H$-LU decomposition can be found, e.g., in
\cite{GrasedyckDissertation,Bebendorf05,Grasedyck05,BoermBuch,FMPBEM}, where the focus is, however,
on the weakly-singular integral operator.
With the choice $\eta=2$ for the admissibility parameter in \eqref{eq:admissibility},
the clustering is done by the standard geometric clustering algorithm,
i.e., by choosing axis-parallel bounding boxes of minimal volume and
splitting these bounding boxes in half across the largest face
until they are admissible or contain fewer degrees of freedom than $n_{\text{leaf}}$,
which we choose as $n_{\text{leaf}} = 50$ for our computations.
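For the reader's convenience, this clustering step can be sketched in a few lines (Python; the characteristic point per degree of freedom and the min-diameter form of the admissibility check are illustrative assumptions, and degenerate point distributions are not handled):
\begin{verbatim}
import numpy as np

def build_cluster_tree(points, idx=None, n_leaf=50):
    # Geometric clustering: halve the axis-parallel bounding box
    # across its largest face until at most n_leaf dofs remain.
    if idx is None:
        idx = np.arange(len(points))
    if len(idx) <= n_leaf:
        return {"idx": idx, "sons": []}
    lo, hi = points[idx].min(axis=0), points[idx].max(axis=0)
    ax = np.argmax(hi - lo)          # largest face of the box
    mid = 0.5 * (lo[ax] + hi[ax])
    left = idx[points[idx, ax] <= mid]
    right = idx[points[idx, ax] > mid]
    return {"idx": idx,
            "sons": [build_cluster_tree(points, left, n_leaf),
                     build_cluster_tree(points, right, n_leaf)]}

def admissible(lo1, hi1, lo2, hi2, eta=2.0):
    # min-diameter admissibility check for two bounding boxes
    diam = min(np.linalg.norm(hi1 - lo1), np.linalg.norm(hi2 - lo2))
    gap = np.maximum(0.0, np.maximum(lo1 - hi2, lo2 - hi1))
    return diam <= eta * np.linalg.norm(gap)
\end{verbatim}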
An approximation to the inverse Galerkin matrix is computed by using the C++-software package
BEM++ \cite{BEMpp}. The $\H$-matrices are assembled using ACA and the C++-library
AHMED \cite{AHMED}.
Our numerical experiments are performed for the Galerkin discretization of
the stabilized hyper-singular integral operator $\mathbf{W}^{\rm st}$
as described in Section~\ref{sec:stabGalerkin} with $\alpha = 1$.
The geometry is the crankshaft generated by NETGEN \cite{netgen} visualized in Figure~\ref{fig:crankshaft}.
We employ a fixed triangulation of the crankshaft consisting of $5,393$ nodes and $6,992$ elements.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.35\textwidth]{shaft.ps}
\caption{\footnotesize Crankshaft domain}
\label{fig:crankshaft}
\end{center}
\end{figure}
\begin{example}
{\rm
The numerical calculations are performed for the polynomial degree $p=2$, resulting in
$N = 13,986$ degrees of freedom. The largest block of $\mathbf{W}_{\H}$ has a size of $1,746^2$.
In Figure~\ref{fig:3DBEMp2r}, we compare the decrease of the upper bound
$\norm{\mathbf{I}-\mathbf{W}^{\rm st}\mathbf{W}_{\H}}_2$ of the relative error with the increase
of the block-rank. Figure~\ref{fig:3DBEMp2Mem} shows the storage requirement for the
computed $\H$-matrix approximation in MB. Storing the dense matrix would need $1,492$ MB.
We observe exponential convergence in the block rank, even with a convergence behavior
$\exp(-br^{1/2})$, which is faster than the rate of $\exp(-br^{1/4})$ guaranteed by Theorem~\ref{th:Happrox}.
Moreover, we also observe exponential convergence of the error compared to the increase of required memory.
\begin{figure}[hbt]
\begin{minipage}{.50\linewidth}
\centering
\psfrag{Error}[c][c]{%
\footnotesize Error}
\psfrag{Block rank r }[c][c]{%
\footnotesize Block rank r}
\psfrag{asd}[l][l]{\tiny $\exp(-2.2\, r^{1/2})$}
\psfrag{jkl}[l][l]{\tiny $\norm{I-\mathbf{W}^{\rm st}W_{\H}}_2$}
\includegraphics[width=0.80\textwidth]{CS12r13986.eps}
\caption{\footnotesize Exponential convergence in block rank}
\label{fig:3DBEMp2r}
\end{minipage}
\begin{minipage}{.50\linewidth}
\centering
\psfrag{Error}[c][c]{%
\footnotesize Error}
\psfrag{Memory (MB)}[c][c]{%
\footnotesize Memory (MB)}
\psfrag{asd}[l][l]{\tiny $\exp(-1.6\, r^{1/2})$}
\psfrag{jkl}[l][l]{\tiny $\norm{I-\mathbf{W}^{\rm st}W_{\H}}_2$}
\includegraphics[width=0.80\textwidth]{CS12Memory13986.eps}
\caption{\footnotesize Exponential convergence in memory required}
\label{fig:3DBEMp2Mem}
\end{minipage}
\end{figure}
\hbox{}\hfill\rule{0.8ex}{0.8ex}
}
\end{example}
\begin{example}
{\rm
We consider the case $p = 3$, which leads to $N= 31,466$ degrees of freedom.
The largest block of $\mathbf{W}_{\H}$ has a size of $3,933^2$.
Storing the dense matrix would need $7,608$ MB.
\begin{figure}[hbt]
\begin{minipage}{.50\linewidth}
\centering
\psfrag{Error}[c][c]{%
\footnotesize Error}
\psfrag{Block rank r }[c][c]{%
\footnotesize Block rank r}
\psfrag{asd}[l][l]{\tiny $\exp(-1.7\, r^{1/2})$}
\psfrag{jkl}[l][l]{\tiny $\norm{I-\mathbf{W}^{\rm st}W_{\H}}_2$}
\includegraphics[width=0.80\textwidth]{CS12r31466.eps}
\caption{\footnotesize Exponential convergence in block rank}
\label{fig:3DBEMp3r}
\end{minipage}
\begin{minipage}{.50\linewidth}
\centering
\psfrag{Error}[c][c]{%
\footnotesize Error}
\psfrag{Memory (MB)}[c][c]{%
\footnotesize Memory (MB)}
\psfrag{asd}[l][l]{\tiny $\exp(-0.8\, r^{1/2})$}
\psfrag{jkl}[l][l]{\tiny $\norm{I-\mathbf{W}^{\rm st} W_{\H}}_2$}
\includegraphics[width=0.80\textwidth]{CS12Memory31466.eps}
\caption{\footnotesize Exponential convergence in memory required}
\label{fig:3DBEMp3Mem}
\end{minipage}
\end{figure}
We observe in Figure~\ref{fig:3DBEMp3r} exponential convergence both in the block rank and in the memory.
}\hbox{}\hfill\rule{0.8ex}{0.8ex}
\end{example}
\nocite{*}
\section{Introduction}
Recent advances in modulation formats for optical communications showed that a Multi-Dimensional (MD) design increases the performance in both linear and nonlinear channels \cite{Reimer2016, Bendimerad2018}, compared to conventional formats such as Polarization-Division-Multiplexing Quadrature-Phase-Shift-Keying (PDM-QPSK). However, the implementation complexity of the soft-demapper increases significantly, and this limits the use of these formats in optical transmission systems that employ Forward Error Correction (FEC) codes under soft-decision decoding. To address this issue, we proposed in \cite{Bendimerad2018b} to use the Min-Sum (MS) algorithm as an ultra-low complexity soft-demapper for MD formats that are generated using binary arithmetic, referred to as Boolean Equations (BEs).
This algorithm operates on the Tanner graph, and can be used for formats that are generated using quasi-linear BEs with the property that each bit occurs only once and, therefore, can be expressed as a function of the other bits. In other words, observations at the demapper are not correlated, and extrinsic information can be easily identified \cite{Bendimerad2018b}. For modulation formats that are generated using nonlinear BEs \cite{Bendimerad2018}, where all observations at the demapper are correlated, it is not possible to extract the extrinsic information, and therefore, the MS algorithm cannot be used.\\
In this paper, we propose to use the Ordered Statistics Decoding (OSD) algorithm, first introduced in \cite{Fossorier1995}, for the demapping. This algorithm offers an ultra-low complexity solution with high performance, and can demap any MD format, as long as the latter is generated using Boolean equations. Four nonlinearity-tolerant modulation formats, based on Set-Partitioned PDM-QPSK, are studied over the Additive White Gaussian Noise (AWGN) channel, and the algorithm performance is assessed using the post-FEC Bit Error Rate (BER).
\section{Principle of Operations of the pOSD Soft-Demapper}
Compared to standard formats that have independent dimensions, like PDM-QPSK for example, for which the Log-Likelihood-Ratios (LLRs) at the soft-demapper are obtained simply by measuring the observation (1D-demapping), MD formats must be demapped in all dimensions jointly. As a consequence, using observations at the demapper for an MD format without considering the BEs results in suboptimal demapping; the loss in performance can be significant, depending on the modulation format. To avoid this loss, one solution is to use the so-called MaxLogMap (MLM) algorithm to compute LLRs. However, the received observation needs to be compared to all possible MD symbols of the modulation format.
This approach is very complex, and significantly increases the cost and power consumption of the hardware, which makes the implementation unfeasible.\\
Instead, we propose here to use the OSD algorithm \cite{Fossorier1995}, which we adapt for the demapping. The difference here is that we order only information bits, based on the absolute value of the observations. We therefore refer to it as the Partially Ordered Statistics Demapper (pOSD). Besides, the same generator matrix used in the mapper is used in the algorithm, which makes it simpler than the OSD, where a Gaussian elimination is needed to obtain the new generator matrix. Basically, we choose to correct $p$ Least Reliable Positions (LRPs) out of a total of $m$ information bits. For the remaining bit positions, observations are considered as LLRs. Similarly to the Chase algorithm \cite{Chase1972}, we generate codewords using the hard decision on the observations, all possible bit combinations of the LRPs, and the BEs. We then associate a metric to each candidate codeword and compute the LLR value for each LRP based on this metric.
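A minimal sketch of this procedure is given below (Python). The BPSK sign convention ($y>0$ corresponds to bit $0$), the correlation metric and the max-log LLR scaling are our illustrative assumptions; \texttt{encode} stands for the mapper's generator built from the BEs:
\begin{verbatim}
import numpy as np
from itertools import product

def posd_demap(y, m, p, encode):
    # y: real observations for the bits of one MD symbol
    # m, p: number of information bits / least reliable positions
    # encode: maps m information bits to the full codeword via the BEs
    llr = y[:m].copy()                   # observations kept as LLRs
    hard = (y[:m] < 0).astype(int)       # hard decision on info bits
    lrp = np.argsort(np.abs(y[:m]))[:p]  # p least reliable positions
    best0 = np.full(p, -np.inf)
    best1 = np.full(p, -np.inf)
    for bits in product((0, 1), repeat=p):   # 2^p candidates
        cand = hard.copy()
        cand[lrp] = bits
        cw = encode(cand)                # parity bits from the BEs
        metric = np.dot(y, 1 - 2 * cw)   # correlation metric
        for j in range(p):
            if bits[j] == 0:
                best0[j] = max(best0[j], metric)
            else:
                best1[j] = max(best1[j], metric)
    llr[lrp] = 0.5 * (best0 - best1)     # max-log LLRs for the LRPs
    return llr                           # one LLR per information bit
\end{verbatim}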
\section{Performance of the pOSD}
\subsection{Study description}
The four modulation formats that are studied here are based on set-partitioning PDM-QPSK in 8D, i.e., sets of 4 QPSK symbols each Gray-labeled. The first and second ones are Polarization-Balanced 4 Bits in 8D (PB-4B8D, $m=4$) and PB-5B8D ($m=5$) \cite{Bendimerad2018b}, and use linear BEs in the mapping process. The third one is referred to as PB-6B8D ($m=6$). This format is equivalent to PB-QPSK\cite{Reimer2016}; we use a different name to highlight the spectral efficiency. We propose to use BEs to generate this format. Each 8D symbol is labeled by an eight-bit vector $b_1...b_8$. The first six bits are taken from the sequence of information, the last two are parity bits and computed as:
\begin{equation}
\begin{split}
b_7 = \overline{b_2}\oplus b_3 \oplus b_5\oplus \left( b_1 \oplus b_2\right) \cdot \left( b_3 \oplus b_4 \oplus b_5 \oplus b_6\right) \oplus \left( b_3 \oplus b_4 \right) \cdot \left( b_5 \oplus b_6\right)\\
b_8 = \overline{b_1} \oplus b_4 \oplus b_6 \oplus \left( b_1 \oplus b_2 \right) \cdot \left( b_3\oplus b_4\oplus b_5\oplus b_6\right) \oplus \left( b_3 \oplus b_4 \right) \cdot \left( b_5 \oplus b_6 \right)
\end{split}
\end{equation}
where [$\oplus$] and [$\cdot$] denote binary additions and multiplications, respectively, and $\overline{b}$ denotes the logical negation of $b$. By doing so, we fix its labeling, recalling that, to the best of our knowledge, no such labeling of the PB-QPSK can be found in the literature. For the complete approach on how to generate the format, please refer to \cite{Bendimerad2018}. The fourth modulation format considered is the Polarization-Alternating 7B8D (PA-7B8D, $m=7$): seven bits are taken from the binary information sequence, and one parity bit is computed from a BE, as reported in \cite{Bendimerad2018}.
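For illustration, Eq.~(1) translates directly into the following routine, which can also serve as the \texttt{encode} callback of the pOSD sketch in Section 2:
\begin{verbatim}
import numpy as np

def pb6b8d_parity(b):
    # Parity bits b7, b8 of PB-6B8D from Eq. (1);
    # ^ is XOR, & is AND, 1 ^ x is logical negation.
    b1, b2, b3, b4, b5, b6 = b
    common = (((b1 ^ b2) & (b3 ^ b4 ^ b5 ^ b6))
              ^ ((b3 ^ b4) & (b5 ^ b6)))
    b7 = 1 ^ b2 ^ b3 ^ b5 ^ common
    b8 = 1 ^ b1 ^ b4 ^ b6 ^ common
    return b7, b8

def pb6b8d_encode(info):
    # full 8-bit label b1...b8 of one 8D symbol
    return np.array(list(info) + list(pb6b8d_parity(info)))
\end{verbatim}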
In order to assess the performance of the proposed soft-demapper for each modulation format, we simulate each format over the AWGN channel. The BER is calculated using a Monte Carlo loop at the output of a Low-Density Parity-Check (LDPC) code decoder. We use an LDPC code of length 18K and an overhead of 20\%. As a decoder, we use an MS algorithm with 20 iterations.
\subsection{Linear Channel Performance}
Figure \ref{fig:linCh} shows the linear channel performance of PB-6B8D and PA-7B8D for several soft-demapping techniques. For simplicity, we compare the curves at a BER of $3\cdot 10^{-4}$, assuming the same relative differences for lower BER values. The curve with black circles represents the optimal post-FEC BER (lower bound), which results from using the MLM algorithm as soft-demapper. The red curve represents the suboptimal post-FEC BER (upper bound), which results from considering observations as LLRs (1D-demapper).
\begin{figure}[htbp]
\centering
\includegraphics[width=17cm]{LinearChannelPerf.pdf}
\caption{\label{fig:linCh}AWGN channel performance of (a) PB-6B8D and (b) PA-7B8D soft-demapping.}
\end{figure}
The difference between the upper and lower bounds highlights the loss in terms of SNR. Specifically, losses of 1.63~dB (Fig.\ref{fig:linCh}.a) and 0.63~dB (Fig.\ref{fig:linCh}.b) can be seen for PB-6B8D and PA-7B8D, respectively. The two formats experience different losses because for PB-6B8D, 6 information bits must be processed using 2 BEs, while for PA-7B8D, 6 information bits are processed from only one BE. In other words, using observations as LLRs is closer to the optimal performance for PA-7B8D than for PB-6B8D. Notice that for PA-7B8D, bit 7 does not appear in the equation \cite{Bendimerad2018}, so the observation relative to bit 7 is already an optimal LLR, as for PDM-QPSK.
It can be seen in Fig.\ref{fig:linCh} that the pOSD algorithm offers a series of alternatives to trade performance against complexity, and an optimal soft-demapping performance is achieved by processing $p=4$ LRPs for PB-6B8D (Fig.\ref{fig:linCh}.a), and $p=3$ LRPs for PA-7B8D (Fig.\ref{fig:linCh}.b). Decreasing the number of processed LRPs results in lower performance as well as lower complexity. Notice that results of PB-4B8D and PB-5B8D are not shown for lack of space. We report $p=4$ for both formats to ensure optimal performance. Finally, these relative results were validated using the achievable-rate metric \cite{Bocherer2017}, in order to ensure that post-FEC BER results do not depend on a specific code.
\subsection{Complexity assessment}
We consider only the optimal version of the pOSD algorithm, namely $p=3$ LRPs for PA-7B8D and $p=4$ LRPs for PB-4B8D, PB-5B8D and PB-6B8D. The complexity assessment further depends on the sorting algorithm; we consider the merge sort algorithm \cite{Knuth1998}. The complexity of this algorithm depends on a random process, as observations at the demapper input might be already partially or completely sorted. We therefore distinguish the best and worst case scenarios for comparisons.
\begin{table}[htb]
\centering \caption{Complexity assessment of the pOSD soft-demapper}
\begin{tabular}{cccc|ccc|ccc|ccc}
\hline
\hline
& \multicolumn{3}{c|}{PB-4B8D} & \multicolumn{3}{c|}{PB-5B8D} & \multicolumn{3}{c|}{PB-6B8D} & \multicolumn{3}{c}{PA-7B8D} \\
\cline{2-13}
& MLM & MS & pOSD & MLM & MS & pOSD & MLM & MS & pOSD & MLM & MS & pOSD \\
\cline{1-13}
Logical op. & 260 & 210 & 304 & 654 & 71 & 304 & 1542 & $\times$ & 528 & 3078 & $\times$ & 160 \\
Additions & 452 & 40 & 8 & 1125 & 21 & 8 & 2694 & $\times$ & 9 & 5382 & $\times$ & 4 \\
Comparisons & 56 & 56 & 59-62 & 150 & 18 & 60-66 & 372 & $\times$ & 63-67 & 756 & $\times$ & 28-32 \\
\# LUTs & 4 & - & - & 5 & - & - & 6 & $\times$ & - & 7 & $\times$ & - \\
LUT size & 2$\times$8 & - & - & 2$\times$16 & - & - & 2$\times$32 & $\times$ & - & 2$\times$64 & $\times$ & - \\
\hline
\hline
\end{tabular}
\label{table:comp}
\end{table}
Table \ref{table:comp} shows the type and number of operations needed for the MLM, the MS \cite{Bendimerad2018b} and the pOSD algorithms. The order of operations matters as well, as the cost of logical operations is lower than that of real additions, which in turn have a lower cost than real comparisons. We recall that the MS can only be applied to PB-4B8D and PB-5B8D, because the other two formats are generated using nonlinear BEs. It can be observed that the pOSD outperforms the MLM, as the number of operations of every type is lowered by several orders of magnitude compared to the MLM. We also observe that the pOSD has similar complexity to the MS, with a slightly better performance of the pOSD for PB-4B8D and PB-5B8D, as the MS exhibits some losses compared to the MLM \cite{Bendimerad2018b}. Furthermore and interestingly, the pOSD soft-demapper is less complex for PA-7B8D than for all the others, although the others have a lower spectral efficiency. This can be understood from the fact that during the bit correction process, only one BE is used for this format while several are used for the others. Finally, it is important to note that the pOSD algorithm does not require any use of Look-Up Tables (LUTs), which introduce a high implementation complexity \cite{Bendimerad2018b}.
\section{Conclusion}
In this paper, we proposed the partially ordered statistics demapping algorithm as an ultra-low complexity and low-cost soft-demapper for MD formats which are generated using Boolean equations. We showed that this algorithm does not have any performance loss compared to the MLM soft-demapper, while drastically reducing the implementation complexity. As such, the hardware cost and power consumption are significantly decreased, removing the necessity of LUTs. We also presented the Boolean equations that are used to generate the PB-6B8D, and are necessary to allow the use of the pOSD soft-demapper solution. This algorithm makes feasible the implementation of soft-demappers for formats such as PB-6B8D and PA-7B8D. This solution is appealing as a unified soft-demapper for the whole MD format series. It is finally important to note that this algorithm can be applied to any MD format that uses Boolean equations in the bit-to-symbol mapping process, even for nonlinear equations.
\section{Introduction.}
Coulomb effects in several different types of
three-terminal devices consisting of an island connected
to external leads by two
weak-link contacts, and capacitively coupled to an
additional gate potential, have been extensively studied during last
years. The systems with a normal-metal island and leads
were studied theoretically both in the tunnel-junction limit \cite{MG1}
and in the case of a quantum point contact with almost perfect transmission
\cite{M}. The theory of charge-parity effects and Coulomb modulation of
the Josephson current was investigated in detail in \cite{GM}. All the
above-mentioned systems at present are realized experimentally.
Recently it was shown to be possible to produce quantum point contact
between two superconductors via a normal-conductive region made of
two-dimensional electron gas (2DEG) \cite{jap}; smeared step-wise
behaviour of the critical current was observed, in qualitative agreement
with predictions \cite{BH} for the superconductive quantum contact with
a few conduction channels of high transmittivity.
An observation of a non-sinusoidal current-phase relation in
superconducting mechanically-controllable break junctions has been reported
in \cite{K}, again in agreement with \cite{BH}.
Another interesting experimental achievement
was reported in \cite{kasumov}, where an S-N-S contact with a size comparable
to the de Broglie wavelength in the N region made of BiPb was realized
and nonmonotonic behaviour of the critical current with the thickness of
the normal region was found. This remarkable development of technology points
to the possibility, in principle, of making a system of a small superconductive (SC)
island connected to the superconductive leads by two quantum point contacts
(QPCs).
In such a system macroscopic quantum effects due to competition between
Josephson coupling energy and Coulomb (charging) energy could be realized
together with quantization (due to the small number of conductive channels)
of the Josephson critical current.
In the present paper we develop
a theory for an extreme case of such a system, namely, for the case of
two almost ballistic one-channel QPCs connecting a small SC island
with two SC leads. We consider the limit of the characteristic
charging energy much smaller than the superconducting gap, $E_C\ll\Delta$,
and, therefore, the Coulomb effects are small.
We derive the dependences of the average Josephson current
across the system, and its fluctuations (noise power) as functions of the SC
phase difference between the leads $\alpha$, and of the electric gate
potential $V_g$.
The Coulomb effects reveal themselves at phase differences
$\alpha$ close to $\pi$, when the two lowest states are almost
degenerate.
We show that such a system realizes a tunable
quantum two-level system (pseudo-spin 1/2) which may be
useful for the realization of quantum computers
(see, e.g., \cite{kitaev,devinch,schoen,qaverin}).
The paper is organized as follows. We start with considering a single
QPC connecting a superconducting island to a single lead (Section II).
We find the oscillations of the effective capacitance on the island as a
function of the gate potential (in some analogy with Matveev's
results \cite{M} for a normal QPC). Depending on the backscattering
probability in the contact, it may be described either in adiabatic
or in diabatic approximation. We find the condition for the
diabatic-adiabatic crossover. Then in Section III we formulate a
simple model for the double-contact system in the adiabatic approximation.
We replace the full many-body problem by a quantum-mechanical problem
for the dynamics of the SC phase on the middle island.
In Sec.IV we calculate average Josephson current through the
system as a function of $\alpha$ and $V_g$, with a particular emphasis
on the case of the phase difference $\alpha$ close to $\pi$ (when our
effective two-level system is almost degenerate). Sec.V is
devoted to the analysis of the Josephson current noise; we calculate
integrated intensity $S_0$ of the ``zero''-frequency noise (an analogue of the
noise calculated in \cite{MR1,AI,Gordei} for a single superconductive QPC)
as well as finite-frequency noise $S_{\omega}$ due to transitions between the
two almost-degenerate levels. Finally, we present our conclusions in
Sec.VI.
\section{Adiabatic-diabatic crossover in a single superconducting
quantum point contact.}
Consider a small superconducting island connected to an external
superconducting lead by a one-channel nearly ballistic quantum point
contact \cite{BH,B}.
The electric potential of the grain may be adjusted via
a gate terminal (fig.~1a).
Following \cite{BH} we assume that the contact
is much wider than the Fermi wavelength (so that the transport through
the constriction may be treated adiabatically), but much smaller than
the coherence length $\xi_0\equiv\hbar v_F/\pi\Delta$ (where $v_F$ is the
Fermi velocity, $\Delta$ is the superconducting gap).
\begin{figure}
\centerline{\epsffile{fig1a.eps}\hskip 3cm \epsffile{fig1b.eps}}
\caption{(a) Single QPC. The system consists of a SC grain
connected to a SC lead via a QPC. A gate terminal is used to control
the electric potential of the grain.
(b) Double-contact S-S-S system. The second terminal is added to the
single-QPC setup.}
\label{fig1}
\end{figure}
Our assumption of low temperature is that the average number of
one-electron excitations on the island is much less
than one. Then they cannot contribute to the total charge of the
grain and we may restrict our Coulomb blockade problem to the evolution
of the superconducting phase only. The condition of low temperature
is then $T < \Delta / \log (V \nu(0) \Delta)$, where
$V$ is the volume of the grain, $\nu(0)$ is the density of electron
states at the Fermi level.
We neglect phase fluctuations in the bulk of the island and
describe the whole island by a single superconducting phase $\chi$.
At a fixed value of the phase on the island, the spectrum of the
junction consists of the two Andreev states
localized on the junction and the continuum spectrum
above the gap $\Delta$ \cite{B} (fig.~2).
The energies of the Andreev states lie
below the gap:
\begin{equation}
E(\chi)=\pm\Delta\sqrt{1-t\sin^2(\chi/2)},
\label{Andreev}
\end{equation}
where $\chi$ is the phase difference at the contact,
$t$ is the transmission coefficient.
\begin{figure}
\centerline{\epsffile{fig2.eps}}
\caption{Single-contact energy spectrum.
The spectrum consists of the continuum of delocalized states
and the two Andreev (subgap) states. Dashed lines denote Andreev
states in the absence of backscattering (diabatic terms). Solid lines
are the states split by backscattering (adiabatic terms).}
\label{fig2}
\end{figure}
At $t=1$, the spectrum of Andreev states (\ref{Andreev})
has a level crossing point at $\chi=\pi$.
At this point, the left and right Andreev states have equal
energies, but in the absence of backscattering ($t=1$) the transitions
between them are impossible. Therefore, we expect that an ideal ballistic
contact cannot adiabatically follow the ground state
as the phase $\chi$ changes,
but remains on the same left or right Andreev state as it passes
the level-crossing point $\chi=\pi$.
We borrow the terminology from the theory of
atomic collisions \cite{lichten} and call the
(crossing) Andreev levels at $t=1$
diabatic terms (dashed lines in fig.~2), and the split
levels --- adiabatic terms (solid lines in fig.~2).
Instead of transmission coefficient $t$, it will be more convenient
to speak of the reflection coefficient $r=1-t$. At $r=0$,
the contact is described by diabatic terms. As $r$ increases,
the transitions occur between the terms, and at sufficiently large
$r$ the system will mostly adiabatically follow the split
Andreev levels. In this section we study adiabatic-diabatic
crossover and find the crossover scale for the reflection
coefficient $r$.
We assume that the reflection probability $r\ll 1$
(almost unity transmission) and that the charging energy $E_C\ll \Delta $
(the charging energy is defined by $E_C=(2e)^2/C$).
The latter assumption appears natural because, as in tunnel
junctions \cite{LO}, we expect
that the capacitance $C$ of the grain has an additional contribution from the
capacitance of the point contact. This capacitance is of order
$\Delta/e^2$. A more detailed discussion of this phenomenon will be
given elsewhere. At the moment we just mention that this contribution
to the capacitance leads to the inequality $E_C \leq \Delta$.
To probe the degree of adiabaticity, we study the periodic
dependence of the ground state energy $E_0$ on the gate voltage.
Because of the weakness of charging effects, this dependence will
be sinusoidal:
\begin{equation}
E_0(V_g)=\varepsilon \cos(2\pi N)
\label{sinus1}
\end{equation}
(where $N=V_g C / 2e$ is the dimensionless voltage),
and we are interested in
the amplitude $\varepsilon$
of these oscillations. The physical meaning of this
periodicity is the oscillations of the induced charge on the
grain --- it follows immediately from the relation
\begin{equation}
\delta Q = {C\over 2e} {\partial E_0\over \partial N}.
\end{equation}
There is a simple physical explanation of the sinusoidal
dependence (\ref{sinus1}). The ground-state energy modulation is
determined by phase-slip processes in the contact. Such processes
are phase tunneling events with phase changing by $\pm 2\pi$.
While the magnitudes of the clockwise and counter-clockwise
tunneling amplitudes are the same, their phases
are $\pm 2\pi N$. This results in the expression (\ref{sinus1}).
Higher-order tunneling processes would give rise to higher-order harmonics
in the periodic $N$-dependence. This argument shows that the
amplitude of oscillations $\varepsilon$ coincides with the
phase-tunneling amplitude and, therefore, provides
a good measure of adiabaticity in the phase dynamics.
Under assumption $E_C\ll\Delta$, we may describe
the contact by the dynamics of the phase on the grain and thus
reduce the problem to a single-particle quantum mechanics.
Since we restrict our attention to low-lying
excitations, it is only necessary to include the two Andreev
levels on the junction.
The potential term is the Josephson energy of the Andreev
levels, the kinetic term is the charging energy.
After a simple computation of the backscattering matrix
elements (the off-diagonal entries in the potential term),
we arrive at the following Hamiltonian:
\begin{equation}
H=H(\chi)+{1\over 2}E_C(\pi_\chi - N)^2
\label{Ham3}
\end{equation}
where
\begin{equation}
H(\chi)=\Delta
\pmatrix{-\cos{\chi\over2} & r^{1/2}\sin{\chi\over2} \cr
r^{1/2}\sin{\chi\over2} & \cos{\chi\over2} }.
\label{Ham4}
\end{equation}
Here $\chi$ is the phase difference across the contact,
$r$ is the reflection coefficient. Obviously, the eigenvalues
of $H(\chi)$ reproduce the result (\ref{Andreev}).
The number of Cooper pairs at the grain $\pi_\chi$ is the momentum
conjugate to $\chi$, $[\chi,\pi_\chi]=i$. Notice that $\chi$ takes
values on the circle $\chi=\chi+2\pi$, and, accordingly, $\pi_\chi$
is quantized to take integer values. We may also write
$\pi_\chi=-i\partial/\partial\chi$.
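The adiabatic terms of (\ref{Ham4}) are easily checked numerically. The following sketch (Python, with $\Delta=1$ and an illustrative $r$) reproduces the levels $\pm\Delta\sqrt{1-t\sin^2(\chi/2)}$, $t=1-r$, and the avoided-crossing splitting $2\sqrt{r}\,\Delta$ at $\chi=\pi$:
\begin{verbatim}
import numpy as np

def andreev_terms(chi, r, Delta=1.0):
    # eigenvalues of the 2x2 matrix H(chi) above on a phase grid
    E = np.empty((len(chi), 2))
    for i, x in enumerate(chi):
        H = Delta * np.array(
            [[-np.cos(x / 2), np.sqrt(r) * np.sin(x / 2)],
             [np.sqrt(r) * np.sin(x / 2), np.cos(x / 2)]])
        E[i] = np.linalg.eigvalsh(H)
    return E

chi = np.linspace(0, 2 * np.pi, 401)
E = andreev_terms(chi, r=0.05)
# gap at chi = pi (grid point 200) equals 2 sqrt(r) Delta
assert np.isclose(E[200, 1] - E[200, 0], 2 * np.sqrt(0.05))
\end{verbatim}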
This Hamiltonian loses its validity at the top of the upper band at
$\chi=2\pi n$, where the upper Andreev state mixes with the continuous
spectrum (fig.~2). However, the probability of the phase $\chi$
to reach the top of the upper band of $H(\chi)$ is exponentially
small at $E_C\ll \Delta$ (smaller than the tunneling probability).
The adiabatic-diabatic crossover is determined by the properties
of the system near the minimal-gap point $\chi=\pi$. Therefore,
we may neglect the transitions to continuous spectrum
at $\chi=2\pi n$.
At the same time, we must disregard tunneling processes
via the top of the upper Andreev band (next-nearest-neighbor
tunneling), which are present in the Hamiltonian (\ref{Ham3})-(\ref{Ham4})
but not in the original system. This next-nearest-neighbor tunneling
is an artifact of our model and is beyond the precision of our
approximation.
There are two opposite limits of the problem:
small and ``large'' reflection.
At zero reflection, the Hamiltonian splits into
lower and upper components. Within each component the potential
is periodic with the period $4\pi$.
As explained above, we must neglect the next-nearest-neighbor
tunneling via the top of the bands. Therefore, the potential
minima of $H(\chi)$ are disconnected and
cannot tunnel to each other, $\varepsilon=0$.
The opposite limit is the case of ``large'' reflection (the
precise meaning of ``large reflection'' consistent with $r\ll 1$ will be
clarified below). In this limit, the gap opens in the spectrum
of Andreev states, and the system adiabatically follows the
lower state. We can replace the two-level Hamiltonian $H(\chi)$
by its lowest eigenvalue and arrive at the quantum-mechanical problem of
a particle in a periodic potential. The quasiclassical limit of
this problem is solved in the textbook \cite{LL}.
In our notation the answer reads as follows:
\begin{equation}
\varepsilon_{ad}={\rm const} \sqrt{E_C \Delta}
\exp(-S_{cl}),
\end{equation}
where
\begin{equation}
S_{cl}=B_1 \sqrt{\Delta\over E_C}-{1\over 4}\log {\Delta\over E_C}
+ O(1)
\end{equation}
is the classical action connecting two nearest
minima (or more precisely the two return points).
The numerical constant $B_1$ is of order one
(at $r\to 0$, $B_1=4.69 + 1.41 r\log r +\dots $).
To study how the adiabaticity is destroyed it is useful to introduce
the dimensionless ``coherence factor'' $f(r)$ defined by
\begin{equation}
\varepsilon=f(r) \varepsilon_{ad},
\end{equation}
where $\varepsilon_{ad}$ is the amplitude of oscillations
of the ground-state energy
derived in the adiabatic approximation (with only the lowest Andreev
state included).
We see that $f(0)=0$, $f(r\gg r_{ad})=1$.
The crossover scale $r_{ad}$ can be derived by computing
the corrections to $f(r)$ in these two limits.
First consider the limit of weak backscattering ($r\ll r_{ad}$).
In this limit we take the wavefunction to be the ground state
of the Hamiltonian with zero $r$ (at a given wavevector $N$),
and then compute the first-order correction in $r^{1/2}$ to the
energy. The wavefunction is of ``tight-binding'' type and is
generated by the ``ground-state'' wavefunctions $\Psi_i$ localized
in the potential minima (diabatic terms). The components of the
two-dimensional vectors $\Psi_i$ alternate:
\begin{equation}
\Psi_i=\pmatrix{\Psi_i(\chi) \cr 0},
\qquad
\Psi_{i+1}=\pmatrix{0 \cr \Psi_{i+1}(\chi)}.
\end{equation}
Then we find
\begin{equation}
\varepsilon = 2 \langle \Psi_i | H_{12}(\chi) | \Psi_{i+1} \rangle
= 2 r^{1/2} \Delta \int d\chi \,
\Psi_i^*(\chi) \Psi_{i+1}(\chi) \sin{\chi\over2}
\label{diabatics}
\end{equation}
(We assume the wavefunctions $\Psi_i$ to be normalized).
It is important that
$\Psi_i$ and $\Psi_{i+1}$ are wavefunctions for different potentials
($-\Delta_0\cos(\chi/2)$ and $\Delta_0\cos(\chi/2)$) and
the overlap integral (\ref{diabatics})
has a saddle point at the minimal-gap point
$\chi=\pi$, and it reduces the effective region of integration
to $|\chi-\pi| \le (E_C/\Delta)^{1/4}$.
The normalization of the quasiclassical tail of the
wavefunctions $\Psi_i(\chi)$ yields
\begin{equation}
\Psi(\chi=\pi)= \exp(-S_{cl}(\chi=\pi))
\end{equation}
(up to a numerical factor independent of $E_C/\Delta$).
Thus we obtain
\begin{equation}
\varepsilon\sim r^{1/2} \Delta \left({E_C\over\Delta}\right)^{1/4}
\exp(-S_{cl}),
\end{equation}
i.e., in terms of the ``coherence factor'' $f(r)$,
\begin{equation}
f(r) \sim r^{1/2} \left({\Delta\over E_C}\right)^{1/4}.
\end{equation}
The physical meaning of the integral (\ref{diabatics})
is the summation over all paths shown in fig.~3a.
\begin{figure}
\centerline{\epsffile{fig3a.eps}\hskip 3cm \epsffile{fig3b.eps}}
\caption{Tunneling paths in the diabatic (a) and
adiabatic (b) limits.
These diagrams represent the lowest-order corrections to the phase
tunneling amplitudes in the diabatic and adiabatic limits respectively.}
\label{fig3}
\end{figure}
The above calculation shows that the crossover scale to
adiabatic behavior is
\begin{equation}
r_{ad} \sim \left({E_C\over\Delta}\right)^{1/2}.
\label{Rad}
\end{equation}
In fact, we neglected the effect of change
in the classical action $S_{cl}$
due to opening a gap; this effect
is estimated to be of order
\begin{equation}
\delta S_{cl} \sim \sqrt{\Delta\over E_C} r \log r,
\end{equation}
i.e., it is a higher-order effect than
the change in $f(r)$ proportional to $r^{1/2}$.
Notice that the characteristic scale of this change in the
classical action is again $r_{ad}\sim \sqrt{E_C/\Delta}$
(corresponding to $\delta S_{cl}\sim 1$).
We may alternatively find the crossover scale $r_{ad}$
by computing the lowest order correction to the
``coherence factor'' $f(r)$ in the adiabatic limit.
In this limit the Hamiltonian (\ref{Ham3},\ref{Ham4}) may
be rewritten in adiabatic terms (the voltage $N$ is
for simplicity moved to the boundary condition
$\Psi(\chi+2\pi)=e^{2i\pi N} \Psi(\chi)$
by a gauge transformation)
as
\begin{equation}
H=-{E_C\over 2}({\partial\over\partial\chi})^2
+D(\chi) -{E_C\over2}[G(\chi){\partial\over\partial\chi}
+ {\partial\over\partial\chi} G(\chi)] -{E_C\over2}
G^2(\chi),
\label{Ham5}
\end{equation}
where
\begin{equation}
D(\chi)=\pmatrix{E_1(\chi) & 0 \cr 0 & E_2(\chi)}
\end{equation}
is the diagonalized form of the matrix (\ref{Ham4}), and
\begin{equation}
G(\chi)=\pmatrix{0 & g(\chi) \cr -g(\chi) & 0}, \quad
g(\chi)=\langle 0 | {\partial\over\partial\chi} | 1 \rangle,
\end{equation}
and $| 0\rangle$ and $|1\rangle$ are the eigenvectors of the
matrix (\ref{Ham4}).
The last term in the Hamiltonian (\ref{Ham5}) can be shown to give
smaller corrections than the term of the first order in $G(\chi)$.
A careful perturbation theory in $g(\chi)$ gives in second order
\begin{equation}
1-f(r) \sim \int_{\chi_1<\chi_2} e^{S_1(\chi_1, \chi_2) -
S_2(\chi_1, \chi_2)} g(\chi_1) g(\chi_2) \,
d\chi_1\, d\chi_2,
\label{integral1}
\end{equation}
where $S_{1,2} (\chi_1, \chi_2)$ are the classical actions
along the lower and the upper adiabatic branches between the
points $\chi_1$ and $\chi_2$.
This integral corresponds to summation over all tunneling paths
shown in fig.~3b.
The function $g(\chi)$ for
the given matrix $H(\chi)$ is a Lorentzian peak at $\chi=\pi$
of height $r^{-1/2}$ and width $r^{1/2}$. Putting everything
together, the integral (\ref{integral1}) is calculated to
be
\begin{equation}
1-f(r) \sim {1\over r} \sqrt{E_C\over\Delta}.
\end{equation}
This asymptotics agrees with the previously found crossover
scale (\ref{Rad}).
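This behaviour of $g(\chi)$ is straightforward to verify numerically. The finite-difference sketch below (Python) fixes the gauge of the eigenvectors by the sign of their dominant component and reproduces, at $\chi=\pi$, a peak of order $r^{-1/2}$ (numerically $1/(4\sqrt{r})$ in magnitude for this matrix):
\begin{verbatim}
import numpy as np

def g_of_chi(chi, r, d=1e-6):
    # finite-difference estimate of g(chi) = <0| d/dchi |1>
    def states(x):
        H = np.array(
            [[-np.cos(x / 2), np.sqrt(r) * np.sin(x / 2)],
             [np.sqrt(r) * np.sin(x / 2), np.cos(x / 2)]])
        _, v = np.linalg.eigh(H)
        # fix the gauge: dominant component of each column positive
        v *= np.sign(v[np.argmax(np.abs(v), axis=0), [0, 1]])
        return v
    v0, v1 = states(chi), states(chi + d)
    return v0[:, 0] @ ((v1[:, 1] - v0[:, 1]) / d)

print(g_of_chi(np.pi, r=0.01))   # magnitude 1/(4 sqrt(r)) = 2.5
\end{verbatim}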
To summarize the results of this section, the characteristic
scale for adiabatic-diabatic crossover in a nearly-ballistic
single contact is found to be $r_{ad}\sim \sqrt{E_C/\Delta}$.
The phase tunneling amplitude is proportional to the gate-voltage
modulation of the effective capacitance of the island, and thus
can be directly measured.
At low reflection
coefficients, these oscillations are proportional to
$\sqrt{r}$, as in the normal 1-channel QPC \cite{M}.
\section{Adiabatic approximation of a double-junction system.}
We now turn to the case of a double-junction system (fig.~1b).
As before, we assume that the reflection probabilities in both contacts
are small, $r_i\ll 1$,
that the charging energy $E_C\ll \Delta $
and that the temperature is sufficiently low to prohibit
single-electron excitations on the grain. To adjust the electrostatic
potential of the grain we again use a gate terminal; $N=V_g C/2e$
denotes the dimensionless gate voltage, as before.
For the moment, to simplify the discussion we assume that the reflection
coefficients in the contacts are greater than the
crossover scale $r_{ad}$ found in the previous section and, therefore,
we may consider only the lower adiabatic branch of the Andreev states.
In fact, the results may be extended further to the case $r_i<r_{ad}$
by using appropriate ``coherence factors'' $f(r)$, similar
to those in the previous section.
We set the superconducting phase on one of the leads to be zero;
the phase on the other lead $\alpha$ is assumed to be fixed externally.
Then the total Josephson energy of the two contacts is (fig.~4):
\begin{equation}
U(\chi)=U_1(\chi)+U_2(\alpha-\chi),
\label{potential}
\end{equation}
where
\begin{equation}
U_i(\delta\phi)=-\Delta\sqrt{1-t_i\sin^2(\delta\phi/2)}
\label{groundstates}
\end{equation}
are the lower adiabatic Andreev terms in the two junctions.
\begin{figure}
\centerline{\epsffile{fig4.eps}}
\caption{Potential $U(\chi)$. At $\alpha\ne0$ it has two minima.
Finite backscattering in the contacts smoothes the summits of the
potential, but leaves the bottom of the wells unchanged.
}
\label{fig4}
\end{figure}
At $t_1=t_2=1$, the potential $U(\chi)$ obviously has two minima ---
at $\chi=\alpha/2$ and at $\chi=\alpha/2+\pi$ ---
and sharp summits at $\chi=\pi$ and $\chi=\pi+\alpha$ (fig.~4).
At small nonzero $r_i$, gaps open at the crossing points of Andreev
levels, which smoothes the summits of $U(\chi)$. Still, the bottom
of the potential remains practically unchanged.
The adiabatic Hamiltonian for the double
junction reads as follows:
\begin{equation}
H(\alpha,N)=U(\chi)+
{1\over 2}E_C(-i{\partial\over\partial\chi}-N)^2.
\label{Ham1}
\end{equation}
The potential term of the Hamiltonian is the sum of Josephson energies
of the contacts, the kinetic term is the Coulomb energy of the charge
at the grain.
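
As a quick numerical check of the two-minima structure of
(\ref{potential})--(\ref{groundstates}), the short sketch below evaluates
$U(\chi)$ on a grid and locates its local minima (Python; the values of
$\Delta$, $t_i$ and $\alpha$ are illustrative assumptions only):
\begin{verbatim}
import numpy as np

# U(chi) = U_1(chi) + U_2(alpha - chi) in units of Delta,
# with t_i slightly below 1 (nearly ballistic contacts, r_i = 1 - t_i)
Delta, t1, t2, alpha = 1.0, 0.98, 0.98, 2.0

def U_i(dphi, t):
    return -Delta * np.sqrt(1.0 - t * np.sin(dphi / 2.0) ** 2)

chi = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
U = U_i(chi, t1) + U_i(alpha - chi, t2)

# local minima on the circle (periodic neighbours)
mins = np.where((U < np.roll(U, 1)) & (U < np.roll(U, -1)))[0]
print(chi[mins])   # close to alpha/2 and alpha/2 + pi
print(U[mins])     # close to the minimum energies V_1, V_2 quoted below
print(-2 * Delta * np.cos(alpha / 4), -2 * Delta * np.sin(alpha / 4))
\end{verbatim}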
\section{Josephson current.}
The condition
$E_C\ll\Delta$ allows us to treat the Coulomb term in the Hamiltonian
perturbatively.
First, neglecting the Coulomb term, we obtain a classical
system on the circle in the potential (\ref{potential}) with
two minima.
The energies of the minima are
$V_1(\alpha)=-2\Delta |\cos(\alpha/4)|$ and $V_2(\alpha)=
-2\Delta |\sin(\alpha/4)|$ (see fig.~4).
To a very good precision, we may neglect backscattering in determining
the minima --- except near the point $\alpha=0$. Since all the Coulomb
effects occur near the resonance point $\alpha=\pi$, this approximation
is justified.
At zero temperature, our
classical system prefers
the lowest of the minima. Thus the energy of the S-S-S system in the
absence of the Coulomb term is given by
\begin{equation}
E(\alpha)=-2\Delta \cos(\alpha/4) \qquad
{\rm for} \quad -\pi<\alpha<\pi
\end{equation}
(see fig.~5). Differentiating this energy with respect to the phase
$\alpha$ gives the Josephson current
\begin{equation}
I(\alpha)=2e{\partial E(\alpha) \over \partial\alpha}=
\Delta \sin {\alpha\over 4} \qquad
{\rm for} \quad -\pi<\alpha<\pi
\end{equation}
(fig.~6). Notice that the current has large jumps at the points of
level crossing $\alpha=\pi+2\pi n$. Qualitatively this picture is
very similar to the case of a single S-S ballistic junction,
but the shape of the current-phase dependence $I(\alpha)$
is different.
\begin{figure}
\centerline{\epsffile{fig5.eps}}
\caption{Classical minimum of the potential $U(\chi)$ as a function of the
external phase difference $\alpha$.
Dotted line shows the quantum gap opened by the Coulomb term.}
\label{fig5}
\end{figure}
\begin{figure}
\centerline{\epsffile{fig6.eps}}
\caption{Josephson current as a function of the external phase difference
$\alpha$.
Dotted line shows smearing of the singularity
due to the Coulomb term.}
\label{fig6}
\end{figure}
If we assume a non-zero temperature $T\ll\Delta$, the occupation of the
upper minimum is exponentially small except in the vicinity
of the level-crossing point $|\alpha-\pi|\sim T/\Delta$.
Thus, the effect of the temperature results in the smearing of
the singularity in $I(\alpha)$ at $\alpha=\pi$.
Another source of level mixing near the singular point $\alpha=\pi$
is {\it quantum} fluctuations, i.e. the fluctuations arising
from the kinetic term in the Hamiltonian (\ref{Ham1}).
They result in nonzero amplitudes of tunneling through the two
potential barriers between the potential minima. Due to the shift
in the ``angular momentum'' by $N$, the wave functions in the two potential
wells acquire an additional factor $\exp(iN\chi)$. This results
in a relative phase of $2\pi N$ between the two tunneling
amplitudes. The net tunneling amplitude
(defining the level splitting) may be written as
\begin{equation}
H_{12}(N)\equiv\Delta\gamma(N)=\Delta (\gamma_1 e^{i\pi N}
+ \gamma_2 e^{-i\pi N}),
\label{H12}
\end{equation}
where $\gamma_1$ and $\gamma_2$ are the two amplitudes
of phase tunneling in the two different directions
(i.e. of phase slip processes in the two different contacts).
Below we assume that these amplitudes are computed at the
level-crossing point $\alpha=\pi$, where they are responsible
for level splitting.
The amplitudes $\gamma_1$ and $\gamma_2$ obey all asymptotics
derived in the previous section (except for numerical
factors). When the backscattering in the contacts $r\gg r_{ad}$,
they may be found in the quasiclassical approximation:
\begin{equation}
\gamma_{1,2}\sim\left({E_C\over\Delta}\right)^{1/4}
\exp(-B_2\sqrt{\Delta\over E_C})\ll 1,
\end{equation}
where $B_2\sim 1$ is determined by the classical action connecting
the two potential minima (at $r \ll 1$,
$B_2 \cong 1.45 + 2.20 r\log r + \dots$).
At $r\ll r_{ad}$, the tunneling amplitudes are
\begin{equation}
\gamma_{1,2}\sim r^{1/2}
\exp(-B_2\sqrt{\Delta\over E_C})
\label{gamma2}
\end{equation}
For the best observation of Coulomb oscillations, $\gamma_1$ and
$\gamma_2$ must be of the same order, but not very small.
In the ideal case $\gamma_1=\gamma_2=\gamma$ the total amplitude is
\begin{equation}
\gamma(N)=2\gamma\cos(\pi N).
\label{gammas}
\end{equation}
Although the periodic dependence (\ref{gammas}) has
a $4e$ period as a function of the ``external charge'' $Q_x = CV_g \equiv 2eN$,
the Josephson current and its fluctuations depend on
$|\gamma(N)|^2$ only (cf. eqs.(\ref{I},\ref{S}) below),
and their period is $2e$ as expected~\cite{GM}.
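
Since the observables depend on $|\gamma(N)|^2$ only, the stated $2e$
periodicity is immediate to verify numerically (a sketch with an
illustrative value of $\gamma$):
\begin{verbatim}
import numpy as np

gamma = 0.05                                   # illustrative amplitude
N = np.linspace(0.0, 2.0, 9)
amp2 = (2 * gamma * np.cos(np.pi * N)) ** 2    # |gamma(N)|^2
# invariant under N -> N + 1, i.e. Q_x -> Q_x + 2e
print(np.allclose(amp2, (2 * gamma * np.cos(np.pi * (N + 1))) ** 2))
\end{verbatim}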
The characteristic scale for the $r$-dependence
of $B_2$ is $\delta r \sim \sqrt{E_C/\Delta}$, therefore
for $\gamma_1$ and $\gamma_2$ to be of the same order,
the transparencies of the two contacts must differ by no more
than $|r_1-r_2|\le \sqrt{E_C/\Delta}$.
Here we should comment on the difference of our result
(\ref{H12})-(\ref{gamma2}) from the normal two-channel
system discussed in \cite{M}. In the normal
system the two tunneling amplitudes multiply, and the net
ground-state energy oscillations are proportional to $r\ln{r}$
at small $r$. In the superconducting system, the external leads
have different superconducting phases, and the tunneling
in the two contacts occurs at different values of the phase
on the grain. Therefore, the tunneling amplitudes add with
some phase factors and give the $\sqrt{r}$ asymptotics at $r\to 0$.
In fact, the oscillations in the superconducting system will
be proportional to $r$ (similarly to the normal system \cite{M}) in
a different limit --- at the phase difference $\alpha=0$,
when the potential $U(\chi)$ has a single minimum and a single barrier.
The hybridized energy levels in the vicinity of $\alpha=\pi$
are given by the eigenvalues of the $2\times 2$
Hamiltonian
\begin{equation}
H(\alpha,N)=\pmatrix{ V_1(\alpha) & H_{12} (N) \cr
H_{12} (N) & V_2(\alpha) \cr}.
\label{Ham2}
\end{equation}
Diagonalization gives the two energy levels:
\begin{equation}
E_{1,2}(\alpha,N)=-\Delta\left[ |\sin{\alpha\over 4}| +
|\cos{\alpha \over 4}| \pm \sqrt{\left( |\sin{\alpha\over 4}| -
|\cos{\alpha \over 4}| \right)^2 + \gamma^2 (N)} \right],
\label{energies}
\end{equation}
the off-diagonal matrix elements of the Hamiltonian open a gap
at the level-crossing point $\alpha=\pi$ (fig.~5).
This gap periodically
depends on the gate voltage $V_g$, and these oscillations comprise
the Coulomb effects in the S-S-S junction.
We can obtain the Josephson current by
differentiating the energy levels with respect to the phase $\alpha$.
The gap results in smearing the singularity in $I(\alpha)$ even at
zero temperature (fig.~6):
\begin{equation}
I(\alpha)=
{\Delta\over \sqrt2} \sin({\alpha-\pi\over 4})
\left[1-{\cos({\alpha-\pi\over 4}) \over
\sqrt{\sin^2({\alpha-\pi\over 4})
+ {1\over 2}\gamma^2(N)}} \right]
\qquad {\rm for} \quad \alpha\sim\pi.
\label{I}
\end{equation}
The width of the crossover at $\alpha=\pi$ depends periodically on $V_g$:
$|\alpha-\pi| \sim |\gamma(N)|$.
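
The level repulsion (\ref{energies}) and the gate-controlled smearing of
the current step (\ref{I}) are easy to illustrate numerically (a sketch;
we set $\Delta=1$ and the value of $\gamma$ is an illustrative
assumption):
\begin{verbatim}
import numpy as np

Delta, gamma0 = 1.0, 0.05                # illustrative values

def gam(N):                              # gamma(N) for identical contacts
    return 2.0 * gamma0 * np.cos(np.pi * N)

def levels(alpha, N):                    # eq. (energies)
    s, c = np.abs(np.sin(alpha / 4)), np.abs(np.cos(alpha / 4))
    root = np.sqrt((s - c) ** 2 + gam(N) ** 2)
    return -Delta * (s + c + root), -Delta * (s + c - root)

def current(alpha, N):                   # eq. (I), valid near alpha = pi
    d = (alpha - np.pi) / 4
    return (Delta / np.sqrt(2)) * np.sin(d) * (
        1 - np.cos(d) / np.sqrt(np.sin(d) ** 2 + 0.5 * gam(N) ** 2))

print(levels(np.pi, 0.0))            # gap 2*Delta*|gamma(N)| at alpha = pi
alpha = np.linspace(np.pi - 0.5, np.pi + 0.5, 10)  # grid avoids alpha = pi
print(current(alpha, 0.0))           # wide crossover: |gamma(N)| maximal
print(current(alpha, 0.5))           # gamma(N) ~ 0: sharp jump restored
\end{verbatim}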
In the above discussion we neglected the excited oscillator states.
The interlevel spacing for the excitations in the potential
wells is of order $\sqrt{\Delta E_C}\gg\Delta\gamma$.
Therefore the Coulomb effects have a much smaller energy
scale and the excited states do not participate
in mixing the ground states of the two potential wells.
At a nonzero temperature these Coulomb effects will compete
with the smearing by temperature so that
the width of the singularity at $\alpha=\pi$ is given at nonzero
temperature $T\ll\Delta$ by
$|\alpha-\pi| \sim \max(\gamma(N),T/\Delta)$.
Therefore, in order for Coulomb effects
to dominate the thermal
fluctuations, we must have $T\le\gamma\Delta$.
It is instructive to compare this picture with the case of
multi-channel tunnel S-S-S
junction (to distinguish from the results of \cite{GM} we should
remark that we consider the limit opposite to their assumption
$\Delta<E_C$).
If we develop a similar theory for tunnel Josephson junctions,
we find that the potentials (\ref{potential}), (\ref{groundstates})
are both sinusoidal, and, therefore, the total potential
(\ref{potential}) has only one minimum (versus two in the
nearly ballistic system). In the tunnel S-S-S system
the current-phase relation $I(\alpha)$ has a smearing at
$\alpha=\pi$ due to the difference between the critical currents
of the two Josephson contacts. The Coulomb effects
compete with this smearing and in order to win, the
charging energy $E_C$ must be greater than the difference
of the critical currents. In the tunnel system the corresponding
splitting $\gamma$ is linear in $E_C$ while in the nearly
ballistic system it is exponentially small. Otherwise,
Coulomb oscillations in $I(\alpha)$ will appear similar in these
two cases.
To summarize this section, we observed that the Coulomb
effects in the one-channel S-S-S junction smear the singularity
in the Josephson current $I(\alpha)$ at the critical
value $\alpha=(2n+1)\pi$. This smearing depends
periodically on the potential of the grain with the period $2e/C$
and is exponentially
small in the adiabatic parameter $E_C/\Delta\ll 1$. The smearing
is the result of mixing the two states in the potential minima
of the Josephson energy.
\section{Fluctuations of the Josephson current.}
In this section we compute the low-frequency spectrum
of the fluctuations of the Josephson current in our model.
We shall be interested in frequencies much less than the
oscillator energy scale $\sqrt{\Delta E_C}$,
thus we consider only transitions between the
eigenstates of the reduced ground-state Hamiltonian (\ref{Ham2}).
We also assume that the temperature is lower than
$\sqrt{\Delta E_C}$, then we may disregard the excited
oscillator states and the internal noise in the
contacts (discussed in \cite{MR1,AI,Gordei,MR3}). Obviously, under these
assumptions we can observe current fluctuations only
in the close vicinity of the resonance point $\alpha=\pm\pi$,
where the energies (\ref{energies}) of the two low-lying states
are close to each other.
We expect to observe two peaks in the noise spectrum ---
one at zero frequency (due to the thermal excitations above the
ground state), and the other at the transition frequency
$|E_1-E_2|$ (from the off-diagonal matrix elements of the
current operator). In this section we compute the integral weights
of these peaks and postpone the discussion of their width
(determined by dissipative processes) to a future publication.
Let us first discuss the zero-frequency peak. In our approximation
it is just the thermal noise of a two-level system. In the
vicinity of the resonance point $\alpha=\pi$ we can linearize
the spectrum $V_{1,2}(\alpha)$ and make an approximation that
one of the two states carries the current $I(\alpha,N)$,
and the other $-I(\alpha,N)$. The spectral weight of the noise is then
given by a simple formula:
\begin{equation}
S_0(\alpha,N,T)\equiv \langle I^2 \rangle - \langle I \rangle^2
={I^2(\alpha,N) \over \cosh^2 {E_1-E_2\over 2T}}.
\end{equation}
Substituting $I(\alpha,N)$ and $E_{1,2}(\alpha,N)$
from the previous section,
we obtain the noise intensity near the resonance:
\begin{equation}
S_0(\alpha,N,T)={\Delta^2\over 2}\,
{ \left({\alpha-\pi\over2\sqrt2}\right)^2
\over \left({\alpha-\pi\over2\sqrt2}\right)^2 + \gamma^2 (N)}
\cosh^{-2}\left({\Delta\over T}\sqrt{
\left({\alpha-\pi\over2\sqrt2}\right)^2 + \gamma^2 (N)}\right).
\label{S}
\end{equation}
For the effect of the Coulomb interaction to be observable, the
temperature must be smaller than the Coulomb gap: $T\le\gamma\Delta$.
At constant $T$ and $N$,
the noise decreases exponentially as $\alpha$ goes away
from its critical value $\alpha=\pi$, and at $\alpha=\pi$ the noise
is suppressed in the interval $|\alpha-\pi|<\gamma(N)$ (fig.~7).
Interplay between these two factors results in the strong dependence
of the peak value of the noise on the potential of the grain.
The peak value of the noise $\max_\alpha S(\alpha,N,T)$ is plotted
against $N$ in fig.~8. Most favorable is the case of identical
contacts, when $\gamma_1=\gamma_2=\gamma$ and, therefore,
$\gamma(N)=2\gamma\cos(\pi N)$. In this case,
when $\cos(\pi N)\ll T/\gamma\Delta$ (small
gap limit) the noise takes its maximal value $S\approx \Delta^2/2$.
In the opposite limit of large gap ($\cos(\pi N)\gg T/\gamma\Delta$)
the noise decreases exponentially: $S\approx \Delta^2 [{T\over\Delta
\gamma|\cos\pi N|} \exp (-4{\Delta\gamma|\cos\pi N| \over T})]$.
The noise has a sharp peak at the resonance point $\cos\pi N =0$,
where two levels on the grain with different electron numbers
have equal energies.
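
For orientation, eq.~(\ref{S}) is straightforward to evaluate (a sketch;
the values of $T$ and $\gamma$ are illustrative and chosen to satisfy
$T\le\gamma\Delta$):
\begin{verbatim}
import numpy as np

Delta, T, gamma0 = 1.0, 0.05, 0.05       # illustrative values

def S0(alpha, N):                        # eq. (S)
    g = 2.0 * gamma0 * np.cos(np.pi * N)
    x = (alpha - np.pi) / (2.0 * np.sqrt(2.0))
    e = np.sqrt(x ** 2 + g ** 2)
    return 0.5 * Delta**2 * x**2 / e**2 / np.cosh(Delta * e / T) ** 2

alpha = np.pi + np.linspace(-0.2, 0.2, 9)
print(S0(alpha, 0.0))   # suppressed at alpha = pi, decaying away from it
\end{verbatim}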
\begin{figure}
\centerline{\epsffile{fig7.eps}}
\caption{Zero frequency noise as a function of the phase $\alpha$.
It decays exponentially for $\alpha$ far from the resonance point
$\alpha=\pi$. At the very resonance point, the noise is suppressed,
because both of the two states carry nearly zero Josephson current.}
\label{fig7}
\end{figure}
\begin{figure}
\centerline{\epsffile{fig8.eps}}
\caption{Maximal value of
the noise versus the potential of the grain.
The period of the peaks corresponds to the period $2e$ of the
induced charge $Q=CV_g$. The width of the peaks depends on the
capacity of the grain.}
\label{fig8}
\end{figure}
Now we turn to the noise peak at the interlevel frequency
$\omega=|E_1-E_2|$. Since now $\omega$ can be large compared to $T$,
one needs to discern between different kinds of frequency-dependent
correlation functions, which can be measured as a noise intensity in
different experimental
situations \cite{lesovik}; here we mean by noise the Fourier spectrum of
the time-symmetric current-current correlation function.
In our approximation of a two-level
system such a noise is temperature independent, and
its weight is determined purely by the off-diagonal matrix
element:
\begin{equation}
S_{\omega}={1\over 2}\Big|\langle 1|I|2\rangle \Big|^2.
\end{equation}
A straightforward computation for the Hamiltonian (\ref{Ham2})
and $I=2e(\partial H/\partial\alpha)$ gives
(in the vicinity of $\alpha=\pi$):
\begin{equation}
\langle 1|I|2\rangle={\Delta^2\gamma(N) \over\omega}
(\cos{\alpha\over4}+\sin{\alpha\over4})
\end{equation}
and
\begin{equation}
S_\omega(\alpha,N)=\Delta^2\left({\Delta\gamma(N)\over\omega}\right)^2
\cos^2{\alpha-\pi\over4}.
\end{equation}
This result contrasts with the corresponding noise intensity
in the single quantum point contact (found in \cite{MR1,MR3,AI}).
In the single quantum point contact the corresponding
noise intensity $S_\omega$ is temperature-dependent, because
that system has four possible states (or, alternatively,
two fermion levels). In the case of the double junction
the system has only two states differing by the phase
on the grain, and the quantum fluctuations $S_\omega$
become temperature-independent.
\section{Conclusions}
We have developed a theory of Coulomb oscillations of the Josephson current
and its noise power via the S-S-S system with nearly ballistic quantum point
contacts. The period of Coulomb oscillations as a function of the
gate potential is $V^0_g = 2e/C$.
These oscillations arise from the quasiclassical tunneling of the
superconducting phase on the grain and are, therefore, exponentially
small in $\sqrt{E_C/\Delta}$ at $E_C\ll\Delta$.
In addition, we predict a crossover
from adiabatic to diabatic tunneling at the backscattering probability
$r_{ad}\sim \sqrt{E_C/\Delta}$. At backscattering below $r_{ad}$, the
amplitude $\varepsilon$ of the Coulomb oscillations is proportional to the
square root of the
smallest (of the two contacts) reflection probability $\sqrt{r_{min}}$.
This contrasts with the case of a normal double-contact system \cite{MF}
where $\varepsilon$ is proportional to the product $\sqrt{r_1 r_2}$.
The average Josephson current-phase relation $I(\alpha)$ is shown to
be strongly non-sinusoidal and roughly similar to the one known for a
single nearly ballistic QPC, in the sense that it contains sharp ``switching''
between positive and negative values of the current as the phase varies
via $\alpha = \pi$. The new feature of our system is that it is
possible to vary the width of the switching region $\delta\alpha$
by the electric
gate potential $V_g$; in the case of equal reflection probabilities
$r_1=r_2$ this electric modulation is especially pronounced,
$\delta\alpha \propto |\cos(\pi CV_g/2e)|$.
The noise spectrum of the supercurrent is found to consist mainly of
two peaks: the ``zero-frequency'' peak due to rare thermal excitations of the
upper level of the system, and another one centered around
the energy difference $\omega_{\alpha}$ between the two
levels. The widths of these peaks are determined by the inverse
lifetime $\tau$ of the two states of our TLS, which is due to electron-phonon
and electromagnetic couplings. Both these sources of level decay are
expected to be very weak in the system considered, but the corresponding
quantitative analysis is postponed for future studies, so we present
here only the results for the {\it frequency-integrated}
(over those narrow intervals $\sim 1/\tau$) noise power.
The S-S-S device with almost ballistic contacts is a new type of
system which may be used as a realization of an
artificial ``spin 1/2'' --- an elementary unit for quantum computations.
In comparison with
usual Josephson systems with tunnel junctions which
were proposed for the use in adiabatic quantum computations \cite{qaverin},
the advantage of our system is that it may operate at considerably
higher values of the Josephson critical currents; moreover, the
current-phase characteristic of such a system is almost universal in the
sense that it is determined mainly by the microscopic parameters of the
SC materials and only weakly depends on the specifics of contact
fabrication.
We are grateful to K. A. Matveev, Yu. V. Nazarov and especially to
G. B. Lesovik for many useful discussions. This research of M.V.F.
was supported by the INTAS-RFBR grant \# 95-0302,
the collaboration grant \# 7SUP J048531
from the Swiss National Science Foundation and the DGA grant \# 94-1189.
\section{Introduction}
With the growing interest in spintronics, measuring the spin polarization $P$ of a metal has become an important task~\cite{Prinz}. Andreev reflection spectroscopy of point contacts has been suggested as an effective and versatile technique for measuring the polarization of metals in contact with a superconductor~\cite{Jong,Soulen}. To extract the polarization, one analyses the measured differential resistance spectra with the help of a modified BTK model~\cite{BTK,Strijkers,Woods}. This method has been applied to a wide range of materials, delivering converging values for the spin polarization. The extracted $P$ has a linear or parabolic dependence on the relative strength of the interface barrier $Z$, and the true value of the polarization $P$ is measured in the $Z = 0$ limit. Unfortunately, this procedure has also encountered problems, including inadequately understood additional spectral features~\cite{Baltz} and, most of all, the degeneracy of the results of the various BTK type models~\cite{Bugoslavsky,Chalsani}. This applies especially when the polarization is low.
We have recently demonstrated how difficult it is to distinguish the effects of polarization and finite lifetime of Cooper pairs in the spectra of Nb-Co and Nb-Cu contacts~\cite{oma}. The finite lifetime of the Cooper pairs has been used to describe the spectra of non-magnetic contacts~\cite{Plecenik}, but with few exceptions~\cite{Bugoslavsky,Chalsani,Mukhopadhyay} it has not generally been applied to contacts with magnetic metals even though one should expect strong pair breaking in that case. Since spectra of both magnets and non-magnets can often be fitted rather well with the two models, we have suggested to use the $Z$ parameter as a distinguishing criterion between them. In addition to Nb - Co and Nb - Cu data, we present here measurements of Nb in contact with the ferromagnets Fe and Ni as well as the non-magnets Ag and Pt that support our previous results.
\section{Experimental details and results}
We formed the point contacts between a Nb wire and a normal metal wire with typical diameters of 0.25 mm using the shear method~\cite{kirja}. Their differential resistance was recorded in the standard four-wire scheme. Most of the measurements were carried out at 4.2 K in liquid helium and a smaller number in vacuum down to 0.7 K. The normal state resistance of the analysed contacts varied between 1 and 100 Ohm.
Only spectra that showed the typical Andreev reflection double minimum structure were analysed with two different modified BTK models: Strijkers' model~\cite{Strijkers} that uses spin polarization to explain the spectral features and Plecen\'{i}k's model~\cite{Plecenik} as alternative description using the finite Cooper pair lifetime. Figure~\ref{fitf} shows that both models fit the measured spectra well. The parabolic $P(Z)$ dependence of ferromagnetic Co, Fe and Ni extracted using the polarization model agreed well with those measured by others~\cite{Strijkers,Baltz}. However, the same analysis of the non-magnetic metals implies a similar polarization also of Ag, Cu and Pt (Figure~\ref{tulokset}). The analysis of the contacts with the lifetime model results in the $\Gamma(Z)$ dependence also shown in Figure~\ref{tulokset}.
\begin{figure}[h]
\includegraphics[width= \textwidth]{Fig-1.eps}
\caption{\label{fitf} Representative examples of Pt - Nb (left) and Fe - Nb (right) spectra measured in liquid He at 4.2 K. Black lines are the measured spectra, blue dotted lines the polarization fits, and red dashed lines the lifetime fits.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{Fig-2.eps}
\end{center}
\caption{\label{tulokset} The $P(Z)$ (top) and the $\Gamma(Z)$ (bottom) distributions of the investigated Nb - normal metal combinations.}
\end{figure}
\section{Discussion}
It is obvious that the two models cannot both be valid at the same time. Therefore we need to distinguish the two different interpretations from each other and find the one that is more plausible. The $Z$ dependence seems to be the most striking difference between the results of the two models~\cite{oma}. Figure~\ref{tulokset} shows that the polarization model leads to a parabolic $P(Z)$ dependence while the lifetime model gives $\Gamma$ at an almost constant $Z$.
The $Z$ parameter can be expressed as $Z = \sqrt{ Z_b^2 + Z_0^2}$ where $Z_b$ represents the strength of a real tunneling barrier and $Z_0$ the reflections caused by the mismatch of the electronic structures on both sides of the interface~\cite{Blonder1983}. A simple estimate for the latter is $Z_0 = |1-r|/2\sqrt{r}$ with the ratio of the Fermi velocities of the two electrodes $r = v_{F_1}/v_{F_2}$~\cite{Blonder1983}. This approach neglects much of the complexity of the interfaces. However, obtaining even such an estimate is still not trivial because the Fermi velocities are difficult to determine experimentally, especially for the ferromagnets. Estimates for $Z_0$ range from less than 0.1 up to 0.4 for these metal combinations~\cite{oma}.
The $Z$ parameter is related to the transmission coefficient $\tau = 1/(1+Z^2)$ of an interface. Transmission through interfaces has been investigated using the superconducting proximity effect (PE)~\cite{Cirillo2004,Tesauro2005,Attanasio2006,Kushnir2009} and the current perpendicular to plane magnetoresistance (CPP-MR)~\cite{Stiles2000,Xu2006,Sharma2007,Park2000}. In these experiments oxide barriers can be excluded because of the preparation method, and the transmission coefficient should therefore represent the contribution from Fermi surface mismatch only. The obtained transmission coefficients for interfaces between Nb and the non-magnetic normal metals are close to 0.3 which corresponds to $Z \approx 1.5$. The proximity effect studies indicate significantly smaller $\tau$ values for the ferromagnets in contact with Nb while crude estimates based on the free electron model and the CPP-MR data~\cite{Sharma2007,Park2000} result in transmission coefficients smaller than 0.5.
Our results in Figure~\ref{tulokset} derived from the polarization model indicate a very high transmission. Some of the contacts even have $Z \approx 0$, which means a nearly perfect transmission with $\tau \approx 1$. The lifetime model on the other hand indicates a maximum transmission coefficient of approximately 0.8 which is determined by the onset of the $\Gamma(Z)$ data at $Z \approx 0.5$. Hence, results of the lifetime model agree much better with the transmission coefficients obtained by the above mentioned different experimental methods than those of the polarization model.
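
These relations are easy to check numerically (a sketch in Python; the velocity ratio used below is an illustrative assumption, not a measured value):
\begin{verbatim}
import numpy as np

def Z0(r):               # interface mismatch for r = v_F1 / v_F2
    return abs(1.0 - r) / (2.0 * np.sqrt(r))

def tau(Z):              # transmission coefficient of the interface
    return 1.0 / (1.0 + Z ** 2)

def Z_from_tau(t):       # inverse of tau(Z)
    return np.sqrt(1.0 / t - 1.0)

print(Z0(0.5))           # ~0.35 for a factor-of-two velocity mismatch
print(Z_from_tau(0.3))   # ~1.5, matching the quoted tau close to 0.3
print(tau(0.5))          # 0.8, the onset of the Gamma(Z) data
\end{verbatim}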
Neither of the two models leads to a large difference between the ferromagnetic and non-magnetic metals. The $P(Z)$ dependency and the amount of pair breaking are almost identical for all of the investigated normal metals in contact with superconducting Nb. This is unexpected because spin polarization should reduce the amount of Andreev reflection and increase pair breaking at interfaces with ferromagnets more than with non-magnets. This should have a measurable effect on the spectra - why can it not be observed?
\section{Conclusions}
We have measured superconducting Nb in contact with both ferromagnetic and non-magnetic normal metals and analysed the resulting spectra with the mutually exclusive polarization and lifetime models. There are no clear differences between the magnetically ordered and non-magnetic metals in either of the two models or the measured spectra. However, the lifetime model agrees better with the experimental results obtained by other methods. This would indicate that point-contact Andreev reflection spectroscopy, at least with superconducting Nb, possibly does not deliver the true spin polarization of the normal metal.
\section{Acknowledgements}
We thank the Jenny and Antti Wihuri Foundation and the Finnish Concordia Fund for financial support.
\section{Introduction}
\begin{figure*}[htb]
\centering
\includegraphics[width=1.0\textwidth]{framework-ob.pdf}
\caption{The evolutionary algorithm framework for our joint search method. Each individual in the population is evaluated with the accuracy and
model size of the quantized model.
When architectures are fixed during search, the method could provide existing networks with quantization policies.}\label{fig:1}
\end{figure*}
Deep convolutional neural networks have successfully revolutionized various challenging tasks, e.g., image classification~\cite{he2016deep,huang2017densely,szegedy2016rethinking}, object detection~\cite{DBLP:journals/pami/RenHG017} and semantic segmentation~\cite{DBLP:journals/pami/ChenPKMY18}.
Benefiting from their great representation power, CNNs have released human experts from laborious feature engineering through end-to-end learning paradigms.
However, another exhausting task appears, i.e., neural architecture design, which also requires endless trial and error.
For further liberation of human labours, many neural architecture search (NAS) methods~\cite{zoph2017learning,Real2018Regularized} have been proposed and proven to be capable of yielding high-performance models. But the technique of NAS alone is still far from meeting the demands of real-world AI applications.
As networks usually need to be deployed on devices with limited resources, model compression techniques are also indispensable.
In contrast to NAS that is considered at the topological level, model compression aims to refine the neural nodes of a given network with sparse connections or weighting-parameter quantization.
However, computation strategies also need elaborate design. Taking quantization for example, conventional quantization policies often compress all layers to the same level.
Since each layer actually has a different redundancy, it is wise to determine a suitable quantization bit for each layer.
However, quantization choices also involve a large search space, and designing manual heuristics would make the human burden heavier.
In this paper, we make a further step for the liberation of human labours and propose to integrate architecture search and quantization policy into a unified framework for neural networks (JASQ).
A Pareto optimal model~\cite{deb2014multi} is constructed in the evolutionary algorithm to achieve good trade-offs between accuracy and model size.
By adjusting the multi-objective function, our search strategy can output suitable~models for different accuracy or model size demands.
During search, a population of models are first initialized and then evolved in iterations according to their fitness. Fig.~\ref{fig:1} shows the evolutionary framework of our method.
Our method brings the following advantages:
\begin{itemize}
\item \emph{Effectiveness} $\;$ Our method can jointly search for neural architectures and quantization policies. The resulting models, i.e., JASQNet and JASQNet-Small, achieve competitive accuracy to state-of-the-art methods~\cite{he2016deep,huang2017densely,zoph2017learning} and have relatively small model size.
For existing architectures, e.g., ResNet~\cite{he2016deep}, DenseNet~\cite{huang2017densely} and MobileNets~\cite{howard2017mobilenets,sandler2018inverted}, our quantized models can outperform their 2/4/8/16 bits counterparts and even achieve higher accuracies than float models on ImageNet.
\item \emph{Flexibility} $\;$
In our evolutionary search method, a multi-objective function is adopted as illustrated in Fig.~\ref{fig:3} and Eq.~\eqref{Eq1}. By adjusting $\mathcal{T_S}$ in the objective function, we obtain models with different accuracy and size balances. JASQNet has a comparable accuracy to ResNet34~\cite{he2016deep} but much less model size. JASQNet-Small has a similar model size to SqueezeNet~\cite{DBLP:journals/corr/IandolaMAHDK16} but much better accuracy (65.90\% vs 58.09\%).
\item \emph{Efficiency} $\;$
We need only 1 GPU across 3 days to accomplish the joint search of architectures and quantization policies.
Given hand-crafted networks, their quantization policies can be automatically found in a few hours on ImageNet.
\end{itemize}
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{architectures-white2.pdf}
\caption{Architectures for CIFAR-10 and ImageNet.
The image size in ImageNet (224x224) is much larger than that in CIFAR-10 (32x32). So there are additional reduction cells and convolution 3x3 with stride 2 in ImageNet architectures to downsample feature maps.}\label{fig:2}
\end{figure*}
\section{Related Work}
\subsection{Neural Architecture Search}
Techniques in automatically designing network~\cite{zoph2017learning,pham2018efficient,Real2018Regularized} have attracted increasing research interests. Current works usually fall into one of two categories: reinforcement learning (RL) and evolutionary algorithm (EA).
In terms of RL-based methods, NAS~\cite{zoph2016neural} abstracts networks into variable-length strings and uses a reinforcement controller to determine models sequentially. NASNet~\cite{zoph2017learning} follows this search algorithm, but adopts cell-wise search space to save computational resources.
In terms of EA-based methods, AmoebaNet~\cite{Real2018Regularized} shows that a common evolutionary algorithm without any controller can also achieve comparable results and even surpass RL-based methods.
In addition to RL and EA, some other methods have also been applied.
DARTS~\cite{DBLP:journals/corr/abs-1806-09055} introduces a gradient-based method where they formulate the originally discrete search space into continuous parameters.
PNAS~\cite{liu2017progressive} uses a sequential model-based optimization (SMBO) strategy to search architectures in order of increasing complexity.
Other methods including MCTS~\cite{DBLP:journals/corr/NegrinhoG17}, boosting~\cite{DBLP:conf/icml/CortesGKMY17} and hill-climbing~\cite{DBLP:journals/corr/abs-1711-04528} have also shown their potentials.
Most methods mentioned above have produced networks that outperform classical hand-crafted models.
However, neural architectures alone cannot satisfy the demands of real-world applications. Thus, we propose a more convenient approach to provide complete schemes for deep learning practitioners.
\subsection{Model Compression}
Model compression has received increasing attention. This technique can effectively execute deep models in resource-constrained environments, such as mobile or embedded devices. A few practical methods are proposed and put into effect. Network pruning conducts channel-level compressions for CNN models~\cite{DBLP:conf/iccv/LuoWL17, DBLP:journals/corr/HanMD15}. Distillation has been introduced recently~\cite{DBLP:journals/corr/HintonVD15, DBLP:conf/nips/BaC14} that transfers the behaviour of a given model to the smaller student structure. In addition, some special convolution structures are also applied in mobile size devices, such as separable depthwise convolution~\cite{howard2017mobilenets} and 1 x 3 then 3 x 1 factorized convolution~\cite{szegedy2016rethinking}. To reduce the redundancy of the fully connected layer, some methods propose to factorize its weights into truncated pieces~\cite{DBLP:conf/nips/DentonZBLF14,DBLP:conf/interspeech/XueLG13}.
Quantization is also a significant branch of model compression and widely used in real applications~\cite{DBLP:journals/corr/abs-1802-05668,DBLP:journals/corr/ZhuHMD16,DBLP:conf/eccv/RastegariORF16}. Quantization can effectively reduce model size and thus save storage space and communication cost.
Previous works tend to use a uniform precision for the whole network regardless of the different redundancy for each layer. Determining mixed precisions for different layers seems more promising.
Actually mixed precision storage and computation have been widely supported by most hardware platforms, e.g., CPUs and FPGAs. However, because each model has tens or hundreds of layers, it is tedious for human experts to conduct this job. In this work, we combine the search of quantization policies with neural architecture search. Determining a quantization bit for a convolution layer is similar to choosing its kernel size. It is easy to implement this method based on previous NAS works.
\section{Methods}
Neural architecture design and model compression are both essential steps in deep learning applications, especially when we face mobile devices that have limited computation resources. However, both of them are time-consuming if conducted by human experts.
In this work, we jointly search for neural architectures and quantization policies in a unified framework. Compared with only searching for architectures, we evolve both architectures and quantization policies and use the validation accuracies of quantized models as fitnesses. Fig.~\ref{fig:1} illustrates our framework.
\begin{figure
\centering
\includegraphics[width=0.5\textwidth]{objective.pdf}
\caption{Multi-objective Function. Take $\mathcal{T_{S}}=4~\mathtt{MB}$ for example. When size is less than $\mathcal{T_{S}}$, $\mathcal{F}$ depends only on accuracy. Otherwise, $\mathcal{F}$ sharply decreases as punishment.}\label{fig:3}
\end{figure}
\subsection{Problem Definition}
\label{3.1}
A quantized model $\Theta$ can be constructed by its neural network architecture $\mathscr{A}$ and its quantization policy $\mathscr{P}$.
After the model is quantized, we can obtain its validation accuracy $\mathcal{\alpha}$($\Theta$) and its model size $\mathcal{S}(\Theta)$.
In this paper, we define the search problem as a multi-objective problem.
The Pareto optimal model~\cite{deb2014multi} is well known for solving multi-objective problems, and we define our search problem as maximizing the objective function $\mathcal{F}(\Theta)$ as follows:
\\
\begin{equation}\label{Eq1}
\begin{split}
\underset{\small{\Theta}} {\text{max}}
\; \mathcal{F}(\Theta)=
\underset{\small{\Theta}} {\text{max}}
\; \mathcal{\alpha}(\Theta) \cdot \left[\frac{\mathcal{S}(\Theta)}{\mathcal{T_{S}}}\right]^\gamma
\end{split}
\end{equation}
where $\mathcal{T_{S}}$ is the target for the model size and $\gamma$ in the formulation above is defined as follows:
\begin{equation}\label{Eq2}
\begin{split}
\gamma=\left\{
\begin{aligned}
& 0, \;\;\;\; \text{if}\;\; \mathcal{S}(\Theta) \leq \mathcal{T_{S}} \\
& -1, \;\;\;\; \text{otherwise}\\
\end{aligned}
\right.
\end{split}
\end{equation}
It means that if the model size meets the target, we simply use accuracy as the objective function. It degrades to a single objective problem. Otherwise, the objective value is penalized sharply to discourage the excessive model size. We visualize the multi-objective function in Fig.~\ref{fig:3}.
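
A minimal sketch of this fitness in Python (the function and variable names are ours, chosen for illustration):
\begin{verbatim}
def fitness(acc, size_mb, target_mb):
    """Eqs. (1)-(2): plain accuracy when the quantized model meets
    the size target (gamma = 0), penalized otherwise (gamma = -1)."""
    if size_mb <= target_mb:
        return acc
    return acc * target_mb / size_mb

# e.g. with T_S = 4 MB as in Fig. 3 (illustrative numbers)
print(fitness(0.95, 3.5, 4.0))   # 0.95: within the target
print(fitness(0.95, 8.0, 4.0))   # 0.475: oversized model is punished
\end{verbatim}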
The search task is converted into finding a neural architecture $\mathscr{A}$ and a quantization policy $\mathscr{P}$ to construct an optimal model $\Theta=\{\mathscr{A},\mathscr{P}\}$
that maximizes the objective Eq.~\eqref{Eq1}.
In experiments, we first show the effectiveness of the learned quantization policies by fixing the network architecture $\mathscr{A}$ as classical hand-crafted networks. After that, the whole search space is explored as described in Section~\ref{3.2}.
\subsection{Search Space}
\label{3.2}
Our search space can be partitioned into neural architecture search space and quantization search space, $\mathbb{S}=\{\mathbb{S}_{\mathscr{A}},\mathbb{S}_{\mathscr{P}}\}$. In this section, we first introduce them respectively and then summarize our total search space in details.
For neural architecture search space $\mathbb{S}_{\mathscr{A}}$, we follow the NASNet search space~\cite{zoph2017learning}.
This search space has been widely used by many well-known methods~\cite{pham2018efficient,Real2018Regularized,liu2017progressive,DBLP:journals/corr/abs-1806-09055} and thus it is fair for comparison. This cell-wise search space consists of two kinds of Inception-like modules, called the \emph{normal cells} and the \emph{reduction cells}. When taking in a feature map as input, the \emph{normal cells} return a feature map of the same dimension. The \emph{reduction cells} return a feature map with its height and width reduced by a factor of two. These cells are stacked in certain patterns for CIFAR-10 and ImageNet respectively as shown in Fig.~\ref{fig:2}. The resulting architecture is determined by the normal cell structure and the reduction cell structure, the first convolution channels ${(F)}$ and cell stacking number ${(N)}$. Only the structure of the cells is altered during search. Each cell is a directed acyclic graph consisting of combinations. A single combination takes two inputs and applies an operation to each of them.
Therefore, each combination can be specified by two inputs and two operations, $\{{i_1,i_2,o_1,o_2}\}$.
The combination output is the addition of them and all combination outputs are concatenated as the cell output.
For quantization policy $\mathbb{S}_{\mathscr{P}}$, we aim to find an optimal quantization bit for each cell. As shown in Fig.~\ref{fig:2}, there are ${k = 3\cdot N+2}$ cells in the CIFAR-10 architecture. Thus, the problem is converted into searching for a string of bits for these cells $\mathscr{P} =\{{b_1,b_2,...,b_{k}}\}$.
In our implementation, we conduct search with a string of code to represent our total search space $\mathbb{S}$. As the neural architecture is determined by the normal cell and the reduction cell, each model is specified by the normal cell structure and the reduction cell structure, $\mathbb{S}_{\mathscr{A}}=\{\mathscr{A}_{\mathtt{nom}},\mathscr{A}_{\mathtt{rec}}\}$. As mentioned above, the normal cell structure contains ${k=3\cdot N+2}$ combinations, that is, $\mathscr{A}_{\mathtt{nom}} = {\{C_1,C_2,...,C_k\}_\mathtt{nom}}$ and the reduction cell structure is same. A combination is specified by two inputs and two operations, which is presented as ${C_j=\{i_1,i_2,o_1,o_2\}_j}$.
The choices of architecture operations ${o}$ and quantization levels ${b}$ are shown below:
\emph{$\bullet$ Architecture}: 3x3 separable conv, 5x5 separable conv, 3x3 avg pooling, 3x3 max pooling, zero, identity.
\emph{$\bullet$ Quantization}: 4 bit, 8 bit, 16 bit.
Assuming there are $\text{\#}\mathbb{S}_{\mathscr{A}}$ possible architectures and $\mathtt{\text{\#}}\mathbb{S}_{\mathscr{P}}$ possible compression heuristics respectively, the total complexity of our search space is $\mathtt{\text{\#}\mathbb{S}_{\mathscr{A}} \cdot \text{\#}\mathbb{S}_{\mathscr{P}}}$. In experiments, we search on CIFAR-10 and the cell stacking number $\mathtt{(N)}$ is 6. As in Fig.~\ref{fig:2}, there are ${6\times 3 + 2=20}$ cells in each model and $\mathtt{\text{\#}}\mathbb{S}_{\mathscr{P}}$ equals ${3^{20}\approx3.5\times 10^9}$. For the architecture search space, all our comparison methods and our approach follow NASNet~\cite{zoph2017learning}.
Thus, our total search space is ${3.5\times 10^9}$ times as large as that of the comparison methods.
\subsection{Search Strategy}
\label{3.3}
We employ a classical evolutionary algorithm, \emph{tournament selection}~\cite{goldberg1991comparative}.
A population of models $\mathbf{P}$ is first initialized randomly. For any model $\Theta$, we need to optimize its architecture $\mathscr{A}$ and quantization policy $\mathscr{P}$.
Each individual model $\Theta$ of $\mathbf{P}$ is first trained on the training set $D_{train}$, quantized according to its compression strategy and then evaluated on the validation set $D_{val}$. Combined with its model size $\mathcal{S}(\Theta)$, its fitness $\mathcal{F}(\Theta)$ is computed as Eq.~\eqref{Eq1}. At each evolutionary step, a subset $\mathbf{S}$ is randomly sampled from $\mathbf{P}$.
According to their fitnesses, we can select the best individual $\Theta_{\mathtt{best}}$ and the worst individual $\Theta_{\mathtt{worst}}$ among $\mathbf{S}$.
$\Theta_{\mathtt{worst}}$ is then excluded from $\mathbf{P}$ and $\Theta_{\mathtt{best}}$ becomes a parent and produces a child $\Theta_{\mathtt{mut}}$ with \emph{mutation}.
$\Theta_{\mathtt{mut}}$ is then trained, quantized and evaluated to measure its fitness $\mathcal{F}(\Theta)$. Afterwards $\Theta_{\mathtt{mut}}$ is pushed into $\mathbf{P}$.
This scheme actually keeps repeating competitions of random samples in iterations. The procedure is formulated in Algorithm \ref{algo:1}.
Specially, \emph{mutation} is conducted to the neural architecture $\mathscr{A}$ and the quantization policy $\mathscr{P}$ respectively in each iteration. For neural architecture $\mathscr{A}$, we make mutations to each combination in the cells, that is to randomly choose one from $\{{i_1,i_2,o_1,o_2}\}$, and then replace it with a random substitute. For quantization policy $\mathscr{P}$, \emph{mutation} is to randomly pick one from $\{{b_1,b_2,...,b_{k}}\}$ and reset it as a random choice of quantization bits.
\begin{algorithm}[t]
\caption{Search Strategy} \label{algo:1}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{population size $\mathtt{\text{\#}P}$, sample size $\mathtt{\text{\#}S}$, \\
training set \texttt{$D_{train}$}, validation set \texttt{$D_{val}$}, \\
max num epochs $\mathtt{\text{\#}E}$ }
\Output{a population of models $\mathbf{P}$}
\begin{small}
\texttt{$\mathbf{P^{(0)}}$ $\gets$ initialize($\mathtt{\text{\#}E}$)}
\For{i=1:$\mathtt{\text{\#}E}$}{
\texttt{$\mathbf{S^{(i)}}$ $\gets$ sample($\mathbf{P^{(i-1)}}$, $\mathtt{\text{\#}S}$)}
\texttt{$\Theta_{\mathtt{best}}$,$\Theta_{\mathtt{worst}}$ $\gets$ select($\mathbf{S^{(i)}}$)}
\texttt{$\mathscr{A}_{\mathtt{mut}}$ $\gets$ mutate($\mathscr{A}_{\mathtt{best}}$)}
\texttt{$\mathscr{P}_{\mathtt{mut}}$ $\gets$ mutate($\mathscr{P}_{\mathtt{best}}$)}
\texttt{$\Theta_{\mathtt{mut}}$ $\gets$ train($D_{train}$, $\mathscr{A}_{\mathtt{mut}}$)}
\texttt{$\mathcal{S}(\Theta_{\mathtt{mut}})$ $\gets$ quantize($\Theta_{\mathtt{mut}},\mathscr{P}_{\mathtt{mut}}$)}
\texttt{$\mathcal{\alpha}(\Theta_{\mathtt{mut}})$ $\gets$ test($\Theta_{\mathtt{mut}}$, $D_{val}$)}
\texttt{$\mathcal{F}(\Theta_{\mathtt{mut}})$ $\gets$ Eq.\eqref{Eq1}($\mathscr{\alpha}(\Theta_{\mathtt{mut}})$ ,$\mathcal{S}(\Theta_{\mathtt{mut}})$)}
\texttt{$\mathbf{P^{(i-1)}}$ $\gets$ push($\mathbf{P^{(i-1)}}$, $\Theta_{\mathtt{mut}}$)}
\texttt{$\mathbf{P^{(i)}}$ $\gets$ pop($\mathbf{P^{(i-1)}}$, $\Theta_{\mathtt{worst}}$)}
}
\end{small}
\end{algorithm}
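
The mutation step above can be sketched as follows (a simplified Python illustration; the operation names and the input-index handling are reduced for brevity, see Section 3.2 for the exact vocabularies):
\begin{verbatim}
import random

OPS  = ["sep3x3", "sep5x5", "avg3x3", "max3x3", "zero", "identity"]
BITS = [4, 8, 16]

def mutate_architecture(cell):
    # cell: list of combinations, each a dict {i1, i2, o1, o2}
    comb = random.choice(cell)
    key = random.choice(["i1", "i2", "o1", "o2"])
    if key.startswith("o"):
        comb[key] = random.choice(OPS)           # replace an operation
    else:
        comb[key] = random.randrange(len(cell))  # replace an input index
    return cell

def mutate_policy(bits):
    # bits: quantization bit per cell, {b_1, ..., b_k}
    bits[random.randrange(len(bits))] = random.choice(BITS)
    return bits
\end{verbatim}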
\begin{table*}[ht
\small
\centering
\captionof{table}{The results of quantization policy search for existing networks on ImageNet. Here we compare to 8 bits models and float models.
Numbers in brackets are Acc increase and Size compression ratio compared to float models.}
\label{tab:1}
\begin{tabular}{ l | c r | c r | c c }
\hline
& \multicolumn{2}{c|}{\textbf{Ours}} & \multicolumn{2}{c|}{\textbf{8 bits}} & \multicolumn{2}{c}{\textbf{Float}}\\
& Acc/\% & Size/MB & Acc/\% & Size/MB & Acc/\% & Size/MB \\
\hline
ResNet18~\cite{he2016deep} & \textbf{70.02 (+0.26)}&\textbf{7.21 (6.49x)} & 69.64 (-0.12)&11.47 (4.08x) & 69.76&46.76\\
ResNet34~\cite{he2016deep} & \textbf{73.77 (+0.46)}&\textbf{11.92 (7.31x)} & 73.23 (-0.08)&21.32 (4.09x) &73.31&87.19\\
ResNet50~\cite{he2016deep} & \textbf{76.39 (+0.26)}&\textbf{14.91 (6.86x)} & 76.15 (+0.02)&24.74 (4.13x) &76.13&102.23\\
ResNet101~\cite{he2016deep} & \textbf{78.13 (+0.76)}&\textbf{31.54 (5.65x)} & 77.27 (-0.10)&43.19 (4.12x) &77.37&178.20\\
ResNet152~\cite{he2016deep} & \textbf{78.86 (+0.55)}&\textbf{46.63 (5.16x)} & 78.30 (-0.01)&58.38 (4.12x) &78.31&240.77\\
\hline
DenseNet-121~\cite{huang2017densely} & \textbf{74.56 (+0.12)}&\textbf{6.15 (5.19x)} & 74.44 (+0.00)&7.65 (4.17x) &74.44&31.92\\
DenseNet-169~\cite{huang2017densely} & \textbf{76.39 (+0.79)}&\textbf{11.89 (4.76x)} & 75.45 (-0.15)&13.54 (4.18x) &75.60&56.60\\
DenseNet-201~\cite{huang2017densely} & \textbf{77.06 (+0.16)}&\textbf{17.24 (4.64x)} & 76.92 (+0.02)&19.09 (4.19x) &76.90&80.06\\
\hline
MobileNet-v1$^*$~\cite{howard2017mobilenets} & \textbf{70.59 (+1.02)}&4.10 (4.12x) & 68.77 (-0.80)&\textbf{4.05 (4.18x)} &69.57&16.93\\
MobileNet-v2$^*$~\cite{sandler2018inverted} & \textbf{72.19 (+0.38)}&4.25 (3.30x) & 68.06 (-3.75)&\textbf{3.45 (4.06x)} &71.81&14.02\\
SqueezeNet~\cite{DBLP:journals/corr/IandolaMAHDK16} & \textbf{60.01 (+1.92)}&1.22 (1.93x) & 57.93 (-0.16)&\textbf{1.20 (1.96x)} &58.09&2.35\\
\hline
\end{tabular}
\footnotesize{\\$^*$ MobileNet-v1 and MobileNet-v2 are implemented and trained by ourselves. The pre-trained models of other networks are officially provided by Pytorch.}\\
\end{table*}
\subsection{Quantization Details}
\label{3.4}
In this section, we introduce the quantization process in details.
Given a weight vector $w$ and the quantization bit $b$, the quantization process can be formulated as follows:
\begin{equation}\label{Eq3}
\hat{w} = \mathcal{L}^{-1}(Q(\mathcal{L}(w),b))
\end{equation}
where $\mathcal{L}(w)=\frac{w-\mu}{\nu}$ is a linear scaling function~\cite{DBLP:journals/corr/HeWZWYZZ16} that normalizes arbitrary vectors into the range [0,1] and $\mathcal{L}^{-1}$ is the inverse function.
Specifically, as the whole parameter vector usually has a huge dimension, magnitude imbalance might push most elements in the vector to zero. This would severely harm the precision. To address this issue, we adopt the bucketing technique~\cite{DBLP:journals/corr/Alistarh0TV16}, that is, the scaling function is applied separately to a fixed length of consecutive values. The length is the bucket size $k$.
In Eq.\eqref{Eq3}, $Q$ is the actual quantization function that only accepts values in [0,1]. For a certain element $w_i$ and the quantization bit $b$, this process is shown below:
\begin{equation}\label{Eq4}
Q(w_i,b) = \frac{\lfloor w_i\,2^b\rfloor}{2^b} + \frac{\xi_i}{2^b}\\
\end{equation}
This function assigns the scaled value $w_i$ to the closest quantization point and $\xi_i$ is the rounding function as follows.
\begin{equation}
\begin{split}
\xi_i=\left\{
\begin{aligned}
&1, \;\;\;\; \text{if}\;\; w_i\,2^b-\lfloor w_i\,2^b\rfloor > 0.5 \\
&0, \;\;\;\; \text{otherwise}\\
\end{aligned}
\right.
\end{split}
\end{equation}
Given a certain weight vector of size $N$ and the size of full precision weight $f$ (usually 32 bits), full precision requires $fN$ bits in total to store this vector. As we use $b$ bits per weight and two scaling parameters $\mu$ and $\nu$ for each bucket, the quantized vector needs $bN+ 2\frac{fN}{k}$ bits in total. Thus, the compression ratio is $\frac{kf}{kb+2f}$ for this weight vector.
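
A minimal sketch of Eqs.~\eqref{Eq3}--\eqref{Eq4} with bucketing (the min-max choice of $\mu$ and $\nu$ is our assumption for illustration, and every bucket is assumed to have a nonzero range):
\begin{verbatim}
import numpy as np

def quantize(w, b=8, k=256):
    """Bucketed linear quantization of a flat weight vector
    whose length is a multiple of the bucket size k."""
    w = w.reshape(-1, k)                     # one bucket per row
    lo = w.min(axis=1, keepdims=True)        # mu
    rng = w.max(axis=1, keepdims=True) - lo  # nu (assumed nonzero)
    scaled = (w - lo) / rng                  # L(w) in [0, 1]
    q = np.round(scaled * 2 ** b) / 2 ** b   # Q(w_i, b): nearest point
    return (q * rng + lo).ravel()            # L^{-1}: back to weight scale

w = np.random.randn(1024).astype(np.float32)
print(np.abs(quantize(w) - w).max())         # small error for b = 8
\end{verbatim}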
\section{Experimental Results}
In this section, we first apply our approach to existing networks and show the compression results on ImageNet. After that, we introduce the joint search results.
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{figure3.pdf}
\caption{The results of quantization policy search for existing networks on ImageNet. Here we compare to 2 bits, 4 bits, 8 bits and 16 bits models.
The points of Ours are clearly under the Baselines. Models quantized by our policies have better accuracies than others.}\label{fig:4}
\end{figure*}
\subsection{Quantization on Fixed Architecture}
\label{4.1}
Our method can be flexibly applied to any existing network for providing quantization policies.
In this section, we report the quantization results of some classical networks on ImageNet~\cite{DBLP:conf/cvpr/DengDSLL009}. These state-of-the-art networks include a series of ResNet~\cite{he2016deep}, DensenNet~\cite{huang2017densely} and some mobile size networks, e.g., MobileNet-v1~\cite{howard2017mobilenets}, MobileNet-v2~\cite{sandler2018inverted} and SqueezeNet~\cite{DBLP:journals/corr/IandolaMAHDK16}. For all ResNets~\cite{he2016deep}, DenseNets~\cite{huang2017densely} and SqueezeNet~\cite{DBLP:journals/corr/IandolaMAHDK16}, we obtain their pre-trained float models from torchvision.models class of PyTorch. Because MobileNet-v1~\cite{howard2017mobilenets} and MobileNet-v2~\cite{sandler2018inverted} models are not provided by official PyTorch, we implement and train them from scratch to get these two float models.
Table~\ref{tab:1} presents the performance of our quantization policies on the state-of-the-art networks. In the $Acc/\%$ columns, the numbers in the brackets mean the accuracy increase or decrease after compression. In the \emph{Size/MB} columns, the numbers in the brackets mean the compression ratio.
It is worth noting that our method can effectively improve the accuracy and compress the model size. Taking ResNet18~\cite{he2016deep} for example, the model generated by our method has a 70.02\% accuracy, which is 0.26\% higher than the float model. Our compressed ResNet18 takes 7.21 MB while the float model takes 46.76 MB, 6.49 times as much. For all these ResNets~\cite{he2016deep} and DenseNets~\cite{huang2017densely}, our method can generate models that are more accurate and smaller than both 8 bits and float models. For the mobile size networks, MobileNet-v1~\cite{howard2017mobilenets}, MobileNet-v2~\cite{sandler2018inverted} and SqueezeNet~\cite{DBLP:journals/corr/IandolaMAHDK16}, ours are slightly larger than 8 bits models, but much more accurate than both the 8 bits and the float models.
In addition, we also compare our results to other compression strategies in Fig.~\ref{fig:4}, including 2 bits, 4 bits and 16 bits. It shows the bi-objective frontiers obtained by our results and the corresponding 2/4/8/16 bits results. A clear improvement appears: our results have much higher accuracies than 2/4 bits models and are much smaller than 8/16 bits models of ResNets~\cite{he2016deep} and DenseNets~\cite{huang2017densely}. For mobile size models, i.e., MobileNet-v1~\cite{howard2017mobilenets}, MobileNet-v2~\cite{sandler2018inverted} and SqueezeNet~\cite{DBLP:journals/corr/IandolaMAHDK16}, our results are more accurate than the models of all bit-widths.
\subsection{Joint Architecture Search and Quantization}
\label{4.2}
The joint search is conducted on CIFAR-10 to obtain the normal cell structure $\mathscr{A}_{\mathtt{nom}}$, the reduction cell structure $\mathscr{A}_{\mathtt{rec}}$ and the quantization policy $\mathscr{P}$.
After search, we retrain CIFAR-10 and ImageNet float models from scratch. CIFAR-10 results are obtained by quantizing the float models with the searched quantization policy $\mathscr{P}$.
As ImageNet architectures have additional cells and layers, we cannot directly apply $\mathscr{P}$ to ImageNet float models. Thus we use $\mathscr{P}$ to initialize an evolution population to search ImageNet quantization policies as in Section \ref{4.1}.
In Table~\ref{tab:2}, we compare the performance of ours to other state-of-the-art methods that search only for neural architectures. Note that all methods listed in Table~\ref{tab:2} use NASNet~\cite{zoph2017learning} architecture search space.
JASQNet is obtained with $\mathcal{T_{S}}$ set as $3~\mathtt{MB}$ during search and JASQNet-Small is obtained with $\mathcal{T}_{S}$ set as $1~\mathtt{MB}$ during search. JASQNet-Small (float) and JASQNet (float) are the float models before the searched quantization policies are applied to them.
For the model JASQNet, it achieves competitive accuracies and a relatively small model size compared to other methods. On CIFAR-10, only NASNet-A~\cite{zoph2017learning} and AmoebaNet-B~\cite{Real2018Regularized} have clearly higher accuracies than JASQNet. But their search costs are hundreds of times larger than ours. On CIFAR-10, the model size of JASQNet is more than 4 times smaller than that of other comparison models. On ImageNet, the accuracy of JASQNet is competitive to others and the model size of JASQNet is also about 4 times smaller than that of other comparison models.
For the model JASQNet-Small, its model size is 10 times smaller than that of other comparison models on CIFAR-10.
On ImageNet, its model size is 7 to 8 times smaller than others. Compared to SqueezeNet~\cite{DBLP:journals/corr/IandolaMAHDK16}, the model of similar size (41.91\% top-1 error with 2.35 MB), its accuracy is much higher.
Compared to JASQNet (float) and JASQNet-Small (float), JASQNet and JASQNet-Small have higher accuracies and smaller model sizes. It shows that our learned quantization policies are effective. Compared to other methods that only search for architectures, JASQNet (float) and JASQNet-Small (float) are not the best. This is because our search space is much larger, as it includes quantization choices, and it is unfair to directly compare them with our float models.
\begin{table*}[t]
\centering
\caption{Comparisons to architecture-search methods on CIFAR-10 and ImageNet ($224 \times 224$).} \label{tab:2}
\begin{tabular}{ l |c c| c c c | c c c}
\hline
& \multicolumn{2}{c|}{\textbf{Search Cost}} & \multicolumn{3}{c|}{\textbf{CIFAR-10}} & \multicolumn{3}{c}{\textbf{ImageNet}}\\
& GPUs & Days &\#Params/M & Size/MB & Error/\% &\#Params/M & Size/MB & Error/\% \\
\hline
PNASNet-5~\cite{liu2017progressive} & 100&1.5 & 3.2 & 12.8 & 3.41 $\pm$ 0.09 & 5.1 & 20.4 & 25.8 \\
NASNet-A$^*$ ~\cite{zoph2017learning} & 500&4 & 3.3 & 13.2 & 2.65 & 5.3 & 21.2& 26.0 \\
NASNet-B~\cite{zoph2017learning} & 500&4 & 2.6 & 10.4 & 3.73 & 5.3 & 21.2& 27.2 \\
NASNet-C~\cite{zoph2017learning} & 500&4 & 3.1 & 12.4 & 3.59 & 4.9 & 19.6& 27.5 \\
AmoebaNet-B$^*$ ~\cite{Real2018Regularized} & 450&7 & 2.8 & 11.2 & 2.55 $\pm$ 0.05 & 5.3 & 21.2& 26.0 \\
ENAS$^*$ ~\cite{pham2018efficient} & 1&0.5 & 4.6 & 18.4 & 2.89 &- &- &- \\
DARTS (1st order)$^*$~\cite{DBLP:journals/corr/abs-1806-09055} & 1&1.5 & 2.9 & 11.6 & 2.94 & 4.9 & 19.6 & 26.9 \\
DARTS (2nd order)$^*$\cite{DBLP:journals/corr/abs-1806-09055} & 1 &4 & 3.4&13.6 &2.83$\pm$ 0.06 &- &-&- \\
\hline
JASQNet (float)$^*$ & 1&3 & 3.3 & 13.2 & 2.94 & 4.7 & 18.8 & 27.25 \\
JASQNet$^*$ & 1&3 & 3.3 &\textbf{2.5} & 2.90 & 4.7 & \textbf{4.9} & 27.22\\
\hline
JASQNet-Small (float)$^*$ & 1&3 & 1.8 & 7.2 & 3.08 &2.8 &11.2 & 34.14 \\% 1.7*4=
JASQNet-Small$^*$ & 1&3 & 1.8 &\textbf{0.9} & 2.97 & 2.8 & \textbf{2.5} & 34.10\\
\hline
\end{tabular}
\footnotesize{\\$^*$ Training with cutout~\cite{cutout} on CIFAR-10. All methods use NASNet~\cite{zoph2017learning} architecture search space.
}\\
\end{table*}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.9\textwidth]{cells.pdf}
\caption{JASQNet and JASQNet-Small}\label{fig:5}
\end{figure*}
It is worth clarifying \#Params and Size in Table~\ref{tab:2}. \#Params is the number of free parameters, in millions (M), while Size is the model size for storage, in MBytes (MB); quantization reduces Size but not \#Params. The resulting architectures are shown in Fig.~\ref{fig:5}.
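To make the relation concrete, Size follows directly from the per-layer bit-widths while \#Params does not change; a minimal sketch (the two layer sizes below are hypothetical):
\begin{verbatim}
def storage_size_mb(params_per_layer, bits_per_layer):
    """Model size in MB from per-layer parameter counts/bit-widths."""
    bits = sum(n * b for n, b in zip(params_per_layer, bits_per_layer))
    return bits / 8 / 1e6

params = [1.2e6, 2.1e6]                    # hypothetical layer sizes
print(storage_size_mb(params, [32, 32]))   # float (32-bit): 13.2 MB
print(storage_size_mb(params, [4, 8]))     # mixed-precision: 2.7 MB
\end{verbatim}
With 3.3M parameters at 32 bits this reproduces the 13.2 MB float size of JASQNet in Table~\ref{tab:2}.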
\begin{figure*}[tb]
\centering
\includegraphics[width=1.0\textwidth]{mean_std.pdf}
\caption{Ablation study on whether to use small proxy networks for search.}\label{fig:6}
\end{figure*}
\subsection{Analyses}
\subsubsection{Search Process Details}
Previous works~\cite{zoph2017learning, pham2018efficient, Real2018Regularized, DBLP:journals/corr/abs-1806-09055, liu2017progressive} tend to search on small proxy networks and use wider and deeper networks in the final architecture evaluation.
In Table~\ref{tab:3}, we list the depth and width of the networks used for search and for evaluation. N is the number of stacked cells in Fig.~\ref{fig:2} and F is the number of initial convolution channels.
Taking the width for example, DARTS~\cite{DBLP:journals/corr/abs-1806-09055} uses a network with initial channels 16 for search and evaluates on networks with initial channels 36. ENAS~\cite{pham2018efficient} searches on networks with initial channels 20 and evaluates on a network with initial channels 36.
The original purpose of searching on small proxy networks is to save time. But in our joint search experiments, we empirically find it somewhat harmful to the search process. We conduct an ablation study on using small proxy networks, shown in Fig.~\ref{fig:6}. The blue line represents the experiment without small proxy networks, where the networks have the same width (F=36) and depth (N=6) as those for evaluation. The red line represents searching with small proxy networks (F=16 and N=6). We keep track of the most recent population during evolution. Fig.~\ref{fig:6}~(a) shows the highest average fitness of the population over time, and Fig.~\ref{fig:6}~(b) shows the lowest standard deviation of the population fitnesses over time. Wider networks might lead to higher accuracies, but it is clear that the blue line in Fig.~\ref{fig:6}~(a) converges faster than the red line. The standard deviation of the population fitnesses reflects the convergence of the evolution, so Fig.~\ref{fig:6}~(b) also shows that searching without proxy networks leads to faster convergence.
\begin{table}[tb]
\centering
\caption{Depth and Width for Search and Evaluation on CIFAR-10.} \label{tab:3}
\begin{tabular}{ l |c c| c c }
\hline
& \multicolumn{2}{c|}{Search} & \multicolumn{2}{c}{Evaluation}\\
& F & N & F & N\\
\hline
PNASNet-5~\cite{liu2017progressive} &24 &2 &48 &3 \\
NASNet ~\cite{zoph2017learning} &32 &2 &32 &6 \\
AmoebaNet ~\cite{Real2018Regularized} &24 &3 &36 &6 \\
ENAS ~\cite{pham2018efficient}$^*$ &20 &2 &36 & 5 \\
DARTS~\cite{DBLP:journals/corr/abs-1806-09055} &16 &2 &36 &2 \\
JASQ &36 &6 &36 &6 \\
\hline
\end{tabular}
\footnotesize{\\$^*$ This information is found in the released code but not in the paper.}
\end{table}
\subsubsection{Comprehensive Comparison}
Joint search performs better than architecture search alone or quantization search alone.
As illustrated in Fig.~\ref{fig:7}, JASQNet is better than models from architecture search only (blue squares) and quantization search only (red circles). Models with too many parameters (DenseNets) are not shown. JASQNet thus reaches a better bi-objective position.
In addition, as shown in Table~\ref{tab:1}, suitable quantization policies can improve accuracy and decrease model size simultaneously. Whether quantizing existing networks or jointly searching architectures and quantization policies, our quantized models are more accurate than their float counterparts. In Fig.~\ref{fig:7}, we also depict JASQNet (float) as a blue pentagon.
The gap between JASQNet and JASQNet (float) shows the effectiveness of our quantization policy: their accuracies are almost the same, but JASQNet has a much smaller model size.
As shown in Table~\ref{tab:2}, JASQNet (float) and JASQNet-Small (float) are not better than NASNet~\cite{zoph2017learning} or AmoebaNet~\cite{Real2018Regularized}. The first reason is that joint search results in a larger search space, which might harm the quality of the searched architectures. The second possible reason is that their search processes spend much more computational resources than ours.
\begin{figure}[tb]
\centering
\includegraphics[width=0.5\textwidth]{All.pdf}
\caption{Comparisons with only architecture search and only quantization search. The gap between JASQNet and JASQNet (float) shows the effectiveness of our quantization policy. JASQNet reaches a better balance point than the other models.}\label{fig:7}
\end{figure}
\section{Conclusion}
Searching for both architectures and compression heuristics is a direct and convenient approach for deep learning practitioners.
To the best of our knowledge, this joint task has not been proposed in the literature before.
In this work, we propose to automatically design architectures and compress models. Our method can not only conduct a joint search of architectures and quantization policies, but also provide quantization policies for existing networks. The models generated by our method, JASQNet and JASQNet-Small, achieve better trade-offs between accuracy and model size than architecture search alone or quantization search alone.
\section*{Appendix}
\subsection*{1) CIFAR-10 Classification}
\paragraph{Dataset}
There are 50,000 training images and 10,000 test images in CIFAR-10; 5,000 images are split from the training set as a validation set.
We whiten all images with channel mean subtraction and standard deviation division.
Images are padded to $40 \times 40$ and then randomly cropped into $32 \times 32$ patches; horizontal flipping is also used. We apply these preprocessing procedures for both search and evaluation.
\paragraph{Training}
For fair comparisons, our training hyper-parameters on CIFAR-10 are identical to those of DARTS~\cite{DBLP:journals/corr/abs-1806-09055}. The models for evaluation are trained for 600 epochs with batch size 96 on a single Titan-XP GPU.
The initial learning rate is 0.025 and is annealed down to zero following a cosine schedule. We set the momentum rate to 0.9 and the weight decay to $3\times 10^{-4}$. Following existing works \cite{DBLP:journals/corr/abs-1806-09055,zoph2017learning,Real2018Regularized}, additional enhancements include cutout~\cite{cutout}, path dropout with probability 0.3 and an auxiliary tower with weight 0.4.
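The cosine schedule above follows the standard annealing rule; a minimal sketch with the stated hyper-parameters:
\begin{verbatim}
import math

def cosine_lr(epoch, total_epochs=600, lr_init=0.025):
    """Anneal the learning rate from lr_init down to zero."""
    return 0.5 * lr_init * (1 + math.cos(math.pi * epoch / total_epochs))
\end{verbatim}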
\subsection*{2) ImageNet Classification}
\paragraph{Dataset}
The original input images are first resized such that their shorter sides are randomly sampled in [256, 480] for scale augmentation~\cite{simonyan2014very}. We then randomly crop images into $224 \times 224$ patches.
We also apply horizontal flipping, mean pixel subtraction and the standard color augmentation; these are the standard augmentations proposed in AlexNet~\cite{krizhevsky2012imagenet}.
In addition, most augmentations are disabled in the last 20 epochs, with the sole exception of cropping and flipping, for fine-tuning.
\paragraph{Training}
Each model is trained for 200 epochs on 4 GPUs with batch size 256.
We set the momentum rate to 0.9 and the weight decay to $4\times 10^{-5}$. We also employ an auxiliary classifier located at $\frac{2}{3}$ of the maximum depth, weighted by 0.4. The initial learning rate is 0.1 and later decays with a polynomial schedule.
\subsection*{3) Quantization Process}
Previous works \cite{DBLP:journals/corr/ZhuHMD16,DBLP:journals/corr/abs-1709-01134} do not quantize the first and last layers of ImageNet models to avoid severe accuracy degradation. We also follow this convention for our ImageNet models but do not apply this constraint to CIFAR-10 models.
Another detail is that we use Huffman encoding for the quantized value representation to save additional space.
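As an illustration, a minimal sketch of Huffman code-length assignment over quantized values (our own illustration, not necessarily the exact implementation):
\begin{verbatim}
import heapq
from collections import Counter

def huffman_code_lengths(values):
    """Code length (bits) per distinct quantized value."""
    freq = Counter(values)
    if len(freq) == 1:                  # degenerate single-symbol case
        return {next(iter(freq)): 1}
    # heap items: (frequency, tiebreaker, {symbol: code length so far})
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

weights = [0, 0, 0, 1, 1, 2, 2, 2, 2, 3]   # toy quantized values
lengths = huffman_code_lengths(weights)
bits = sum(lengths[v] for v in weights)    # total bits after coding
\end{verbatim}
Frequent values receive shorter codes, so the encoded weights occupy fewer bits than a fixed-width representation whenever the value distribution is skewed.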
\subsection*{4) Search Process}
\begin{figure}[tb]
\centering
\includegraphics[width=0.5\textwidth]{P-S.pdf}
\caption{Hyper-parameter optimization experiments about population size and sample size. We conduct these experiments in a small scale by setting input filters F=16 and stacking cells number N=2. Each experiment runs for 100 iterations.}\label{fig:8}
\end{figure}
The evolutionary search algorithm employed in this paper can be classified as tournament selection \cite{goldberg1991comparative}. There are only two hyper-parameters: the population size $\mathtt{\text{\#}P}$ and the sample size $\mathtt{\text{\#}S}$.
The hyper-parameter optimization process is illustrated in Figure~\ref{fig:8}. We conduct all these experiments with the same settings except $\mathtt{\text{\#}P}$ and $\mathtt{\text{\#}S}$.
For efficient comparison, these experiments run at a small scale for only 100 iterations: the number of input filters F is set to 16 and the number of stacked cells N is set to 2. The figure shows the mean fitness of the models in the population over the iterations. We pick the best setting ($\mathtt{\text{\#}P}=16,\mathtt{\text{\#}S}=16$) from Figure~\ref{fig:8} for the experiments in this paper.
We also employ the parameter-sharing technique for acceleration \cite{pham2018efficient}; that is, one set of parameters is shared among all individual models in the population.
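A minimal sketch of this tournament-selection loop is given below; the replacement rule (dropping the oldest individual) follows regularized evolution and is an assumption on our part, and \texttt{fitness} and \texttt{mutate} stand for the accuracy/size objective and the cell/bit-width perturbations:
\begin{verbatim}
import random

def tournament_search(init_pop, fitness, mutate,
                      iterations, sample_size):
    """Evolve a population with tournament selection."""
    population = [(ind, fitness(ind)) for ind in init_pop]
    best = max(population, key=lambda p: p[1])
    for _ in range(iterations):
        sample = random.sample(population, sample_size)
        parent = max(sample, key=lambda p: p[1])[0]
        child = mutate(parent)
        population.append((child, fitness(child)))
        population.pop(0)            # drop the oldest individual
        best = max(best, population[-1], key=lambda p: p[1])
    return best[0]
\end{verbatim}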
\section*{Acknowledgments}
The successful installation, commissioning, and operation of the Pierre Auger Observatory would not have been possible without the strong commitment and effort from the technical and administrative staff in Malarg\"{u}e.
We are very grateful to the following agencies and organizations for financial support:
Comisi\'{o}n Nacional de Energ\'{\i}a At\'{o}mica, Fundaci\'{o}n Antorchas, Gobierno De La Provincia de Mendoza, Municipalidad de Malarg\"{u}e, NDM Holdings and Valle Las Le\~{n}as, in gratitude for their continuing cooperation over land access, Argentina; the Australian Research Council; Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (CNPq), Financiadora de Estudos e Projetos (FINEP), Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de Rio de Janeiro (FAPERJ), S\~{a}o Paulo Research Foundation (FAPESP) Grants \# 2010/07359-6, \# 1999/05404-3, Minist\'{e}rio de Ci\^{e}ncia e Tecnologia (MCT), Brazil; MSMT-CR LG13007, 7AMB14AR005, CZ.1.05/2.1.00/03.0058 and the Czech Science Foundation grant 14-17501S, Czech Republic; Centre de Calcul IN2P3/CNRS, Centre National de la Recherche Scientifique (CNRS), Conseil R\'{e}gional Ile-de-France, D\'{e}partement Physique Nucl\'{e}aire et Corpusculaire (PNC-IN2P3/CNRS), D\'{e}partement Sciences de l'Univers (SDU-INSU/CNRS), Institut Lagrange de Paris, ILP LABEX ANR-10-LABX-63, within the Investissements d'Avenir Programme ANR-11-IDEX-0004-02, France; Bundesministerium f\"{u}r Bildung und Forschung (BMBF), Deutsche Forschungsgemeinschaft (DFG), Finanzministerium Baden-W\"{u}rttemberg, Helmholtz-Gemeinschaft Deutscher Forschungszentren (HGF), Ministerium f\"{u}r Wissenschaft und Forschung, Nordrhein Westfalen, Ministerium f\"{u}r Wissenschaft, Forschung und Kunst, Baden-W\"{u}rttemberg, Germany; Istituto Nazionale di Fisica Nucleare (INFN), Ministero dell'Istruzione, dell'Universit\`{a} e della Ricerca (MIUR), Gran Sasso Center for Astroparticle Physics (CFA), CETEMPS Center of Excellence, Italy; Consejo Nacional de Ciencia y Tecnolog\'{\i}a (CONACYT), Mexico; Ministerie van Onderwijs, Cultuur en Wetenschap, Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), Stichting voor Fundamenteel Onderzoek der Materie (FOM), Netherlands; National Centre for Research and Development, Grant Nos.ERA-NET-ASPERA/01/11 and ERA-NET-ASPERA/02/11, National Science Centre, Grant Nos. 2013/08/M/ST9/00322, 2013/08/M/ST9/00728 and HARMONIA 5 - 2013/10/M/ST9/00062, Poland; Portuguese national funds and FEDER funds within COMPETE - Programa Operacional Factores de Competitividade through Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia, Portugal; Romanian Authority for Scientific Research ANCS, CNDI-UEFISCDI partnership projects nr.20/2012 and nr.194/2012, project nr.1/ASPERA2/2012 ERA-NET, PN-II-RU-PD-2011-3-0145-17, and PN-II-RU-PD-2011-3-0062, the Minister of National Education, Programme for research - Space Technology and Advanced Research - STAR, project number 83/2013, Romania; Slovenian Research Agency, Slovenia; Comunidad de Madrid, FEDER funds, Ministerio de Educaci\'{o}n y Ciencia, Xunta de Galicia, European Community 7th Framework Program, Grant No. FP7-PEOPLE-2012-IEF-328826, Spain; Science and Technology Facilities Council, United Kingdom; Department of Energy, Contract No. DE-AC02-07CH11359, DE-FR02-04ER41300, DE-FG02-99ER41107 and DE-SC0011689, National Science Foundation, Grant No. 0450696, The Grainger Foundation, USA; NAFOSTED, Vietnam; Marie Curie-IRSES/EPLANET, European Particle Physics Latin American Network, European Union 7th Framework Program, Grant No. PIRSES-2009-GA-246806; and UNESCO.
\section{Conclusions}
In this work, we characterized the distribution of UHECRs with $E > \SI{5}{\exa\electronvolt}$ in regions of \SI{0.25}{\radian} around events with $E > \SI{60}{\exa\electronvolt}$ using observables sensitive to patterns characteristic of deflections in cosmic magnetic fields. No such patterns have been found within this analysis. We demonstrated how this non-observation can be used to constrain propagation scenarios, using a model based on parametrizations of UHECR proton propagation as an example.
Within the simulated scenario, we estimate that the strength of the deflection
in the extragalactic magnetic field has to be larger than $C_\text{E} =
\SIrange{10}{120}{\degree\per\mega\parsec\tothe{1/2}\exa\electronvolt}$ for source densities
smaller than \SI{d-3}{\per\cubic\mega\parsec} assuming protons and deflections
expected from the Jansson-Farrar 2012 model for the galactic magnetic field. For protons
with an energy $ E = \SI{10}{\exa\electronvolt}$ from a source at \SI{16}{\mega\parsec} this
translates to a required strength of the deflection in extragalactic space of
more than \SI{4}{\degree} if the source density is smaller than
\SI{d-3}{\per\cubic\mega\parsec} and more than \SI{25}{\degree} if the source density is smaller than
\SI{d-4}{\per\cubic\mega\parsec}.
\section{Discussion}
\label{sec:Discussion}
In this section we first continue by analysing the directions of the thrust
axes shown as a sky map in Figure~\ref{fig:Skymap}. The aim is to search for
any individual ROI with signal contributions, e.g.~cosmic rays from a point
source, by testing the reproducibility of the axis direction. We will then
compare the measured distributions of the energy-energy correlations and the
thrust values in Figure~\ref{fig:Measurement} with astrophysical simulations
obtained with the PARSEC Monte Carlo generator. Using these comparisons, limits
on the strength of the deflection of the UHECRs in extragalactic magnetic fields
and the density of point sources of UHECRs are derived.
\subsection{Reproducibility of the Axes Measurement}
We further investigate the directional information shown by the thrust-major
axes of the individual ROIs in Figure~\ref{fig:Skymap}. From the simplified
simulations in Section~\ref{sec:ToyMC} we saw that thrust-major directions are
reproducible in repeated experiments for scenarios where coherent deflections
contribute, and turbulent deflections are not too large. In additional
simulation studies it was shown that evidence for anisotropy could sometimes
be found in reproducibility of axis directions even when the thrust scalar
values were consistent with isotropy~\cite{Winchen2013}. Hence, analysis of the directions of
the thrust-major axes could potentially reveal further information.
As we have obtained a single set of measured UHECR data at this point in time, we
perform here a stability test on subsets of the data in the following sense. If
the measured thrust-major direction obtained in a single ROI is related to a
deflection pattern reasonably constant in time then the analysis of subsets of the
measured data should also reflect this pattern. As only a fraction of the ROIs
may contain such a deflection pattern we perform tests of reproducibility
on each ROI individually.
We first define the ROIs as before using all available data. We then split the
dataset into $n$ independent subsamples and compare the directions
$\vec{n}_{2,j=1} \ldots \vec{n}_{2,j = n}$ obtained in each subsample for every
individual region of interest. A low variability of directions in the subsets
of the data provides evidence for a non-triviality of the thrust-major axis and
consequently for an anisotropic distribution of UHECRs.
The optimal choice for the number of subsamples to split the data into is not
known a priori. On the one hand, a large $n$ maximizes the number of
repeated experiments. On the other hand, as the total number of UHECRs is fixed,
$n = 2$ maximizes the number of UHECRs in every subsample. We investigated the
choice of $n$ using simulations of the simplified model described in
Section~\ref{sec:ToyMC}. The test power to distinguish regions of interest
containing 600 anisotropically distributed UHECRs from regions with
isotropically distributed UHECRs using the circular variance $V$ reaches a
plateau for $n \gtrsim 12$.
The dependence of the results and their variance with random splits of the data
set into 12 parts was investigated. The observed axis directions shown in
Figure~\ref{fig:Skymap} were not reproducible in subsets of the data with this
analysis. No evidence for a non-triviality of the axes was thus found.
\subsection{Limits on Propagation Parameters}
\begin{figure*}[tbp]
\includegraphics[width=\textwidth]{Plots/ExclusionLimits_Deflection.pdf}
\caption{95\% $CL_S$ limits on the strength of the deflection of cosmic-ray protons $C_\text{E}$ (cf.~Equation~\eqref{eq:CECTRelation} and \eqref{eq:CTurbulent}~ff.) and density of point sources $\rho$ in simulations using the
PARSEC software~\cite{Bretz2013} from the analysis of the \textbf{(a)} energy-energy correlations, \textbf{(b)} thrust, \textbf{(c)} thrust-major and \textbf{(d)} thrust-minor distributions. The gray areas
are excluded by the measurements.}
\label{fig:Limits}
\end{figure*}
A prime value of the measurements lies in their ability to constrain UHECR
propagation scenarios. We outline the
procedure to derive limits on scenario parameters using a simple model for
extragalactic propagation of protons based on parameterizations as implemented
in version 1.2 of the PARSEC software~\cite{Bretz2013}. Although this model is
likely too coarse to allow definite conclusions on the sources of UHECRs, it
includes at least qualitatively the effects influencing patterns in the UHECR
distributions. Its fast computability allows a scan of a large range of
parameter combinations in the source density and the strength of the deflection
in the extragalactic magnetic field, thus limiting these important
parameters within this model. The procedure to obtain limits from the
measurements reported in this paper as outlined here can be applied to any other
model.
The PARSEC software simulates ultra-high-energy protons by calculating the
probability-density function (pdf) to observe a cosmic ray for discrete
directions and energies. Energy losses of the UHECRs from interactions with
extragalactic photon backgrounds, effects from the expansion of the universe,
and deflections in extragalactic magnetic fields are
accounted for using parameterizations. To account for deflections in the
galactic magnetic field, the calculated pdf is transformed using matrices
derived from backtracked UHECRs using the CRT software~\cite{Sutherland2010}.
As model for the galactic magnetic field, we use here the model proposed by
Jansson and Farrar~\cite{Jansson2012,Jansson2013}. For the random field we
assume Kolmogorov turbulences with a coherence length $L_\text{c} =
\SI{60}{\parsec}$ and a maximum wavelength $L_\text{max} \simeq
\SI{260}{\parsec}$. We use only one realization of the random component of the
model in all simulations. The directions in the simulations are discretized
into 49,152 equal-area pixels following the HEALPix layout~\cite{Gorski2005}.
The energy is discretized into 100 log-linear spaced bins ranging from
$10^{18.5}$~eV to $10^{20.5}$~eV. Both choices result in angular and energy
bins smaller than the corresponding measurement errors.
We simulated scenarios with unstructured point sources with density $\rho$ and
strength of the deflection of the cosmic rays
\begin{equation}
C_\text{T} = C_\text{E} \sqrt{D}
\label{eq:CECTRelation}
\end{equation}
with distance $D$ of the source.
We scanned the parameter range $C_\text{E} =
\SIrange{2}{200}{\degree\per\mega\parsec\tothe{1/2}\exa\electronvolt}$ and source densities up to $\rho =
\SI{1d-3}{\per\cubic\mega\parsec}$. We considered contributions
from sources up to a distance $D_{\max} = \SI{2}{\giga\parsec}$. At every point
of the parameter space we simulated sets of 200 pseudo experiments with the same number of events as in the measurement presented in Section~\ref{sec:Results}.
Since the sources of the UHECRs are randomly distributed and have a maximum
injection energy $E_{\max} = \SI{1000}{\exa\electronvolt}$, some realizations do not include
sources within 43 Mpc, the maximum propagation distance
of the most energetic particle in this
analysis. Due to the continuous energy-loss approximation, the maximum distance is here a hard limit, and
these simulations cannot reproduce the observed energies. To restrict the reported
limits to information from the observables, such scenarios are not used here.
Note that within such a scenario, the necessity of a close source could be used
as an additional constraint. The probability of including at least one source
in a pdf set can be calculated analytically (e.g.~\cite{Chandrasekhar1943}) and
is higher than $96 \%$ for source densities
greater than $\rho=\SI{1d-5}{\per\cubic\mega\parsec}$. Using this argument
alone, source densities with $\rho < \SI{1d-7}{\per\cubic\mega\parsec}$ may be
disfavored. However, the inclusion of this argument only marginally modifies
the reported limits.
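For sources distributed as a spatial Poisson process, this probability takes the explicit form
\begin{equation}
P(N_\text{src} \geq 1) = 1 - \exp\left(-\frac{4}{3}\pi D_{\max}^3\, \rho\right),
\end{equation}
which for $D_{\max} = \SI{43}{\mega\parsec}$ and $\rho = \SI{1d-5}{\per\cubic\mega\parsec}$ gives $P \approx 0.96$, consistent with the number quoted above.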
Limits on the strength of the deflection and the density of point
sources in the simulation are set using the $CL_S$
method~\cite{Read2000,Read2002}. Here,
\begin{equation}
Q = -2 \log \frac{\mathcal{L}_\text{a}}{\mathcal{L}_0}
\label{ieq:Likelihoodratio}
\end{equation}
is the ratio of the likelihood $\mathcal{L}_0$ of the data given isotropically distributed
UHECRs, and the likelihood $\mathcal{L}_\text{a}$ of the data given the alternative
hypothesis simulated with PARSEC. In the $CL_S$ method, not $Q$
directly, but the modified likelihood ratio
\begin{equation}
CL_S = \frac{P_\text{a}(Q \geq Q_\text{obs})}{1-
P_0(Q\leq Q_\text{obs})}
\label{eq:CLSMethod}
\end{equation}
is used as test statistic.
Here $P_\text{a}(Q \geq Q_\text{obs})$ is the frequency with which likelihood ratios $Q$
larger than the observed value are obtained in simulations
of the alternative hypothesis and $1- P_0(Q\leq Q_\text{obs})$ the corresponding frequency
in simulations of the null hypothesis.
Points in parameter space with $CL_S < 0.05$ are excluded at
the 95\% confidence level.
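Operationally, given arrays of $Q$ values from pseudo experiments of both hypotheses, Equation~\eqref{eq:CLSMethod} reduces to a few lines (a sketch):
\begin{verbatim}
import numpy as np

def cls_value(q_obs, q_alt, q_null):
    """CL_S from simulated likelihood-ratio distributions."""
    p_alt = np.mean(q_alt >= q_obs)    # P_a(Q >= Q_obs)
    p_null = np.mean(q_null <= q_obs)  # P_0(Q <= Q_obs)
    return p_alt / (1.0 - p_null)

# a parameter point is excluded at 95% CL if cls_value < 0.05
\end{verbatim}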
The resulting limits are shown in Figure~\ref{fig:Limits} for the individual
observables.
A combination of the limits is not attempted here as it depends on
scenario-specific correlations between the observables. If the cosmic rays are
not protons but heavier nuclei the limits are reduced accordingly.
For the extreme case that all cosmic rays are iron nuclei with $Z=26$ the limits
shift down by more than one order of magnitude.
For the proton
case shown in Figure~\ref{fig:Limits} the extragalactic deflection of cosmic
rays needs to be larger than
$C_\text{E} = \SIrange{10}{120}{\degree\per\mega\parsec\tothe{1/2}\exa\electronvolt}$ for source
densities smaller than \SI{d-3}{\per\cubic\mega\parsec} and assuming deflections
in the galactic magnetic field as expected from the Jansson-Farrar 2012 model
with a coherence length set to $L_\text{c} =
\SI{60}{\parsec}$. The exact value depends on the source density. Without
galactic random field the limits are only marginally more constraining, choosing a
higher coherence length lowers the limits according to the stronger
deflections.
Previously, we derived from two-point correlations of UHECRs with an energy
$E>\SI{60}{\exa\electronvolt}$ lower bounds on the density of uniformly distributed sources
of, e.g., $2 \times 10^{-4}\,\text{Mpc}^{-3}$ if the deflection of cosmic rays
above 60 EeV is 5 degrees~\cite{Abreu2013a}. Only the total deflection due to the
EGMF and GMF was taken into account, and no explicit model for the Galactic
magnetic field was used. An approximate comparison with the current analysis
can be performed assuming the average deflections in the EGMF and GMF add up
linearly. The average deflection of 60 EeV cosmic rays in the JF2012 field
amounts to 5 degrees. The above density therefore gives a lower limit for
negligible deflections in the EGMF.
With the current analysis we obtain for the lowest EGMF considered a limit of
$9 \times 10^{-4}\,\text{Mpc}^{-3}$ from an analysis of the Thrust Minor. We
therefore extend the lower bound on the density of uniformly distributed
sources by a factor of more than four in the case of small
extragalactic deflections.
\subsection{Energy-Energy Correlations}
Energy-energy correlations (EECs) are used to obtain information on the
turbulent part of galactic and extragalactic magnetic
fields~\cite{Erdmann2009}. The concept of the EEC was originally developed for
tests of quantum chromodynamics (QCD)~\cite{Basham1978}. The
Energy-energy correlation $\Omega_{ij}$ is calculated for every pair of UHECRs
$i,j$ within a ROI using
\begin{equation}\label{eq:EEC}
\Omega _{ij}= \frac{(E_i-\langle E(\alpha_i) \rangle)\,(E_j-\langle E(\alpha_j) \rangle) }{E_i \, E_j}.
\end{equation}
Here $E_i$ is the energy of the UHECR $i$ with the angular separation
$\alpha_i$ to the center of the ROI. $\langle E_i(\alpha_i) \rangle$ is
the average energy of all UHECRs at the angular separation $\alpha_i$ from
the center of the ROI.
The cosmic rays in a ROI can be separated into a signal fraction, whose arrival
direction is correlated with the energy, and an isotropic background fraction.
The values of $\Omega_{ij}$ can be positive or negative depending on the
cosmic-ray pair having energies above or below the average energies. An angular
ordering is measured in the following sense. A pair of cosmic rays, one being
above and the other below the corresponding average energy, results in a
negative correlation $\Omega_{ij} < 0$. This is a typical case for a background
contribution. A pair with both cosmic rays having energies above or below the
average energy at their corresponding angular separation gives a positive
correlation $\Omega_{ij} > 0$. Here both signal and background pairs are
expected to contribute. As the correlations are determined as a function of the
opening angle to the center of the ROI, circular patterns can be found that are
expected from turbulent magnetic deflections which are sometimes viewed as
random-walk propagation.
We present the angular distribution of the EEC as the average
distribution of all ROIs. Each value $\Omega_{ij}$ is taken into account twice,
once at the angular separation $\alpha_i$ and once at $\alpha_j$.
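A compact sketch of Equation~\eqref{eq:EEC} for a single ROI is given below; estimating $\langle E(\alpha) \rangle$ in angular bins is an implementation choice of this sketch and assumes every bin is populated:
\begin{verbatim}
import numpy as np

def eec_pairs(E, alpha, n_bins=10, roi_radius=0.25):
    """Omega_ij for all UHECR pairs in one region of interest."""
    b = np.minimum((alpha / roi_radius * n_bins).astype(int),
                   n_bins - 1)
    mean_E = np.array([E[b == k].mean() for k in range(n_bins)])
    dev = (E - mean_E[b]) / E        # (E_i - <E(alpha_i)>) / E_i
    omega = np.outer(dev, dev)       # Omega_ij = dev_i * dev_j
    i, j = np.triu_indices(len(E), k=1)
    return omega[i, j], alpha[i], alpha[j]
\end{verbatim}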
\section{Introduction}
The long-standing question about the origin and nature of the ultra-high
energy cosmic rays (UHECRs) is yet unanswered.
Presumably, UHECRs are charged nuclei of extragalactic origin. They are
deflected in extragalactic magnetic fields and the magnetic field of the Milky
Way such that their arrival directions may not point back to their
sources~\cite{Kotera2011}. The structure, strength, and origin of these cosmic
magnetic fields are open questions in astrophysics as
well~\cite{Ryu2012, Widrow2012}. Consequently, UHECRs can also be considered to be
probes of the magnetic fields they traverse~\cite{Lee1995, Lemoine1997} as the
deflections lead to energy-dependent patterns in their arrival directions,
and an analysis of such patterns may allow for conclusions on the strength and
structure of the fields.
The Pierre Auger Observatory~\cite{PAO2004, PAO2010b} is currently the largest
experiment dedicated to observations of UHECRs. In
2007, we reported evidence for a correlation of events with energies above
\SI{60}{\exa\electronvolt} ($\SI{1}{\exa\electronvolt} = \SI{d18}{\electronvolt}$) with the distribution of nearby extragalactic
matter~\cite{PAO2007, PAO2008}. An update of the analysis yielded a
correlation strength which is reduced compared
to the initial result~\cite{PAO2010d}. Further searches for anisotropy using
variants of autocorrelation functions~\cite{PAO2012b} yielded no
statistically-significant deviation from isotropic scenarios. Following this
observation, constraints on the density of point sources and magnetic fields
have been reported~\cite{Abreu2013a}. Also a direct search for
magnetically-induced alignment in the arrival directions of cosmic rays
assuming they were protons has been performed without uncovering so-called
multiplet structures beyond isotropic expectations~\cite{PAO2012}.
Nevertheless, if the highest-energy cosmic rays with $E > \SI{60}{\exa\electronvolt}$ are
tracers of their sources and even if their deflection in magnetic fields is
dependent on their nuclear charges, some of the lower-energy cosmic rays in a
region around them may be of the same origin. From deflections both in
extragalactic magnetic fields and the magnetic field of the Milky Way, their
distribution of arrival directions may show energy-dependent patterns. In
particular a circular `blurring' of the sources is expected from deflection in
turbulent magnetic fields, while energy dependent linear structures are
expected from deflection in coherent magnetic fields.
In this report, we investigate the local regions around cosmic rays
with $E \geq \SI{60}{\exa\electronvolt}$ by analyzing cosmic rays with energies above $E = 5$~EeV
arriving within an angular separation of \SI{0.25}{\radian}. The lower energy cut
just above the ankle is motivated by the assumption that the selected
cosmic rays are predominantly of extragalactic origin. The angular
separation cut has been optimized from simulation studies and will be
explained below.
We use two methods to characterize the energy distributions inside the local
regions. In one method we study energy-energy correlations between pairs of
cosmic rays depending on their angular separation from the center of the region.
With this measurement we search for signal patterns
expected from particle deflection in turbulent magnetic fields.
In the second method we decompose the directional energy
distribution of the cosmic rays along its principal axes. This general decomposition method
imposes no requirement on the sign of the cosmic-ray charge, or the charge
itself. Beyond measuring the strength of collimation along principal axes, the
axis directions of the individual regions around the highest-energy cosmic rays
potentially reveal local deflection patterns due to magnetic fields.
Both methods were originally studied in particle physics,
and were referred to as energy-energy correlations and thrust observables,
respectively~\cite{Basham1978, Brandt1964}. Simulations of their application in cosmic-ray physics
have demonstrated the capability to reveal effects from coherent and turbulent
magnetic fields~\cite{Erdmann2009, Erdmann2013}.
This paper is structured as follows. The observables of the energy-energy
correlations and the principal-axis analysis are defined in
Section~\ref{sec:Methods}. Their response to structure potentially expected from
deflection in magnetic fields is illustrated using a simplified model in Section~\ref{sec:ToyMC}. The
measured distributions of the observables using data of the surface detector of
the Pierre Auger Observatory are presented in Section~\ref{sec:Results}. In
Section~\ref{sec:Discussion}, we first analyze the directional characteristics
of the measured principal axes by studying their reproducibility. We then
present a comparison of the measurements with an astrophysical model of UHECR
origin and propagation, and determine constraints on the source density, and
the strength of cosmic-ray deflection as the two dominant model parameters.
\section{Definitions}
\label{sec:Methods}
In this section we introduce the main components used for the
measurement. We first define the local regions in which we analyze the
cosmic-ray energies and arrival directions. We then explain the
energy-energy correlation observable and its angular dependence.
Finally, we present the method of calculating the principal axes of the
energy distribution which results in the three values to characterize
the strength of collimation along each axis, and the directions of the
axes themselves.
\subsection{Region of Interest}
The observables used here are calculated from the events detected in a bounded
region in the sky, here denoted as `region of interest' (ROI). To minimize the
statistical penalty from multiple tries, we do not scan the entire sky but
investigate a limited number of ROIs located around events with an energy above
\SI{60}{\exa\electronvolt}. This energy cut is motivated by the limitation of the
propagation distance by, e.g., the GZK effect~\cite{Greisen1966, Zatsepin1966}
and corresponds to the energy used in the AGN correlation
analysis~\cite{PAO2007}. The size of the ROIs, i.e. the maximum angular
separation of a UHECR belonging to the ROI to the center of the ROI, is set to
\SI{0.25}{\radian}. To choose these values we simulated the UHECR propagation
in magnetic fields with the UHECR simulation tool PARSEC~\cite{Bretz2013} for
different strengths of the deflection and source density. The simulations were
analyzed with varying choices of parameters. The chosen values maximize the
power of the observables to discriminate between scenarios with strong
deflections and isotropic scenarios~\cite{Schiffer2011, Winchen2013}. To avoid
a possible bias of the characterization of the ROI, we exclude the cosmic ray
seeding the ROI from the calculation of the observables.
\input{EEC.tex}
\input{Thrust.tex}
\section{Measurement}
\label{sec:Results}
For the measurement of the observables we selected events above \SI{5}{\exa\electronvolt}
recorded with the surface detector of the Pierre Auger Observatory up to March
19, 2013. We require that the zenith angle of the events is smaller than
\SI{60}{\degree} and that the detector stations surrounding the station with
the highest signal are active~\cite{PAO2010b}. 30,664 events are included in
the analysis; 70 fulfill the conditions $E \ge \SI{60}{\exa\electronvolt}$ and are at least
\SI{0.25}{\radian} inside the field of view of the Pierre Auger Observatory and
therefore seed an ROI.
In order to estimate the uncertainty on the measurement, we repeatedly vary the
energy and arrival directions of all events detected with the Pierre Auger
Observatory above $E = \SI{3}{\exa\electronvolt}$ and $\theta < \SI{60}{\degree}$ within
their experimental uncertainties and repeat the calculation of the observables
with the new values. The mean and spread of the resulting distributions then
serve as measured observables and their corresponding uncertainty. The energy
resolution of the surface detector is 16\%~\cite{DiGiulio2009} and the angular
resolution of the SD is better than $1^{\circ}$ for energies above
\SI{5}{\exa\electronvolt}~\cite{Bonifazi2008}. The selected ROIs are kept fixed to the
original positions in all repetitions. Because of the decreasing spectrum, the
number of events in the analysis increases as more events propagate above the
lower energy threshold than vice versa. To keep the number of events in the
uncertainty analysis fixed, the 30,664 events with the highest energy after
variation are selected.
\begin{figure*}[tbp]
\includegraphics[width=\textwidth]{Plots/Measurement.pdf}
\caption{Measurement of the \textbf{(a)} energy-energy correlation
$\Omega$ and \textbf{(b-d)} thrust observables $T_{1,2,3}$ with the
Pierre Auger Observatory (red squares and error bars). The
measurements are compared to distributions without structure in the arrival
directions of UHECRs (gray distributions).}
\label{fig:Measurement}
\end{figure*}
In Figure~\ref{fig:Measurement} the distributions of the measured EEC and
thrust observables are shown together with the distributions expected from
isotropic arrival directions of UHECRs. The goodness-of-fit of the
measurements compared to expected distributions without structure in the
arrival directions of UHECRs, using a $\chi^2$ test, yields $p$-values which are
all above $p=0.2$ except for the thrust minor distribution with $p(T_3)=0.01$.
Note that the $p$-value for $T_3$ results from a lack of signal-like regions
in the data which are expected to broaden the distribution. The measured
distributions of all four observables reveal thus no local patterns in the
arrival directions of UHECRs.
\begin{figure*}[htp]
\includegraphics[width=\textwidth]{Plots/ThrustAxesSkyMap.pdf}
\caption{Hammer projection of the map of principal axes of the
directional energy distribution in galactic coordinates. The red
shaded areas represent the regions of interest. Black lines denote
the second principal axes (thrust-major axes) $\vec{n}_2$, black
dots mark the positions of the thrust axes $\vec{n}_1$. The blue
shading indicates the exposure of the Pierre Auger Observatory; the
dashed line marks the extent of its field of view.}\label{fig:Skymap}
\end{figure*}
From the principal-axes analysis, a map of the thrust-major axes is derived
which is shown in Figure~\ref{fig:Skymap}. If not trivial, these axes
correspond to the direction of preferred cosmic-ray deflections.
This question is further studied in the following
section.
\subsection{Principal Axes}
To further characterize energy-dependent patterns within each individual ROI, we
calculate the three principal axes of the energy distribution which we
denote as $\vec{n}_{k=1,2,3}$. For this we successively maximize the quantity
\begin{equation}
T_k = \max_{\vec{n}_k} \left(\frac{\sum_i |\omega_i^{-1}\; \vec{p}_{i}\cdot
\vec{n}_k|}{\sum_i |\omega_i^{-1}\; \vec{p}_i|} \right)
\label{eq:ThrustValues}
\end{equation}
with respect to the axes $\vec{n}_{k}$ starting with $k=1$.
Here $\vec{p}_i$ is the cosmic-ray momentum and $\omega_i$ the corresponding exposure of
the detector~\cite{Sommers2001} in the direction of particle $i$. The values of
$T_{k=1,2,3}$ quantify the strength of the collimation of the particle momenta
along each of the three axes $\vec{n}_{k=1,2,3}$ of the principal system. We
denote $T_{k=1,2,3}$ as thrust observables following previous studies of
perturbative QCD in particle collisions~\cite{Brandt1964, Farhi1977}.
For $k = 1$ the quantity $T_1$ is called the `thrust' and consequently the
first axis of the principal system $\vec{n}_1$ is called `thrust axis'. For the
second axis the additional condition $\vec{n}_1 \perp \vec{n}_2$ is used in
Equation~\eqref{eq:ThrustValues}. The resulting value $T_2$ is denoted as `thrust
major', the axis as `thrust-major axis'. Finally, the third quantity $T_3$ is
called `thrust minor' with corresponding `thrust-minor axis'. For the
thrust-minor axis $\vec{n}_3$ it is $\vec{n}_1 \perp \vec{n}_2 \perp \vec{n}_3$
which renders the maximization in Equation~\eqref{eq:ThrustValues} trivial. From
this definition it follows that $T_1 > T_2 > T_3$.
In arbitrarily defined spherical coordinates $(r, \phi, \theta)$ with
orthonormal basis $(\vec{e}_r, \vec{e}_\phi, \vec{e}_\theta )$ and the
observer at the center, the momenta of the
particles at the high energies considered here can be written as $\vec{p}_i =
\vert E_i \vert \vec{e}_{r_i}$ with the energy $E_i$ and the radial
unit vector $\vec{e}_{r_i}$ in the arrival direction of particle $i$. The
thrust axis is thus the radial unit vector $\vec{e}_r$ pointing to the local
barycenter of the energy distribution, and the thrust value is a measure for the
energy-weighted strength of clustering of the events.
For no dispersion of the particles in the
region it takes on the value $T_1 = 1$, whereas for an isotropic distribution in a circular
region the expectation value of $T_1$ depends dominantly on the size of the ROI~\cite{Winchen2013}.
The thrust-major and
thrust-minor axes can consequently be written as
\begin{eqnarray}
\vec{n}_{2} = \cos{\xi_{2}} \, \vec{e}_\phi + \sin{\xi_{2}} \, \vec{e}_\theta \\
\vec{n}_{3} = \cos{\xi_{3}} \, \vec{e}_\phi + \sin{\xi_{3}} \, \vec{e}_\theta
\end{eqnarray}
with the angles $\xi_2$ and $\xi_3 = 90^\circ + \xi_2$ between the corresponding axes
and the vector $\vec{e}_\phi$. Using this together with
Equation~\eqref{eq:ThrustValues}, the thrust-major $T_2$ becomes maximal if $\vec{n}_2$ is
aligned with a linear distribution of UHECR arrival directions. The thrust-major axis thus points
along threadlike structures in the energy distribution of UHECRs. As the thrust
minor axis is chosen perpendicular to $\vec{n}_1$ and $\vec{n}_2$ it has no
physical meaning beyond its connection to the thrust-major axis. However, the
thrust-minor $T_3$ gives meaningful information as it denotes the
collimation strength perpendicular to the thrust-major axis.
Note that in a perfect isotropic scenario, the energy distribution within the
plane defined by $\vec{n}_2$ and $\vec{n}_3$ exhibits perfect symmetry. The
values of $T_2$ and $T_3$ are approximately equal, and the axis directions are accidental.
However, even with a small signal contribution beyond an isotropic background,
the circular symmetry in the $(\vec{n}_2 , \vec{n}_3)$ plane is broken giving
rise to unequal values of $T_2$ and $T_3$. In addition, the direction of the
thrust-major axis then reveals valuable directional information.
This directional information can be compared to the direction of deflection
obtained in a multiplet analysis~\cite{PAO2012}. However, in contrast to the multiplet
analysis the principal axes analysis does not require a uniform charge of the
cosmic rays. Its sensitivity is driven by the total deflection amount.
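For reference, a numerical sketch of Equation~\eqref{eq:ThrustValues}: within a small ROI all $\vec{p}_i \cdot \vec{n}_1 > 0$, so the maximization for $\vec{n}_1$ reduces to the weighted barycenter, while $T_2$ can be found by scanning the angle $\xi_2$ in the tangent plane (a shortcut used here for illustration, not necessarily the implementation behind the measurement):
\begin{verbatim}
import numpy as np

def thrust(E, dirs, w, n_steps=1800):
    """T1,T2,T3 and axes; dirs: unit vectors (N x 3), w: exposures."""
    q = (E / w)[:, None] * dirs          # omega_i^{-1} p_i
    norm = np.abs(E / w).sum()           # sum_i |omega_i^{-1} p_i|
    n1 = q.sum(axis=0); n1 /= np.linalg.norm(n1)
    t1 = np.abs(q @ n1).sum() / norm
    e_phi = np.cross([0., 0., 1.], n1)   # assumes n1 not at the pole
    e_phi /= np.linalg.norm(e_phi)
    e_theta = np.cross(n1, e_phi)
    xi = np.linspace(0., np.pi, n_steps, endpoint=False)
    axes = np.outer(np.cos(xi), e_phi) + np.outer(np.sin(xi), e_theta)
    t = np.abs(q @ axes.T).sum(axis=0) / norm
    k = t.argmax()
    n2, t2 = axes[k], t[k]
    n3 = np.cross(n1, n2)
    t3 = np.abs(q @ n3).sum() / norm
    return (t1, t2, t3), (n1, n2, n3)
\end{verbatim}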
\section{Benchmark Distributions for Coherent and Turbulent Magnetic Fields}
\label{sec:ToyMC}
For obtaining a general understanding of the energy-energy correlations and the
thrust observables, we use simple scenarios of cosmic-ray deflections in
magnetic fields to demonstrate resulting distributions. First we describe the
procedure for simulating mock data representing cosmic-ray deflection in
turbulent and coherent magnetic fields. For different quantitative mixtures of these field
types we then present the distributions of the energy-energy correlations
and finally discuss the resulting thrust distributions.
\subsection{Simulation Procedure}
To demonstrate the sensitivity of the observables to deflections expected from
magnetic fields, we simulate a ROI with UHECRs in a simplified scenario.
The deflection in cosmic magnetic fields is supposed to result in two different
kinds of patterns in the arrival direction of the UHECRs. First, if the UHECR's
trajectory resembles a directed random walk, a symmetric blurring of the source
is expected. Second, if the particles are deflected in
large-scale coherent fields, e.g. in the Milky Way, an energy ordering of the
UHECRs in threadlike multiplets is expected.
\begin{figure*}[tb]
\begin{center}
\includegraphics[width=\textwidth]{Plots/ToyMCSketch}
\end{center}
\caption{Generation of anisotropically distributed UHECRs in a region
of interest. \textbf{(a)} First, UHECRs are
distributed symmetrically around the center of the ROI using a Fisher
distribution with energy dependent concentration parameter according
to Equation~\eqref{eq:CTurbulent}. \textbf{(b)} The UHECRs are then deflected
in one direction using Equation~\eqref{eq:CCoherent}. \textbf{(c)} UHECRs
deflected outside of the ROI are moved to a random position inside the
region.}
\label{fig:ToyMCSketch}
\end{figure*}
Here we model the distribution of UHECRs in a region around the source as
a superposition of both effects. Events in this region of interest are
generated in three steps as sketched in Figure~\ref{fig:ToyMCSketch}.
First, the UHECRs are distributed around the center of the ROI
following a Fisher
distribution~\cite{Fisher1953} with probability density
\begin{equation}
f(\alpha,\kappa) = \frac{\kappa}{4\pi\, \sinh{\kappa}}\, e^{\kappa\, \cos{\alpha}}
\label{FisherDistribution}
\end{equation}
for angle $\alpha$ between cosmic ray and center of the ROI. The Fisher
distribution can be considered here as the normal
distribution on the sphere. The concentration
parameter $\kappa$ is chosen with an energy dependence that emulates the
deflection in turbulent magnetic fields as
\begin{equation}
\kappa = C_\text{T}^{-2} E^{2}.
\label{eq:CTurbulent}
\end{equation}
For small deflections the distribution resembles a Rayleigh distribution
where $\kappa$ is related to the root-mean-square $\delta_{\text{RMS}}$ of the
deflection angles by $\kappa = \delta_{\text{RMS}}^{-2}$ and thus
\begin{equation}
\delta_{\text{RMS}} \simeq \frac{C_\text{T}}{E}.
\label{eq:dRMS_CTRelation}
\end{equation}
A value of $C_\text{T} = \SI{1}{\radian\exa\electronvolt}$
is equivalent to an RMS of the deflection angle $\delta_{\text{RMS}} =
\SI{5.7}{\degree}$ for 10~EeV particles. For example, using the usual
parametrization for deflections in turbulent magnetic
fields~\cite{Achterberg1998, Harari2002} this corresponds to the
expected deflection of \SI{10}{\exa\electronvolt} protons from a source at a distance
$D \approx \SI{16}{\mega\parsec}$ propagating through a turbulent
magnetic field with coherence length $\Lambda \approx
\SI{1}{\mega\parsec}$ and strength $ B \approx \SI{4}{\nano\gauss}$.
Second, a simple model for the deflection in coherent magnetic fields is added
on top of the model for turbulent magnetic fields used above. Here the
individual cosmic rays are deflected in one direction by an angle $\alpha$ that
depends on the energy of the particles according to
\begin{equation}
\alpha = C_\text{C}\, E^{-1} \label{eq:CCoherent}
\end{equation}
where the parameter
$C_\text{C}$ is used to model the strength of the coherent deflection.
The procedure is illustrated in Figure~\ref{fig:ToyMCSketch}~(b).
Third, particles deflected outside the region of interest are added as
a background to keep the number of particles in this setup constant
(cf.~Figure~\ref{fig:ToyMCSketch}~(c)). The energies of all events are chosen
following a broken power law with spectral index $\gamma_1 = -2.7$ below
\SI{40}{\exa\electronvolt} and $\gamma_2 = -4.2$ above \SI{40}{\exa\electronvolt} to be comparable with the observed cosmic-ray energy spectrum~\cite{PAO2010a}.
\subsection{Response of the Energy-Energy Correlation}
\begin{figure*}[tbp]
\includegraphics[width=\textwidth]{Plots/EEC_ToyMC.pdf}
\caption{Response of the EEC to typical deflection patterns
from simulations of three different turbulent deflection strengths with
$C_\text{T}=\SI{0.3}{\radian\exa\electronvolt}$ (red squares), $C_\text{T}=\SI{1}{\radian\exa\electronvolt}$ (blue
upward triangles) and $C_\text{T}=\SI{3}{\radian\exa\electronvolt}$ (magenta downward triangles).
The dashed line marks the isotropic expectation value according to
Equation~\eqref{eq:expvalue}; black circles denote the result from simulation of
isotropically distributed UHECRs.}
\label{fig:EECToyMC}
\end{figure*}
The EEC distributions resulting from simulated scenarios using the three values
for the turbulent deflection strength $C_\text{T} = 0.3, 1.0,\;\SI{3.0}{\radian\exa\electronvolt}$
are shown in Figure~\ref{fig:EECToyMC}. As the EEC is expected to provide only
minor sensitivity to coherent deflections~\cite{Erdmann2009} $C_\text{C} = 0$ is used
here. For each scenario 50 realizations of an ROI with 300 UHECRs have been
used, which is approximately the number of UHECRs in a low-coverage region of the measurement presented in Section~\ref{sec:Discussion}.
All scenarios are compared with the result for an isotropic distribution
of UHECRs. Without structure in the arrival directions of UHECRs, the EEC
distribution is flat with an expectation value
\begin{equation}\label{eq:expvalue}
\left<\Omega_{ij}\right> =\left< \frac{\left( E_i -\left< E \right> \right)\, \left( E_j -\left< E \right> \right)} {E_i\, E_j}\right> = \left(1- \left<E\right> \left<\frac{1}{E}\right> \right)^2.
\end{equation}
For a source signal the typical signature is an increase towards small angles,
as can be seen in Figure~\ref{fig:EECToyMC}. With increasing angular separation
the UHECRs' average energies decrease, and so do the differences between the
UHECR energies and their corresponding average (Equation~\eqref{eq:EEC}).
Consequently, the values of $\Omega_{ij}$ can become small in contrast to a
scenario where all UHECR energies contribute at every angular scale. The shape
of the EEC distribution in response to a source signal depends on the
deflection pattern. In general it can be seen that a small deflection causes
an increase only in the innermost bins, while a larger deflection will smear
this signature over the whole ROI.
\subsection{Response of the Principal-Axes Analysis}
\begin{figure*}[tbp]
\includegraphics[width=\textwidth]{Plots/ThrustObservables_ToyMC.pdf}
\caption{Response of the thrust observables to typical deflection patterns.
\textbf{(a-c)} Mean and spread of the observables $T_{1,2,3}$ as a
function of the strength of the deflection in turbulent magnetic
fields $C_\text{T}$. Red circles correspond to no directed deflection,
green triangles to $C_\text{C} = \SI{0.5}{\radian\exa\electronvolt}$ and blue squares to
$C_\text{C} = \SI{1.0}{\radian\exa\electronvolt}$. The shaded area corresponds to the
$1\sigma$ and $2\sigma$ expectations of the observables for an
isotropic distribution of cosmic rays. \textbf{(d)} Circular
variance of the thrust-major axes calculated in the simulations in 100
ROIs. Gray shading corresponds to the probability density of the
expectation value of the circular variance of uniformly-distributed
directions.}
\label{fig:ThrustObservables_ToyMC}
\end{figure*}
In Figure~\ref{fig:ThrustObservables_ToyMC}~(a-c)
the mean and spread of the
thrust observables $T_{1,2,3}$
of 100 realizations of the ROI at each point in the explored parameter space are shown. We used $C_\text{T}
=$ \SIrange{0.1}{10}{\radian\exa\electronvolt}, without coherent deflection, and alternatively with
$C_\text{C}=\SI{0.5}{\radian\exa\electronvolt}$ as well as $C_\text{C}=\SI{1.0}{\radian\exa\electronvolt}$.
All three observables are sensitive to a symmetric blurring of the
source. For increasing $C_\text{T}$ the
distribution of cosmic rays in the ROI becomes isotropic, and the observables
approach the corresponding expectation value. The value of the thrust major
and thrust minor for strong patterns is here below the expectation for no
patterns, as the particles are concentrated in the center of the ROI. The
thrust minor, Figure~\ref{fig:ThrustObservables_ToyMC}~(c), does not
depend on the strength of coherent deflection, as the width of the
blurring is determined here only by the strength of $C_\text{T}$.
When measuring a thrust-major axis of an individual ROI, we also want to
determine the stability of the axis direction. As explained in
Section~\ref{sec:Methods}, the thrust major-axis is located in the plane
tangential to a sphere around the observer, and provides a directional
characteristic on the sky.
We quantify the stability of the axis
using the circular variance $V$ derived in the specialized statistics for
directional data~(e.g.~\cite{Mardia1972, Jammalamadaka2001}). The direction of
the thrust-major axis $\vec{n}_{2,i}$ in a region of interest $i$ is defined by the
angle $\theta_i$ between the axis and the local unit vector $\vec{e_\phi}$ in
spherical coordinates with $\theta_i \in [0 \ldots \pi)$.
To calculate the circular variance $V$ from the $n$ observations
$\theta_i$, first the $\theta_i$
are transformed to
angles on the full circle by $\theta^*_i = \ell \cdot \theta_i$ with $\ell = 2$ owing to
the symmetry of the thrust-major axis.
With
\begin{equation}
C = \sum_{i=1}^n \cos\theta^*_i,\qquad S = \sum_{i=1}^n \sin\theta^*_i
\label{}
\end{equation}
the resultant length $R$ is defined as
\begin{equation}
R = \sqrt{C^2 + S^2}.
\label{ResultantLength}
\end{equation}
Based on the resultant length $R$ in Equation~\eqref{ResultantLength} the
circular variance $V$ of a sample of size $n$ is defined
as
\begin{equation}
V = 1 - \left(\frac{R}{n}\right)^{1/\ell^2}.
\label{eq:CircularVariance}
\end{equation}
In contrast to the variance in linear statistics, $V$ is limited to the
interval $[0,1]$. The circular variance is a consistent measure
for the concentration of observations on periodic intervals with $V=0$ for
data from a single direction and $V=1$ for perfectly dispersed data.
Even for a finite sample size $n$, a value $V < 1$ is expected
for non-directed data, as perfect dispersion is unlikely in a random
sample.
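In code, Equations~\eqref{ResultantLength} and \eqref{eq:CircularVariance} read (a sketch):
\begin{verbatim}
import numpy as np

def circular_variance(theta, ell=2):
    """V for axial data theta in [0, pi); ell=2 doubles the angles."""
    t = ell * np.asarray(theta)
    R = np.hypot(np.cos(t).sum(), np.sin(t).sum())
    return 1.0 - (R / t.size) ** (1.0 / ell ** 2)
\end{verbatim}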
To demonstrate the strength of correlation of the axes with the direction of
deflection in the simulation we use the circular variance $V$ among the
simulated sample as a measure. The resulting values for the 100 simulated
scenarios at every point of the aforementioned parameter space are shown in
Figure~\ref{fig:ThrustObservables_ToyMC}~(d). In case of zero coherent
deflection, and also in case of strong blurring of the sources, no stable axis
is found. For small blurring of the sources, the variance between the directions
is zero, if there is coherent deflection.
| {
"attr-fineweb-edu": 1.064453,
"attr-cc_en_topic": 12,
"domain": "arxiv"
} |
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{T}{emporal} action localization aims at discovering action instances by predicting the corresponding start times, end times, and action category labels \cite{gaidon2013temporal}. It is a challenging yet practical research topic, with potential benefits to a wide range of intelligent video processing systems, e.g., video summarization \cite{zhao2019property} and smart surveillance \cite{ko2018deep}. To achieve precise localization performance, fully supervised temporal action localization methods \cite{xu2017r, chao2018rethinking, zeng2019graph,xu2019two} learn from human annotations. However, the data annotation process is burdensome and expensive; meanwhile, it is difficult for different annotators to determine action boundaries consistently. In contrast, weakly supervised methods \cite{wang2017untrimmednets,nguyen2018weakly, nguyen2019weakly} learn from video-level category labels, which are cheap and convenient to obtain.
\begin{figure}[t]
\graphicspath{{figure/}}
\centering
\includegraphics[width=1\linewidth]{click-annotation.pdf}
\caption{Illustration of click-level supervision. The ground truth is shown above the frame sequence. (a) The action-click supervision (shown in orange) makes a random click within each action instance and records the timestamp and classification label, as used by SF-Net \cite{ma2020sf}. (b) The proposed background-click supervision (shown in red) makes a random click within each background segment and records the timestamp.}
\label{motivation}
\end{figure}
Inherently, most weakly supervised algorithms follow an underlying assumption that video segments contributing more evidence to video-level classification are more likely to be action. Nonetheless, an algorithm developed following this assumption gets stuck in the action-context confusion dilemma when background segments are more related to video-level classification, as pointed out by Shi et al. \cite{shi2020weakly} and Choe et al. \cite{choe2020evaluating}. Recently, SF-Net \cite{ma2020sf} enhances the weakly supervised algorithms via introducing the action-click supervision\footnote{The action-click supervision is termed single-frame supervision in SF-Net \cite{ma2020sf}. To indicate that one timestamp is clicked within each action instance, we term this supervision ``action-click supervision".}, as shown in Fig. \ref{motivation}(a). They randomly click a timestamp within each action instance and annotate the corresponding category label. In this work, Ma et al. \cite{ma2020sf} show that, given a one-minute video, making the video-level, click-level, and instance-level annotation requires 45s, 50s, and 300s, respectively. Specifically, annotating the video-level category label requires watching the whole video. Similarly, annotating the action-click supervision requires watching the whole video and randomly clicking an action frame once an action segment is encountered. Because the click information is automatically generated by the annotation tool, the action-click supervision costs a similar annotation time to the video-level supervision. However, the instance-level annotation requires rolling back and forth to precisely determine the starting frame and the ending frame, so its annotation cost increases dramatically. Because the cost of click-level annotation is affordable and, as verified in experiments, click-level supervision can indicate action frames and partially mitigate the action-context confusion challenge, weakly supervised temporal action localization with click-level annotation exhibits a promising research prospect.
Although SF-Net \cite{ma2020sf} has advanced the localization performance by exploring action-click supervision, it remains questionable whether click-level supervision on background segments would perform better. Specifically, besides the video-level classification label, we can click a frame within each background segment (see Fig. \ref{motivation}(b)). To study this, we implement a baseline method by following previous weakly supervised methods \cite{lee2020background, liu2019completeness, min2020adversarial, zhai2020two} and carry out two experiments on THUMOS14. Specifically, we forward video features through three temporal convolutional layers and predict the classification score for each frame. The experimental results are reported in Fig. \ref{bg-superiority}. We employ the diagnosing tool \cite{alwassel2018diagnosing} to perform error analysis. Among five types of errors, the vast majority come from the Background Error, taking up $74.7\%$, as shown in Fig. \ref{bg-superiority}(a). On the contrary, a part of the action frames can be confidently determined via the top-$k$ aggregation procedure. Specifically, given each frame's classification score, the existing paradigm usually selects the highest $k$ scores and regards their mean value as the video-level classification score. Consequently, a well-trained model can confidently discover reliable action frames according to the selected top-$k$ frames. We measure the fraction of the highest $k$ scores that fall into action segments and obtain $69.7\%$, as shown in Fig. \ref{bg-superiority}(b). The diagnosing results and the distribution of top-$k$ frames inspire us to convert the action-click supervision into the background-click supervision.
\begin{figure}[t]
\graphicspath{{figure/}}
\centering
\includegraphics[width=1\linewidth]{bg-superiority.pdf}
\caption{Performance analysis of a baseline method for weakly supervised action localization. (a) From the diagnosing results, it can be found that the majority of errors come from Background Error. (b) Given class activation sequence, the majority of top-$k$ frames fall into action segments.}
\label{bg-superiority}
\end{figure}
To effectively learn the action localization model, we propose a novel learning framework under background-click supervision, called BackTAL, as shown in Fig. \ref{fig-framework}. Given background-click supervision, a direct way to leverage the annotation information is to mine the position information via performing supervised classification on the annotated frames, which is principally explored by SF-Net \cite{ma2020sf} through category-specific classification and category-agnostic classification. Besides, considering that the commonly used top-$k$ aggregation procedure only constrains the highest $k$ scores but ignores other positions, we design a \textit{Score Separation Module}. It strives to enlarge the score differences between potential action frames and annotated background frames, which thoroughly mines the position information and further improves the localization performance.
In addition to position information, click-level supervision can also guide the process of building feature embedding spaces that separate action features from background features. However, this feature information is ignored by previous work \cite{ma2020sf}. We propose an \textit{Affinity Module} to utilize the feature information and enable dynamic and precise calculation of the temporal convolutional operation. Given annotated background frames and confident action frames, the affinity module strives to learn an appropriate embedding for each frame by requiring action embeddings to be compact, background embeddings to be compact, and action embeddings to keep a considerable distance from background embeddings. Afterwards, the high-quality embeddings are used to estimate similarities between a frame and its local neighboring frames, generating similarity masks. Assisted by the proposed frame-specific similarity masks, the convolution kernel can dynamically attend to relevant neighboring frames when performing the calculation on each frame, achieving precise calculation for the temporal convolution.
Our contributions can be summarized as follows:
\begin{itemize}
\item We propose background-click supervision for the temporal action localization task. Compared with the existing action-click supervision, it discovers action instances more effectively at a similar annotation cost.
\item In BackTAL, we propose a score separation module and an affinity module to enable simple yet effective modeling of the position information and the feature information, respectively.
\item Extensive experiments are performed on three benchmarks, and BackTAL achieves new high performance, e.g., 36.3\% mAP at tIoU$=$0.5 on THUMOS14.
\end{itemize}
The rest of this paper is organized as follows. Section \ref{sec-related-work} reviews recent progress in temporal action localization under both full supervision and weak supervision, as well as the development of click-level supervision. Then, Section \ref{sec-method} presents the proposed BackTAL in detail, including the holistic method, the score separation module, and the affinity module. Afterwards, experimental results are exhibited in Section \ref{sec-experiments}. Specifically, we introduce the background-click annotation process, perform comparison experiments on three benchmark datasets, and carry out ablation studies to analyze the effectiveness of each module, in a quantitative and qualitative manner. Finally, Section \ref{sec-conclusion} draws the conclusion and discusses potential future work.
\section{Related Work}
\label{sec-related-work}
This section summarizes recent progress on the temporal action localization task \cite{gaidon2013temporal, lee2021weakly, zhang2021weakly}. We start from fully supervised methods, reviewing one-stage and two-stage methods. Then, we discuss weakly supervised methods that learn only from video-level classification labels. In the end, we discuss an enhanced weakly supervised learning paradigm, i.e., click-level supervision.
\textbf{Fully supervised temporal action localization} methods learn from precise annotations of each frame. Existing works can be summarized into one-stage methods \cite{lin2017single, long2019gaussian, xu2020g} and two-stage methods \cite{xu2017r, chao2018rethinking, lin2019bmn, zeng2019graph,xu2019two}. For the former type, Lin et al. \cite{lin2017single} simultaneously predict action boundaries and labels, which is developed by GTAN \cite{long2019gaussian} via exploiting Gaussian kernels. Recently, Xu et al. \cite{xu2020g} employ a graph convolutional network to perform one-stage action localization. In contrast, two-stage methods first generate action proposals, then refine and classify confident proposals. Specifically, a majority of proposals are generated by the anchor mechanism \cite{xu2017r, gao2017turn, chao2018rethinking,xu2019two, yang2020revisiting}. In addition, other ways to generate proposals include sliding windows \cite{shou2016temporal}, temporal actionness grouping \cite{zhao2017temporal}, and combining confident starting and ending frames \cite{lin2018bsn, lin2019bmn}. Afterwards, MGG \cite{liu2019multi} integrates the anchor mechanism and the frame actionness mechanism into a unified framework, which achieves high recall and precision for the proposal generation task. Unlike fully supervised methods, the studied weakly supervised setting lacks precise instance-level annotations, which makes distinguishing actions from backgrounds challenging.
\textbf{Weakly supervised temporal action localization} chiefly mines video-level classification labels. The pioneering works UntrimmedNet \cite{wang2017untrimmednets}, STPN \cite{nguyen2018weakly}, and AutoLoc \cite{shou2018autoloc} build the paradigm that localizes action instances via thresholding the class activation sequence. Whereafter, the video-level category label is thoroughly mined, e.g., W-TALC \cite{paul2018w} explores the co-activity similarity between two videos sharing the same label, which inspires Gong et al. \cite{gong2020learning} to mine co-attention features. Besides, both Liu et al. \cite{liu2019completeness} and Min et al. \cite{min2020adversarial} demonstrate that learning multiple parallel and complementary branches is beneficial to generate complete action instances, which is developed by HAM-Net \cite{islam2021a} that learns hybrid attention weights to localize complete action instances. Moreover, CleanNet \cite{liu2019weakly} proposes a pseudo supervision paradigm, which firstly generates pseudo action proposals, and then employs these proposals to train an action proposal network. The pseudo supervision paradigm is further developed by some recent works, such as TSCN \cite{zhai2020two} and EM-MIL \cite{luo2020weakly}. In addition, BaS-Net \cite{lee2020background} designs a background suppression network, which is developed by Moniruzzaman et al. \cite{moniruzzaman2020action} via further modeling action completeness. Similarly, Nguyen et al. \cite{nguyen2019weakly} point out that it is crucial to model backgrounds. This is summarized as the action-context confusion challenge by \cite{shi2020weakly} and \cite{zhao2021soda}. Recently, Lee et al. \cite{lee2021weakly} study frame's inconsistency and model background frames as out-of-distribution samples. Meanwhile, Liu et al. \cite{liu2021acsnet, liu2021weakly} aim to separate action frames and neighboring context frames via employing the positive component and negative component \cite{liu2021acsnet}, or learning explicit subspace \cite{liu2021weakly}.
However, the action-context confusion challenge is far from solved when only using video-level labels. In contrast, introducing extra information may be a more effective solution. For example, CMCS \cite{liu2019completeness} collects stationary video clips as backgrounds. Besides, 3C-Net \cite{narayan20193c} introduces action count cues. Nguyen et al. \cite{nguyen2019weakly} employ micro-videos. Recently, ActionBytes \cite{jain2020actionbytes} learns from trimmed videos and localizes actions in untrimmed videos.
\textbf{Click-level supervision} is a kind of weakly supervised learning paradigm. As Bilen \cite{Bilen14wsol} points out, weakly supervised learning refers to an algorithm that requires cheaper annotations at the training phase than the desired output at the inference phase. For example, from point supervision to pixel-level semantic mask \cite{bearman2016s}, from points at frames to spatio-temporal action boxes \cite{mettes2016spot}, from scribble to pixel-level segmentation mask \cite{lin2016scribblesup} and saliency maps \cite{zhang2020weakly}. Recently, Moltisanti et al. \cite{moltisanti2019action} employ simulated action-click supervision to learn video recognition models.
Compared with the most relevant work, SF-Net \cite{ma2020sf}, our proposed BackTAL makes two distinguishing contributions. (1) Although SF-Net uses the action-click annotation, we find that action frames can be confidently discovered by the learning algorithm, while the performance bottleneck lies in background errors. Thus, we convert the action-click annotation to the background-click annotation. (2) Given the click-level annotation, SF-Net principally mines the position information via supervised classification on the annotated frames, while we jointly explore the position information and the feature information via the score separation module and the affinity module.
The proposed affinity module is related to embedding learning \cite{harley2017segmentation, liu2020picanet, ci2018video}. In detail, PiCANet \cite{liu2020picanet} directly learns affinity among neighboring pixels, while BackTAL learns an embedding for each frame and then measures affinity. Moreover, existing methods \cite{harley2017segmentation, liu2020picanet, ci2018video} learn embeddings under a fully supervised setting, while BackTAL learns from the background-click annotation.
In addition to the temporal action localization works discussed above, there is a similar task, temporal action segmentation, which has received promising advances recently. For example, MS-TCN++ \cite{li2020ms} utilizes a multi-stage architecture to tackle the temporal action segmentation task. Kuehne et al. \cite{kuehne2018hybrid} learn action classifiers from videos with action order information, and integrate a framewise RNN model with a hidden Markov model to segment videos. In addition, some works \cite{tran2013video,soomro2018online, su2020progressive} study spatio-temporal action detection, which detects action instances via spatial boxes within each temporal frame.
\begin{figure*}[t]
\graphicspath{{figure/}}
\centering
\includegraphics[width=1\linewidth]{framework.pdf}
\caption{Framework of the proposed BackTAL. BackTAL first extracts video features, then uses three temporal convolutional layers to classify each frame and obtain the class activation sequence. Finally, it performs top-$k$ aggregation and predicts the video-level classification score. Based on the background-click supervision, BackTAL adopts the affinity module (see section \ref{affinity-module}) to mine the feature information and estimate the local attention mask. This assists in calculating frame-specific temporal convolution. Besides, BackTAL explores the score separation module (see section \ref{score-separation}) and mines the position information. This can enlarge the response gap between action frames and background frames.}
\label{fig-framework}
\end{figure*}
\section{Method}
\label{sec-method}
In this section, we elaborate on the proposed BackTAL method to tackle weakly supervised temporal action localization under background-click supervision. First, section \ref{sec-problem-statement} formally defines the studied problem. Then, a holistic overview is presented in section \ref{sec-overview}, where we introduce the traditional video-level classification loss and the frame-level classification loss to mine the position information. Given background-click supervision, BackTAL simultaneously mines the position information and the feature information: the former is treated in section \ref{score-separation} and the latter in section \ref{affinity-module}. Afterwards, the evaluation process is introduced in section \ref{method-action-localization}.
\subsection{Problem Statement}
\label{sec-problem-statement}
The proposed BackTAL tackles untrimmed videos via learning from video-level classification labels and background-click annotations. Given a video, we denote the background-click label as $\mathbf{b}=[b_{1}, b_{2},...,b_{T}]$. Before the human annotation process, the background-click labels of all frames are $b_{t}=-1, t=1,...,T$, indicating that it is uncertain whether each frame belongs to action or background. In the annotation process, the annotator goes through all video frames. Once the annotator encounters a series of consecutive background frames, he/she randomly selects a frame and makes the background-click annotation, i.e., marking the corresponding background-click label as $b_{t}=1$.\footnote{Please refer to Subsection \textit{4.2 Background-Click Annotation} for a detailed annotation process.} During training, the algorithm selects the highest $k$ scores to estimate the video-level classification score, which is called the top-$k$ aggregation procedure. We regard the selected $k$ frames as confident action frames, mark the corresponding labels as $b_{t}=0$, and obtain the pseudo label $\hat{\mathbf{b}}$. BackTAL learns from the training videos and aims to precisely discover action instances, e.g., $(t^{s}_{i},t^{e}_{i},c_{i},p_{i})$, in the testing videos. Specifically, the $i^{th}$ action instance starts at $t^{s}_{i}$, ends at $t^{e}_{i}$, belongs to the $c_{i}^{th}$ class, and the confidence score for this prediction is $p_{i}$.
\subsection{BackTAL Overview}
\label{sec-overview}
The framework of BackTAL is shown in Fig. \ref{fig-framework}. BackTAL employs three temporal convolutional layers to process video feature sequences, perform classification for each frame, and generate the class activation sequence. For the input video with feature $\mathbf{X}$, the corresponding class activation sequence is $\mathbf{S} \in \mathbb{R}^{(C+1) \times T}$. Afterwards, we utilize the top-$k$ aggregation strategy to calculate the video-level classification score $s^{c}_{v}$:
\begin{equation}
s^{c}_{v}=\frac{1}{k} \max _{\mathcal{M} \subset \mathbf{S}[c,:] \atop |\mathcal{M}|=k} \sum_{l=1}^{k} \mathcal{M}_{l},
\end{equation}
where $s^{c}_{v}$ is the classification score for the $c^{th}$ class.\footnote{The temporal locations of the selected highest $k$ scores can be denoted as a set $\mathcal{K}=\{k_{1}, k_{2}, ..., k_{k}\}$, ($k_{i}\in\{1,2,...,T\}$), where the corresponding pseudo frame-level label satisfies $\hat{b}_{t}=0, t \in \mathcal{K}$.}
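As an illustration, this aggregation can be written as a minimal PyTorch sketch; the tensor layout and the function name are our assumptions for exposition, not the exact implementation.
\begin{verbatim}
import torch

def topk_aggregate(cas, k):
    # cas: class activation sequence S, shape (C+1, T)
    topk_scores, topk_idx = cas.topk(k, dim=1)
    video_score = topk_scores.mean(dim=1)  # s_v^c per class
    # topk_idx marks the pseudo action frames (b_t = 0)
    return video_score, topk_idx
\end{verbatim}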
Given video-level classification score $\mathbf{s}_{v}=[s^{0}_{v},s^{1}_{v},...,s^{C}_{v}]$ and classification label $\mathbf{y}$, the video-level classification loss $\mathcal{L}_{\rm cls}$ can be calculated via the cross-entropy loss:
\begin{equation}
\mathcal{L}_{\rm cls} = - \sum_{c=0}^{C} y^{c} {\rm log}(\hat{s}^{c}_{v}),
\end{equation}
where $\mathbf{\hat{s}}_{v}=[\hat{s}^{0}_{v},\hat{s}^{1}_{v},...,\hat{s}^{C}_{v}]$ is the classification score after softmax normalization.
In addition to video-level classification, we perform supervised classification on annotated background frames to improve the quality of the class activation sequence $\mathbf{S}$. Specifically, considering an annotated frame with label $b_{t}=1$, the frame's classification score is $\mathbf{S}[:,t] \in \mathbb{R}^{(C+1) \times 1}$. We first perform softmax normalization and obtain the frame-level classification score $\mathbf{\hat{s}}_{t}=[\hat{s}^{0}_{t},\hat{s}^{1}_{t},...,\hat{s}^{C}_{t}]$. Then, we calculate the frame-level classification loss $\mathcal{L}_{\rm frame}$ via the cross-entropy loss:
\begin{equation}
\mathcal{L}_{\rm frame} = - \frac{1}{N_{\rm frame}} \sum_{t=1}^{N_{\rm frame}} {\rm log}(\hat{s}^{0}_{t}),
\end{equation}
where $N_{\rm frame}$ is the number of annotated background frames within this video, and $\hat{s}^{0}_{t}$ is the classification score for the background class.
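A minimal sketch of this frame-level loss follows, assuming the class activation sequence stores raw scores and the background class sits at index 0 (the cross-entropy helper applies the softmax internally):
\begin{verbatim}
import torch
import torch.nn.functional as F

def frame_level_loss(cas, bg_idx):
    # cas: raw scores, shape (C+1, T); bg_idx: annotated clicks
    logits = cas[:, bg_idx].t()             # (N_frame, C+1)
    target = torch.zeros(len(bg_idx), dtype=torch.long)
    return F.cross_entropy(logits, target)  # averages over clicks
\end{verbatim}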
During training, background frames are annotated, and the highest $k$ scores of the class activation sequence can be regarded as confident action frames. In the Score Separation Module (see section \ref{score-separation}), we aim to separate response scores between confident action frames and annotated background frames via the separation loss $\mathcal{L}_{\rm sep}$. In the Affinity Module (see section \ref{affinity-module}), we learn embedding for each frame via the affinity loss $\mathcal{L}_{\rm aff}$, and employ embedding vectors to measure similarities among neighboring frames.
The complete learning process is jointly driven by video-level classification loss $\mathcal{L}_{\rm cls}$, frame-level classification loss $\mathcal{L}_{\rm frame}$, separation loss $\mathcal{L}_{\rm sep}$ and affinity loss $\mathcal{L}_{\rm aff}$. The total loss can be calculated as:
\begin{equation}
\mathcal{L}=\mathcal{L}_{\rm cls}+\mathcal{L}_{\rm frame}+\lambda \cdot \mathcal{L}_{\rm sep} + \beta \cdot \mathcal{L}_{\rm aff},
\end{equation}
where $\lambda$ and $\beta$ are trade-off coefficients.
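In code, the objective is a plain weighted sum; the default coefficients below anticipate the grid-search values reported in the experimental section and are otherwise arbitrary.
\begin{verbatim}
def total_loss(l_cls, l_frame, l_sep, l_aff,
               lam=1.0, beta=0.8):
    # lam and beta are the trade-off coefficients above
    return l_cls + l_frame + lam * l_sep + beta * l_aff
\end{verbatim}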
\begin{figure}[t]
\graphicspath{{figure/}}
\centering
\includegraphics[width=1\linewidth]{score-separation.pdf}
\caption{Class activation sequence before (a) and after (b) employing the score separation module. We can find that the score separation module helps suppress the responses of background frames and enhance the responses of action frames, which is beneficial for separating adjacent action instances.}
\label{fig:discrimination}
\end{figure}
\subsection{Score Separation Module}
\label{score-separation}
This section starts from the traditional weakly supervised action localization paradigm, and reveals that the top-$k$ aggregation procedure cannot explicitly influence the confusing frames. Then, we analyze the frame-level supervised classification in SF-Net \cite{ma2020sf}, and point out that it cannot thoroughly constrain action response. Afterwards, we propose the score separation module to utilize the position information within the click-level annotation and generate high-quality classification responses.
In the weakly supervised action localization paradigm, the top-$k$ aggregation procedure relies on the $k$ highest scores to predict the video-level classification score. In each training iteration, only the selected $k$ scores are influenced and optimized, while the others are ignored. Although the top-$k$ positions vary in the early training phase, a mature model steadily selects similar top-$k$ positions in the later training phase. Consequently, as shown in Fig. \ref{fig:discrimination}(a), the predicted classification score confidently shows high responses for action frames, but cautiously shows low responses for background frames. As long as the scores of these confusing frames are lower than the top-$k$ scores, they do not influence the video-level classification score. Thus, the responses of the action frames and the confusing background frames cannot be clearly separated, leading to imprecise predictions in the subsequent thresholding-based temporal action localization process.
To separate responses of actions and backgrounds, SF-Net \cite{ma2020sf} makes the action-click annotation and performs frame-level supervised classification. There are also other similar choices, such as performing the binary classification to learn actionness \cite{ma2020sf} and employing supervision on attention weights \cite{lee2020background}. However, based on our investigation (see section \ref{mining-position-information}), multiple variants of performing frame-level supervised classification are prone to obtain coessential information and cannot additively improve the action localization performance. Essentially, the frame-level cross-entropy loss can encourage the response of background class to be higher than the response of other classes, which implicitly suppresses the responses of all action classes. However, considering a video containing actions from $c^{th}$ class, the background frame-level cross-entropy loss cannot explicitly enforce the response of the $c^{th}$ class to be as low as possible on background frames, e.g., lower than responses of all other action classes.
In this work, we explicitly constrain the response at background positions using the score separation module, as shown in Fig. \ref{fig:discrimination}(b). In particular, given a video containing actions of the $c^{th}$ category, we regard the top-$k$ highest scores as potential actions and calculate the mean score $p_{\rm act}$ via:
\begin{equation}
p_{\rm act}=\frac{1}{k}\sum_{\forall \hat{b}_{t}=0} s^{c}_{t},
\end{equation}
where $\hat{b}_{t}=0$ indicates top-$k$ action frames, whose total number is $k$. Similarly, given $N_{\rm frame}$ annotated background frames, the mean score $p_{\rm bg}$ is defined as:
\begin{equation}
p_{\rm bg}=\frac{1}{N_{\rm frame}} \sum_{\forall b_{t}=1} s^{c}_{t}.
\end{equation}
To enlarge the relative difference between the mean action score $p_{\rm act}$ and the mean background score $p_{\rm bg}$, we perform \textit{softmax} normalization over $p_{\rm act}$ and $p_{\rm bg}$ as follows:
\begin{equation}
\hat{p}_{\rm act} = \frac{{\rm e}^{p_{\rm act}}}{{\rm e}^{p_{\rm act}} + {\rm e}^{p_{\rm bg}}}, \ \ \hat{p}_{\rm bg} = \frac{{\rm e}^{p_{\rm bg}}}{{\rm e}^{p_{\rm act}} + {\rm e}^{p_{\rm bg}}}.
\end{equation}
Afterwards, we guide $\hat{p}_{\rm act}$ to be one while $\hat{p}_{\rm bg}$ to be zero as follows:
\begin{equation}
\mathcal{L}_{\rm sep} = - {\rm log}\ \hat{p}_{\rm act} - {\rm log}\ (1-\hat{p}_{\rm bg}).
\end{equation}
The score separation loss $\mathcal{L}_{\rm sep}$ can guide the action response to be separated from the background response on the $c^{th}$ category.
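A sketch of the separation loss follows, assuming the indices of the top-$k$ frames and of the annotated background clicks are available as index tensors:
\begin{verbatim}
import torch

def separation_loss(cas_c, topk_idx, bg_idx):
    # cas_c: activation sequence of ground-truth class c, (T,)
    p_act = cas_c[topk_idx].mean()  # mean over top-k frames
    p_bg = cas_c[bg_idx].mean()     # mean over background clicks
    p = torch.softmax(torch.stack([p_act, p_bg]), dim=0)
    return -torch.log(p[0]) - torch.log(1.0 - p[1])
\end{verbatim}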
\subsection{Affinity Module}
\label{affinity-module}
The affinity module is designed to explore the feature information within the background-click supervision. Based on annotated background frames and pseudo action frames, we learn an embedding space for the input video. Then, for a given frame, we can measure its affinity with neighboring frames and obtain a frame-specific attention weight, namely the local attention mask, which is injected into the convolutional calculation process. The frame-specific attention weight can guide the convolution process to dynamically attend to related neighbors, which generates more precise responses.
In the affinity module, we first learn an embedding space to distinguish class-agnostic actions from backgrounds. Given input features, we use a temporal convolutional layer to learn an embedding for each frame, i.e., $\mathbf{E}=[\mathbf{e}_{1},\mathbf{e}_{2},...,\mathbf{e}_{T}]$, where $\mathbf{e}_{t} \in \mathbb{R}^{D_{\rm emb}}$ is a $D_{\rm emb}$-dimensional vector. Each embedding vector is $L_2$-normalized. Specifically, we use the cosine similarity to measure the affinity between two embeddings $\mathbf{e}_{u}$ and $\mathbf{e}_{v}$:
\begin{equation}
\mathcal{A}(\mathbf{e}_{u},\mathbf{e}_{v})=\frac{\mathbf{e}^{T}_{u} \cdot \mathbf{e}_{v}}{\left\|\mathbf{e}_{u}\right\|_{2} \cdot \left\|\mathbf{e}_{v}\right\|_{2}}
\label{equ-cosine-similarity}.
\end{equation}
Based on the annotated background frames and potential action frames, we can calculate the affinity loss $\mathcal{L}_{\rm aff}$ from three terms, i.e., between two background frames, between two action frames, and between an action-background pair. In particular, we employ the online hard example mining strategy \cite{shrivastava2016training} to select the training frame pairs. For the first term, embedding vectors of two background frames should be similar to each other, and the loss $\mathcal{L}^{\rm bg}_{\rm aff}$ can be formulated as:
\begin{equation}
\mathcal{L}^{\rm bg}_{\rm aff}=\max \limits_{\forall b_{u}=1, b_{v}=1, u\neq v} \lfloor \tau_{\rm same}-\mathcal{A}(\mathbf{e}_{u},\mathbf{e}_{v}) \rfloor_{+},
\end{equation}
where $\lfloor \cdot \rfloor_{+}$ denotes clipping below at zero, i.e., $\max(\cdot, 0)$, and $\tau_{\rm same}$ is the similarity threshold between frames from the same category. Specifically, we constrain the similarity between the two most dissimilar background frames to be larger than $\tau_{\rm same}$. Likewise, embedding vectors of action frames should be similar to each other as well:
\begin{equation}
\mathcal{L}^{\rm act}_{\rm aff}=\max \limits_{\forall \hat{b}_{u}=0, \hat{b}_{v}=0, u\neq v} \lfloor \tau_{\rm same}-\mathcal{A}(\mathbf{e}_{u},\mathbf{e}_{v}) \rfloor_{+}.
\end{equation}
\begin{figure}[t]
\centering
\graphicspath{{figure/}}
\includegraphics[width=1\linewidth]{similarity.pdf}
\caption{Visualization of the local similarity mask. Given a video containing the \textit{shotput} action, we select an action frame (shown in orange), and calculate similarities between the selected frame and its local neighbors. The generated local similarity mask exhibits high response for action frames and low response for background frames.}
\label{fig:similarity}
\end{figure}
In contrast to $\mathcal{L}^{\rm bg}_{\rm aff}$ and $\mathcal{L}^{\rm act}_{\rm aff}$, embedding vectors of background frames should differ from embedding vectors of action frames. This can be formulated as:
\begin{equation}
\mathcal{L}^{\rm diff}_{\rm aff}=\max \limits_{\forall b_{u}=1, \hat{b}_{v}=0} \lfloor \mathcal{A}(\mathbf{e}_{u},\mathbf{e}_{v}) - \tau_{\rm diff} \rfloor_{+},
\end{equation}
where $\tau_{\rm diff}$ is the threshold to constrain the similarity between actions and backgrounds. The affinity loss jointly considers the above three terms and can be calculated as:
\begin{equation}
\mathcal{L}_{\rm aff} = \mathcal{L}^{\rm bg}_{\rm aff} + \mathcal{L}^{\rm act}_{\rm aff} + \mathcal{L}^{\rm diff}_{\rm aff}.
\end{equation}
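A compact sketch of the affinity loss with online hard example mining follows; because the embeddings are $L_2$-normalized, all pairwise cosine similarities reduce to a single matrix product (shapes and names are illustrative):
\begin{verbatim}
import torch

def affinity_loss(emb, act_idx, bg_idx,
                  tau_same=0.5, tau_diff=0.1):
    # emb: (T, D) L2-normalized embeddings
    sim = emb @ emb.t()                 # pairwise cosine similarity
    bg_bg = sim[bg_idx][:, bg_idx]      # background-background pairs
    act_act = sim[act_idx][:, act_idx]  # action-action pairs
    bg_act = sim[bg_idx][:, act_idx]    # cross pairs
    # one hardest pair per term (online hard example mining)
    l_bg = (tau_same - bg_bg.min()).clamp(min=0)
    l_act = (tau_same - act_act.min()).clamp(min=0)
    l_diff = (bg_act.max() - tau_diff).clamp(min=0)
    return l_bg + l_act + l_diff
\end{verbatim}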
When high-quality embedding vectors are obtained, we can measure the cosine similarity between a frame and its local neighbors. As shown in Fig. \ref{fig:similarity}\footnote{Because some videos are shot from a distant perspective, the undergoing action would be small and hard to recognize when exhibited in original video frames. We follow previous works \cite{zhao2020temporal,shi2020weakly} to exhibit the undergoing action with cropped frames.}, the embedding vectors can distinguish action frames from background frames, and uniformly highlight coessential frames when given the reference frame.
Consider a video feature $\mathbf{X} \in \mathbb{R}^{D_{in} \times T}$ whose dimension is $D_{in}$ and temporal length is $T$. The temporal convolutional operation learns the convolutional kernel $\mathcal{H} \in \mathbb{R}^{h \times D_{in} \times D_{out}}$ to process the video feature $\mathbf{X}$, where $h$ is the size of the temporal convolutional kernel and $D_{out}$ is the dimension of the output feature. For simplicity, we only consider the $m^{th}$ channel of the output feature and use the convolutional kernel $\mathcal{H}^{m} \in \mathbb{R}^{h \times D_{in}}$. Then, the vanilla temporal convolutional operation for the $t^{th}$ feature can be formulated as:
\begin{equation}
\overline{f}^{m}_{t} = \sum_{i=0}^{h-1} \mathcal{H}^{m}[i, :] \cdot \mathbf{X}[:, t- \lfloor \frac{h}{2} \rfloor +i],
\end{equation}
where $[\cdot]$ means indexing data from the matrix, $\overline{f}^{m}_{t}$ is the value in the $m^{th}$ channel of the output feature vector, $\cdot$ denotes the inner product, and $\lfloor \cdot \rfloor$ means rounding down.
Given a video, we calculate the local similarity for each temporal position and obtain affinity matrix $\mathbf{a} \in \mathbb{R}^{h \times T}$, where $\mathbf{a}[:, t]$ indicates the affinity between the $t^{th}$ feature vector and its $h$ neighbors. In contrast to vanilla convolution, we employ the affinity weight to modulate neighboring features of the $t^{th}$ position before performing temporal convolution:
\begin{equation}
\overline{\mathbf{X}}[:, t- \lfloor \frac{h}{2} \rfloor +i] = \mathbf{X}[:, t- \lfloor \frac{h}{2} \rfloor +i] \times \mathbf{a}[i, t], i \in [0, ..., h-1],
\end{equation}
where $\overline{\mathbf{X}}$ is the modulated feature. Then, we perform temporal convolution on the $t^{th}$ position:
\begin{equation}
f^{m}_{t} = \sum_{i=0}^{h-1} \mathcal{H}^{m}[i, :] \cdot \overline{\mathbf{X}}[:, t- \lfloor \frac{h}{2} \rfloor +i].
\end{equation}
\vspace{-0.3cm}
In traditional methods, all temporal frames are processed by a shared convolutional kernel. In contrast, the frame-specific affinity weight guides the convolution to make frame-specific calculations. Based on background frames and potential action frames, the affinity module adequately mines the feature information. The affinity weights and the frame-specific temporal convolution help distinguish actions from confusing backgrounds, which is beneficial for separating two closely adjacent actions. Although the affinity module contains three loss terms (i.e., $\mathcal{L}_{\rm aff}^{\rm bg}$, $\mathcal{L}_{\rm aff}^{\rm act}$, and $\mathcal{L}_{\rm aff}^{\rm diff}$), each term can be efficiently calculated via the similarity measurement based on matrix multiplication.
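The masked convolution itself can be sketched with an unfold-and-modulate pattern; zero padding at the sequence borders is our assumption, and an odd kernel size $h$ is required:
\begin{verbatim}
import torch
import torch.nn.functional as F

def masked_temporal_conv(x, weight, affinity):
    # x: (D_in, T); weight: (D_out, D_in, h); affinity: (h, T)
    D_out, D_in, h = weight.shape
    pad = h // 2
    # patches[:, t, i] = x[:, t - h//2 + i], zero-padded borders
    patches = F.pad(x, (pad, pad)).unfold(1, h, 1)  # (D_in, T, h)
    patches = patches * affinity.t().unsqueeze(0)   # modulate
    return torch.einsum('dti,odi->ot', patches, weight)
\end{verbatim}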
\subsection{Inference}
\label{method-action-localization}
In inference, we forward a testing video through the learned network and obtain the class activation sequence $\mathbf{S}$. Then, the top-$k$ aggregation procedure predicts the video-level classification score $\mathbf{s}_{v}$. Among the $C$ candidate categories, we discard categories whose video-level classification score is lower than the threshold $\tau_{\rm cls}$. Next, we take the class activation sequence of each confident category and regard consecutive frames with high scores as action instances, obtaining the start time $t_{i}^{s}$ and the end time $t_{i}^{e}$. Afterwards, the confidence score $p_{i}$ for each predicted action instance is determined via the outer-inner-contrastive strategy \cite{shou2018autoloc}.
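A minimal sketch of the thresholding step that groups consecutive high-score frames into instances; the activation threshold and the mapping from frame indices back to seconds are omitted details:
\begin{verbatim}
import numpy as np

def localize_instances(cas_c, act_thresh):
    # cas_c: activation sequence of one confident class, (T,)
    mask = (cas_c > act_thresh).astype(int)
    edges = np.diff(mask, prepend=0, append=0)
    starts = np.where(edges == 1)[0]   # rising edges:  t_i^s
    ends = np.where(edges == -1)[0]    # falling edges: t_i^e
    return list(zip(starts, ends))     # frame-index boundaries
\end{verbatim}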
\vspace{-0.3cm}
\section{Experiments}
\label{sec-experiments}
In this section, we carry out experiments to evaluate and analyze the proposed BackTAL method. We start from the experimental setups in section \ref{sec-experimental-setups}. Then, section \ref{sec-click-annotation} presents the annotation process for the background-click information. Next, we compare BackTAL with recent state-of-the-art methods on three benchmark datasets and verify the superior performance of BackTAL in section \ref{sec-cmp-exps}. Afterwards, section \ref{section-ablation} carries out ablation studies to analyze the superiority of background-click supervision and the effectiveness of each module, and studies the influence of parameters. Additionally, we present a qualitative analysis in section \ref{sec-qual-exps}.
\vspace{-0.3cm}
\subsection{Experimental Setups}
\label{sec-experimental-setups}
\textbf{Benchmark Datasets.}
We evaluate the efficacy of BackTAL on three benchmarks, including THUMOS14 \cite{THUMOS14}, ActivityNet v1.2 \cite{caba2015activitynet}, and HACS \cite{zhao2019hacs}. In THUMOS14, the training set consists of 2765 trimmed videos, while the validation set and test set consist of 200 and 213 untrimmed videos, respectively. As is common practice in the literature \cite{wang2017untrimmednets, nguyen2018weakly, lee2020background}, we employ the validation set in the training phase and evaluate the performance on the test set, where videos come from 20 classes. The main challenge in THUMOS14 is the dramatic variation of action instance durations. Specifically, a short action instance only lasts tenths of a second, while a long action instance can last hundreds of seconds \cite{xu2017r,chao2018rethinking,xu2019two}. ActivityNet v1.2 \cite{caba2015activitynet} includes 9682 videos from 100 classes, which are divided into training, validation, and testing subsets via the ratio 2:1:1. Challenges in ActivityNet v1.2 mainly lie in the numerous action categories, large intra-class variations, etc. In addition to these two commonly used datasets, we consider the recently proposed dataset HACS \cite{zhao2019hacs}. It contains 50 thousand videos spanning 200 classes, where the training set, validation set, and testing set consist of 38 thousand, 6 thousand, and 6 thousand videos, respectively. Compared with existing benchmarks, HACS contains large-scale videos and action instances, serving as a more realistic and challenging benchmark. In addition, we follow SF-Net \cite{ma2020sf} and evaluate the performance of BackTAL on the BEOID dataset \cite{damen2014you}. BEOID consists of 58 videos with 742 action instances from 34 action categories. We make the background-click annotation for videos on the BEOID dataset.
\textbf{Evaluation Metric.}
Mean average precision (mAP) under different thresholds \cite{THUMOS14, caba2015activitynet} is used to evaluate the performance. On THUMOS14, we report mAP under thresholds tIoU=\{0.3,0.4,0.5,0.6,0.7\}, and follow previous works \cite{liu2019weakly, yu2019temporal,luo2020weakly} in focusing on mAP at tIoU$=$0.5. Besides, considering that some methods may exhibit superiority at low or high tIoU thresholds, we report the average mAP under thresholds tIoU=\{0.3,0.4,0.5,0.6,0.7\} to perform a holistic comparison, as Liu et al. \cite{liu2019completeness} have done. The evaluation on ActivityNet and HACS employs the average mAP under ten uniformly distributed thresholds from tIoU=0.5 to tIoU=0.95, i.e., [0.5:0.05:0.95]. On BEOID \cite{damen2014you}, we follow SF-Net \cite{ma2020sf} and report mAP under thresholds [0.1:0.1:0.7] as well as the average of these seven mAPs.
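For reference, the temporal IoU underlying these thresholds is the standard interval-overlap measure; a minimal sketch:
\begin{verbatim}
def tiou(pred, gt):
    # pred, gt: (start, end) intervals in seconds
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0
\end{verbatim}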
\textbf{Baseline Method.} We follow a recent work, BaS-Net \cite{lee2020background}, to build our baseline method, principally considering its simplicity. The network utilizes three temporal convolutional layers to perform classification for video frames and generate the class activation sequence. For each video, this network is used twice: the first pass processes the basic video features and the second pass tackles the filtered video features. We make one simplification over the official implementation of BaS-Net and improve the performance from 27.0 to 28.6, under the metric mAP(\%) at tIoU$=$0.5. Specifically, BaS-Net \cite{lee2020background} performs data augmentation via randomly selecting a part of the video, but we scale video features to a fixed temporal length via linear interpolation and use the complete features, following \cite{long2019gaussian, lin2019bmn}. A potential reason for the improvement is that selecting a part of a video risks dropping some action instances or cutting a complete action instance into a fragment, which may confuse the learning algorithm.
\textbf{Implementation Details.}
Following previous works \cite{nguyen2018weakly, liu2019completeness, shi2020weakly}, we use the I3D \cite{carreira2017quo} model pre-trained on the Kinetics-400 \cite{carreira2017quo} dataset to extract both RGB and optical flow features. For the scaled feature sequences, the temporal length $T$ for THUMOS14, ActivityNet v1.2, and HACS is 750, 100, and 200, respectively. The top-$k$ aggregation procedure selects the $k=\lfloor \frac{1}{8} \times T \rfloor$ highest scores on each dataset, where $\lfloor \cdot \rfloor$ means rounding down.
\begin{figure}[thbp]
\graphicspath{{figure/}}
\centering
\includegraphics[width=1\linewidth]{figure/annotation-process.pdf}
\vspace{-0.4cm}
\caption{Process of annotating the background-click information, illustrated with a video containing the action \textit{LongJump}. First, we sparsely extract frames from the video at 2\textit{fps}. Then, upon meeting a background segment, the annotator randomly clicks a frame and annotates it as background. Afterwards, the video-level classification label is recorded at the end of the video. Finally, the annotation file can be generated for the complete video.}
\label{ann-proc}
\vspace{-0.4cm}
\end{figure}
The proposed BackTAL is implemented on PyTorch 1.5 \cite{paszke2019pytorch} and optimized via the Adam algorithm. We use batch size 16, learning rate $1 \times 10^{-4}$ and weight decay $5 \times 10^{-4}$. We train 100, 25 and 8 epochs for THUMOS14, ActivityNet v1.2 and HACS, respectively. We set the embedding dimension as $D_{\rm emb}=32$. For fair comparison, we follow BaS-Net \cite{lee2020background} to set the hyper-parameters. Specifically, we adopt the same inference paradigm as BaS-Net and set $\tau_{\rm cls}$=0.25. We employ the grid search strategy to empirically determine proper values for the hyper-parameters. Specifically, the balancing coefficients are set as $\lambda=$1, $\beta=$0.8. In the affinity loss, we set $\tau_{\rm same}=$0.5, $\tau_{\rm diff}=$0.1. The influence of these hyper-parameters is discussed via ablation experiments in section \ref{section-ablation}.
\vspace{-0.4cm}
\subsection{Background-Click Annotation}
\label{sec-click-annotation}
Before conducting experiments, we make the background-click annotation on THUMOS14 \cite{THUMOS14}. To start with, we train three annotators with a few actions and backgrounds to make them familiar with each action category. Then, the annotators are requested to randomly annotate a background frame once they see a new background segment. Employing the annotation tool provided by Tang et al. \cite{tang2020comprehensive}, annotators can quickly skim action frames and make efficient annotations. Fig. \ref{ann-proc} exhibits the detailed annotation process. Specifically, as the sparse extraction reduces the frame count and speeds up the annotation process, we extract frames at a frame rate of 2\textit{fps}. In a video, we click background frames and only record the video-level classification label at the end of the video. As a result, the annotation process is efficient. On average, it takes 48 seconds to annotate a one-minute video.\footnote{In addition, we explore the cost to annotate both action clicks and background clicks for a one-minute video, and spend 53s after sparse frame extraction.}
\begin{table*}[htbp]
\centering
\caption{Comparison experiments on the THUMOS14 dataset. We compare BackTAL with three fully supervised methods (instance-level supervision), recent weakly supervised methods (video-level supervision), and weakly supervised methods with extra information (video-level + $*$).}
\begin{threeparttable}[t]
\begin{tabular}{c|ccc|cccccc}
\toprule
\toprule
\multirow{2}[4]{*}{Research} & \multirow{2}[4]{*}{Publication} & \multirow{2}[4]{*}{Feature} & \multirow{2}[4]{*}{Supervision} & \multicolumn{5}{c}{mAP@tIoU (\%)} & \multicolumn{1}{l}{avg-mAP} \\
\cmidrule{5-9} & & & & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & \multicolumn{1}{l}{(0.3:0.7)} \\
\midrule
\midrule
R-C3D \cite{xu2017r} & ICCV 2017 & C3D & Instance-level & 44.8 & 35.6 & \cellcolor[rgb]{ .906, .902, .902}28.9 & - & - & \cellcolor[rgb]{ .906, .902, .902}- \\
BMN \cite{lin2019bmn} & ICCV 2019 & UNT & Instance-level & \textbf{56.0} & 47.4 & \cellcolor[rgb]{ .906, .902, .902}38.8 & 29.7 & 20.5 & \cellcolor[rgb]{ .906, .902, .902}38.5 \\
G-TAD \cite{xu2020g} & CVPR 2020 & UNT & Instance-level & 54.5 & \textbf{47.6} & \cellcolor[rgb]{ .906, .902, .902}\textbf{40.2} & \textbf{30.8} & \textbf{23.4} & \cellcolor[rgb]{ .906, .902, .902}\textbf{39.3} \\
\midrule
\midrule
TSRNet \cite{zhang2019learning} & AAAI 2019 & ResNet-101 & Video-level & 38.3 & 28.1 & \cellcolor[rgb]{ .906, .902, .902}18.6 & 11.0 & 5.6 & \cellcolor[rgb]{ .906, .902, .902}20.3 \\
Xu et al. \cite{xu2019segregated} & AAAI 2019 & I3D & Video-level & 48.7 & 34.7 & \cellcolor[rgb]{ .906, .902, .902}23.0 & - & - & \cellcolor[rgb]{ .906, .902, .902}- \\
CMCS \cite{liu2019completeness} & CVPR 2019 & I3D & Video-level & 41.2 & 32.1 & \cellcolor[rgb]{ .906, .902, .902}23.1 & 15.0 & 7.0 & \cellcolor[rgb]{ .906, .902, .902}23.7 \\
Yu et al. \cite{yu2019temporal} & ICCV 2019 & I3D & Video-level & 39.5 & 31.9 & \cellcolor[rgb]{ .906, .902, .902}24.5 & 13.8 & 7.1 & \cellcolor[rgb]{ .906, .902, .902}23.4 \\
BaS-Net \cite{lee2020background} & AAAI 2020 & I3D & Video-level & 44.6 & 36.0 & \cellcolor[rgb]{ .906, .902, .902}27.0 & 18.6 & 10.4 & \cellcolor[rgb]{ .906, .902, .902}27.3 \\
TSCN \cite{zhai2020two} & ECCV 2020 & I3D & Video-level & 47.8 & 37.7 & \cellcolor[rgb]{ .906, .902, .902}28.7 & 19.4 & 10.2 & \cellcolor[rgb]{ .906, .902, .902}28.8 \\
DGAM \cite{shi2020weakly} & CVPR 2020 & I3D & Video-level & 46.8 & 38.2 & \cellcolor[rgb]{ .906, .902, .902}28.8 & 19.8 & 11.4 & \cellcolor[rgb]{ .906, .902, .902}29.0 \\
Liu et al. \cite{liu2021weakly} & AAAI 2021 & I3D & Video-level & 50.8 & 41.7 & \cellcolor[rgb]{ .906, .902, .902}29.6 & 20.1 & 10.7 & \cellcolor[rgb]{ .906, .902, .902}30.6 \\
Gong et al. \cite{gong2020learning} & CVPR 2020 & I3D & Video-level & 46.9 & 38.9 & \cellcolor[rgb]{ .906, .902, .902}30.1 & 19.8 & 10.4 & \cellcolor[rgb]{ .906, .902, .902}29.2 \\
A2CL-PT \cite{min2020adversarial} & ECCV 2020 & I3D & Video-level & 48.1 & 39.0 & \cellcolor[rgb]{ .906, .902, .902}30.1 & 19.2 & 10.6 & \cellcolor[rgb]{ .906, .902, .902}29.4 \\
EM-MIL \cite{luo2020weakly} & ECCV 2020 & I3D & Video-level & 45.5 & 36.8 & \cellcolor[rgb]{ .906, .902, .902}30.5 & 22.7 & 16.4 & \cellcolor[rgb]{ .906, .902, .902}30.4 \\
ACSNet \cite{liu2021acsnet} & AAAI 2021 & I3D & Video-level & 51.4 & 42.7 & \cellcolor[rgb]{ .906, .902, .902}32.4 & 22.0 & 11.7 & \cellcolor[rgb]{ .906, .902, .902}32.0 \\
ACM-BANet \cite{moniruzzaman2020action} & ACM MM 2020 & I3D & Video-level & 48.9 & 40.9 & \cellcolor[rgb]{ .906, .902, .902}32.3 & 21.9 & 13.5 & \cellcolor[rgb]{ .906, .902, .902}31.5 \\
HAM-Net \cite{islam2021a} & AAAI 2021 & I3D & Video-level & 52.2 & 43.1 & \cellcolor[rgb]{ .906, .902, .902}32.6 & 21.9 & 12.5 & \cellcolor[rgb]{ .906, .902, .902}32.5 \\
Lee et al. \cite{lee2021weakly} & AAAI 2021 & I3D & Video-level & 52.3 & 43.4 & \cellcolor[rgb]{ .906, .902, .902}33.7 & 22.9 & 12.1 & \cellcolor[rgb]{ .906, .902, .902}32.9 \\
\midrule
3C-Net \cite{narayan20193c} & ICCV 2019 & I3D & Video-level + action count & 44.2 & 34.1 & \cellcolor[rgb]{ .906, .902, .902}26.6 & - & 8.1 & \cellcolor[rgb]{ .906, .902, .902}- \\
Nguyen et al. \cite{nguyen2019weakly} & ICCV 2019 & I3D & Video-level + microvideos & 49.1 & 38.4 & \cellcolor[rgb]{ .906, .902, .902}27.5 & 17.3 & 8.6 & \cellcolor[rgb]{ .906, .902, .902}28.2 \\
ActionBytes \cite{jain2020actionbytes} & CVPR 2020 & I3D & Video-level + Kinetics val & 43.0 & 37.5 & \cellcolor[rgb]{ .906, .902, .902}29.0 & - & 9.5 & \cellcolor[rgb]{ .906, .902, .902}- \\
SF-Net \cite{ma2020sf} & ECCV 2020 & I3D & Video-level + click-level & 52.8 & 42.2 & \cellcolor[rgb]{ .906, .902, .902}30.5 & 20.6 & 12.0 & \cellcolor[rgb]{ .906, .902, .902}31.6 \\
BackTAL & - & I3D & Video-level + click-level & \textbf{54.4} & \textbf{45.5} & \cellcolor[rgb]{ .906, .902, .902}\textbf{36.3} & \textbf{26.2} & \textbf{14.8} & \cellcolor[rgb]{ .906, .902, .902}\textbf{35.4} \\
\bottomrule
\bottomrule
\end{tabular}%
\begin{tablenotes}
\item \textit{As for feature extraction, most works utilize the I3D model \cite{carreira2017quo}. BMN \cite{lin2019bmn}, G-TAD \cite{xu2020g} and CleanNet \cite{liu2019weakly} utilize the UntrimmedNet model \cite{wang2017untrimmednets, wang2018temporal}. TSRNet utilizes ResNet-101 \cite{he2016deep}.}
\end{tablenotes}
\end{threeparttable}
\label{tab:cmp-thumos}%
\vspace{-0.3cm}
\end{table*}%
\begin{figure}[htbp]
\graphicspath{{Fig./}}
\centering
\includegraphics[width=1\linewidth]{figure/annotation-distribution.pdf}
\caption{Statistics of background-click annotations on the THUMOS14 dataset. The x-axis indicates the relative position of each annotation, while the y-axis indicates the percentage of annotated frames. We can find that the background-click annotations approximately exhibit a uniform distribution. ``A1", ``A2" and ``A3" indicate three different annotators.}
\label{fig:ann_dis}
\end{figure}
We analyze the relative positions of the background-click annotations with respect to the corresponding background segments. Considering a background segment that starts at $t^{b}_{s}$ and ends at $t^{b}_{e}$, for a background-click annotation with timestamp $t^{b}$, the relative position can be calculated via $\frac{t^{b} - t^{b}_{s}}{t^{b}_{e} - t^{b}_{s}}$. As shown in Fig. \ref{fig:ann_dis}, the annotation positions from three annotators approximately exhibit a uniform distribution. Potential reasons for the uniform distribution include: first, the annotator randomly clicks a background frame within the background segment; besides, because background frames are easy to identify, the annotator hardly makes errors. For experiments on THUMOS14, the performance of BackTAL is the average of three trials employing the three different annotations. On THUMOS14, SF-Net \cite{ma2020sf} observes similar performances between human annotations and simulated annotations. Because ActivityNet v1.2 contains dozens of times more videos than THUMOS14, SF-Net \cite{ma2020sf} adopts a simulation strategy, i.e., randomly annotating a frame within each action instance based on the ground truth. In this paper, we follow SF-Net and use simulated annotations on the large-scale datasets ActivityNet v1.2 \cite{caba2015activitynet} and HACS \cite{zhao2019hacs}.
\begin{table*}[htbp]
\vspace{-0.2cm}
\centering
\caption{Comparison experiments on the ActivityNet v1.2 dataset. We compare BackTAL with recent weakly supervised methods (video-level supervision) and weakly supervised methods with extra information (video-level + $*$).}
\setlength{\tabcolsep}{3.5pt}
\begin{threeparttable}[t]
\begin{tabular}{c|ccc|ccccccccccc}
\toprule
\toprule
\multirow{2}[4]{*}{Research} & \multirow{2}[4]{*}{Publication} & \multirow{2}[4]{*}{Feature} & \multirow{2}[4]{*}{Supervision} & \multicolumn{10}{c}{mAP@tIoU (\%)} & \multicolumn{1}{l}{avg-mAP} \\
\cmidrule{5-14} & & & & 0.50 & 0.55 & 0.60 & 0.65 & 0.70 & 0.75 & 0.80 & 0.85 & 0.90 & 0.95 & 0.50:0.95 \\
\midrule
\midrule
EM-MIL \cite{luo2020weakly} & ECCV 2020 & I3D & Video-level & 37.4 & - & - & - & 23.1 & - & - & - & 2.0 & - & \cellcolor[rgb]{ .906, .902, .902}20.3 \\
CleanNet \cite{liu2019weakly} & ICCV 2019 & UNT & Video-level & 37.1 & 33.4 & 29.9 & 26.7 & 23.4 & 20.3 & 17.2 & 13.9 & 9.2 & 5.0 & \cellcolor[rgb]{ .906, .902, .902}21.6 \\
CMCS \cite{liu2019completeness} & CVPR 2019 & I3D & Video-level & 36.8 & - & - & - & - & 22.0 & - & - & - & 5.6 & \cellcolor[rgb]{ .906, .902, .902}22.4 \\
TSCN \cite{zhai2020two} & ECCV 2020 & I3D & Video-level & 37.6 & - & - & - & - & 23.7 & - & - & - & \textbf{5.7} & \cellcolor[rgb]{ .906, .902, .902}23.6 \\
BaSNet \cite{lee2020background} & AAAI 2020 & I3D & Video-level & 38.5 & - & - & - & - & 24.2 & - & - & - & 5.6 & \cellcolor[rgb]{ .906, .902, .902}24.3 \\
DGAM \cite{shi2020weakly} & CVPR 2020 & I3D & Video-level & 41.0 & 37.5 & 33.5 & 30.1 & 26.9 & 23.5 & 19.8 & 15.5 & 10.8 & 5.3 & \cellcolor[rgb]{ .906, .902, .902}24.4 \\
Gong et al. \cite{gong2020learning} & CVPR 2020 & I3D & Video-level & 40.0 & - & - & - & - & 25.0 & - & - & - & 4.6 & \cellcolor[rgb]{ .906, .902, .902}24.6 \\
HAM-Net \cite{islam2021a} & AAAI 2021 & I3D & Video-level & 41.0 & - & - & - & - & 24.8 & - & - & - & 5.3 & \cellcolor[rgb]{ .906, .902, .902}25.1 \\
Liu et al. \cite{liu2021weakly} & AAAI 2021 & I3D & Video-level & 39.2 & - & - & - & - & 25.6 & - & - & - & 6.8 & \cellcolor[rgb]{ .906, .902, .902}25.5 \\
Lee et al. \cite{lee2021weakly} & AAAI 2021 & I3D & Video-level & 41.2 & - & - & - & - & 25.6 & - & - & - & 6.0 & \cellcolor[rgb]{ .906, .902, .902}25.9 \\
ACSNet \cite{liu2021acsnet} & AAAI 2021 & I3D & Video-level & 40.1 & - & - & - & - & 26.1 & - & - & - & 6.8 & \cellcolor[rgb]{ .906, .902, .902}26.0 \\
\midrule
\midrule
3C-Net \cite{narayan20193c} & ICCV 2019 & I3D & Video-level + action count & 37.2 & - & - & - & 23.7 & - & - & - & 9.2 & - & \cellcolor[rgb]{ .906, .902, .902}21.7 \\
SF-Net \cite{ma2020sf} & ECCV 2020 & I3D & Video-level + click-level & 37.8 & - & - & - & 24.6 & - & - & - & 10.3 & - & \cellcolor[rgb]{ .906, .902, .902}22.8 \\
BackTAL & - & I3D & Video-level + click-level & \textbf{41.5} & \textbf{39.0} & \textbf{36.4} & \textbf{32.9} & \textbf{30.2} & \textbf{27.3} & \textbf{23.7} & \textbf{19.8} & \textbf{14.4} & 4.7 & \cellcolor[rgb]{ .906, .902, .902}\textbf{27.0} \\
\bottomrule
\bottomrule
\end{tabular}%
\end{threeparttable}
\label{tab:cmp-anet12}%
\end{table*}%
\begin{table*}[t]
\centering
\caption{Comparison experiments on HACS dataset, in comparison with a fully-supervised SSN \cite{zhao2017temporal} and a weakly-supervised BaS-Net \cite{lee2020background}.}
\begin{threeparttable}[t]
\begin{tabular}{c|cc|cccccccccc|c}
\toprule
\toprule
\multirow{2}[4]{*}{Research} & \multirow{2}[4]{*}{Publication} & \multirow{2}[4]{*}{Supervision} & \multicolumn{10}{c}{mAP@tIoU (\%)} & \multicolumn{1}{l}{avg-mAP} \\
\cmidrule{4-13} & & & 0.50 & 0.55 & 0.60 & 0.65 & 0.70 & 0.75 & 0.80 & 0.85 & 0.90 & \multicolumn{1}{c}{0.95 } & 0.50:0.95 \\
\midrule
\midrule
SSN \cite{zhao2017temporal} & ICCV 2017 & Instance-level & 28.8 & - & - & - & - & 18.8 & - & - & - & 5.3 & \cellcolor[rgb]{ .906, .902, .902}19.0 \\
\midrule
\midrule
BaS-Net \cite{lee2020background} & AAAI 2020 & Video-level & 30.6 & 27.7 & 25.1 & 22.6 & 20.0 & 17.4 & 14.8 & 12.0 & 9.2 & \textbf{5.7} & \cellcolor[rgb]{ .906, .902, .902}18.5 \\
\midrule
BackTAL & - & Video-level + click-level & \textbf{31.5} & \textbf{29.1} & \textbf{26.8} & \textbf{24.5} & \textbf{22.0} & \textbf{19.5} & \textbf{17.0} & \textbf{14.2} & \textbf{10.8} & 4.7 & \cellcolor[rgb]{ .906, .902, .902}\textbf{20.0} \\
\bottomrule
\bottomrule
\end{tabular}%
\begin{tablenotes}
\item \textit{Results of SSN \cite{zhao2017temporal} are taken from \cite{zhao2019hacs}. Results of BaS-Net \cite{lee2020background} are from our implementation.}
\end{tablenotes}
\end{threeparttable}
\label{tab:cmp-hacs}%
\end{table*}%
\subsection{Comparison with State-of-the-Art Methods}
\label{sec-cmp-exps}
\textbf{THUMOS14.}
Table \ref{tab:cmp-thumos} compares BackTAL with recent state-of-the-art methods on the THUMOS14 dataset. As this paper focuses on weakly supervised temporal action localization, we only list three representative fully supervised methods \cite{xu2017r, lin2019bmn, xu2020g} to indicate the progress under the fully supervised paradigm. For the weakly supervised paradigm, we distinguish methods only using the video-level classification label from methods that use extra information. In Table \ref{tab:cmp-thumos}, the most similar competitor to the proposed BackTAL is SF-Net \cite{ma2020sf}, where SF-Net employs action-click supervision and BackTAL employs background-click supervision. Under a similar annotation cost, BackTAL exhibits a 5.8 mAP improvement over SF-Net under tIoU threshold 0.5, demonstrating that background-click supervision is more effective. Besides, with the rapid development of weakly supervised methods, some recent works \cite{liu2021acsnet, moniruzzaman2020action, islam2021a, lee2021weakly} achieve superior performance to weakly supervised methods employing extra information. However, the proposed BackTAL performs 2.6 mAP higher than the current best-performing method \cite{lee2021weakly} under tIoU threshold 0.5. Moreover, in comparison with fully supervised methods, BackTAL can exceed a classical method \cite{xu2017r} but still shows an obvious performance gap with recent fully supervised methods. This indicates that weakly supervised methods should be persistently developed.
\textbf{ActivityNet v1.2.}
Table \ref{tab:cmp-anet12} reports the performance of BackTAL and current state-of-the-art methods on the ActivityNet v1.2 benchmark. ActivityNet v1.2 possesses different characteristics from THUMOS14, e.g., a large percentage of action instances are extremely long, and there are dramatic variations within an action instance. The previous counterpart SF-Net \cite{ma2020sf} principally performs supervised classification on action-click annotated frames, which is effective for learning similar patterns within neighboring frames but insufficient for propagating information over long-range intervals. As a result, SF-Net exhibits inferior performance to some weakly supervised methods \cite{islam2021a,liu2021weakly,lee2021weakly,liu2021acsnet}. In contrast, the proposed BackTAL applies the valuable click-level supervision to background segments and discovers action instances through the video-level classification process, i.e., the top-$k$ aggregation process. As shown in Table \ref{tab:cmp-anet12}, BackTAL performs favorably against recent weakly supervised methods under the metric average mAP. Among ten different thresholds, BackTAL achieves the highest performance on nine. As tIoU threshold 0.95 is a strict criterion, a potential reason is that there is a trade-off between the performance on the holistic dataset and the precise boundary localization of some action instances. BackTAL focuses on the holistic performance and achieves a high average mAP.
\textbf{HACS.} In addition to the two traditional benchmarks, we make an early attempt to verify the effectiveness of weakly supervised temporal action localization on the large-scale HACS dataset, shown in Table \ref{tab:cmp-hacs}. SSN \cite{zhao2017temporal} is a classical fully supervised temporal action localization method. It models the action structure with a pyramid architecture, and employs an activity classifier and a completeness classifier to predict the action category and the completeness score, respectively. As a weakly supervised method, BaS-Net \cite{lee2020background} shows performance inferior to SSN under the metric average mAP.
\begin{table}[thbp]
\centering
\setlength{\tabcolsep}{4.0pt}
\caption{Comparison experiments on BEOID dataset, measured by mAP under different tIoU threshold.}
\begin{tabular}{c|cccccccc}
\toprule
\toprule
\multirow{2}[4]{*}{Research} & \multicolumn{7}{c}{mAP@tIoU (\%)} & avg-mAP \\
\cmidrule{2-8} & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & (0.1:0.7) \\
\midrule
\midrule
SF-Net \cite{ma2020sf} & \textbf{62.9} & - & 40.6 & - & 16.7 & - & 3.5 & 30.1 \\
BackTAL & 60.1 & 49.5 & \textbf{40.9} & 30.8 & \textbf{21.2} & 14.0 & \textbf{11.0} & \textbf{32.5} \\
\bottomrule
\bottomrule
\end{tabular}%
\label{expBEOID}%
\end{table}%
In contrast, the proposed BackTAL exceeds SSN under nine out of ten tIoU thresholds as well as under the average mAP. Considering that HACS \cite{zhao2019hacs} is a large-scale realistic dataset, this experiment reveals the promising prospect of background-click supervision.
\textbf{BEOID.} Table \ref{expBEOID} reports the performance comparison between SF-Net \cite{ma2020sf} and the proposed BackTAL. SF-Net utilizes action-click supervision and achieves 30.1 mAP under the metric average mAP. BackTAL employs background-click supervision, achieves 32.5 mAP, and exhibits a 2.4 mAP improvement over SF-Net. In Table \ref{expBEOID}, it can be noticed that SF-Net performs well under low tIoU thresholds (e.g., 0.1). One potential reason is that action-click supervision contributes to discovering action instances but can only generate coarse temporal boundaries. In general, BackTAL performs well under the other tIoU thresholds and the average mAP, which demonstrates the effectiveness of the proposed background-click supervision.
\begin{figure}[thbp]
\centering
\includegraphics[width=1\linewidth]{figure/trade-off.pdf}
\caption{Trade-off between annotation cost and action localization performance on THUMOS14 dataset. We compare BackTAL with recent methods that employ weak supervision, weak supervision with extra information, and full supervision. Annotation costs for weakly supervised methods \cite{lee2020background, ma2020sf, lee2021weakly} and fully supervised methods \cite{xu2017r, lin2019bmn, xu2020g} are taken from \cite{ma2020sf}. The x-axis is in log scale.}
\label{fig-trade-off}
\end{figure}
\begin{table}[t]
\centering
\caption{Effectiveness of click supervision. We exhibit the annotation cost and performance gains in both the semantic segmentation domain and the temporal action localization domain, based on a pioneering work \cite{bearman2016s} and BackTAL. It can be found that BackTAL requires less annotation cost but achieves more improvements.}
\begin{tabular}{l|cc|cc}
\toprule
\toprule
\multicolumn{1}{r}{} & \multicolumn{2}{c|}{Semantic Segmentation} & \multicolumn{2}{c}{Action Localization} \\
\midrule
\midrule
\multicolumn{5}{c}{Annotation Cost} \\
\midrule
Corresponding SOTA & \cite{papandreou2015weakly} & 67 h & \cite{moniruzzaman2020action} & 45 s \\
Click Supervision & \cite{bearman2016s} & 79 h & BackTAL & 48 s \\
Relative Improvement & - & 17.9\% & - & 6.7\% \\
\midrule
\midrule
\multicolumn{5}{c}{Performance Gains} \\
\midrule
Corresponding SOTA & \cite{papandreou2015weakly} & 39.6 & \cite{moniruzzaman2020action} & 32.3 \\
Click Supervision & \cite{bearman2016s} & 43.6 & BackTAL & 36.3 \\
Relative Improvement & - & 10.1\% & - & 12.4\% \\
\bottomrule
\bottomrule
\end{tabular}%
\label{tab-trade-off}%
\end{table}%
\textit{Further discussions.} There may be a concern about the trade-off between annotation costs and performance gains. As shown in Fig. \ref{fig-trade-off}, we compare the proposed BackTAL with recent works. It can be found that background-click supervision requires an annotation cost similar (48s \textit{v.s.} 45s) to that of traditional weakly supervised methods, but can steadily improve the performance from 33.7 mAP (reported by Lee et al. \cite{lee2021weakly}) to 36.3 mAP. Besides, we analyze the effectiveness of click-level supervision in both the semantic segmentation domain and the action localization domain, based on a pioneering work \cite{bearman2016s} and the proposed BackTAL. As shown in Table \ref{tab-trade-off}, compared with the corresponding state-of-the-art method \cite{papandreou2015weakly}, Bearman et al. \cite{bearman2016s} require 17.9\% extra annotation cost and make a 10.1\% relative improvement. In comparison, BackTAL achieves a 12.4\% relative improvement while the extra annotation cost is 6.7\%\footnote{Following Bearman et al. \cite{bearman2016s}, we compare with the state-of-the-art work \cite{moniruzzaman2020action} published in the previous year.}. From the above analysis, we can see the effectiveness of the proposed BackTAL method, especially the good trade-off between annotation costs and performance gains.
\begin{figure*}[t]
\graphicspath{{figure/}}
\centering
\includegraphics[width=1\linewidth]{similarity-exp.pdf}
\caption{Visualization of the local attention mask. For each example, we show the attention mask calculated between a selected action frame (shown in orange) or a background frame (shown in red) and its corresponding neighboring frames.}
\label{fig:similarity-exp}
\end{figure*}
\subsection{Ablation Studies}
\label{section-ablation}
\textbf{Superiority of annotating backgrounds.} We compare the proposed background-click annotation with the previous action-click annotation proposed by SF-Net \cite{ma2020sf}, and report the results in Table \ref{ann-bg}. We adopt the action-click annotation released by SF-Net \cite{ma2020sf} for a fair comparison. Starting from the same baseline method, introducing action-click supervision brings a 0.5 mAP improvement, while the proposed background-click supervision brings a 6.4 mAP improvement. When only action-click supervision or background-click supervision is available, apart from the video-level classification loss used by the baseline method, we only introduce the frame-level classification loss on the annotated frames, and do not employ any other loss functions. This supports our assumption that the background-click annotation is more valuable than the action-click one, because representative action frames can be discovered by the top-$k$ aggregation process and the majority of localization errors come from the \textit{Background Error}. Moreover, starting from \textit{Baseline + Action Click}, introducing the background-click annotation can still improve the performance from 29.1 mAP to 36.8 mAP. This demonstrates that the background-click annotation is quite complementary to the action-click annotation. In contrast, starting from \textit{Baseline + Background Click}, introducing the action-click annotation only improves the performance from 35.0 mAP to 36.8 mAP, which is consistent with our hypothesis that the action-click annotation is redundant with the top-$k$ aggregation process to some extent. It is worth noting that the performance gains brought by the score separation module and the affinity module are comparable to those of some recent works \cite{ma2020sf, liu2021weakly}.
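As a simplified illustration of the frame-level classification loss on clicked frames (our own sketch; we assume the background is modeled as an additional class, and BackTAL's exact loss formulation may differ):
\begin{verbatim}
import numpy as np

def click_frame_loss(cas, clicked_frames, bg_class):
    # cas: (T, C+1) per-frame logits, column bg_class = background.
    # Cross-entropy restricted to the annotated (clicked) frames.
    logits = cas[clicked_frames]
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[:, bg_class].mean()
\end{verbatim}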
\begin{table}[t]
\centering
\caption{Comparison of the background-click annotation with the action-click annotation on THUMOS14 dataset.}
\begin{threeparttable}[t]
\begin{tabular}{l|c}
\toprule
\toprule
Setting & [email protected] (\%) \\
\midrule
\midrule
Baseline & 28.6 \\
Baseline + Action Click & 29.1 \\
Baseline + Background Click & 35.0 \\
Baseline + Action \& Background Click & 36.8 \\
\bottomrule
\bottomrule
\end{tabular}%
\begin{tablenotes}
\item \textit{The action-click annotation is from SF-Net \cite{ma2020sf}.}
\end{tablenotes}
\end{threeparttable}
\label{ann-bg}%
\end{table}%
\begin{table}[t]
\centering
\caption{Ablation studies about the efficacy of the score separation module and the affinity module on THUMOS14 dataset.}
\begin{tabular}{cccc|c}
\toprule
\toprule
\multirow{2}[2]{*}{Baseline} & Background & Score & Affinity & mAP@ \\
& Click & Separation & Module & tIoU 0.5 (\%) \\
\midrule
\midrule
\checkmark & & & & 28.6 \\
\checkmark & \checkmark & & & 35.0 \\
\checkmark & \checkmark & \checkmark & & 35.6 \\
\checkmark & \checkmark & & \checkmark & 35.8 \\
\checkmark & \checkmark & \checkmark & \checkmark & 36.3 \\
\bottomrule
\bottomrule
\end{tabular}%
\label{tab:ablation-components}%
\end{table}%
\begin{figure*}[t]
\graphicspath{{figure/}}
\centering
\includegraphics[width=1\linewidth]{visualization.pdf}
\caption{Qualitative comparisons between the proposed BackTAL and SF-Net \cite{ma2020sf} on THUMOS14 dataset, where the start time and end time for each action instance is depicted. For the second visualization, please view in zoom and pay attention to the tennis ball to distinguish action frames from backgrounds.}
\label{fig:visualization}
\end{figure*}
\textbf{Effectiveness of each module.} Table \ref{tab:ablation-components} reports ablation studies on each module. Specifically, although the official implementation of BaS-Net \cite{lee2020background} obtains 27.0 mAP under tIoU threshold 0.5, we achieve 28.6 mAP by simplifying the data augmentation procedure. The background-click annotation brings an obvious performance improvement and achieves 35.0 mAP. On top of this, the score separation module and the affinity module bring 0.6 mAP and 0.8 mAP improvements, respectively. In the end, the complete BackTAL method achieves 36.3 mAP. There may be a concern that the score separation module and the affinity module do not bring improvements as obvious as the background-click annotation. For one thing, the core contribution of this work is to convert action-click supervision into background-click supervision, which achieves noticeable performance gains. For another, starting from a well-performing method, the score separation module and the affinity module further contribute 1.3 mAP gains in total, which confirms their effectiveness. In addition, we study the influence of the affinity loss $\mathcal{L}_{\rm aff}$ by removing it from the affinity module. This experiment obtains 35.1 mAP and verifies that removing the affinity loss makes the affinity module lose efficacy. We attribute this to insufficient supervision causing low-quality local attention masks.
\begin{table}[t]
\centering
\caption{Ablation studies about mining position information in different manners. Directly mining the position information is to perform supervised classification on attention weight (weight supervision) or on class activation sequence (CAS supervision). Moreover, we propose the score separation module to further mine the position information. Experiments are performed on THUMOS14 dataset.}
\begin{tabular}{l|c}
\toprule
\toprule
\multicolumn{1}{c|}{Setting} & [email protected] (\%) \\
\midrule
\midrule
Baseline & 28.6 \\
\midrule
Baseline + Weight Supervision & 34.1 \\
Baseline + CAS Supervision & 35.0 \\
Baseline + Weight Supervision + CAS Supervision & 35.2 \\
\midrule
Baseline + CAS Supervision + Score Separation & 35.6 \\
\bottomrule
\bottomrule
\end{tabular}%
\label{mine-position}%
\end{table}%
\textbf{Different ways to mine the position information.}
\label{mining-position-information}
Given the background-click annotation, a natural choice to mine the position information is performing supervised classification on the class activation sequence. Besides, as the network learns a class-agnostic attention weight to filter out backgrounds,
\begin{table}[thbp]
\centering
\caption{Ablation studies about the influence of the neighboring frame number, measured by mAP (\%) under IoU threshold 0.5 on THUMOS14 dataset.}
\begin{tabular}{l|cccc}
\toprule
\toprule
Neighboring frame number & 3 & 5 & 7 & 9 \\
\midrule
[email protected] (\%) & 36.3 & 36.1 & 36.0 & 35.8 \\
\bottomrule
\bottomrule
\end{tabular}%
\label{tabAblNeighFrame}%
\end{table}%
we can apply supervision to the attention weights by performing binary classification. Moreover, we can jointly mine the position information on both the class activation sequence and the attention weights. Experimental results are reported in Table \ref{mine-position}, under tIoU threshold 0.5. First of all, mining the position information brings considerable performance gains over the baseline method. To be specific, ``CAS Supervision" performs better than ``Weight Supervision", but utilizing these two kinds of supervision simultaneously does not exhibit an obvious further improvement. This demonstrates that multiple variants of simple frame-wise classification are essentially equivalent and cannot additively improve the localization performance. In contrast, the proposed score separation module explicitly models the responses of actions and backgrounds. The objective of enlarging the score gap lifts the response for action frames and suppresses the response for backgrounds, which further improves the performance from 35.0 mAP to 35.6 mAP.
\textbf{Ablations about the number of neighboring frames.} In the affinity module, we set the number of neighboring frames $h$ equal to the size of the temporal convolution kernel. Alternatively, we can first calculate the weighted sum of the $h$ neighboring frames and then perform the temporal convolution. As shown in Table \ref{tabAblNeighFrame}, we do not observe a performance improvement when varying $h$ from 3 to 9. Because the number of neighboring frames determines the scope of context, one potential reason is that a proper context (e.g., $h=3$) can enhance the feature representation, while excessive context brings unnecessary noise.
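To make the role of the local attention mask concrete, the following simplified sketch computes a frame-specific mask over the $h$-frame neighborhood (cosine similarity and softmax normalization are our assumptions; the exact affinity computation in BackTAL may differ):
\begin{verbatim}
import numpy as np

def local_attention_mask(emb, t, h):
    # emb: (T, D) per-frame embeddings.  Returns softmax weights
    # over the h-frame neighborhood of frame t, computed from
    # cosine similarity to the center frame t.
    T, _ = emb.shape
    half = h // 2
    idx = [min(max(t + d, 0), T - 1) for d in range(-half, half + 1)]
    unit = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-8)
    sim = unit[idx] @ unit[t]
    w = np.exp(sim - sim.max())
    return w / w.sum()
\end{verbatim}
Such a mask can then weight the $h$ neighboring frames before the temporal convolution, which corresponds to the alternative discussed above.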
\begin{table}[thbp]
\centering
\caption{Complexity comparison between our BackTAL and recent action localization methods, in terms of model parameters (M) and computational FLOPs (G).}
\footnotesize
\setlength{\tabcolsep}{2.0pt}
\begin{tabular}{llllll}
\toprule
\toprule
\multicolumn{1}{l|}{Research} & \multicolumn{1}{c}{3C-Net \cite{narayan20193c}} & \multicolumn{1}{c}{SF-Net \cite{ma2020sf}} & \multicolumn{1}{c}{UM \cite{lee2021weakly}} & \multicolumn{1}{c|}{{\scriptsize HAM-Net} \cite{islam2021a}} & \multicolumn{1}{c}{BackTAL} \\
\multicolumn{1}{l|}{Publication} & \multicolumn{1}{c}{ICCV 2019} & \multicolumn{1}{c}{ECCV 2020} & \multicolumn{1}{c}{{\scriptsize AAAI 2021}} & \multicolumn{1}{c|}{AAAI 2021} & \multicolumn{1}{c}{-} \\
\midrule
\multicolumn{1}{l|}{Para.} & \multicolumn{1}{c}{4.41} & \multicolumn{1}{c}{16.83} & \multicolumn{1}{c}{12.63} & \multicolumn{1}{c|}{29.15} & \multicolumn{1}{c}{4.29} \\
\multicolumn{1}{l|}{FLOPs} & \multicolumn{1}{c}{6.60} & \multicolumn{1}{c}{25.24} & \multicolumn{1}{c}{18.94} & \multicolumn{1}{c|}{43.73} & \multicolumn{1}{c}{6.51} \\
\midrule
\midrule
\multicolumn{6}{l}{``Para." indicates model parameters.} \\
\end{tabular}%
\label{tabMedComplex}%
\end{table}%
\textbf{Computational complexity.} Table \ref{tabMedComplex} compares the computational complexity in terms of model parameters and computational FLOPs. As can be seen, our approach has lower computational complexity than recent methods SF-Net \cite{ma2020sf}, UM \cite{lee2021weakly}, and HAM-Net \cite{islam2021a}. Notably, compared to the most recent method HAM-Net \cite{islam2021a}, our BackTAL only has 14.72\% of its parameters and 14.89\% of its FLOPs.
\textbf{Dimension of embedding.}
In the affinity module, BackTAL learns an embedding for each frame with the target of distinguishing action frames from background frames. Considering that different embedding dimensions lead to different representation abilities of the embedding vector, we carry out ablation experiments to study the influence of the embedding dimension $D_{\rm emb}$. As reported in Table \ref{embedding-dim}, BackTAL achieves the high performance of 36.3 mAP when $D_{\rm emb}$=32. Smaller embedding dimensions may constrain the representation ability, while larger embedding dimensions are more difficult to learn, which constrains the performance of BackTAL.
\begin{table}[t]
\centering
\caption{Exploration of different embedding dimensions for the temporal action localization performance on THUMOS14 dataset.}
\begin{tabular}{l|ccccc}
\toprule
\toprule
Embedding Dimension & 8 & 16 & 32 & 64 & 128 \\
\midrule
[email protected] (\%) & 35.7 & 35.9 & 36.3 & 36.2 & 35.9 \\
\bottomrule
\bottomrule
\end{tabular}%
\label{embedding-dim}%
\end{table}%
\begin{table}[t]
\centering
\caption{Ablation studies about the influence of four hyper-parameters: the balance coefficients in the complete loss function $\lambda$ and $\beta$, thresholds $\tau_{\rm same}$ and $\tau_{\rm diff}$ to calculate the embedding loss on THUMOS14 dataset.}
\begin{tabular}{c|ccc}
\toprule
\toprule
$\lambda$ & 0.8 & \cellcolor[rgb]{ .906, .902, .902}1.0 & 1.2 \\
[email protected] (\%) & 35.8 & \cellcolor[rgb]{ .906, .902, .902}36.3 & 36.2 \\
\midrule
$\beta$ & 0.6 & \cellcolor[rgb]{ .906, .902, .902}0.8 & 1.0 \\
[email protected] (\%) & 35.9 & \cellcolor[rgb]{ .906, .902, .902}36.3 & 36.2 \\
\midrule
$\tau_{\rm same}$ & 0.3 & \cellcolor[rgb]{ .906, .902, .902}0.5 & 0.7 \\
[email protected] (\%) & 36.1 & \cellcolor[rgb]{ .906, .902, .902}36.3 & 35.6 \\
\midrule
$\tau_{\rm diff}$ & 0.0 & \cellcolor[rgb]{ .906, .902, .902}0.1 & 0.2 \\
[email protected] (\%) & 35.6 & \cellcolor[rgb]{ .906, .902, .902}36.3 & 36.0 \\
\bottomrule
\bottomrule
\end{tabular}%
\label{tabCoefExp}%
\end{table}%
\begin{table}[thbp]
\centering
\caption{Ablation studies about the efficacy of the score separation module and the affinity module on the THUMOS14 dataset, based on the action-click annotation and mined background frames.}
\setlength{\tabcolsep}{4.0pt}
\begin{tabular}{ccccc|c}
\toprule
\toprule
\multirow{2}[2]{*}{Baseline} & Action & Mined & Score & Affinity & mAP@ \\
& Click & Bg. Frames & Separation & Module & tIoU 0.5 (\%) \\
\midrule
\midrule
\checkmark & & & & & 28.6 \\
\checkmark & \checkmark & & & & 29.1 \\
\checkmark & \checkmark & \checkmark & & & 30.2 \\
\checkmark & \checkmark & \checkmark & \checkmark & & 31.6 \\
\checkmark & \checkmark & \checkmark & & \checkmark & 31.8 \\
\checkmark & \checkmark & \checkmark & \checkmark & \checkmark & 32.4 \\
\bottomrule
\bottomrule
\end{tabular}%
\label{tabAblActionClickModule}%
\end{table}%
\textbf{Influence of hyper-parameters.} In the proposed BackTAL, the balance coefficients $\lambda$ and $\beta$ and the thresholds $\tau_{\rm same}$ and $\tau_{\rm diff}$ are empirically determined. We carry out ablation experiments to study the influence of these hyper-parameters. Specifically, we change one hyper-parameter while fixing the others, and verify the temporal action localization performance on THUMOS14 dataset. As shown in Table \ref{tabCoefExp}, when hyper-parameters change within a reasonable range, we observe a certain performance variation. For example, decreasing the coefficient $\lambda$ in the loss function drops the performance by 0.5 mAP. Increasing $\tau_{\rm same}$ makes the criterion for selecting similar action (or background) frames stricter. Consequently, BackTAL selects fewer vectors to learn the embedding space, which damages the performance. A similar tendency can be found when decreasing the threshold $\tau_{\rm diff}$. In contrast, decreasing $\tau_{\rm same}$ or increasing $\tau_{\rm diff}$ guides BackTAL to select more vectors to learn the embedding space. The redundant embedding vectors may bring noise to the learning process and constrain the performance.
\textbf{Performance based on the action-click annotation.} Moreover, we use the action-click annotation of SF-Net \cite{ma2020sf} and adopt SF-Net's strategy to mine background frames. This experiment obtains 32.4 mAP, as shown in Table \ref{tabAblActionClickModule}. On the one hand, owing to the proposed score separation module and affinity module, our BackTAL (32.4 mAP) exceeds SF-Net (30.5 mAP) when using the same action-click supervision. On the other hand, the performance gap between the action-click based method (32.4 mAP) and the background-click based BackTAL (36.3 mAP) demonstrates the effectiveness of the background-click supervision.
\subsection{Qualitative Analysis}
\label{sec-qual-exps}
This section analyzes the proposed BackTAL method in a qualitative manner. First of all, Fig. \ref{fig:similarity-exp} visualizes the local attention mask employed in the affinity module. It can be found that, given an action frame, the local attention mask highlights neighboring action frames and suppresses background frames, and vice versa. Based on this, the local attention mask serves as a frame-specific attention weight and guides the calculation of the temporal convolution. In the end, high-quality local attention masks assist in generating a discriminative class activation sequence.
Besides, Fig. \ref{fig:visualization} compares the proposed BackTAL with the baseline method and the strong competitor SF-Net \cite{ma2020sf}. Both the baseline method and SF-Net risk improperly regarding confusing background frames as actions. For example, the people surfacing after diving may be regarded as a part of the \textit{CliffDiving} action. The hand moving, without the complete swing action, can be regarded as a \textit{TennisSwing} action. Because of the insufficient ability to suppress confusing background frames, an algorithm may regard multiple adjacent action instances as one long action instance, or localize imprecise action boundaries. In contrast, the proposed BackTAL can consistently suppress confusing background frames and precisely separate adjacent action instances. In experiments, we also notice that BackTAL breaks some long action instances into several separated instances. These failure cases occur when there are extreme variations within the action instance. For example, a viewpoint change can cause extreme variation in object size. These failure cases remind us that weakly supervised temporal action localization should be developed further.
\section{Conclusion}
\label{sec-conclusion}
We develop the action-click supervision into the background-click supervision, and propose BackTAL for weakly supervised temporal action localization. We cast the learning process as mining both the position information and the feature information, and propose the score separation module and the affinity module to mitigate the action-context confusion challenge. In experiments, BackTAL establishes new high performance on two traditional benchmarks, i.e., THUMOS14 and ActivityNet v1.2, and reports a promising performance on a recent large-scale benchmark, HACS. Moreover, we verify the efficacy of explicitly separating action scores and background scores, as well as of dynamically attending to informative neighbors. In the future, we plan to introduce the spirit of background-click supervision to similar weakly supervised learning domains, e.g., weakly supervised object localization \cite{guo2021strengthen} and detection \cite{zhang2020weakly}, and pointly-supervised semantic segmentation \cite{bearman2016s}. Besides, it is promising to study the inherent correlations between position information and feature information to further develop the background-click supervision.
\bibliographystyle{IEEEtran}
\section{Introduction}
We consider the classic stable matching problem of \cite{gale1962college} described as follows. There is a two-sided market with two types of agents: workers on one side and firms on the other. Workers seek employment while firms have vacancies and so are looking to hire \GG{(let us assume for now that every firm wishes to employ just one worker and vice versa)}.
Workers have preference lists over firms and firms have preference lists over workers. This is modeled as a bipartite graph $G$, where the partite sets are workers $W$ and firms $F$ and there is a preference list for each $a\in W\cup F$ consisting of all neighbors of $a$ in $G$. A matching $M$ of workers to firms in $G$ is said to be stable if there is no edge $wf$ in $G$ ($w\in W$, $f\in F$) such that $w$ is either not matched in $M$ or matched to a firm less preferable to $w$ than $f$, and $f$ is either not matched in $M$ or matched to a worker less preferable to $f$ than $w$.
In the {\sc Stable Matching Problem} (SMP), given a bipartite graph $G$ and a set $L_G$ of preference lists, the goal is to find a stable matching.
The deferred acceptance algorithm of Gale and Shapley \cite{gale1962college} confirms that a stable matching always exists. By definition, every stable matching is maximal, but in general not every maximal matching is stable.
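For concreteness, a minimal sketch of the worker-proposing deferred acceptance algorithm in the one-to-one case follows (an illustration in Python; the dictionary-based input format is our own, and we assume the preference lists are mutually consistent, i.e., $w$ appears in $\ell(f)$ if and only if $f$ appears in $\ell(w)$):
\begin{verbatim}
def deferred_acceptance(worker_pref, firm_pref):
    # worker_pref[w] / firm_pref[f]: lists, most preferred first.
    rank = {f: {w: i for i, w in enumerate(pl)}
            for f, pl in firm_pref.items()}
    nxt = {w: 0 for w in worker_pref}  # next firm w proposes to
    engaged = {}                       # firm -> worker it holds
    free = list(worker_pref)
    while free:
        w = free.pop()
        if nxt[w] == len(worker_pref[w]):
            continue                   # w exhausted its list
        f = worker_pref[w][nxt[w]]
        nxt[w] += 1
        if f not in engaged:
            engaged[f] = w
        elif rank[f][w] < rank[f][engaged[f]]:
            free.append(engaged[f])    # f upgrades to w
            engaged[f] = w
        else:
            free.append(w)             # f rejects w
    return {w: f for f, w in engaged.items()}
\end{verbatim}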
In this paper we focus on the relationship between maximal matchings and stable matchings. In particular, when a maximal matching is not a stable matching, how far from being stable is it?
By virtue of the Rural Hospitals Theorem \cite{McVitieW70,Roth84,Roth86}, the set of agents matched is the same in every stable matching. Therefore, to get a larger stable matching, the preference lists need to be amended/altered.
Consider a worker who ranks firm $x$ just ahead of firm $y$ in their preference list. A \emph{swap} exchanges the relative placing of firms $x$ and $y,$ leaving the remainder of the preference list unchanged. (A similar statement holds for the preference lists of firms.) With swaps so defined, we can now address questions like that above concerning the distance between stable matchings and maximal matchings, where the unit of distance we employ is that of a swap.
The economic motivation for this paper concerns how a policy maker might amend an existing two-sided matching market (a so-called market without prices) so that the resulting stable outcomes are more socially desirable. For example, when a maximum matching is a stable matching, the number of unemployed individuals and the number of unfilled vacancies are minimised.
Historically, markets with prices were allowed to evolve naturally, with policy makers only interfering when they saw ways to `improve' outcomes.\footnote{Examples of tools available to policy makers in such markets include taxes, tariffs, quotas, outright bans, price ceilings, etc.} But studies that address how market interference might improve existing matching markets are surprisingly scarce.\footnote{This should not be confused with the enormous literature on \emph{market design}, that seeks to engineer from scratch matching markets with `desirable' properties.}
Above for simplicity we assumed that every firm wishes to employ just one worker and every worker wants only one job. This is an example of so-called one-to-one SMP. Such a model is restrictive as, for example, it does not allow us to consider natural situations like that where firms have more than one vacancy and/or a worker may seek more than one job \cite{Roth84,Roth86,KamadaKojima:2015}.
\noindent
\textbf{Contributions.} We study two related problems on using swaps to obtain large (maximum) stable matchings:
\begin{description}
\item[Problem 1:] Decide whether at most $k$ swaps can turn a given maximal matching into a stable matching.
\item[Problem 2:] Decide whether at most $k$ swaps are enough to have an arbitrary stable {\em maximum} matching.
\end{description}
While the two problems look similar at first glance, their computational complexities differ. For Problem 1, we consider many-to-many SMP and obtain a polynomial-time algorithm, see Theorem \ref{thm:capacitated_algorithm}.
Our design of the algorithm uses two tools:
(a) a new representation of SMP input as an extended bipartite graph ${\cal G}_G$ which encodes both $G$ and its preference lists and provides us with a convenient way to study swaps, and
(b) a reduction to submodular function minimization.
Theorem \ref{thm:capacitated_algorithm} may suggest that Problem 2 is also polynomial-time solvable. Unfortunately, this is highly unlikely as Theorem \ref{thm:hardness_swap_distance} shows that the problem is NP-hard and moreover if $k$ is the parameter then the problem is W[1]-hard\footnote{For an excellent introduction to parameterized complexity, see \cite{CyganFKLMPPS15}.},
even if $G$ has a perfect matching. We also obtain a lower bound on the running time for solving the problem using the Exponential Time Hypothesis (ETH) \cite{ImpagliazzoP99}.
\noindent
\textbf{Related work.} \GG{The classical formulation of SMP uses agents and their preference lists, or equivalently bipartite graphs and preference lists of the vertices.
Maffray \cite{Maffray1992} gave another representation of SMP using directed graphs, which does not use explicit preference lists. Maffray's representation is better suited to some SMP studies than the standard one \cite{BalinskiR97,BalinskiR98,GutinNY21}. Our new representation of SMP uses extended bipartite graphs and no preference lists either.}
\MW{Boehmer et al. \cite{BoehmerBHN20} give examples of external
manipulations which can lead to changes in one-to-one SMP preference
lists, among them the notion of a swap. They study several problems
related to matchings that are stable after the application of a small
number of such manipulation steps, including proving that Problem 1
is polynomial-time solvable for one-to-one SMP. Thus,
Theorem~\ref{thm:capacitated_algorithm} generalizes their result.}
Chen et al. \cite{ChenSS19} study the stable matching $k$-robustness problem for one-to-one SMP, which is the problem of deciding whether $(G,L_G)$ has a stable matching remaining stable even after at most $k$ swaps. Surprisingly, the problem turns out polynomial-time solvable. Chen et al. \cite{ChenSS19} observed that Problem 2 is in XP if $k$ is the parameter and proved that Problem 2 is W[1]-hard if $G$ has a perfect matching and the parameter is $n_u$ the number of vertices unmatched in any stable matching of $G$.
\GG{By Propositions~\ref{prop:comparison_to_chen_at_all} and \ref{prop:comparison_to_chen_at_all2}, our W[1]-hardness result is strictly stronger than that in \cite{ChenSS19}.}
Mai and Vazirani \cite{MaiV18,MaiV2020} studied one-to-one SMP when the number of vertices in both partite sets of $G$ is the same, denoted by $n.$ A {\em shift} is an exchange of any two (not necessarily consecutive) vertices in a preference list.
Mai and Vazirani \cite{MaiV18} considered a given distribution over all shifts and studied the effect of a random shift on matching stability. They obtained a polynomial-time algorithm for finding a matching which is stable in the input instance of SMP and has the maximum probability of remaining stable after a random shift took place. This algorithm applies to the problem we study in this paper only for $k=1$ as a sequence of two swaps may not be a shift. Mai and Vazirani \cite{MaiV2020} considered a set $S$ of permutations in a single preference list and designed an $O(|S|{\rm poly}(n))$-time algorithm for finding a matching which is stable in the given instance of SMP and remains so after any permutation in $S$ took place.
Bredereck et al. \cite{BredereckCKLN20} designed a polynomial-time algorithm for the following problem: given a bipartite graph $G$, a pair $L$ and $L'$ of preference lists for $G$, a stable matching $M$ of $(G,L)$ and a natural number $k$, decide whether $(G,L')$ has a stable matching $M'$ such that the symmetric difference of $M$ and $M'$ is at most $k.$
A way to increase the size of stable matchings is to relax the zero blocking pair condition.
Gupta et al. \cite{gupta2020parameterized} studied the almost stable matching problem, which is the problem of finding a matching whose size exceeds that of a stable matching by at least $t$ and has at most $k$ blocking edges. Let $d$ be the maximum length of a preference list.
Gupta et al. \cite{gupta2020parameterized} proved the following surprising result: the problem is intractable even when parameterized by the combined parameter $k+t+d.$ This result demonstrates significant computational difficulty in solving natural approximating stable matching problems.
\section{Terminology, Notation and Preliminaries}\label{sec:another_representation}
In this section, we will first give a more formal definition of the {\sc Stable Matching Problem} (SMP) using bipartite graphs and preference lists for vertices and then introduce an equivalent new representation of SMP which also uses bipartite graphs, but without explicit preference lists. The new representation gives us a more suitable approach than the standard one for some SMP problems studied in this paper.
The input of SMP is $(G,L_G)$, where
$G=(A,B;E)$ is a bipartite graph with partite sets $A$ and $B$ and edge set $E$ and $L_G$ is the set of lists $\ell(v)=(u_1,\dots ,u_{d(v)})$ which order all the neighbors of every $v\in A\cup B.$
If $i<j$ we say that $v$ {\em prefers} $u_i$ to $u_j.$
Let $M$ be a matching in $G$. Then an edge $ab\in E\setminus M$ with $a\in A,b\in B$ is a {\em blocking edge} of $M$ if the following two conditions hold simultaneously: (i) either $a$ is not an endpoint of any edge in $M$ or there is a $b'\in B$
such that $ab'\in M$ and $a$ prefers $b$ to $b'$,
and, similarly, (ii) either $b$ is not an endpoint of any edge in $M$ or there is an $a'\in A$ such that $a'b\in M$ and $b$ prefers $a$ to $a'$.
Figure \ref{fig:reduction_vertex_cover1} shows preference lists for the vertices of the depicted graph; $a_1$ prefers $b_1$ to $b_3$ and $b_3$ to $b_2$; $(1,3,2)$ only includes indexes of the corresponding vertices in $B$. Figure \ref{fig:reduction_vertex_cover1} also gives examples of blocking edges.
A matching $M$ is {\em stable} if it has no blocking edges.
It follows from the definition of a blocking edge that a stable matching $M$ is {\em maximal} i.e. no edge from $G$ can be added to $M$ such that it remains a matching. A matching $M$ is {\em perfect} if $|M|=|A|=|B|.$ The aim of SMP is to find a stable matching in $G$. The deferred acceptance algorithm of
\cite{gale1962college} outputs a stable matching in $G$. In particular, \cite{gale1962college} studied the above wherein (i) the bipartite graph $G$ is complete, and (ii) the partite sets are of equal size ($|A| =|B|$).
For an instance of SMP, a {\em swap} $\sigma$ is an adjacent transposition in $\ell(v)$ for some $v\in A\cup B$ i.e. this operation results in exchanging two consecutive elements $u_{i-1},u_{i}$ of $\ell(v)$ and is denoted by
$u_{i-1} \leftrightarrow u_i$. We denote by $\sigma(L_G)$ the new set of preference lists after the swap $\sigma$.
Let us now introduce the new SMP representation, which fuses $G$ and $L_G$ into a single bipartite graph, called the {\em extended bipartite graph}, with specifically structured partite sets as follows.
Let $G=(A,B;E),$ where $A=\{a_1,\dots ,a_{n}\}$ and $B=\{b_1,\dots ,b_{n'}\}.$ Let $d_i$ ($d'_i$, respectively) be the degree of $a_i$ ($b_i$, respectively).
Now we construct a new bipartite graph {${\cal G}_G$} by replacing every $a_i$ $(i\in [n])$ with a $d_i$-tuple $A_i=(a^i_1, \dots, a^i_{d_i})$ of vertices
and every $b_i$ $(i\in [n'])$ with a $d'_i$-tuple $B_i=(b^i_1, \dots, b^i_{d'_i})$ of vertices.
{If $G$ is clear from the context, we may omit the subscript and write ${\cal G}$ instead of ${\cal G}_G$ and $L$ instead of $L_G$.}
The partite sets of $\cal G$ are ${\cal A}=A_1\cup \dots \cup A_n$ and ${\cal B}=B_1\cup \dots \cup B_{n'}$.
The edge set $E({\cal G})$ of ${\cal G}$ is defined as follows: $a^i_pb^j_q$ is an edge in ${\cal G}$ if $a_ib_j\in E$, $b_j$ is the $p$'th element of $\ell(a_i)$ and $a_i$ is the $q$'th element of $\ell(b_j).$ Note that the maximum degree of $\cal G$ is 1.
Thus, ${\cal G}=({\cal A} ,{\cal B};E({\cal G})).$ Figure \ref{fig:reduction_vertex_cover2} shows $\cal G$ for $(G,L)$ in Figure \ref{fig:reduction_vertex_cover1}.
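A short sketch of this construction (our own illustration; preference lists are given as dictionaries, with positions counted from 1 as in the definition above):
\begin{verbatim}
def extended_graph(pref_A, pref_B):
    # pref_A[i]: preference list of a_i over indices j of B;
    # pref_B[j]: preference list of b_j over indices i of A.
    # Returns edges ((i, p), (j, q)) meaning a^i_p -- b^j_q,
    # i.e. b_j is the p-th choice of a_i and a_i the q-th of b_j.
    pos_B = {(j, i): q for j, pl in pref_B.items()
             for q, i in enumerate(pl, start=1)}
    return [((i, p), (j, pos_B[(j, i)]))
            for i, pl in pref_A.items()
            for p, j in enumerate(pl, start=1)]
\end{verbatim}
Every vertex $a^i_p$ is incident to exactly one edge of the output, matching the observation that the maximum degree of $\cal G$ is 1.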
A \emph{matching} in $\cal G$
is a subset $M$ of $E({\cal G})$ such that for every $i\in [n]$,
there is at most one edge between $A_i$ and $\cal B$ that belongs to $M$,
and for every $i'\in [n']$, there is
at most one edge between $\cal A$ and $B_{i'}$ that belongs to $M$.
A matching $M$ is said to be \emph{maximal} if no edge of $\cal G$ can be added to it.
A matching $M$ is \emph{stable} if there is no edge
of the form $a^i_j b^{i'}_{j'} \in E({\cal G}) \setminus M$ such that:
\begin{eqnarray*}
j &< &\min \{n+1\} \cup \{r \mid a^i_r \mbox{ is matched in } M\} \GG{\mbox{ and }}\\
j' & < &\min \{n'+1\} \cup
\{r' \mid b^{i'}_{r'} \mbox{ is matched in } M\}
\end{eqnarray*}
Otherwise, such an edge is called a \emph{blocking} edge. Figure \ref{fig:reduction_vertex_cover2} illustrates the notion of a blocking edge in $\cal G$.
{It is not hard to see that the blocking edges in $G$ (the stable matchings of $G$, respectively) are in one-to-one correspondence with the blocking edges in $\cal G$ (the stable matchings of $\cal G$, respectively).}
A \emph{swap} $\sigma$ is an adjacent transposition of two vertices in some $A_i$ $(i\in [n])$ or $B_{i'}$ $({i'}\in [n'])$. Thus, swaps can be
of the form $a^i_j \leftrightarrow a^i_{j-1}$ in some $A_i$,
or $b^{i'}_{j'} \leftrightarrow b^{i'}_{j'-1}$ in some $B_{i'}$.
We denote by $\sigma(\mathcal{A}, \mathcal{B}) =
(\mathcal{A}', \mathcal{B}')$ the resulting values
of $\mathcal{A}$ and $\mathcal{B}$.
\begin{figure}[hbtp]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\input{fig_reduction_vertex_cover1_MM}
\caption{An example of an SMP instance with the preference lists.
A given matching $M$ is represented by the bold edges and
the blocking edges are dashed.}
\label{fig:reduction_vertex_cover1}
\end{subfigure}
\hspace{2cm}
\begin{subfigure}[b]{0.4\textwidth}
\centering
\input{fig_reduction_vertex_cover2_MM}
\caption{The corresponding new representation, ${\cal G}_G$. The higher
a vertex is placed, the higher its preference.}
\label{fig:reduction_vertex_cover2}
\end{subfigure}
\caption{Illustration of the correspondence between the
two representations of an SMP instance.}
\label{fig:another_representation}
\end{figure}
We also consider the more general capacitated extension of SMP (it is equivalent to many-to-many SMP), where every vertex of $G$ has a positive integer capacity given by a function $c : A \cup B \to \mathbb{Z}_{+}.$ Then we are looking for a stable generalization of a matching, which is a subgraph $S$ of $G$ where the degree $d_S(v)\le c(v)$ for every $v\in A\cup B.$ The notion of a blocking edge can be extended from the uncapacitated case (where $c(v)=1$ for every $v$) to the capacitated case by saying that $ab\in E\setminus E(S)$ is a {\em blocking edge} of $S$
if the following two conditions hold simultaneously: (i) either $d_S(a) < c(a)$ or there is a $b'\in B$
such that $ab'\in E(S)$ and $a$ prefers $b$ to $b'$,
and, similarly, (ii) either $d_S(b) < c(b)$ or there is an $a'\in A$ such that $a'b\in E(S)$ and $b$ prefers $a$ to $a'$.
A subgraph $S$ is {\em stable} if it has no blocking edges.
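The definition can be checked mechanically; the following sketch (illustrative only; the dictionary-based input format is our own) enumerates the blocking edges of a subgraph $S$ in a capacitated instance:
\begin{verbatim}
def blocking_edges(edges, pref, cap, S):
    # edges: pairs (a, b); pref[v]: preference list of v (best
    # first); cap[v]: capacity of v; S: subset of edges whose
    # degrees respect the capacities.
    rank = {v: {u: i for i, u in enumerate(pl)}
            for v, pl in pref.items()}
    partners = {v: [] for v in pref}
    for a, b in S:
        partners[a].append(b)
        partners[b].append(a)

    def accepts(v, u):
        # v has spare capacity, or u beats v's worst partner
        if len(partners[v]) < cap[v]:
            return True
        worst = max(partners[v], key=lambda x: rank[v][x])
        return rank[v][u] < rank[v][worst]

    return [(a, b) for (a, b) in edges
            if (a, b) not in S and accepts(a, b) and accepts(b, a)]
\end{verbatim}
$S$ is stable exactly when this list is empty.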
\section{Making a given matching stable}\label{sec:generalisation_to_capacities}
\MW{In this section, we show that Problem~1 can be solved in polynomial
time, even in the capacitated case; i.e., given a capacitated SMP
instance $I=(G, L_G)$ and a subgraph $S$ of $G$, there is a
polynomial-time algorithm for computing the minimum number of swaps
required to make $S$ stable (if possible). Our algorithm builds on
the extended bipartite graph $\cal G$ defined in the previous
section. In the uncapacitated case, it is possible to show that the
problem reduces to a vertex cover instance on a bipartite graph
derived from $\cal G$; however, an efficient solution for Problem~1 in
the uncapacitated case was already given by Bredereck et
al.~\cite{BredereckCKLN20}, so we omit the details and focus on the
capacitated case.}
\MW{Note that the standard method of reducing capacitated matching
problems to uncapacitated problems, by taking $c(v)$ copies of every
vertex $v$ (with capacity $c(v)$), does not work here since the cost
of making swaps in $\ell(v)$ is lost in the translation.
Instead we solve the problem using more powerful tools. We show that
the number of swaps required in $\ell(v)$, as a function of a set of
blocking edges to fix at $v$, is \emph{submodular}, allowing us to
reduce Problem~1 to a case of submodular function minimization.}
For a set \(X\), a function $f\colon \mathcal{P}(X)\rightarrow \mathbb{N}$ is submodular if for every \(Y, Z\subseteq X\) we have that $f(Y)+f(Z)\ge f(Y\cup Z) + f(Y\cap Z)$. In our proof it will be convenient to also consider the following equivalent definition of a submodular function. It is well known that $f\colon \mathcal{P}(X)\rightarrow \mathbb{N}$ is submodular if and only if for every
\(Y\subseteq X\) and every \(x_1, x_2\in X\setminus Y\) such that $x_1\neq x_2$ it holds that \(f(Y\cup \{x_1,x_2\}) - f(Y\cup \{x_1\})\le f(Y\cup \{x_2\}) - f(Y)\). Before we proceed to prove the main result of this section, we state the following result from \cite{schrijver2000combinatorial}.
\begin{theorem}[\cite{schrijver2000combinatorial}]\label{thm:schrijver}
Let $X$ be a finite set and $f\colon \mathcal{P}(X)\rightarrow \mathbb{N}$ a submodular function such that for every \(Y\subseteq X\) the value \(f(Y)\) can be computed in polynomial time. Then there is a strongly polynomial-time algorithm minimizing the function \(f\).
\end{theorem}
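As a sanity check on small ground sets, submodularity can also be verified by brute force before invoking Theorem~\ref{thm:schrijver}; a minimal sketch (exponential in $|X|$, for illustration only):
\begin{verbatim}
from itertools import combinations

def is_submodular(f, X):
    # Checks f(Y) + f(Z) >= f(Y | Z) + f(Y & Z) for all Y, Z.
    subsets = [frozenset(c) for r in range(len(X) + 1)
               for c in combinations(X, r)]
    return all(f(Y) + f(Z) >= f(Y | Z) + f(Y & Z)
               for Y in subsets for Z in subsets)

# e.g. is_submodular(lambda Y: min(len(Y), 2), {1, 2, 3}) == True
\end{verbatim}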
\begin{theorem}\label{thm:capacitated_algorithm}
Let $(G=(A,B,E, c), L_G)$ be an SMP instance
with capacities and $S$ a subgraph of $G$ such that $d_S(v)\le c(v)$ for every $v\in A\cup B$.
There exists a polynomial-time algorithm that
finds a minimum length sequence $\sigma_1, \dots, \sigma_k$ of swaps
such that $S$ is a stable subgraph in
$(G, (\sigma_k\circ\dots\circ\sigma_1(L_G)))$.
\end{theorem}
\begin{proof}
Recall that $n=|A|$, $n'=|B|$, $A= \{a_1, \ldots, a_n\}$, and $B= \{b_1, \ldots, b_{n'}\}$. We first construct the extended graph \(\mathcal{G}_G\) with $\ensuremath{\mathcal{A}}=A_1\cup \ldots \cup A_{n}$ and \(\ensuremath{\mathcal{B}}=B_1\cup \ldots \cup B_{n'}\). Recall that $A_i=\{a^i_1,\ldots, a^i_{d_G(a_i)}\}$ corresponds to the vertex $a_i\in A$ and $B_j=\{b^j_1,\ldots, b^j_{d_G(b_j)}\}$ corresponds to the vertex $b_j\in B$.
Note that $S$ induces a matching in \(\mathcal{G}_G\) and we denote this matching $M$.
Our goal is to determine a minimal sequence of swaps making $M$ stable.
Let $E_b$ be the set of blocking edges of $M$. Furthermore, let $E^A_b\subseteq E_b$
be the subset of blocking edges of $M$ such that for every edge $a^i_pb^j_q\in E^A_b$
it holds that $d_S(a_i) = c(a_i)$. Note that if $e=a^i_pb^j_q \in E_b \setminus E^A_b$, then $e$
cannot become
non-blocking by permuting $A_i$. Similarly let $E^B_b\subseteq E_b$
be the subset of blocking edges for $M$ such that for every edge $a^i_pb^j_q\in E^B_b$
it holds that $d_S(b_j) = c(b_j)$. Note that if $E^A_b\cup E^B_b\neq E_b$, then there is
a blocking edge in $E_b$ such that neither endpoint of the edge has full capacity
and $S$ cannot be made stable.
The idea of the proof is that making $M$ stable means
unblocking every blocking edge, where unblocking a blocking edge corresponds to
moving at least one of its endpoints below the vertices matched by $M$.
Define a cost function $h: \mathcal{P}(E^A_b) \to \mathbb{N}$
that maps a set $F \subseteq E^A_b$ of blocking edges
to the minimum number of swaps to fix every edge in $F$ by moving its endpoint in $\ensuremath{\mathcal{A}}$ and every other blocking edge by moving its endpoint in $\ensuremath{\mathcal{B}}$.
We define for every \(i\in \{1,2,\ldots, n\} \)
the function $f_i\colon \mathcal{P}(E_b) \to \mathbb{N}$
such that
for every \(F\subseteq E_b\), the value \(f_i(F)\) is equal to the minimum
number of swaps in $A_i$ moving every vertex in \(A_i\cap\{u \mid uv\in F\}\) below
the matched vertices, i.e., the vertices in \(A_i\cap \bigcup_{e\in M}e\). Furthermore,
for every \(j\in \{1,2,\ldots, n'\} \) we define the function
$g_j\colon \mathcal{P}(E_b) \to \mathbb{N}$\ such that
for every \(F\subseteq E_b\), the value \(g_j(F)\) is equal to the minimum
number of swaps in $B_j$ moving every vertex in \(B_j\cap\{u \mid uv\in F\}\) below
the matched vertices, i.e., the vertices in \(B_j\cap \bigcup_{e\in M}e\).
Now, let us consider a minimal sequence of swaps $\sigma_1, \dots, \sigma_k$ making
$M$ stable, and let $F$ be the subset of
blocking edges that are fixed using swaps in $\ensuremath{\mathcal{A}}$; i.e., if $a^i_pb^j_q\in F$, then $d_S(a_i)=c(a_i)$ and \(a^i_p\) is below every matched vertex in $A_i$ after applying the sequence of swaps $\sigma_1, \dots, \sigma_k$ and if $a^i_pb^j_q\in E_b\setminus F$, then $d_S(b_j)=c(b_j)$ and \(b^j_q\) is below every matched vertex in $B_j$ after applying the sequence of swaps $\sigma_1, \dots, \sigma_k$. Clearly, $E_b\setminus E^B_b \subseteq F\subseteq E^A_b$.
It is easy to see that the number of swaps in the sequence $\sigma_1, \dots, \sigma_k$ that are in $A_i$ is at least $f_i(F)$ and the number of swaps that are in $B_j$ is at least $g_{j}(E_b \setminus F)$.
Therefore, the total number of swaps $k$ is at least
$h(F) = \sum_{i\in [n]} f_i(F) + \sum_{j\in [n']} g_{j}(E_b \setminus F)$. On the other
hand, it is easy to see that for any $F'$ such that \(E_b\setminus E^B_b \subseteq F'\subseteq E^A_b\) it is always possible to
make $M$ stable with at most $h(F')$ swaps. Hence $k=h(F)$ and to find
a minimum length sequence of swaps, it suffices to find $F$ such that $E_b\setminus E^B_b \subseteq F\subseteq E^A_b$ minimizing $h(F)$.
To avoid the constraint $E_b\setminus E^B_b \subseteq F$, let $J'\subseteq [n']$ be the set of indices such that $j \in J'$ if and only if $d_S(b_j) < c(b_j)$. For every $j\in J'$ we define a function $g'_j\colon \mathcal{P}(E_b)\rightarrow \mathbb{N}\cup \{\infty\}$ such that for every $F\subseteq E_b$, we have $g'_j(F)=0$ if \(B_j\cap\{u \mid uv\in F\} = \emptyset\) and $g'_j(F)=\infty$\footnote{Here infinity can be replaced by some large number, e.g., $(nn')^2$ would suffice.} otherwise. We define $h'\colon \mathcal{P}(E^A_b)\rightarrow \mathbb{N}\cup \{\infty\}$ such that $h'(F)=h(F)+\sum_{j\in J'} g'_{j}(E_b \setminus F)$. Note that for every $F$ such that \(E_b\setminus E^B_b \subseteq F\subseteq E^A_b\), we have $h'(F)=h(F)$. On the other hand if $a^i_pb^j_q\in (E_b\setminus E^B_b)\setminus F$, then $a^i_pb^j_q\in E_b\setminus F$ and $g'_j(F)=\infty$ implying $h'(F)=\infty$.
We will show that $h'$ is submodular and for fixed \(F\subseteq E_b\) we can compute $h'(F)$ in polynomial time. Then we can use Theorem~\ref{thm:schrijver}
to minimise $h'$ in polynomial time.
Elementary operations on submodular functions show that it suffices to
prove that every function $f_i$, every function $g_j$, and every function $g'_j$ is submodular.
It is easy to see that $g'_j$ is submodular and computable in polynomial time.
The proof for a function $g_j$ is analogous to the proof of
submodularity of function $f_i$ and hence it is omitted. Let us now
fix \(i\in [n]\) for the rest of the proof.
For the ease of explanation, we partition $A_i$ into three kinds
of vertices:
\begin{enumerate}[label=(\roman*)]
\item the \emph{red} vertices -- the endpoint of
a blocking edge in $F$,
\item the \emph{blue} vertices -- the matched vertices in $M$, and
\item the \emph{black} vertices -- the rest.
\end{enumerate}
First, we make some basic observations:
\begin{itemize}
\item red vertices never need to go up,
\item blue vertices never need to go down,
\item two vertices of the same kind never need to be swapped,
\item if $a^i_r$ is red and $a^i_b$ blue below $a^i_r$, then
$a^i_r$ and $a^i_b$ must be swapped.
\end{itemize}
For any vertex $a^i_j$, we denote by
$B_{F}(j)$ the number of blue vertices below $a^i_j$
and $R_{F}(j)$ the number of red vertices above $a^i_j$.
The key observation is the following claim.
\begin{claim}\label{clm:value_sv}
$f_i(F) = \sum_{a^i_r \text{ is red}} B_{F}(r) + \sum_{a^i_j \text{ is black}}
\min\{B_{F}(j),R_{F}(j)\}.$
\end{claim}
\begin{proof}
Let $s(j)$ be the number of swaps with the vertex $a^i_j$.
Observe that $$f_i(F) = \sum_{a^i_j \in A_i, a^i_j \text{ is black}} s(j) + N,$$
where $N$ is the number of swaps between a red and a blue vertex,
but $$N = |\{(a^i_r,a^i_b) \mid a^i_r \text{ is red, }a^i_b \text{ is blue, and }
a^i_r \text{ is above } a^i_b\}|,$$ which is equal to $$\sum_{a^i_r \text{ is red}} B_{F}(r)= \sum_{a^i_b \text{ is blue}}R_{F}(b)$$ and can be computed in time $\mathcal{O}(n^2)$. Note that given \(F\) the value $N$ is fixed. Hence, it suffices to minimize $\sum_{a^i_j \text{ black}} s(j)$.
This is because our goal is to find the minimum length sequence of swaps
such that all red vertices are below the blue vertices.
First, it is easy to see that $s(j) \ge \min\{B_{F}(j),R_{F}(j)\}$, because
$a^i_j$ has to be swapped either with all the blue vertices below it, so that they end up above $a^i_j$,
or with all the red vertices above it, so that they end up below $a^i_j$.
It remains to give a sequence of swaps such that
$s(j) = \min\{B_{F}(j),R_{F}(j)\}$. One can verify that the following procedure gives such a sequence:
while at least one of the following rules can be applied, perform it
\begin{itemize}
\item if $a^i_r$ is a red vertex and $a^i_b$ a blue vertex such that $r=b-1$, that is, $a^i_r$ is the predecessor of $a^i_b$,
swap $a^i_r$ and $a^i_b$,
\item if $a^i_j$ is black, $a^i_{j+1}$ is blue, and $B_{F}(j) \leq R_{F}(j)$, then swap $a^i_j$ and $a^i_{j+1}$,
\item if $a^i_j$ is black, $a^i_{j-1}$ red, and $R_{F}(j) < B_{F}(j)$, then swap $a^i_j$ and $a^i_{j-1}$.
\end{itemize}
Let us fix a black vertex $a^i_j$. It is easy to show by induction on $\min\{B_{F}(j),R_{F}(j)\}$ that the number of swaps
of this vertex is exactly $\min\{B_{F}(j),R_{F}(j)\}$. Clearly, if $\min\{B_{F}(j),R_{F}(j)\} = 0$, then $a^i_j$ never swaps with any vertex in the above sequence. If $R_{F}(j) < B_{F}(j)$, then $a^i_j$ might swap with the red vertex $a^i_{j-1}$. In this case the black vertex we consider becomes $a^i_{j-1}$, the number of blue vertices below remains the same, the number of red vertices above decreases by one, and $s(j) = \min\{B_{F}(j),R_{F}(j)\}$ by induction. Similarly, if $R_{F}(j) \ge B_{F}(j)$, then $a^i_j$ might swap with the blue vertex $a^i_{j+1}$ and $s(j) = \min\{B_{F}(j),R_{F}(j)\}$ follows by induction as well.
It remains to show that if none of these rules can be applied, then
there is no red vertex above a blue one.
For the sake of contradiction let us assume that none of the rules apply and there is some red vertex above a blue vertex.
Let $a^i_r$ be a red vertex and $a^i_b$ a blue vertex such that $r<b$ (i.e., $a^i_r$ is above $a^i_b$) and all vertices between $a^i_r$ and $a^i_b$ are black. Clearly $a^i_{r}\neq a^i_{b-1}$, else the first rule applies. Hence both vertices $a^i_{r+1}$ and $a^i_{b-1}$ are black (possibly $r+1=b-1$). Now, all vertices between $a^i_{r+1}$ and $a^i_{b-1}$ are black and it follows that $B_{F}(r+1)=B_{F}(b-1)$ and $R_{F}(r+1)=R_{F}(b-1)$. Since the second rule does not apply, it follows that $B_{F}(b-1) > R_{F}(b-1)$. However, that implies $R_{F}(r+1) < B_{F}(r+1)$ and the third rule applies, which is a contradiction.
\end{proof}
Let $N_{F'}(j)$ be the number of black
vertices $a_b^i$ below $a_j^i$ such
that $R_{F'}(b) <B_{F'}(b)$. From Claim~\ref{clm:value_sv} we can now deduce that
if $e,e' \in E_b$ such that $e'\cap A_i = a^i_j$, then
\begin{align*}
f_i(F&+e+e') - f_i(F+e) \\
&= B_{F+e+e'}(j) - \min\{B_{F+e}(j), R_{F+e}(j)\} + N_{F+e}(j)\\
&= B_{F+e}(j) - \min\{B_{F+e}(j), R_{F+e}(j)\} + N_{F+e}(j) \\
&= \max\{0, B_{F+e}(j) - R_{F+e}(j)\} + N_{F+e}(j) \\
&\leq \max\{0, B_{F}(j) - R_{F}(j)\} + N_F(j)\\
&=f_i(F+e') - f_i(F).
\end{align*}
The above inequality follows because $R_{F+e}(j) \geq R_{F}(j)$ and $B_{F+e}(j)=B_{F}(j)$. Furthermore $N_{F+e}(j) \leq N_F(j)$ since $R_F(b)
\leq R_{F+e}(b)$ and $B_{F+e}(b)=B_F(b)$.
It follows that each $f_i$ is computable in polynomial time and submodular.
We can now efficiently minimise the total cost $h'$
by the algorithm of Theorem~\ref{thm:schrijver}
and thus compute in polynomial time a minimal sequence of swaps
making $M$ and hence $S$ stable.
\end{proof}
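To make the closed form of Claim~\ref{clm:value_sv} concrete, the following sketch (our own illustration; the coloring of a tuple $A_i$ is encoded as a string, from the most preferred position downwards) evaluates $f_i(F)$:
\begin{verbatim}
def f_i(colors):
    # colors: string over {'r','b','k'}; 'r' = red (endpoint of an
    # edge of F), 'b' = blue (matched in M), 'k' = black.
    total, reds_above = 0, 0
    for j, c in enumerate(colors):
        blues_below = colors[j + 1:].count('b')
        if c == 'r':
            total += blues_below
        elif c == 'k':
            total += min(blues_below, reds_above)
        reds_above += (c == 'r')
    return total

# e.g. f_i('rkb') == 2: the red slot contributes one blue below
# it, and the black slot contributes min(1, 1) = 1.
\end{verbatim}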
\section{Hardness results}\label{sec:NP_hardness}
\newcommand{\textsc{Swap Distance to Perfect Stable Matching}}{\textsc{Swap Distance to Perfect Stable Matching}}
\newcommand{$\Delta$-PSM}{$\Delta$-PSM}
In this section we consider the problem of finding a minimum length sequence of swaps that leads to an arbitrary maximum stable matching. Note that even though we can find a minimum length sequence for a fixed maximum matching, there might be exponentially many different maximum matchings, so the algorithm from the previous section cannot be applied to solve this problem. Indeed, we will show that the problem of deciding whether there exists a sequence of at most $k$ swaps that leads to an instance with a perfect stable matching is NP-hard and W[1]-hard parameterized by $k$. We will call this problem \textsc{Swap Distance to Perfect Stable Matching}\ ($\Delta$-PSM).
\EE{Note that Chen et al. \cite{ChenSS19} proved that $\Delta$-PSM\ is W[1]-hard if $G$ has a perfect matching and the parameter is $n_u,$ the number of vertices unmatched in any stable matching of $G$. The following two propositions show that W[1]-hardness with parameter $k$ is strictly stronger than the result by Chen et al. \cite{ChenSS19}.}
\begin{proposition}\label{prop:comparison_to_chen_at_all}
\EE{Let $I=(G, L)$ be an SMP instance, let $k$ be the minimum number of swaps that leads to an instance with a perfect stable matching, and let $n_u$ be the number of vertices unmatched in any stable matching of $G$. Then $n_u\le 2k$.}
\end{proposition}
\begin{proof}
\EE{Let $M$ be a stable matching in $I$ and $M'$ a perfect stable matching in an instance $I'$ obtained from $I$ by exactly $k$ swaps. Note that $M\cup M'$ induces a union of cycles and paths. Moreover, $M\cup M'$ has to contain at least $|M'| - |M| = \frac{n_u}{2}$ paths that start and end with an edge in $M'$ and the consecutive edges alternate between an edge in $M$ and an edge in $M'$. Note that any such alternating path starts and ends in a vertex unmatched by $M$. Moreover, the set of unmatched vertices is the same in any stable matching of $I$ due to the Rural Hospitals Theorem. Let $P=v_1v_2v_3\ldots v_{q}$ be one such alternating path, where $v_1$ and $v_q$ are unmatched by $M$. The edges} \GG{$v_{2i-1}v_{2i}$, for $i\in [\frac{q}{2}]$, are in $M'$ and the edges $v_{2i}v_{ 2i+1}$, for $i\in [\frac{q-2}{2}]$, are in $M$.}
\EE{We show that there is a swap in a preference list of at least one vertex of $P$. The proposition then follows from the fact that there are at least $\frac{n_u}{2}$ such paths and these paths are vertex disjoint.}
\EE{For the sake of contradiction, let us assume that there is no swap in a preference list of any vertex of $P$. First note that since $v_1$ is unmatched by $M$, the vertex $v_2$ prefers $v_3$ to $v_1$, otherwise $v_1v_2$ would be a blocking edge of $M$. We now show by induction that, since there is no swap in a preference list of any vertex of $P$, $v_2$ also prefers $v_1$ to $v_3$, which is a contradiction. Since $v_q$ is unmatched by $M$ and $v_{q-2}v_{q-1}\in M$, the vertex $v_{q-1}$ prefers $v_{q-2}$ to $v_{q}$. As an induction hypothesis assume that for some $i$, $2 < i < q$, the vertex $v_i$ prefers $v_{i-1}$ to $v_{i+1}$. Edges $v_{i-2}v_{i-1}$ and $v_{i}v_{i+1}$ are together in a stable matching. Since by the induction hypothesis $v_i$ prefers $v_{i-1}$ to $v_{i+1}$, if the vertex $v_{i-1}$ prefers $v_{i}$ to $v_{i-2}$, then the edge $v_{i-1}v_i$ is blocking for this stable matching. We conclude that $v_{i-1}$ prefers $v_{i-2}$ to $v_{i}$ and the induction step holds. Repeatedly applying the induction step, we obtain that $v_2$ prefers $v_1$ to $v_3$, which is the desired contradiction.}
\end{proof}
\AY{
\begin{proposition}\label{prop:comparison_to_chen_at_all2}
For every $n\in\mathbb{N}$, $n\ge 1$, there exists an SMP instance $(G, L)$ such that $G$ is a bipartite graph with $2n+2$ vertices that admits a perfect matching, only two vertices are unmatched in the unique stable matching of $G$ and the minimum number of swaps that leads to an instance with a perfect stable matching is at least $n$.
\end{proposition}
\begin{proof}
Let $G$ be a path of length $2n+1.$ That is, $V(G)=\{v_0,v_1,\ldots, v_{2n+1}\}$ and $E(G)=\{v_0 v_1, v_1 v_2, v_2 v_3, \ldots, v_{2n} v_{2n+1}\}.$ Assume that for all $i\in [n]$,
$v_{2i}$ prefers $v_{2i-1}$ over $v_{2i+1}$ and $v_{2i-1}$ prefers $v_{2i}$ over $v_{2i-2}$.
Now $M=\{v_1 v_2, v_3 v_4, \ldots, v_{2n-1} v_{2n}\}$ is the only stable matching in $G$. If we want $M'=\{v_0 v_1, v_2 v_3, \ldots, v_{2n} v_{2n+1}\}$ to become a stable matching then we need to change the preference of $v_{2i-1}$ or $v_{2i}$ for every $i\in [n]$, as otherwise $v_{2i-1}v_{2i}$ would be a blocking edge.
Hence, $k \geq n.$
\end{proof}
}
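For illustration, the construction from Proposition~\ref{prop:comparison_to_chen_at_all2} can be checked by brute force. The following Python sketch (purely illustrative; the helper names are ours and not part of any established library) builds the path instance and enumerates all stable matchings, confirming that $M$ is the unique one:
\begin{verbatim}
from itertools import combinations

def build_path_instance(n):
    # Path v_0 .. v_{2n+1} with the preferences of Proposition 2.
    edges = [(i, i + 1) for i in range(2 * n + 1)]
    pref = {0: [1], 2 * n + 1: [2 * n]}       # endpoints: one neighbor
    for i in range(1, n + 1):
        pref[2 * i] = [2 * i - 1, 2 * i + 1]  # v_2i prefers v_{2i-1}
        pref[2 * i - 1] = [2 * i, 2 * i - 2]  # v_{2i-1} prefers v_2i
    return edges, pref

def is_stable(matching, edges, pref):
    partner = {}
    for u, v in matching:
        partner[u], partner[v] = v, u
    def prefers(u, v):  # u prefers v over its current situation
        return u not in partner or \
            pref[u].index(v) < pref[u].index(partner[u])
    return not any(prefers(u, v) and prefers(v, u) for u, v in edges)

def stable_matchings(edges, pref):
    for r in range(len(edges) + 1):
        for m in combinations(edges, r):
            verts = [x for e in m for x in e]
            if len(verts) == len(set(verts)) and \
               is_stable(m, edges, pref):
                yield m

edges, pref = build_path_instance(2)
print(list(stable_matchings(edges, pref)))  # only [((1, 2), (3, 4))]
\end{verbatim}
Making $M'$ stable indeed requires at least one swap per edge $v_{2i-1}v_{2i}$, $i\in[n]$.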
Gupta et al. \cite{gupta2020parameterized} studied a related problem called \textsc{Almost Stable Marriage (ASM)}, which takes as input an instance $(G,L)$ of SMP and two non-negative integers $k$ and $t$, and asks whether there is a matching with at most $k$ blocking edges whose size exceeds the size of a stable matching in $G$ by at least $t$. The authors show that ASM is W[1]-hard parameterized by $k+t$ even when the input graph $G$ has maximum degree $3$. While they did not state it explicitly, their hardness proof implies the following result.
\begin{theorem}[\cite{gupta2020parameterized}]\label{thm:ASM_hardness}
ASM is NP-hard and W[1]-hard parameterized by $k+t$ even on instances $\mathcal{I}=(G,L,k,t)$ such that
\begin{enumerate}
\item $G$ has maximum degree three,
\item $G$ admits a perfect matching and a stable matching of size $\frac{|V(G)|}{2}-t$, and
\item If $\mathcal{I}$ is a YES-instance, then in every perfect matching $M$ with at most $k$ blocking edges, every blocking edge is incident to a vertex of degree two.
\end{enumerate}
Moreover, ASM does not admit an $|V(G)|^{o(\sqrt{k+t})}$ algorithm, unless ETH fails.
\end{theorem}
To show that $\Delta$-PSM\ is NP-hard and W[1]-hard parameterized by $k$, we start with the following observation.
\begin{observation}\label{obs:decrease_after_1_swap}
Let $I=(G=(A,B;E), L)$ be an SMP instance, let $M$ be a matching with exactly $k$ blocking edges in $E$, and let $\sigma$ be a swap. Then $M$ has at least $k-1$ blocking edges in \(\sigma(I)\).
\end{observation}
\begin{proof}
The swap \(\sigma\) transposes two adjacent entries in the preference list of exactly one vertex $u$: the preference of exactly one vertex $v$ increases, while that of exactly one vertex $w$ decreases. Hence the only blocking edge of the original instance that can become non-blocking is the edge $uw$.
\end{proof}
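The effect of a single swap on the set of blocking edges can also be inspected directly (a Python sketch; the representation of preferences as ranked lists and all names are ours):
\begin{verbatim}
def blocking_edges(edges, pref, matching):
    # Edges uv where both endpoints strictly prefer each other
    # over their current partners (or are unmatched).
    partner = {}
    for u, v in matching:
        partner[u], partner[v] = v, u
    def prefers(u, v):
        return u not in partner or \
            pref[u].index(v) < pref[u].index(partner[u])
    return {(u, v) for u, v in edges
            if prefers(u, v) and prefers(v, u)}

def apply_swap(pref, u, j):
    # Transpose positions j and j+1 in the list of u.
    new_pref = {w: list(p) for w, p in pref.items()}
    new_pref[u][j], new_pref[u][j + 1] = \
        new_pref[u][j + 1], new_pref[u][j]
    return new_pref
\end{verbatim}
Comparing \texttt{blocking\_edges} before and after \texttt{apply\_swap} for a fixed matching confirms that at most one edge loses its blocking status, which is exactly the bound used above.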
Now we are ready to show the hardness of $\Delta$-PSM.
\begin{theorem}\label{thm:hardness_swap_distance}
$\Delta$-PSM\ is NP-complete. Moreover, $\Delta$-PSM\ is W[1]-hard parameterized by the length $k$ of the sought sequence of swaps.
\end{theorem}
\begin{proof}
Given a sequence of $k$ swaps, it is easy to verify that the instance obtained by applying the sequence of swaps admits a stable perfect matching.
Moreover, given a perfect matching $M$ of $G$, \GG{polynomially many swaps (in $n$) suffice to make $M$ stable}. Hence $\Delta$-PSM\ is in NP.
To show that the problem is NP-hard and W[1]-hard parameterized by $k$, we reduce from ASM.
Let $\mathcal{I}=(G,L,k,t)$ be an instance of ASM such that
\begin{enumerate}
\item $G$ has maximum degree three,
\item $G$ admits a perfect matching and a stable matching of size $\frac{|V(G)|}{2}-t$, and
\item If $\mathcal{I}$ is a YES-instance, then in every perfect matching $M$ with at most $k$ blocking edges, every blocking edge is incident to a vertex of degree two.
\end{enumerate}
Recall that ASM remains NP-hard and W[1]-hard parameterized by $k+t$ on such instances by Theorem~\ref{thm:ASM_hardness}.
We let $\mathcal{I}'= (G,L,k)$ be an instance of $\Delta$-PSM\ and we will show that \(\mathcal{I}\) is a YES-instance of ASM if and only if \(\mathcal{I}'\) is a YES-instance of $\Delta$-PSM{}. Since this construction can be performed in polynomial time (we just take the same instance) and the reduction is parameter-preserving, this implies that $\Delta$-PSM\ is NP-hard and W[1]-hard parameterized by $k$.
Observation~\ref{obs:decrease_after_1_swap} implies that if a graph $G$ does not admit a perfect matching $M$ with at most $k$ blocking edges, then $\ensuremath{\mathcal{I}}'$ is a NO-instance of $\Delta$-PSM. As the instance \(\ensuremath{\mathcal{I}}\) of ASM admits a perfect matching and a stable matching of size $\frac{|V(G)|}{2}-t$, it is easy to see that if \(\ensuremath{\mathcal{I}}\) is a NO-instance of ASM, then $G$ does not admit a perfect matching $M$ with at most $k$ blocking edges and $\ensuremath{\mathcal{I}}'$ is a NO-instance of $\Delta$-PSM.
On the other hand, if \(\ensuremath{\mathcal{I}}\) is a YES-instance of ASM, then $G$ admits a perfect matching $M$ with at most $k$ blocking edges and, by property 3 of \(\ensuremath{\mathcal{I}}\), every blocking edge of $M$ is incident to a vertex of degree $2$. It is easy to see that if $e=uv$ is a blocking edge and $u$ has degree $2$, then it suffices to transpose the two vertices in \(l(u)\). This swap also does not introduce new blocking edges, since after the swap, vertex $u$ is matched with its most preferred choice. Therefore, in this case, there is a sequence of at most $k$ swaps that makes $M$ a stable perfect matching and \(\ensuremath{\mathcal{I}}'\) is a YES-instance.
\end{proof}
The ETH lower bound for ASM implies that $\Delta$-PSM\ does not admit an algorithm with running time $(n+n')^{o(\sqrt{k})}$, where $k$ is the length of the sought solution. On the other hand, at each step there are only $2nn'$ possible swaps, so there is a simple $O((2nn')^{k})$ algorithm enumerating all possible sequences of at most $k$ swaps. By a small change to the reduction by Gupta et al. \cite{gupta2020parameterized} one can show that a significant improvement over the $(2nn')^{k}$ algorithm is unlikely.
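The enumeration algorithm can be sketched as follows (a Python sketch under the assumption that a \texttt{has\_perfect\_stable\_matching} oracle is available, e.g., the Gale--Shapley algorithm followed by a size check; all names are ours):
\begin{verbatim}
def delta_psm_brute_force(pref, k, has_perfect_stable_matching):
    # Backtracking over all swap sequences of length <= k;
    # each swap (u, j) transposes positions j and j+1 in pref[u],
    # giving O((2nn')^k) branches overall.
    if has_perfect_stable_matching(pref):
        return True
    if k == 0:
        return False
    for u in pref:
        lst = pref[u]
        for j in range(len(lst) - 1):
            lst[j], lst[j + 1] = lst[j + 1], lst[j]   # apply swap
            found = delta_psm_brute_force(
                pref, k - 1, has_perfect_stable_matching)
            lst[j], lst[j + 1] = lst[j + 1], lst[j]   # undo swap
            if found:
                return True
    return False
\end{verbatim}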
In what follows we sketch how to adapt the hardness proof by Gupta et al. \cite{gupta2020parameterized} to show that ASM, and hence also $\Delta$-PSM, does not admit an $(n+n')^{o(k/\log k)}$-time algorithm unless ETH fails, and that even the existence of an $(n+n')^{o(k)}$-time algorithm would imply a breakthrough in the area of parameterized algorithms and complexity.
The hardness proof of Gupta et al. \cite{gupta2020parameterized} is a reduction from \textsc{Multicolored Clique} (MCQ), where we are given a graph $G$ and a partition of $V(G)$ into $q$ parts $V_1,\ldots,V_q$; the goal is to decide the existence of a set $S\subseteq V(G)$ such that $|V_i\cap S|=1$, for all $i\in [q]$, and $G[S]$ induces a clique, that is, there is an edge between every pair of vertices in $G[S]$. It is well known that MCQ does not admit an $|V(G)|^{o(q)}$ algorithm, unless ETH fails.
The main idea of the reduction
is to introduce three types of gadgets: ``vertex set'' gadgets, ``edge set'' gadgets, and ``connection'' gadgets. They introduce one ``vertex set'' gadget for every vertex set $V_i$ and one ``edge set'' gadget for every $1\le i < j \le q$ that represents the set of edges between $V_i$ and $V_j$. Finally, the ``connection'' gadgets connect the ``vertex set'' gadget for $V_i$ with every ``edge set'' gadget for edges between $V_i$ and $V_j$, $j\neq i$. The parameter $t$ is then the number of ``vertex set'' gadgets plus the number of ``edge set'' gadgets, and $k$ is the number of ``vertex set'' gadgets plus twice the number of ``edge set'' gadgets. Hence $t=q+\binom{q}{2}$ and $k=q^2$. Moreover, every perfect matching of a ``vertex set'' gadget forces at least one blocking edge inside the gadget that depends on the vertex selected for the clique in the corresponding vertex set. Every perfect matching of an ``edge set'' gadget forces at least two blocking edges inside the gadget that depend on the edge selected for the clique from the corresponding edge set. Finally, a ``connection'' gadget is just a set of edges between a ``vertex set'' gadget and an ``edge set'' gadget that contains a blocking edge if the edge selected by the ``edge set'' gadget is not incident to the vertex selected by the ``vertex set'' gadget.
Given this high-level description of the hardness proof of \cite{gupta2020parameterized} for ASM parameterized by $k+t$, we sketch how one can adapt it to obtain an $(n+n')^{o(k/\log k)}$ lower bound under ETH. Namely, the reduction is analogous, but starts from a different problem.
In \textsc{Partitioned Subgraph Isomorphism} (PSI) we are given as input
two undirected graphs $G$ and $H$ with $|V(H)| \le |V(G)|$ ($H$ is \emph{smaller}) and a mapping $\psi\colon V(G) \to V(H)$, and the task is to determine whether
$H$ is isomorphic to a subgraph of $G$ (i.e., is there an injective mapping $\phi\colon V(H) \to V(G)$ such that $\{\phi(u),\phi(v)\} \in E(G)$ for each $\{u,v\} \in E(H)$ and $\psi \circ\phi$ is the identity)? Observe that MCQ is the special case of PSI in which the smaller graph $H$ is a complete graph.
\begin{theorem}[see {\cite{Marx10}} and \cite{EibenKPS19}]\label{cor:psi_hard}
If there is an algorithm $\mathbb{A}$ and an arbitrary
function $f$ such that $\mathbb{A}$ correctly decides every instance $(G,H)$
of PSI with the smaller graph $H$ being 3-regular in time $f(|V(H)|)|V(G)|^{o(|V(H)|/\log |V(H)|)}$, then ETH fails.\footnote{We would like to point out that, as far as we know, it is open whether PSI admits even an $f(|V(H)|)|V(G)|^{o(|V(H)|)}$-time algorithm.}
\end{theorem}
Note that the mapping $\psi\colon V(G) \to V(H)$ partitions the vertices of $V(G)$ into $q=|V(H)|$ many parts $V_1,\ldots, V_q$, each corresponding to a specific vertex of $H$. Moreover, we wish to select in each part $V_i$, $i\in [q]$, exactly one vertex $v_i$, such that if $uw\in E(H)$ and $V_i$ corresponds to $u$ and $V_j$ corresponds to $w$, then $v_iv_j$ is an edge in $G$. The reduction from PSI to ASM is precisely the same as the reduction from MCQ to ASM, with the only difference that we have ``edge set'' gadgets only for pairs $1\le i < j \le q$ such that the sets $V_i$ and $V_j$ correspond to adjacent vertices of $H$. If $H$ is $3$-regular, then the number of ``edge set'' gadgets is $\frac{3q}{2}$ and hence in the instance of ASM obtained in the reduction we get $t=\frac{5q}{2}$ and $k=4q$, implying that ASM does not admit an $|V(G)|^{o((k+t)/\log(k+t))}$-time algorithm, unless ETH fails. Following the proof of Theorem~\ref{thm:hardness_swap_distance}, we then obtain the following result.
\begin{theorem}\label{thm:ETH_lower_bound}
If there is an algorithm that for every instance of SMP with a bipartite graph on $n+n'$ vertices decides whether there is a sequence of at most $k$ swaps that results in an instance of SMP with a perfect stable matching in time $(n+n')^{o(k/\log k)}$, then ETH fails.
\end{theorem}
\vspace{2mm}
{\bf Acknowledgement.} We are grateful to the referee for a number of helpful suggestions.
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{T}{o} the best of our knowledge, there is no publication in which user-provided scribbles are combined with standardized questionnaires in order to assess an interactive image segmentation system's quality.
This type of synergetic usability measure is a contribution of this work.
In order to provide a guideline for an objective comparison of interactive image segmentation approaches,
a prototype providing a \mbox{semi-manual} pictorial user input, introduced in \textbf{Sec.}\,\ref{sec:semi-manual_prototype}, is compared to a prototype with a guiding
menu-driven {UI}, described in \textbf{Sec.}\,\ref{sec:guided_prototype}.
Both evaluation results are analyzed with respect to a joint prototype, defined in \textbf{Sec.}\,\ref{sec:joint_prototype}, incorporating aspects of both interface techniques.
All three prototypes are built utilizing modern web technologies.
An evaluation of the interactive prototypes is performed utilizing pragmatic usability aspects described in \textbf{Sec.}\,\ref{sec:results_pragmatic}, as well as hedonic usability aspects analyzed in \textbf{Sec.}\,\ref{sec:results_hedonic}.
\addition[label=c:a121,ref=c:c12]{%
These aspects are evaluated via two standardized questionnaires (System Usability Scale and \mbox{AttrakDiff-2}) which form the ground truth for a subsequent prediction of the questionnaires' findings via a regression analysis outlined in \textbf{Sec.}\,\ref{sec:prediction_of_questionnaire_results}.
The outcome of questionnaire result prediction from interaction log data only is detailed in \textbf{Sec.}\,\ref{sec:prediction_of_questionnaire_results_from_log_data}.
This novel automatic assessment of pragmatic as well as hedonic usability aspects is a contribution of this work. %
}
\addition[label=c:a211,ref=c:c21]{%
Our source code release for the automatic usability evaluation from user interaction log data can be found at \oldUrl{https://github.com/mamrehn/interactive_image_segmentation_evaluation}. %
}
\subsection{Image Segmentation Systems}
Image segmentation can be defined as the partitioning of an image into a finite number of non-overlapping, semantically meaningful regions.
A semantic label can be assigned to each region.
In medical imaging, each individual region of a patients' abdominal tissue might be regarded as healthy or cancerous.
Segmentation systems can be grouped into three principal categories,
each differing in the degree of involvement of an operating person (user): manual, automatic, and interactive.
(1) During manual tumor segmentation, a user provides all elements $i$ in the image grid whose neighboring elements $N(i)$ carry labels different from that of $i$.
The system then utilizes this closed curve contour line information to infer the labels for remaining image elements via simple region growing.
This minimal assistance by the system causes the overall segmentation process of one lesion to take up to several minutes of user interaction time.
However, reaching an appropriate or even perfect segmentation result (despite noteworthy \mbox{inter-observer} difference~\cite{becker2017increased}) is feasible~\cite{kim2016interobserver,hong2014interobserver}.
In practice, few \mbox{time-consuming} manual segmentations are performed by domain experts, in order to utilize the results as a reference standard in radiotherapy planning~\cite{moltz2011analysis}.
(2) A fully automated approach does not involve a user's interference with the system.
The missing domain knowledge for accurately labeling regions can only partially be compensated by automated segmentation approaches.
The maximum accuracy of the segmentation result is therefore highly dependent on the individual set of rules or amount of training data available.
If the segmentation task is sufficiently complex, a perfect result may not be reachable.
(3) Interactive approaches aim at a fast and exact segmentation by combining substantial assistance by the system with knowledge about a very good estimate of the true tumor
extent provided by trained physicians during the segmentation process~\cite{olabarriaga1997setting}.
In contrast to fully automated solutions, prior knowledge is (also) provided during the segmentation process.
Although, interactive approaches are also costly in terms of manual labor to some extent,
they can supersede fully automated techniques in terms of accuracy.
Due to their exact segmentation capabilities, interactive segmentation techniques are frequently chosen to outline pathologies during imaging assisted medical procedures, like hepatocellular carcinomata during trans-catheter arterial chemoembolization
(see \textbf{Sec.}\,\ref{sec:tace}).
\subsection{Evaluation of Image Segmentation Systems}
Performance evaluation is one of the most important aspects during the continuous improvement of systems and methodologies.
With non-interactive computer vision and machine learning systems for image segmentation, an objective comparison of systems
can be achieved by evaluating \mbox{pre-selected} data sets for training and testing.
Similarity measures between segmentation outcome and ground truth images are utilized to quantify the quality of the segmentation result.
With \addition[label=c:a23,ref=c:c23]{interactive segmentation systems} (\mbox{ISS}), a complete ground truth data set would also have to include the adaptive user interactions which advance the segmentation process.
Therefore, when comparing \mbox{ISS}, the user needs to be involved in the evaluation process.
User interaction data however is highly dependent on
(1) the users' domain knowledge and the unique learning effect of the human throughout a period of exposure to the problem domain,
(2) the system's underlying segmentation method and the users' preferences toward this technique, as well as
(3) the design and usability (the user experience~\cite{hassenzahl2006user,law2009understanding}) of the interface which is presented to the user during the interactive segmentation procedure~\cite{caro1979inter,hong2014interobserver}.
This includes users' differing preferences towards diverse interaction systems and tolerances for unexpected system behavior.
Considering \mbox{(1--3)}, an analytically expressed objective function for an interactive system is hard to define.
Intuitively, the user wants to achieve a satisfying result in a short amount of time with ease~\cite{kohli2012user}.
A direct assessment of a system's usability is enabled via standardized questionnaires, as described in \textbf{Sec.}\,\ref{sec:questionnaires}.
Individual usage of \mbox{ISS} can be evaluated via the segmentation result's similarity to the ground truth labeling according to the S{\o}rensen-Dice coefficient (\mbox{Dice})~\cite{dice1945measures} after each interaction.
The interaction data utilized for these segmentations has to be representative in order to generalize the evaluation results.
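For reference, the \mbox{Dice} score of a binary segmentation result $\mathbf{R}$ with respect to a ground truth mask $\mathbf{G}_m$ (we use the subscript to distinguish it from the seed set notation below) is defined as
\[
\operatorname{Dice}(\mathbf{R}, \mathbf{G}_m) = \frac{2\,\vert\mathbf{R} \cap \mathbf{G}_m\vert}{\vert\mathbf{R}\vert + \vert\mathbf{G}_m\vert} \in \left[0, 1\right],
\]
where a value of one indicates a perfect overlap of the two masks.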
\subsection{Types of User Interaction}
As described by Olabarriaga et al.~\cite{olabarriaga2001interaction} as well as Zhao and Xie~\cite{zhao2012interactive}, user interactions can be categorized with
regards to the type of interface an \mbox{ISS} provides.
The following categories are emphasized.
(1) A pictorial mask image is the most intuitive form of user input.
Humans use this technique when transferring knowledge via a visual medium~\cite{puranik2011scribbles}.
The mask overlaid on the visualization of the image \mbox{$\mathbf{I}\in\mathbb{R}^{w,h}$} to segment consists of structures called scribbles, where $w$ is the width and $h$ is the height of the \mbox{2-D} image $\mathbf{I}$ in pixels.
Scribbles are seed points, lines, and complex shapes, each represented as a set of individual seed points.
One seed point is a tuple \mbox{$\mathbf{s}_i=\left(\mathbf{p}_i,\mathbf{\ell}_i\right)$}, where \mbox{$\mathbf{p}_i\in\mathbb{R}^2$} describes the position of the seed in image space.
The class label of a scribble in a binary segmentation system is represented by \mbox{$\mathbf{\ell}_i\in\left\{\text{background},\text{foreground}\right\}$}.
Scribbles need to be defined by the user in order to act as a representative subset $\mathbf{S}$ of the ground truth segmentation \mbox{$\mathbf{G}=\left\{\mathbf{s}_1,\mathbf{s}_2,\dots\right\}$}.
(2) A menu-driven user input scheme as in~\cite{rupprecht2015image, udupa1997multiple} limits the user's scope of action.
Users trade distinct control over the segmentation outcome for more guidance provided by the system.
The locations or the shapes of newly created scribbles are fixed before presentation to the user.
It is challenging to achieve an exact segmentation result using a method from this category.
Rupprecht et al.~\cite{rupprecht2015image} describe significant deficits in finding small objects and outline a tendency of the system to automatically choose
seed point locations near the object border,
which cannot be labeled by most users' visual inspection and would therefore not have been selected by the users themselves.
Advantages of \mbox{menu-driven} user input are the high level of abstraction of the process, enabling efficient guidance of inexperienced users in deciding which action to perform for an optimal segmentation outcome (regarding accuracy over time or
number of interactions)~\cite{olabarriaga1999human,olabarriaga2001interaction}.
\subsection{Generation of Representative User Input}
Nickisch et al.~\cite{nickisch2010learning} describe crowd sourcing and user studies as two methods to generate plausible user input data.
The cost-efficient crowd-sourcing method often lacks control and knowledge of the users' motivation.
Missing context information for crucial aspects of the data acquisition procedure makes it challenging to objectify the evaluation results.
Specialized fraud detection methods are commonly used in an attempt to pre-filter the recorded corpus and extract a usable subset of data.
McGuinness and O'Connor~\cite{mcguinness2010comparative} proposed an evaluation of \mbox{ISS} via extensive user experiments.
In these experiments, users are shown images with descriptions of the objects they are required to extract.
Then, users mark foreground and background pixels utilizing a platform designed for this purpose.
These acquisitions are more time-consuming and cost-intensive than \mbox{crowd-sourcing}, since they require a constant involvement of users.
However, the study's creators are able to control many aspects of the data recording process, which enables detailed observations of user reactions.
The data samples recorded are a representative subset of the focus group of the finalized system.
A user study aims at maximizing repeatability of its results.
In order to increase the objectivity of the evaluation in this work, a user study is conducted.
The study is described in \textbf{Sec.}\,\ref{sec:usability_test_setup}.
\subsection{State-of-the-art Evaluation of Interactive Segmentation Systems}
\subsubsection{Segmentation Challenges}
In segmentation challenges like \mbox{SLIVER07}~\cite{van20073d} (mainly) fully automated approaches are competing for the highest score regarding a predefined image quality metric.
Semi-automatic methods are allowed for submission if the manual interaction with the test data is strictly limited to pre-processing and (single seed point) initialization of an otherwise fully automated process.
\mbox{ISS} may be included in the contests' final ranking, but are regarded as non-competing,
since the structure of the challenges is solely designed for automated approaches.
The \mbox{PROMISE12} challenge~\cite{litjens2014evaluation} had a separate category for proposed interactive approaches, where the user (in this case, the person also describing the algorithm) may add an unlimited number of hints during segmentation, without observing the experts' ground truth for the test set.
No group of experts was provided to operate the interactive method for comparative results.
The submitted interactive methods' scores in the challenge's ranking are therefore highly dependent on the domain knowledge of the single operating users and cannot be regarded as an objective measure.
\subsubsection{Comparisons for Novel Segmentation Approaches}
In principle, with every new proposal of an interactive segmentation algorithm or interface, the authors have to demonstrate the new method's capabilities in an objective
comparison with already established techniques.
The effort spent for these comparisons by the original authors varies substantially.
According to~\cite{kohli2012user}, many evaluation methods only consider a fixed input.
This approach is especially unsuited for evaluation, without simultaneously defining an appropriate interface, which actually validates that a real person
utilizing this {UI} is capable of generating similar input patterns to the ones provided.
Although there are some overview publications which compare several approaches~\cite{zhao2013overview,olabarriaga2001interaction,mcguinness2010comparative,mcguinness2011toward,amrehn2016comparative}, the number of publications outlining new methods is disproportionately greater,
leaving comparisons insufficiently covered.
\addition[label=c:a122,ref=c:c12]{%
The main contribution of Olabarriaga et al.~\cite{olabarriaga2001interaction} is the proposition of criteria to evaluate interactive segmentation methods: accuracy, repeatability, and efficiency.
McGuinness et al.~\cite{mcguinness2010comparative} utilized a unified user interface with multiple underlying segmentation methods for the survey they conducted.
They recorded the current segmentation masks after each interaction to gauge segmentation accuracy over time.
Instead of utilizing a standardized questionnaire, users were asked to rate the difficulty and perceived accuracy of the segmentation tasks on a scale of 1 to 5.
Their main contribution is an empirical study in which $20$ subjects segmented images with four different segmentation methods, concluding that one of the four methods is best, given their data and participants.
Their ranking is primarily based on the mean accuracy over time achieved per segmentation method.
McGuinness et al.~\cite{mcguinness2011toward} define a robot user in order to simulate user interactions during an automated interactive segmentation system evaluation.
However, they do not investigate the similarity of their rule-based robot user's input to the seed input patterns of individual human subjects.
} %
\addition[label=c:a123,ref=c:c12]{%
Zhao et al.~\cite{zhao2013overview} concluded in their overview over interactive medical image segmentation techniques, that there is a clear need of well-defined performance evaluation protocols for interactive systems.
} %
In \textbf{Tab.}\,\ref{tab:interactiveSegmentationEvaluationComparison}, a clustering of popular publications describing novel interactive segmentation techniques is depicted.
The evaluation methods can be compared by the type of data utilized as user input.
Note that there is a trend towards more elaborate evaluations in more recent publications.
\addition[label=c:a124,ref=c:c12]{%
The intent and perception of the interacting user are a valuable resource worth considering when comparing interactive segmentation systems~\cite{yang2010user}.
However, only two of the $42$ related publications listed in \textbf{Tab.}\,\ref{tab:interactiveSegmentationEvaluationComparison} make use of insights into the complex thought processes of a human utilizing an interactive segmentation system when ranking novel interactive segmentation methods.
Ramkumar et al.~\cite{ramkumar2016using,ramkumar2016user} acquire these data by well-designed questionnaires, but do not automate their evaluation method.
We propose an automated, i.\,e.\ scalable, system to approximate pragmatic as well as hedonic usability aspects of a given interactive segmentation system.
} %
\begin{table*}[thp]%
\caption{Overview of seed point location selection methods for a set of influential publications in the field of interactive image segmentation.
Additional \additioncaption{unordered}
seed information can be retrieved \deletioncaption{in arbitrary order}
by
a) manually drawn seeds or
b) randomly generated seeds.
Seeds can be inferred rule-based from the ground truth segmentation by
c) sampling the binary mask image,
d) from provided bounding box mask images,
e) random sampling from tri-maps generated by erosion and dilation, or
f) by a robot user i.\,e.\ user simulation.
\additioncaption{A tri-map specifies background, foreground, and mixed areas.}
Seeds can also be provided by real users via the
g) final seed masks after all interactions on one input image, or
h) the \changecaption{ordered}{actual} iterative scribbles.
i) Questionnaire data from \emph{Goals, Operators, Methods, and Selection rules}
(\mbox{GO}) as well as \emph{National Aeronautics and Space Administration Task Load Index} (\mbox{TL})
may be retrieved by interviewing users after the segmentation process.
\additioncaption{%
Check marks indicate the usage of seeds in the publications listed.
Publications with check marks in brackets display these seeds but do not utilize them for evaluation.
} %
}
\label{tab:interactiveSegmentationEvaluationComparison}
\rowcolors{3}{gray!1}{gray!4}
\setlength\extrarowheight{1pt}
\resizebox{\textwidth}{!}{%
\begin{tabular}{r|r|p{0.09\linewidth}p{0.09\linewidth}|p{0.09\linewidth}p{0.09\linewidth}p{0.09\linewidth}p{0.09\linewidth}|p{0.09\linewidth}p{0.09\linewidth}p{0.09\linewidth}}
\multicolumn{2}{r}{} & \multicolumn{2}{c}{Arbitrary Seeds} & \multicolumn{4}{c}{Seeds Derived from GT} & \multicolumn{3}{c}{Multiple User Data based Seeds} \\[1pt]\hline
& & (a) & (b) & (c) & (d) & (e) & (f) & (g) & (h) & (i) \\[1pt]
Year & Publication & Manual & Random & Binary Mask & Box & Tri-maps & Robot & Final Seeds & Scribbles & Questionnaire \\[1pt]\hline
\additioncaption{2019} & \additioncaption{Amrehn~\cite{amrehn2019interactive}} & & & & & & \additioncaption{$\checkmark$~\cite{kohli2012user,xu2016deep,wang2017deepigeos}} & & & \\[1pt]
2018 & Chen~\cite{chen2018swipecut} & ($\checkmark$) & & & & & $\checkmark$~\cite{rupprecht2015image} & & $\checkmark$ ($N=10$) & \\[1pt]
& Amrehn~\cite{amrehn2018ideal} & & & & & & $\checkmark$ & & & \\[1pt]
2017 & Liew~\cite{liew2017regional} & ($\checkmark$) & ($\checkmark$) & & & & $\checkmark$~\cite{kohli2012user} & & & \\[1pt]
& Wang~\cite{wang2017interactive} & & & & & & & & $\checkmark$ ($N=2$) & \\[1pt]
& Wang~\cite{wang2017deepigeos} & & & & & & $\checkmark$~\cite{amrehn2017uinet} & & $\checkmark$ ($N=2$) & \\[1pt]
& Amrehn~\cite{amrehn2017uinet} & & & & & $\checkmark$ & $\checkmark$~\cite{wang2017deepigeos} & & & \\[1pt]
& Amrehn~\cite{amrehn2017robust} & & & & & & $\checkmark$ & & & \\[1pt]
2016 & Ramkumar~\cite{ramkumar2016using} & & & & & & & & & $\checkmark$(GO, TL) \\[1pt]
& Ramkumar~\cite{ramkumar2016user} & & & & & & & & & $\checkmark$(TL) \\[1pt]
& Jiang~\cite{jiang2016automatic} & & & $\checkmark$~\cite{martin2001database} & & & & & $\checkmark$ ($N=5$) & \\[1pt]
& Xu~\cite{xu2016deep} & ($\checkmark$) & ($\checkmark$) & & & & $\checkmark$ & & & \\[1pt]
& Chen~\cite{chen2016interactive} & & & & & & & $\checkmark$ & & \\[1pt]
2015 & Andrade~\cite{andrade2015supervised} & & & & & & & & \href{https://github.com/flandrade/dataset-interactive-algorithms}{$\checkmark$} & \\[1pt]
& Rupprecht~\cite{rupprecht2015image} & & & & & $\checkmark$ & & $\checkmark$ & & \\[1pt]
2014 & Bai~\cite{bai2014error} & $\checkmark$ & $\checkmark$ & & & & & & & \\[1pt]
2013 & Jain~\cite{jain2013predicting} & & & $\checkmark$ & & & & & $\checkmark$ & \\[1pt]
& He~\cite{he2013interactive} & $\checkmark$ & & & & & & & & \\[1pt]
2012 & Kohli~\cite{kohli2012user} & $\checkmark$ & & & & $\checkmark$ & $\checkmark$ & ($\checkmark$) & $\checkmark$ & \\[1pt]
2011 & Zhao~\cite{zhao2011benchmark} & & $\checkmark$ & & & $\checkmark$ & & & & \\[1pt]
& Top~\cite{top2011active} & ($\checkmark$) & & & & & $\checkmark$ & & $\checkmark$ ($N=4$) & \\[1pt]
& McGuinness~\cite{mcguinness2011toward} & ($\checkmark$) & & & & & $\checkmark$ & $\checkmark$ & & \\[1pt]
2010 & Nickisch~\cite{nickisch2010learning} & $\checkmark$ & & & & $\checkmark$ & & ($\checkmark$) & $\checkmark$ & \\[1pt]
& Gulshan\cite{gulshan2010geodesic} & $\checkmark$ & & & & & & ($\checkmark$) & & \\[1pt]
& Batra~\cite{batra2010icoseg} & $\checkmark$ & & & & & & $\checkmark$ & & \\[1pt]
& Ning~\cite{ning2010interactive} & $\checkmark$ & & & & & & & & \\[1pt]
& Price~\cite{price2010geodesic} & $\checkmark$ & & & $\checkmark$~\cite{singaraju2009p} & $\checkmark$~\cite{rother2004grabcut} & & & & \\[1pt]
& Moschidis~\cite{moschidis2010systematic} & & $\checkmark$ & & & & & & & \\[1pt]
2009 & Moschidis~\cite{moschidis2009simulation} & & $\checkmark$ & & & $\checkmark$ & & & & \\[1pt]
& Singaraju~\cite{singaraju2009p} & & & & $\checkmark$ & $\checkmark$~\cite{rother2004grabcut} & & & & \\[1pt]
2008 & Duchenne~\cite{duchenne2008segmentation} & $\checkmark$ & & & & $\checkmark$~\cite{rother2004grabcut} & & & & \\[1pt]
& Levin~\cite{levin2008closed} & $\checkmark$ & & & & & & & & \\[1pt]
& Vicente~\cite{vicente2008graph} & $\checkmark$ & & & & & & & & \\[1pt]
2007 & Protiere~\cite{protiere2007interactive} & $\checkmark$ & & & & & & & & \\[1pt]
2006 & Boykov~\cite{boykov2006graph} & $\checkmark$ & & & & & & & & \\[1pt]
& Grady~\cite{grady2006random} & $\checkmark$ & & & & & & & & \\[1pt]
2005 & Vezhnevets~\cite{vezhnevets2005growcut} & $\checkmark$ & & & & & & & & \\[1pt]
& Cates~\cite{cates2005case} & ($\checkmark$) & & & & & & & $\checkmark$ ($N=8+3$) & \\[1pt]
2004 & Li~\cite{li2004lazy} & & & & & & & & $\checkmark$ & \\[1pt]
& Rother~\cite{rother2004grabcut} & $\checkmark$ & & ($\checkmark$) & ($\checkmark$) & \href{https://web.archive.org/web/20161203110733/http://research.microsoft.com/en-us/um/cambridge/projects/visionimagevideoediting/segmentation/grabcut.htm}{$\checkmark$} & & & & \\[1pt]
& Blake~\cite{blake2004interactive} & & & $\checkmark$ & & $\checkmark$~\cite{martin2001database} & & & & \\[1pt]
2001 & Martin~\cite{martin2001database} & & & $\checkmark$ & & \href{https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/}{$\checkmark$} & & & & \\[1pt]
\end{tabular}%
}%
\end{table*}
\subsection{Clinical Application for Interactive Segmentation}\label{sec:tace}
Hepatocellular carcinoma (\mbox{HCC}) is among the most prevalent malignant tumors worldwide~\cite{chung2006transcatheter, mcglynn2011global}.
Only \mbox{$20$\,--\,$30$\,\%} of cases are curable via surgery.
Both a patient's \mbox{HCC} and hepatic cirrhosis in advanced stages may lead to the necessity of alternative treatment methods.
For these inoperable cases, trans-catheter arterial chemoembolization (\mbox{TACE})~\cite{lewandowski2011transcatheter} is a promising and widely used minimally invasive intervention technique~\cite{bruix2005management,bruix2011management}.
During \mbox{TACE}, \mbox{extra-hepatic} collateral vessels are occluded, which previously supplied the {HCC} with oxygenated blood.
To locate these vessels, it is crucial to find the exact shape as well as the position of the tumor inside the liver.
Interventional radiology is utilized to generate a volumetric cone-beam C-arm computed tomography (\mbox{CBCT})~\cite{strobel20093d} image of the patient's abdomen, which is processed to precisely outline and label the lesion.
The toxicity of \mbox{TACE} decreases the less healthy tissue is labeled as pathologic.
The efficacy of the therapy increases the less cancerous tissue is falsely labeled as healthy~\cite{lo2002randomized}.
However, precisely outlining the tumor is challenging, especially due to its variations in size and shape, as well as a high diversity in X-ray attenuation coefficient values representing the lesion as illustrated in \textbf{Fig.}\,\ref{fig:hepatic_tumor_segmentation_outcome}. %
While fully automated systems may yield insufficiently accurate segmentation results, \mbox{ISS} tend to be well suited for an application during \mbox{TACE}.
\begin{figure}
\centering
\resizebox{0.85\columnwidth}{!}{%
{\def\arraystretch{1.1}\tabcolsep=2pt
\begin{tabular}{lll}
\includegraphics[trim={10 10 10 10},clip,height=0.255\textheight,width=0.255\textheight]{images/Tumors_ground_truth/clinical_expert_ground_truth_01.pdf} & %
\includegraphics[trim={10 10 10 10},clip,height=0.255\textheight,width=0.255\textheight]{images/Tumors_ground_truth/clinical_expert_ground_truth_02.pdf} & %
\includegraphics[trim={10 10 10 10},clip,height=0.255\textheight,width=0.255\textheight]{images/Tumors_ground_truth/clinical_expert_ground_truth_03.pdf}\\
\includegraphics[trim={10 10 10 10},clip,height=0.255\textheight,width=0.255\textheight]{images/Tumors_ground_truth/clinical_expert_ground_truth_04.pdf} & %
\includegraphics[trim={10 10 10 10},clip,height=0.255\textheight,width=0.255\textheight]{images/Tumors_ground_truth/clinical_expert_ground_truth_05.pdf} & %
\includegraphics[trim={10 10 10 10},clip,height=0.255\textheight,width=0.255\textheight]{images/Tumors_ground_truth/clinical_expert_ground_truth_06.pdf}\\
\includegraphics[trim={10 10 10 10},clip,height=0.255\textheight,width=0.255\textheight]{images/Tumors_ground_truth/clinical_expert_ground_truth_07.pdf} & %
\includegraphics[trim={10 10 10 10},clip,height=0.255\textheight,width=0.255\textheight]{images/Tumors_ground_truth/clinical_expert_ground_truth_08.pdf} & %
\includegraphics[trim={10 10 10 10},clip,height=0.255\textheight,width=0.255\textheight]{images/Tumors_ground_truth/clinical_expert_ground_truth_06_minus_12.pdf}%
\end{tabular}%
}
}%
\caption{Liver lesion segmentations.
Depicted are central slices through the volumes of interest of reconstructed images acquired by a C-arm {\additioncaption{CB}CT} scanner.
The manually annotated ground truth segmentation is displayed as an overlay contour line in green.}%
\label{fig:hepatic_tumor_segmentation_outcome}%
\end{figure}
\section{Methods}\label{sec:methods}
\addition[label=c:a241,ref=c:c24]{In the following section, the segmentation method underlying the user interface prototypes is described in \textbf{Sec.}\,\ref{sec:segmentation_method} in order to
subsequently outline the distinct characteristics of each novel interface prototype in \textbf{Sec.}\,\ref{sec:sgmentation_prototypes}.
Usability evaluation methods utilized are detailed regarding questionnaires in \textbf{Sec.}\,\ref{sec:questionnaires}, semi-structured feedback in \textbf{Sec.}\,\ref{sec:qualitative_measures}, as well as the test environment in \textbf{Sec.}\,\ref{sec:hci_evaluation}.
}
\subsection{Segmentation Method}\label{sec:segmentation_method}
\mbox{GrowCut}~\cite{vezhnevets2005growcut} is a seeded image segmentation algorithm based on cellular automaton theory.
The automaton is a tuple \mbox{$(\mathbf{G}_\mathbf{I},\mathbf{Q},\delta)$}, where $\mathbf{G}_\mathbf{I}$ is the \change[label=c:c241,ref=c:c24]{data the automaton operates on. In this case $\mathbf{G}_\mathbf{I}$ is the graph of image}{graph of} $\mathbf{I}$, where the pixels/voxels act as nodes $\mathbf{v}_e$.
The nodes are connected by edges on a grid defined by the Moore neighborhood system.
\addition[label=c:a242,ref=c:c24]{$\mathbf{Q}$ defines the automaton's possible states and $\delta$ the state transition function utilized.}
\begin{equation}
\mbox{$\mathbf{Q}\ni\mathbf{Q}_e^t=\left((\mathbf{p}_e, \,\mathbf{\ell}_e^t), \,\mathbf{\Theta}_e^t, \,\mathbf{c}_e, \,\mathbf{h}_e^t\right)$}
\label{eq:growcutgraph}
\end{equation}
\change[label=c:c242,ref=c:c24]{As detailed in \textbf{Eq.}\,\ref{eq:growcutgraph}, $\mathbf{Q}$ is the set of each node's state, where
$\mathbf{p}_e$ is the node's position in image space and $\mathbf{\ell}_e^t$ is the class label}{is a state set, where \mbox{$\mathbf{\Theta}_e^t \in [0.0, 1.0]\subset \mathbb{R}$} is the strength} of node $e$ at \mbox{GrowCut} iteration $t$.
\change[label=c:c243,ref=c:c24]{\mbox{$0 \le \mathbf{\Theta}_e^t \le 1$} is the strength of $e$ at iteration $t$.
The feature vector $\mathbf{c}_e$ describes}{and $c_e$ is the feature vector describing} the node's characteristics.
\addition[label=c:a2410,ref=c:c24]{ %
The pixel value $\mathbf{I}\left(\mathbf{p}_e\right)$ at image location $\mathbf{p}_e$ is typically utilized as feature vector $\mathbf{c}_e$~\cite{vezhnevets2005growcut}.
}
Here, we additionally define $\mathbf{h}_e^t \in \mathbb{N}_{0}$ as a counter for the accumulated label changes of $e$ during the \mbox{GrowCut} iteration, as described in~\cite{amrehn2018ideal}, with
\change[label=c:c244,ref=c:c24]{%
\mbox{$\mathbf{h}_e^{t=0}=0$}}{%
\mbox{$\mathbf{h}_e^0=0$}}.
\addition[label=c:a243,ref=c:c24]{ %
Note that this extension of GrowCut is later utilized for seed location suggestion in two of the three prototypes tested.} %
\change[label=c:c245,ref=c:c24]{A node's strength \mbox{$\mathbf{\Theta}_e^{t=0}$}}{\mbox{$\mathbf{\Theta}_e^0$}}
is initialized with $1$ for scribbles, i.\,e.\ \change[label=c:c246,ref=c:c24]{%
\mbox{$(\mathbf{p}_e, \,\mathbf{\ell}_e^{t=0})\in\mathbf{S^{t=0}}$}}{%
\mbox{$(\mathbf{p}_e, \,\mathbf{\ell}_e^0)\in\mathbf{S^0}$}}%
, and $0$ otherwise.
Iterations \mbox{$\operatorname{\delta}\left(\mathbf{Q}_e^t\right)=\mathbf{Q}_e^{t+1}$} are performed utilizing local state transition rule $\delta$:
starting from initial seeds, labels are propagated based on local intensity features $\mathbf{c}$.
At each discrete time step $t$, each node $f$ attempts to conquer its direct neighbors.
A node $e$ is conquered if \addition[label=c:a244,ref=c:c24]{the condition in \textbf{Eq.}\,\ref{eq:growcutisconquered} is true.} %
\begin{align}
\mathbf{\Theta}_f^t\cdot\operatorname{g}(\mathbf{c}_e,\mathbf{c}_f)&>\mathbf{\Theta}_e^t\,,\ \text{where}\label{eq:growcutisconquered}\\
\operatorname{g}(\mathbf{c}_e,\mathbf{c}_f) &= 1 - \frac{\Vert\mathbf{c}_e-\mathbf{c}_f\Vert_2}{\max_{j,k}\Vert\mathbf{c}_j-\mathbf{c}_k\Vert_2}
\end{align}
If node $e$ is conquered, the automaton's state set is updated \addition[label=c:a245,ref=c:c24]{ %
according to \textbf{Eq.}\,\ref{eq:growcutupdatestate}.
If $e$ is not conquered, the node's state remains unchanged, i.\,e.\ \mbox{$\mathbf{Q}_e^{t+1}=\mathbf{Q}_e^t$}.
} %
\begin{equation}
\mbox{$\mathbf{Q}_e^{t+1}=((\mathbf{p}_e,\mathbf{\ell}_f^t),\mathbf{\Theta}_f^t\cdot\operatorname{g}(\mathbf{c}_e,\mathbf{c}_f),\mathbf{c}_e,\mathbf{h}_e^t+1)$}.
\label{eq:growcutupdatestate}
\end{equation}
The process is guaranteed to converge with positive and bounded node strengths \addition[label=c:a246,ref=c:c24]{ %
($\forall_{e,t} \ \mathbf{\Theta}_e^t \le 1$) %
} monotonically decreasing \addition[label=c:a247,ref=c:c24]{ %
(since $\operatorname{g}(.) \le 1$).
The image's final segmentation mask after convergence is encoded as part of state $\mathbf{Q}^{t=\infty}$, specifically in $(\mathbf{p}_e, \,\mathbf{\ell}_e^{t=\infty})$ for each node $e$.
}
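To make the update rule concrete, the following NumPy sketch implements one synchronous \mbox{GrowCut} iteration for a \mbox{2-D} grayscale image (an illustrative simplification; the array names and the restriction to scalar features $\mathbf{c}_e = \mathbf{I}(\mathbf{p}_e)$ are ours):
\begin{verbatim}
import numpy as np

def growcut_iteration(img, labels, strength, changes):
    # One synchronous GrowCut update on a 2-D grayscale image.
    # img: floats; labels: 0 = unlabeled, 1 = background, 2 = object;
    # strength: Theta in [0, 1]; changes: label change counter h.
    max_diff = float(np.ptp(img)) or 1.0
    new_labels = labels.copy()
    new_strength = strength.copy()
    height, width = img.shape
    for y in range(height):
        for x in range(width):
            for dy in (-1, 0, 1):        # Moore neighborhood
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (dy == 0 and dx == 0) or not \
                       (0 <= ny < height and 0 <= nx < width):
                        continue
                    g = 1.0 - abs(img[y, x] - img[ny, nx]) / max_diff
                    attack = strength[ny, nx] * g
                    if attack > new_strength[y, x]:  # e is conquered
                        new_strength[y, x] = attack
                        if new_labels[y, x] != labels[ny, nx]:
                            changes[y, x] += 1
                        new_labels[y, x] = labels[ny, nx]
    return new_labels, new_strength
\end{verbatim}
Iterating this update until no node changes its label yields the converged labeling $\mathbf{\ell}^{t=\infty}$ and the change counters $\mathbf{h}^{t=\infty}$ used below.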
\subsection{Interactive Segmentation Prototypes}\label{sec:sgmentation_prototypes}
Three interactive segmentation prototypes with different \mbox{UIs} were implemented for usability testing.
The segmentation technique applied in all prototypes is based on the \mbox{GrowCut} approach as described in \textbf{Sec.}\,\ref{sec:segmentation_method}.
\mbox{GrowCut} allows for efficient and parallelizable computation of image segmentations while providing an acceptable accuracy from only few initial seed points.
\addition[label=c:a248,ref=c:c24]{The method is also chosen due to its tendency to benefit from careful placement of large quantities of seed points.}
It is therefore well suited for an integration into a highly interactive system.
\addition[label=c:a249,ref=c:c24]{A learning-based segmentation system was not utilized for usability testing due to its inherent dependence of segmentation quality on the characteristics of prior training data, which potentially adds a significant bias to the test results, given only a small data set as utilized in the scope of this work.}
All three user interfaces provided include an \emph{undo} button to reverse the effects of the user's latest action.
A \emph{finish} button is used to define the stopping criterion for the interactive image partitioning.
The transparency of both the contour line and the displayed seed mask is adjustable to one of five fixed values via the \emph{opacity} toggle button.
The image contrast and brightness (windowing) can be adapted with standard control sliders for the window width and the window center operating on the image intensity value range~\cite{jin2001contrast}.
All prototypes incorporate a \emph{help} button used to provide additional guidance for the prototype's usage during the segmentation task.
The segmentation process starts with a set of pre-defined background-labels $\mathbf{S}^0$ along the edges of the image,
since an object is assumed to be located in its entirety inside the displayed region of the image.
\subsubsection{\mbox{Semi-manual} Segmentation Prototype}\label{sec:semi-manual_prototype}
The \mbox{UI} of the \mbox{semi-manual} prototype, depicted in \textbf{Fig.}\,\ref{fig:semi-manual_prototype}, provides several interaction elements.
A user can add seed points as an overlay mask displayed on top of the image.
These seed points have a pre-defined label of either \mbox{\emph{foreground}} for the object or \mbox{\emph{background}} used for all other image elements.
The label of the next brush strokes (scribbles) can be altered via the buttons named \mbox{\emph{object seed}} and \mbox{\emph{background seed}}.
After each interaction \mbox{$n\in\mathbb{N}$}, a new iteration of the seeded segmentation is started given the image $\mathbf{I}$ as well as the
updated set of seeds \mbox{$\mathbf{S}^n=\mathbf{S}^{n-1}\cup\{\mathbf{s}^n_1,\mathbf{s}^n_2,\dots\}$} as input.
\begin{figure}
\includegraphics[width=\columnwidth,height=0.61296534017\columnwidth]{images/semi-manual_segmentation_prototype_contrast.png}
\caption{\mbox{Semi-manual} segmentation prototype user interface.
The current segmentation's contour line (light blue) is \changecaption{adjusted towards the user's estimate of the ground truth segmentation}{adapted} by manually adding foreground (blue) or background (red) seed points.}
\label{fig:semi-manual_prototype}
\end{figure}
\subsubsection{Guided Segmentation Prototype}\label{sec:guided_prototype}
The system selects the two seed point locations \mbox{$\mathbf{p}^n_1$ and $\mathbf{p}^n_2$} with the lowest label certainty values assigned by the previous segmentation process.
The seed point locations are shown to the user in each iteration $n$, as depicted in \textbf{Fig.}\,\ref{fig:guided_prototype}.
There are four possible labeling schemes for those points in the underlying \mbox{two-class} classification problem, since each seed point
\mbox{$\mathbf{s}^n_i=(\mathbf{p}^n_i,\mathbf{\ell}^n_i)$} has a label \mbox{$\mathbf{\ell}^n_i\in\{{background},{foreground}\}$}.
The interface providing advanced user guidance displays the four alternative segmentation contour lines, which are a result of the four possible next steps
during the iterative interactive segmentation with respect to the labeling of the new seed points $\mathbf{s}^n_1$ and $\mathbf{s}^n_2$.
The user selects the only correct labeling, where all displayed object and background seeds are inside the object of interest and the image background, respectively.
The alternative views on the right act as four buttons to define a selection.
To further assist the user in their decision making, the region of interest, defined by $\mathbf{p}^n_1$ and $\mathbf{p}^n_2$, is zoomed in for the option view
on the right and displayed as a cyan rectangle in the overview image on the left of the \mbox{UI}.
The differences between the previous iteration's contour line and each of the four new options are highlighted by dotted areas in the four overlay mask images.
After the user selects one of the labelings, the two new seed points are added to the current set of scribbles $\mathbf{S}^n$.
The scribbles \mbox{$\mathbf{S}^n:=\mathbf{S}^{n-1}\cup\left\{\mathbf{s}^n_1,\mathbf{s}^n_2\right\}$} are utilized as input for the next iteration, on which basis two new
locations \mbox{$\mathbf{p}^{n+1}_1$ and $\mathbf{p}^{n+1}_2$} are computed.
The system-defined locations of the additional seed points can be determined by \mbox{$\argmax_e\mathbf{h}_e^{t=\infty,{n-1}}$}, the location(s) with the maximum number of label changes during the \mbox{GrowCut} segmentation.
Frequent changes identify image elements and areas for which the \mbox{GrowCut} algorithm indicates uncertainty in finding the correct labels.
The two locations in $\mathbf{h}^{t=\infty,{n-1}}$ which exhibited the most label changes during the previous
segmentation with input image $\mathbf{I}$ and seeds $\mathbf{S}^{n - 1}$ are then selected as $\mathbf{p}^n_1$ and $\mathbf{p}^n_2$.
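This selection step can be sketched in a few lines of NumPy (illustrative only; the function name is ours):
\begin{verbatim}
import numpy as np

def suggest_seed_locations(changes, num_points=2):
    # Positions of the num_points largest entries of the
    # label change counter h from the previous GrowCut run.
    flat = np.argsort(changes, axis=None)[::-1][:num_points]
    return [np.unravel_index(i, changes.shape) for i in flat]
\end{verbatim}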
\begin{figure}
\includegraphics[width=\columnwidth,height=0.61296534017\columnwidth]{images/guided_segmentation_prototype_contrast.png}
\caption{Guided segmentation prototype user interface.
The current segmentation displayed on the upper left can be improved by choosing one of the four segmentation alternatives displayed on the right.
The user is expected to choose the upper-right option in this configuration,
\additioncaption{due to the two new seeds' matching background and foreground labels}.}
\label{fig:guided_prototype}
\end{figure}
\subsubsection{Joint Segmentation Prototype}\label{sec:joint_prototype}
The joint prototype depicted in \textbf{Fig.}\,\ref{fig:joint_prototype} is a combination of a pictorial interaction scheme and a menu-driven approach.
(1) A set of \mbox{$J\in\mathbb{N}$} pre-selected new seeds is displayed in each iteration.
The seeds' initial labels are set automatically, based on whether their position is inside (foreground) or outside (background) the current segmentation mask.
The user may toggle the label of each of the new seeds, which also provides an intuitive \mbox{\emph{undo}} functionality.
The automated suggestion process for new seed point locations is depicted in \textbf{Fig.}\,\ref{fig:joint_prototype_prob_map}.
The seed points are suggested deterministically based on the indices of the maximum values in an element-wise sum of three approximated influence maps.
These maps are
the gradient magnitude image of $\mathbf{I}$,
the previous label changes \mbox{$\mathbf{h}^{t=\infty,{n-1}}$} per element in $\mathbf{G}_\mathbf{I}$ weighted by an empirically determined factor of $17/12$,
and an influence map based on the distance of each element in $\mathbf{I}$ to the current contour line.
Note that for the guided prototype (see \textbf{Sec.}\,\ref{sec:guided_prototype}), only $\mathbf{h}$ was used for the selection of suggested seed point locations.
This scheme was extended for the joint prototype, since extracting \mbox{$J\approx20$} instead of only the top two points solely from $\mathbf{h}$ potentially introduces suggested point locations forming impractical local clusters instead of spreading out with higher variance in the image domain.
This process approximates the true influence or entropy (information gain) of each possible location for a new seed.
When all seed points \mbox{$\left\{\mathbf{s}^n_1,\mathbf{s}^n_2,\dots,\mathbf{s}^n_J\right\}$} presented to the user are toggled to their correct label,
the user may click on the \emph{new points} button to initiate the next iteration with an updated set of seed points
\mbox{$\mathbf{S}^n=\mathbf{S}^{n-1}\cup\{\mathbf{s}^n_1,\mathbf{s}^n_2,\dots,\mathbf{s}^n_J\}$}.
Another set of seed points \mbox{$\{\mathbf{s}^{n+1}_1,\mathbf{s}^{n+1}_2,\dots,\mathbf{s}^{n+1}_J\}$} is generated and displayed.
(2) In addition to pre-selected seeds, a single new seed point $\mathbf{s}^n_0$ can be added manually via a user's long-press on any location in the image.
A desired change in the current labeling of this region is interpreted given this user action.
Therefore, the new seed point's initial label is set by inverting the current label of the given location.
A new segmentation is initiated by this interaction based on \mbox{$\mathbf{S}^n=\mathbf{S}^{n-1}\cup\left\{\mathbf{s}^{n}_0,\mathbf{s}^{n}_1,\dots,\mathbf{s}^n_J\right\}$}.
Note that the labels of \mbox{$\mathbf{s}^n_i$} are still subject to change via toggle interactions until the \mbox{\emph{new points}} button is pressed.
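A possible realization of this suggestion scheme is sketched below (SciPy/NumPy; the normalization of the individual maps and the exact decay of the contour distance term are our assumptions, since only the $17/12$ weighting of $\mathbf{h}$ is fixed above):
\begin{verbatim}
import numpy as np
from scipy import ndimage

def influence_map(img, changes, seg_mask):
    # Sum of gradient magnitude, GrowCut label changes h
    # (weighted by 17/12), and closeness to the contour line.
    grad = ndimage.gaussian_gradient_magnitude(img, sigma=1.0)
    grad = grad / (float(grad.max()) or 1.0)
    h = changes / (float(changes.max()) or 1.0)
    # Euclidean distance of every pixel to the contour line
    dist = np.where(seg_mask,
                    ndimage.distance_transform_edt(seg_mask),
                    ndimage.distance_transform_edt(~seg_mask))
    return grad + (17.0 / 12.0) * h + 1.0 / (1.0 + dist)

def suggest_joint_seeds(imap, j=20):
    flat = np.argsort(imap, axis=None)[::-1][:j]
    return [np.unravel_index(i, imap.shape) for i in flat]
\end{verbatim}
Here, \texttt{seg\_mask} is the boolean mask of the current segmentation, so the distance-based term peaks directly at the contour line.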
\begin{figure}
\includegraphics[width=\columnwidth,height=0.61296534017\columnwidth]{images/joint_segmentation_prototype_contrast.png}
\caption{Joint segmentation prototype user interface.
The user toggles the labels of pre-positioned seed points\additioncaption{, which positions are displayed to them as colored circles,} to properly indicate their inclusion into the set of object or background representatives.
New seeds can be added at the position of \additioncaption{current} interaction via a long-press on the overlay image.
The segmentation result as well as \additioncaption{the} displayed contour line adapt accordingly after each interaction.}
\label{fig:joint_prototype}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=0.61296534017\columnwidth]{images/prob_map_1.png}%
\caption{The approximated influence map for new seed point locations \additioncaption{for the joint segmentation prototype}.
The map is generated by a weighted sum of gradient magnitude image, number of cell changes \additioncaption{$h_e^{t=\infty}$ per cell $e$}
obtained from \additioncaption{the} previous \mbox{GrowCut} segmentation, \changecaption{as well as the}{and} distance to the contour line of the current segmentation.
}%
\label{fig:joint_prototype_prob_map}
\end{figure}
\subsection{Questionnaires}\label{sec:questionnaires}
\subsubsection{System Usability Scale (\mbox{SUS})}\label{sec:questionnaires_sus}
The \mbox{SUS}~\cite{brooke1996sus,lewis2009factor} is a widely used, reliable, and low-cost survey to assess the overall usability of a prototype, product, or service~\cite{kortum2013usability}.
Its focus is on pragmatic quality evaluation~\cite{ISO92411998,ISO92412018}.
The survey is technology agnostic, which enables an assessment of the usability of many types of user interfaces and \mbox{ISS}~\cite{bangor2009determining}.
The questionnaire consists of ten statements and an unipolar five-point Likert scale~\cite{likert1932technique}.
This allows for an assessment in a time span of about three minutes per participant.
The statements are as follows:
\begin{enumerate}
\item I think that I would like to use this system frequently.
\item I found the system unnecessarily complex.
\item I thought the system was easy to use.
\item I think that I would need the support of a technical person to be able to use this system.
\item I found the various functions in this system were well integrated.
\item I thought there was too much inconsistency in this system.
\item I would imagine that most people would learn to use this system very quickly.
\item I found the system very cumbersome to use.
\item I felt very confident using the system.
\item I needed to learn a lot of things before I could get going with this system.
\end{enumerate}
The Likert scale provides a fixed choice response format to these expressions.
The \mbox{$(N-1)/2$\,th} choice (counting from zero) in an \mbox{$N$-point} Likert scale is always the neutral element.
Using the scale, subjects are asked to define their degree of consent to each given statement.
The fixed choices for the five-point scale are named \emph{strongly disagree}, \emph{disagree}, \emph{undecided}, \emph{agree}, and \emph{strongly agree}.
During the evaluation of the survey, these names are assigned values
\change[label=c:c171,ref=c:c17]{%
\mbox{$\mathbf{x}^\text{SUS}_{s,i} \in \left\{0, 1, \dots, 4\right\}$} per subject $s$}
{$\mathbf{x}_i$ from zero to four}
in the order presented, for statements with index \change[label=c:c131,ref=c:c13]{\mbox{$i \in \left\{1, 2, \dots, 10\right\}$}}{\mbox{$i \in \left[1, 10\right]$}}.
\mbox{SUS} scores enable simple interpretation schemes, understandable also in multi-disciplinary project teams.
The result of the \mbox{SUS} survey is a single scalar value, in the range of zero to $100$ as a composite measure of the overall usability.
The score is computed according to
\addition[label=c:a131,ref=c:c13]{\textbf{Eq.}\,\ref{eq:sus_score}},
\addition[label=c:a151,ref=c:c15]{as outlined in~\cite{brooke1996sus},}
given $S$ participants, where
\change[label=c:c172,ref=c:c17]{$\mathbf{x}^\text{SUS}_{s,i}$}{$\mathbf{x}_s$}
is the response to
\change[label=c:c173,ref=c:c17]{the statement}{all statements} $i$ by subject $s$.
\begin{equation}
\operatorname{sus}(\mathbf{x}) = \frac{2.5}{S} \sum_{s}\left[\, \sum_{\text{odd } i} \mathbf{x}^\text{SUS}_{s,i} + \sum_{\text{even } i} (4 - \mathbf{x}^\text{SUS}_{s,i})\, \right]
\label{eq:sus_score}
\end{equation}
\addition[label=c:a156,ref=c:c15]{A neutral participant (\mbox{$\forall_{i} \ \mathbf{x}^\text{SUS}_{s,i} = 2$}) would produce a \mbox{SUS} score of $50$.}
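The score computation is straightforward to implement; a minimal NumPy sketch (function and variable names are ours):
\begin{verbatim}
import numpy as np

def sus_score(responses):
    # responses: (S x 10) array with Likert values in {0, ..., 4};
    # column i holds statement i+1, so even column indices
    # (0, 2, ...) correspond to the odd-numbered statements.
    x = np.asarray(responses, dtype=float)
    odd = x[:, 0::2].sum(axis=1)          # statements 1, 3, ..., 9
    even = (4 - x[:, 1::2]).sum(axis=1)   # statements 2, 4, ..., 10
    return 2.5 * (odd + even).sum() / x.shape[0]

print(sus_score([[2] * 10]))  # neutral participant -> 50.0
\end{verbatim}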
Although the \mbox{SUS} score allows for straightforward comparison of the usability throughout different systems, there is no simple intuition associated with the resulting scalar value.
\addition[label=c:a141,ref=c:c14]{ %
\mbox{SUS} scores do not provide a linear mapping of a system's quality in terms of overall usability. %
}
In practice, a \mbox{SUS} of less than $80$ is often interpreted as an indicator of a substantial usability problem with the system.
Bangor et al.~\cite{bangor2008empirical,bangor2009determining}
proposed an interpretation of the score on a seven-point scale.
\addition[label=c:a142,ref=c:c14]{ %
They added an eleventh question to $959$ surveys they conducted.
Here, participants were asked to describe the overall system as one of these seven items of an adjective rating scale%
}:
\emph{worst imaginable}, \emph{awful}, \emph{poor}, \emph{OK}, \emph{good}, \emph{excellent}, and \emph{best imaginable}.
\addition[label=c:a143,ref=c:c14]{ %
The resulting \mbox{SUS} scores could then be correlated with the adjectives.
The mapping from scores to adjectives resulting from their evaluation is depicted in \textbf{Fig.}\,\ref{fig:sus_adjective}.
}
This mapping also enables an absolute interpretation of a single \mbox{SUS} score.
\begin{figure}%
\includegraphics[width=\linewidth]{images/sus/sus_adjective_rating}\\
\centerline{\small System usability scale (\mbox{SUS}) rating}%
\caption{Mapping from a \mbox{SUS} score to an adjective rating scheme proposed by Bangor et al.~\cite{bangor2009determining}\additioncaption{. %
Given a \mbox{SUS} rating, the relative heights of the Gaussian distributions approximate the probabilities for each adjective. %
Distributions' $\mu$ and $\sigma$ were extracted by evaluating} $959$ surveys
\additioncaption{with added adjective rating as an 11th question}.
}%
\label{fig:sus_adjective}%
\end{figure}
\subsubsection{Semantic Differential \mbox{AttrakDiff-2}}\label{sec:questionnaires_attrakdiff}
A semantic differential is a technique for the measurement of meaning as defined by Osgood et al.~\cite{osgood1952nature,osgood1957measurement}.
Semantic differentials are based on the theory that the implicit anticipatory response of a person to a stimulus object is regarded as the object's meaning.
Since these implicit responses themselves cannot be recorded directly, more apparent responses like verbal expressions have to be considered~\cite{mehrabian1974approach,fishbein1975belief}.
These verbal responses have to be sensitive to and maximally dependent on meaningful states while independent from each other~\cite{osgood1957measurement}.
Hassenzahl et al.~\cite{hassenzahl2003attrakdiff, hassenzahl2000hedonic} defined a set of $28$ pairs of verbal expressions suitable to represent a subject's opinion on the hedonic as well as pragmatic quality (both aspects of perception) and attractiveness (an aspect of assessment) of a given interactive system separately~\cite{hassenzahl2001effect}.
During evaluation, the pairs of complementary adjectives are clustered into four groups, each associated with a different aspect of quality.
Pragmatic quality (\mbox{PQ}) is defined as the perceived usability of the interactive system, which is the ability to assist users to reach their goals by providing utile and usable functions~\cite{hassenzahl2008user}.
The attractiveness (\mbox{ATT}) quantifies the overall appeal of the system~\addition[label=c:a281,ref=c:c28]{\cite{hassenzahl2002importance}}.
The hedonic quality (\mbox{HQ})~\cite{diefenbach2008give} is separable into hedonic identity (\mbox{HQ-I}) and hedonic stimulus (\mbox{HQ-S}).
\mbox{HQ-I} focuses on a user's identification with the system and describes the ability of a product to communicate a favorable identity to other persons, thereby benefiting the user's self-esteem~\cite{hassenzahl2007hedonic}.
\mbox{HQ-S} describes the perceived novelty of the system. \mbox{HQ-S} is associated with the desire to advance one's knowledge and proficiencies.
The clustering of the $28$ word pairs into these four groups is defined as depicted in \textbf{Tab.}\,\ref{tab:attrakdiff_statements}.
\begin{table*}%
\caption{\mbox{AttrakDiff-2} statement pairs \deletioncaption{ for each category}.
\additioncaption{The pairs of complementary adjectives are clustered into four groups, each associated with a different aspect of quality.}
All $28$ pairs are presented to participants in randomized order. %
}%
\label{tab:attrakdiff_statements}%
\resizebox{\textwidth}{!}{%
\begin{tabular}{llll}%
Pragmatic quality ({PQ}) & Attractiveness ({ATT}) & Hedonic identity ({HQ-I}) & Hedonic stimulus ({HQ-S})\\\hline
complicated, simple & bad, good & alienating, integrating & cautious, bold \\
confusing, clearly structured & disagreeable, likeable & cheap, premium & conservative, innovative\\
cumbersome, straightforward & discouraging, motivating & isolating, connective & conventional, inventive\\
impractical, practical & rejecting, inviting & separates me from, brings me closer to people & dull, captivating\\
technical, human & repelling, appealing & tacky, stylish & ordinary, novel\\
unpredictable, predictable & ugly, attractive & unpresentable, presentable & undemanding, challenging\\
unruly, manageable & unpleasant, pleasant & unprofessional, professional & unimaginative, creative
\end{tabular}%
}%
\end{table*}
For each participant, the order of word pairs and order of the two elements of each pair are randomized prior to the survey's execution.
A bipolar~\cite{mccroskey1989bipolar} seven-point Likert scale is presented to the subjects to express their relative tendencies toward one of the two opposing statements (\mbox{poles}) of each expression pair, where index three denotes the neutral element.
For the questionnaire's evaluation for subject \change[label=c:c132,ref=c:c13]{\mbox{$s \in \left\{0, 1, \dots, S-1\right\}$}}{\mbox{$s \in [0, S)$}}, each of the seven adjective pairs \change[label=c:c133,ref=c:c13]{\mbox{$i \in \left\{0, 1, \dots, 6\right\}$}}{\mbox{$i \in [0, 6]$}} per group
\mbox{$g \in \{\text{PQ}, \,\text{ATT}, \,\text{HQ-I}, \,\text{HQ-S}\}$} is assigned a score \mbox{$\mathbf{x}^g_{s,i} \in \left\{1, 2, \dots, 7\right\}$} by each participant, reflecting their tendency towards the positive of the two adjectives.
The overall ratings per group are \addition[label=c:a132,ref=c:c13]{defined in \cite{hassenzahl2003attrakdiff} as} the mean scores computed over all subjects $s$ and statements $i$,
\addition[label=c:a133,ref=c:c13]{as depicted in \textbf{Eq.}\,\ref{eq:attrakdiff_score}}%
.
Here, $S$ is the number of participants in the survey.
\begin{equation}
\operatorname{attrakdiff}(\mathbf{x}, \,g) = \frac{1}{7 \cdot S} \sum_{s} \sum_{i} \mathbf{x}^g_{s,i}
\label{eq:attrakdiff_score}
\end{equation}
Therefore, a neutral participant would produce an \mbox{AttrakDiff-2} score of four.
The final averaged score of each group $g$ ranges from one (worst) to seven (best rating).
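Since \textbf{Eq.}\,\ref{eq:attrakdiff_score} is a plain mean over subjects and adjective pairs, it can be sketched analogously; the (subjects $\times$ pairs) array layout is again an assumption for the example.
\begin{verbatim}
import numpy as np

def attrakdiff_score(x_g):
    # x_g: scores of one group g, shape (S, 7), values in {1, ..., 7}.
    S = x_g.shape[0]
    return x_g.sum() / (7 * S)      # identical to x_g.mean()

# A fully neutral participant yields a group score of 4:
assert attrakdiff_score(np.full((1, 7), 4)) == 4.0
\end{verbatim}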
An overall evaluation of the \mbox{AttrakDiff-2} results can be conducted in the form of a portfolio representation~\cite{hassenzahl2008user}.
\mbox{HQ} is the mean of a system's \mbox{HQ-I} and \mbox{HQ-S} scores.
{PQ} and {HQ} scores of a specific system and user are visualized as a point in a two-dimensional graph.
The $95$\,\% confidence interval is an estimate of plausible values for rating scores from additional study participants,
and determines the extension of the rectangle around the described data point in each dimension.
A small rectangle area represents a more homogeneous rating among the participants than a larger area.
If a rectangle completely lies inside one of the seven fields with associated adjectives defined in~\cite{hassenzahl2008user}, this adjective is regarded as the dominant descriptor of the system.
Otherwise, systems can be particularized by overlapping fields' adjectives.
If the confidence rectangles of two systems overlap in their one-dimensional projection on either \mbox{HQ} or \mbox{PQ}, their difference in \mbox{AttrakDiff-2} scores in regards to this dimension is not significant.
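A minimal sketch of the rectangle construction is given below; the use of a Student-$t$ interval for the mean rating is an assumption, as the exact interval estimator is not specified here.
\begin{verbatim}
import numpy as np
from scipy import stats

def ci_halfwidth(scores, confidence=0.95):
    # Half-width of the confidence interval of the mean rating.
    scores = np.asarray(scores, dtype=float)
    t = stats.t.ppf(0.5 + confidence / 2, df=scores.size - 1)
    return t * stats.sem(scores)

def portfolio_rectangle(pq, hq):
    # Center point and per-dimension extents of the rectangle.
    return (np.mean(pq), np.mean(hq)), (ci_halfwidth(pq), ci_halfwidth(hq))
\end{verbatim}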
\subsection{Qualitative Measures}\label{sec:qualitative_measures}
In order to collect, normalize, and analyze visual and verbal feedback given by the participants, a summative qualitative content analysis is conducted via abstraction~\cite{hsieh2005three,elo2008qualitative}.
The abstraction method reduces the overall transcript material while preserving its substantial contents by summarization.
The corpus retains a valid mapping of the recording.
An essential part of abstraction is the formulation of macro operators like elimination, generalization, construction, integration, selection and
bundling.
The abstraction of statements is increased iteratively by the use of macro operators, which map statements of the current level of abstraction to the next, while clustering items based on their similarity~\cite{mayring2014qualitative}.
\subsection{HCI Evaluation}\label{sec:hci_evaluation} %
A user study is the most precise method for the evaluation of the quality of different interactive segmentation approaches~\cite{nickisch2010learning}.
Analytical measures as well as subjective measures can be derived from standardized user tests~\cite{gao2013mental}.
From interaction data recorded during the study, the reproducibility of segmentation results as well as the achievable accuracy with a given system per time can be estimated.
The complexity and novelty of the system can be expressed via the observed convergence to the ground truth over time spent by the participants segmenting multiple images each.
The user's satisfaction with the interactive approaches is expressed by the analysis of questionnaires, which the study participant fills out
immediately %
after their tests are conducted and before any discussion or debriefing has started. %
The respondent is asked to fill in the questionnaire as spontaneously as possible.
Intuitive answers are desired as user feedback instead of well-thought-out responses for each item in the questionnaire~\cite{brooke1996sus}.
For the randomized A/B study, individuals are selected to approximate a representative sample of the intended users of the final system~\cite{siroker2013b}.
During the study, subjects are given multiple interactive segmentation tasks to fulfill, each within a limited time frame.
Each user segments all $m$ provided images with two different methods (A and B).
All subjects are given $2 \cdot m$ tasks in a randomized order to prevent a learning effect bias, which would allow for higher quality outcomes for the later tasks.
Video and audio data of the subjects are recorded.
Every user interaction recognized by the system and its time of occurrence are logged.
\section{Experiments}\label{sec:experiments}
\subsection{Data Set for the Segmentation Tasks}\label{sec:study_data_sets}
In \textbf{Fig.}\,\ref{fig:study_data_sets} the data set used for the usability test is depicted.
For this evaluation, the \mbox{RGB} colored images are converted to grayscale in order to increase similarity to the segmentation process of medical images acquired from \mbox{CBCT}.
The conversion is performed in accordance with the \href{https://www.itu.int/rec/R-REC-BT.709/en}{\mbox{ITU--R BT.709-6}} recommendation~\cite{recommendation1990basic} for the extraction of true luminance
\change[label=c:c161,ref=c:c16]{%
\mbox{$\mathbf{I}\in\mathbb{R}^{w,h}$}}{%
\mbox{$\mathbf{Y}\in\mathbb{R}^{w,h}$}}
defined by the International Commission on Illumination (\mbox{CIE}) from contemporary cathode ray tube (\mbox{CRT}) phosphors via \change[label=c:c134,ref=c:c13]{\textbf{Eq.}\,\ref{eq:rgbtograyscale},
where \mbox{$\mathbf{I}'_R\in\mathbb{R}^{w,h}$}, \mbox{$\mathbf{I}'_G\in\mathbb{R}^{w,h}$}, and \mbox{$\mathbf{I}'_B\in\mathbb{R}^{w,h}$}}{where $\mathbf{R}$, $\mathbf{G}$, and $\mathbf{B}$} are the linear red, green, and blue color channels \addition[label=c:a165,ref=c:c16]{of \mbox{$\mathbf{I}'\in\mathbb{R}^{w,h,3}$}} respectively.
\begin{equation}
\mathbf{I} = 0.2126 \cdot \mathbf{I}'_R + 0.7152 \cdot \mathbf{I}'_G + 0.0722 \cdot \mathbf{I}'_B
\label{eq:rgbtograyscale}
\end{equation}
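A direct implementation of \textbf{Eq.}\,\ref{eq:rgbtograyscale} is straightforward; the sketch below assumes that the input channels are already linear (gamma-decoded), as required by the recommendation.
\begin{verbatim}
import numpy as np

def bt709_luminance(img_lin):
    # img_lin: linear RGB image of shape (w, h, 3).
    weights = np.array([0.2126, 0.7152, 0.0722])
    return img_lin @ weights        # luminance image of shape (w, h)
\end{verbatim}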
\addition[label=c:a161,ref=c:c16]{Image \textbf{Fig.}\,\ref{fig:study_data_sets}}(b) is initially presented to the \change[label=c:c162,ref=c:c16]{study participants}{users} in order to familiarize themselves with the upcoming segmentation process.
The segmentation tasks associated with images \addition[label=c:a162,ref=c:c16]{\textbf{Fig.}\,\ref{fig:study_data_sets}}(a,\,c,\,d) are then displayed sequentially to the subjects in randomized order.
The images are chosen to fulfill two goals of the study.
(1) Ambiguity of the ground truth has to be minimized in order to suppress noise in the quantitative data.
Each test person should have the same understanding and consent about the correct outline of the object to segment.
Therefore, clinical images can only be utilized with groups of specialized domain experts.
(2) The degree of complexity should vary between the images displayed to the users.
Image (b), depicted in \textbf{Fig.}\,\ref{fig:study_data_sets}, of moderate complexity with regards to its disagreement coefficient~\cite{hanneke2007bound}, is displayed first to learn the process of segmentation with the given prototype.
\addition[label=c:a163,ref=c:c16]{ %
Users are asked for an initial testing of a prototype's features utilizing this image without any time pressure.
The subsequent interactions during the segmentations of the remaining three images are recorded for each prototype and participant.
}
The complexity increases from (a) to (d), \addition[label=c:a164,ref=c:c16]{according to the \mbox{GTs'} Minkowski-Bouligand dimensions~\cite{mandelbrot1967long}}.
The varying complexity enables a more objective and extended differentiation of subjects' performances with given prototypes.
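The Minkowski-Bouligand dimension used here to order the images by complexity can be estimated via box counting; the following sketch assumes a binary mask of the \mbox{GT} boundary pixels as input, which is one possible convention.
\begin{verbatim}
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
    # mask: binary 2-D array (e.g. GT boundary pixels set to True).
    counts = []
    for s in sizes:
        h, w = mask.shape
        tiles = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.any(tiles, axis=(1, 3)).sum())
    # The dimension is the negative slope of log N(s) over log s.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
\end{verbatim}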
\begin{figure}%
\resizebox{\columnwidth}{!}{%
{\def\arraystretch{1.1}\tabcolsep=2pt
\begin{tabular}{cccc}%
\includegraphics[height=0.2357\textwidth]{images/Usability_data_sets/aneurysm_re.png} &
\includegraphics[height=0.2357\textwidth]{images/Usability_data_sets/llama_.png} &
\includegraphics[height=0.2357\textwidth]{images/Usability_data_sets/106024_.png} &
\includegraphics[height=0.2357\textwidth]{images/Usability_data_sets/124084_.png} \\
\includegraphics[height=0.2357\textwidth]{images/Usability_data_sets/aneurysm_re_gt.png} &
\includegraphics[height=0.2357\textwidth]{images/Usability_data_sets/llama__gt.png} &
\includegraphics[height=0.2357\textwidth]{images/Usability_data_sets/106024__gt.png} &
\includegraphics[height=0.2357\textwidth]{images/Usability_data_sets/124084__gt.png} \\
\Large{(a)} & \Large{(b)} & \Large{(c)} & \Large{(d)}
\end{tabular}%
}
}%
\caption{In the top row, image data utilized in the usability tests are depicted.
In the bottom row, the ground truth segmentations of the images are illustrated.
The image of a contrast enhanced aneurysm (a) and its ground truth \additioncaption{annotation by a medical expert} were composed for this study.
Images (b\,--\,d) are selected from the \href{https://web.archive.org/web/20161203110733/http://research.microsoft.com/en-us/um/cambridge/projects/visionimagevideoediting/segmentation/grabcut.htm}{GrabCut image database} initially created for~\cite{rother2004grabcut}.%
}
\label{fig:study_data_sets}%
\end{figure}%
\subsection{Usability Test Setup}\label{sec:usability_test_setup}
Two separate user studies are conducted to test all prototypes described in \textbf{Sec.}\,\ref{sec:sgmentation_prototypes},
in order to keep the time for each test short (less than \change[label=c:c261,ref=c:c26]{$10$ minutes per prototype}{$20$ minutes}), thus retaining the focus of the participants, while minimizing the occurrence of learning effect artifacts in the acquired data.
\addition[label=c:a261,ref=c:c26]{Note that the participants use this time not only to finish the segmentation tasks, but also to familiarize themselves with the novel interaction system, as well as to form opinions about the system while testing their provided interaction features.}
(1) The first user test is a randomized A/B test of the \mbox{semi-manual} prototype (\textbf{Sec.}\,\ref{sec:semi-manual_prototype}) and the guided prototype (\textbf{Sec.}\,\ref{sec:guided_prototype}).
Ten individuals are selected as test subjects due to their advanced domain knowledge in the fields of medical image processing and mobile input devices.
The subjects are given the task to segment \mbox{$m=3$} different images with varying complexity, which are described in \textbf{Sec.}\,\ref{sec:study_data_sets}, in random order.
A fourth input image of medium complexity is provided for the users to familiarize themselves with the \mbox{ISS} before the tests.
As an interaction device, a mobile tablet computer is utilized, since the final segmentation method is intended for usage via such a medium.
The small $10.1$ inch \mbox{($13.60\,\text{cm}\cdot21.75\,\text{cm}$)} \mbox{WUXGA} display and fingers utilized as a multi-touch pointing device further exacerbate the challenge of producing an exact segmentation for the participants~\cite{norman2010gestural}.
The user study environment is depicted in \textbf{Fig.}\,\ref{fig:study_setup}.
Audio and video recordings are evaluated via a qualitative content analysis, described in \textbf{Sec.}\,\ref{sec:qualitative_measures},
in order to detect possible improvements for the tested prototypes and their interfaces.
After segmentation, each participant fills out the \mbox{SUS} (\textbf{Sec.}\,\ref{sec:questionnaires_sus}) and \mbox{AttrakDiff-2} (\textbf{Sec.}\,\ref{sec:questionnaires_attrakdiff}) questionnaires.
(2) The second user test is conducted for %
the joint segmentation prototype (\textbf{Sec.}\,\ref{sec:joint_prototype}). %
The data set and test setup are the same as in the first user study and all test persons of study (1) also participated in study (2). One additional subject participated only in study (2).
Two months passed between the two studies, in which the former participants were not exposed to any of the prototypes.
Therefore, the learning effect bias for the second test is negligible.
\begin{figure}
\includegraphics[width=\columnwidth]{images/usability_test_setup_new_.pdf}
\caption{User testing setup for the usability evaluation of the prototypes.
In this environment, a user performs an interactive segmentation on a mobile tablet computer while sitting.
\mbox{RGB} cameras record the hand motions on the input device and facial expressions of the participant.
\additioncaption{In addition, each recognized input is recorded on the tablet device (the interaction log).}
}
\label{fig:study_setup}
\end{figure}
\subsection{Prediction of Questionnaire Results}\label{sec:prediction_of_questionnaire_results} %
The questionnaires' \mbox{PQ}, \mbox{HQ}, \mbox{HQ-I}, \mbox{HQ-S}, \mbox{ATT}, and \mbox{SUS} results are predicted, based on features extracted from the interaction log data.
For the prediction, a regression analysis is performed.
Stochastic Gradient Boosting Regression Forests (\mbox{GBRF}) are an additive model for regression analysis~\cite{friedman2001greedy,friedman2002stochastic,hastie2009boosting}.
In several stages, shallow regression trees are generated.
Each such tree is a weak base learner resulting in a prediction error \mbox{$\varepsilon = b + v$}, with high bias $b$ and low variance $v$.
These regression trees are utilized
to minimize an arbitrary differentiable loss function, each fitted to the negative gradient of the previous stage's outcome, thus reducing the overall bias via boosting~\cite{breiman1999using}.
The Huber loss function~\cite{huber1964robust} is utilized for this evaluation due to its increased robustness to outliers in the data with respect to the squared error loss. %
The collected data set of user logs is split randomly in a ratio of \mbox{$4:1$} for training and testing.
An exhaustive grid search over $20,480$ parameter combinations is performed for each of the six \mbox{GBRF} estimators (one for each questionnaire result) with scorings based on an eight-fold cross-validation on the training set.
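The training pipeline can be sketched with scikit-learn as follows; the parameter grid shown is purely illustrative, since the actual $20,480$ combinations are not listed in the text, and the placeholder data merely fix the array shapes.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

X = np.random.rand(31, 238)     # placeholder for the log features
y = np.random.rand(31) * 100    # placeholder for one questionnaire score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)  # 4:1
grid = {"n_estimators": [100, 400], "learning_rate": [0.03, 0.1],
        "max_depth": [2, 3], "subsample": [0.5, 1.0]}
search = GridSearchCV(GradientBoostingRegressor(loss="huber"),
                      grid, cv=8)  # eight-fold cross-validation
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
\end{verbatim}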
\subsubsection{Feature Definition}\label{sec:feature_definition}
The collected data contains $31$ samples with $216$ possible features each.
The questionnaire results (\mbox{PQ}, \mbox{HQ}, \mbox{HQ-S}, \mbox{HQ-I}, \mbox{ATT}, \mbox{SUS}) of the $31$ samples are predicted based on
features extracted from the interaction log data of the four images segmented with the system\deletion[label=c:d271,ref=c:c27]{(see \textbf{Fig.}\,\ref{fig:study_data_sets})}.
Four features are the relative median seed positions per user and their standard deviation in two dimensions.
$22$ additional features, like the number of undo operations (\mbox{\emph{\#Undos}}) and number of interactions (\mbox{\emph{\#Interactions}}), the overall computation time (\mbox{\emph{$\Sigma$Computation\_time}}), overall interaction time (\mbox{\emph{$\Sigma$Interaction\_time}}), elapsed real time (\mbox{\emph{$\Sigma$Wall\_time}}), \mbox{\emph{Final\_Rand\_index}}, and \mbox{\emph{Final\_Dice\_score}} are each reduced to two scalar values, their mean and median over the four segmentations per prototype and user,
to obtain $48$ base features.
Since these features each only correlate weakly with the questionnaire results, composite features are added in order to assist the model's learning process for feature relations.
Added features are composed of one base feature value divided by (the mean or median of) %
computation time, interaction time, or elapsed real time. %
The relations between those time values themselves are also added.
In total, $216$ features directly related to the interaction log data are used.
In addition, a principal component analysis (\mbox{PCA}) is performed in order to add $10$\,\% ($22$) features with maximized variance to the directly assessed ones to further assist the feature selection step via \mbox{GBRFs}.
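A sketch of this augmentation step is given below; standardizing the features before the decomposition is an assumption, as the preprocessing is not detailed here.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def augment_with_pca(X, fraction=0.10):
    # Append the leading principal components (10 % of the feature
    # count, i.e. 22 for 216 features) to the raw feature matrix.
    n_comp = int(round(fraction * X.shape[1]))
    Z = StandardScaler().fit_transform(X)
    pcs = PCA(n_components=n_comp).fit_transform(Z)
    return np.hstack([X, pcs])      # 216 + 22 = 238 features
\end{verbatim}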
\subsubsection{Feature Selection for \mbox{SUS} Prediction}\label{sec:sus_prediction}
For the approximation of \mbox{SUS} results, a feature selection step is added to decrease the prediction error by an additional three percentage points:
here, after the described initial grid search, $1$\,\% (205) of the \mbox{GBRF} estimators, with the lowest mean deviance from the ground truth, %
are selected to approximate the most important features.
From those estimators, the most important features for the \mbox{GBRFs} are extracted via a \emph{$1/\text{loss}$}-weighted
feature importance voting. %
This feature importance voting by $205$ estimators ensures a more robust selection than deciding the feature ranking from only a single trained \mbox{GBRF}.
After the voting, a second grid search over the same $20,480$ parameter combinations, but with a reduction from $238$ to only $25$ of the most important features is performed.
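The \emph{$1/\text{loss}$}-weighted voting can be expressed compactly; the sketch assumes fitted scikit-learn estimators exposing \texttt{feature\_importances\_} together with their cross-validation losses.
\begin{verbatim}
import numpy as np

def vote_top_features(estimators, losses, n_keep=25):
    # estimators: the best 1 % (205) fitted GBRF models;
    # losses: their mean CV deviances (lower is better).
    w = 1.0 / np.asarray(losses)
    imp = np.array([e.feature_importances_ for e in estimators])
    score = w @ imp / w.sum()   # weighted mean importance per feature
    return np.argsort(score)[::-1][:n_keep]
\end{verbatim}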
\section{Results}\label{sec:results}
\subsection{Overall Usability}\label{sec:results_overall_usability}
\begin{figure*}%
\resizebox{\textwidth}{!}{%
{\def\arraystretch{1.5}\tabcolsep=4pt
\begin{tabular}{cccc!{\color{gray}\vrule}c}%
\rotatebox{90}{$\quad$ \mbox{SUS} score per Subject} &
\includegraphics[height=0.2357\textwidth]{images/sus/big/sus_data_manual0.pdf} &
\includegraphics[height=0.2357\textwidth]{images/sus/big/sus_data_guided0.pdf} &
\includegraphics[height=0.2357\textwidth]{images/sus/big/sus_data_joint0.pdf} & \\
\rotatebox{90}{$\quad$ \mbox{SUS} score per Statement} &
\includegraphics[height=0.2357\textwidth]{images/sus/big/sus_data_manual1.pdf} &
\includegraphics[height=0.2357\textwidth]{images/sus/big/sus_data_guided1.pdf} &
\includegraphics[height=0.2357\textwidth]{images/sus/big/sus_data_joint1.pdf} &
\includegraphics[height=0.2357\textwidth]{images/sus/big/sus_data_overall0.pdf} \\
& SUS \mbox{semi-manual} prototype & \mbox{SUS} guided prototype & \mbox{SUS} joint prototype & \mbox{SUS} overall\\
\end{tabular}%
}
}%
\caption{Results of the \mbox{SUS} questionnaires per prototype.
Values are normalized \additioncaption{in accordance with \textbf{Eq.}\,\ref{eq:sus_score}}, such that $4$ is considered the best possible result for each question.
The \mbox{Semi-manual} prototype's \mbox{SUS} mean is $88$, guided prototype's mean is $67$, and joint prototype's mean \mbox{SUS} score is $82$.
}
\label{fig:result_sus}%
\end{figure*}%
The result of the \mbox{SUS} score is depicted in \textbf{Fig.}\,\ref{fig:result_sus}.
According to the mapping (\textbf{Fig.}\,\ref{fig:sus_adjective}) introduced in \textbf{Sec.}\,\ref{sec:questionnaires_sus}, the adjective ratings of the \mbox{semi-manual} and joint prototypes are \emph{excellent} ($88$ and $82$, respectively), while the adjective associated with the guided prototype is \emph{good} ($67$).
\begin{figure}
\centering %
\resizebox{0.75\columnwidth}{!}{%
{\Large\input{images/correlation___.pdf_tex}}%
}%
\caption{Pearson correlation coefficients for the \mbox{AttrakDiff-2} (blue) and \mbox{SUS} (red) questionnaire results,
based on the acquired questionnaire data.
The line thickness is proportionate to correlation strength \additioncaption{of the different aspects of quality measured}.}%
\label{fig:result_questionnaire_results_correlation}
\end{figure}
A graph representation of the similarity of individual usability aspects, based on the acquired questionnaire data, is depicted in \textbf{Fig.}\,\ref{fig:result_questionnaire_results_correlation}.
Based on the Pearson correlation coefficients utilized as a metric for similarity, the \mbox{SUS} score has the most similarity to the pragmatic (\mbox{PQ}) and attractiveness (\mbox{ATT}) usability aspects provided by the \mbox{AttrakDiff-2} questionnaire.
\subsection{Pragmatic Quality}\label{sec:results_pragmatic}
The \mbox{PQ} results of the \mbox{AttrakDiff-2} questionnaire are illustrated in \textbf{Fig.}\,\ref{fig:result_attrakdiff}.
The \mbox{PQ} scores for \mbox{semi-manual}, guided, and joint prototypes are $88$\,\%, $50$\,\%, and $74$\,\% of the maximum score, respectively.
Since the $95$\,\% confidence intervals are mutually non-overlapping,
the prototypes' ranking regarding \mbox{PQ} is significant.
The quantitative evaluation of recorded interaction data is depicted in \textbf{Fig.}\,\ref{fig:result_logs}.
Dice scores before the first interaction are zero, except for the guided prototype ($0.82\pm0.02$), where a few fixed seed points had to be provided to initialize the system.
Utilizing the \mbox{semi-manual} prototype and starting from zero, a similar Dice measure to the guided prototype's initialization is reached after about seven interactions, which takes $13.06\pm2.05$ seconds on average.
The median values of final Dice scores per prototype are $0.95$ (\mbox{semi-manual}), $0.94$ (guided), and $0.82$ (joint). %
The mean overall elapsed wall time in seconds spent for interactive segmentations per prototype are $73\pm11$ (\mbox{semi-manual}), $279\pm36$ (\mbox{guided}), and $214\pm24$ (\mbox{joint}). %
Since segmenting with the guided version takes the longest time and does not yield the highest final Dice scores, the initial advantage from pre-existing seed points does not bias the top ranking of a prototype in this evaluation.
\begin{figure*}%
\resizebox{\textwidth}{!}{%
{\def\arraystretch{1.45}\tabcolsep=4pt
\begin{tabular}{cccc!{\color{gray}\vrule}c}%
\rotatebox{90}{$\quad$ \mbox{AttrakDiff-2} per Subject} &
\includegraphics[height=0.2357\textwidth]{images/attrakdiff/big/attrakdiff_data_manual0.pdf} &
\includegraphics[height=0.2357\textwidth]{images/attrakdiff/big/attrakdiff_data_guided0.pdf} &
\includegraphics[height=0.2357\textwidth]{images/attrakdiff/big/attrakdiff_data_joint0.pdf} &
\includegraphics[height=0.2357\textwidth]{images/attrakdiff/attrakdiff_data_group0.pdf} \\
\rotatebox{90}{$\quad$ \mbox{AttrakDiff-2} per Statement} &
\includegraphics[height=0.2357\textwidth]{images/attrakdiff/big/attrakdiff_data_manual1.pdf} &
\includegraphics[height=0.2357\textwidth]{images/attrakdiff/big/attrakdiff_data_guided1.pdf} &
\includegraphics[height=0.2357\textwidth]{images/attrakdiff/big/attrakdiff_data_joint1.pdf} &
\includegraphics[height=0.2357\textwidth]{images/attrakdiff/big/attrakdiff_data_overall0.pdf} \\
& \mbox{AttrakDiff-2} \mbox{semi-manual} prototype & \mbox{AttrakDiff-2} guided prototype & \mbox{AttrakDiff-2} joint prototype & \mbox{AttrakDiff-2} overall
\end{tabular}%
}
}%
\caption{Results of the \mbox{AttrakDiff-2} questionnaires per prototype.
A value of $7$ is considered the best possible result.
The \mbox{Semi-manual} prototype's \mbox{AttrakDiff-2} mean is $5.46$, guided prototype's mean is $4.50$, and joint prototype's mean \mbox{AttrakDiff-2} score is $5.22$.
}
\label{fig:result_attrakdiff}%
\end{figure*}%
\begin{figure*}%
\resizebox{\textwidth}{!}{%
{\def\arraystretch{1.0}\tabcolsep=4pt
\begin{tabular}{cccc}%
\rotatebox{90}{\Large{$\ $S{\o}rensen-Dice Coefficient}} &
\includegraphics[height=0.32\textwidth]{images/log_data/big/demo_manual0.pdf} &
\includegraphics[height=0.32\textwidth]{images/log_data/big/demo_guided0.pdf} &
\includegraphics[height=0.32\textwidth]{images/log_data/big/adjuvant_data-driven0.pdf} \\ %
& \Large{\mbox{Semi-manual} Prototype Interactions} & \Large{Guided Prototype Interactions} & \Large{Joint Prototype Interactions}
\end{tabular}%
}
}%
\caption{Evaluation of the user interaction data.
The segmentations' similarity to the ground truth according to the Dice score is depicted per interaction.
The median Dice rating as well as the $75$\,\% and $95$\,\% confidence intervals are illustrated.
}
\label{fig:result_logs}%
\end{figure*}%
\subsection{Hedonic Quality}\label{sec:results_hedonic}
\subsubsection{Identity and Stimulus}
The \mbox{AttrakDiff-2} questionnaire provides a measure for the \mbox{HQ} of identity and stimulus introduced in \textbf{Sec.}\,\ref{sec:questionnaires_attrakdiff}.
The \mbox{HQ} scores for \mbox{semi-manual}, guided, and joint prototypes are $72$\,\%, $70$\,\%, and $77$\,\% of the maximum score, respectively.
Since the $95$\,\% confidence intervals are overlapping for all three prototypes, no system ranks significantly higher than the others.
An overall evaluation of the \mbox{AttrakDiff-2} results is conducted in the form of a portfolio representation depicted in \textbf{Fig.}\,\ref{fig:result_attrakdiff_portfolio}.
\begin{figure}%
\begin{tikzpicture}
\begin{axis}[width=9cm,height=9cm,grid,xtick={1,3,5,7},xticklabels={1,3,5,7},xmin=1,xmax=7,ytick={1,3,5,7},yticklabels={1,3,5,7},ymin=1,ymax=7,xlabel={Pragmatic Quality (PQ)},ylabel={Hedonic Quality (HQ)}]
\node[text width=1.5cm,align=center] at (axis cs:2,2) {super-fluous};
\node[text width=1.5cm,align=center] at (axis cs:2,6) {too self-oriented};
\node[text width=1.5cm,align=center] at (axis cs:4,4) {neutral};
\node[text width=1.5cm,align=center] at (axis cs:4,6) {self-oriented};
\node[text width=1.5cm,align=center] at (axis cs:6,2) {too task-oriented};
\node[text width=1.5cm,align=center] at (axis cs:6,4) {task-oriented};
\node[text width=1.5cm,align=center] at (axis cs:6,6) {desired};
%
\fill [fill=semi_manual_prot_color,semi_manual_prot_color, fill opacity=0.4] (axis cs:5.64106645752803,4.798427683218617) rectangle (axis cs:6.158933542471971,5.301572316781383);
\addplot[color=semi_manual_prot_color,mark=otimes*,mark size=5pt,fill=white] coordinates {(5.9,5.05)};
\fill [fill=guided_prot_color,guided_prot_color, fill opacity=0.4] (axis cs:3.143832023165031,4.6750586928307385) rectangle (axis cs:3.856167976834969,5.167798450026405);
\addplot[color=guided_prot_color,mark=otimes*,mark size=5pt,fill=white] coordinates {(3.5,4.921428571428572)};
\fill [fill=joint_prot_color,joint_prot_color, fill opacity=0.4] (axis cs:4.696608735705655,4.953083574359387) rectangle (axis cs:5.446248407151487,5.389773568497756);
\addplot[color=joint_prot_color,mark=otimes*,mark size=5pt,fill=white] coordinates {(5.071428571428571,5.171428571428572)};
%
\addplot[only marks,color=semi_manual_prot_color,mark=otimes*,mark size=1.5pt] coordinates {(6.4285714285714288, 3.8571428571428572)(6.1428571428571432, 6.0)(5.5714285714285712, 4.6428571428571432)(6.4285714285714288, 6.2142857142857144)(5.7142857142857144, 5.2857142857142856)(6.8571428571428568, 5.2857142857142856)(5.1428571428571432, 4.4285714285714288)(6.4285714285714288, 5.1428571428571432)(5.0, 4.6428571428571432)(5.2857142857142856, 5.0)};
%
\addplot[only marks,color=guided_prot_color,mark=otimes*,mark size=1.5pt] coordinates {(2.0, 4.2857142857142856)(3.2857142857142856, 5.9285714285714288)(3.8571428571428572, 5.1428571428571432)(4.2857142857142856, 5.5)(2.7142857142857144, 4.7142857142857144)(5.5714285714285712, 5.1428571428571432)(4.7142857142857144, 4.0)(3.2857142857142856, 5.2142857142857144)(3.4285714285714284, 4.9285714285714288)(1.8571428571428572, 4.3571428571428568)};
%
\addplot[only marks,color=joint_prot_color,mark=otimes*,mark size=1.5pt] coordinates {(5.1428571428571432, 4.7142857142857144)(4.8571428571428568, 5.0714285714285712)(5.8571428571428568, 5.5714285714285712)(4.4285714285714288, 4.3571428571428568)(6.0, 6.0)(5.2857142857142856, 4.7857142857142856)(5.7142857142857144, 6.3571428571428568)(2.1428571428571428, 4.8571428571428568)(6.4285714285714288, 5.0)(4.8571428571428568, 5.0)};
%
\end{axis}
\end{tikzpicture}
\caption{\mbox{AttrakDiff-2} portfolio representation, \additioncaption{according to \cite{hassenzahl2008user},} depicting results from the evaluation of the \mbox{semi-manual} segmentation prototype (blue), guided prototype (green), and joint prototype (red).
The rectangular areas illustrate the $95$\,\% confidence intervals for the mean value in each dimension.
The mean intervals are $5.5$\,\% for \mbox{PQ} and $4.0$\,\% for \mbox{HQ}. %
}%
\label{fig:result_attrakdiff_portfolio}%
\end{figure}
\begin{table}
\caption{Relative absolute prediction errors for \mbox{AttrakDiff-2} and \mbox{SUS} test set samples.
Predictions are computed by six separately trained \changecaption{Stochastic Gradient Boosting Regression Forests (\mbox{GBRFs})}{\mbox{GBRFs}}, one for each figure of merit. Note that each training process only utilizes the interaction log data.
Results displayed are the median values of $10^4$ randomly initialized training processes.} %
\label{tab:prediction_results_gbrf}%
\tabcolsep=5.25pt
\begin{tabular}{lrrrrrr}
Relative Error & ATT & HQ & HQ-I & HQ-S & PQ & SUS \\[1pt]\hline
Mean & 11.5\,\% & 7.4\,\% & 10.5\,\% & 8.0\,\% & 15.7\,\% & 10.4\,\% \\
Median & 8.9\,\% & 6.3\,\% & 9.4\,\% & 6.2\,\% & 13.7\,\% & 8.8\,\% \\
Std & 8.0\,\% & 5.5\,\% & 6.7\,\% & 6.9\,\% & 12.0\,\% & 7.1\,\%
\end{tabular}
\end{table}
\subsubsection{Qualitative Content Analysis}
A summative qualitative content analysis as described in \textbf{Sec.}\,\ref{sec:qualitative_measures} is conducted on the audio and video data recorded during the study.
After generalization and reduction of given statements, the following user feedback is extracted with respect to three problem statements:
positive usability aspects,
negative usability aspects, and
user suggestions concerning existing functions or new functions.
\textbf{Feedback for multiple prototypes}
\begin{enumerate}
\item Responsiveness: the most common statement concerning the \mbox{semi-manual} and joint version is that the user expected the zoom function to be more responsive and thus more time efficient. %
\item Visibility: $20$\,\% of the participants had difficulties distinguishing between the segmentation contour line and either the background image or the foreground scribbles in the overlay mask, due to the proximity of their assigned color values.
\item Feature suggestion: deletion of individual seed points instead of all seeds from last interaction using \emph{undo}.
\end{enumerate}
\textbf{\mbox{Semi-manual} segmentation prototype}
\begin{enumerate}
\item Mental model: $30$\,\% of test persons suggested a clearly visible indication of whether the label for the scribble drawn next will be foreground or background.
\item Visibility: hide previously drawn seed points, in order to prevent confusion with the current contour line and occlusion of the underlying image.
\end{enumerate}
\textbf{Guided segmentation prototype}
\begin{enumerate}
\item Responsiveness: $50$\,\% of test persons suggested an indicator for ongoing computations during their time of waiting.
\item Control: users would like to influence the location of new seed points, support for manual image zoom, and fine grained control for the \emph{undo} function.
\end{enumerate}
\textbf{Joint prototype}
\begin{enumerate}
\item Visibility: $64$\,\% of users intuitively found the toggle functionality for seed labels without prior explanation.
\item Visibility: $64$\,\% of participants suggested visible instructions for manual seed generation.
\end{enumerate}
\subsection{Prediction of Questionnaire Results from Log Data}\label{sec:prediction_of_questionnaire_results_from_log_data}
The questionnaires' results are predicted via a regression analysis, based on features extracted from the interaction log data.
A visualization of the feature importances for the regression analysis with respect to the \mbox{GBRF} is depicted in \textbf{Fig.}\,\ref{fig:gbrf_feature_importance}.
An evaluation with the test set is conducted as depicted in \textbf{Tab.}\,\ref{tab:prediction_results_gbrf}.
The mean prediction errors for the questionnaires' results are $15.7$\,\%
for \mbox{PQ} and $7.4$\,\%
for \mbox{HQ}.
In both cases, the error of these (first) estimates is larger but close to the average $95$\,\% confidence intervals of $5.5$\,\% (\mbox{PQ}) and $4.0$\,\% (\mbox{HQ}) for the overall questionnaire results in the portfolio representation. %
\begin{figure}%
\resizebox{\columnwidth}{!}{%
\begin{tabular}{rc}
\rotatebox{90}{$\qquad\qquad$Feature importance} &
\includegraphics[width=\columnwidth]{images/feature_importance/normalized_feature_importance__.pdf} \\
& Feature Indices\vspace{0.5em} \\
\rotatebox{90}{$\qquad\qquad$Feature importance} & %
\includegraphics[width=\columnwidth]{images/feature_importance/normalized_sorted_log_feature_importance_.pdf} \\
& Feature indices sorted by importance \\
\end{tabular}%
}
\caption{Relative feature importance measures from $1$\,\% ($205$) of best \mbox{GBRF} estimators from grid search as described in \textbf{Sec.}\,\ref{sec:sus_prediction}.
The orange rectangle on the \changecaption{top}{upper} right highlights features added via \mbox{PCA} transformation.
Relative feature importance is depicted on a log scale on the bottom.}
\label{fig:gbrf_feature_importance}%
\end{figure}%
The similarity graph for the acquired usability aspects introduced in \textbf{Fig.}\,\ref{fig:result_questionnaire_results_correlation} can be extended to outline the direct relationship between questionnaire results and recorded features.
Such a graph is depicted in \textbf{Fig.}\,\ref{fig:feature_correlations_and_feature_importance}.
Notably, there is no individual feature that strongly correlates with one of the questionnaire results.
However, as the results of the regression analysis in \textbf{Tab.}\,\ref{tab:prediction_results_gbrf} depict, there is a noteworthy dependence between the usability aspects measured by the \mbox{SUS} and \mbox{AttrakDiff-2} questionnaires and combinations of the recorded features.
The most important features for the approximation of the questionnaire results are depicted in \textbf{Tab.}\,\ref{tab:most_frequently_used_features}.
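The selection of the displayed graph edges (see the caption of \textbf{Fig.}\,\ref{fig:feature_correlations_and_feature_importance}) can be sketched as follows; feature and score vectors are assumed to be aligned per sample.
\begin{verbatim}
from scipy.stats import pearsonr

def correlation_edges(features, scores, c_min=0.5, p_max=0.05):
    # features, scores: dicts mapping names to per-sample value arrays.
    edges = []
    for f_name, f in features.items():
        for s_name, s in scores.items():
            c, p = pearsonr(f, s)
            if abs(c) > c_min and p < p_max:
                edges.append((f_name, s_name, c))
    return edges
\end{verbatim}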
\begin{figure*}%
\resizebox{\textwidth}{!}{%
\includegraphics{images/correlation_all_bold.pdf}
}%
\caption{Features from user interaction logs (green) correlated with \mbox{SUS} (red) and \mbox{AttrakDiff-2} (blue) questionnaire results.
Bold feature names highlight top five most important features with regards to \mbox{GBRFs}.
Only relations with a Pearson correlation coefficient $\operatorname{abs}(c) > 0.5$ and $p < 0.05$ are displayed.
\additioncaption{Note that this visualization is an extension to \textbf{Fig.}\,\ref{fig:result_questionnaire_results_correlation}.}
}
\label{fig:feature_correlations_and_feature_importance}%
\end{figure*}%
\begin{table*}
\caption{The five most important features per \mbox{GBRF} estimator/label.
Orange background colors indicate the most frequently used features \additioncaption{in the trained decision trees of the \mbox{GBRFs}}.
Yellow backgrounds highlight semantically similar feature pairs.
The abbreviations represent the receiver operating characteristic area under the curve (ROC\_AUC), logistic loss (LOG), and relative absolute area/volume difference (RAVD).
}
\label{tab:most_frequently_used_features}
\resizebox{\textwidth}{!}{%
{\def\arraystretch{1.0}\tabcolsep=3pt
\begin{tabular}{r|lllll}
& 1.\ & 2.\ & 3.\ & 4.\ & 5.\ \\[1pt]\hline
ATT & \cellcolor{orange!10}Mean(ROC\_AUC/$\Sigma$wtime) & \cellcolor{orange!5}Mean(Dice)/Mean($\Sigma$wtime) & \cellcolor{orange!5}Mean(LOG)/Mean($\Sigma$ctime) & Med(OBJ\_TPR)/Med($\Sigma$ctime) & Med($\Sigma$ctime) \\
%
HQ-I & \cellcolor{orange!10}Mean(ROC\_AUC/$\Sigma$wtime) & \cellcolor{orange!5}PCA\_VAL\_17 & \cellcolor{orange!5}Mean(Dice)/Mean($\Sigma$wtime) & \cellcolor{yellow!5}Med(Med\_ctime)/Med($\Sigma$wtime) & \cellcolor{orange!5}Mean(LOG)/Mean($\Sigma$ctime) \\
%
HQ & Med(Jaccard/$\Sigma$ctime) & \cellcolor{orange!5}PCA\_VAL\_17 &
\cellcolor{orange!10}Mean(ROC\_AUC/$\Sigma$wtime) & Mean(OBJ\_TPR/$\Sigma$wtime) & \cellcolor{yellow!5}Mean(RAVD/$\Sigma$ctime) \\
%
HQ-S & \cellcolor{yellow!5}Mean(RAVD)/Mean($\Sigma$ctime) & Med(Med\_wtime/$\Sigma$wtime) & Med(LOG) & \cellcolor{orange!5}Std(Relative\_Seed\_Coord\_H) & Med(MSE) \\
%
PQ & PCA\_VAL\_16 & Mean($\Sigma$otime/$\Sigma$ctime) & Mean(Dice)/Mean($\Sigma$ctime) & PCA\_VAL\_11 & \cellcolor{yellow!5}Med(Med\_ctime/$\Sigma$wtime) \\
%
SUS & PCA\_VAL\_2 & PCA\_VAL\_18 & \cellcolor{orange!5}Std(Relative\_Seed\_Coord\_H) & Med(Med\_wtime) & PCA\_VAL\_20
\end{tabular}
}
}%
\end{table*}
\section{Discussion}\label{sec:discussion}
\subsection{Usability Aspects}
Although the underlying segmentation algorithm is the interactive \mbox{GrowCut} method for all three prototypes tested, the measured user experiences varied significantly.
In terms of user stimulus (\mbox{HQ-S}), a more innovative interaction system like the joint prototype is preferred to a traditional one.
Pragmatic quality aspects, evaluated by \mbox{SUS} as well as \mbox{AttrakDiff-2}'s \mbox{PQ}, clearly outline that the \mbox{semi-manual} approach has an advantage over the other two techniques.
This conclusion also manifests in the {Dice} coefficient values' fast convergence rate towards its maximum for this prototype.
The normalized median \mbox{\emph{$\Sigma$Wall\_time}} values
spent for the overall segmentation of each image are $100$\,\% (\mbox{semi-manual}), $550$\,\% (guided), and $380$\,\% (joint).
As a result, users prefer the simple, pragmatic interface as well as a substantial degree of freedom to control each iterative step of the segmentation.
The less cognitively challenging approach is preferred~\cite{ramkumar2016user}. %
The other methods provide more guidance for aspects which the user aims to control themselves.
In order to improve the productivity of an \mbox{ISS}, less guidance should be imposed in these cases,
while providing more guidance on aspects of the process not apparent to the users' focus of attention~\cite{heron1957perception}.
\subsection{Usability Aspects Approximation}
For \mbox{ATT} and \mbox{HQ-I}, the most discriminative features selected by \mbox{GBRFs} are the
receiver operating characteristic area under the curve (\mbox{ROC\_AUC}) of the final interactive segmentations over the elapsed real time which passed during segmentation (\mbox{\emph{$\Sigma$Wall\_time}}).
The Jaccard index~\cite{jaccard1912distribution} as well as the relative absolute area/volume difference (\mbox{RAVD}), each divided by the computation time, are most relevant for \mbox{HQ} and \mbox{HQ-S}, respectively.
The pragmatic quality's \mbox{(PQ)} dominant features are composed of final Dice scores and time measurements per segmentation.
The \mbox{SUS} results, quantifying the overall usability of a prototype, are mainly predicted based on %
the features with the highest level of abstraction used.
Among the top $10$\,\% ($22$) selected features, $45$\,\% of the top \mbox{SUS} features are \mbox{PCA} values, as indicated in \textbf{Tab.}\,\ref{tab:most_frequently_used_features} and \textbf{Fig.}\,\ref{fig:gbrf_feature_importance}(top).
In comparison: \mbox{PQ} $41$\,\%, \mbox{HQ} $36$\,\%, \mbox{HQ-I} $18$\,\%, \mbox{ATT} $14$\,\%, and \mbox{HQ-S} $9$\,\%.
\section{Conclusion}\label{sec:conclusion}
For sufficiently complex tasks like the accurate segmentation of lesions during \mbox{TACE}, fully automated systems are, by their lack of domain knowledge, inherently limited in the achievable quality of their segmentation results.
\mbox{ISS} may supersede fully automated systems in certain niches by cooperating with the human user in order to reach the common goal of an exact segmentation result in a short amount of time.
The evaluation of interactive approaches is more demanding and less automated than the evaluation of fully automated approaches, due to complex human behavior.
However, there are methods like extensive user studies to assess the quality of a given system.
It was shown that even a suitable approximation of a study's results regarding pragmatic as well as hedonic usability aspects is achievable from a sole analysis of the users' interaction recordings.
Those records are straightforward to acquire during normal (digital) prototype usage and can lead to a good first estimate of the system's usability aspects, without the need to significantly increase the temporal demands on each participant by a mandatory completion of questionnaires after each system usage.
This mapping of quantitative low-level features, which are exclusively based on measurable interactions with the system (like the final Dice score, computation times, or relative seed positions), may allow for a fully automated assessment of an interactive system's quality.
\section{\additioncaption{Outlook}}\label{sec:outlook}
For \change[label=c:c181,ref=c:c18]{the}{this} proposed automation, a rule-based user model (robot user) like~\cite{amrehn2017uinet,amrehn2019interactive} or a learning-based user model could interact with the prototype system instead of a human user.
This evaluation scheme may significantly reduce the amount of resources necessary to investigate each variation of a prototype's \mbox{UI} features and segmentation methodologies.
\addition[label=c:a181,ref=c:c18]{An estimate of a system's usability can therefore be acquired fully automatically with dependence only on the chosen user model.}
\addition[label=c:a182,ref=c:c18]{In addition, the suitable approximation of a usability study's result can be used as a descriptor, i.\,e.\ feature vector, for a user.
These features can be utilized for a clustering of users, which is a necessary step for the application of a personalized segmentation system.
Such an interactive segmentation system might benefit from prior knowledge about a user's preferences and input patterns in order to achieve accurate segmentations from less interactions.}
\section*{Disclaimer}
The concept and software presented in this paper are based on research and are not commercially available.
Due to regulatory reasons its future availability cannot be guaranteed.
{\color{addedmarkupcolor}
\section*{Conflicts of Interest}
The authors declare that there are no conflicts of interest regarding the publication of this paper.
}
\section*{Acknowledgment}
Thanks to Christian Kisker and Carina Lehle for their hard work with the data collection.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Blazars, the subclass of active galactic nuclei (AGN) showing jets almost aligned with the observer's line of sight \citep{BR78,UP95}, offer an invaluable laboratory for physics. Their spectral energy distribution (SED) shows a characteristic double-hump shape and is usually well modeled as due to synchrotron and inverse Compton radiation \citep{Gh98}. Relativistic Doppler boosting of the observed emission is likely involved in the large-amplitude variability observed essentially at all frequencies \citep[e.g.][]{Gh93}.
Although the interpretative scenario seems to be well established, many open problems are still present. The availability of continuously improving multi-wavelength (MW) data has revealed the need for more sophisticated approaches, with models assuming that the observed emission originates in multiple zones with typically independent physical parameters \citep[e.g.][]{Ale12}. The possibility of inhomogeneity in the emitting region, mimicked by multi-zone models, is definitely plausible. However, this immediately introduces a strong degeneracy in the already large parameter space, in turn requiring additional information to disentangle the various possible components in the observed emission.
The dominance of non-thermal emission processes (e.g. synchrotron radiation) in the blazar emission suggests that a wealth of information might come from polarimetric studies \citep[][to mention some of the most recent papers]{Lar13,Sor13,Sas14,Sor14,Zha14,Ito15}. In the optical, the detection of polarized emission was considered the smoking-gun signature for synchrotron emission from a non-thermal distribution of electrons \citep{AS80}. In general, the addition of polarimetric data to the modeling of blazar photometric/spectral information has widely shown its potential to derive information about, e.g., the magnetic field state \citep[e.g.][]{Lyu05,Mar14}, or to drive the modeling of different SED components \citep{Bar14}.
A relatively less explored regime is that of short timescale polarimetry \citep{Tom01a,Tom01b,And05,Sas08,Cha12,Ito13}. Short timescale photometry, on the contrary, is indeed a common practice in the field and has revealed to be a powerful diagnostic technique \citep{Mon06,Ran10,Dan13,Zha13,San14}, also in the very-high energy regime \citep[e.g.][]{Aha07,Alb07,Abd10,Fos13}.
In this paper we present and discuss well-sampled observations of two blazars: \object{BL\,Lacertae} (hereinafter \object{BL\,Lac}) and \object{PKS\,1424+240}. The observations were carried out with the optical polarimeter PAOLO\footnote{\url{http://www.tng.iac.es/instruments/lrs/paolo.html}} equipping the 3.6\,m INAF / Telescopio Nazionale Galileo (TNG) at the Canary Island of La Palma. The relatively large collective area of the TNG enabled us to explore time scales as short as several tens of seconds in both photometry and polarimetry.
The paper is organized as follows: observations are described in Sect.\,\ref{sec:data}. In Sect.\,\ref{sec:resdis} results of the analyses and a general discussion are presented, and conclusions are drawn in Sect.\,\ref{sec:conc}.
\section{Observations}
\label{sec:data}
PAOLO is an optical polarimeter integrated in the Nasmyth focus instrument DOLORES\footnote{\url{http://www.tng.iac.es/instruments/lrs/}} at the TNG. The observations presented here were part of the commissioning and scientific activities of the instrument.
\object{BL\,Lac} is the prototype of the class of BL\,Lac objects, and is located at a redshift $z=0.069$ \citep{MH97}. The host is a fairly bright and massive elliptical galaxy \citep{Sca00,Hyv07}. Due to its relative proximity it is one of the most widely studied objects of the class. \object{PKS\,1424+240} is also a BL\,Lac object and its redshift is still uncertain. \citet{Fur13} report a lower limit at $z \gtrsim 0.6$, which can make it one of the most luminous objects in its class. Its host galaxy was possibly detected by \citet{MR10} at typically a few percent of the nuclear emission, although \citet{Sca00} reported much fainter limits.
\object{BL\,Lac} was observed for about 8 hours during the night of 2012 September 1 -- 2. The observations consisted of short integrations of about 20-40\,s each with the $r$ filter, interrupted every $\sim 45$\,min to observe polarized and unpolarized polarimetric standard stars (\object{BD+28d4211}, \object{W2149+021}, \object{HD\,204827}) for a total of more than 300 data points. The data reduction is carried out following standard procedures and aperture photometry is performed using custom tools\footnote{\url{https://pypi.python.org/pypi/SRPAstro.FITS/}}. Photometric calibration was secured by comparison with isolated unsaturated stars in the field with magnitudes derived by the APASS catalogue\footnote{\url{http://www.aavso.org/apass}}. Photometric and polarimetric light curves are shown in Fig.\,\ref{fig:bllac}.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{bllac.pdf}}
\caption{PAOLO observations of BL\,Lac. In the top panel we show the magnitude of the source (AB mag) not corrected for Galactic reddening and for the host galaxy brightness. In the middle panel we show the polarization degree and in the bottom panel the position angle. }
\label{fig:bllac}
\end{figure}
\object{PKS\,1424+240} was observed for about 5 hours during the night of 2014 June 1 -- 2. The observations consisted of short integrations of 1-2\,min each with the $r$ filter interrupted at the beginning and at the end of the sequence to observe an unpolarized polarimetric standard star (GD\,319) for a total of more than 100 data points. Reduction and calibration were carried out as for \object{BL\,Lac}. Photometric and polarimetric light curves are shown in Fig.\,\ref{fig:pks}.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{pks.pdf}}
\caption{PAOLO observations of PKS\,1424+240. The top panel shows the magnitude of the source (AB mag) versus time not corrected for Galactic reddening and for the host galaxy brightness. The middle panel shows the polarization degree and the bottom panel the position angle versus time.}
\label{fig:pks}
\end{figure}
The removal of the few percent instrumental polarization typical of Nasmyth focus instruments \citep{Tin07,Wit11,Cov14} can be carried out rather efficiently and with PAOLO we can estimate \citep{Cov14} a residual r.m.s. of the order of $\sim 0.2$\% or better. If the observations cover a limited range in hour angles the correction is generally more accurate. This is a systematic uncertainty superposed onto our observations and it is already included in the reported errors for our data. The results reported here supersede the preliminary ones shown in \citet{Cov14}.
Where required, $\chi^2$ minimization is performed by using the downhill (Nelder-Mead) simplex algorithm as coded in the {\tt python}\footnote{\url{http://www.python.org}} {\tt scipy.optimize}\footnote{\url{http://www.scipy.org/SciPyPackages/Optimize}} library, v.\,0.14.0. The error search is carried out following \citet{Cas76}. Throughout this paper the reported uncertainties are at $1\sigma$.
Distances are computed assuming a $\Lambda$CDM-universe with $\Omega_\Lambda = 0.73, \Omega_{\rm m} = 0.27,$ and H$_0 = 71$\,km\,s$^{-1}$\,Mpc$^{-1}$ \citep{Kom11}. Magnitudes are in the AB system. Flux densities are computed following \citet{Fuk96}. The raw and reduced data discussed here are available from the authors upon request.
\section{Results and discussion}
\label{sec:resdis}
\object{BL\,Lac} and \object{PKS\,1424+240} are sources belonging to the same class and, during our observations, also showed a comparable brightness. This is already a remarkable finding, since the latter is more than one order of magnitude farther away than the former. \object{PKS\,1424+240} is therefore intrinsically about 100 times more luminous in the optical than \object{BL\,Lac} in the considered period. The host galaxy of \object{BL\,Lac} was measured at $R \sim 15.5$ \citep{Sca00}, roughly 30\% of the source luminosity during our observations. The source showed intense short-term variability, as expected for a blazar that has previously been found to be strongly variable at any time-scale \citep{Rai13}. On the contrary, \object{PKS\,1424+240} was remarkably stable during the observations, with slow (hours) variations of at most a few percent. This behavior is rather unexpected, although this source showed less intense variability (at least compared to \object{BL\,Lac}) during long-term monitoring campaigns \citep[e.g.][]{Arc14,Ale14} and in particular close to our observation epoch\footnote{\url{http://users.utu.fi/kani/1m/PG\_1424+240.html}}.
\subsection{Analysis of flux variability}
\label{sec:var}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{bllaclc.pdf}}
\caption{BL\,Lac light-curve after subtraction of the host galaxy contribution and correction for Galactic extinction. A few episodes of rapid variability are labelled (see Table\,\ref{tab:bllacrapvar}) and fits based on Eq.\,(\ref{exppeak}) are also shown (blue solid line).}
\label{fig:bllaclc}
\end{figure}
The rapid variability observed in \object{BL\,Lac}, although in most cases of rather low level in absolute terms ($\sim 5-10$\%), is characterized by a fair number of well sampled rise/decay phases \citep[see also][for a similar behavior in S5\,0716+714]{Mon06}. Following \citet{Dan13}, we modeled these episodes with a sum of exponentials after having converted the light-curves to flux densities. The rationale is based on the idea that the derived time-scales, $\tau$, can give constraints on the size of the emitting regions. In addition, the time-scales of the decay phases, if the emission is due to synchrotron radiation, can allow us to derive inferences about the cooling times of the accelerated electrons and, in turn, the magnetic fields.
The adopted empirical functional form \citep{Dan13} is:
\begin{equation}
\label{exppeak}
f_{\rm i}(t)=\frac{2F_{\rm i}}{\exp\left( \frac{t_{\rm i}-t}{\tau_{\rm {r, i}}}\right) +\exp\left( \frac{t-t_{\rm i}}{\tau_{\rm {d, i}}}\right) },
\end{equation}
where $ F_{\rm i} $ is the flare normalization, $ \tau_{\rm {r, i}} $ and $ \tau_{\rm {d, i}} $ are, respectively, the flux rise and decay time-scales, and $t_{\rm i}$ is the time of the pulse maximum. The inverse of Eq.\,\ref{exppeak}, $1/f_{\rm i}(t)$, is used when the light-curve shows a decay followed by a rise, and $t_{\rm i}$ corresponds in this case to the pulse minimum.
The dense sampling of our light-curve allowed us to derive four events with well constrained time-scales (Table\,\ref{tab:bllacrapvar} and Fig.\,\ref{fig:bllaclc}). In all cases the time-scales for rise or decay phases are approximately in the range 2-15\,min, considering the uncertainties.
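For concreteness, the fitting procedure can be summarized by the following {\tt python} sketch, where the arrays {\tt t}, {\tt f}, and {\tt df} are placeholders standing in for an actual light-curve segment rather than our data; the $\chi^2$ of Eq.\,(\ref{exppeak}) is minimized with the same Nelder-Mead simplex used throughout this paper.
\begin{verbatim}
# Minimal sketch of the pulse fitting: chi^2 minimization of
# Eq. (exppeak) with the Nelder-Mead simplex (placeholder data).
import numpy as np
from scipy.optimize import minimize

def flare(t, F, t0, tau_r, tau_d):
    # sum-of-exponentials pulse profile, Eq. (exppeak)
    return 2.0 * F / (np.exp((t0 - t) / tau_r)
                      + np.exp((t - t0) / tau_d))

def chi2(p, t, f, df):
    return np.sum(((f - flare(t, *p)) / df) ** 2)

t = np.linspace(-3.0, -2.4, 40)            # epoch (hours)
f = flare(t, 0.48, -2.7, 0.05, 0.06) \
    + 0.01 * np.random.randn(t.size)       # flux density (mJy)
df = np.full_like(t, 0.01)                 # 1-sigma errors (mJy)

res = minimize(chi2, x0=[0.5, -2.7, 0.05, 0.05], args=(t, f, df),
               method='Nelder-Mead')
# 1-sigma errors follow from the Delta(chi^2) = 2.3 contour around
# the minimum for two parameters of interest (Cash 1976).
print(res.x, res.fun)
\end{verbatim}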
Variability time-scales as short as a few minutes have already been singled out for BL\,Lac objects, mainly at high energies \citep[e.g.][]{Aha07,Alb07,Arl13} or X-rays \citep[e.g.][]{WW95}, where the flux variation is a large fraction of the total. In the optical the percentage amplitudes of flux variations are typically lower, possibly due to the superposition of several emission episodes with largely different time-scales \citep[e.g.][]{Cha11,Dan13,San14} originating from different emitting regions. Therefore, strictly speaking, constraints derived by the light-curve analysis hold only for a portion of the emitting region of the order of the ratio of the flux variability to the total flux.
\begin{table}
\caption{Parameters of rapid flares during our BL\,Lac monitoring. The epochs are relative to 00:00 UT on 2012 September 2. $F_i$ is the amplitude of the variability episode. $1\sigma$ errors are computed with two parameters of interest \citep{Cas76}.}
\label{tab:bllacrapvar}
\centering
\begin{tabular}{lrccl}
\hline \hline
Event & Epoch & $F_i$ & $\tau$ & notes \\
& (hours) & (mJy) & (min) & \\
\hline
A & $-2.7$ & $0.48_{-0.05}^{+0.11}$ & $3.3^{+1.2}_{-0.6}$ & decay \\
B & $-1.3$ & $0.27_{-0.15}^{+2.80}$ & $2.5^{+17.1}_{-1.3}$ & rise \\
C & $0.7$ & $0.13_{-0.04}^{+1.87}$ & $3.6^{+6.4}_{-1.9}$ & rise \\
D & $2.3$ & $0.06^{+1.94}_{-0.02}$ & $2.4^{+9.1}_{-1.5}$ & decay \\
\hline
\end{tabular}
\end{table}
The size of the emitting region can be constrained as:
\begin{equation}
R \lesssim \frac{\delta c \tau}{1+z},
\end{equation}
where $z$ is the source redshift, $c$ the speed of light, and $\delta$ is the relativistic Doppler factor of the emitting region. Assuming a reference time scale of $\sim 5$\,min we get $R \lesssim 3\times10^{-5} \times \frac{\delta}{10}\,{\rm pc} \sim 10^{14} \times \frac{\delta}{10}$\,cm. The rapid variability identified here amounts to only a few percent of the total emitted flux from \object{BL\,Lac}.
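For transparency, the numerical estimate above can be reproduced directly, with the reference values assumed in the text:
\begin{verbatim}
# Causality bound on the size of the variable emitting region,
# R <~ delta * c * tau / (1+z), with the reference values of the text.
tau = 5.0 * 60.0      # s, reference variability time-scale (~5 min)
c = 2.998e10          # cm/s
z = 0.069             # BL Lac
delta = 10.0          # assumed Doppler factor
pc = 3.086e18         # cm
R = delta * c * tau / (1.0 + z)
print(R, R / pc)      # ~8e13 cm, i.e. ~3e-5 pc
\end{verbatim}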
Under the hypothesis that the (variable) emission is due to synchrotron and Compton processes, the cooling time-scale can limit the time-scale of a decay phase as:
\begin{equation}
\label{eq:cool}
\tau_{\rm d} \gtrsim t_{\rm cool} = \frac{3m_{e}c(1+z)}{4\sigma_{\rm T} \delta u^{'}_{0}\gamma_{e}}\,{\rm s},
\end{equation}
where $m_{\rm e}$ is the electron mass, $\sigma_{\rm T}$ the Thomson cross-section, $u^{'}_{0}=u^{'}_{B}+u^{'}_{\rm rad}=(1+q)B^{'2}/8\pi$ the co-moving energy density of the magnetic field (determining the synchrotron cooling rate) plus the radiation field (determining the inverse-Compton cooling rate), $q= u^{'}_{\rm rad}/u^{'}_{B}$ the Compton dominance parameter, typically of order of unity for BL\,Lacs \citep{Tav10}, and $\gamma_{e}$ is the characteristic random Lorentz factor of electrons producing the emission.
The peak frequency of the synchrotron emission is at
\begin{equation}
\label{eq:syn}
\nu_{\rm syn}=\frac{0.274 \delta e \gamma_{e}^{2}B^{'}}{(1+z)m_{e}c}\,{\rm Hz},
\end{equation}
where $e$ is the electron charge.
Finally, substituting $\gamma_e$ from Eq.\,(\ref{eq:syn}) into Eq.\,(\ref{eq:cool}), the co-moving magnetic field can be constrained as:
\begin{eqnarray}
B^{'} \gtrsim [\pi m_e c (1+z) e / \sigma_{\rm T}^{2}]^{1/3} \nu_{\rm syn}^{-1/3} t_{\rm cool}^{-2/3} \delta^{-1/3} \sim \\ \nonumber
\sim 4 \times 10^7 (1+z)^{1/3} \nu_{\rm syn}^{-1/3} t_{\rm cool}^{-2/3} \delta^{-1/3}\,{\rm G}.
\end{eqnarray}
\object{BL\,Lac} is an intensively monitored object. \citet{Rai13} reported on a comprehensive study of its long-term behavior, including the epoch of our observations. From that data set the position of the synchrotron peak frequency can be inferred to be close to $\nu_{\rm syn} \sim 5 \times 10^{14}$\,Hz, and therefore, again assuming a reference time scale for decay of $\sim 5$\,min, and considering that the cooling time should be shorter than this, we get $B' \gtrsim 6 \times (\frac{\delta}{10})^{-1/3}$\,G.
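Again, a minimal numerical evaluation of this bound, with the values adopted in the text, reads:
\begin{verbatim}
# Lower bound on the co-moving magnetic field from the decay
# time-scale, using the approximate relation derived above.
nu_syn = 5e14        # Hz, synchrotron peak (Raiteri et al. 2013)
t_cool = 5.0 * 60.0  # s, bounded by the ~5 min decay time-scale
z, delta = 0.069, 10.0
B = (4e7 * (1 + z)**(1/3) * nu_syn**(-1/3)
     * t_cool**(-2/3) * delta**(-1/3))
print(B)             # ~5-6 G, consistent with the text
\end{verbatim}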
The SED of \object{BL\,Lac} and that of a number of sources of the same class were studied in \citet{Tav10} based on observations carried out in 2008. A single zone model allowed the authors to estimate an average magnetic field $B \sim 1.5$\,G, a Doppler factor $\delta \sim 15$, and radius of the emitting region $R \sim 7 \times 10^{-4}$\,pc. Compared to the results from our analysis, based however on observations carried out in 2012, the emitting region of \object{BL\,Lac} turns out to be, as expected, a small fraction of that responsible for the whole emission and the magnetic field is locally higher but still close to the one zone model inference.
A similar analysis for \object{PKS\,1424+240} is not possible due to the very low level of variability shown during our observations. A fit with a constant is indeed perfectly acceptable although during the first $\sim 30$\,min of observations the source was slightly brighter by $\sim 0.01-0.02$\,mag.
The length of our monitoring does not allow us to derive general conclusions, although the difference in the observed flux variability between the two objects is remarkable. \object{PKS\,1424+240} lies at a higher redshift than \object{BL\,Lac} ($z \sim 0.6$ vs. $z = 0.069$). Time dilation will stretch any intrinsic variability time-scale of the former source by a factor of about 1.5 with respect to the latter. In addition, based on the SEDs shown in \citet{Tav10} and \citet{Ale14}, the optical band is at a higher frequency than the synchrotron peak for \object{BL\,Lac}, and at a lower frequency (or close to it) for \object{PKS\,1424+240}. As widely discussed in \citet{Kir98}, under the assumption that magnetic fields in the emitting region are constant, flux and spectral variability depend on the observed frequency. If electrons with a given energy, corresponding to photons at a given frequency, cool more slowly than they are accelerated, variability is smoothed out, as might be the case for \object{PKS\,1424+240}. Variability is expected to be particularly important close to frequencies emitted by the highest-energy electrons, where both radiative cooling and acceleration have similar timescales. Different short-term variability behaviors for sources with the synchrotron peak at lower or higher frequencies than the observed band were indeed already singled out \citep{HW96,HW98,Rom02,Hov14}.
In the literature it is also customary to look for the total flux doubling/halving times \citep[e.g.][]{Sba11,Imp11,Fos13}. The small amplitude of the variability we observed does not allow us to derive strong constraints, since this would always require large extrapolation. However, the shortest time-scales we could detect are shorter than about four hours for \object{BL\,Lac}, in line with the values found in other blazars and consistent with the idea that the whole emitting region is much larger than the regions responsible for the rapid variability.
A variability analysis can be carried out for the polarimetric light curves too. The results show variability timescales at the same level as the total flux curves, although with larger uncertainties. Rapid time variability on minute to hours time scales for the polarized flux was singled out in other blazars, as for instance AO\,0235+164 \citep{Hag08}, S5\,0716+714 \citep{Sas08}, CGRaBS\,J0211+1051 \citep{Cha12} or CTA\,102 \citep{Ito13}. Intranight variability for a set of radio-quiet and radio-loud AGN was studied by \citet{Vil09}.
\subsection{Polarimetry}
Blazar emission is known to be characterized by some degree of polarization that is often variable, both in intensity and direction, on various time-scales \citep[see][for a recent review about optical observation of BL\,Lacs]{Fal14}. Occasionally, some degree of correlation or anticorrelation between the total and polarized flux is observed \citep[e.g.][]{Hag08, Rai12,Sor13,Gau14}, while often no clear relation is singled out. The complexity of the observed behaviors likely implies that, even when a single zone modeling can satisfactorily describe the broad-band SEDs, more emission components are actually active. It was proposed \citep[e.g.][]{Bar10,Sak13} that a globally weakly polarized fraction of the optical flux is generated in a relatively stable jet component, while most of the shorter term variability, both in total and polarized flux, originates from the development and propagation of shocks in the jet.
\object{BL\,Lac} and \object{PKS\,1424+240} show rather different behaviors in the linear polarimetry as well. The degree of polarization of \object{BL\,Lac} starts at about 11\% and decreases slowly for a few hours to about 9\%; then, for the remaining three hours of our monitoring, it decreases more quickly to about 6\%. The position angle increases rather quickly after the first hour, from about $14^\circ$ to $23^\circ$; then it remains stable for a couple of hours and then increases again to about $30^\circ$. Superposed on these general trends there is considerable short-term variability above the observational errors. \object{PKS\,1424+240}, on the contrary, shows a fairly constant polarization degree at about 4\% and a position angle close to $127^\circ$, with some variability only at the beginning of our monitoring. These behaviors are in general agreement with the results reported by \citet{And05} studying intra-night polarization variability for a set of BL\,Lac objects.
The \object{PKS\,1424+240} jet was likely in a low activity state, although it was not in its historical minimum (see Sect.\,\ref{sec:resdis}). This is also confirmed by the publicly available information and data at other wavelengths, such as high-energy gamma rays provided by the {\it Fermi}/LAT Collaboration\footnote{{\tt http://fermisky.blogspot.it/2014\_06\_01\_archive.html}}, and soft X-rays available from the {\it Swift}/XRT monitoring program\footnote{{\tt http://www.swift.psu.edu/monitoring/source.php?source=PKS1424+240}}. \citet{Ale14} reported a higher polarization degree, $7-9$\%, in 2011, when the source was brighter than during our monitoring. Lower polarization degrees, $4.4-4.9$\%, were reported by \citet{Mea90} in 1988, when the source was instead fainter. The position angle was about $113-119^\circ$, similar to that observed during our monitoring. The latter is also consistent with the direction of the jet as measured by VLBA radio observations at 2\,cm \citep{Lis13}. The kinematics of the most robust radio component showed a position angle of $141^{\circ}$ with a velocity vector direction of $108^{\circ}$, i.e. with a very small offset \citep[$33^{\circ}\pm20^{\circ}$,][]{Lis13}. VLBA observations in the framework of the MOJAVE Project\footnote{{\tt http://www.physics.purdue.edu/astro/MOJAVE/sourcepages/1424+240.shtml}} \citep{Lis09} showed a decreasing trend in the polarization degree from 5\% in 2011 to 2.8\% in 2013, with a roughly stable position angle ($126^{\circ}-154^{\circ}$), consistent with our results in the optical. In general, looking at historical data, the polarization degree of \object{PKS\,1424+240} seems to be almost constant ($\sim 4$\%) below a given optical flux (likely $\lesssim 9.0$\,mJy, based on the studies quoted above). The optical position angle seems to be quite stable and aligned with the kinematic direction of the radio jet and with the radio polarization position angle. This behavior might suggest some kind of ``magnetic switch'' (i.e. a threshold effect) in the jet activity \citep[e.g.][]{PuCo90, Mei97, Mei99}.
Neglecting the short-term variability of \object{BL\,Lac}, the total rotation of the position angle, taking the minimum at $\sim -2$\,hours (see Fig.\,\ref{fig:bllac}) and the value at the end of our monitoring, amounts to about $15^\circ$, i.e. $2 - 2.5^\circ$/hour ($45-60^\circ$/day). Rapid position angle rotations of this magnitude are not unusual for blazars in general, and for \object{BL\,Lac} specifically \citep[e.g.][]{All81,Sil93,Mar08}. The observation of relatively stable and long-lasting rotational trends (days to months) suggested that the polarized emission could be generated in a jet with helical magnetic fields or crossed by transverse shock waves, or in a rather stable jet with an additional linearly rotating component \citep{Rai13}.
\begin{figure*}
\centering
\resizebox{\hsize}{!}{\includegraphics{bllacpol.pdf}} \\
\resizebox{\hsize}{!}{\includegraphics{bllacpolqu.pdf}}
\caption{({\it upper left}) BL\,Lac (host galaxy subtracted) flux density vs. linear polarization \citep[host galaxy corrected, assuming unpolarized emission, e.g.][]{Cov03}. At least three different regimes are singled out: at early time the flux changes rapidly with a slowly varying and rather high polarization (brown, circles), then an intermediate phase with chaotic flux and polarization variations (green, stars), and finally a sharp decrease in polarization with almost constant flux (blue, squares). Times in the legend are in hours (see Fig.\,\ref{fig:bllac}). ({\it upper right}) BL\,Lac position angle vs. linear polarization. Same symbols as in the upper left panel. The position angle tends to increase when the linear polarization decreases. The trend becomes very clear at the end of our observation. ({\it bottom}) Flux density vs. Stokes parameters $Q$ and $U$. Periods with approximately linear dependence between polarization and total flux are also singled out. Same symbols as in the upper left panel.}
\label{fig:bllacpol}
\end{figure*}
Our well-sampled monitoring observations allow us to disentangle different behaviors even during the relatively short-duration coverage of \object{BL\,Lac} (Fig.\,\ref{fig:bllacpol}, upper left plot). At the beginning of our monitoring period, we see a rapid flux decrease with polarization slowly decreasing. After that, the source enters a phase characterized by rapid small-scale variability both in the total and polarized flux. Finally, the flux begins to increase regularly by a small amount and the polarization decreases abruptly down to the lowest observed level. The relation between polarization and position angle (Fig.\,\ref{fig:bllacpol}, upper right plot) shows the already mentioned rotation of the position angle with the decrease of the linear polarization. However, again superposed on this general trend there is considerable variability \citep[see also][for a similar analysis]{Hag08}.
In \citet{Rai13} the long-term (years) flux light curve was modeled assuming the flux variation to be (mainly) due to Doppler factor variations with a nearly constant Lorentz factor, i.e. due to small line of sight angle variations. We applied the same technique to our rapid monitoring. Knowing the viewing angle required to model the flux variations, it is then possible to predict the expected polarization in different scenarios. In the case of helical magnetic fields, following \citet{Lyu05}, we can derive a polarized flux fraction of $9-10$\%, roughly in agreement with our observations. However, a detailed agreement, explaining the short-time variability for both the total and polarized flux, is not possible. Alternatively, we may consider transverse shock wave models \citep{Hug85}, with which again rough agreement for the polarization degree is reached, but no detailed agreement is possible.
A geometric model for the flux variation is therefore unable to simultaneously interpret the total flux and polarization behavior at the time resolution discussed here.
As already introduced in Sect.\,\ref{sec:var}, a possible interpretation of both total and polarized flux curves can be derived if it is assumed that the observed emission is due to a constant (within the time-scale of our monitoring) component with some degree of polarization and one \citep[or many,][]{Bri85} rapidly varying emission component(s) with different polarization degree and position angle \citep[see also][]{Sas08,Sak13}. The idea is rather simple; using the first three Stokes parameters the observed polarization can be described as:
\begin{equation}
S = \left \{
\begin{aligned}
I_{\rm obs} & = & I_{\rm const} & + & I_{\rm var} \\
Q_{\rm obs} I_{\rm obs} & = & Q_{\rm const} I_{\rm const} & + & Q_{\rm var} I_{\rm var} \\
U_{\rm obs} I_{\rm obs} & = & U_{\rm const} I_{\rm const} & + & U_{\rm var} I_{\rm var}
\end{aligned}
\right .
\label{eq:stokes}
\end{equation}
where the suffixes ``obs'', ``const'', and ``var'' refer to the observed (total), constant, and variable quantities. The redundancy in Eq.\,(\ref{eq:stokes}) can be reduced following various possible assumptions, often depending on the availability of multi-wavelength datasets or long-term monitoring \citep[see, e.g.][]{Hol84,Qui93,Bri96,Bar10}. \citet{Hag02} assumed, based on their long-term polarimetric monitoring, that the stable component in \object{BL\,Lac} could be characterized by $P \sim 9.2$\% and $\theta \sim 24^\circ$.
As discussed in \citet{Hag99} and \citet{Hag08}, if a linear relation between polarized and total flux is singled out, one can estimate the polarization degree and position angle of the variable component. A linear relation between the Stokes parameters and the total flux implies that polarization degree and position angle are essentially constant \citep{Hag99} and their values can be derived as the slopes of the linear relations. At the beginning of our monitoring we can identify a sufficiently long and well defined linear relation between the Stokes parameters $Q$ and $U$ and the total flux (see Fig.\,\ref{fig:bllacpol}, bottom panel). As already mentioned, we find considerable variability superposed on the linear trend. Neglecting the shorter term variability, we can roughly estimate $P_{\rm var} \sim 22$\% and $\theta_{\rm var} \sim 34^\circ$. The constant component turns out to be remarkably consistent with the one identified by \citet{Hag02} at a flux level $\sim 9.5$\,mJy.
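To make the procedure explicit, the decomposition following \citet{Hag99} can be sketched as below; the arrays are illustrative placeholders, not our actual data, and a strictly linear relation is assumed.
\begin{verbatim}
# Sketch of the two-component decomposition of Eq. (eq:stokes):
# if Q_obs*I_obs and U_obs*I_obs are linear in I_obs, the slopes
# give the fractional Stokes parameters of the variable component
# (method described in the text). Placeholder data only.
import numpy as np

I = np.array([11.8, 12.0, 12.2, 12.4])          # mJy
Q = np.array([0.0656, 0.0659, 0.0662, 0.0665])  # fractional q
U = np.array([0.0948, 0.0967, 0.0984, 0.1002])  # fractional u

q_var = np.polyfit(I, Q * I, 1)[0]   # slope = Q_var
u_var = np.polyfit(I, U * I, 1)[0]   # slope = U_var
P_var = np.hypot(q_var, u_var)
theta_var = 0.5 * np.degrees(np.arctan2(u_var, q_var))
print(P_var, theta_var)              # ~0.22 and ~34 deg, cf. text
\end{verbatim}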
At any rate, the increasing complexity singled out by long- and short-term monitoring requires new theoretical frameworks for a proper interpretation. \citet{Mar14}, for instance, proposed a scenario in which a turbulent plasma is flowing at relativistic speeds crossing a standing conical shock. In this model, total and polarized flux variations are due to a continuous noise process rather than to specific events such as explosive energy injection at the basis of the jet. The superposition of ordered and turbulent magnetic field components can easily explain random fluctuations superposed on a more stable pattern, without requiring a direct correlation between total and polarized flux. As discussed in \citet{Mar14}, simulations based on this scenario can also give continuous and relatively smooth position angle changes as observed during our monitoring of \object{BL\,Lac}.
\begin{table}
\caption{Parameters of interest for the model based on the scenario extensively discussed in \citet{Zha14} and \citet{Zha15}. Angles are in the co-moving frame.}
\label{tab:bllacsim}
\centering
\begin{tabular}{lc}
\hline \hline
Parameter & Value \\
\hline
Bulk Lorentz factor & 15 \\
Length of the disturbance ($L$) & $3.8\times10^{14}$\,cm \\
Radius of the disturbance ($A$) & $4.0\times10^{15}$\,cm \\
Orientation of the line of sight & $90^\circ$ \\
Helical magnetic field strength & 2.5\,G \\
Helical pitch angle & $47^\circ$ \\
Electron density & $4.5\times10^{2}$\,cm$^{-3}$ \\
\hline
\end{tabular}
\end{table}
\citet{Zha14} presented a detailed analysis of a shock-in-jet model assuming a helical magnetic field throughout the jet. They considered several different mechanisms by which a relativistic shock propagating through a jet may produce a synchrotron (and high-energy) flare. They find that, together with a correlation between synchrotron and synchrotron self-Compton flaring, substantial variability in the polarization degree and position angle, including large position angle rotations, is possible. This scenario assumes a cylindrical geometry for the emitting region moving along the jet, which is pervaded by a helical magnetic field and a turbulent component. On its trajectory, it encounters a flat stationary disturbance, which could be a shock. This shock region does not occupy the entire layer of the emitting region, but only a part of it. In the comoving frame of the emitting region, this shock will travel through the emitting region, and temporarily enhance the particle acceleration, resulting in a small flare. After the shock moves out, the particle distribution will revert to its initial condition due to cooling and escape.
The 3DPol (3D Multi-Zone Synchrotron Polarization) code presented in \citet{Zha14} and the MCFP (Monte Carlo/Fokker-Planck) code presented in \citet{Che12} realize the above model. As elaborated in \citet{Zha15}, since the shock is relatively weak and localized, the enhanced acceleration will lead to a small time-symmetric perturbation in the polarization signatures. Some of the key parameters for the model are reported in Table\,\ref{tab:bllacsim}, and the fits to the polarization degree and position angle light curves are shown in Fig.\,\ref{fig:bllacmod}. Near the end of the observation, the polarization degree experienced a sudden drop, while the position angle continued to evolve in a time-symmetric pattern. Therefore an increase in the turbulent contribution is necessary although, due to the lack of a multi-wavelength SED, it cannot be well constrained. The total flux, given the very low variability amplitude observed during our observations, was set at a constant level of about 12\,mJy (see Fig.\,\ref{fig:bllaclc}). Nevertheless, rapid polarimetry clearly reveals its diagnostic power, showing the need for inhomogeneity and turbulence in the emitting region.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{bllacmod.pdf}}
\caption{Fit of the \citet{Zha14,Zha15} model scenario (green solid line) to the \object{BL\,Lac} linear polarization \citep[host galaxy corrected, assuming unpolarized emission, e.g.][]{Cov03} and position angle data. The general behavior is fairly well described by the model, although for the polarization the addition of weakly constrained turbulence in the magnetic field is required.}
\label{fig:bllacmod}
\end{figure}
\section{Conclusions}
\label{sec:conc}
In this work we have presented results from rapid time-resolved observations in the $r$ band for two blazars: \object{BL\,Lac} and \object{PKS\,1424+240}. The observations were carried out at the 3.6\,m TNG and allowed us to carry out linear polarimetry and photometry almost continuously for several hours for both sources. In practice, long-term monitoring observations of relatively bright blazars can only be achieved with dedicated small-size telescopes; however, the richness of information obtainable with a rather large facility such as the TNG allows us to study regimes that were only partially explored in the past.
\object{BL\,Lac} and \object{PKS\,1424+240} show remarkably different variability levels, with the former characterized by intense variability at a few percent level, while the latter was almost constant for the whole duration of our observations. The shortest well constrained variability time scales for \object{BL\,Lac} are as short as a few minutes, allowing us to derive constraints on the physical size and magnetic fields of the source regions responsible for the variability.
The variability time-scales for the polarization of \object{BL\,Lac} are compatible with those derived for the total flux, while \object{PKS\,1424+240} shows an almost constant behavior also in the polarization. The position angle of \object{BL\,Lacertae} rotates quasi-monotonically during our observations, and an analysis of the total vs. polarized flux shows that different regimes are present even at the shortest time-scales.
Different recipes to interpret the polarimetric observations are considered. In general, with the simplest geometrical models, only the average level of polarization can be correctly predicted. More complex scenarios involving some turbulence in the magnetic fields are required, and promising results are derived by a numerical analysis carried out following the framework described in \citet{Zha14,Zha15}, which requires some symmetry in the emitting region, as shown by the time-symmetric position angle profile. The time-asymmetric polarization profile, and its decrease during the second part of the event, which is accompanied by a few small flares, can be described by adding some turbulent magnetic field structure to the model.
\begin{acknowledgements}
This work has been supported by ASI grant I/004/11/0. HZ is supported by the LANL/LDRD program and by DoE/Office of Fusion Energy Science through CMSO. Simulations were conducted on LANL's Institutional Computing machines. The work of MB is supported by the Department of Science and Technology and the National Research Foundation of South Africa through the South African Research Chair Initiative (SARChI)\footnote{Any opinion, finding, and conclusion or recommendation expressed in this material is that of the authors and the NRF does not accept any liability in this regard.}. We also thank the anonymous referee for her/his competent comments that greatly enhanced the quality of the paper.
\end{acknowledgements}
\label{INTRO}
The asymptotic conformal invariance of QCD has fostered a number of
theoretical attempts to push the original AdS/CFT duality \cite{MA98}
beyond its conjectured domain of validity. The AdS/CFT correspondence
postulates more generally a relation, through a well-defined set of
prescriptions \cite{WI98, GKP98}, between weakly coupled string
theories living in the bulk of an anti-de Sitter (AdS) space and
strongly coupled conformally invariant field theories defined on its
boundary. The efforts within the so-called AdS/QCD approach rely on
the assumption that the AdS/CFT dictionary can still describe the
strong coupling regime of a confining gauge theory like QCD, despite
the breaking of conformal invariance, by computing correlation
functions within a semi-classical field theory formulated in a
five-dimensional AdS spacetime.
The various AdS/QCD models usually fall into two main categories. In
the so-called ``top-down'', or ``gauge/string'', approach one sticks
as much as possible to the original formulation of the AdS/CFT
correspondence, and starts from a superstring theory living on
$AdS_5\times M_5$ (where $M_5$ is some five-dimensional compact
manifold) and tries to derive an effective theory which describes the
low energy phenomena in QCD. The second, ``bottom-up'', or
``gauge/gravity'', approach puts aside the string-theoretic motivation
of the correspondence and starts from a phenomenological Lagrangian in
an appropriate five dimensional metric background which
incorporates as much as possible the known properties of QCD. Contrary
to some claims, the former approach is no more rigourous, in its
present stage, than the latter.
The simplest way to break conformal invariance is by introducing a
hard cut-off in AdS space which can be interpreted as an infrared mass
scale of the gauge theory. The first ``hard-wall'' model \cite{PS01}
was able to generate a power-law scaling of glueball elastic
scattering amplitudes at fixed angles. It is also possible to break
conformal invariance softly through a background dilaton field which
can be chosen \cite{KKSS} so as to reproduce the linear Regge behavior
of the meson trajectories. Inside the bottom-up approach, it has
proved possible to reproduce qualitatively within both hard-wall and
soft-wall models the spectra of low-lying hadron states as well as
various decay constants and coupling constants
\cite{BOS02,TER05,HAM06,COL08}.
A frequent criticism claims that AdS/QCD models are nothing but some
kind of ``bag models'' since quite a few other phenomenological models
of QCD are able to reproduce the static parameters of hadronic states
at the same level of accuracy. In fact such criticisms are rather
superficial, because they ignore the fact that AdS/QCD can incorporate in a
coherent framework several of these models (vector meson dominance,
1/N expansion, sum rules, $\cdots$, see \cite{EH09} for an up-to-date
review of the pros and cons of the AdS/QCD approach).
Moreover the AdS/QCD models can also be used to study in a completely
relativistic Lorentz invariant manner dynamical non-perturbative
aspects like hadronic form factors or structure functions. In
particular one can expect that conformal symmetry is most relevant for
describing the electromagnetic interactions of hadrons, since the
photon is a massless particle \footnote{e.g. there is a dynamical
$O(4,2)$ symmetry which is sufficient to determine completely the
spectrum and eigenfunctions of the relativistic hydrogen atom.}. A
very important process that reveals the internal structure of hadrons
is the deep inelastic scattering (DIS) of a highly energetic lepton
off a hadron. This process was first investigated in a hard-wall
model in \cite{PS03}. Since then, there
have been a number of related studies within the different flavors of
AdS/QCD models \cite{HAT07,BAY08-1,BAY08-2,COR08,ALB08,HAT08,BAY08-3,
GAO09,HAT09,LEV09,COR10,KOV09}. However the semi-classical density
of states of the hard-wall or soft-wall models with canonical
dimensions does not agree \cite{PRSW} with the power-law behavior of
the structure functions observed at high energy. It is presently far
from clear how a correct partonic description of hadrons can emerge
from the stringy corrections expected in the kinematical regime of
DIS.
On the other hand the AdS/QCD models provide a low-energy description
of the electromagnetic form factors which does agree with the
dimensional counting rules of hadrons up to a few GeV
\cite{GR07-1,GR07-2,BRO07,KL07,AC08,WARD08}. Getting additional
information about the electromagnetic structure of hadrons requires
the study of four-point functions. There is one distinguished
electromagnetic process which supplies all experimentally accessible
information, namely Compton scattering. There has been a lot of
theoretical work about Compton scattering. Strong interactions can
significantly modify the amplitude but there is a low-energy theorem
\cite{LOW,GG54} which guarantees that the Born contribution dominates
near threshold. The two leading orders of an expansion of the real
Compton scattering amplitude off a nucleon in terms of the frequency
of the photon are entirely given in terms of the charge, mass and
anomalous magnetic moment of the nucleon. For spinless targets like
the pion, there is no linear term in the energy of photons. This
theorem is based only on Lorentz invariance, gauge invariance and
crossing symmetry. Quadratic corrections to the Born terms (electric
and magnetic polarizabilities as well as generalized polarizabilities
with virtual photons) are not specified by symmetry arguments alone
and characterize non point-like elementary particles. Measuring these
quantities allows one to test the different models of strong interactions
(see \cite{PP95} for a review of the theoretical predictions on
charged and neutral pion polarizabilities).
The purpose of this work is to study the Compton amplitude off a
spinless target in the AdS/QCD formalism. We shall focus on the
kinematical region where the photons are soft. The semi-classical
approximation in the AdS/QCD duality should apply best in this region
and there exist experimental measurements of the pion polarizabilities
to compare with \cite{MAMI,COMPASS}. But we shall also study the
kinematical region with one deeply virtual photon (DVCS) since it is
straightforward to extract the corresponding structure functions in
the Bjorken limit.
The plan of the paper is as follows. We shall begin with a brief summary
of the standard lore about the general structure of the virtual
Compton amplitude off an unpolarized target. Next we shall describe
the generic soft-wall model we work with. In section \ref{4PT} we
shall explain how to calculate four-point functions in AdS/QCD and in
section \ref{SCALAR} we shall compute the Compton amplitude generated
by the minimal coupling of a bulk scalar field with a bulk U(1) gauge
field. We also compare our results with the recent hard-wall
calculation of \cite{GX09}. In the subsequent section we shall make
explicit the structure of this Compton amplitude in the deep inelastic
region, and in section \ref{DVCS} we shall clarify its Lorentz and
gauge-invariant structure in the DVCS kinematical region. Then we
shall extract the corresponding structure functions in the Bjorken
limit. Section \ref{POL} is devoted to the calculation of the
polarizabilities of a spinless target. We shall comment on the
implications of our results in the conclusion.
\section{Virtual Compton amplitude off an unpolarized target}
\label{COMPTON}
The amplitude of the virtual Compton scattering, $\gamma^{\star}(q_1)
+ A(p_1) \rightarrow \gamma^{\star}(q_2) + A(p_2)$, where $A$ is a
spinless, or spin-averaged, hadron is defined through the off-forward
matrix element of the time-ordered product of two electromagnetic
currents,
\begin{gather}
T_{\mu\nu} = i\int d^4x\,e^{iq\cdot x}\,\langle p_2
\vert T\{J_{\mu}(x/2)J_{\nu}(-x/2)\}\vert p_1\rangle\,,\quad
q=\frac{1}{2}(q_1+q_2)\,.
\end{gather}
In general, a two-to-two scattering amplitude depends on six
independent kinematical invariants, namely the external virtualities
$q_1^2, q_2^2, p_1^2, p_2^2$ and the usual Mandelstam variables
\begin{gather}
s = (p_1+q_1)^2\,,\ t=(p_1-p_2)^2\,,\ u= (p_2-q_1)^2\,,
\end{gather}
obeying the constraint
\begin{gather}
s + t + u = q_1^2 + q_2^2 + p_1^2 + p_2^2\,.
\end{gather}
It is convenient, for calculating the Compton form factors, to choose
$q_1$, $q_2$ and $p=p_1+p_2$ as the three independent momenta of the
process. At most thirteen independent tensors can contribute to the
Compton amplitude,
\begin{gather}
\begin{split}
g^{\mu\nu},\,p^{\mu}p^{\nu},\,q_1^{\mu}q_1^{\nu},\,
q_2^{\mu}q_2^{\nu},\,q_1^{\mu}q_2^{\nu},\,q_2^{\mu}q_1^{\nu},\,
p^{\mu}q_1^{\nu},\,q_2^{\mu}p^{\nu},\,p^{\mu}q_2^{\nu},\,q_1^{\mu}p^{\nu}, \\
\epsilon_{\mu\nu\rho\sigma}p^{\rho}q_1^{\sigma},\,
\epsilon_{\mu\nu\rho\sigma}p^{\rho}q_2^{\sigma},\,
\epsilon_{\mu\nu\rho\sigma}q_1^{\rho}q_2^{\sigma}\,.\qquad\qquad\quad
\end{split}
\end{gather}
The antisymmetric tensors are parity-violating. Electromagnetic gauge
invariance implies that
\begin{align}
q_1^{\mu}T_{\mu\nu} = T_{\mu\nu}\,q_2^{\nu} = 0\,.
\end{align}
Each contraction, expanded over the three linearly independent vectors
$p$, $q_1$ and $q_2$, yields three scalar conditions; since contracting
$q_1^{\mu}T_{\mu\nu}=0$ with $q_2^{\nu}$ reproduces the contraction of
$T_{\mu\nu}q_2^{\nu}=0$ with $q_1^{\mu}$, only five of these six
conditions are linearly independent. Hence
the most general spin-averaged, gauge-invariant, and parity-conserving
Compton amplitude has $10-5=5$ independent form factors:
\begin{align}
\label{Vmunu}
\begin{split}
T^{\mu\nu} &= V_1\left(g^{\mu\nu} - \frac{q_1^{\mu}q_1^{\nu}}{q_1^2}
- \frac{q_2^{\mu}q_2^{\nu}}{q_2^2}
+ q_1^{\mu}q_2^{\nu}\frac{(q_1.q_2)}{q_1^2q_2^2} \right) \\
&+ V_2\left( p^{\mu} - q_1^{\mu}\frac{(p.q_1)}{q_1^2}\right)
\left(p^{\nu} - q_2^{\nu}\frac{(p.q_2)}{q_2^2}\right) \\
&+ V_3\left(q_2^{\mu} - q_1^{\mu}\frac{(q_1.q_2)}{q_1^2}\right)
\left(q_1^{\nu} - q_2^{\nu}\frac{(q_1.q_2)}{q_2^2}\right) \\
&+ V_4\left( p^{\mu} - q_1^{\mu}\frac{(p.q_1)}{q_1^2}\right)
\left(q_1^{\nu} - q_2^{\nu}\frac{(q_1.q_2)}{q_2^2}\right) \\
&+ V_5\left(q_2^{\mu} - q_1^{\mu}\frac{(q_1.q_2)}{q_1^2}\right)
\left(p^{\nu} - q_2^{\nu}\frac{(p.q_2)}{q_2^2}\right)\,.
\end{split}
\end{align}
The form factors $V_1$, $V_2$, $V_3$, $V_4$ and $V_5$ can be readily
identified as the coefficients of the tensors $g^{\mu\nu}$,
$p^{\mu}p^{\nu}$, $q_1^{\nu}q_2^{\mu}$, $p^{\mu}q_1^{\nu}$ and
$p^{\nu}q_2^{\mu}$ respectively. They are in general functions of the
six independent scalar invariants. The off-shell Compton form factors
are not directly measurable. The Compton form factors of on-shell
virtual Compton amplitudes, defined by the conditions,
\begin{align}
p_1^2 = p_2^2 = -M_H^2\,,
\end{align}
depend only on four independent scalar invariants. The gauge-invariant
tensors in \eqref{Vmunu} will be denoted respectively as
$\V_i^{\mu\nu}(p,q_1,q_2)\,,\ i=1,\cdots 5$.
We shall be interested more particularly in two kinematical regimes,
according to whether one or two photons are real. In
electrophotoproduction the outgoing photon is real, $q_2^2=0$, and
thus transversely polarized. One can contract the Compton amplitude
with the polarization $\epsilon_2$, set $\epsilon_2\cdot q_2=0$ and
still impose a gauge condition on the outgoing photon, e.g.
$\epsilon_2\cdot p = 0$, by choosing the Coulomb gauge
$\epsilon_2^0=0$ and the frame $\v{p}=\b{0}$. Then the contracted
amplitude becomes,
\begin{align}
\label{Vmu}
\begin{split}
A^{\mu}_{\text{VCS}} &= T^{\mu\nu}\,\epsilon_{2\nu}^{\star} =
V_1\left(\epsilon_2^{*\mu} -
\frac{q_1^{\mu}}{q_1^2}(\v{\epsilon_2^*}\cdot\v{q_1})\right)
+ V_3\left(q_2^{\mu} - q_1^{\mu}\frac{(q_1.q_2)}{q_1^2}\right)
(\v{\epsilon_2^*}\cdot\v{q_1}) \\
&+ V_4\left( p^{\mu} - q_1^{\mu}\frac{(p.q_1)}{q_1^2}\right)
(\v{\epsilon_2^*}\cdot\v{q_1}) \,.
\end{split}
\end{align}
Therefore there are only three independent form factors when the
outgoing photon is real.
Finally if the ingoing photon is also real, $q_1^2=0$, one can
contract the amplitude $A^{\mu}_{\text{VCS}}$ with the polarization
$\epsilon_1$ and impose similarly the conditions $\epsilon_1\cdot q_1
= \epsilon_1\cdot p = 0$. Hence the real Compton amplitude has in
general two independent form factors,
\begin{align}
\label{RCS}
A_{\text{RCS}} = \epsilon_1^{\mu}\,T_{\mu\nu}\,\epsilon_2^{\star\nu}
= V_1\,\v{\epsilon_1}\cdot\v{\epsilon_2}^{\star}
+ V_3\,(\v{\epsilon_1}\cdot \v{q_2})(\v{\epsilon_2}^{\star}\cdot\v{q_1}) \,.
\end{align}
\section{The soft-wall model}
\label{MODEL}
The AdS/CFT correspondence is based upon the fact that the isometry
group of the five-dimensional (5D) anti-de Sitter space is the same as
the four-dimensional conformal group $SO(4,2)$. In Poincar\'e coordinates,
the AdS$_5$ metric reads
\begin{align}
\label{ADS}
ds^2 &= \frac{R^2}{z^2}\left(\eta_{\mu\nu}dx^{\mu}dx^{\nu}+dz^2\right)\,,
\qquad \sqrt{-g} = \frac{R^5}{z^5}\,,
\end{align}
where $\eta_{\mu\nu}\equiv(-1,1,1,1)$ is the four-dimensional
Minkowski metric. We shall set the curvature radius $R$ to 1 from now on.
Following \cite{PS03} we introduce a massless 5D vector field
$A_m(x,z)$ with a $U(1)$ gauge invariance, which is dual to the
electromagnetic current. The free field $A_m(x,z)$ must satisfy the
Maxwell equations in the bulk (with no dilaton coupling, which would
break the conformal invariance of the electromagnetic field),
\begin{align}
\label{MAXWELL}
(\nabla_m F)^{mn} = \frac{1}{\sqrt{-g}}\,
\p_m\left(\sqrt{-g}F^{mn}\right) = 0\,.
\end{align}
Choosing the linear gauge fixing condition,
\begin{align}
\p^{\mu}A_{\mu} + z\partial_z\left(z^{-1}A_z\right) = 0\,,
\end{align}
the general plane-wave solution of \eqref{MAXWELL} in the space-like
region, $Q^2=q\cdot q>0$, reads
\begin{align}
\begin{split}
A_{\mu}(x,z) &= \epsilon_{\mu}e^{iq\cdot x}QzK_1(Qz)\,,\\
A_z(x,z) &= -i\frac{(\epsilon\cdot q)}{q^2}e^{iq\cdot x}
\p_z\left(QzK_1(Qz)\right)\,,
\end{split}
\end{align}
where $\epsilon_{\mu}$ is a polarization vector. The boundary
condition is chosen such that the solution becomes a plane-wave on the
Minkowski slice at $z=0$,
\begin{align}
\label{BC}
\lim_{z\rightarrow 0} A_{\mu}(x,z) = \epsilon_{\mu}\,e^{iq\cdot x}\,.
\end{align}
The boundary condition in the timelike region, $q^2<0$, can be
obtained by analytic continuation in the $q^2$ variable. A crucial
property of the boundary condition \eqref{BC} is that the vector field
$A_{\mu}(x,z)$ is in fact a constant plane-wave throughout the bulk of
the AdS space when $q^2=0$.
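This follows from the small-argument behavior of the Bessel function,
$xK_1(x)=1+{\cal O}\left(x^2\ln x\right)$, which implies
\begin{align}
\lim_{Q\rightarrow 0} Qz\,K_1(Qz) = 1\,,
\end{align}
so that for $q^2\rightarrow 0$ and a transverse polarization,
$\epsilon\cdot q=0$, the plane-wave solution above reduces to
$A_{\mu}(x,z)=\epsilon_{\mu}\,e^{iq\cdot x}$ with $A_z=0$ at every $z$.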
We introduce a massive 5D scalar field $\Phi(x,z)$ which will be the
dual of an operator creating the spinless target. Following
\cite{KKSS} the bulk scalar field is coupled to a background dilaton
field $\chi(z)$ which deforms the AdS$_5$ metric. The action which
describes the propagation of $\Phi$ in this background reads
\begin{align}
S_{\Phi} = \frac{1}{2}\int
d^4x\,dz\sqrt{-g}e^{-\chi}\left(g^{ij}\partial_i\Phi\partial_j\Phi +
m^2_S\Phi^2\right)\,,
\end{align}
where $g$ is the AdS$_5$ metric. The classical field equation reads
\begin{align}
\label{laplace}
\Delta_g\Phi \equiv \frac{e^{\chi}}{\sqrt{-g}}\partial_i
\left(e^{-\chi}\sqrt{-g}g^{ij}\p_j\Phi\right)
= m_S^2\Phi\,.
\end{align}
In Poincar\'e coordinates, the Laplacian equation becomes
\begin{align}
\label{scalar}
z^2\square\Phi + z^5e^{\chi}\partial_z
\left(z^{-3}e^{-\chi}\partial_z\Phi\right)
= m_S^2\Phi\,.
\end{align}
Looking for a solution that is a plane-wave in Minkowski space and setting
\begin{align}
\Phi(x,z) = e^{ip\cdot x}e^{\chi(z)/2}z^{3/2}\psi(z) \equiv
e^{ip\cdot x}\,\h{\Phi}(z) \,,
\end{align}
the Laplacian equation is transformed into a Schr\"odinger-like equation
\begin{gather}
\frac{d^2\psi}{dz^2} - V(z)\psi = p^2\psi\,, \\
V(z) = \frac{m_S^2+15/4}{z^2} + \frac{3}{2z}\partial_z\chi
+ \frac{1}{4}(\partial_z\chi)^2 - \frac{1}{2}\partial_z^2\chi\,.
\end{gather}
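As a concrete illustration of a suitable background, the quadratic
dilaton profile $\chi(z)=\kappa^2z^2$ of \cite{KKSS} gives
\begin{gather}
V(z) = \frac{m_S^2+15/4}{z^2} + \kappa^4z^2 + 2\kappa^2\,,
\end{gather}
a radial harmonic oscillator on the half-line, whose normalizable
eigenfunctions lead to the linear trajectories
$m_n^2=4\kappa^2\left(n+\Delta/2\right)$, where
$\Delta=2+\sqrt{4+m_S^2}$ is the dimension of the operator dual to
$\Phi$.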
If the potential $V(z)$ has the right properties, and with appropriate
boundary conditions, the Schr\"odinger-like equation above has a complete
set of solutions $\{\psi_n(z)\,,\ n\in\NB\}$ which form an
orthonormal basis of the Hilbert space $\H$ defined by the inner
product
\begin{gather}
\langle \phi|\psi\rangle_{\H} = \int_0^{\infty}dz\,\phi^{\star}(z)\psi(z) \,.
\end{gather}
We shall assume that the dilaton background is such that $\H$ is
well-defined. Otherwise we leave the dilaton profile unspecified, so
as to remain as generic as possible, except that conformal invariance in the
ultraviolet requires that $\chi(z)\rightarrow 0$ for $z\rightarrow 0$.
In terms of the plane-wave solutions of the Laplacian equation
\eqref{scalar}, with the corresponding boundary conditions, the
completeness relation takes the form
\begin{align}
\delta(z-z') = \sum_n z'^{-3/2}e^{-\chi(z')/2}\,\h{\Phi}_n^{\star}(z')\,
\h{\Phi}_n(z)\,z^{-3/2}e^{-\chi(z)/2}\,.
\end{align}
Therefore the set of classical solutions $\{\h{\Phi}_n(z)\,,\
n\in\NB\}$ forms a complete orthonormal basis of the Hilbert space
$\H_S$ spanned by the solutions of the Laplacian equation and defined
by the inner product,
\begin{align}
\label{HS}
\left\langle\h{\Phi}\right|\left.\h{\Phi}'\right\rangle_{\H_S} =
\int_0^{\infty}dz\,z^{-3}e^{-\chi(z)}\,
\h{\Phi}^{\star}(z)\,\h{\Phi}'(z) \,.
\end{align}
Each $\h{\Phi}_n$ is a normalized eigenfunction, with appropriate
boundary conditions, of the operator
\begin{align}
\label{H}
\h{H}_S\,\h{\Phi}_n = \left(z^3\,e^{\chi}\partial_z
\left(z^{-3}e^{-\chi}\partial_z\right) - m_S^2z^{-2}\right)\h{\Phi}_n
= -m_n^2\h{\Phi}_n\,.
\end{align}
The scalar Green function $G(x,z;x',z')$ is defined by the
inhomogeneous equation
\begin{align}
\label{Green}
\left(\Delta_g - m_S^2\right) G(x,z;x',z') = \frac{e^{\chi}}{\sqrt{-g}}
\delta^{4}(x-x')\delta(z-z')\,.
\end{align}
Its four-dimensional Fourier transform $\h{G}(z,z';p)$
\begin{align}
\label{FT}
G(x,z;x',z') =
\frac{1}{(2\pi)^4}\int_{-\infty}^{+\infty}d^4p
\,e^{ip(x'-x)}\h{G}(z,z';p)
\end{align}
satisfies the equation
\begin{align}
\label{green}
\h{H}_S\,\h{G} = p^2\h{G} + z^3e^{\chi}\,\delta(z-z') \,.
\end{align}
$\h{G}$ has an expansion in terms of the normalized
eigenfunctions satisfying the same boundary conditions,
\begin{align}
\label{expansion}
\h{G}(z,z';p) &= -\sum_{n=0}^{\infty}
\frac{\h{\Phi}^{\star}_n(z)\h{\Phi}_n(z')}{p^2+m^2_n-i\epsilon} \,.
\end{align}
The handling of the singularities at $p^2=-m_n^2$ is done with the
standard Feynman prescription.
In order to complete the definition of the model we still need to
specify the interaction between the bulk scalar field and the bulk
$U(1)$ field. Since we are interested in describing the
electromagnetic interactions of a charged spinless hadron, we shall
take a $U(1)$ covariant coupling, \DS{D_n\Phi = \p_n\Phi -ie A_n\Phi}.
Hence we shall consider the full anti-de Sitter action,
\begin{equation}
\label{AdS}
S_{\text{AdS}}[\Phi,\Phi^*,A^m]=\int d^4xdz\ \sqrt{-g}
\left(-\frac{1}{4}F^{mn}F_{mn}
+ e^{-\chi}\left((D^m\Phi)^*D_m\Phi+m_S^2\Phi^*\Phi\right)\right)\ .
\end{equation}
\section{Calculation of four-point functions in AdS/QCD}
\label{4PT}
The gauge/gravity correspondence relates generating functions in a
strongly-coupled gauge theory to the classical supergravity
partition function in the following way:
\begin{align}
\label{corres}
\begin{split}
Z_{CFT}(c,\bar{c},n+\bar{n}) &= \left\langle\exp\left(\int d^4x\
(n_\mu+\bar{n}_\mu)J^\mu+\bar{c}\,O+c\,O^\dagger\right)\right\rangle_{CFT} \\
&= \exp\left(-S^{cl}_{\text{AdS}}
[\Phi(c),\Phi^*(\bar{c}),A^m(n_\mu+\bar{n}_\mu)]
\right) \,,
\end{split}
\end{align}
where the 4-dimensional sources for the CFT appear as boundary
conditions for the 5-dimensional classical supergravity fields.
Correlation functions of CFT operators can be obtained by expanding
\eqref{corres} in powers of the sources. We shall
use the prescription \eqref{corres} as a recipe for the AdS/QCD model.
The correlation function we are interested in can be
obtained from the coefficient of $\bar{c}n_\mu\bar{n}_\nu c.$
In the contracted Compton amplitude, $\epsilon_{1\mu}
T^{\mu\nu}\epsilon_{2\nu}^*,$ the QCD operators are coupled to
asymptotic states, therefore these will serve as boundary conditions
for the free bulk fields $\Phi^{(0)},$ $\Phi^{*(0)},$ and
$A_\mu^{(0)}.$ After we express $S^{\text{cl}}_{\text{AdS}}$ in terms
of free fields, it will be easy to read off the
$\bar{c}n_\mu\bar{n}_\nu c$ coefficient.
With the notations of the previous section, the equations of motion
for the interacting classical bulk fields read
\begin{align}
\frac{1}{\sqrt{-g}}\partial_n\left(\sqrt{-g}F^{mn}\right)
&= ie\,e^{-\chi}\left(\Phi^* D^m\Phi-(D^m\Phi^*)\Phi\right) \,, \\
(\Delta_g-m_S^2)\Phi &= V(A)\Phi=ieV_1(A)\Phi+e^2V_2(A)\Phi \,, \\
(\Delta_g-m_S^2)\Phi^* &= \o{V}(A)\Phi^*=-ieV_1(A)\Phi^*+e^2V_2(A)\Phi^* \,,
\end{align}
where the linear operators $V_1(A)$ and $V_2(A)$ act on the right and read
\begin{align}
V_1(A)&=\frac{e^\chi}{\sqrt{-g}}\,
\partial_m\left(\sqrt{-g}e^{-\chi}A^m\right)+2A^m \partial_m \,,
\\ V_2(A)&=A^m A_m \,.
\end{align}
The function $V$ can also be used to write the interaction term in
$S_{\text{AdS}}:$
\begin{align}
S_{int}&= \int d^4xdz\ \sqrt{-g}e^{-\chi}
\left(ieA^m(\Phi^*\partial_m\Phi-\Phi\partial_m\Phi^*)
+e^2A^mA_m\Phi^*\Phi\right) \,,
\nonumber\\
&= \frac{1}{2}\int d^4xdz\ \sqrt{-g}e^{-\chi}
\left(\Phi^*V(A)\Phi+\Phi\o{V}(A)\Phi^*\right) \,.
\end{align}
The solutions for $\Phi$ and $\Phi^*$ can be written as
\begin{align}
\Phi^{(*)}(y)&=\Phi^{(*)}_{(0)}(y)+\int dy'\sqrt{-g'}e^{-\chi(z')}G(y;y')\,
\overset{(-)}{V}\left(A(y')\right)\Phi^{(*)}(y') \,,
\end{align}
where the free bulk fields $\Phi_{(0)}$, $\Phi^*_{(0)}$ and the Green
function $G$ are respectively solutions of \eqref{laplace} and
\eqref{green}. We use the shorthand notations $y=(x,z)$ and
$dy=d^4xdz$. We can now write $\Phi^*V(A)\Phi$ in terms of the free
fields:
\begin{align}
\label{phivphi}
\begin{split}
\Phi^*V(A)\Phi &= \left(\Phi^*_{(0)}(y) -
ie\int dy'\sqrt{-g'}e^{-\chi(y')}\,G(y;y')
\,V_1\left(A_{(0)}(y')\right)\Phi^*_{(0)}(y')\right) \\
&\times\left(ieV_1\left(A_{(0)}(y)\right) + e^2 V_1\left(A_{(1)}(y)\right)
+e^2 V_2\left(A_{(0)}(y)\right)\right) \\
&\times\left(\Phi_{(0)}(y) + ie\int dy''\sqrt{-g''}e^{-\chi(y'')}\,G(y;y'')
\,V_1\left(A_{(0)}(y'')\right)\Phi_{(0)}(y'')\right) + {\cal O}(e^3) \,,
\end{split}
\end{align}
and similarly for $\Phi\o{V}(A)\Phi^*.$ In $S^{cl}_{int},$ the
contribution involving $A_{(0)} A_{(0)} \Phi^*_{(0)}\Phi_{(0)}$
appears at order $e^2:$
\begin{align}
\begin{split}
S^{cl}_{int} &= \int dy\,\sqrt{-g}e^{-\chi}
\left(ieA^m(\Phi^*\partial_m\Phi-\Phi\partial_m\Phi^*)
+e^2A^m A_m\Phi^*\Phi\right) \\
&+ e^2\int dydy'\,\sqrt{-g}e^{-\chi(y)}\sqrt{-g'}e^{-\chi(y')} A^m(y)
\left\{\left(G(y;y')\partial_m\Phi^*(y)\right.\right.
\\ &\left.\left.\qquad\qquad-\Phi^*(y)\partial_m G(y;y')\right)
V_1(A(y'))\Phi(y')+\left(\Phi\leftrightarrow\Phi^*\right)\right\}
+{\cal O}(e^3) \,,
\end{split}
\end{align}
where we have dropped the ${}_{(0)}$ notation for clarity, and we are
now dealing only with free fields. Note that the term
$V_1(A_{(1)})$ in (\ref{phivphi}) contributes also at order
$e^2$ but we have dropped it since it does not contribute to
$T^{\mu\nu},$ but rather to a $\langle\Phi^{*2}\Phi^2\rangle$
correlator. Finally, after integrating by parts over $y'$ one writes:
\begin{align}
S^{cl}_{int} &= ie\int dy\,\sqrt{-g}e^{-\chi}
A^m(\Phi^*\partial_m\Phi-\Phi\partial_m\Phi^*)
+ e^2\int dy\,\sqrt{-g}e^{-\chi} A^m A_m\Phi^*\Phi \nonumber\\
&+ e^2\int dydy'\,\sqrt{-g}e^{-\chi} \sqrt{-g'}e^{-\chi'}
A^m(y)A^n(y') \nonumber\\
&\qquad\quad\left\{
\Bigl(\Phi^*(y)\partial_m-(\partial_m\Phi^*(y))\Bigl)
\Big(\Phi(y')\partial'_n-(\partial'_n\Phi(y'))\Bigl)\right.\nonumber\\
&\qquad\quad\left.+\Bigl(\Phi(y)\partial_m-(\partial_m\Phi(y))\Bigl)
\Bigl(\Phi^*(y')\partial'_n-(\partial'_n\Phi^*(y'))\Bigl)
\right\}G(y;y')\,.
\end{align}
In this expression, the boundary conditions at $z=0$ of the classical
fields $A^m,$ $\Phi,$ and $\Phi^*$ are respectively
$n_{\mu}+\bar{n}_{\mu},$ $c,$ and $\bar{c}.$ These enter linearly in
the fields, and therefore the $\bar{c}n_\mu\bar{n}_\nu c$
coefficient in the expansion of \eqref{corres} is simply obtained.
After contractions, one can write
\begin{align}
\epsilon_\mu T^{\mu\nu}\epsilon^*_\nu
&=2 e^2\int dy\,\sqrt{-g}e^{-\chi}\
A^m(y)A^*_m(y)\Phi^*(y)\Phi(y)\nonumber\\
&+e^2\int dydy'\,\sqrt{-g}e^{-\chi} \sqrt{-g'}e^{-\chi'}
\left(A^m(y)A^{*n}(y')+A^{*m}(y)A^n(y')\right)
\nonumber\\&\quad\times
\Big(\Phi(y)\partial_m-(\partial_m\Phi(y))\Big)\
\Big(\Phi^*(y')\partial'_n-(\partial'_n\Phi^*(y'))\Big)
G(y;y')\,,
\label{fullTmunu}
\end{align}
where the fields $\Phi(x,z)$ and $A(x,z)$ are the plane-wave solutions
defined in section \ref{MODEL}.
In a diagrammatic representation, the first contribution in
(\ref{fullTmunu}) is a contact interaction, while the second
contribution contains an $s$-channel diagram (with the term
$A^m(x,z)A^{*n}(x',z')$) and a $u$-channel diagram (with the term
$A^{*m}(x,z)A^{n}(x',z')$) \cite{GX09}. This is not surprising, since
it is well-known that taking the classical limit of a quantum field
theory is equivalent to keeping only tree diagrams in the perturbative
expansion.
\section{On-shell Compton amplitude}
\label{SCALAR}
The $s$-channel contribution in eq.\,\eqref{fullTmunu} can be written as
\begin{align}
\begin{split}
\epsilon^{\mu}T_{\mu\nu}^s\epsilon^{\star\nu} &=
(ie)^2\int d^4x\,d^4x'\,dz\,dz'\,z^{-3}e^{-\chi(z)}A_k(x,z)
A^{\star}_l(x',z')z'^{-3}e^{-\chi(z')} \\
&\qquad\qquad\quad\times
\left(\Phi_{\text{in}}(x,z)\op{\p^k}_{(x,z)}G(x,z;x',z')
\op{\p^l}_{(x',z')}\Phi^{\star}_{\text{out}}(x',z')\right) \,,
\end{split}
\end{align}
where the initial and final wave-functions of the bulk scalar fields,
$\Phi_{\text{in}}$ and $\Phi_{\text{out}}$, are normalized plane-wave
solutions of the Laplacian equation \eqref{scalar}, whereas the bulk
vector field $A_k$ is a normalized plane-wave solution of the Maxwell
equations. The Green function $G(x,z;x',z')$ is defined by
eqs.\,\eqref{FT} and \eqref{green}. We shall introduce the shorthand
notations,
\begin{align}
\label{notations}
\begin{split}
&\Phi_{\text{in}}(x,z) = e^{ip_1\cdot x}\h{\Phi}_i(z)\,,\quad
\Phi_{\text{out}}(x',z') = e^{ip_2\cdot x'}\h{\Phi}_f(z')\,, \\
&A_{\mu}(x,z) = \epsilon_{\mu}e^{iq_1\cdot x}A_1(z)\,,\quad
A_z(x,z) = -i\frac{\epsilon\cdot q_1}{q_1^2}e^{iq_1\cdot x}\p_zA_1(z)\,, \\
&A_{\nu}(x',z') = \epsilon_{\nu}e^{iq_2\cdot x'}A_2(z')\,,\quad
A_z'(x',z') = -i\frac{\epsilon\cdot q_2}{q_2^2}
e^{iq_2\cdot x'}\p_{z'}A_2(z')\,,\\
&A_1(z) = Q_1z\,K_1(Q_1z) \,,\quad A_2(z) = Q_2z\,K_1(Q_2z) \,.
\end{split}
\end{align}
A straightforward calculation yields,
\begin{align}
\begin{split}
T_{\mu\nu}^s &= (2\pi)^4\delta^{(4)}(p_1+q_1-p_2-q_2)\,e^2\times\biggl(
(p_1+k)_{\mu}(p_2+k)_{\nu}\,\F_1\left(q^2_1,q_2^2,s\right) \\
&\quad
- \frac{(p_1+k)_{\mu}q_{2\nu}}{q_2^2}\,\F_2\left(q^2_1,q_2^2,s\right)
+ \frac{q_{1\mu}(p_2+k)_{\nu}}{q_1^2}\,\F_3\left(q^2_1,q_2^2,s\right)
- \frac{q_{1\mu}q_{2\nu}}{q_1^2q_2^2}\,\F_4\left(q^2_1,q_2^2,s\right)
\biggr)\,.
\end{split}
\end{align}
with $k=p_1+q_1=p_2+q_2$, $s=k^2$, and
\begin{align}
\begin{split}
\F_1\left(q^2_1,q_2^2,k^2\right) &=
\iint dz\,dz'\,\,z^{-3}e^{-\chi(z)}A_1(z)\h{\Phi}_i(z)\h{G}(z,z',k^2)
\h{\Phi}_f^{\star}(z')A_2^{\star}(z')z'^{-3}e^{-\chi(z')}\,, \\
\F_2\left(q^2_1,q_2^2,k^2\right) &=
\iint dz\,dz'\,\,z^{-3}e^{-\chi(z)}A_1(z)\h{\Phi}_i(z)\left(\h{G}(z,z',k^2)
\op{\p}_{z'}\h{\Phi}_f^{\star}(z')\right)
\p_{z'}A_2^{\star}(z')z'^{-3}e^{-\chi(z')}\,,\\
\F_3\left(q^2_1,q_2^2,k^2\right) &=
\iint dz\,dz'\,\,z^{-3}e^{-\chi(z)}\p_zA_1(z)
\left(\h{\Phi}_i(z)\op{\p}_z\h{G}(z,z',k^2)\right)
\h{\Phi}_f^{\star}(z')A_2^{\star}(z')z'^{-3}e^{-\chi(z')}\,, \\
\F_4\left(q^2_1,q_2^2,k^2\right) &=
\iint dz\,dz'\,\,z^{-3}e^{-\chi(z)}\p_zA_1(z)
\left(\h{\Phi}_i(z)\op{\p}_z\h{G}(z,z',k^2)
\op{\p}_{z'}\h{\Phi}_f^{\star}(z')\right)
\p_{z'}A_2^{\star}(z')z'^{-3}e^{-\chi(z')}\,.
\end{split}
\end{align}
The $u$-channel amplitude is obtained from the $s$-channel amplitude
by crossing symmetry, i.e. by interchanging $\mu\lr\nu$,
$q_1\lr -q_2$,\ $\epsilon\lr \epsilon^{\star}$ and
$A_1\lr A_2^{\star}$. Hence it reads
\begin{align}
\begin{split}
T_{\mu\nu}^u &= (2\pi)^4\delta^{(4)}(p_1+q_1-p_2-q_2)\,e^2
\times\biggl( (p_2+k')_{\mu}(p_1+k')_{\nu}\,\F_1(q_2^2,q_1^2,u) \\
&\quad
+ \frac{q_{1\mu}(p_1+k')_{\nu}}{q_1^2}\,\F_2(q_2^2,q_1^2,u)
- \frac{(p_2+k')_{\mu}q_{2\nu}}{q_2^2}\,\F_3(q_2^2,q_1^2,u)
- \frac{q_{1\mu}q_{2\nu}}{q_1^2q_2^2}\,\F_4(q_2^2,q_1^2,u) \biggr)\,,
\end{split}
\end{align}
with $k' = p_1-q_2 = p_2-q_1$ and $u=k'^2$.
Integrating by parts the partial derivative of $A_1$,
\begin{align}
\F_3(q_1^2,q_2^2,k^2) &= -\iint dz\,dz'\,\,z'^{-3}e^{-\chi(z')}
A^{\star}_2(z')\h{\Phi}_f^{\star}(z')A_1(z)\p_{z}
\left(z^{-3}e^{-\chi(z)}\left(\h{\Phi}_i(z)\op{\p}_z\h{G}(z,z',k^2)\right)
\right)\,,
\end{align}
and using equations \eqref{scalar} and \eqref{green}, one gets
\begin{align}
\nonumber
&\p_{z}\left(z^{-3}e^{-\chi(z)}\left(\h{\Phi}_i(z)\op{\p}_z\h{G}(z,z',k^2)
\right)\right) = \\
\nonumber
&\h{\Phi}_i(z)\p_{z}\left(z^{-3}e^{-\chi(z)}\p_z\h{G}(z,z',k^2)\right)
-\p_{z}\left(z^{-3}e^{-\chi(z)}\p_z\h{\Phi}_i(z)\right)\h{G}(z,z',k^2) = \\
\label{rel}
& z^{-3}e^{-\chi}\left(k^2-p_1^2\right)\h{\Phi}_i(z)\h{G}(z,z',k^2)
+ \delta(z-z')\h{\Phi}_i(z) \,.
\end{align}
Hence
\begin{align}
\F_3(q_1^2,q_2^2,k^2) = (p_1^2-k^2)\F_1(q_1^2,q_2^2,k^2) -
\int dz\, z^{-3}e^{-\chi(z)}A_1(z)A_2^{\star}(z)
\h{\Phi}_i(z)\h{\Phi}_f^{\star}(z)\,.
\end{align}
Similarly,
\begin{align}
\F_2(q_1^2,q_2^2,k^2) &=
-\iint dz\,dz'\,z^{-3}e^{-\chi(z)}A_1(z)\h{\Phi}_i(z)A_2^{\star}(z')
\p_{z'}\left(z'^{-3}e^{-\chi(z')}\left(\h{G}(z,z',k^2)
\op{\p}_{z'}\h{\Phi}_f^{\star}(z')\right)\right)\,, \nonumber\\
&= (k^2-p_2^2)\F_1(q_1^2,q_2^2,k^2) + \int dz\, z^{-3}e^{-\chi(z)}
A_1(z)A_2^{\star}(z)\h{\Phi}_i(z)\h{\Phi}_f^{\star}(z)\,.
\end{align}
The four-point interaction amplitude reads
\begin{align}
\label{CONTACT}
\epsilon^{\mu}T^c_{\mu\nu}\epsilon^{\star\nu} &=
-2(ie)^2\int d^4x\,dz\,\sqrt{-g}e^{-\chi}
\h{\Phi}_i(x,z)g^{mn}A_m(x,z)A_n^{\star}(x,z)\h{\Phi}_f^{\star}(x,z)\,,
\nonumber\\
\begin{split}
T^c_{\mu\nu} &= 2 e^2(2\pi)^4\delta^{(4)}(p_1+q_1-p_2-q_2) \times \\
&\quad\int dz z^{-3}e^{-\chi}\,
\h{\Phi}_i(z)\left(g_{\mu\nu}A_1(z)A_2^{\star}(z) +
\frac{q_{1\mu}q_{2\nu}}{q_1^2q_2^2}
\p_zA_1(z)\p_zA_2^{\star}(z)\right)\h{\Phi}_f^{\star}(z) \,.
\end{split}
\end{align}
Let
\begin{align}
\C_1(q_1^2,q_2^2) &= \C_1(q_2^2,q_1^2) = \int dz\,z^{-3}e^{-\chi(z)}
\,A_1(z)\,A_2^{\star}(z)\,\h{\Phi}_i(z)\,\h{\Phi}_f^{\star}(z) \,, \\
\C_0(q_1^2,q_2^2) &= \C_0(q_2^2,q_1^2) =
\int dz\, z^{-3}e^{-\chi(z)}\,\p_zA_1(z)\,\p_zA_2^{\star}(z)
\,\h{\Phi}_i(z)\,\h{\Phi}_f^{\star}(z) \,.
\end{align}
Setting $p=p_1+p_2$, the total amplitude can be written as
\begin{align*}
T_{\mu\nu} &= e^2(2\pi)^4\delta^{(4)}(p_1+q_1-p_2-q_2) \times \biggl(\\
&\quad
\left(\F_1\left(q_1^2,q_2^2,s\right) +
\F_1\left(q_2^2,q_1^2,u\right)\right)\times
\biggl(\left(p_{\mu}p_{\nu} - \frac{p\cdot q_2}{q_2^2}p_{\mu}q_{2\nu}
- \frac{p\cdot q_1}{q_1^2}p_{\nu}q_{1\mu}\right) + \\
& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\left(q_{1\nu}q_{2\mu} - \frac{q_1\cdot q_2}{q_1^2}q_{1\mu}q_{1\nu}
- \frac{q_1\cdot q_2}{q_2^2}q_{2\mu}q_{2\nu}\right)\biggr) \\
&+ \left(\F_1\left(q_1^2,q_2^2,s\right)-\F_1\left(q_2^2,q_1^2,u\right)\right)
\times\biggl(
\left(p_{\mu}q_{1\nu} - \frac{q_1\cdot q_2}{q_2^2}p_{\mu}q_{2\nu}
- \frac{p\cdot q_1}{q_1^2} q_{1\mu}q_{1\nu}\right) + \\
& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\left(p_{\nu}q_{2\mu} - \frac{q_1\cdot q_2}{q_1^2}p_{\nu}q_{1\mu} -
\frac{p\cdot q_2}{q_2^2}q_{2\mu}q_{2\nu}\right)\biggr) \\
&\ + 2\C_1\left(q_1^2,q_2^2\right)\left(g_{\mu\nu}
- \frac{q_{1\mu}q_{1\nu}}{q_1^2}
- \frac{q_{2\mu}q_{2\nu}}{q_2^2}\right)
+ \left(2\C_0(q_1^2,q_2^2) - \F_4\left(q_1^2,q_2^2,s\right) -
\F_4\left(q_2^2,q_1^2,u\right)\right)\frac{q_{1\mu}q_{2\nu}}{q_1^2q_2^2}
\biggr) \,.
\end{align*}
Comparing with the gauge-invariant tensor basis \eqref{Vmunu}, gauge
invariance holds true if, and only if,
\begin{align}
\begin{split}
2\C_0(q_1^2,q_2^2) &= \F_4\left(q_1^2,q_2^2,s\right)
+\F_4\left(q_2^2,q_1^2,u\right) \\
&+ \left(\F_1\left(q_1^2,q_2^2,s\right)+\F_1\left(q_2^2,q_1^2,u\right)\right)
\left((p\cdot q_1)(p\cdot q_2)+(q_1\cdot q_2)^2\right) \\
&+ \left(\F_1\left(q_1^2,q_2^2,s\right)-\F_1\left(q_2^2,q_1^2,u\right)\right)
\left(p\cdot q_1 + p\cdot q_2\right)(q_1\cdot q_2) \\
&+ 2\C_1(q_1^2,q_2^2)(q_1\cdot q_2)\,.
\end{split}
\end{align}
Expanding the bidirectional derivatives, the form factor $\F_4$ reads
\begin{align}
\begin{split}
\F_4 &= \iint dz\,dz'\,z^{-3}e^{-\chi(z)}\,\p_zA_1(z)\times \\
&\qquad\qquad\biggl(
\left(\h{\Phi}_i(z)\,\p_z\h{G} - \h{G}\,\p_z\h{\Phi}_i(z)\right)
\p_{z'}\h{\Phi}_f^{\star}(z')
- \p_{z'}\left(\h{\Phi}_i(z)\,\p_z\h{G} - \p_z\h{\Phi}_i(z)\,\h{G}\right)
\h{\Phi}_f^{\star}(z') \biggr) \\
&\qquad\quad\times \p_{z'}A_2^{\star}(z')z'^{-3}e^{-\chi(z')} \,.
\end{split}
\end{align}
We first integrate by parts over $\p_zA_1(z)$ and use equation
\eqref{rel} in the $s$-channel,
\begin{align}
\nonumber
\begin{split}
\F_4(s) &= -\iint dz\,dz'\,A_1(z)\times \\
&\qquad\qquad\biggl(
\left(z^{-3}e^{-\chi(z)}\left(s-p_1^2\right)\h{\Phi}_i(z)\h{G}(z,z',s)
+ \delta(z-z')\,\h{\Phi}_i(z)\right)\p_{z'}\h{\Phi}_f^{\star}(z') \\
&\qquad\qquad
-\p_{z'}\left(z^{-3}e^{-\chi}\left(s-p_1^2\right)\h{\Phi}_i(z)\h{G}(z,z',s)
+ \delta(z-z')\,\h{\Phi}_i(z)\right)\h{\Phi}_f^{\star}(z') \biggr) \\
&\qquad\quad\times \p_{z'}A_2^{\star}(z')z'^{-3}e^{-\chi(z')} \,,
\end{split}
\\
\begin{split}
&= (p_1^2-s)\iint dz\,dz'\,z^{-3}e^{-\chi(z)}\,A_1(z)\,\h{\Phi}_i(z)\times \\
& \qquad\qquad\qquad\quad\left(\h{G}\,\p_{z'}\h{\Phi}_f^{\star}(z') -
\h{\Phi}_f^{\star}(z')\,\p_{z'}\h{G}\right)
\p_{z'}A_2^{\star}(z')\,z'^{-3}e^{-\chi(z')} \\
&\quad-
\int dz\,z^{-3}e^{-\chi(z)}\,A_1(z)\,\h{\Phi}_i(z)\,\p_z\h{\Phi}_f^{\star}(z)
\,\p_zA_2^{\star}(z) \\
&\quad- \int dz\,A_1(z)\,\h{\Phi}_i(z)\,\p_z
\left(\h{\Phi}_f^{\star}(z)\,\p_zA_2^{\star}(z)\,z^{-3}e^{-\chi(z)}\right)\,.
\end{split}
\end{align}
We now integrate the first term by parts over $\p_{z'}A^{\star}_2(z')$,
use again equation \eqref{rel}, and integrate by parts the third term,
\begin{align}
\begin{split}
\F_4(s) &= (p_1^2-s)
\iint dz\,dz'\,z^{-3}e^{-\chi(z)}\,A_1(z)\,\h{\Phi}_i(z)\times
\\ & \qquad\qquad\qquad\quad
\left(z'^{-3}e^{-\chi(z')}\left(s-p_2^2\right)\h{G}(z,z',s)
+ \delta(z-z')\right)\h{\Phi}_f^{\star}(z')\,A_2^{\star}(z') \\
&\quad-
\int dz\,z^{-3}e^{-\chi(z)}\,A_1(z)\,\h{\Phi}_i(z)\,\p_z\h{\Phi}_f^{\star}(z)
\,\p_zA_2^{\star}(z) \\
&\quad+ \int dz\,z^{-3}e^{-\chi(z)}\,\p_z\left(A_1(z)\,\h{\Phi}_i(z)\right)
\,\h{\Phi}_f^{\star}(z)\,\p_zA_2^{\star}(z) \,.
\end{split}
\end{align}
Hence
\begin{align}
\label{T4}
\begin{split}
\F_4(s) &= -(p_1^2-s)(p_2^2-s)\F_1(s) + (p_1^2-s)\C_1 +\C_0 \\
&\quad- \int dz\,z^{-3}e^{-\chi(z)}\,A_1(z)
\left(\h{\Phi}_i(z)\,\op{\p}_z\h{\Phi}_f^{\star}(z)\right)
\,\p_zA_2^{\star}(z)\,.
\end{split}
\end{align}
A similar identity holds in the $u$-channel. Noting that
\begin{gather}
\nonumber
\begin{split}
(p\cdot q_1)(p\cdot q_2)+(q_1\cdot q_2)^2 \pm
(p\cdot q_1 + p\cdot q_2)(q_1\cdot q_2) =
(p\pm q_1)\cdot q_2 \times (p\pm q_2)\cdot q_1 \,,
\end{split}
\\
\begin{split}
(p+q_2)\cdot q_1\times(p+q_1)\cdot q_2 = (p_1^2-s)(p_2^2-s) \,, \\
(p-q_2)\cdot q_1\times(p-q_1)\cdot q_2 = (p_1^2-u)(p_2^2-u) \,,
\end{split}
\end{gather}
gauge invariance is recovered when $p_1^2=p_2^2$. Indeed, then
$\h{\Phi}_i(z)=\h{\Phi}_f(z)$ since they satisfy the same equation, so the
last term in eq.\,\eqref{T4} vanishes.
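The algebraic identity used in this step is elementary and can be checked
symbolically; a minimal sketch, treating the scalar products as independent
symbols:
\begin{verbatim}
import sympy as sp

pq1, pq2, q12 = sp.symbols('pq1 pq2 q12')  # p.q1, p.q2, q1.q2
for s in (+1, -1):
    lhs = pq1*pq2 + q12**2 + s*(pq1 + pq2)*q12
    rhs = (pq2 + s*q12)*(pq1 + s*q12)
    print(sp.expand(lhs - rhs))            # 0 for both signs
\end{verbatim}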
To summarize, the on-shell Compton amplitude reads, with
$p_1^2=p_2^2=-m^2$,
\begin{align}
\label{SCA}
\begin{split}
T^{\mu\nu} &= e^2\left(2\C_1\,\V_1^{\mu\nu}
+ \C_+\left(\V_2^{\mu\nu} + \V_3^{\mu\nu}\right)
+ \C_-\left(\V_4^{\mu\nu} + \V_5^{\mu\nu}\right)\right) \,,
\end{split}
\end{align}
where the tensors $\V_i^{\mu\nu},\ i=1,\cdots,5$, are defined in
eqs.~\eqref{Vmunu} and
\begin{align}
\C_{\pm}(m^2,q_1^2,q_2^2,s,u) = \F_1(m^2,q_1^2,q_2^2,s) \pm
\F_1(m^2,q_2^2,q_1^2,u) \,.
\end{align}
The $\delta$ factor expressing energy-momentum conservation is
implicitly understood from now on in all formulas for the Compton
amplitude. The form factors $\F_1(m^2,q_1^2,q_2^2,k^2)$ and
$\C_1(m^2,q_1^2,q_2^2)$ are defined by
\begin{align}
\label{SVCF}
\begin{split}
\F_1\left(m^2,q^2_1,q_2^2,k^2\right) &=
\iint dz_1\,dz_2\,\,z_1^{-3}e^{-\chi(z_1)}\,A_1(z_1)\,\h{\Phi}_i(z_1)\,
\h{G}(z_1,z_2,k^2)
\,\h{\Phi}_f^{\star}(z_2)\,A_2^{\star}(z_2)\,z_2^{-3}e^{-\chi(z_2)}\,, \\
\C_1(m^2,q_1^2,q_2^2) &= \int dz\,z^{-3}e^{-\chi(z)}
\,A_1(z)\,A_2^{\star}(z)\,\h{\Phi}_i(z)\,\h{\Phi}_f^{\star}(z) \,.
\end{split}
\end{align}
These formulas are valid for any dilaton background $\chi(z)$ on the
5D AdS space which yields a well-defined inner product on the vector
space of solutions of the classical field equations. If $\chi\equiv
0$, our formulas coincide with the Compton amplitude off a dilaton
calculated in \cite{GX09}, once we identify the normalized functions
$\h{\Phi}_i$ and $\h{\Phi}_f$ with the Bessel solutions of the
hard-wall model. We have shown quite generally that the
gauge-invariant structure of the Compton amplitude is similar for
hard-wall and soft-wall models. Note that we only get three
independent Compton form factors out of a possible five. These generic
properties do not depend upon the special way of breaking the
conformal invariance for extending the AdS/CFT correspondence to QCD,
nor upon detailed relations between Bessel functions or other special
functions. They depend only upon the general structure of the classical
equations satisfied by the bulk scalar fields and their Green
functions in AdS space with a dilaton background.
Moreover we have shown explicitly that the off-shell Compton
amplitudes ($p_1^2 \ne p_2^2$) calculated in these simplest AdS/QCD
models are not gauge-invariant. This result is also not obvious and is
in fact very specific to single scalar intermediate states. For
example, had we allowed for non-minimal couplings with only single
vector intermediate states, we would get gauge-invariant off-shell
Compton amplitudes. One should perhaps emphasize that the non-gauge
invariance of the off-shell four-point Compton amplitude has nothing
to do with the (trivial) non-gauge invariance of the three-point
function when the conformal dimensions of the initial and final scalar
states are different. In our case, the initial, final, and
intermediate scalar states are solutions of the same classical field
equation in AdS space and have the same conformal dimension. The
non-gauge invariance, when the initial and final scalar states are not
the same mass eigenstates in Minkowski space, is due to the particular
form of the contact term \eqref{CONTACT}.
\section{Deeply inelastic scattering}
\label{DIS}
The Compton amplitude \eqref{SCA} has a rather simple structure which
is enlightening to unravel. Expanding the Green function over the
orthonormal eigenstates, we can write the form factor $\F_1$ in the
doubly spacelike region as
\begin{align}
\label{IF}
\F_1\left(m^2,q^2_1,q_2^2,k^2\right) &= -\sum_{n=0}^{\infty}
\frac{\Gamma(m^2,m_n^2,q_1^2)\,\Gamma^*(m^2,m_n^2,q_2^2)}
{k^2+m^2_n-i\epsilon} \,, \\
\label{IV}
\Gamma(m^2,m_n^2,Q^2) &= Q\int dz\,z^{-2}e^{-\chi(z)}
K_1(Qz)\,\h{\Phi}(z)\,\h{\Phi}^*_n(z) \,.
\end{align}
It is straightforward to show from the AdS/QCD dictionary that the vertex
function $\Gamma$ is just the unique form factor which parametrizes
the most general matrix element of the conserved electromagnetic
current between two (pseudo)scalar states,
\begin{align}
\label{FF}
\langle p_2|J^{\mu}(0)|p_1\rangle = \Gamma(p_1^2,p_2^2,k^2)
\left(p^{\mu} - \frac{p_2^2-p_1^2}{k^2}\,k^{\mu}\right)
\,,\qquad p =p_1 + p_2\,,\quad k = p_2-p_1\,.
\end{align}
The electromagnetic form factor of the spinless target is defined as
the elastic limit of $\Gamma$,
\begin{align}
\label{FFEM}
F_{\gamma}(Q^2) = \Gamma(m^2,m^2,Q^2) = \C_1(m^2,Q^2,0)
= Q\int dz\,z^{-2}e^{-\chi(z)}
K_1(Qz)\,\left|\h{\Phi}_m(z)\right|^2 \,.
\end{align}
Taking into account the tensorial identities
\begin{align}
\V_2^{\mu\nu} + \V_3^{\mu\nu} \pm \V_4^{\mu\nu} \pm \V_5^{\mu\nu}
= \left((p\pm q_2)^{\mu} - q_1^{\mu}\frac{(p\pm q_2)\cdot q_1}{q_1^2}\right)
\left((p\pm q_1)^{\nu} - q_2^{\nu}\frac{(p\pm q_1)\cdot q_2}{q_2^2}\right) \,,
\end{align}
the amplitude \eqref{SCA} can be written in a form which exhibits the
tensorial structure used in \cite{GX09}
\begin{align}
\label{SCA2}
\begin{split}
T^{\mu\nu} &= e^2\biggl\{2\C_1(m^2,q_1^2,q_2^2)\,\V_1^{\mu\nu} \\
&\qquad\quad-
\left(2p_1^{\mu} - q_1^{\mu}\frac{2p_1\cdot q_1}{q_1^2}\right)
\left(2p_2^{\nu} - q_2^{\nu}\frac{2p_2\cdot q_2}{q_2^2}\right)
\sum_{n=0}^{\infty}
\frac{\Gamma(m^2,m_n^2,q_1^2)\,\Gamma^*(m^2,m_n^2,q_2^2)}{s+m^2_n-i\epsilon}
\\ &\qquad\quad-
\left(2p_2^{\mu} - q_1^{\mu}\frac{2p_2\cdot q_1}{q_1^2}\right)
\left(2p_1^{\nu} - q_2^{\nu}\frac{2p_1\cdot q_2}{q_2^2}\right)
\sum_{n=0}^{\infty}
\frac{\Gamma(m^2,m_n^2,q_2^2)\,\Gamma^*(m^2,m_n^2,q_1^2)}{u+m^2_n-i\epsilon}
\biggr\} \,.
\end{split}
\end{align}
As a by-product, the absorptive part of the forward Compton scattering
amplitude reads
\begin{align}
\label{IM}
\nonumber
&\text{Im}\,T^{\mu\nu}(q^2,s) = e^2
\left(p^{\mu} + \frac{1}{x}q^{\mu}\right)
\left(p^{\nu} + \frac{1}{x}q^{\nu}\right)
\sum_{n=0}^{\infty} \delta(s+m_n^2)\left|\Gamma(m^2,m_n^2,q^2)\right|^2 \,,\\
&\qquad\qquad\quad\ \approx
e^2\left.\left(\px{m_n^2}{n}\right)^{-1}\right|_{m_n^2=-s}
\left|\Gamma(m^2,-s,q^2)\right|^2
\left(p^{\mu} + \frac{1}{x}q^{\mu}\right)
\left(p^{\nu} + \frac{1}{x}q^{\nu}\right) \,,\\
\nonumber
&\qquad\qquad
q_1=q_2=q\,,\qquad p=2p_1=2p_2\,,\qquad x = -\frac{q^2}{p\cdot q} \,.
\end{align}
and yields, in the hard-wall model, the same structure functions
$F_1=0$ and $F_2$ as found in \cite{PS03}, in the large-$x$ region and
in the one-particle approximation for intermediate states.
More generally, eq.\,\eqref{IM} relates the scaling properties of the
vertex function $\Gamma$ and of the structure function $F_2$ in a
generic soft-wall model. Indeed the function $Qz\,K_1(Qz)$ decreases
monotonically from 1 to 0 and is exponentially small at large
$Qz$. Hence the $z$-dependence of the electromagnetic current can be
roughly approximated as a step function of width $\O(1/Q)$ and
\begin{align}
\label{overlap}
\Gamma(m^2,m_n^2,Q^2) \approx \int_0^{1/Q}\frac{dz}{z^3}\,e^{-\chi(z)}
\,\h{\Phi}(z)\,\h{\Phi}^*_n(z) \,.
\end{align}
One can distinguish three kinematical regimes for the evaluation of
the overlap integral \eqref{overlap}:
\begin{itemize}
\item $Q^2\gg m^2,\ Q^2\gtrsim m_n^2.\quad$ For $z\lesssim 1/Q$ and
$Q$ large enough at fixed $Q^2/m_n^2$, $\h{\Phi}(z)$ has the
asymptotic behavior $\h{\Phi}(z)\sim z^{\Delta}$, where the
conformal dimension $\Delta$ is the same as in the hard-wall model
as long as $\chi(z)\rightarrow 0$ when $z\rightarrow 0$. On the
other hand, since $n$ is a highly excited state, it is legitimate to
use a WKB approximation for $\h{\Phi}_n(z)$ \cite{PRSW},
\begin{align}
\Gamma(m^2,m_n^2,Q^2) \propto C(m_n)
\left(\frac{1}{Q}\right)^{\Delta}\,F\left(\frac{Q}{m_n}\right) \,.
\end{align}
The squared normalization constant $C^2(m_n)$ of $\h{\Phi}_n(z)$ and
the semiclassical density of states $\frac{\p m_n^2}{\p n}$ have the
same dependence upon $m_n$. Therefore the structure function
$F_2(Q^2,x)$ has the power-law scaling,
\begin{align}
\frac{x}{Q^2}F_2(Q^2,x) \propto
\left(\frac{1}{Q^2}\right)^{\Delta} F^2(x)
\,,\qquad x = \frac{Q^2}{Q^2-s} \,.
\end{align}
\item $Q^2\gg m^2,\ Q^2\gg m_n^2.\quad$ Then both $\h{\Phi}(z)$ and
$\h{\Phi}_n(z)$ behave as $z^{\Delta}$ for $z\lesssim 1/Q$. It
follows that the electromagnetic form factor has the power law
scaling \cite{BRO07},
\begin{align}
F_{\gamma}(Q^2) \propto \left(\frac{1}{Q^2}\right)^{\Delta-1} \,.
\end{align}
The identity of the asymptotic scaling behavior for the
electromagnetic form factor and for the structure functions is a
generic property of the AdS/QCD models that we consider. Such a
property, which does not even depend upon the value of $\Delta$ that is
picked, is certainly difficult to reconcile with a partonic picture.
\item $Q^2\rightarrow 0.\quad$ In that limit the overlap integral
reduces to the scalar product of $\h{\Phi}$ and $\h{\Phi}_n$. Since
the asymptotic states $\Phi_{\text{in}}(x,z)$ and
$\Phi_{\text{out}}(x,z)$ must be eigenstates of the operator
\eqref{H} with the same mass eigenvalue, all inelastic channels
decouple when $Q^2=0$ and only the elastic channel
remains open. We shall work out some of the consequences in the
next sections.
However we can already observe that there is a violation of elastic
unitarity in the Compton amplitude \eqref{SCA2} which is inherent to
the $N_c\rightarrow\infty$ approximation involved in the AdS/QCD
recipes. Indeed the total elastic Compton cross-section is of order
$e^4$. Hence the imaginary part of the forward amplitude must vanish
at order $e^2$ in the elastic limit to comply with the optical
theorem. An absorptive part of the elastic amplitude can be
generated only by loop effects which are at present beyond the reach
of the AdS/QCD correspondence.
\end{itemize}
\section{Virtual Compton Scattering}
\label{DVCS}
When the outgoing photon is real, $q_2^2=0$, a mere inspection of
eq.\,\eqref{Vmunu} shows that, in order to cancel the poles in
$q_2^2$, we must have the following relations between the form factors,
\begin{gather}
\label{DVCSR}
\begin{split}
V_1 + (q_1\cdot q_2)V_3 + (p\cdot q_2)V_5 = 0 \,, \\
(p\cdot q_2)V_2 + (q_1\cdot q_2) V_4 = 0\,.
\end{split}
\end{gather}
Hence only three Compton form factors remain independent as already
observed in section \ref{COMPTON}. However eqs.\,\eqref{DVCSR} are not
manifestly satisfied by eqs.\,\eqref{SCA} and \eqref{SVCF}. We should
have
\begin{gather}
\label{DVCSR1}
2\C_1 + (q_1\cdot q_2)\,\C_+ + (p\cdot q_2)\,\C_- = 0 \,, \\
\label{DVCSR2}
(p\cdot q_2)\,\C_+ + (q_1\cdot q_2)\,\C_- = 0\,.
\end{gather}
The second relation \eqref{DVCSR2} reads explicitly, with the
notations \eqref{notations},
\begin{align}
\begin{split}
&(p+q_1)\cdot q_2\times \F_1(m^2,q_1^2,0,s)
+ (p-q_1)\cdot q_2\times \F_1(m^2,0,q_1^2,u) = \\
&\iint dz_1\,dz_2\,\,z_1^{-3}e^{-\chi(z_1)}\,\h{\Phi}_i(z_1)\,\times \\
&\qquad\left(
-(p_1^2-s) A_1(z_1)\,\h{G}(z_1,z_2,s)\,A_2^{\star}(z_2) +
(p_2^2-u) A_2^{\star}(z_1)\,\h{G}(z_1,z_2,u)\,A_1(z_2)
\right) \\
&\qquad \times\h{\Phi}_f^{\star}(z_2)\,z_2^{-3}e^{-\chi(z_2)}
= 0 \,.
\end{split}
\end{align}
We now use the completeness relation \eqref{expansion} satisfied by
the Green function $\h{G}$, and take into account the fact that for
the electromagnetic field,
\begin{align}
\lim_{q_2^2\rightarrow 0} A_2(z) = 1 \,,\qquad
\lim_{q_2^2\rightarrow 0} \p_{z}A_2(z) = 0 \,.
\end{align}
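For $\chi=0$ the spacelike vector mode appearing in \eqref{IV} and
\eqref{FFEM} is $A(z)=Qz\,K_1(Qz)$, so these limits follow from the
small-argument behavior $K_1(x)\simeq 1/x$; a quick numerical check:
\begin{verbatim}
from scipy.special import kv

for x in (1e-1, 1e-2, 1e-3, 1e-4):
    print(x, x * kv(1, x))   # x K_1(x) -> 1 as x -> 0
\end{verbatim}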
We integrate each term over $z_2$ and $z_1$ respectively,
\begin{align}
\begin{split}
\int dz_2\,z_2^{-3}e^{-\chi(z_2)}\,
\h{\Phi}_f^{\star}(z_2)\,\h{\Phi}_n(z_2)
&= C^{\star}(p_2^2,m_n^2) \,, \\
\int dz_1\,z_1^{-3}e^{-\chi(z_1)}\,
\h{\Phi}_i(z_1)\,\h{\Phi}_n^{\star}(z_1)
&= C(p_1^2,m_n^2) \,.
\end{split}
\end{align}
Hence eq.\,\eqref{DVCSR2} reads
\begin{align}
\begin{split}
&-(p_1^2-s)\sum_n\frac{C^{\star}(p_2^2,m_n^2)}{m_n^2+s-i\epsilon}
\int dz\,z^{-3}e^{-\chi(z)}\,\h{\Phi}_i(z)\,
\h{\Phi}^{\star}_n(z)\,A_1(z) \\
&+
(p_2^2-u)\sum_n\frac{C(p_1^2,m_n^2)}{m_n^2+u-i\epsilon}
\int dz\,z^{-3}e^{-\chi(z)}\,\h{\Phi}_f^{\star}(z)\,
\h{\Phi}_n(z)\,A_1(z) = 0 \,.
\end{split}
\end{align}
Using the orthogonality relations, the identity \eqref{DVCSR2} holds
true exactly only if the virtual Compton scattering is on-shell,
\begin{align}
p_1^2 = p_2^2 = -m_{n_0}^2\,,\quad\text{for some }n_0\,.
\end{align}
By the same token eq.\,\eqref{DVCSR1} reduces to the definition of
$\C_1$ in \eqref{SVCF},
\begin{align}
\begin{split}
&2\C_1(m^2,q_1^2,0) + (p+q_1)\cdot q_2\times \F_1(m^2,q_1^2,0,s)
- (p-q_1)\cdot q_2\times \F_1(m^2,0,q_1^2,u) = \\
&2\C_1(m^2,q_1^2,0)
- 2\int dz\,z^{-3}e^{-\chi(z)}\left|\h{\Phi}_m(z)\right|^2\,A_1(z)
= 0 \,.
\end{split}
\end{align}
We can solve for $\C_{\pm}$ in terms of $\C_1$:
\begin{gather}
\begin{split}
\C_+ = -\frac{2(q_1\cdot q_2)}{(q_1\cdot q_2)^2 - (p\cdot q_2)^2}\,\C_1\,,
\qquad
\C_- = \frac{2(p\cdot q_2)}{(q_1\cdot q_2)^2 - (p\cdot q_2)^2}\,\C_1\,, \\
(q_1\cdot q_2)^2 - (p\cdot q_2)^2 = (q_1-p)\cdot q_2 \times
(q_1+p)\cdot q_2 = (m^2+s)(m^2+u) \,.
\end{split}
\end{gather}
The VCS amplitude has no absorptive part, since $(m^2+s)(m^2+u)$ vanishes
only when $q_2=0$ owing to the non-vanishing mass $m$. Therefore the exact
VCS amplitude with $q_2^2=0$ can be written as
\begin{align}
\label{TDVCS}
\begin{split}
T^{VCS}_{\mu\nu} &= e^2\,\C_1(m^2,q_1^2,0)\biggl(2\V_{1,\mu\nu} -
\frac{2m^2+s+u}{(m^2+s)(m^2+u)}\left(\V_{2,\mu\nu}+\V_{3,\mu\nu}\right) \\
&\qquad\qquad\qquad\qquad+
\frac{s-u}{(m^2+s)(m^2+u)}
\left(\V_{4,\mu\nu}+\V_{5,\mu\nu}\right)\biggr) \,.
\end{split}
\end{align}
The unique Compton form factor reads
\begin{align}
\label{FF1}
\C_1(m^2,q_1^2,0) &=
\int_0^{\infty} dz\,z^{-3}e^{-\chi(z)}
\left|\h{\Phi}_m(z)\right|^2\,A_1(z) \,,
\end{align}
and is nothing but the electromagnetic form factor of the spinless
target. Note that this formula holds in principle (for on-shell
amplitudes) for any $q_1^2$, spacelike or timelike, if one can make an
analytic continuation in the photon momentum.
The tensorial structure of the amplitude \eqref{TDVCS} is identical to
point-like scalar electrodynamics in the tree level approximation
except for the electromagnetic form factor which encodes all the
internal structure of a spinless particle in this kind of AdS/QCD
model. The threshold theorem imposes that
\begin{align}
\label{NORM}
\C_1(m^2,0,0) &= \int_0^{\infty} dz\,z^{-3}e^{-\chi(z)}
\left|\h{\Phi}_m(z)\right|^2 = 1 \,.
\end{align}
The normalization is completely fixed by electromagnetic gauge
invariance and the Hilbert space structure of the classical solutions
in AdS space with appropriate dilaton background.
\section{Bjorken scaling of the DVCS amplitude}
\label{GPD}
It is instructive to understand the consequences of the simplistic form
of the DVCS amplitude \eqref{TDVCS} for a would-be dual picture in
terms of partonic constituents in these kinds of AdS/QCD models.
The Bjorken scaling of the virtual Compton form factors on a
(pseudo)scalar target is usually analyzed in terms of independent
gauge-invariant tensors expressed in the momenta $q=(q_1+q_2)/2$,
$p=p_1+p_2$ and $\Delta=p_2-p_1=q_1-q_2$ with $p_1^2=p_2^2=-M^2$. The
four independent scalar invariants $s$, $u$, $q_1^2$ and $q_2^2$ are
ordinarily traded for $Q^2$, $\Delta^2$, and the scaling variables
$\xi$ and $\eta$ defined by
\begin{align}
\label{BJ}
Q^2 = q^2 \,,\qquad
\xi = -\frac{Q^2}{p\cdot q}\,,\qquad
\eta=-\frac{\Delta\cdot q}{p\cdot q} \,.
\end{align}
The large $Q^2$ expansion of the virtual Compton scattering amplitude
on any target can be described up to twist-three, and with $q_1^2$ and
$q_2^2$ arbitrary, by the tensorial structure \cite{BMKS}
\begin{align}
\label{TWIST3}
\begin{split}
T^{TW3}_{\mu\nu}(q,p,\Delta) &= -\P_{\mu\sigma}g^{\sigma\tau}\P_{\tau\nu}
\frac{q\cdot W_1}{p\cdot q}
+ \left(\P_{\mu\sigma}p^{\sigma}\P_{\rho\nu}
+ \P_{\mu\rho}p^{\sigma}\P_{\sigma\nu}\right)
\frac{W^{\rho}_2}{p\cdot q} \\
&\quad- \P_{\mu\sigma}i\epsilon^{\sigma\tau q\rho}\P_{\tau\nu}
\frac{A_{1\rho}}{p\cdot q}\,,
\end{split}
\end{align}
where current conservation is ensured by means of the projector
\begin{align}
\P_{\mu\nu} = g_{\mu\nu} - \frac{q_{2\mu}q_{1\nu}}{q_1\cdot q_2} \,,
\end{align}
and the transverse component of the momentum transfer is defined by
\begin{align}
\Delta^{\perp}_{\mu} = \Delta_{\mu} + \eta\,p_{\mu} \,.
\end{align}
The vector $W_{2\rho}$ is related to $W_{1\rho}$ and $A_{1\rho}$ by the
relation ($\epsilon_{0123}=1$),
\begin{align}
W_{2\rho} &= \xi W_{1\rho} - \frac{\xi}{2}\frac{q\cdot W_1}{p\cdot q} p_{\rho}
+ \frac{i}{2}\frac{\epsilon_{\rho\sigma\Delta q}}{p\cdot q} A_{1\sigma} \,.
\end{align}
For a spinless target, the vectors $W_{1\rho}$ and $A_{1\rho}$ are
defined in terms of three generalized form factors
$\H(\xi,\eta,\Delta^2,Q^2)$, $\H_3(\xi,\eta,\Delta^2,Q^2)$ and
$\w{\H}_3(\xi,\eta,\Delta^2,Q^2)$ by the relations
\begin{align}
\begin{split}
W_1 &= \H\,p + \H_3\,\Delta_{\perp}\,,\quad
A_{1\rho} = \frac{i\epsilon_{\rho\Delta p q}}{p\cdot q}\w{\H}_3\,.
\end{split}
\end{align}
When $q_2^2=0$, the VCS relations \eqref{DVCSR} are satisfied and the
generalized form factors $\H$'s are related to the independent form
factors $V_1$, $V_3$ and $V_4$ in \eqref{Vmunu} as follows,
\begin{align}
\begin{split}
V_1 &= -\H \,, \\
V_3 &= \frac{1}{\left(2-\frac{\eta}{\xi}\right)}\frac{1}{Q^2}\left(
\left(1-\frac{1}{2-\frac{\eta}{\xi}}\right)\H
- \xi\w{\H}_3\left(2-3\frac{\eta}{\xi}\right)\right) \,, \\
V_4 &= \frac{-\xi}{2-\frac{\eta}{\xi}}\frac{1}{Q^2}
\left(\H + 2\eta\H_3
+ \xi\w{\H}_3\left(2-\frac{\eta}{\xi}\right)^2\right) \,\\
Q^2 &= \frac{q_1^2}{2}\left(1-\frac{\Delta^2}{2q_1^2}\right) \,.
\end{split}
\end{align}
In perturbative QCD, these form factors can in principle be related,
through factorization, to generalized parton distributions (GPDs).
However, the absence of an absorptive part in the DVCS amplitude
\eqref{TDVCS} is difficult to accommodate with a partonic
interpretation which is based on the convolution of real GPDs with
coefficient functions which contain both a real and an imaginary part.
Specializing to the Bjorken limit of the DVCS amplitude \eqref{TDVCS},
\begin{align}
\Delta^2 = 0\,,\qquad \xi=\eta=\frac{x_B}{2-x_B}\,,\qquad
x_B = -\frac{q_1^2}{2p_1\cdot q_1} \,,
\end{align}
one gets for the generalized form factors
\begin{align}
\begin{split}
\H &= -2\,\C_1(m^2,2Q^2,0) \,, \\
\H_3 &= -\w{\H}_3 = \frac{\xi}{1-\xi^2}\,\H
= \frac{x_B(2-x_B)}{4(1-x_B)}\,\H\,.
\end{split}
\end{align}
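The kinematical factor relating $\H_3$ to $\H$ is a one-line identity in
the Bjorken variables and can be verified symbolically:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x_B', positive=True)
xi = x / (2 - x)
print(sp.simplify(xi/(1 - xi**2) - x*(2 - x)/(4*(1 - x))))  # 0
\end{verbatim}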
Therefore the asymptotic behavior in $Q^2$ of the DVCS cross-section
integrated over $t$ and over the azimuthal angle is governed by the
power-law behavior of the electromagnetic form factor. A power-law
behavior in accordance with the dimensional counting rules, e.g. a
scaling dimension $\Delta=2$ for the pion, cannot be consistent with a
partonic interpretation of the DVCS amplitude for spinless hadronic
targets.
\section{Polarizabilities}
\label{POL}
The structure of the VCS amplitude \eqref{TDVCS} and the threshold
theorem, eq.\,\eqref{NORM}, have a still more drastic
consequence. Real Compton scattering on a scalar target in AdS/QCD
models with minimal coupling to the photon is exactly the same as in
point-like scalar electrodynamics in the tree level approximation, a
fact which was observed in the hard-wall model of \cite{GX09}. We
elaborate on the implications for AdS/QCD in this section.
The first consequence is that the static polarizabilities of the scalar
target vanish. Polarizabilities give the corrections to Thomson
scattering which are quadratic in the energy of the photons
\cite{POL}. The amplitude for real Compton scattering off a spinless
particle like the pion,
\begin{align*}
\gamma(q_1)\,\pi(p_1)\ \longrightarrow\ \gamma(q_2)\,\pi(p_2) \,,
\end{align*}
can be expanded in powers of the energies of the photons near
threshold and reads, in the non-relativistic limit, $\omega_i^2\ll
m_{\pi}^2$, in the laboratory frame and in the Coulomb gauge,
\begin{align}
A(\gamma\pi\rightarrow\gamma\pi) = 2e^2\v{\epsilon_1}\cdot\v{\epsilon_2}
+ 8\pi m_{\pi}\,\omega_1\omega_2
\bigl(\alpha_{E}\v{\epsilon_1}\cdot\v{\epsilon_2} + \beta_{M}\,
(\v{\epsilon_1}\times \h{q}_1)\cdot(\v{\epsilon_2}\times \h{q}_2)\bigr)
+ \cdots \,
\end{align}
where $q_i=\omega_i(1,\h{q}_i)$ and $\v{\epsilon_i}$ are the momentum
and polarization vector of each photon (with
$\h{q}_i^2=\v{\epsilon_i}^2=1$). $\alpha_{E}$ and $\beta_{M}$ are the
electric and magnetic polarizabilities respectively. They measure the
linear response of a particle with an internal structure to a small
external electromagnetic perturbation.
The cancellation of the poles at $q_1^2=0$ and $q_2^2=0$ in the
Compton tensor \eqref{Vmunu} imposes the following relations between
the Compton form factors,
\begin{align}
V_1 + (q_1\cdot q_2) V_3 = -(p\cdot q_1)V_4 = -(p\cdot q_2)V_5
= \frac{(p\cdot q_1)(p\cdot q_2)}{q_1\cdot q_2}\,V_2\,.
\end{align}
The most general gauge-invariant real Compton tensor can be written in
terms of the two independent form factors $V_1$ and $V_2$,
\begin{align}
T^{\mu\nu}_{RCS} &= V_1
\left(g^{\mu\nu}-\frac{q_1^{\nu}q_2^{\mu}}{q_1\cdot q_2}\right)
+ V_2\left(p^{\mu}-\frac{p\cdot q_1}{q_1\cdot q_2}q_2^{\mu}\right)
\left(p^{\nu}-\frac{p\cdot q_2}{q_1\cdot q_2}q_1^{\nu}\right) \,.
\end{align}
Since the static polarizabilities $\alpha_E$ and $\beta_M$ are defined
in the laboratory frame, $\v{p_1}=\v{0}$, it is convenient to choose
the Coulomb gauge and impose the conditions $\epsilon_1\cdot p_1 =
\epsilon_2^*\cdot p_1 = 0$. Therefore the contracted real Compton
amplitude can be written as
\begin{align}
A_{\text{RCS}} = \epsilon_1^{\mu}\,T_{\mu\nu}\,\epsilon_2^{\star\nu}
= V_1\,\v{\epsilon_1}\cdot\v{\epsilon_2}^{\star}
+\left(V_3-V_2\right)
(\v{\epsilon_1}\cdot \v{q_2})(\v{\epsilon_2}^{\star}\cdot\v{q_1}) \,.
\end{align}
We can use the identity,
\begin{align}
(\v{\epsilon_1}\times\v{q_1})\cdot
(\v{\epsilon_2}\times\v{q_2})
&= (\v{\epsilon_1}\cdot\v{\epsilon_2})
(\v{q_1}\cdot\v{q_2}) -
(\v{\epsilon_1}\cdot\v{q_2})(\v{\epsilon_2}\cdot\v{q_1}) \,,
\end{align}
and relate the electric and magnetic polarizabilities in the lab-frame
to the Compton form factors,
\begin{align}
\label{EM}
\begin{split}
8\pi m\,\alpha_E &= \left.\pxx{}{\omega_1}{\omega_2}
\left(V_1+(V_3-V_2)\v{q_1}\cdot\v{q_2}\right)\right|_{\omega_1=\omega_2=0}\,,
\\ 8\pi m\,\beta_M &= \left.(V_2-V_3)\right|_{\omega_1=\omega_2=0} \,.
\end{split}
\end{align}
In the particular case of \eqref{TDVCS}, in the limit $q_1^2=0$, we
have $V_1 = 2$ and $V_2=V_3$. Hence $\alpha_E=\beta_M=0$ as expected
for real Compton scattering on a structureless particle.
The point-like nature of the real Compton scattering is a direct
consequence of the minimal coupling between the bulk vector field and
the bulk scalar field in AdS space together with the boundary
condition \eqref{BC}. In order to get non-vanishing polarizabilities
we need to introduce non-minimal couplings between the bulk vector
field and the bulk scalar field in anti-de Sitter space.
It is well-known that the same problem is encountered in the
calculation of the pion polarizabilities in chiral perturbation theory
($\chi$PT). At lowest order in the momentum expansion, the pion is
coupled minimally to the electromagnetic field $A_{\mu}$ and the
polarizabilities vanish. The chiral Lagrangian at tree level can only
predict the $\pi$-$\pi$ scattering lengths. Only the phenomenological
chiral couplings of order $p^4$ can produce non-zero polarizabilities. It
can be shown \cite{DH89} that the predictions at order $p^4$ in the
chiral limit for the electric and magnetic polarizabilities of the
charged pion,
\begin{align}
\alpha_E = \frac{4\alpha}{m_{\pi}F_{\pi}^2}(L_9^r+L_{10}^r)\,,\qquad
\alpha_E + \beta_M = 0 \,,
\end{align}
are generated by the four-dimensional effective Lagrangian,
\begin{gather}
-i\,L_9\,F_{\mu\nu}\tr\left(Q\,D^{\mu}U(D^{\nu}U)^{\dagger}
+ Q\,(D^{\mu}U)^{\dagger}D^{\nu}U\right)
+ L_{10}\,F_{\mu\nu}F^{\mu\nu}\,
\tr\left(Q\,U\,Q\,U^{\dagger}\right) \,, \\ \nonumber
D_{\mu}U = \p_{\mu}U + ie\,A_{\mu}\,[Q,U] \,.
\end{gather}
It is therefore very easy to write down a covariant and
gauge-invariant effective action in the 5D AdS space that can be added
to the minimal action \eqref{AdS} to generate non-vanishing
polarizabilities at the classical level for a charged (pseudo)scalar
particle, e.g.,
\begin{align}
\label{XPT}
\begin{split}
S'_{\text{AdS}}[\Phi,\Phi^*,A^m] &=
\int d^4xdz\,\sqrt{-g}e^{-\chi}\biggl(
ig_1\frac{e}{2}\,F_{mn}(D^m\Phi(D^n\Phi)^* - (D^m\Phi)^*D^n\Phi) \\
&\qquad\qquad\qquad\qquad\qquad
+ g_2\frac{e^2}{4} F_{mn}F^{mn}\Phi^*\Phi\biggr) \,.
\end{split}
\end{align}
Of course one could even go one step further in phenomenology and
introduce non-minimal couplings between bulk fields of various spin
and parity, in the spirit of the effective Lagrangian approach.
\section{Conclusion}
\label{END}
We have worked within the bottom-up approach to the AdS/QCD
correspondence and calculated the Compton amplitude with an arbitrary
dilaton background. There is a very recent study \cite{GX09}, within
the approach of \cite{PS03}, which overlaps with ours. There
are however significant differences which make the two papers
complementary. Working in a generic soft-wall model has helped us to
clarify the Lorentz-invariant and gauge-invariant structure of the
Compton amplitude predicted by AdS/QCD. Moreover the structure of the
Compton amplitude does not depend upon the infrared cutoff
parametrized by the dilaton background.
We have found that the minimal coupling of a bulk (pseudo)scalar field
to the electromagnetic current cannot reproduce the expected
low-energy behavior of the Compton amplitude off a spinless composite
charged particle, and produces too simple a structure in the DVCS
kinematical region for a partonic interpretation.
We have pointed out an obvious signature of this failure, namely the
vanishing of the electric and magnetic polarizabilities of the scalar
target. The experimental situation is rather unsatisfactory since the
extraction of the experimental values is model dependent. For instance
the most recent experimental values for the polarizabilities of the
charged pion are,
\begin{align}
\begin{split}
(\alpha_E-\beta_M)_{\pi^+} &=
(11.6\pm 1.5_{stat}\pm 3.0_{syst}\pm0.5_{mod})\times 10^{-4}\,\text{fm}^3
\quad\text{\cite{MAMI}}\,, \\
(\alpha_E = -\beta_M)_{\pi^+} &= (2.5 \pm 1.7_{stat}\pm 0.6_{syst})
\times 10^{-4}\,\text{fm}^3\quad\text{\cite{COMPASS}}\,.
\end{split}
\end{align}
Even if these values are still imprecise, the inclusion of non-minimal
couplings to the photon is certainly required to obtain a realistic
description of Compton scattering in AdS/QCD at the classical level.
For instance, we note that non-minimal couplings to vector fields
generate five independent Compton form factors (as opposed to only
three with the minimal coupling we considered in this paper), as
allowed by gauge and Lorentz invariance.
Such couplings appear naturally in chiral perturbation theory.
Besides, an algebra of currents based on chiral symmetry is the
standard framework to describe the hadronic electromagnetic
current. The AdS/QCD models we have examined incorporate neither chiral
flavor symmetry nor vector meson dominance. There are several
variants of chiral AdS/QCD models \cite{SS04,EKSS05,RP05,HS05} and it
is not the purpose of the present work to commit to one of
them. Nevertheless we have identified a bare-bones effective anti-de
Sitter action that can contribute to the polarizabilities in many
chiral models.
In any case it should now be clear that the calculation, and the
precise measurement, of the hadronic polarizabilities is a selective
testing ground for the AdS/QCD correspondence.
\subsection*{Acknowledgements}
We wish especially to thank B.~Pire for inspiring discussions and very
useful comments about the manuscript. We also wish to thank
V.~Bernard, J.P.~Lansberg, B.~Moussallam, T.N.~Pham, U.~Reinosa,
L.~Szymanowski and B.~Xiao for very interesting discussions. C.M. is
supported by the European Commission under the FP6 program, contract
No. MOIF-CT-2006-039860. This work is partly supported by the
ANR-06-JCJC-0084.
\subsection*{Appendix:\quad Explicit formulas in the hard-wall model}
The hard-wall model is defined by the absence of a dilaton background,
$\chi(z)=0$, and by imposing a Dirichlet boundary condition on the
massive fields at some finite cutoff in $z$. Then the plane-wave
solution for a massive scalar field reads (in the timelike region),
\begin{align}
\Phi(x,z) &= C_{\Delta-1}(m)\,e^{ip\cdot x}z^2\,
J_{\Delta-2}\left(m\,z\right)
\,,\quad \Delta=2\pm\sqrt{m_S^2+4} \geq 1\,,\quad p^2=-m^2<0 \,,
\end{align}
(this is not the most general admissible solution for $1\leq\Delta\leq
3$). The normalization constants,
\begin{align}
\label{normalization}
C_{\Delta-1}(m) = \sqrt{2}\,\Lambda\,
J_{\Delta-1}^{-1}\left(\frac{m}{\Lambda}\right)\,,
\end{align}
are defined by requiring,
\begin{align}
\int_0^{1/\Lambda}dz\,z^{-3}|\Phi(x,z)|^2 &= 1\,,
\quad J_{\Delta-2}\left(\frac{m}{\Lambda}\right) = 0 \,.
\end{align}
The scalar propagator reads,
\begin{gather}
\h{G}(z_1,z_2,-m^2) = \sum_n C^2_{\Delta-1}(m_n)
\frac{z_1^2J_{\Delta-2}(m_nz_1)\,z_2^2J_{\Delta-2}(m_nz_2)}
{m^2-m_n^2+i\epsilon}\,,\\
m_n = \zeta_{\Delta-2,n}\Lambda \,,
\end{gather}
where $\zeta_{\nu,n}$ are the zeroes of the Bessel function
$J_{\nu}(z)$. When $\Lambda\rightarrow 0$, the scalar propagator
becomes,
\begin{align}
\h{G}(z_1,z_2,-m^2) \underset{\Lambda\rightarrow 0}{\approx}
(z_1z_2)^2\int_0^{\infty}d\mu\,\mu\,
\frac{J_{\Delta-2}(\mu z_1)\,J_{\Delta-2}(\mu z_2)}
{m^2-\mu^2+i\epsilon} + \O(\Lambda) \,.
\end{align}
\vskip 0.2cm
Plugging the explicit wave-functions into \eqref{FF1} one
gets for the scalar DVCS form factor,
\begin{align}
\C_1(m^2,Q^2,0) = C^2_{\Delta-1}(m)Q\int_0^{1/\Lambda} dz\, z^2\,
J^2_{\Delta-2}(mz)\,K_1(Qz) \,.
\end{align}
As long as $\Delta>1$ and $Q\gg\Lambda$, we can set $\Lambda=0$ in the
integration domain and use the integral formula,
\begin{align}
\C_1(m^2,Q^2,0) &\simeq 2(\Delta-1)\frac{C^2_{\Delta-1}(m)}{m^2}\times
\left(\frac{m^2}{Q^2}\right)^{\Delta-1}\times
\frac{(1-w)^{2\Delta}}{(1-w^2)^2}
\left(1 + \frac{1}{\Delta-1}\frac{2w^2}{1-w^2}\right) \,,
\end{align}
where $w$ is defined by
\begin{align}
w = 1+\frac{Q^2}{2m^2} -
\sqrt{\left(1+\frac{Q^2}{2m^2}\right)^2-1}\,.
\end{align}
Noting that
\begin{gather}
w \underset{Q^2\rightarrow\infty}{\approx}
\frac{m^2}{Q^2}\left(1+\O\left(\frac{m^2}{Q^2}\right)\right)
\longrightarrow 0\,,
\end{gather}
we obtain the leading large $Q^2$ behavior of the DVCS form factor
found in \cite{GX09},
\begin{align}
\C_1(m^2,Q^2,0) &= 2(\Delta-1)\frac{C^2_{\Delta-1}(m)}{m^2}\times
\left(\frac{m^2}{Q^2}\right)^{\Delta-1} \,.
\end{align}
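This power law is straightforward to check numerically; a minimal sketch
(hypothetical choices $\Delta=2$ and units where $m=1$), evaluating the
$\Lambda=0$ integral for $\C_1$:
\begin{verbatim}
import numpy as np
from scipy.special import jv, kv
from scipy.integrate import quad

Delta, m = 2.0, 1.0     # hypothetical choices (units of m)

def C1_over_C2(Q):      # C_1(m^2,Q^2,0)/C^2_{Delta-1}(m) at Lambda -> 0
    f = lambda z: z**2 * jv(Delta - 2, m*z)**2 * kv(1, Q*z)
    val, _ = quad(f, 1e-8, 60.0/Q, limit=400)
    return Q * val

for Q in (5.0, 10.0, 20.0, 40.0):
    print(Q, C1_over_C2(Q) * (Q**2/m**2)**(Delta - 1))  # -> constant
\end{verbatim}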
The subject of hyperon physics is a vast one, as indicated by the fact that
this workshop will run for three days, with presentations involving a
range of different issues. Obviously it would be impossible for me to
cover all of the interesting features in this introductory presentation.
Instead, I will present a very personal picture of some of the issues in
hyperon physics which {\it I} think need to be answered, and will trust the
various speakers to fill in areas which I have omitted.
\section{Hyperon Processes}
I have divided my presentation into sections, which cover the various arenas
which I think need attention:
\subsection{Nonleptonic Hyperon Decay}
The dominant decay mode of the ${1\over 2}^+$ hyperons is, of course,
the pionic decay $B\rightarrow B'\pi$. On the theoretical side there
remain two interesting and important issues which have been with us since the
1960's---the origin of the $\Delta I=1/2$ rule and the S/P-wave
problem\cite{dghbk}:
\begin{itemize}
\item [i)] The
former is the feature that $\Delta I=3/2$ amplitudes are suppressed with
respect to their $\Delta I=1/2$ counterparts by factors of the order of
twenty or so. This suppression exists in both hyperon as well as kaon
nonleptonic decay and, despite a great deal of theoretical work, there
is still no simple explanation for its existence. The lowest order
weak nonleptonic $\Delta S=1$ Hamiltonian possesses comparable $\Delta
I=1/2$ and $\Delta I=3/2$ components and leading log gluonic effects
can bring about a $\Delta I=1/2$ enhancement of a factor of three to
four or so\cite{llog}. The remaining factor of five seems to arise
from the validity of what is called the Pati-Woo theorem in the baryon
sector\cite{pw} while for kaons it appears to be associated with detailed
dynamical structure\cite{ddyn}. Interestingly, the
one piece of possible evidence for its violation comes from a hyperon
reaction---hypernuclear decay\cite{hnuc}. A hypernucleus is produced when a
neutron in an atomic nucleus is replaced by a $\Lambda$. In this case
the usual pionic decay mode is Pauli suppressed, and the
hypernucleus primarily
decays via the non-mesonic processes $\Lambda p\rightarrow np$ and
$\Lambda n\rightarrow nn$. There does exist a rather preliminary
indication here of a possibly significant $\Delta I=1/2$ rule
violation, but this has no Fermilab relevance and will have to be
settled at other laboratories\cite{dub}.
\item [ii)] The latter problem is not as well known but has been a
longstanding difficulty for those of us theorists who try to calculate these
things. Writing the general decay amplitude as
\begin{equation}
{\rm Amp}=\bar{u}(p')(A+B\gamma_5)u(p)\,,
\end{equation}
the standard approach to such decays, which goes back to current algebra
days, expresses the S-wave (parity-violating) amplitude---$A$---as
a contact term---the baryon-baryon matrix element of the
axial-charge-weak Hamiltonian commutator.
The corresponding P-wave (parity-conserving)
amplitude---$B$---uses a simple pole model ({\it cf.} Figure 1)
with the weak baryon-to-baryon
matrix element given by a fit to the S-wave sector. Parity violating
$BB'$ matrix elements are neglected in accord with the Lee-Swift
theorem\cite{ls}.
\begin{figure}
\centerline{\psfig{figure=polefiga.eps,width=7.0cm}}
\caption{Pole diagrams used to calculated parity-conserving nonleptonic
hyperon decay.}
\end{figure}
With this procedure
one can obtain a good S-wave fit but finds P-wave amplitudes which are
in very poor agreement with experiment. On the other hand, one can fit
the P-waves, in which case the S-wave predictions are very bad\cite{dghrv}.
Clearly
the solution requires the input of additional physics, such as inclusion
of $(70,1^-)$ intermediate states as done in an SU(6) calculation
by Le Yaouanc et al.\cite{you} or of intermediate ${1\over 2}^-$
and ${1\over 2}^+$
resonant states by Borasoy and myself in a chiral picture\cite{bh1}.
\end{itemize}
\noindent In either case, we do {\it not} require more and better data.
The issues are already clear. What we need is more and better theory!
Where we {\it do} need data is in testing the standard model
prediction of CP violation, which predicts the presence of various asymmetries
in the comparison of hyperon and antihyperon nonleptonic decays\cite{dcp}.
The basic
idea is that one can write the decay amplitudes in the form
\begin{equation}
A=|A|\exp i(\delta_S+\phi_S),\quad B=|B|\exp i(\delta_P+\phi_P)
\end{equation}
where $\delta_S,\delta_P$ are the strong S,P-wave phase shifts at the
decay energy of the mode being considered and $\phi_S,\phi_P$ are CP-violating
phases which are expected to be of order $10^{-4}$ or so in standard model
CP-violation. One can detect such phases by comparing hyperon and antihyperon
decay parameters. Unfortunately nature is somewhat perverse here in that
the larger the size of the expected effect, the more difficult the
experiment. For example, the asymmetry in the overall decay rate, which is
the easiest to measure, has the form\footnote{Here $B^r$ indicates a reduced
amplitude---$B^r= B(E'-M_{B'})/(E'+M_{B'})$.}
\begin{eqnarray}
C&=&{\Gamma-\bar{\Gamma}\over \Gamma+\bar{\Gamma}}\nonumber\\
&\sim&
-2\left(A_1A_3\sin(\delta_S^1
-\delta_S^3)\sin(\phi_S^1-\phi_S^3)\right.\nonumber\\
&+&\left.B_1^rB_3^r\sin(\delta_P^1-\delta_P^3)\sin(
\phi_P^1-\phi_P^3)\right)\nonumber\\
&/& \left(|A_1|^2+|B_1^r|^2\right)
\end{eqnarray}
where the subscripts, superscripts 1,3
indicate the $\Delta I={1\over 2},{3\over 2}$
component of the amplitude. We see then that there is indeed sensitivity to
the CP-violating phases but that it is multiplicatively
suppressed {\it both} by the
strong interaction phases ($\delta\sim 0.1$) and by the $\Delta I=
{3\over 2}$ suppression $A_3/A_1\sim B_3/B_1\sim 1/20$. Thus we
find $C\sim\phi/100\sim 10^{-6}$
which is much too small to expect to measure in present generation experiments.
More sanguine, but still not optimal, is a comparison of the asymmetry
parameters $\alpha$, defined via
\begin{equation}
W(\theta)\sim1+\alpha\vec{P}_B\cdot\hat{p}_{B'}
\end{equation}
In this case, one finds
\begin{eqnarray}
A&=&{\alpha+\bar{\alpha}\over \alpha-\bar{\alpha}}=-\sin(\phi_S^1-\phi_P^1)
\sin(\delta_S^1-\delta_P^1)\nonumber\\
&\sim& 0.1\phi\sim 4\times 10^{-4}
\end{eqnarray}
which is still extremely challenging.
Finally, the largest signal can be found in the combination
\begin{equation}
B={\beta+\bar{\beta}\over \beta-\bar{\beta}}=\cot(\delta_S^1
-\delta_P^1)\sin(\phi_S^1-\phi_P^1)\sim\phi
\end{equation}
Here, however, the parameter $\beta$ is defined via the general expression
for the final state baryon polarization
\begin{eqnarray}
<\vec{P}_{B'}>&=&{1\over W(\theta)}\left((\alpha+\vec{P}_B\cdot\hat{p}_{B'})
\hat{p}_{B'}\right.\nonumber\\
&+&\left.\beta\vec{P}_B\times\hat{p}_{B'}
+\gamma(\hat{p}_{B'}
\times(\vec{P}_B\times\hat{p}_{B'}))\right)\nonumber\\
\quad
\end{eqnarray}
and, although the size of the effect is largest---$B\sim 10^{-3}$---this
measurement seems out of the question experimentally.
Despite the small size of these effects, the connection with standard model
CP violation and the possibility of finding larger effects due to new
physics demand a no-holds-barred effort to measure these parameters.
\subsection{Nonleptonic Radiative Decay}
Another longstanding thorn in the side of theorists attempting to understand
weak decays of hyperons is the nonleptonic radiative mode $B\rightarrow
B'\gamma$\cite{zrev}. In this case one can write
the most general decay amplitude
as
\begin{eqnarray}
{\rm Amp}&=&{e\over M_B+M_{B'}}\epsilon^{*\mu}q^\nu\nonumber\\
&\times&\bar{u}(p')\left(-i\sigma_{\mu\nu}C
-i\sigma_{\mu\nu}\gamma_5D\right)u(p)
\end{eqnarray}
where $C$ is the magnetic dipole (parity conserving) amplitude and $D$ is its
(parity violating) electric dipole counterpart. There are two
quantities of interest in the
analysis of such decays---the decay rate and photon asymmetry, which go as
\begin{equation}
\Gamma\sim|C|^2+|D|^2,\quad A_\gamma={2{\rm Re}C^*D\over |C|^2+|D|^2}
\end{equation}
The difficulty here is associated with ``Hara's Theorem'' which requires that
in the SU(3) limit the parity violating decay amplitude must vanish for decay
between states of a common U-spin multiplet---{\it i.e.} $\Sigma^+\rightarrow
p\gamma$ and $\Xi^-\rightarrow \Sigma^-\gamma$\cite{hara}. (The proof
here is very much analogous to the
one which requires the vanishing of the axial tensor form factor in
nuclear beta decay between members of a common isotopic spin
multiplet\cite{ht}.)
Since one does not expect significant
SU(3) breaking effects, we anticipate
a relatively small photon asymmetry parameter for such decays. However,
in the case of $\Sigma^+\rightarrow p\gamma$ the asymmetry is known to be
large and negative\cite{pdg}
\begin{equation}
A_\gamma(\Sigma^+\rightarrow p\gamma)=-0.76\pm 0.08\label{eq:aa}
\end{equation}
and for thirty years theorists have been struggling to explain this result.
In leading order the amplitude is given by the simple pole diagrams, with
the weak baryon-baryon matrix elements being those determined in the
nonradiative decay analysis.
\begin{figure}
\centerline{\psfig{figure=polefigb.eps,width=7cm}}
\caption{Pole diagrams used to calculate radiative hyperon decay.}
\end{figure}
The Lee-Swift theorem asserts that such matrix
elements must be purely parity conserving in the SU(3) limit and this is
the origin of Hara's theorem in such a model\cite{ls}. Although SU(3) breaking
corrections have been calculated, none is large enough to explain
the experimental result---Eq. \ref{eq:aa}\cite{gh}.
As in the case of the S/P-wave
puzzle, what is clearly required is the inclusion of additional physics
and here too the inclusion of $(70,1^-)$ states by Le Yaouanc et al.\cite{you}
or of
${1\over 2}^-$ and ${1\over 2}^+$ resonant states in a chiral framework by
Borasoy and myself\cite{bh2}
appears to naturally predict a large negative asymmetry.
However, in order to confirm the validity of these or any other model, what
will be required is a set of measurements of {\it both} rates and
asymmetries for
such decays. In this regard, it should be noted that theoretically one
expects all asymmetries to be negative in any realistic
model\cite{pz}. It would be very difficult
to accomodate a large positive asymmetry. Thus the present particle data
group listing\cite{pdg}
\begin{equation}
A_\gamma(\Xi^0\rightarrow\Lambda\gamma)=+0.43\pm0.44
\end{equation}
deserves to be carefully remeasured.
\subsection{Hyperon Beta Decay}
A mode that theory does well in predicting (in fact some would say
{\it too} well) is that of hyperon beta decay---$B\rightarrow B'\ell\nu_\ell$,
where $\ell$ is either an electron or a muon. Since this is a semileptonic
weak interaction, the decays are described in general by matrix elements of
the weak vector, axial-vector currents
\begin{eqnarray}
<B'|V_\mu|B>&=&\bar{u}(p')(f_1\gamma_\mu+{-if_2\over M_B+M_{B'}}
\sigma_{\mu\nu}q^\nu\nonumber\\
&+&{f_3\over M_B+M_{B'}}q_\mu)u(p)\nonumber\\
<B'|A_\mu|B>&=&\bar{u}(p')(g_1\gamma_\mu+{-ig_2\over M_B+M_{B'}}
\sigma_{\mu\nu}q^\nu\nonumber\\
&+&{g_3\over M_B+M_{B'}}q_\mu )\gamma_5u(p)\nonumber\\
\quad
\end{eqnarray}
Here the dominant terms are the vector, axial couplings $f_1,g_1$ and the
standard approach is simple Cabibbo theory, wherein one fits the $g_1$
in terms of SU(3) F,D coefficients and $f_1$ using
CVC and simple F coupling. When this is done, one finds in general a
very satisfactory fit---$\chi^2/d.o.f\sim 2.0$---which can be made even better
by inclusion of simple quark model SU(3) breaking effects---$\chi^2/d.o.f.
\sim 0.85$\cite{dhk}. An output of such a fit is the value of the KM mixing parameter
$V_{us}=0.220\pm 0.003$, which is in good agreement with the value
$V_{us}=0.2196\pm 0.0023$ measured in $K_{e3}$ decay.
However, differing assumptions about SU(3)
breaking will lead to slightly modified values.
The importance of such a measurement of $V_{us}$ has to do with its use as
an input to a test of the standard model via the unitarity prediction
\begin{equation}
|V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2=1
\end{equation}
From an analysis of B-decay one obtains $|V_{ub}|\sim 0.003$, which when
squared leads to a negligible contribution to the unitarity sum. So the
dominant effect comes from $V_{ud}$, which is measured via $0^+-0^+$
superallowed nuclear beta decay---
\begin{equation}
V_{ud}^2={2\pi^3(\ln 2)\,m_e^{-5}\over 2G_F^2(1+\Delta_R^V)\bar{\cal F}t}
\end{equation}
Here $\Delta_R^V=2.40\pm 0.08\%$ is the radiative correction and
$\bar{\cal F}t= 3072.3\pm 0.9$ sec. is the mean (modified) ft-value
for such decays. Of course, there exist important issues in the analysis
of such ft-values including the importance of isotopic spin breaking
effects and of possible Z-dependence omitted from the radiative corrections,
but if one takes the above-quoted number as being correct we obtain\cite{htow}
\begin{equation}
\begin{array}{c}
V_{ud}=0.9740\pm0.0005\\
{\rm and}\\
|V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2=
0.9968\pm 0.0014
\end{array}
\end{equation}
which indicates a possible violation of unitarity. If correct, this would
suggest the existence of non-standard-model physics, but clearly additional
work, both theoretical and experimental, is needed before drawing this
conclusion.
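The arithmetic behind this statement is elementary; a quick check with the
quoted inputs, treating the errors as uncorrelated:
\begin{verbatim}
import numpy as np

Vud, dVud = 0.9740, 0.0005
Vus, dVus = 0.2196, 0.0023     # K_e3 value quoted above
Vub = 0.003                    # contributes ~1e-5, negligible

S = Vud**2 + Vus**2 + Vub**2
dS = np.hypot(2*Vud*dVud, 2*Vus*dVus)
print(f"{S:.4f} +/- {dS:.4f}") # ~ 0.997 +/- 0.0014, below unity
\end{verbatim}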
What is needed in the case of hyperon beta decay is a good set of data including
rates {\it and} asymmetries, both in order to produce a possibly improved
value of $V_{us}$ and also to study the interesting issue of SU(3) breaking
effects, which {\it must} be present, but whose effects seem somehow
to be hidden. A related focus of such studies should be the
examination of higher
order---recoil---form factors such as weak magnetism ($f_2$) and the axial
tensor ($g_2$). In the latter case, Weinberg showed that in the standard
quark model $G=C\exp (i\pi I_2)$-invariance requires $g_2=0$ in neutron
beta decay $n\rightarrow pe^-\bar{\nu}_e$\cite{scc}.
(This result usually is called
the stricture arising from ``no second class currents.'') In the SU(3)
limit one can use V-spin invariance to show that $g_2=0$ also obtains for
$\Delta S=1$ hyperon beta decay, but in the real world this condition
will be violated. A simple quark model calculation suggests that
$g_2/g_1\sim -0.2$\cite{dh2} but other calculations, such as
a recent QCD sum rule
estimate, give a larger number---$g_2/g_1\sim -0.5$.
In any case good hyperon beta decay data---with rates and
asymmetries---will be needed in order to extract the size of such effects.
\subsection{Hyperon Polarizabilities}
Since this subject is not familiar to many physicists, let me spend just
few moments giving a bit of motivation. The idea goes back to simple classical
physics. Parity and time reversal invariance, of course, forbid the existence
of a permanent electric dipole moment for an elementary system.
However, consider the application of a
uniform electric field to a such a system.
Then the positive charges will move in one
direction and negative charges in the other---{\it i.e.} there will be a
charge separation and an electric dipole moment will be induced. The size of
the edm will be proportional to the applied field and
the constant of proportionality between the applied field and the induced
dipole moment is the electric polarizability $\alpha_E$
\begin{equation}
\vec{p}=4\pi\alpha_E\vec{E}
\end{equation}
The interaction of this dipole moment with the field leads to an
interaction energy
\begin{equation}
U=-{1\over 2}\vec{p}\cdot\vec{E}=-{1\over 2}4\pi\alpha_E\vec{E}^2,
\end{equation}
where the ``extra'' factor of $1\over 2$ compared to the elementary physics
result is due to the feature that the dipole moment is {\it induced}.
Similarly in the presence of an applied magnetizing field $\vec{H}$ there will
be generated an induced magnetic dipole moment
\begin{equation}
\vec{\mu}=4\pi\beta_M\vec{H}
\end{equation}
with interaction energy
\begin{equation}
U=-{1\over 2}\vec{\mu}\cdot\vec{H}=-{1\over 2}4\pi\beta_M\vec{H}^2.
\end{equation}
For wavelengths large compared to the size of the system, the effective
Hamiltonian describing the interaction of a system of charge $e$ and
mass $m$ with an electromagnetic
field is, of course, given by
\begin{equation}
H^{(0)}={(\vec{p}-e\vec{A})^2\over 2m}+e\phi,
\end{equation}
and the Compton scattering cross section has the simple Thomson form
\begin{equation}
{d\sigma\over d\Omega}=\left({\alpha_{em}\over m}\right)^2\left({\omega'\over
\omega}\right)^2[{1\over 2}(1+\cos^2\theta)],
\end{equation}
where $\alpha_{em}$ is the fine structure constant and $\omega,\omega'$
are the initial, final photon energies respectively.
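For orientation, the corresponding total Thomson cross section for the
proton is tiny; a one-line numerical evaluation (using
$(\hbar c)^2 = 0.3894$ mb GeV$^2$):
\begin{verbatim}
import numpy as np

hbarc2 = 0.3894e6                  # (hbar c)^2 in nb GeV^2
alpha_em, m = 1/137.036, 0.93827   # proton mass in GeV
sigma_T = (8*np.pi/3) * (alpha_em/m)**2 * hbarc2
print(sigma_T)                     # ~ 197 nb
\end{verbatim}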
As the energy increases, however, so does the resolution and
one must also take into account polarizability
effects, whereby the effective Hamiltonian becomes
\begin{equation}
H_{\rm eff}=H^{(0)}-{1\over 2}4\pi(\alpha_E\vec{E}^2+\beta_M\vec{H}^2).
\end{equation}
The Compton scattering cross section from such a system (taken, for simplicity,
to be spinless) is then
\begin{eqnarray}
{d\sigma\over d\Omega}&=&\left({\alpha_{em}\over m}\right)^2\left({\omega'\over
\omega}\right)^2\left({1\over 2}
(1+\cos^2\theta)\right.\nonumber\\
&-&\left.{m\omega\omega'\over \alpha_{em}}[{1\over
2}(\alpha_E+\beta_M)(1+\cos\theta)^2\right.\nonumber\\
&+&\left.{1\over 2}(\alpha_E-\beta_M)
(1-\cos\theta)^2]\right)\label{eq:sss}
\end{eqnarray}
It is clear from Eq. \ref{eq:sss}
that from careful measurement of the differential scattering cross section,
extraction of these structure dependent polarizability terms is possible
provided
\begin{itemize}
\item [i)] that the energy is large enough that these terms are
significant compared to the
leading Thomson piece and
\item [ii)] that the energy is not so large that higher order
corrections become important.
\end{itemize}
In this fashion the measurement of electric and
magnetic polarizabilities for the proton has recently been accomplished
at SAL and at MAMI using
photons in
the energy range 50 MeV $<\omega <$ 100 MeV, yielding\cite{PPol}
\footnote{Results for the neutron extracted from $n-Pb$ scattering cross
section
measurements have been reported\cite{npol},
but have been questioned\cite{ques}.
Extraction via studies using a deuterium target may be possible
in the future\cite{bean}.}
\begin{eqnarray}
\alpha_E^p&=&(12.1\pm 0.8\pm 0.5)\times 10^{-4}\; {\rm fm}^3\nonumber\\
\beta_M^p&=&(2.1\mp 0.8\mp 0.5)\times 10^{-4}\; {\rm fm}^3. \label{abexp}
\end{eqnarray}
Note that in
practice one generally exploits the strictures of causality and unitarity as
manifested
in the validity of the forward scattering dispersion relation, which yields the
Baldin sum rule\cite{bgm}
\begin{eqnarray}
\alpha_E^{p,n}&+&\beta_M^{p,n}={1\over 2\pi^2}\int_0^\infty{d\omega\over
\omega^2}\sigma_{\rm tot}^{p,n}\nonumber\\
&=&\left\{
\begin{array}{ll}(13.69\pm 0.14)\times 10^{-4}{\rm fm}^3& {\rm proton}\\
(14.40\pm 0.66)\times 10^{-4}{\rm fm}^3& {\rm neutron}
\end{array}\right.\nonumber\\
\quad
\end{eqnarray}
as a rather precise constraint because of the small uncertainty associated
with the photoabsorption cross section $\sigma_{\rm tot}^p$.
As to the meaning of such results we can compare with the corresponding
calculation of the electric polarizability of the hydrogen atom,
which yields\cite{merz}
\begin{equation}
\alpha_E^H={9\over 2}a_0^3\quad{\rm vs.}\quad\alpha_E^p\sim
10^{-3}<r_p^2>^{3\over 2}
\end{equation}
where $a_0$ is the Bohr radius. Thus the polarizability of the hydrogen
atom is of order the atomic volume while that of the proton is
only a thousandth of its volume, indicating that the proton is
much more strongly bound.
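In numbers, with the Bohr radius $a_0\approx 0.529\,$\AA\ and a
representative proton charge radius $r_p\approx 0.84$ fm, the contrast is
striking:
\begin{verbatim}
a0 = 0.529e5           # Bohr radius in fm
rp = 0.84              # proton charge radius in fm (representative)
alpha_H = 4.5 * a0**3  # hydrogen: (9/2) a_0^3
alpha_p = 12.1e-4      # proton electric polarizability, fm^3

print(alpha_H / a0**3, alpha_p / rp**3)   # ~4.5 vs ~2e-3
\end{verbatim}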
The relevance to our workshop is that the
polarizability of a hyperon can also be measured using Compton
scattering, via the reaction $B+Z\rightarrow B+Z+\gamma$ extrapolated to
the photon pole---{\it i.e.} the Primakoff effect. Of course, this is
only feasible for charged hyperons---$\Sigma^\pm,\Xi^-$---and the
size of such polarizabilities predicted
theoretically is somewhat smaller than that of the proton\cite{meis}
\begin{equation}
\alpha_E^{\Sigma^+}\sim9.4\times 10^{-4}\,{\rm fm}^3,\qquad
\alpha_E^{\Xi^-}\sim2.1\times 10^{-4}\,{\rm fm}^3
\end{equation}
but their measurement would be of great interest.
\subsection{Polarization and Hyperon Production}
My final topic will be that of polarization in strong interaction
production of hyperons, a field that began here at FNAL in 1976 with
the discovery of $\Lambda$ polarization in the reaction\cite{fnal}
\begin{equation}
p(300\,\,{\rm GeV})+Be\rightarrow\vec{\Lambda}+X
\end{equation}
This process has been well studied in the intervening years\cite{hel} and we
now know that in the fragmentation region the polarization is large and
negative---$\vec{P}
\cdot\hat{p}_{inc}\times\hat{p}_\Lambda<0$---and that it satisfies
scaling, {\it i.e.} is a function only of $x_F={p^\parallel_\Lambda\over
p_p},p^\perp_\Lambda$ and not of the center of mass energy. Various
theoretical approaches have been applied in order to try to understand
this phenomenon---{\it e.g.}, Soffer and T\"{o}rnqvist have developed a
Reggeized pion exchange picture\cite{st}, while DeGrand, Markkanen,
and Miettinen
have used a quark-parton approach wherein the origin of the
polarization is related to
the Thomas precession\cite{dm}---but none can be said
to be definitive. One thing which seems to be clear is that there exists
a close connection between the large negative polarizations seen in
inclusive hyperon production and the large positive analyzing powers observed
at FNAL in inclusive meson production with polarized protons\cite{anp}
\begin{equation}
\vec{p}+p\rightarrow \pi^++X
\end{equation}
Another input to the puzzle may be the availability in the lower
energy region of new exclusive
data from Saturne involving\cite{sat}
\begin{equation}
\vec{p}+p\rightarrow p+\vec{\Lambda}+K^+
\end{equation}
which seems best described in terms of a kaon exchange mechanism.
Clearly there is much more to do in this field.
\section{Summary}
I conclude by noting that, although the first hyperon was discovered
more than half a century ago and much work has been done since,
the study of hyperons remains an interesting and challenging
field. As I have tried to indicate above, many questions exist
as to their strong, weak, and electromagnetic interaction properties,
and I suspect that these particles will remain choice targets for particle
hunters well into the next century.
\section{Introduction}
\label{sec:intro}
The modern S-matrix program has transformed our understanding of quantum field theory (QFT). On-shell methods have driven progress in scattering amplitudes by simplifying computations and revealing new physics. Instead of working with individual Feynman diagrams, which can be numerous and gauge-dependent, one works in terms of on-shell building blocks. Constraints from locality, causality, and unitarity then help determine the full amplitude. In particular, unitarity methods allow us to calculate loop-level amplitudes from lower-loop on-shell quantities \cite{Bern:1994zx,Bern:1994cg}.\footnote{We refer the reader to \cite{Bern:2011qt,Elvang:2013cua,Dixon:2013uaa, Henn:2014yza,Arkani-Hamed:2016byb,Cheung:2017pzi} for references and reviews.} Unitarity is a powerful tool for exploring loop-level structure in the S-matrix that is not manifest at the level of the action, including Yangian symmetry \cite{ArkaniHamed:2010kv}, color-kinematic duality, and the double-copy property of gravity theories \cite{Bern:2008qj,Bern:2010ue}. The arsenal of modern amplitude methods brings difficult computations into reach, opening paths to novel ultraviolet physics \cite{Bern:2018jmv}.
By contrast, we understand far less about perturbation theory in the absence of an S-matrix. For example, how do structures uncovered in flat space amplitudes generalize to theories in Anti-de Sitter (AdS) space? While the S-matrix can be recovered from the flat space limit of AdS/CFT \cite{Susskind:1998vk,Polchinski:1999ry,Heemskerk:2009pn,Gary:2009ae,Penedones:2010ue, Raju:2012zr}, one cannot define an S-matrix in global AdS itself as an overlap of in- and out-states \cite{Balasubramanian:1999ri}. Instead, the asymptotic observables are the correlation functions of the boundary CFT. Like flat-space amplitudes, boundary correlators obey notions of locality, causality, and unitarity, and so it is natural to expect that S-matrix technology may be applicable to AdS/CFT. Furthermore, the program of studying AdS/CFT correlators using an on-shell approach has made remarkable progress, for example by leveraging the constraints of crossing symmetry \cite{Heemskerk:2009pn} and writing correlators in Mellin space \cite{Mack:2009mi,Penedones:2010ue,Rastelli:2016nze}. However, AdS analogues of basic S-matrix ideas remain unknown.
In this paper, we study the following question: what are the on-shell building blocks of $1/N$ perturbation theory in AdS? Our starting point will be the Cutkosky rules, i.e. that the discontinuity of a Feynman diagram can be calculated by cutting internal lines \cite{Cutkosky:1960sp,Eden:1966dnq}. These cuts place lines on shell by replacing the time-ordered propagator with the corresponding Wightman, or on-shell, propagator. The cutting rules underpin the optical theorem for the S-matrix, ensuring that the discontinuity of an amplitude factorizes into products of on-shell sub-amplitudes. As we review\footnote{For modern work on the flat-space cutting rules see \cite{Abreu_2014,Abreu_2017,Bourjaily:2020wvq}.} and show in explicit examples, these and more general cutting rules follow from basic Lorentzian properties of QFT correlators \cite{Veltman:1963th} and persist in curved space.
The cutting rules we explore are directly related to the Lorentzian inversion formula \cite{Caron-Huot:2017vep}, a centerpiece of modern CFT unitarity methods. The inversion formula is a CFT generalization of the Froissart-Gribov formula for the S-matrix \cite{Gribov:1961fr,Froissart:1961ux} and has generated recent progress in the study of higher-dimensional CFTs. For example, the inversion formula proves the existence of Regge trajectories and leads to a dispersion formula for CFT four-point functions \cite{Carmi:2019cub}. In the CFT dispersion formula, the four-point function is reconstructed from its much simpler \textit{double-commutator}.\footnote{As with the dispersion formula for scattering amplitudes in QFT, there are possible polynomial ambiguities affecting operators of bounded spin.} In the context of AdS/CFT, the double-commutator reduces the loop order. That is, the double-commutator of an $L$-loop one-particle irreducible Witten diagram can be computed in terms of $(L-1)$-loop data. The CFT dispersion formula then provides a way to bootstrap loop-level physics purely from tree-level data. The double-commutator then plays the same role in the CFT dispersion formula that the discontinuity of the amplitude plays in the S-matrix dispersion formula \cite{Eden:1966dnq}.
Previously, it was unclear how the double-commutator could be computed via Cutkosky rules in CFTs, at either weak or strong coupling. The goal of this work is to derive these rules for both classes of CFTs. To compute the double-commutator, we will classify the corresponding \textit{unitarity cuts} of Feynman diagrams in weakly-coupled CFTs and of Witten diagrams in the AdS dual of holographic CFTs. In the holographic case, the set of allowed cuts agrees with the previous bulk analysis \cite{Meltzer:2019nbs}. We also show that, for certain kinematics, the CFT optical theorem computes the double-commutator appearing in the inversion formula.
The relationships between the cutting rules, the CFT optical theorem, and the inversion formula are all elementary properties of CFTs and therefore extend the S-matrix unitarity method to a wider class of theories.
To derive the cutting rules, we will follow a somewhat historical route and import Veltman's derivation of the flat-space cutting rules, via the largest-time equation \cite{Veltman:1963th}, to AdS. This approach\footnote{An alternative strategy would be to derive the Feynman rules for the CFT double-commutator directly using time-folded contours i.e. the Schwinger-Keldysh formalism \cite{Schwinger:1960qe,Keldysh:1964ud,Chou:1984es,Stanford:2015owe,Haehl:2016pec,Haehl:2017qfl,Murugan:2017eto}.} gives a simple derivation of the cutting rules and makes manifest that AdS unitarity methods are a direct generalization of the standard flat space methods. In practice, our approach amounts to replacing the double-commutator with a simpler out-of-time-ordered correlator.
While conformal field theories are typically studied in position space, one theme of this work is that Lorentzian momentum space is convenient for studying unitarity. For instance, the derivation of the allowed cuts takes a simple form in momentum space. A cut diagram has a natural interpretation in momentum space as well: it is the gluing of two sub-diagrams via a phase space integral, which is equivalent to summing over physical exchanged states.
The study of AdS/CFT correlators in momentum space is also motivated by their relation to cosmological observables, see e.g.
\cite{Maldacena:2002vr,Maldacena:2011nz,Mata:2012bx,Kundu:2014gxa,Ghosh:2014kba,Arkani-Hamed:2015bza,Kundu:2015xta,Sleight:2019mgd,Sleight:2019hfp,Sleight:2020obc,Arkani-Hamed:2017fdk,Arkani-Hamed:2018bjr,Benincasa:2018ssx,Benincasa:2019vqr,Arkani-Hamed:2018kmz,Baumann:2019oyu,Baumann:2020dch}, and the study of on-shell AdS recursion relations \cite{Raju:2010by,Raju:2011mp,Raju:2012zr,Raju:2012zs}.\footnote{For further work on AdS/CFT in momentum space see \cite{Isono:2018rrb,Isono:2019wex,Farrow:2018yni,Lipstein:2019mpu,Albayrak:2018tam,Albayrak:2019asr,Albayrak:2019yve,Albayrak:2020isk,Albayrak:2020bso}.}
In this work we will only study the cutting rules in momentum space, although they are also applicable to AdS/CFT correlators in position and Mellin space.
Finally, it is useful to compare our approach to other studies of unitarity in AdS/CFT. One well-established method to compute loops in AdS is via the bootstrap equations. Here one determines the operator product expansion (OPE) of the CFT at tree level by solving the crossing equations \cite{Heemskerk:2009pn}. One can then plug this data into the loop-level crossing equations \cite{Aharony:2016dwx}, or equivalently use it to compute the double-commutator \cite{Caron-Huot:2017vep, Alday:2017vkk}, to solve for loop-level OPE data. This gives a boundary method to bootstrap loops in AdS purely from tree-level data. A related approach is the Euclidean bulk method \cite{Ponomarev:2019ofr,Meltzer:2019nbs}. In this approach, one determines the boundary OPE data by studying Witten diagrams themselves and working in Euclidean signature. The split representation for bulk-to-bulk propagators \cite{Leonhardt:2003qu,Costa:2014kfa} and on-shell conditions in CFT spectral space together lead to an efficient method to study the double-commutator. The bulk and boundary methods, which are ultimately equivalent \cite{Meltzer:2019nbs}, give the double-commutator in terms of the boundary OPE data. In this work we instead study AdS in Lorentzian signature and express the double-commutator directly as a sum over cut Witten diagrams, bypassing the boundary OPE.
The cuts we study factorize AdS diagrams using bulk normalizable modes and therefore make AdS locality and factorization manifest.
As we review in Section \ref{sec:ReviewCutkosky}, the definition of a cut diagram we use here is a direct generalization of the one used in flat-space unitarity methods.
\subsection*{Summary of results}
We will now give a summary of the main results, followed by an outline of the paper. Unless stated otherwise, we will study scalar field theories in flat space and AdS throughout this work.\footnote{Strictly speaking, the boundary correlators for a QFT in AdS define a conformal theory that does not have a stress-tensor and is therefore non-local \cite{Heemskerk:2009pn,Paulos:2016fap}. However, all of our results can be generalized to study theories of gravity in AdS with a local CFT dual.} The cutting rules will be derived for CFTs in general spacetime dimension and we will only specialize to specific dimensions when computing examples.
The derivations of the cutting rules in weakly and strongly-coupled CFTs will be essentially the same and rely on working in Lorentzian signature. Specifically, we will need the following properties of Lorentzian CFTs:
\begin{enumerate}
\item \textbf{Positive Spectrum.} The physical states $|\Psi(k)\>$ have momentum $k$ lying in the forward lightcone, $k^{2}\leq0$ and $k^0\geq0$. We use the mostly-plus signature for the metric $\eta_{\mu\nu}$.
\item \textbf{Causality.} Operators commute at spacelike separation:
\begin{align}
[\f(x),\f(y)]=0 \quad \text{ for } \quad (x-y)^{2}>0~.
\end{align}
\item \textbf{CFT Optical Theorem.}\footnote{While this identity is valid in general QFTs, we will refer to this as the CFT optical theorem to avoid confusion with the S-matrix optical theorem.} We use the following combinatoric identity \cite{Gillioz:2016jnn,Gillioz:2018kwh,Gillioz:2018mto} for partially time-ordered operators:
\begin{align}
\sum\limits_{r=0}^{n}(-1)^r\sum\limits_{\sigma\in\Pi(r,n-r)}\overline{T}[\f(x_{\sigma_1})\ldots\f(x_{\sigma_r})]T[\f(x_{\sigma_{r+1}})\ldots\f(x_{\sigma_{n}})]=0~. \label{eq:opticalV0}
\end{align}
Here $\overline{T}$ and $T$ are the (anti-)time-ordering symbols and $\Pi(r,n-r)$ is the set of partitions of $\{1,\ldots,n\}$ into two sets of size $r$ and $n-r$.
\end{enumerate}
The CFT optical theorem can be verified at low points by using the definition of the (anti-)time-ordering symbol and checking that the $\theta$-functions cancel. It then follows for all $n$ by induction.
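The cancellation can also be checked by brute force: assign distinct
numerical times to the operators, expand each (anti-)time-ordered product
into an ordered word of operator labels, and verify that every word appears
with net coefficient zero. A minimal Python sketch of this check (our own
illustration):
\begin{verbatim}
# Brute-force check of the identity for n operators: for fixed
# distinct times, each operator ordering must cancel in the signed sum.
from itertools import combinations, permutations
from collections import Counter

def optical_identity_holds(times):
    n, words = len(times), Counter()
    for r in range(n + 1):
        for left in combinations(range(n), r):
            right = [i for i in range(n) if i not in left]
            tbar = tuple(sorted(left, key=lambda i: times[i]))   # earliest first
            t = tuple(sorted(right, key=lambda i: -times[i]))    # latest first
            words[tbar + t] += (-1) ** r
    return all(c == 0 for c in words.values())

assert all(optical_identity_holds(ts) for ts in permutations([0.3, 1.7, 2.2, 5.0]))
\end{verbatim}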
To derive CFT cutting rules, we begin by using the CFT optical theorem to relate the real part of a time-ordered four-point function to a double-commutator \cite{Polyakov:1974gs}:
\begin{align}
-2~\Re \<T[\f(k_1)\f(k_2)\f(k_3)\f(k_4)]\>=\<[\f(k_3),\f(k_4)]_{A}[\f(k_1),\f(k_2)]_{R}\>~, \label{eq:FromReToDC}
\end{align}
where the subscripts $A,R$ indicate the advanced and retarded commutators,
\begin{align}
[\f(x_1),\f(x_2)]_{A}=\theta(x_2^0-x_1^0)[\f(x_1),\f(x_2)]~,
\\
[\f(x_1),\f(x_2)]_{R}=\theta(x_1^0-x_2^0)[\f(x_1),\f(x_2)]~.
\end{align}
We will refer to the right-hand side of \eqref{eq:FromReToDC} as the causal double-commutator. This is the same double-commutator that appears in the CFT inversion formula and can be computed by taking a double-discontinuity of the correlator in cross-ratio space \cite{Caron-Huot:2017vep}.\footnote{The fact that the inversion formula involves a causal double-commutator is manifest in \cite{ssw,Kravchuk:2018htv}, where the causal restrictions are put into the definition of the integration region.} The identity \eqref{eq:FromReToDC} only holds for the kinematics
\begin{align}
&k_{i}^{2}>0, \quad (k_i+k_j)^{2}>0 \quad \text{except for} \quad k_1+k_2\in V_+, \quad k_3+k_4 \in V_{-}~, \label{eq:kinematics}
\end{align}
where $V_{\pm}$ are the closed forward and backward lightcones,
\begin{align}
V_{\pm}=\{k \ | \ k^{2}\leq0, \ \pm k^{0}\geq0\}~.
\end{align}
Next, we derive the diagrammatic rules for the left-hand side of \eqref{eq:FromReToDC} by generalizing Veltman's derivation of the Cutkosky rules for the S-matrix \cite{Veltman:1963th}. Veltman's derivation is based on the largest-time equation, which is a general relationship between Feynman diagrams for Lorentzian QFTs. Crucially, this relationship holds in both flat space and AdS. The largest-time equation involves a set of ``cut" graphs, which come from introducing black and white vertices for both internal and external points in the original diagram.\footnote{These are the circling rules of \cite{Veltman:1963th} and should not be confused with on-shell diagrams.} The largest-time equation states that the sum over all possible colorings of the vertices vanishes. This is simply the graphical version of the CFT optical theorem \eqref{eq:opticalV0}. Finally, there is a one-to-one correspondence between the set of non-vanishing graphs with these two types of vertices and the cut graphs of Cutkosky \cite{Cutkosky:1960sp,Eden:1966dnq,Veltman:1963th} (e.g. see figure \ref{AdS_Box_Example}).
\begin{figure}
\begin{center}
\includegraphics[scale=.25]{AdS_Box_Cut_Example.pdf}
\end{center}
\caption{The map from the cutting to coloring rules for the AdS box diagram.}
\label{AdS_Box_Example}
\end{figure}
The relation \eqref{eq:FromReToDC} turns the cutting rules for $\Re \<T[\f\f\f\f]\>$ into the cutting rules for the causal double-commutator $\<[\f,\f]_{A}[\f,\f]_{R}\>$ in the restricted kinematics \eqref{eq:kinematics}. To analytically continue to general momenta, we use that the retarded commutator $[\f(x_1),\f(x_2)]_{R}$ is only non-zero if $x_{1}$ is in the causal future of $x_2$. Using standard arguments for Laplace transforms of Wightman functions, this causality condition in position space translates into an analyticity property in momentum space \cite{Streater:1989vi,Haag:1992hx}. Using this property for both commutators, we analytically continue away from the restricted kinematics \eqref{eq:kinematics} and prove the cutting rules for the causal double-commutator with general momenta. Here we are analytically continuing only the double commutator and not $\Re \<T[\f\f\f\f]\>$, which in general differs from $\<[\f,\f]_{A}[\f,\f]_{R}\>$ for generic momenta.
For the reader interested in the final result, we will now summarize the AdS cutting rules for $\<[\f(k_3),\f(k_4)]_{A}[\f(k_1),\f(k_2)]_{R}\>$ in the Poincar\'e patch. This double-commutator is only non-zero for $k_1+k_2\in V_{+}$, so the momentum necessarily runs from the left to the right. Aside from satisfying momentum conservation, the other momenta are left generic. The cutting rules for an individual Witten diagram are as follows:
\begin{enumerate}
\item Draw a cut that crosses only bulk-to-bulk propagators. For each of these cut propagators, replace the time-ordered, or Feynman, propagator $G_{\Delta}(k,z_i,z_j)$ by the corresponding on-shell propagator $G^{+}_{\Delta}(k,z_i,z_j)$.
\item For each propagator to the (right) left of the cut, use the (anti-)time-ordered propagator.
\item For each internal vertex to the left of the cut, multiply by $ig$. For internal vertices to the right of the cut, multiply by $-ig$. Here $g$ is the bulk coupling.
\item Sum over all cuts consistent with momentum conservation.
\end{enumerate}
The AdS on-shell propagator, $G^{+}_{\Delta}(k,z_i,z_j)$, is a two-point Wightman function for a free scalar in AdS. This is precisely the same structure as in flat space: the on-shell propagator in flat space is by definition a free-field two-point Wightman function, which is a $\delta$-function in momentum space. The standard cutting rules for the S-matrix correspond to replacing cut lines by two-point Wightman functions. In the examples we study, we will find that cut AdS diagrams reduce to the corresponding cut scattering amplitudes in the flat space limit, confirming that we have generalized the flat-space methods to AdS.
In AdS we also find that the on-shell propagator $G^{+}_{\Delta}(k,z_1,z_2)$ has a simple split representation in terms of the on-shell, or Wightman, bulk-to-boundary propagator $K^{+}_{\Delta}(k,z)$:
\begin{align}
G^{+}_{\Delta}(k,z_1,z_2)&\propto (\sqrt{-k^{2}})^{d-2\Delta}K^{+}_{\Delta}(k,z_1)K^{+}_{\Delta}(k,z_2)~.
\end{align}
Diagrams with on-shell bulk-to-boundary propagators are known as \textit{transition amplitudes} and have been studied in the context of recursion relations \cite{Raju:2010by,Raju:2011mp,Raju:2012zr,Raju:2012zs}. The on-shell bulk-to-boundary propagators correspond to normalizable solutions of the bulk equations of motion \cite{Balasubramanian:1998sn,Balasubramanian:1998de,Balasubramanian:1999ri}. This aligns with our interpretation of a cut diagram as a sum over states. We have drawn this schematically in figure \ref{fig:Cutbubble_Transition} and provide more details in Section \ref{sec:UnitarityCutsAdS}.
\begin{figure}
\begin{center}
\includegraphics[scale=.25]{Cut_Bubble_V1.pdf}
\caption{A cut bubble is the on-shell gluing of contact diagrams: the undotted lines are Feynman propagators and the dotted lines are Wightman, or on-shell, propagators.}
\label{fig:Cutbubble_Transition}
\end{center}
\end{figure}
\subsection*{Outline}
In Section \ref{sec:Polyakov} we review the CFT optical theorem and how to relate the real part of a four-point function to the causal double-commutator. In Section \ref{sec:ReviewCutkosky} we use this identity and the largest-time equation to derive the double-commutator cutting rules for weakly coupled theories in flat space. In Section \ref{sec:UnitarityCutsAdS} we generalize this argument to correlation functions in AdS/CFT and introduce the transition amplitudes. In Section \ref{sec:Examples} we discuss the OPE and flat space limit of cut AdS diagrams, and give examples at tree and loop level. We conclude with a summary and discussion of future directions in Section \ref{sec:Discussion}. In Appendix \ref{app:largest_time} we review the largest-time equation. In Appendix \ref{app:Analyticity_k_Space} we summarize the analyticity properties of the double-commutator in momentum space. In Appendix \ref{sec:FeynmanTree} we give a short derivation of the Feynman tree theorem in AdS. Finally, in Appendix \ref{sec:SKDerivation} we give an alternative derivation of the cutting rules using the Schwinger-Keldysh formalism.
\section{CFT Unitarity Conditions}
\label{sec:Polyakov}
In this section we review how unitarity conditions apply to the full correlator and derive \eqref{eq:FromReToDC}. This identity is useful because it relates the real part of a time-ordered correlator, which can be computed via a set of cutting rules for theories with a weakly-coupled description, to the causal double-commutator, which is the central element of AdS/CFT unitarity methods \cite{Aharony:2016dwx,Caron-Huot:2017vep,Meltzer:2019nbs}.
This section will be a review of \cite{Polyakov:1974gs}, although we will follow the presentation of \cite{Gillioz:2016jnn,Gillioz:2018kwh,Gillioz:2018mto}. The identities reviewed here will hold for all causal unitary QFTs and do not rely on assuming either conformal invariance or weak coupling.\footnote{The arguments in this section will rely on using $\theta$-function identities, which can be subtle in general non-perturbative QFTs. However, all the identities derived in this section can also be derived by using the axiomatic definition of the time-ordered product \cite{Bogolyubov:1990kw}, see \cite{Meltzer:2021bmb} for a review.}
The proof of \eqref{eq:FromReToDC} follows from the CFT optical theorem and the positive spectrum condition. The four-point version of the CFT optical theorem states \cite{Gillioz:2016jnn,Gillioz:2018kwh,Gillioz:2018mto}:
\begin{align}
0=\<\overline{T}[\f(k_1)\f(k_2)\f(k_3)\f(k_4)]\>\hspace{.1cm}+\hspace{.1cm}&\<T[\f(k_1)\f(k_2)\f(k_3)\f(k_4)]\>
\nonumber \\ +\hspace{.1cm}&\<\overline{T}[\f(k_3)\f(k_4)]T[\f(k_1)\f(k_2)]\>
\nonumber \\
-\hspace{.1cm}&\<\f(k_1)T[\f(k_2)\f(k_3)\f(k_4)]\>
\nonumber \\
-\hspace{.1cm}&\<\overline{T}[\f(k_2)\f(k_3)\f(k_4)]\f(k_1)\>
+\left( \text{partitions} \right)
~,\label{eq:CombCor}
\end{align}
where for convenience we have gone to momentum space and suppressed the other partitions of the external operators. This equation is shown graphically in figure \ref{fig:graphicaloptical} and says that the sum over all ``cuts'' of a four-point function vanishes. What we call a cut here refers to how the four external operators are grouped under the (anti-)time-ordering symbols. For example, the first three correlators of \eqref{eq:CombCor} correspond to the first three diagrams shown in figure \ref{fig:graphicaloptical}. In Sections \ref{sec:ReviewCutkosky} and \ref{sec:UnitarityCutsAdS} we explain how to compute cuts of Feynman and Witten diagrams.
\begin{figure}
\begin{center}
\includegraphics[scale=.2]{Graphical_Optical.pdf}
\end{center}
\caption{Optical theorem for a QFT four-point function. The grey disk represents a general correlation function and not a Feynman diagram. The external lines label the momentum of the external operator. Operators to the (right) left of the blue line are (anti-)time-ordered in the four-point functions given in \eqref{eq:CombCor}.}
\label{fig:graphicaloptical}
\end{figure}
To simplify this equation, it is useful to choose the momenta to lie in the configuration \eqref{eq:kinematics}. The salient feature of these kinematics is that only the sum $k_1+k_2$ lies in $V_+$ and is therefore on shell. By restricting to this configuration, we ensure that only $s$-channel cuts contribute to $\Re\<T[\f\f\f\f]\>$. This means only the three diagrams shown in figure \ref{fig:graphicaloptical} are non-zero. The other cut diagrams (not shown) all vanish. To show this, we will use the identities:
\begin{align}
\f(k_i)|0\>&=0 \quad \text{ if } \ k_{i}\notin V_+~,
\\
T[\f(k_i)\f(k_j)]|0\>&=0 \quad \text{ if } \ k_i+k_j\notin V_+~.
\end{align}
Both equalities follow from the fact that all the states in the physical Hilbert space, $\mathcal{H}$, have momentum in the forward lightcone $V_{+}$. For the chosen kinematics, the CFT optical theorem then becomes
\begin{align}
-2\hspace{.1cm}\Re \<T[\f(k_1)\f(k_2)\f(k_3)\f(k_4)]\>=\<\overline{T}[\f(k_3)\f(k_4)]T[\f(k_1)\f(k_2)]\>~, \label{eq:CFTOptical}
\end{align}
where we used that $\<T[\f\f\f\f]\>+\<\overline{T}[\f\f\f\f]\>$ gives twice the real part of the time-ordered correlator.
This unitarity condition is analogous to its S-matrix counterpart. Writing the S-matrix as $\mathcal{S}=1+i\mathcal{T}$, the unitarity condition $\mathcal{S}^{\dagger}\mathcal{S}=1$ implies that\footnote{Writing the non-trivial piece as $i\mathcal{T}$ explains why we take the imaginary piece for the S-matrix but the real part for the four-point function.}
\begin{align}
2\hspace{.1cm}\Im(\mathcal{T})=\mathcal{T}^{\dagger}\mathcal{T}~. \label{eq:SMatrixOptical}
\end{align}
In both \eqref{eq:CFTOptical} and \eqref{eq:SMatrixOptical}, the left-hand side is found by taking a discontinuity while the right-hand side has a naturally factorized form in terms of two lower-loop objects.\footnote{The relation (\ref{eq:CombCor}) is also used in axiomatic studies of QFT to prove unitarity of the S-matrix \cite{Schweber:1961zz}.} More precisely, \eqref{eq:CFTOptical} is the correlation function analogue of studying the $s$-channel discontinuity of a scattering amplitude.
To make a connection with the inversion formula, we need to replace the time-ordering symbol with the retarded commutator. To do this, we use
\begin{align}
T[\f(k_1)\f(k_2)]|0\>-&[\f(k_1),\f(k_2)]_{R}|0\>
\nonumber \\ &=\int d^{d}x_1d^{d}x_2 e^{i(k_1\cdot x_1 +k_2\cdot x_2)}\bigg(\theta(t_1-t_2)\f(x_1)\f(x_2)+\theta(t_2-t_1)\f(x_2)\f(x_1)
\nonumber \\ &\hspace{1.85in}-\theta(t_1-t_2)(\f(x_1)\f(x_2)-\f(x_2)\f(x_1))\bigg)|0\>
\nonumber \\ &=\int d^{d}x_1d^{d}x_2 e^{i(k_1\cdot x_1 +k_2\cdot x_2)}\f(x_2)\f(x_1)|0\>=\f(k_2)\f(k_1)|0\>=0~,
\end{align}
where the final equality follows from having $\f(k)|0\>=0$ when $k^2>0$. In other words,
both $T[\f(k_1)\f(k_2)]$ and $[\f(k_1),\f(k_2)]_{R}$ have the same action on the vacuum for spacelike momenta. With the same kinematics, a similar argument gives
\begin{align}
\<0| \overline{T}[\f(k_3)\f(k_4)]-&\<0| [\f(k_3),\f(k_4)]_{A}=0~.
\end{align}
Finally, we arrive at
\begin{align}
-2~\Re \<T[\f(k_1)\f(k_2)\f(k_3)\f(k_4)]\>&=\<\overline{T}[\f(k_3)\f(k_4)]T[\f(k_1)\f(k_2)]\>
\nonumber \\ &=\<[\f(k_3),\f(k_4)]_{A}[\f(k_1),\f(k_2)]_{R}\>~, \label{eq:FromReToDCV2}
\end{align}
in the configuration \eqref{eq:kinematics}. While \eqref{eq:FromReToDCV2} is a property of general QFTs, our primary focus will be computing the left-hand side of \eqref{eq:FromReToDCV2} in theories with a weakly-coupled description.
Using \eqref{eq:FromReToDCV2}, we can now compute $\<[\f,\f]_{A}[\f,\f]_{R}\>$ in the restricted configuration \eqref{eq:kinematics} from our knowledge of $\Re \<T[\f\f\f\f]\>$. Moreover, once the causal double-commutator is known for these momenta, it can be analytically continued to general configurations.
As we review in Appendix \ref{app:Analyticity_k_Space}, the position-space causality conditions for this double-commutator imply analyticity properties in momentum space \cite{Polyakov:1974gs,Streater:1989vi,Haag:1992hx}. We can choose the three independent momenta to be $k_1$, $k_4$, and $k_1+k_2$ and then analytically continue in $k_1$ and $k_4$ according to
\begin{align}
k_{i}\rightarrow k_{i}-i\eta_{i}, \quad \text{for} \quad i=1,4 \quad \text{and} \quad \eta_i\in V_+ ~.
\end{align}
The causal double-commutator is analytic in this region, and we can continue to $k_i^{2}<0$ to recover the full causal double-commutator. The double-commutator is only non-zero for $k_1+k_2\in V_+$, so we do not need to relax any conditions on this variable.
\section{Cutting Rules at Weak Coupling}
\label{sec:ReviewCutkosky}
In this section we study \eqref{eq:FromReToDCV2} for weakly-coupled QFTs. As an example, we consider a real scalar field $\phi$ with a non-derivative interaction $g\phi^n$. We will review the derivation of the cutting rules for the connected part of the time-ordered correlator, following the method of \cite{Veltman:1963th}. The new result is the combination of these cutting rules with the identity \eqref{eq:FromReToDCV2} to derive the corresponding cutting rules for the causal double-commutator. We will also explain why this double-commutator is a simpler object to study than the real part of a time-ordered correlator. Once this case is worked out in detail, the generalization to AdS will be manifest. We will not need to assume conformal invariance in this section, but due to the role of the double-commutator in the inversion formula, this is an interesting case to study.
We will need the Feynman propagator $\Delta_{F}(x)$ and the two-point Wightman functions $\Delta^{\pm}(x)$ for the free field $\phi$,
\begin{align}
\Delta_{F}(x)&=\<T[ \f(x)\f(0)]\>_{\text{free}}=\int\frac{d^{d}k}{(2\pi)^{d}}\frac{-i}{k^{2}+m^{2}-i\epsilon}e^{ik\cdot x} \nonumber
\\ &=\theta(x^0)\Delta^{+}(x)+\theta(-x^0)\Delta^{-}(x)~, \label{eq:time_ordered}
\\ \Delta^{\pm}(x)&=\<\f(\pm x)\f(0)\>_{\text{free}}=\int \frac{d^{d}k}{(2\pi)^{d}}2\pi \theta(\pm k^0)\delta(k^2+m^2)e^{ik\cdot x}~.
\end{align}
Similarly, for the anti-time-ordered propagator we have:
\begin{align}
\Delta_{F}^*(x)=\<\overline{T}[ \f(x)\f(0)]\>_{\text{free}}&=\int\frac{d^{d}k}{(2\pi)^{d}}\frac{i}{k^{2}+m^{2}+i\epsilon}e^{ik\cdot x} \nonumber
\\ &=\theta(x^0)\Delta^{-}(x)+\theta(-x^0)\Delta^{+}(x)~.
\label{eq:time_ordered_star}
\end{align}
An important difference between \eqref{eq:time_ordered} and \eqref{eq:time_ordered_star} is the opposite signs of the $i\epsilon$. We will use $\Delta^{-}(-x)=(\Delta^{-}(x))^*=\Delta^{+}(x)$ to express all quantities in terms of $\Delta^{+}$. The free-field two-point Wightman functions correspond to on-shell propagators, which in momentum space are given by
\begin{align}
\Delta^{+}(k)=2\pi \delta(k^2+m^2)\theta(k^0)~.
\end{align}
We will also refer to $\Delta^{+}(k)$ as the Wightman propagator. In the standard S-matrix unitarity method, cut propagators are placed on shell by replacing a propagator with a $\delta$-function, which we see corresponds to $\Delta_{F}\rightarrow \Delta^{+}$.
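In momentum space, the relation
$\Delta_{F}(k)+\Delta_{F}^{*}(k)=\Delta^{+}(k)+\Delta^{-}(k)$ reduces to the
statement that $2\,\Re\left[-i/(x-i\epsilon)\right]=2\epsilon/(x^{2}+\epsilon^{2})$
is a nascent delta function of weight $2\pi$ in $x=k^{2}+m^{2}$. A short
numerical sketch of this limit (ours, using SciPy):
\begin{verbatim}
# eps/(x^2 + eps^2) integrates to pi as eps -> 0, so that
# Delta_F + Delta_F^* -> 2 pi delta(k^2 + m^2) = Delta^+ + Delta^-
import numpy as np
from scipy.integrate import quad

for eps in (1e-1, 1e-2, 1e-3):
    weight, _ = quad(lambda x: eps / (x**2 + eps**2), -50.0, 50.0)
    print(eps, weight / np.pi)   # -> 1.0
\end{verbatim}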
We will now review Veltman's derivation of the cutting rules, which are formulated in terms of the Wightman propagators. For simplicity, we start with the following one-loop correction to the two-point function $\<\f(x_1)\f(x_2)\>$ in $\phi^3$ theory,
\vspace{-.3in}
\begin{center}
\begin{eqnarray}
\includegraphics[scale=.33]{Two_Pt_Bubble_1.pdf}~.
\end{eqnarray}
\end{center}
As usual, the Feynman rules tell us to assign a factor of $ig$ to each interaction vertex, a Feynman propagator $\Delta_{F}(x_{ij})$ to each line, and then to integrate over the internal points $y_i$. When relevant, we also need to include symmetry factors.
From any Feynman diagram, $F(x_1,\ldots,x_n)$, with $n$ external points and $m$ internal points, we generate $2^{n+m}$ new graphs by introducing two distinct vertices, which we label with white and black dots. We will refer to both internal and external points as the vertices of the diagram. The new collection of ``decorated'' graphs, $\widehat{F}_{q}(x_i)$ with $q=1,\ldots,2^{n+m}$, are defined via the following rules:
\begin{enumerate}
\item For each internal vertex of either color, multiply by $ig$.
\item For each white vertex, either internal or external, multiply by $-1$.
\item For lines connecting two black vertices, $x_{i}$ and $x_{j}$, use $\Delta_{F}(x_{ij})$.
\\ For lines connecting two white vertices, $x_i$ and $x_j$, use $\Delta_{F}^{*}(x_{ij})$.
\\ For lines connecting a white vertex, $x_i$, and a black vertex, $x_j$, use $\Delta^{+}(x_{ij})$.
\end{enumerate}
For simplicity, we assume no line begins and ends at the same point, i.e. no propagator has a vanishing argument.\footnote{To take into account loop corrections, one should instead work with the renormalized propagator. Then the same cutting rules carry over, but now with the corresponding renormalized on-shell propagator.} These rules were first given in \cite{Veltman:1963th} for amputated Feynman diagrams to give an alternative derivation of the Cutkosky rules. We will not need the specific map between the label $q$ and the assignment of black and white vertices as most decorated diagrams vanish by momentum conservation. Instead, we will find that the non-zero decorated diagrams are in one-to-one correspondence with the allowed unitarity cuts of the original diagram.
To find the unitarity cuts of a diagram, we need to introduce the largest-time equation \cite{Veltman:1963th,tHooft:1973wag,Veltman:1994wz}:
\begin{align}
\sum\limits_{q=1}^{2^{m+n}}\widehat{F}_{q}(k_1,\ldots,k_n)=0~. \label{eq:Largesttime}
\end{align}
We give a short derivation of this identity in Appendix \ref{app:largest_time}. To explain its connection to the cutting rules, we need to isolate two terms in the sum (\ref{eq:Largesttime}): the graph with all black vertices, $\widehat{F}_{q=1}(k_i)$, and the graph with all white vertices, $\widehat{F}_{q=2^{m+n}}(k_i)$. Their relation to the original Feynman diagram is
\begin{align}
\widehat{F}_{q=1}(k_1,\ldots,k_n)&=F(k_1,\ldots,k_n),
\\
\widehat{F}_{q=2^{n+m}}(k_1,\ldots,k_n)&=(-1)^nF^*(k_1,\ldots,k_n)~. \label{eq:ComplexConjF}
\end{align}
To show \eqref{eq:ComplexConjF}, recall that using white vertices amounts to letting $ig\rightarrow -ig$ and \\ $\Delta_{F}(x)\rightarrow \Delta_{F}^{*}(x)$. This replacement generates the complex conjugated graph and the overall factor of $(-1)^n$ comes from using white vertices for the external points. If we pull out these two graphs, the largest-time equation says
\begin{align}
F(k_1,\ldots,k_n)+(-1)^n F^*(k_1,\ldots,k_n)=-\sum\limits_{q=2}^{2^{n+m}-1}\widehat{F}_{q}(k_1,\ldots,k_n)~. \label{eq:unintLargesttimeV2}
\end{align}
Note that for $n$ even or odd, the left-hand side is twice the real part or $2i$ times the imaginary part of the correlation function, respectively.\footnote{For the S-matrix we always amputate external lines, in which case the $(-1)^n$ is not present and we always take the real part of the amputated diagram.}
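The simplest illustration of \eqref{eq:unintLargesttimeV2} is a single line
between two external points ($n=2$, $m=0$). The two mixed colorings
contribute $-\Delta^{+}(x_{12})$ and $-\Delta^{+}(x_{21})$, the signs coming
from the single white external vertex in each graph, so the largest-time
equation reduces to
\begin{align}
\Delta_{F}(x_{12})+\Delta_{F}^{*}(x_{12})=\Delta^{+}(x_{12})+\Delta^{+}(x_{21})~,
\end{align}
which follows immediately from \eqref{eq:time_ordered} and
\eqref{eq:time_ordered_star}.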
To use \eqref{eq:unintLargesttimeV2} we must na\"ively study a large set of diagrams that grows exponentially in the number of vertices. Moreover, it is also not yet manifest how these diagrams are related to the usual cutting rules. Fortunately, most of the $\widehat{F}_{q}(k_i)$ will vanish by momentum conservation. For example, consider the following two diagrams:
\begin{equation}
\includegraphics[scale=.3]{The_Graph_Vanishes.pdf}~.
\label{eq:graphvanishes}
\end{equation}
The first graph vanishes because positive timelike momentum is flowing into the white vertex from each propagator. The second graph vanishes because all of the momentum flows out of the bubble. In order to have a non-zero decorated graph, we need both black and white external vertices, which serve as the ``source'' and ``sink'' for the momentum.
The classic result of Veltman \cite{Veltman:1963th} is that the set of non-zero colorings of vertices is in one-to-one correspondence with the allowed unitarity cuts of the diagram, where a unitarity cut splits a diagram into two on-shell sub-diagrams. The map between cut diagrams and the coloring rules given above is as follows:
\begin{equation}
\includegraphics[scale=.3]{Cut_Bubble_1.pdf} \label{eq:CutToVert}.
\end{equation}
That is, we cut a diagram in two and use black vertices to the left of the cut and white vertices to the right. Then the cut lines correspond to the on-shell, Wightman propagators. When we draw general cuts there can be an ambiguity about the assignment of vertices, i.e. on which side to place which vertices. However, there is only one choice that is non-zero for a given choice of external momenta. For that reason, we will leave the assignment of black and white vertices implicit when drawing cut diagrams. In (\ref{eq:CutToVert}), if we take the momentum to run from left to right, then the other possible assignment vanishes.
According to \eqref{eq:unintLargesttimeV2}, we can compute the real or imaginary part of a correlation function by summing over all cuts of the diagrams. For example, the internal cut of the bubble is shown in \eqref{eq:CutToVert}. Here we are studying correlation functions with off-shell external legs, so we need to consider the external line cuts as well. These do not reduce the loop order of the diagram, as can be seen for the bubble,
\begin{equation}
\includegraphics[scale=.3]{Cut_Bubble_2.pdf}~.
\label{eq:ExtCutBubble}
\end{equation}
For the four-point function, which will be our main object of study, we can similarly have cuts passing through only the external lines,
\begin{equation}
\includegraphics[scale=.3]{4pt_Cut_Bubble.pdf}~.
\label{eq:extcut4ptBubble}
\end{equation}
For the remainder of this section, we will focus on four-point functions.
The external line cuts for $\<T[\f\f\f\f]\>$ are trivial in flat space as they simply place the external momenta on shell, i.e. $k^{2}=-m^{2}$ such that $k^0\geq 0$.\footnote{The terminology ``external cut'' for $\<T[\f\f\f\f]\>$ refers to a cut of a propagator connected to an external point. We study diagrams for this correlator in particular because the same topologies will appear for single-trace correlators in holographic CFTs. Nevertheless, the cutting rules can also be studied for more general correlators of composite operators, such as $\<T[\f^2\f^2\f^2\f^2]\>$.} As long as the external momenta do not lie exactly on the mass-shell, these cuts vanish. In preparation for AdS/CFT, where the analogous external cuts are non-trivial, it is useful to have a dispersion formula which depends solely on the internal cuts. As we will demonstrate shortly, the CFT dispersion formula \cite{Caron-Huot:2017vep,Carmi:2019cub} meets this criterion for four-point correlators by using the causal double-commutator as input.
To see why the double-commutator only depends on internal cuts, we study the cutting rules in the kinematics \eqref{eq:kinematics}, where all external momenta are spacelike. In this configuration, external cuts such as \eqref{eq:extcut4ptBubble} manifestly vanish. As reviewed in Section \ref{sec:Polyakov}, the real part of a time-ordered four-point function, $\Re\<T[\f\f\f\f]\>$, is equivalent to the causal double-commutator, $\<[\f,\f]_{A}[\f,\f]_{R}\>$, in this configuration. The causal double-commutator with restricted kinematics therefore only has internal cuts. This is the first hint that the double-commutator is a simpler object to study than the real part of the correlator.
As the external momenta are all spacelike, we can go further: any internal line cut that leaves one operator to the left or right of the cut must vanish. In other words, the following class of cuts for a general Feynman diagram $F$ vanish:
\begin{center}
\begin{equation}
\includegraphics[scale=.25]{External_Cut_General.pdf}~.
\end{equation}
\end{center}
The reason is simple: we need timelike momentum flowing through a unitarity cut in order for it to be non-zero. With spacelike external momenta, this cannot happen if only one external point is to the left or right of the cut. The vanishing of these cuts is equivalent to terms in \eqref{eq:CombCor} such as $\<\f(k_1)T[\f(k_2)\f(k_3)\f(k_4)]\>$ being zero for spacelike $k_i$.
To give examples, the following cuts both give zero in the kinematics \eqref{eq:kinematics}:
\begin{center}
\begin{equation}
\includegraphics[scale=.24]{More_Cuts_Vanish.pdf}~.
\end{equation}
\end{center}
These types of cuts typically vanish when studying amplitudes because on-shell three-point amplitudes for gluons and gravitons vanish. Here all of our external lines are off-shell and these cuts vanish due to the choice of external momenta.
Finally, we will consider cuts that split the external legs into pairs, i.e. the $s$, $t$, and $u$-channel cuts familiar from S-matrix unitarity. In the kinematics \eqref{eq:kinematics}, we have $k_1+k_2\in V_{+}$ but $k_1+k_3$ and $k_1+k_4$ are spacelike. With this choice, only the $s$-channel cuts are non-zero:
\begin{equation}
\includegraphics[scale=.25]{General_Allowed_Cut.pdf}~.
\end{equation}
For example, the following cuts are all non-zero:
\begin{equation}
\includegraphics[scale=.235]{Allowed_Cuts_Examples.pdf}~.
\end{equation}
We can now make an explicit connection to the double-commutator. Using the largest-time equation
\begin{align}
-2~\Re F(k_1,k_2,k_3,k_4)=\sum\limits_{q=2}^{2^{n+m}-1}\widehat{F}_{q}(k_1,k_2,k_3,k_4)~, \label{eq:unintLargesttimeV3}
\end{align}
and the identity \eqref{eq:FromReToDCV2}, we find
\begin{align}
\<[\f(k_3),\f(k_4)]_{A}[\f(k_1),\f(k_2)]_{R}\>=\sum\limits_{q=2}^{2^{n+m}-1}\widehat{F}_{q}(k_1,k_2,k_3,k_4)~.
\label{eq:DC_Cutting_Rules}
\end{align}
In other words, the sum over decorated graphs in the kinematics \eqref{eq:kinematics} computes the causal double-commutator. On the right-hand side we have written the full sum over $q$, but as emphasized earlier only a few graphs are consistent with momentum conservation.
While we derived \eqref{eq:DC_Cutting_Rules} using the vertex coloring rules, we can summarize the result more simply.
The cut graphs that contribute to the double-commutator $\<[\f,\f]_A[\f,\f]_R\>$ are determined by working in the kinematics (\ref{eq:kinematics}) and using the following cutting rules:
\begin{enumerate}
\item For each Feynman diagram, draw a cut that passes only through internal lines.
\\ For each line that is cut, use the on-shell propagator $\Delta^{+}(k)$.
\item For all propagators to the left of the cut, use $\Delta_{F}(k)$.
\\
For all propagators to the right of the cut, use $\Delta_{F}^{*}(k)$.
\item For each internal vertex multiply by $ig$.
\\ For each vertex to the right of the cut, internal or external, multiply by an additional $-1$.
\item Sum over cuts consistent with momentum conservation.
\end{enumerate}
As a reminder, we choose the on-shell propagators $\Delta^{+}(k)$ such that the momentum flows across the cut from $\f(k_1)$ and $\f(k_2)$ to $\f(k_3)$ and $\f(k_4)$. For cut four-point functions, all external points come in pairs and we can replace the $3^{\text{rd}}$ rule by:
\\
\\
\hphantom{>>} 3$'$. For each internal vertex to the left of the cut multiply by $ig$.
\\ \hphantom{>>>.....}For each internal vertex to the right of the cut multiply by $-ig$.
\\
\\
However, when studying higher-point functions it will be important to keep track of how many external points lie to the right of the cut. One can restore the previous vertex coloring rules by assigning black and white vertices to the left and right of the cut respectively, see for example \eqref{eq:CutToVert}.
We were careful to work with kinematics (\ref{eq:kinematics}) in order to classify the cut diagrams that contribute to the double-commutator. Once we have classified and computed these cuts, the argument reviewed in Section \ref{sec:Polyakov} allows us to analytically continue the final result to general kinematics. While using spacelike momenta and studying the cutting rules for $\Re\<T[\f\f\f\f]\>$ are not strictly necessary to derive the cutting rules for the double-commutator, we find this to be a particularly simple approach.
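To illustrate the rules, consider the internal cut of the bubble shown in
\eqref{eq:CutToVert}. Assuming massless internal lines in $d=4$, the product
of on-shell propagators integrates to the familiar two-body phase-space
factor,
\begin{align}
\int\frac{d^{4}\ell}{(2\pi)^{4}}\,\Delta^{+}(\ell)\,\Delta^{+}(k-\ell)=\frac{1}{8\pi}\,\theta(-k^{2})\theta(k^{0})~,
\end{align}
with $k=k_1+k_2$, which is non-zero only when forward timelike momentum flows
through the cut, in agreement with the general discussion above.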
As an aside, we can also derive the cutting rules by assuming $k_i^{2}>0$ and applying the identity
\begin{align}
\<\overline{T}[\f(k_3)\f(k_4)]T[\f(k_1)\f(k_2)]\>=\<[\f(k_3),\f(k_4)]_{A}[\f(k_1),\f(k_2)]_{R}\>~.\label{eq:TTbtodDisc}
\end{align}
As we reviewed in Section \ref{sec:Polyakov}, this follows from the positive spectrum condition. Then
the partially time-ordered correlation function on the left can be computed using the Schwinger-Keldysh rules \cite{Schwinger:1960qe,Keldysh:1964ud,Chou:1984es,Haehl:2016pec,Haehl:2017qfl} and one arrives at the same set of cutting rules for the double-commutator.\footnote{The relation between the Schwinger-Keldysh formalism and unitarity cuts is also given in section 11 of
\cite{Haehl:2016pec}.} In this approach working with spacelike momenta is useful as well: we only need a single time-fold to compute the left-hand side of \eqref{eq:TTbtodDisc} while for the right-hand side, for generic momenta, we need two time-folds \cite{Haehl:2017qfl}. We explain how to derive the cutting rules from the Schwinger-Keldysh formalism in Appendix \ref{sec:SKDerivation}.
\section{Unitarity Cuts in AdS/CFT}
\label{sec:UnitarityCutsAdS}
\subsection{Cutting Rules}
In this section we will generalize the analysis of Section \ref{sec:ReviewCutkosky} to AdS$_{d+1}$/CFT$_{d}$, the main application of interest in this work. The generalization is straightforward as the derivation of the cutting rules did not rely on the explicit form of the propagators. Instead, it followed from general features of Lorentzian two-point functions. The cutting rules will therefore also hold for weakly coupled theories in AdS. Our aim here is to discuss how the cutting rules generalize, connect to previous work, and give the explicit expressions necessary for later computations.
We will study a bulk scalar field $\Phi$ that has a non-derivative interaction $g\Phi^n$ and is dual to the boundary scalar operator $\phi$. We work in the Poincar\'e patch of AdS with the standard metric
\begin{align}
ds^{2}=\frac{dz^{2}+\eta_{\mu\nu}dx^{\mu}dx^{\nu}}{z^{2}}~,
\end{align}
where we again take $\eta_{\mu\nu}$ to be mostly plus, and $z=0$ is the boundary of AdS. Finally, we will only study the connected Witten diagrams for correlation functions of the single-trace operator, $\<T[\f(x_1)\ldots\f(x_n)]\>$.
We begin by expanding the Feynman bulk-to-bulk propagator $G_{\Delta}(x_1,z_1;x_2,z_2)$ in terms of the Wightman bulk-to-bulk propagators,
\begin{align}
G_{\Delta}(x_1,z_1;x_2,z_2)=&\<T[\Phi(x_1,z_1)\Phi(x_2,z_2)]\>_{\text{free}}
\nonumber \\=&\theta(x^0_1-x^0_2)G_{\Delta}^{+}(x_1,z_1;x_2,z_2)+\theta(x^0_2-x^0_1)G_{\Delta}^{+}(x_2,z_2;x_1,z_1)~,
\\
G_{\Delta}^{+}(x_1,z_1;x_2,z_2)=&\<\Phi(x_1,z_1)\Phi(x_2,z_2)\>_{\text{free}}~,
\end{align}
where $\Delta$ is the scaling dimension of the boundary scalar $\f$.
As in flat space, the Wightman propagators are defined to be the free-field two-point Wightman functions. These will again correspond to the on-shell propagators.
We now Fourier transform in the flat $x^{\mu}$ directions to use the AdS/CFT momentum-space propagators. The bulk-to-bulk propagators take the form \cite{Liu:1998ty}:
\begin{align}
G_{\Delta}(k,z_1,z_2)&= -i(z_1 z_2)^{\frac{d}{2}}\int\limits_{0}^{\infty} dp \hspace{.07cm} p\frac{\mathcal{J}_{\nu}(pz_1)\mathcal{J}_{\nu}(pz_2)}{k^{2}+p^{2}-i\epsilon}~,
\\
G^{\pm}_{\Delta}(k,z_1,z_2)&=\pi(z_1z_2)^{\frac{d}{2}}\mathcal{J}_{\nu}(\sqrt{-k^{2}}z_1)\mathcal{J}_{\nu}(\sqrt{-k^{2}}z_2)\theta(-k^{2})\theta(\pm k^0)~, \label{eq:PosBB}
\end{align}
where $\mathcal{J}_{\nu}$ is the Bessel function of the first kind and $\nu=\Delta-d/2$.\footnote{When studying the Euclidean principal series we usually write $\Delta=\frac{d}{2}+i\nu$, but to be consistent with previous work on AdS/CFT momentum space we use $\Delta=\frac{d}{2}+\nu$.} The bulk-to-boundary propagator, $K_{\Delta}(k,z)$, is then defined by taking one point to the boundary:
\begin{align}
K_{\Delta}(k,z)&=\lim\limits_{z'\rightarrow 0}z'^{-\Delta}G_{\Delta}(k,z,z')
\nonumber \\ &=-i\frac{1}{2^{\nu}\Gamma(1+\nu)}z^{\frac{d}{2}}(\sqrt{k^{2}})^{\nu}\mathcal{K}_{\nu}(\sqrt{k^{2}}z)~,
\\
K^{\pm}_{\Delta}(k,z)&=\frac{\pi}{2^{\nu}\Gamma(1+\nu)}(\sqrt{-k^{2}})^{\nu}z^{\frac{d}{2}}\mathcal{J}_{\nu}(\sqrt{-k^{2}}z)\theta(-k^{2})\theta(\pm k^0)~, \label{eq:PosBb}
\end{align}
where $\mathcal{K}_{\nu}$ is the modified Bessel function of the second kind.\footnote{In general, we need to work with the regulated bulk-to-boundary propagators \cite{Freedman:1998tz},
\begin{align}
K^{\delta}_{\Delta}(k,z)=\left(\frac{z}{\delta}\right)^{d/2}\frac{\mathcal{K}_{\nu}(\sqrt{k^{2}} z)}{\mathcal{K}_{\nu}(\sqrt{k^{2}}\delta)}~,
\end{align}
where $\delta\ll1$ is the cut-off on the $z$-coordinate. However, for the discussion and examples considered in this work we can use $K_{\Delta}(k,z)$ throughout.} Here we have given the Feynman bulk-to-boundary propagator $K_{\Delta}(k,z)$ for spacelike $k$. When analytically continuing to timelike momenta, we give $k^{2}$ a small imaginary part as dictated by the $i\epsilon$ prescription. Finally, $K^{\pm}_{\Delta}(k,z)$ are the on-shell, or Wightman, bulk-to-boundary propagators.
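For spacelike momenta the $p$-integral in $G_{\Delta}$ can be evaluated in
closed form, $\int_{0}^{\infty}dp\,p\,\mathcal{J}_{\nu}(pz_1)\mathcal{J}_{\nu}(pz_2)/(p^{2}+q^{2})
=\mathcal{I}_{\nu}(qz_{<})\mathcal{K}_{\nu}(qz_{>})$ with $q=\sqrt{k^{2}}$,
a standard Bessel identity. The following numerical sketch of this identity
(our own check, using SciPy, with arbitrary parameter choices and the
convergent oscillatory tail truncated at a large cutoff) also makes the
boundary limit above easy to verify:
\begin{verbatim}
# Radial integral in G_Delta vs. the closed form I_nu(q z<) K_nu(q z>)
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, iv, kv

nu, q, z1, z2 = 1.5, 2.0, 0.7, 0.3   # spacelike k, q = sqrt(k^2)
integrand = lambda p: p * jv(nu, p*z1) * jv(nu, p*z2) / (p**2 + q**2)
val, _ = quad(integrand, 0.0, 500.0, limit=5000)  # truncated tail
print(val, iv(nu, q*min(z1, z2)) * kv(nu, q*max(z1, z2)))
\end{verbatim}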
We can now repeat the arguments of Section \ref{sec:ReviewCutkosky} with minor changes. For completeness, we spell them out here. For a Witten diagram $W(x_1,\ldots,x_n)$ with $n$ external (boundary) points and $m$ internal (bulk) points, we can define $2^{n+m}$ new graphs $\widehat{W}_{q}(x_1,\ldots,x_n)$ by using two types of vertices. We again distinguish them by using white or black dots. The new decorated graphs are defined as follows:
\begin{enumerate}
\item For each internal vertex multiply by $ig$.
\item For each white vertex, internal or external, multiply by an additional $-1$.
\item For lines between black vertices in the bulk use $G_{\Delta}(x_i,z_i;x_j,z_j)$.
\\ For lines between white vertices in the bulk use $G_{\Delta}^{*}(x_i,z_i;x_j,z_j)$.
\\ For lines between a white vertex at $(x_i,z_i)$, and a black vertex at $(x_j,z_j)$, use $G_{\Delta}^{+}(x_i,z_i;x_j,z_j)$.
\item If a line ends on the boundary, use the appropriate bulk-to-boundary propagator.
\end{enumerate}
The only difference from the previous section is that we now have two kinds of propagators, depending on whether a point lies on the boundary or in the bulk. Here we have taken all external points to the boundary in order to study the CFT correlator $\<\f(x_1)\ldots\f(x_n)\>$. For a QFT in AdS, we can also study purely bulk correlation functions $\<\Phi(x_1,z_1)\ldots\Phi(x_n,z_n)\>$. When all external points lie in the bulk, the derivation of the cutting rules in AdS is exactly the same as in flat space. In this work however we will focus on CFT correlators.
The largest-time equation in AdS says:
\begin{align}
W(x_1,\ldots,x_n)+(-1)^nW^{*}(x_1,\ldots,x_n)=-\sum\limits_{q=2}^{2^{n+m}-1}\widehat{W}_{q}(x_1,\ldots,x_n)~, \label{eq:largest_time_witten}
\end{align}
where we pulled out the original graph and its complex conjugate. Restricting to four-point functions ($n=4$) and using the kinematics \eqref{eq:kinematics}, we can again use \eqref{eq:FromReToDC} to go from the real part of a time-ordered correlator to the causal double-commutator. The diagrammatic expansion for the double-commutator in the configuration \eqref{eq:kinematics} is:
\begin{align}
\<[\f(k_3),\f(k_4)]_{A}[\f(k_1),\f(k_2)]_{R}\>=\sum\limits_{q=2}^{2^{n+m}-1}\widehat{W}_{q}(k_1,\ldots,k_4)~.\label{eq:DC_Cutting_Rules_Witten}
\end{align}
Although the right-hand side runs over a large number of terms, only a few Witten diagrams are non-zero for our choice of momenta, just as in flat space. To simplify the presentation, we use the same cutting notation as before:
\begin{equation}
\includegraphics[scale=.25]{Bubble_cuts_colors.pdf}~. \label{eq:Witten_CutToVertices}
\end{equation}
Our assumption that the external momenta are spacelike implies that cuts of bulk-to-boundary propagators vanish identically. This is consistent with earlier work on unitarity cuts in AdS/CFT \cite{Fitzpatrick:2011dm,Meltzer:2019nbs}, in which it was found that internal cuts compute the ``absorptive'' part of the diagram. Using external spacelike momenta also implies the cuts split the external points into two pairs. For our choice of momenta \eqref{eq:kinematics}, we must have $\{k_1,k_2\}$ to the left of the cut and $\{k_3,k_4\}$ to the right. Cuts through internal lines that leave a single external point on one side of the cut will vanish. In short, the cut structure for Witten diagrams is exactly the same as for the corresponding Feynman diagrams in flat space.
One difference in comparison to flat space is that external line cuts in AdS are less restrictive. Cutting through an external line in flat space means an external momentum must lie on the mass-shell, i.e. $k^2=-m^2$ and $k^0\geq 0$. In AdS a cut external line is non-zero as long as $k\in V_+$. Therefore, the external line cuts will contribute to $\Re\<T[\f\f\f\f]\>$ for general external timelike momenta and furthermore these cuts do not reduce the loop order of a diagram. By working with spacelike momenta, or by studying the causal double-commutator, we ensure that only internal cuts are allowed, and these do simplify the diagram.
The diagrams that contribute to the right-hand side of \eqref{eq:DC_Cutting_Rules_Witten} are found from the rules summarized in Section \ref{sec:intro}, which we repeat here for convenience:
\begin{enumerate}
\item Given a Witten diagram, draw a cut such that only bulk-to-bulk propagators are cut. For each cut propagator use the on-shell propagator $G^{+}_{\Delta}(k,z_i,z_j)$.
\item For all propagators to the left of the cut, use $G_{\Delta}(k,z_i,z_j)$. \\ For all propagators to the right of the cut, use $G^*_{\Delta}(k,z_i,z_j)$.
\item For each internal vertex multiply by $ig$.
\\ For each vertex to the right of the cut, multiply by an additional $-1$.
\item Sum over all cuts consistent with momentum conservation.
\end{enumerate}
To revert to the vertex assignment rules, we again use black and white dots for all vertices to the left and right of the cut respectively, see e.g. \eqref{eq:Witten_CutToVertices}.
The rules given in Section \ref{sec:intro} were specialized to four points, in which case we can ignore factors of $-1$ from external points as they always come in pairs. Here we have given the rules for a general $n$-point Witten diagram as we will later study cut five-point diagrams.
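For instance, applying these rules to the $s$-channel cut of the $\Phi^{3}$
bubble in \eqref{eq:Witten_CutToVertices} gives, schematically (suppressing
symmetry factors and the overall momentum-conserving delta function),
\begin{align}
g^{2}\int\limits_{0}^{\infty}\frac{dz_{1}dz_{2}}{(z_{1}z_{2})^{d+1}}\,&K_{\Delta}(k_1,z_1)K_{\Delta}(k_2,z_1)K^{*}_{\Delta}(k_3,z_2)K^{*}_{\Delta}(k_4,z_2)
\nonumber \\ &\times\int\frac{d^{d}\ell}{(2\pi)^{d}}\,G^{+}_{\Delta}(\ell,z_1,z_2)G^{+}_{\Delta}(k_1+k_2-\ell,z_1,z_2)~,
\end{align}
where $(ig)(-ig)=g^{2}$ comes from the two vertices and the $\theta$-functions
in $G^{+}_{\Delta}$ restrict both $\ell$ and $k_1+k_2-\ell$ to the forward
lightcone, turning the loop integral into a phase-space integral.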
\subsection{AdS Transition Amplitudes}
\label{sec:transitionamps}
Our methods rely on using Lorentzian signature, and studying Lorentzian AdS allows us to interpret the cut diagrams in terms of a sum over states, or equivalently a phase-space integral. Specifically, we will relate the cut propagators to normalizable solutions to the bulk equations of motion. This provides another sense in which a cut diagram is on shell and allows us to make a connection with the CFT optical theorem.
To set the stage, recall the unitarity condition for the flat-space S-matrix:
\begin{align}
\<\textrm{out}|\Im( \mathcal{T})|\textrm{in}\>=\<\textrm{out}|\mathcal{T}^{\dagger}\mathcal{T}|\textrm{in}\>~.
\end{align}
Inserting a complete set of states, the right-hand side factorizes as
\begin{align}
\<\textrm{out}|\Im(\mathcal{T})|\textrm{in}\>=\sum\limits_{\Psi}\<\textrm{out}|\mathcal{T}^{\dagger}|\Psi\>\<\Psi|\mathcal{T}|\textrm{in}\>~.
\end{align}
It is well-known that there is a one-to-one map between the allowed unitarity cuts of a diagram and the physical states $|\Psi\>$ that can be exchanged.
As reviewed earlier, see \eqref{eq:FromReToDCV2}, the CFT statement of unitarity is
\begin{align}
-2~\Re \<T[\f(k_1)\f(k_2)\f(k_3)\f(k_4)]\>=\<\overline{T}[\f(k_3)\f(k_4)]T[\f(k_1)\f(k_2)]\>~, \label{eq:FromReToTTb}
\end{align}
in the momentum configuration \eqref{eq:kinematics}. Once again, the right-hand side factorizes when we insert a complete set of states. However, it may not be clear what the map is between the states exchanged and the bulk cutting procedure. In other words, which basis of the AdS/CFT Hilbert space are we picking out with our cuts? As we will demonstrate, the natural set of states are the normalizable modes of the Poincar\'e patch, which are also used to define Poincar\'e transition amplitudes \cite{Balasubramanian:1999ri}.
In Lorentzian AdS, the bulk equations of motion have both normalizable and non-normalizable solutions \cite{Avis:1977yn,Breitenlohner:1982jf,Breitenlohner:1982bm}. The normalizable modes are quantized to obtain the bulk Hilbert space and the non-normalizable modes are classical, non-fluctuating backgrounds. The bulk normalizable and non-normalizable solutions are dual to boundary states and sources in the CFT respectively \cite{Balasubramanian:1998sn,Balasubramanian:1998de}.
To find these solutions, we solve the equations of motion for a scalar $\Phi$,
\begin{align}
(\Box -m^2)\Phi=0~,
\end{align}
by working in momentum space. For
spacelike momenta, $k^{2}>0$, there is a single solution that is regular in the interior of AdS:
\begin{align}
\Phi(k,z)=\phi_0 z^{d/2}\mathcal{K}_{\nu}(\sqrt{k^{2}}z)~.
\end{align}
For timelike momenta $k^{2}<0$, there are two solutions:
\begin{align}
\Phi(k,z)=\phi_0 z^{d/2}\mathcal{J}_{\nu}(\sqrt{-k^{2}}z)~,
\\
\Phi(k,z)=\phi_0 z^{d/2}\mathcal{Y}_{\nu}(\sqrt{-k^{2}}z)~.
\end{align}
Here $\mathcal{J}$ and $\mathcal{K}$ are the Bessel functions defined earlier, and $\mathcal{Y}$ is a Bessel function of the second kind. The $\mathcal{J}$ solution gives a normalizable mode while the $\mathcal{Y}$ and $\mathcal{K}$ solutions give non-normalizable modes. In the limit $z\rightarrow 0$, the normalizable solutions scale like $z^{\Delta}$ while the non-normalizable solutions scale like $z^{d-\Delta}$. Correlation functions are computed by choosing non-normalizable modes for all the external legs of the Witten diagram.
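These statements are easy to verify directly. With
$m^{2}=\Delta(\Delta-d)=\nu^{2}-d^{2}/4$, the momentum-space equation of
motion reads $z^{2}\Phi''+(1-d)z\Phi'+(q^{2}z^{2}-m^{2})\Phi=0$ with
$q^{2}=-k^{2}$, and the following SymPy sketch (ours; the numerical values of
$d$, $\nu$, $q$ are arbitrary choices) confirms that the normalizable mode
solves it:
\begin{verbatim}
# Check that Phi = z^(d/2) J_nu(q z) solves
# z^2 Phi'' + (1-d) z Phi' + (q^2 z^2 - m^2) Phi = 0
import sympy as sp

z = sp.symbols('z', positive=True)
d, nu, q = sp.Integer(4), sp.Rational(3, 2), sp.Rational(13, 10)
m2 = nu**2 - d**2/4                   # m^2 = Delta(Delta - d)
Phi = z**(d/2) * sp.besselj(nu, q*z)
eom = z**2*Phi.diff(z, 2) + (1 - d)*z*Phi.diff(z) + (q**2*z**2 - m2)*Phi
print(sp.simplify(sp.expand_func(eom)))           # -> 0
print(sp.N(eom.subs(z, sp.Rational(7, 10)), 12))  # -> 0 numerically
\end{verbatim}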
The connection between the AdS cutting rules and the Hilbert space can be seen from a ``split representation''. It is well-known that time-ordered bulk-to-bulk propagators in AdS can be expressed as \cite{Leonhardt:2003qu,Penedones:2010ue}:
\begin{align}
G_{\Delta}(k,z_1,z_2)=\int\limits_{-\infty}^{\infty} d\omega P(\omega,\Delta)K_{\frac{d}{2}+i\omega}(k,z_1)K_{\frac{d}{2}-i\omega}(k,z_2)~, \label{eq:EucSplitRep}
\\
P(\omega,\Delta)=\frac{1}{\omega^{2}+\left(\Delta-\frac{d}{2}\right)^{2}}\frac{\omega^{2}}{\pi}~. \label{eq:Pdef}
\end{align}
That is, the Feynman bulk propagator is a spectral integral over the corresponding bulk-to-boundary propagators.
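The pole structure of the spectral density is simple to exhibit. The short \texttt{sympy} computation below (ours; the values $\Delta=2$, $d=3$ are illustrative) locates the poles of $P(\omega,\Delta)$ at $\omega=\mp i(\Delta-\frac{d}{2})$, which is where $K_{\frac{d}{2}+i\omega}$ acquires dimension $\Delta$ and $d-\Delta$, respectively:
\begin{verbatim}
import sympy as sp

w = sp.symbols('omega')
Delta, d = sp.Rational(2), sp.Rational(3)      # illustrative values
P = (w**2 / sp.pi) / (w**2 + (Delta - d/2)**2)

poles = sp.solve(sp.denom(sp.together(P)), w)
print(poles)                                   # [-I/2, I/2] = -/+ I*(Delta - d/2)
print([sp.residue(P, w, p0) for p0 in poles])  # [-I/(4*pi), I/(4*pi)]
\end{verbatim}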
By comparing \eqref{eq:PosBB} and \eqref{eq:PosBb}, we observe that the on-shell bulk-to-bulk propagator also has a simple split representation:
\begin{align}
G^{+}_{\Delta}(k,z_1,z_2)&=\frac{2^{2\nu}\Gamma(1+\nu)^{2}}{\pi(\sqrt{-k^{2}})^{2\nu}}K^{+}_{\Delta}(k,z_1)K^{+}_{\Delta}(k,z_2)~. \label{eq:LorSplitRep}
\end{align}
Unlike the split representation for time-ordered propagators, we do not have a spectral integral.\footnote{The split representation \eqref{eq:EucSplitRep} is the basis of the Euclidean analysis in \cite{Meltzer:2019nbs,Ponomarev:2019ofr}. There, putting a line on shell corresponds to closing the $\omega$ integral on the pole in $P(\omega,\Delta)$. This produces bulk-to-boundary propagators of dimension $\Delta$ and $d-\Delta$, so the OPE of the resulting diagram has unphysical ``shadow'' operators. Projecting these out yields the double-commutator. The Lorentzian split representation \eqref{eq:LorSplitRep} uses the on-shell propagators, so this projection is not required.} We can also identify the overall factor in \eqref{eq:LorSplitRep} as a boundary Wightman two-point function. Taking both points of the bulk-to-bulk on-shell propagator to the boundary yields:\footnote{Here we are dropping analytic terms in $k$ that contribute to contact terms in position space.}
\begin{align}
\<\!\<\f(-k)\f(k)\>\!\>=\frac{\pi}{2^{2\nu}\Gamma(1+\nu)^{2}}(\sqrt{-k^{2}})^{2\nu}\theta(-k^{2})\theta(k^0)~,
\end{align}
where we use the notation
\begin{align}
\<\f(k_1)\ldots\f(k_n)\> \equiv (2\pi)^{d}\delta^{d}(k_1+\ldots+k_n)\<\!\<\f(k_1)\ldots\f(k_n)\>\!\>~.
\end{align}
As before, $\Delta=d/2+\nu$ is the dimension of the boundary scalar $\phi$. We can then write the on-shell propagator as:\footnote{Another way to derive this is to consider the two-point Wightman function in free-field theory, $\<\Phi(k_1,z_1)\Phi(k_2,z_2)\>_{\text{free}}$, and expand the fields in terms of creation and annihilation operators for the normalizable Poincar\'e modes \cite{Balasubramanian:1999ri}.}
\begin{align}
G^{+}_{\Delta}(k,z_1,z_2)&=K^{+}_{\Delta}(k,z_1)\frac{1}{\<\!\<\f(-k)\f(k)\>\!\>}K^{+}_{\Delta}(k,z_2)~. \label{eq:SplitLorentzian}
\end{align}
As shown in \eqref{eq:PosBb}, $K^{+}_{\Delta}(k,z) \sim \mathcal{J}(\sqrt{-k^2} z)$, and so the bulk-to-bulk on-shell propagator factorizes into a product of normalizable modes.
We can now see explicitly that cutting bulk-to-bulk propagators inside a Witten diagram produces two sub-diagrams glued together via on-shell bulk-to-boundary propagators with the correct normalization. The on-shell condition restricts the momentum $k$ to lie in the forward lightcone, $V_+$, and this turns the momentum integral into a phase space integral. Finally, dividing by the two-point CFT Wightman function gives the correct normalization for the exchanged states.
The relation between the bulk and boundary descriptions of unitarity becomes clear when we work in terms of the ``transition amplitudes'', $\<\Psi_{q'}|T[\f(k_1)\ldots\f(k_n)]|\Psi_{q}\>$ \cite{Balasubramanian:1999ri}.\footnote{While ``transition amplitudes'' is the standard name in AdS/CFT, these objects are more precisely the analogues of flat-space form factors.} In the Poincar\'e patch, the states are defined via boundary conditions on the past and future Poincar\'e horizons. For the transition amplitudes studied here, the states $|\Psi_{q}\>$ and $\<\Psi_{q'}|$ are defined in terms of a collection of normalizable modes with momenta $q_1,\ldots,q_r$ and $q'_1,\ldots,q'_s$ such that $q_{i}$, $q'_{j}\in V_{+}$. The $q_i$ and $q'_j$ are incoming and outgoing momenta respectively. In practice, these transition amplitudes are computed by replacing some of the time-ordered bulk-to-boundary propagators in a standard Witten diagram with the corresponding on-shell propagators, i.e. the normalizable modes \cite{Balasubramanian:1998de,Balasubramanian:1998sn}. From \eqref{eq:SplitLorentzian}, we see that cutting a bulk-to-bulk propagator produces sub-diagrams with normalizable external lines. To be concrete, we can consider the cut of a tree-level exchange diagram in $\Phi^3$ theory,
\begin{equation}
\includegraphics[scale=.22]{Cut_Tree_Lorentzian.pdf}~.
\label{eq:Cut_Tree_Lorentzian}
\end{equation}
The dotted lines on the right-hand side of \eqref{eq:Cut_Tree_Lorentzian} are the on-shell bulk-to-boundary propagators, $K^{+}_{\Delta}(k,z)$, while the undotted lines are the corresponding Feynman propagators, $K_{\Delta}(k,z)$. Following the cutting rules, we also complex conjugate the three-point Witten diagram to the right of the cut. Specifically, we find:
\begin{align*}
-2~\Re W'_{\phi,\text{exch}}(k_1,\ldots,k_4)=g^{2}\int\limits_{0}^{\infty}\frac{dz_1dz_2}{z_1^{d+1}z_2^{d+1}}&K_{\Delta}(k_1,z_1)K_{\Delta}(k_2,z_1)G^{+}_{\Delta}(k_{12},z_1,z_2)
\\
&K^*_{\Delta}(k_3,z_2)K^*_{\Delta}(k_4,z_2), \label{eq:Cut_Tree_Lorentzian_eqn}
\numberthis
\end{align*}
where $k_{ij}=k_i+k_j$ and the prime means we drop the overall momentum conserving $\delta$-function,
\begin{align}
W(k_1,\ldots,k_4)=(2\pi)^{d}\delta^d(k_1+\ldots+k_4)W'(k_1,\ldots,k_4)~.
\end{align}
Using \eqref{eq:SplitLorentzian} we can rewrite this as
\begin{align*}
-2~\Re W'_{\phi,\text{exch}}(k_1,\ldots,k_4)=& g^{2} \int\limits_{0}^{\infty}\frac{dz_1dz_2}{z_1^{d+1}z_2^{d+1}}K_{\Delta}(k_1,z_1)K_{\Delta}(k_2,z_1)K^{+}_{\Delta}(k_{12},z_1)
\\
&\frac{1}{\<\!\<\f(-k_{12})\f(k_{12})\>\!\>}K^{+}_{\Delta}(k_{12},z_2)K^{*}_{\Delta}(k_3,z_2)K^{*}_{\Delta}(k_4,z_2)~.
\numberthis
\label{eq:ReTreeExch}
\end{align*}
Denoting the three-point transition amplitude as
\begin{align}
\mathcal{R}_{3-\textrm{pt}}(k_1,k_2 | k)= ig\int\limits_{0}^{\infty}\frac{dz}{z^{d+1}}&K_{\Delta}(k_1,z)K_{\Delta}(k_2,z)K^{+}_{\Delta}(k,z)~,
\end{align}
we see that the cut Witten diagram is a product of transition amplitudes:
\begin{align}
-2~\Re W'_{\phi,\text{exch}}(k_1,\ldots,k_4)=\mathcal{R}_{3-\textrm{pt}}(k_1,k_2| k_{12}) \frac{1}{\<\!\<\f(-k_{12})\f(k_{12})\>\!\>} \mathcal{R}^*_{3-\textrm{pt}}(k_3,k_4|k_{12} )~. \label{eq:ReWtoTransition}
\end{align}
The final result and ordering agree with inserting a resolution of the identity in the right-hand side of \eqref{eq:FromReToTTb}. Specifically, we can insert a complete set of single-particle states, which we label as $|\Psi_{k}\>$, into \eqref{eq:FromReToTTb} to find:
\begin{align}
\<\overline{T}[\f(k_3)\f(k_4)]T[\f(k_1)\f(k_2)]\>=& \int\limits_{V_+}\frac{d^{d}k}{(2\pi)^{d}}\frac{\<0|\overline{T}[\f(k_3)\f(k_4)]|\Psi_{k}\>\<\Psi_{k}|T[\f(k_1)\f(k_2)]|0\>}{\<\Psi_{k}|\Psi_{k}\>}
\nonumber \\
=& \int\limits_{V_+}\frac{d^{d}k}{(2\pi)^{d}}\frac{\<\Psi_{k}|T[\f(k_3)\f(k_4)]|0\>^*\<\Psi_{k}|T[\f(k_1)\f(k_2)]|0\>}{\<\Psi_{k}|\Psi_{k}\>}~. \label{eq:sumpoincstates}
\end{align}
We can restrict to single-particle states because we are working at tree-level in the AdS theory.
The result from the cutting rules \eqref{eq:ReWtoTransition} and from inserting a complete set of states \eqref{eq:sumpoincstates} then agree due to the relations,
\begin{align}
\<\Psi_{k}|T[\f(k_1)\f(k_2)]|0\>&= (2\pi)^{d}\delta^{d}(k+k_{12})\mathcal{R}_{3-\textrm{pt}}( k_1,k_2|k)~,
\\
\<\Psi_{k}|\Psi_{k}\>&=(2\pi)^{d}\<\!\<\f(-k)\f(k)\>\!\>~.
\end{align}
This example shows that the cutting rules for Witten diagrams, which were derived using purely diagrammatic identities, have a simple correspondence with transition amplitudes defined between states on the Poincar\'e horizons.
The definition we have used for the transition amplitudes is perturbative in nature, as they are defined directly via Witten diagrams.
In principle, one can also give a non-perturbative definition for Poincar\'e transition amplitudes via correlation functions in global AdS. We will not need this definition and will instead point the reader to \cite{Balasubramanian:1999ri,Raju:2011mp} for more details.
\subsection{Higher-Point Functions}
\label{sec:Higher_Point}
In this section we will briefly discuss the cutting rules for higher-point functions. We start by using the CFT optical theorem \eqref{eq:CombCor} for general points \cite{Gillioz:2016jnn}:
\begin{align}
&\<T[\f(x_1)\ldots\f(x_n)]\>+(-1)^n\<\overline{T}[\f(x_1)\ldots\f(x_n)]\>
\nonumber \\ &\hspace{1in}=-\sum\limits_{r=1}^{n-1}(-1)^r\sum\limits_{\substack{\sigma\in\Pi(r,n-r)}}\<\overline{T}[\f(x_{\s_1})\ldots\f(x_{\s_r})]T[\f(x_{\s_{r+1}})\ldots\f(x_{\s_n})]\>~, \label{sec:CutnPoint}
\end{align}
where we recall $\Pi(r,n-r)$ is the set of partitions of $\{1,\ldots,n\}$ into two sets of size $r$ and $n-r$. This relation tells us that the (real) imaginary parts of (even) odd-point correlators can be expressed in terms of lower-point correlators. We can then factorize the right-hand side by using a resolution of the identity.
Next, we use that the cutting rules given in Sections \ref{sec:ReviewCutkosky} and \ref{sec:UnitarityCutsAdS} also compute the real and imaginary pieces of even- and odd-point functions, respectively. In the cutting rules, this happens because there is a factor of $(-1)$ for each external point to the right of the cut. For general $n$-point Witten diagrams we find
\begin{align}
W(x_1,\ldots,x_n)+(-1)^nW^{*}(x_1,\ldots,x_n)=-\sum\limits_{q=2}^{2^{n+m}-1}\widehat{W}_{q}(x_1,\ldots,x_n)~.\label{eq:WittenGenNpoints}
\end{align}
This result is expected, as one can also derive \eqref{eq:WittenGenNpoints} directly from \eqref{sec:CutnPoint} using the Schwinger-Keldysh rules.
In Section \ref{sec:Polyakov} we used a special choice of kinematics to relate $\Re \<T[\f\f\f\f]\>$ to a double-commutator. The motivation there was to make a connection with the Lorentzian inversion formula \cite{Caron-Huot:2017vep,ssw,Kravchuk:2018htv}, where the same double-commutator appears. For higher-point functions the corresponding inversion formula is not known, but certain kinematics still simplify \eqref{sec:CutnPoint}. One natural choice is to set all the external momenta to be spacelike, $k_i^{2}>0$. Then the terms with $r=1$ and $r=n-1$ vanish in \eqref{sec:CutnPoint} since $\f(k_i)|0\>=0$ with this choice. At the level of Witten diagrams, this choice of momenta sets the external cuts to zero.
As an example, we can consider a five-point function with the following kinematics:
\begin{align}
k_{i}^{2}>0, \quad (k_i+k_j)^{2}>0, \quad \text{except} \quad k_1+k_2\in V_+~.
\label{eq:5ptKinematics}
\end{align}
In this case,
\begin{align}
-2i\Im \<T[\f(k_1)\ldots\f(k_5)]\>= \<\overline{T}\left[\f(k_3)\f(k_4)\f(k_5)\right]T[\f(k_1)\f(k_2)]\>~.
\end{align}
Next, we will look at the allowed cuts for an individual Witten diagram. Given the restrictive kinematics we have chosen, the momentum flowing through a cut has to be equal to $k_{12}$. For example, for the following five-point tree-level diagram,
\begin{equation}
\includegraphics[scale=.28]{Five_Point_Cut.pdf}
\label{eq:fivepointdiagram}
\end{equation}
only the above cut is non-zero. Defining the three and four-point transition amplitudes as
\begin{align*}
\mathcal{R}_{3-\textrm{pt}}(k_1,k_2 | k) &= ig\int\limits_{0}^{\infty}\frac{dz}{z^{d+1}}K_{\Delta}(k_1,z)K_{\Delta}(k_2,z)K^{+}_{\Delta}(k,z)~,
\numberthis
\\
\mathcal{R}_{4-\textrm{pt}}(k_3,k_4,k_5 | k) &=-g^2\int\limits_{0}^{\infty}\frac{dz_1dz_2}{z_1^{d+1}z_2^{d+1}}K^{+}_{\Delta}(k,z_1)K_{\Delta}(k_5,z_1)G_{\Delta}(k+k_5,z_1,z_2)
\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~
K_{\Delta}(k_3,z_2)K_{\Delta}(k_4,z_2)~,
\numberthis
\end{align*}
we find, in the kinematics \eqref{eq:5ptKinematics}, that
\begin{align}
-2i\hspace{.1cm}\Im \<T[\f(k_1)\ldots\f(k_5)]\>=\frac{1}{\<\!\<\f(-k_{12})\f(k_{12})\>\!\>}\mathcal{R}_{3-\textrm{pt}}( k_1,k_2 |k_{12}) \mathcal{R}^*_{4-\textrm{pt}}( k_3,k_4,k_5 | k_{12})~.
\end{align}
We see that the cut five-point diagram can be written as the product of two transition amplitudes, in agreement with our previous analysis.
An important open question is: what is the minimal set of reduced correlators that we need to know in order to reconstruct the full five-point function? At four points we can choose spacelike momenta to reduce $\Re \<T[\f\f\f\f]\>$ to double-commutators. There are three double-commutators we can consider, but the Lorentzian inversion formula \cite{Caron-Huot:2017vep} shows that two of them are sufficient to reconstruct the full correlator. It would be interesting to answer this question at higher points and understand the connection to the cutting rules presented here.
\section{Applications to Witten Diagrams}
\label{sec:Examples}
In this section we check our cutting rules in a variety of ways. At tree level, we confirm that our cutting rules agree with the discontinuity of the full Witten diagram. By using the momentum-space OPE \cite{Gillioz:2019lgs,Gillioz:2019iye}, we will relate the bulk cut structure to the spectrum of the dual CFT and find agreement with previous work on the OPE limit of Witten diagrams. Finally, by studying tree and loop examples, we show that cut AdS diagrams become the corresponding cut flat space diagrams in the flat space limit. This gives evidence that the flat space limit of the AdS Cutkosky rules is the corresponding set of S-matrix cutting rules.
\subsection{OPE and Flat Space Limits}
\label{sec:Limits}
\subsubsection*{OPE in momentum space}
In this section we study how the bulk cutting procedure in the Poincar\'e patch is related to the standard boundary OPE. We begin with the relation
\begin{align}
-2~\Re \<T[\f(k_1)\f(k_2)\f(k_3)\f(k_4)]\>=\<\overline{T}[\f(k_3)\f(k_4)]T[\f(k_1)\f(k_2)]\>~,
\end{align}
which holds in the configuration \eqref{eq:kinematics}. To find the Lorentzian OPE \cite{Gillioz:2016jnn,Gillioz:2018kwh,Gillioz:2018mto,Gillioz:2019lgs}, we will insert a complete set of states between the pairs of ordered operators. Specifically, we will use
\begin{align}
\mathbb{I}=|0\>\<0| + \sum\limits_{{\cal O}}\int\limits_{V_{+}}\frac{d^dk}{(2\pi)^{d}}P^{\Delta_{{\cal O}}}_{\mu_1\ldots\mu_\ell,\nu_1\ldots\nu_\ell}(k)|{\cal O}^{\mu_1\ldots\mu_\ell}(k)\>\<{\cal O}^{\nu_1\ldots\nu_\ell}(-k)|~,
\label{eq:IdRes}
\end{align}
where the sum runs over all the local primary operators ${\cal O}$ of the boundary CFT.
The explicit form of the projector is
\begin{align}
P^{\Delta}_{\mu_1\ldots\mu_\ell,\nu_1\ldots\nu_\ell}(k)=&\frac{(-k^{2})^{d/2-\Delta}}{C_{\Delta}}\sum\limits_{n=0}^{\ell}\frac{2^n\ell!(\Delta-\frac{d}{2})_{n}}{n!(\ell-n)!(\Delta-\ell-d+2)_{n}}
\nonumber
\\
& \left(\frac{1}{\ell!}\frac{k_{\mu_1}k_{\nu_1}\ldots k_{\mu_n}k_{\nu_n}}{(-k^{2})^{n}}\eta_{\mu_{n+1}\nu_{n+1}}\ldots\eta_{\mu_{\ell}\nu_{\ell}}+\text{perms} - \text{traces}\right)~.
\end{align}
The tensor $P^{\Delta}_{\mu_1\ldots\mu_\ell,\nu_1\ldots\nu_\ell}(k)$ is what appears in the two-point function for the shadow operator $\widetilde{{\cal O}}_{d-\Delta,\ell}$, i.e. for a fictitious operator of dimension $d-\Delta$ and spin $\ell$. The factor $C_{\Delta}$ is related to the normalization of the two-point function and is given by\footnote{To compare with eq.~(2.15) of \cite{Gillioz:2018mto} we note that the operators there are unit normalized, $\<{\cal O}_{\Delta,\ell}|{\cal O}_{\Delta,\ell}\>=1$, while here we have $\<{\cal O}_{\Delta,\ell}|{\cal O}_{\Delta,\ell}\>=\frac{(\ell+\Delta-1)\Gamma(\Delta)}{2\pi^{d/2}(\Delta-1)\Gamma(\Delta+1-d/2)}$.}
\begin{align}
C_{\Delta}=\frac{ 2^{d-2 \Delta }\pi}{\Gamma \left(-\frac{d}{2}+\Delta +1\right)^2}~.
\end{align}
Finally, using \eqref{eq:IdRes} gives the momentum-space OPE \cite{Gillioz:2019lgs}
\begin{align}
\hspace{-.7cm}
\<\!\<\overline{T}[\f(k_3)\f(k_4)]T[\f(k_1)\f(k_2)]&\>\!\>=\sum\limits_{{\cal O}}\<\!\<\overline{T}[\f(k_3)\f(k_4)]{\cal O}^{\mu_1\ldots\mu_\ell}(k_{12})\>\!\>
\nonumber \\
\times& P^{\Delta_{{\cal O}}}_{\mu_1\ldots\mu_\ell,\nu_1\ldots\nu_\ell}(k_{12})\<\!\<{\cal O}^{\nu_1\ldots\nu_\ell}(-k_{12})T[\f(k_1)\f(k_2)]\>\!\>~.
\end{align}
As a reminder, the $\<\!\<\ldots\>\!\>$ notation means that we drop the overall momentum-conserving $\delta$-function.
To relate the bulk cutting rules to the momentum-space OPE, we will study Witten diagrams in the limit that the exchanged momentum goes to zero, $k_{12}\rightarrow 0$. In this limit, we have \cite{Gillioz:2019lgs}:
\begin{align}
\<\!\<{\cal O}(-k_{12})T[\f(k_1)\f(k_2)]\>\!\>\sim (-(k_{12})^2)^{\Delta_{{\cal O}}-d/2}(k_1^2-i\epsilon)^{\Delta-\Delta_{{\cal O}}/2-d/2}~.
\end{align}
The exchange of the operator ${\cal O}$ therefore gives the scaling
\begin{align}
\<\!\<\overline{T}[\f(k_3)\f(k_4)]T[\f(k_1)\f(k_2)]\>\!\>\bigg|_{{\cal O}}\sim (-k_{12}^2)^{\Delta_{{\cal O}}-d/2}~, \ \label{eq:OPETTb}
\end{align}
where we used that the projector scales as $P^{\Delta}_{\mu_1\ldots\mu_\ell,\nu_1\ldots\nu_\ell}(k)\sim (-k^{2})^{d/2-\Delta}$. Using this zero-momentum limit, we will show that there is a correspondence between the cut lines of a Witten diagram and the operators that appear in the boundary OPE. That is, if we can perform a cut in which only a single propagator for the bulk scalar $\Phi$ is cut, then its dual operator $\f$ must appear in the boundary OPE. Similarly, if we can cut multiple $\Phi$ lines then the corresponding multi-trace operator built from $\f$ must appear in the OPE.
The correspondence between bulk cuts and boundary operators is expected, both from previous work on AdS/CFT unitarity \cite{Fitzpatrick:2011dm,Aharony:2016dwx,Yuan:2017vgp,Yuan:2018qva,Meltzer:2019nbs} and from the previous discussion on cut graphs and Poincar\'e transition amplitudes. However, one subtlety is that our derivation of the cutting rules is based on quantizing the AdS theory on slices of constant Poincar\'e time. This is why there is a simple map between the cuts of a diagram and the Poincar\'e transition amplitudes. On the other hand, in order to study the OPE we quantize the CFT using radial quantization, which is dual to quantizing the AdS theory on slices of constant global time. We therefore do not expect that our Poincar\'e cuts necessarily isolate the dual single- or multi-trace operator in the boundary OPE. Instead, we will give evidence for a weaker but still useful statement: the existence of a bulk cut implies the existence of the corresponding single- or multi-trace operators in the boundary OPE.
We begin by studying the simplest non-trivial case, the exchange Witten diagram, which is given in \eqref{eq:Cut_Tree_Lorentzian}-\eqref{eq:Cut_Tree_Lorentzian_eqn} and for convenience is reproduced below,
\begin{align*}
-2~\Re W'_{\f,\text{exch}}(k_1,\ldots,k_4)=g^{2}\int\limits_{0}^{\infty}\frac{dz_1dz_2}{z_1^{d+1}z_2^{d+1}}&K_{\Delta}(k_1,z_1)K_{\Delta}(k_2,z_1)G^{+}_{\Delta}(k_{12},z_1,z_2)
\\
&K^*_{\Delta}(k_3,z_2)K^*_{\Delta}(k_4,z_2)~.
\numberthis
\label{eq:treeRePart}
\end{align*}
The tree diagram will be a useful example for seeing the cutting rules in action and understanding the structure in more general diagrams. As expected, we will show that the boundary OPE of \eqref{eq:treeRePart} involves only the exchange of the single-trace operator $\f$ and its descendants \cite{Caron-Huot:2017vep}.
To understand how the scaling \eqref{eq:OPETTb} emerges, we will study the limit $k_{12}\rightarrow 0$ under the $z$ integrals.\footnote{In Section \ref{subsec:4ptexchange} we will show the expected scaling emerges when we perform the $z$ integrals first.} From \eqref{eq:treeRePart}, we see that the dependence on $k_{12}$ comes from the on-shell bulk-to-bulk propagator. In the zero-momentum limit, the propagator takes the form
\begin{align}
G^{+}_{\Delta}(k,z_1,z_2)\approx \frac{\pi}{2^{2\nu}\Gamma(1+\nu)^{2}} (z_1z_2)^{\Delta}(-k^2)^{\Delta-d/2}~,
\end{align}
where we used the explicit expression \eqref{eq:PosBB}. Substituting this into \eqref{eq:treeRePart} and comparing to \eqref{eq:OPETTb} confirms the expected scaling due to $\f$ exchange.
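This limit is also easy to check numerically, using the fact that for $k\in V_{+}$ the on-shell propagator reduces to $\pi(z_1z_2)^{d/2}\mathcal{J}_{\nu}(\sqrt{-k^{2}}z_1)\mathcal{J}_{\nu}(\sqrt{-k^{2}}z_2)$, the $\epsilon\rightarrow0$ limit of the regulated propagator used later in Section \ref{subsec:4ptexchange}. A \texttt{scipy} sketch of ours, with the illustrative values $d=3$ and $\Delta=2$:
\begin{verbatim}
import numpy as np
from scipy.special import jv, gamma

# Illustrative values: d = 3, Delta = 2, so nu = Delta - d/2 = 1/2.
d, Delta = 3, 2.0
nu = Delta - d/2
z1, z2 = 0.7, 1.3
for q in [1e-2, 1e-4, 1e-6]:          # q = sqrt(-k^2) -> 0
    G_plus = np.pi * (z1*z2)**(d/2) * jv(nu, q*z1) * jv(nu, q*z2)
    approx = np.pi / (2**(2*nu) * gamma(1+nu)**2) * (z1*z2)**Delta * q**(2*nu)
    print(q, G_plus / approx)          # ratio -> 1 as q -> 0
\end{verbatim}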
We next study the bubble diagram, drawn in \eqref{eq:Witten_CutToVertices}, in the same limit. As we are performing a two-particle cut, we expect that by taking the limit $k_{12} \rightarrow0$ we will see the exchange of the double-trace operators $[\f\f]_{n,J}$ in the boundary OPE. The double-traces have the form
\begin{align}
[\f\f]_{n,J}=\f\,\Box^{n}\partial^{\mu_1}\ldots\partial^{\mu_J}\f -\text{traces}~,
\\
\Delta_{n,J}=2\Delta+2n+J~.
\end{align}
We will study the leading OPE contribution, which is governed by the exchange of the scalar $[\f\f]_{0,0}$ with dimension $2\Delta$. The full expression for the cut bubble is
\begin{align}
-2~\Re W'_{\text{bubble}}(k_1,\ldots,k_4)=\int\limits_{V_+} \frac{d^{d}\ell}{(2\pi)^{d}}\int\limits_{0}^{\infty}&\frac{dz_1dz_2}{z_1^{d+1}z_2^{d+1}}K_{\Delta}(k_1,z_1)K_{\Delta}(k_2,z_1)G^{+}_{\Delta}(\ell,z_1,z_2)\nonumber
\\ &G^{+}_{\Delta}(k_{12}-\ell,z_1,z_2) K^*_{\Delta}(k_3,z_2)K^*_{\Delta}(k_4,z_2)~.
\end{align}
We will take the limit $k_{12}\rightarrow 0$ with $k_{12}\in V_{+}$. The above expression involves on-shell propagators that are non-zero only for $k_{12}-\ell\in V_{+}$ and $\ell \in V_+$. These conditions imply that when $k_{12}\rightarrow 0$, we must also have $\ell\rightarrow0$. That is, the integration region for the phase space integral is bounded by the size of the incoming momenta. To see this explicitly, we write
\begin{align}
k_{12}=v+\ell, \qquad v\in V_{+}~.
\end{align}
Squaring both sides yields
\begin{align}
k_{12}^2=v^2+2v\cdot \ell +\ell^2~.
\end{align}
As $v$, $\ell\in V_+$, each term on the right-hand side is negative. Taking $k_{12}\rightarrow 0$ then requires that $v$, $\ell\rightarrow 0$ as well. When we take this limit, we therefore find that each on-shell propagator scales like $(-k_{12}^2)^{\Delta-d/2}$ while the shrinking phase-space integral gives an extra factor of $(-k_{12}^2)^{d/2}$. Putting this together, we find the expected scaling:
\begin{align}
-2~\Re W'_{\text{bubble}}(k_1,\ldots,k_4)\sim (-k_{12}^{2})^{\Delta-d/2}~.
\end{align}
This pattern continues at higher loops: when we cut $n$ lines, we find the expected scaling for an $n$-trace operator. For each propagator we cut, we have a factor of $(-k_{12}^{2})^{\Delta-d/2}$ and for each loop momentum in a cut line we find a factor of $(-k_{12}^{2})^{d/2}$ from the loop measure. Since $n$ cut lines involve $n-1$ constrained loop momenta, the total scaling is $(-k_{12}^{2})^{n(\Delta-d/2)+(n-1)d/2}=(-k_{12}^{2})^{n\Delta-d/2}$, the expected behavior for an $n$-trace operator of leading dimension $n\Delta$.
\subsubsection*{Flat Space Limit}
Studying the flat space limit together with our cutting rules will provide another non-trivial consistency check. As we are working in the Poincar\'e patch, we will use the flat space limit given in \cite{Raju:2012zr}, which we review here. To define this limit, we write the Witten diagram as an independent function of the momenta $k_i$ and their norms $|k_i| \equiv \sqrt{k_i^{2}}$,
\begin{align}
W(k_1,|k_1|,\ldots,k_4,|k_4|)~.
\end{align}
We will assume the $d$-dimensional momentum $k$ is spacelike and then define a $(d+1)$-dimensional null vector,
\begin{align}
\tilde{k}=(k,i|k|)~.\label{eq:flat_momenta}
\end{align}
If we define the total energy as
\begin{align}
E_{T}=\sum\limits_{i}|k_i|~,
\end{align}
then the flat-space amplitude comes from a total energy pole of the Witten diagram,
\begin{align}
M(\tilde{k}_1,\ldots,\tilde{k}_{4})\propto\lim\limits_{E_{T}\rightarrow0} (E_T)^{\alpha}W(k_1,|k_1|,\ldots,k_4,|k_4|)~.
\end{align}
In general, the exact strength of the pole and proportionality factor depend on the loop order and the theory.\footnote{For related work on dS correlators see e.g. \cite{Arkani-Hamed:2017fdk,Arkani-Hamed:2018kmz,Baumann:2020dch,Benincasa:2018ssx,Arkani-Hamed:2018bjr,Benincasa:2019vqr}.}
In the physical region, all $|k_i|$ are positive for $k$ spacelike and we do not have access to the total energy pole. To reach this pole, we instead treat the $|k_i|$ as independent complex variables and analytically continue in them.\footnote{This analytic continuation is distinct from the one used to go from spacelike to timelike momenta, where the $i\epsilon$ prescription determines how to approach the branch cut at $k^2<0$ and $|k|$ is not an independent variable. Instead one keeps $|k|=\sqrt{k^2}$, which is imaginary for timelike momenta.} However, to obtain null momenta in the flat space limit, we still need to impose that $|k_i|^2=k_i\cdot k_i$ before taking the flat space limit. The procedure is then: we first analytically continue in some of the $|k_i|$ to flip their signs and then take the limit $E_{T}\rightarrow 0$. By using \eqref{eq:flat_momenta}, we recover the flat-space amplitude with complexified $(d+1)$-dimensional momenta. This flat space limit originates from the fact that the total energy pole comes from the $z\rightarrow \infty$ limit of the AdS integration, where the AdS integrand takes the same form as the flat-space integrand. Comparing the AdS and flat-space expressions fixes the coefficient of $|k|$ in \eqref{eq:flat_momenta} \cite{Raju:2012zr}.
To be concrete, we can consider a conformally-coupled scalar in AdS, which is dual to a boundary scalar of dimension $\Delta_{c}=\frac{1}{2}(d+1)$. The flat-space amplitude is then the residue of the total energy pole,
\begin{align}
W(k_1,|k_1|,\ldots,k_4,|k_4|)=\frac{M(\tilde{k}_{1},\ldots,\tilde{k}_{4})}{E_T}+\ldots,
\end{align}
where the omitted terms are regular at $E_T=0$.
Before performing any analytic continuation in $|k_i|$, we find
\begin{align}
\Re W(k_1,|k_1|,\ldots,k_4,|k_4|)=\frac{\Re M(\tilde{k}_{1},\ldots,\tilde{k}_{4})}{E_T}+\ldots
\end{align}
and so we identify the discontinuity of the flat space tree-level amplitude as the coefficient of a total energy pole in $\Re W$.
One simple way to understand this limit is to write the real part of the CFT correlator as
\begin{align}
2~\Re\<T[\f\f\f\f]\> =\<T[\f\f\f\f]\>+\<\overline{T}[\f\f\f\f]\>~. \label{eq:RePartFlatSpace}
\end{align}
We can then take the flat space limit of each correlator on the right-hand side individually. The flat space limit of the time-ordered correlator gives matrix elements for $i\mathcal{T}$ \cite{Gary:2009ae,Okuda:2010ym} while the flat space limit of the anti-time-ordered correlator gives matrix elements for $-i\mathcal{T}^{\dagger}$. Their sum is then the natural object to study whose flat space limit yields $\Im(\mathcal{T})$. We then see from \eqref{eq:FromReToDC} that for certain kinematics the total energy pole in the causal double-commutator computes the discontinuity of a flat space amplitude \cite{Alday:2017vkk,Alday:2018kkw,Bissi:2020wtv}. We will verify this explicitly in the following sections.
\subsection{Four-Point Scalar Exchange}
\label{subsec:4ptexchange}
To make the previous discussions more concrete, we will now consider explicit examples of cut Witten diagrams. For simplicity, we consider diagrams with external conformally-coupled scalars $\phi_{c}$ with dimension $\Delta_{c}=\frac{1}{2}(d+1)$. When calculating the real part of a four-point Witten diagram, we will always work in the kinematics \eqref{eq:kinematics}. Computing the real part of a diagram is then equivalent to taking a discontinuity with respect to $k_{12}^{2}$ across the branch cut at $k_{12}^{2}<0$. In contrast to the flat space limit, when computing this discontinuity we impose that $|k_i|=\sqrt{k_{i}^{2}}$ and similarly for $k_{ij}$.
One benefit of using conformally-coupled scalars is that the bulk-to-boundary propagator takes a simple form,
\begin{align}
K_{\Delta_c}(k,z)= -i z^{\frac{d-1}{2}}e^{-|k|z}~.
\end{align}
First, we will consider an exchange diagram for $\<\f_c\f_c\f_c\f_c\>$ where the exchanged scalar $\mathcal{O}$ has arbitrary dimension $\Delta_{\mathcal{O}}$:
\begin{align}
-2~\Re W'_{{\cal O},\text{exch}}(k_1,\ldots,k_4)=g^{2}\int\frac{dz_1dz_2}{z_1^{d+1}z_{2}^{d+1}}&K_{\Delta_c}(k_1,z_1)K_{\Delta_c}(k_2,z_1)G^+_{\Delta_{{\cal O}}}(k_{12},z_1,z_2)
\nonumber \\ &K^*_{\Delta_c}(k_3,z_2)K^*_{\Delta_c}(k_4,z_2)~.\label{eq:cutExchV2}
\end{align}
The $z$ integrals can be evaluated and we find
\begin{align}
-2~\Re W'_{{\cal O},\text{exch}}(k_1,\ldots,k_4)=&
g^2\frac{\pi 2^{d-2 \Delta_{{\cal O}}} \Gamma (\Delta_{{\cal O}}-1)^2 }{\Gamma \left(-\frac{d}{2}+\Delta_{{\cal O}}+1\right)^2}
\frac{
(-k_{12}^{2})^{\Delta_{{\cal O}}-\frac{d}{2}} \theta(-k_{12}^{2})\theta(k_{12}^0)
}
{
((|k_1|+|k_2|) (|k_3|+|k_4|))^{\Delta_{{\cal O}}-1}
}
\nonumber \\
&\, _2F_1\left(\frac{\Delta_{{\cal O}}-1}{2},\frac{\Delta_{{\cal O}}}{2};\Delta_{{\cal O}}-\frac{d}{2}+1;\frac{k_{12}^{2}}{(|k_1|+|k_2|)^2}\right)
\nonumber \\
& _2F_1\left(\frac{\Delta_{{\cal O}}-1}{2},\frac{\Delta_{{\cal O}}}{2};\Delta_{{\cal O}}-\frac{d}{2}+1;\frac{k_{12}^{2}}{(|k_3|+|k_4|)^2}\right)~.
\end{align}
We see that when $k_{12}^{2}\rightarrow 0$, the Witten diagram scales like
\begin{align}
-2~\Re W'_{{\cal O},\text{exch}}(k_1,\ldots,k_4)\sim (-k_{12}^{2})^{\Delta_{{\cal O}}-\frac{d}{2}}~,
\end{align}
which corresponds to the exchange of ${\cal O}$ in the boundary CFT. Expanding the ${}_{2}F_1$ hypergeometric functions yields additional powers that correspond to descendants of ${\cal O}$. We use the notation $k_{12}^{2}$ instead of $|k_1+k_2|^2$ to make the analytic continuation in these variables clearer. This will also distinguish them from $|k_i|$, which are analytically continued to obtain the flat space limit.
To verify that our cutting rules give $-2\hspace{.1cm}\Re W$, we will consider a case where the Witten diagram can be computed in full and then take its discontinuity. One simple example is $d=5$ and $\Delta_{{\cal O}}=\Delta_{c}=3$. Assuming $k_i$ and $k_{12}$ are spacelike, we find
\begin{align}
W'_{\f_c,\text{exch}}(k_1,\ldots,k_4)=&\int\limits_{0}^{\infty} dz_1dz_2\int\limits_{0}^{\infty} dp\frac{2ig^2}{\pi}\frac{ \sin (p z_1) \sin (p z_2) e^{-(|k_1|+|k_2|)z_1 -(|k_3|+|k_4|)z_2 }}{ \left(k_{12}^{2}+p^2\right)}
\nonumber \\ =&\int\limits_{0}^{\infty}dp\frac{2ig^2}{\pi}\frac{ p^2}{ \left(k_{12}^{2}+p^2\right) \left((|k_1|+|k_2|)^2+p^2\right) \left((|k_3|+|k_4|)^2+p^2\right)}
\nonumber \\ =&\frac{i g^2}{\left(\sqrt{k_{12}^{2}}+|k_1|+|k_2|\right) \left(\sqrt{k_{12}^{2}}+|k_3|+|k_4|\right) (|k_1|+|k_2|+|k_3|+|k_4|)}~. \label{eq:exch_confScalar_d5}
\end{align}
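The $p$ integral in the second line can be checked independently. The following \texttt{sympy} computation (ours; here $q=\sqrt{k_{12}^{2}}$, $a=|k_1|+|k_2|$, $b=|k_3|+|k_4|$) confirms the partial-fraction result,
\begin{verbatim}
import sympy as sp

p = sp.symbols('p', positive=True)
q, a, b = sp.symbols('q a b', positive=True)  # q = sqrt(k12^2), a = |k1|+|k2|,
                                              # b = |k3|+|k4|
I = sp.integrate(p**2 / ((q**2 + p**2)*(a**2 + p**2)*(b**2 + p**2)),
                 (p, 0, sp.oo))               # may take a few seconds
target = sp.pi / (2*(q + a)*(q + b)*(a + b))
print(sp.simplify(I - target))                        # 0
print((I - target).subs({q: 2, a: 3, b: 5}).evalf())  # 0, numerical spot check
\end{verbatim}
so that $W'_{\f_c,\text{exch}}=(2ig^{2}/\pi)\,I$ reproduces \eqref{eq:exch_confScalar_d5}.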
Next, to go to the kinematics \eqref{eq:kinematics} we need to take $k_{12}^{2}<0$. From \eqref{eq:exch_confScalar_d5} we see that there is a square root branch cut for timelike $k_{12}$. To compute the discontinuity across the cut, we take the difference between approaching the negative real $k_{12}^{2}$ axis from below and from above in the complex $k_{12}^{2}$ plane. This yields
\begin{align}
-2~\Re W'_{\f_c,\text{exch}}(k_1,\ldots,k_4)=\frac{2 g^2 \sqrt{-k_{12}^{2}}}{\left(k_{12}^{2}-(|k_1|+|k_2|)^2\right) \left(k_{12}^{2}-(|k_3|+|k_4|)^2\right)}~. \label{eq:exch_confScalar_d5_Cut}
\end{align}
This agrees with the cutting rules \eqref{eq:cutExchV2} when we set $d=5$ and ${\cal O}=\phi_c$.
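This branch-cut computation can also be carried out symbolically. The \texttt{sympy} sketch below (ours) encodes the Feynman prescription $k_{12}^{2}\rightarrow k_{12}^{2}-i\epsilon$, i.e. $\sqrt{k_{12}^{2}}\rightarrow-i\sqrt{s}$ with $s=-k_{12}^{2}>0$ for timelike $k_{12}$, and reproduces \eqref{eq:exch_confScalar_d5_Cut}:
\begin{verbatim}
import sympy as sp

s, a, b, g = sp.symbols('s a b g', positive=True)       # s = -k12^2 > 0
W = lambda sq: sp.I*g**2 / ((sq + a)*(sq + b)*(a + b))  # sq = sqrt(k12^2)

# Feynman prescription: sqrt(k12^2) -> -i sqrt(s) for timelike k12.
minus_2ReW = sp.simplify(-2*sp.re(W(-sp.I*sp.sqrt(s))))
target = 2*g**2*sp.sqrt(s) / ((s + a**2)*(s + b**2))
print(sp.simplify(minus_2ReW - target))  # 0
\end{verbatim}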
Next, we study the flat space limit for the exchange diagram. The total energy pole in \eqref{eq:exch_confScalar_d5} appears explicitly, and its residue gives the flat space amplitude:
\begin{align}
\lim\limits_{E_T\rightarrow 0} E_T \hspace{.1cm} W'_{\f_c,\text{exch}}(k_1,\ldots,k_4)
=-\frac{ig^{2}}{s}~, \label{eq:flat_space_Exchange}
\end{align}
where we identified the flat space Mandelstam invariant
\begin{align}
s=(|k_1|+|k_2|)^{2}-k_{12}^{2}~. \label{eq:flatspaceinvariant}
\end{align}
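As a cross-check of \eqref{eq:flat_space_Exchange}, the residue of the total energy pole can be extracted symbolically by treating $|k_3|+|k_4|=E_T-a$ as dependent on $E_T$, with $a=|k_1|+|k_2|$ and $q=\sqrt{k_{12}^{2}}$ held fixed (a \texttt{sympy} sketch of ours):
\begin{verbatim}
import sympy as sp

q, a, ET, g = sp.symbols('q a E_T g', positive=True)
W = sp.I*g**2 / ((q + a)*(q + (ET - a))*ET)    # eq. (exch_confScalar_d5)
res = sp.simplify(sp.limit(ET*W, ET, 0))
print(res)                                     # I*g**2/(q**2 - a**2)
print(sp.simplify(res + sp.I*g**2/(a**2 - q**2)))  # 0: res = -I g^2/s,
                                                   # with s = a^2 - k12^2
\end{verbatim}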
By contrast, the real part of the Witten diagram as given in \eqref{eq:exch_confScalar_d5_Cut} does not have a total energy pole and therefore appears to vanish in the flat space limit. In order to capture the discontinuity of the flat space amplitude \eqref{eq:flat_space_Exchange}, which is given by $\delta(s)$, we need to implement the $i\epsilon$ prescription more carefully when analytically continuing the norms $|k_i|$. To see how the $\delta$-function emerges in the flat space limit, it will be convenient to use regulated $\delta$-functions in the cut propagators:
\begin{align}
G^{+,\epsilon}_\Delta(k,z_1,z_2)&=2\pi (z_1z_2)^{\frac{d}{2}}\int\limits_{0}^{\infty} dp \hspace{.1cm} p \mathcal{J}_{\nu}(pz_1)\mathcal{J}_{\nu}(pz_2)\delta^{\epsilon}(k^2+p^2)\theta(k^0)\theta(-k^{2})~,
\\
\delta^{\epsilon}(x)&=\frac{1}{\pi}\frac{\epsilon}{x^2+\epsilon^2}~.
\end{align}
Using this expression for the on-shell propagator in the cut tree diagram gives
\begin{align*}
-2~\Re W'_{\f_c,\text{exch}}&(k_1,\ldots,k_4)
\\
=&\int\limits_{0}^{\infty} dp \frac{4 g^2}{\pi}\frac{ p^2 }{ \left((|k_1|+|k_2|)^2+p^2\right) \left((|k_3|+|k_4|)^2+p^2\right)}\frac{\epsilon}{\left(\left(k_{12}^{2}+p^2\right)^2+\epsilon ^2\right)}~.
\numberthis
\end{align*}
As the integrand is symmetric under $p\rightarrow -p$, we can extend the $p$ integration to $(-\infty,\infty)$ and evaluate the integral by closing the contour in the upper half of the complex $p$ plane. We observe that there are four poles in the upper half-plane:
\begin{align}
p&=i(|k_1|+|k_2|)~, \label{eq:totEPt1}
\\
p&=i(|k_3|+|k_4|)~,\label{eq:totEPt2}
\\
p&=-\sqrt{-k_{12}^{2}-i\epsilon}~,\label{eq:nopolept1}
\\
p&=\sqrt{-k_{12}^{2}+i\epsilon}~.\label{eq:nopolept2}
\end{align}
Picking up the poles \eqref{eq:totEPt1} and \eqref{eq:totEPt2} will lead to a total energy pole in the final answer while the poles \eqref{eq:nopolept1} and \eqref{eq:nopolept2} will reproduce our earlier expression, which does not contain a total energy pole. Closing the $p$ contour and taking the limit $E_T\rightarrow 0$ before taking $\epsilon\rightarrow 0$ then yields the expected result:
\begin{align}
\lim\limits_{\epsilon\rightarrow 0}\lim\limits_{E_T\rightarrow 0}-2E_T~\Re W'_{\f_c,\text{exch}}(k_1,\ldots,k_4)&= \lim\limits_{\epsilon\rightarrow 0}\frac{2 \epsilon g^2}{\epsilon^2+\left((|k_1|+|k_2|)^2-k_{12}^{2}\right)^2}\theta(k_{12}^{0})
\nonumber \\ &=2\pi g^{2}\delta(s)\theta(k_{12}^{0})~.
\end{align}
We see that when sitting on a total energy pole, the real part of the AdS/CFT correlator reorganizes itself into a cut flat space amplitude.
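The distributional statement in the last line amounts to the usual nascent $\delta$-function: the Lorentzian $2\epsilon g^{2}/(\epsilon^{2}+s^{2})$ carries total weight $2\pi g^{2}$ independently of $\epsilon$ and vanishes pointwise for $s\neq0$ (a one-line \texttt{sympy} check of ours):
\begin{verbatim}
import sympy as sp

s = sp.symbols('s', real=True)
eps, g = sp.symbols('epsilon g', positive=True)
lorentzian = 2*eps*g**2 / (eps**2 + s**2)
print(sp.integrate(lorentzian, (s, -sp.oo, sp.oo)))  # 2*pi*g**2
print(sp.limit(lorentzian, eps, 0))                  # 0 for fixed s != 0
\end{verbatim}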
One noteworthy aspect of this flat space limit is that on the $E_{T}=0$ pole, the norms of the CFT momenta are identified with an emergent $(d+1)^{\text{th}}$ component of the external momenta. Heuristically, these norms can be thought of as the ``radial'' momenta in the AdS dual. We see from \eqref{eq:flatspaceinvariant} that the emergent component can be identified with the energy of the respective particle in flat space. An emergent energy variable is natural in the study of dS correlators, where we expect to see bulk time emerge from the study of purely spatial correlators \cite{Arkani-Hamed:2015bza,Arkani-Hamed:2017fdk}. Here we are working in Lorentzian AdS, which already has a notion of time and energy, and the extra component is more naturally identified with a complex spatial momentum.
\subsection{Four-Point Gauge Boson Exchange}
We will now repeat the previous analysis for Yang-Mills in AdS to illustrate the process for spinning fields. We will compute the cut of the Yang-Mills exchange diagram, both from the cutting rules and by taking a discontinuity of the full diagram.
Following \cite{Raju:2010by,Raju:2011mp} we work in the axial gauge $A_z^a=0$, where $a$ is the color index. Throughout this section we drop the color indices, although it is straightforward to restore them. In the axial gauge, $\epsilon \cdot k=0$ for physical bulk modes, where $\epsilon$ is the polarization vector.\footnote{We hope it is clear from context where $\epsilon$ stands for a polarization vector and where it gives the $i\epsilon$ prescription for time-ordered propagators.}
The Yang-Mills bulk-to-bulk propagators are
\begin{align}
G_{\mu\nu}^{\text{YM}}(k,z_1,z_2) &= -i (z_1 z_2)^{\frac{d-2}{2}} \int\limits_0^\infty dp ~ p \frac{\mathcal{J}_{\frac{d-2}{2}}(p z_1)\mathcal{J}_{\frac{d-2}{2}}(p z_2)}{k^2+p^2 - i \epsilon}\mathcal{P}_{\mu\nu}(k,p)~,\label{eq:YMPropBB}
\\
G_{\mu\nu}^{\text{YM},\pm}(k,z_1,z_2) &= \pi (z_1 z_2)^{\frac{d-2}{2}} \mathcal{J}_{\frac{d-2}{2}}(\sqrt{-k^{2}} z_1)\mathcal{J}_{\frac{d-2}{2}}(\sqrt{-k^{2}} z_2)\mathcal{P}_{\mu\nu}(k,\sqrt{-k^{2}})\theta(-k^{2})\theta(\pm k^0)~, \label{eq:YMPropBBW}
\end{align}
where we have defined the tensor
\begin{align}
\mathcal{P}_{\mu\nu}(k,p)=\eta_{\mu\nu}+\frac{k_{\mu}k_{\nu}}{p^{2}}~.
\end{align}
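As a quick check, $\mathcal{P}_{\mu\nu}(k,\sqrt{-k^{2}})$ is transverse to $k$, which is what makes it a projector onto physical polarizations, as used below. A \texttt{sympy} verification of ours for $d=3$, in mostly-plus signature (consistent with $k^{2}<0$ denoting timelike momenta):
\begin{verbatim}
import sympy as sp

eta = sp.diag(-1, 1, 1)                            # mostly-plus, d = 3
kv = sp.Matrix(sp.symbols('k0 k1 k2', real=True))  # k^mu
ksq = (kv.T * eta * kv)[0]                         # k^2
klow = eta * kv                                    # k_mu
P = eta + (klow * klow.T) / (-ksq)                 # P_{mu nu}, with p^2 = -k^2
print(sp.simplify(P * kv))                         # zero vector: P_{mu nu} k^nu = 0
\end{verbatim}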
Taking one point to the boundary, we then find the bulk-to-boundary propagators\footnote{To obtain the bulk-to-boundary propagator, we have dropped terms analytic in $k$ that contribute to contact terms in position space.}
\begin{align}
K_{\mu\nu}^{\text{YM}}(k,z)&=-i\frac{1}{\Gamma(\frac{d}{2})2^{d/2-1}}(\sqrt{k^{2}}z)^{\frac{d-2}{2}}\mathcal{K}_{\frac{d-2}{2}}(\sqrt{k^{2}}z)\mathcal{P}_{\mu\nu}(k,\sqrt{-k^{2}})~, \label{eq:YMPropbB}
\\
K_{\mu\nu}^{\text{YM},\pm}(k,z)&=\frac{\pi}{\Gamma(\frac{d}{2})2^{d/2-1}}(\sqrt{-k^{2}}z)^{\frac{d-2}{2}}\mathcal{J}_{\frac{d-2}{2}}(\sqrt{-k^{2}}z)\mathcal{P}_{\mu\nu}(k,\sqrt{-k^{2}})\theta(-k^{2})\theta(\pm k^0)~. \label{eq:YMPropbBW}
\end{align}
We see in \eqref{eq:YMPropBB}, \eqref{eq:YMPropbB}, and \eqref{eq:YMPropbBW} that the factor $\mathcal{P}_{\mu\nu}(k,\sqrt{-k^2})$ projects onto directions orthogonal to $k$.\footnote{In \cite{Raju:2011mp} the factor $\mathcal{P}_{\mu\nu}(k,\sqrt{-k^2})$ is not included in the bulk-to-boundary propagators as the condition $\epsilon\cdot k =0$ is imposed on the external polarizations.} The on-shell bulk-to-bulk propagator therefore factorizes into a product of on-shell bulk-to-boundary propagators. This is the same structure we saw earlier for scalar propagators in Section \ref{sec:transitionamps}. Finally, the cubic vertex is
\begin{align}
\mathcal{V}^{\mu\nu\rho}(k_1,k_2,k_3)=\frac{i}{\sqrt{2}}(\eta^{\mu\nu}(k_1-k_2)^\rho+\eta^{\nu\rho}(k_2-k_3)^{\mu}+\eta^{\rho\mu}(k_3-k_1)^\nu)~,
\end{align}
which takes the same form as the flat-space vertex factor. The full tree-level exchange diagram is then\footnote{The factors of $z_i^4$ come from using the inverse metric to contract the vertices and propagators.}:
\begin{align*}
W^{\text{YM}}_{\text{exch},\mu_1\ldots\mu_4}(k_1,k_2,k_3,k_4) &=g^{2} \int \frac{dz_1 dz_2 }{z_1^{d+1}z_2^{d+1}}
K^{\text{YM}}_{\mu_1\nu_1}(k_1,z_1)K^{\text{YM}}_{\mu_2\nu_2}(k_2,z_1)
z_1^4 \mathcal{V}^{\nu_1\nu_2\rho} (k_1,k_2,k_{12})
\\
&
G^{\text{YM}}_{\rho\sigma}(k_{12},z_1,z_2)
z_2^4\mathcal{V}^{\nu_3\nu_4\sigma}(k_3,k_4,-k_{12})
K^{\text{YM}}_{\mu_3\nu_3}(k_3,z_2)K^{\text{YM}}_{\mu_4\nu_4}(k_4,z_2)~.
\numberthis
\end{align*}
To make the notation more compact, we will contract the external indices with polarization vectors
\begin{align}
W^{\text{YM}}_{\text{exch}}(k_1,k_2,k_3,k_4)=\epsilon^{\mu_1}_1\ldots\epsilon^{\mu_4}_4W^{\text{YM}}_{\text{exch},\mu_1\ldots\mu_4}(k_1,k_2,k_3,k_4)~.
\end{align}
The condition $\epsilon_i\cdot k_i=0$ trivializes the projector in the bulk-to-boundary propagators (\ref{eq:YMPropbB}) and (\ref{eq:YMPropbBW}). Finally, by specializing to $d=3$ one can perform the $p$ and $z$ integrals in closed form. This computation was carried out in \cite{Albayrak:2018tam} so, accounting for differences in normalization, we will quote the final result:
\begin{align}
W^{\text{YM}}_{\text{exch}}(k_1,k_2,k_3,k_4)=-ig^{2}&
\frac{\mathcal{V}^{12\rho}(k_1,k_2,-k_{12})\mathcal{V}^{34\sigma}(k_3,k_4,k_{12})}
{(\sqrt{k_{12}^{2}}+|k_1|+|k_2|)(\sqrt{k_{12}^{2}}+|k_3|+|k_4|)E_T}
\nonumber \\
&\left(\eta_{\rho\sigma}+\frac{(\sqrt{k_{12}^{2}}+E_T)(k_{12})_\rho (k_{12})_\sigma}{\sqrt{k_{12}^{2}}(|k_1|+|k_2|)(|k_3|+|k_4|)}\right)~,\label{eq:YMAdS4Full}
\end{align}
where $\mathcal{V}^{12\rho}=(\epsilon_{1})_\mu (\epsilon_{2})_\nu \mathcal{V}^{\mu\nu\rho}$. As a consistency check, we can take the flat space limit:
\begin{align}
\lim\limits_{E_T\rightarrow 0}E_{T}\hspace{.1cm}W^{\text{YM}}_{\text{exch}}(k_1,k_2,k_3,k_4)=\frac{ig^{2}}{s}&\mathcal{V}^{12\rho}(k_1,k_2,-k_{12})\mathcal{V}^{34\sigma}(k_3,k_4,k_{12})
\nonumber
\\
&\left(\eta_{\rho\sigma}-\frac{(k_{12})_\rho (k_{12})_\sigma}{(k_{12}\cdot n)^{2}}\right)~,
\end{align}
where $n=(0,0,0,1)$. This matches the flat-space amplitude, where we recall the vertices $\mathcal{V}^{\mu\nu\rho}$ only have indices in the first three directions.
We can now compute the real part of \eqref{eq:YMAdS4Full} in two ways: by taking the discontinuity directly, and by applying the cutting rules to spinning particles. The cutting rules yield
\begin{align}
-2~\Re W^{\text{YM}}_{\text{exch},\mu_1...\mu_4}(k_1,k_2,k_3,k_4)=-g^{2} \int & \frac{dz_1 dz_2 }{z_1^{d+1}z_2^{d+1}}
K^{\text{YM}}_{\mu_1\nu_1}(k_1,z_1)K^{\text{YM}}_{\mu_2\nu_2}(k_2,z_1)
z_1^4\,\mathcal{V}^{\nu_1\nu_2\rho} (k_1,k_2,k_{12})
\nonumber \\
&
G^{+\text{YM}}_{\rho\sigma}(k_{12},z_1,z_2)
z_2^4\,\mathcal{V}^{\nu_3\nu_4\sigma}(k_3,k_4,-k_{12})
\nonumber
\\
&
K^{*\text{YM}}_{\mu_3\nu_3} (k_3,z_2) K^{*\text{YM}}_{\mu_4\nu_4} (k_4,z_2) ~,
\end{align}
where the overall minus sign on the right-hand side arises because each vertex includes a factor of $i$. Evaluating the $z$ integrals and contracting with the polarization vectors gives
\begin{align}
-2~\Re W^{\text{YM}}_{\text{exch}}(k_1,k_2,k_3,k_4)=&-2g^{2}\frac{\sqrt{-k_{12}^{2}}}{(k_{12}^{2}-(|k_1|+|k_2|)^{2})(k_{12}^{2}-(|k_3|+|k_4|)^{2})}
\nonumber \\ &\mathcal{V}^{12\rho}(k_1,k_2,k_{12})\mathcal{V}^{34\sigma}(k_3,k_4,-k_{12})\left(\eta_{\rho\sigma}-\frac{k_{12,\rho}k_{12,\sigma}}{k_{12}^{2}}\right)~.
\end{align}
This agrees with a direct calculation of the real piece by analytically continuing (\ref{eq:YMAdS4Full}) to timelike momenta, $k_{12}^{2}<0$, and computing the discontinuity across the cut. With the exception of the polarization dependence, the analysis is the same as the scalar case considered in the previous section.
\subsection{Five-Point Tree}
The analysis for higher-point tree diagrams is similar to the four-point case. As an example, we consider the five-point tree shown in (\ref{eq:fivepointdiagram}) and use conformally coupled scalars $\phi_c$ in $d=5$. The five-point tree diagram is
\begin{align}
W'_{\text{5-pt}}(k_1,\ldots,k_5)=&(ig)^{3}\int \frac{dz_1dz_2dz_3}{z_1^{d+1}z_2^{d+1}z_3^{d+1}}K_{\Delta_c}(k_1,z_1)K_{\Delta_c}(k_2,z_1)G_{\Delta_c}(k_{12},z_1,z_2)
\nonumber \\ &K_{\Delta_c}(k_5,z_2)G_{\Delta_c}(k_3+k_4,z_2,z_3)K_{\Delta_c}(k_3,z_3)K_{\Delta_c}(k_4,z_3)\bigg|_{\Delta_c=3,d=5}~. \label{eq:fivept_pt1}
\end{align}
To check the cutting rules, we use the five-point kinematics given in (\ref{eq:5ptKinematics}), that is we choose $k_{1}+k_2 \in V_+$, $k_3+k_4+k_5\in V_{-}$, and take all the other invariants to be spacelike. In this configuration the only non-zero cut places $G_{\Delta_c}(k_{12},z_1,z_2)$ on shell, as shown in (\ref{eq:fivepointdiagram}). Using the cutting rules, we have
\begin{align}
-2i~\Im W'_{\text{5-pt}}(k_1,\ldots&,k_5)=ig^3\int \frac{dz_1dz_2dz_3}{z_1^{d+1}z_2^{d+1}z_3^{d+1}}K_{\Delta_c}(k_1,z_1)K_{\Delta_c}(k_2,z_1)G^+_{\Delta_c}(k_{12},z_1,z_2)
\nonumber \\ &K^*_{\Delta_c}(k_5,z_2)G^*_{\Delta_c}(k_3+k_4,z_2,z_3)K^*_{\Delta_c}(k_3,z_3)K^*_{\Delta_c}(k_4,z_3)\bigg|_{\Delta_c=3,d=5}~.
\label{eq:5ptFull}
\end{align}
As a reminder, we have a factor of $(ig)$ for the vertex to the left of the cut, a factor of $(-ig)^2$ for the two vertices to the right of the cut, and finally an overall $(-1)$ because we have an odd number of external points to the right of the cut. Performing the $z$ integrals yields:
\begin{align}
-2i~\Im W'_{\text{5-pt}}(k_1,\ldots,k_5)=-8 g^3&\frac{ |k_5| |k_3+k_4| \sqrt{-k_{12}^{2} }}{\left(2 |k_5|^2 (k_{12}^{2}-k_{34}^{2})+(k_{12}^{2}+k_{34}^{2})^2+|k_5|^4\right)}
\nonumber \\
& \frac{1}{\left((|k_1|+|k_2|)^2+k_{12}^{2}\right) \left((|k_3|+|k_4|)^2-k_{34}^{2}\right) }~.\label{eq:5ptCut}
\end{align}
Next, we compute the imaginary piece of the five-point function directly from the full correlator. Evaluating the $p$ and $z$ integrals for (\ref{eq:fivept_pt1}) gives:
\begin{align}
W'_{\text{5-pt}}(k_1,\ldots,k_5)=g^3
\frac{1}{E_T} &\frac{\sqrt{k_{12}^{2}}+\sqrt{k_{34}^{2}}+|k_1|+|k_2|+|k_3|+|k_4|+2 |k_5|}{ \sqrt{k_{34}^{2}}+\sqrt{k_{12}^{2}}+|k_5|}
\nonumber \\
& \frac{1}{\left(\sqrt{k_{34}^{2}}+|k_3|+|k_4|\right) \left(\sqrt{k_{34}^{2}}+|k_1|+|k_2|+|k_5|\right) }
\nonumber \\ &\frac{ 1}{\left(\sqrt{k_{12}^{2}}+|k_1|+|k_2|\right)\left(\sqrt{k_{12}^{2}}+|k_3|+|k_4|+|k_5|\right) }~, \label{eq:fivepointtreefull}
\end{align}
where $E_T=|k_1|+\ldots+|k_5|$.
To compute $-2i\hspace{.05cm} \Im\<T[\f(k_1)\ldots\f(k_5)]\>$, we first analytically continue $k_{12}^{2}$ to be timelike and then take the discontinuity across the branch cut. The result agrees exactly with the answer from the cutting rules in (\ref{eq:5ptCut}).
As a consistency check, our result \eqref{eq:fivepointtreefull} agrees with \cite{Albayrak:2020isk}, and we also see that on the total energy pole the five-point Witten diagram reduces to the correct five-point flat space amplitude:
\begin{align}
\lim\limits_{E_T\rightarrow 0}E_T\hspace{.1cm}W'_{\text{5-pt}}(k_1,\ldots,k_5)=&\hspace{.1cm}g^{3}\frac{1}{\left((|k_1|+|k_2|)^{2}-k_{12}^{2}\right)}\frac{1}{\left((|k_3|+|k_4|)^{2}-k_{34}^{2}\right)}
\nonumber \\=&\hspace{.1cm}g^{3}\frac{1}{s_{12}s_{34}}~.
\end{align}
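The residue computation can be automated. The following \texttt{sympy} sketch (ours; $x=\sqrt{k_{12}^{2}}$, $y=\sqrt{k_{34}^{2}}$, $a=|k_1|+|k_2|$, $b=|k_3|+|k_4|$, with $|k_5|=E_T-a-b$ treated as dependent) reproduces $g^{3}/(s_{12}s_{34})$:
\begin{verbatim}
import sympy as sp

x, y, a, b, ET, g = sp.symbols('x y a b E_T g', positive=True)
c = ET - a - b                                 # |k5|, continued to E_T -> 0
W = (g**3/ET) * (x + y + a + b + 2*c)/(x + y + c) \
    / ((y + b)*(y + a + c)) / ((x + a)*(x + b + c))
res = sp.simplify(sp.limit(ET*W, ET, 0))
print(res)                                     # = g**3/((x**2-a**2)*(y**2-b**2))
print(sp.simplify(res - g**3/((x**2 - a**2)*(y**2 - b**2))))  # 0,
                                               # i.e. g^3/(s12*s34)
\end{verbatim}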
One can also recover the discontinuity of the flat space five-point diagram by taking the flat space limit of \eqref{eq:5ptFull}. As with the four-point exchange diagram, it is useful to make the replacement $G^{+}_{\Delta}\rightarrow G^{+,\epsilon}_{\Delta}$ to see how the flat-space $\delta$-function emerges in this limit. The analysis is identical in form to the case of the exchange diagram. For $d=5$ the $z$ integrals can be evaluated in closed form and the $p$ integral can be extended to $(-\infty,\infty)$, and then evaluated via a contour analysis. Finally, we take the limit $E_{T}\rightarrow 0$ before taking $\epsilon\rightarrow 0$ to find the $\delta$-function. The final answer is:
\begin{align}
\lim\limits_{\epsilon\rightarrow 0}\lim\limits_{E_{T}\rightarrow 0}-2iE_{T}\Im W'_{\text{5-pt}}(k_1,\ldots,k_5)&=\lim\limits_{\epsilon\rightarrow 0}\hspace{.1cm}\frac{2 i g^3 \epsilon }{(|k_3|-|k_{34}|+|k_4|) (|k_3|+|k_{34}|+|k_4|) }
\nonumber
\\
&\hspace{1in}\frac{\theta(k_{12}^{0})}{\left(\left(-k_{12}^2+(|k_1|+|k_2|)^2\right)^2+\epsilon ^2\right)}
\nonumber \\
&=ig^{3}\frac{2\pi}{s_{34}}\delta(s_{12})\theta(k_{12}^{0})~.
\end{align}
Here we made the identifications
\begin{align}
s_{ij}=(|k_{i}|+|k_{j}|)^{2}-k_{ij}^{2},
\end{align}
where $s_{ij}$ are the flat-space Mandelstam invariants. The final result agrees with the cut flat-space amplitude.
\subsection{One-Loop Bubble}
Finally, we consider a more non-trivial example corresponding to the following one-loop bubble diagram:
\begin{equation}
\includegraphics[scale=.25]{AdS_Bubble.pdf}~.
\end{equation}
To verify that the cut diagram has the correct OPE limit, it is simpler to begin in position space. We will assume the external scalars are conformally coupled and that the two internal propagators are identical, but correspond to a distinct operator ${\cal O}$:
\begin{align}
W'_{{\cal O},\text{bubble}}(x_1,\ldots,x_4)=(ig)^2&\int \limits_{AdS}\frac{d^{d}y_1d^{d}y_2dz_1dz_2}{z_1^{d+1}z_2^{d+1}}K_{\Delta_c}(x_1; y_1,z_1)K_{\Delta_c}(x_2; y_1,z_1)
\nonumber
\\
& G_{\Delta_{\cal O}}(y_1,z_1;y_2,z_2)^{2}K_{\Delta_c}(x_3; y_2,z_2)K_{\Delta_c}(x_4; y_2,z_2)~.
\end{align}
The K\"all\'en-Lehmann spectral representation in AdS says \cite{Dusedau:1985ue,Fitzpatrick:2011dm}:
\begin{align}
G_{\Delta}(y_1,z_1;y_2,z_2)^{2}=\sum\limits_{n=0}^{\infty}a_{\Delta}(n)G_{2\Delta+2n}(y_1,z_1;y_2,z_2)~,
\\
a_{\Delta}(n)=\frac{(d/2)_n(2\Delta+2n)_{1-d/2}(2\Delta+n-d+1)_n}{2\pi^{d/2}\,n!\left[(\Delta+n)_{1-d/2}\right]^{2}(2\Delta+n-d/2)_n}~,
\end{align}
where $(a)_n$ is the Pochhammer symbol. In other words, the bubble diagram reduces to an infinite sum over tree-level exchange diagrams. Using this identity and then passing into momentum space, we obtain
\begin{align}
W'_{{\cal O},\text{bubble}}(k_1,\ldots,k_4)=\sum\limits_{n}a_{\Delta}(n)W'_{[{\cal O}\O]_{n,0} \hspace{.05cm} \text{exch}}(k_1,\ldots,k_4)~.
\end{align}
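The spectral coefficients are straightforward to evaluate explicitly; a small \texttt{sympy} helper of ours (implementing the Pochhammer symbols via \texttt{RisingFactorial}, and choosing $\Delta_{\cal O}=2$, $d=3$ purely for illustration) makes the formula concrete:
\begin{verbatim}
import sympy as sp

def a_KL(Delta, n, d):
    # Kallen-Lehmann coefficient a_Delta(n), transcribed from the formula above.
    rf = sp.RisingFactorial
    num = (rf(sp.Rational(d, 2), n) * rf(2*Delta + 2*n, 1 - sp.Rational(d, 2))
           * rf(2*Delta + n - d + 1, n))
    den = (2 * sp.pi**sp.Rational(d, 2) * sp.factorial(n)
           * rf(Delta + n, 1 - sp.Rational(d, 2))**2
           * rf(2*Delta + n - sp.Rational(d, 2), n))
    return sp.simplify((num/den).rewrite(sp.gamma))

# First few coefficients for Delta = 2 in d = 3 (illustrative values):
print([a_KL(2, n, 3) for n in range(4)])   # starts with 5/(8*pi**2)
\end{verbatim}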
We then take the real part of both sides, expand in the limit $k_{12}\rightarrow 0$, and find the expected scaling behavior for the exchange of double-trace operators:
\begin{align}
-2~\Re W'_{{\cal O},\text{bubble}}(k_1,\ldots,k_4)\sim (-k_{12}^{2})^{2\Delta_{\cal O}-d/2}~, \label{eq:OPEBubble}
\end{align}
which we argued for in Section \ref{sec:Limits} using the integral representation of the cut diagram. This argument can be generalized to other ``bubble''-type diagrams, as was done for example in \cite{Fitzpatrick:2011hu,Yuan:2018qva}.
Next, we will study the cut bubble diagram directly in momentum space for $d=3$ when all scalars are conformally coupled. We will show how the correct OPE limit emerges directly from the cutting rules and also that in the flat space limit we recover the cut flat-space bubble diagram. The cut bubble diagram is given by
\begin{align}
-2~\Re W'_{\f_c,\text{bubble}}(k_1,\ldots,k_4)=&g^{2}\int \frac{dz_1dz_2}{z_1^{d+1}z_2^{d+1}}\int \frac{d^{d}\ell}{(2\pi)^d}K_{\Delta_c}(k_1,z_1)K_{\Delta_c}(k_2,z_1)G^{+,\epsilon}_{\Delta_c}(\ell,z_1,z_2)
\nonumber \\ &G^{+,\epsilon}_{\Delta_c}(k_{12}-\ell,z_1,z_2)K^*_{\Delta_c}(k_3,z_2)K^*_{\Delta_c}(k_4,z_2)\bigg|_{\Delta_c=2,d=3}~,
\end{align}
where we used the regulated $\delta$-functions inside the cut propagators.
It will also be useful to define
\begin{align}
E_L=|k_1|+|k_2|, \qquad E_R=|k_3|+|k_4|~.
\end{align}
Performing the $z$ integrals, we find
\begin{align}
-2~\Re W'_{\f_c,\text{bubble}}(k_1,\ldots,k_4)=64g^2&\int_0^{\infty} dp_1dp_2 \int \frac{d^3\ell}{(2\pi)^3}E_L E_R p_1 p_2 \nonumber
\\
&
\frac{ \delta^{\epsilon}(\ell^2+p_1^2)\delta^{\epsilon}((k_{12}-\ell)^2+p_2^2) }{\left(E_L^2+(p_1-p_2)^2\right) \left(E_L^2+(p_1+p_2)^2\right)}
\nonumber
\\ &\frac{\theta^+(-\ell^2)\theta^+(-(k_{12}-\ell)^2)}{ \left(E_R^2+(p_1-p_2)^2\right) \left(E_R^2+(p_1+p_2)^2\right)}~.\label{eq:bubbleIntegrand}
\end{align}
Here the function $\theta^+$ is defined to be a $\theta$-function for the forward lightcone $V_+$:
\begin{align}
\theta^+(-\ell^2) = \theta(-\ell^2)\theta(\ell^0)~.
\end{align}
To find the OPE limit for this diagram, we restrict to physical values for the norm, that is $|k_i|=\sqrt{k_i^2}$ with $k_i$ spacelike, and take the limit $k_{12}\rightarrow 0$. In this case we have $E_L$, $E_R>0$ and can take $\epsilon\rightarrow 0$ inside the integrand. The on-shell propagators yield $\delta$-functions that trivialize the $p$ integrals:
\begin{align}
\hspace{-.3in}-2~\Re \ W'_{\f_c,\text{bubble}}(k_1,\ldots,k_4)=&16g^2 \int \frac{d^3\ell}{(2\pi)^3} E_L E_R |\ell| |k_{12}-\ell|\theta^+(-\ell^2)\theta^+(-(k_{12}-\ell)^2)
\nonumber \\ &\frac{1}{\left(E_L^2+(|\ell|-|k_{12}-\ell|)^2\right) \left(E_L^2+(|\ell|+|k_{12}-\ell|)^2\right)}
\nonumber \\ & \frac{1}{ \left(E_R^2+(|\ell|-|k_{12}-\ell|)^2\right) \left(E_R^2+(|\ell|+|k_{12}-\ell|)^2\right)}~. \label{eq:bubbleOPEPt1}
\end{align}
To evaluate this integral in the OPE limit, we make the following change of variables,
\begin{align}
\ell=r\big(\hspace{-.05cm}\cosh(\phi),\sinh(\phi)\cos(\theta),\sinh(\phi)\sin(\theta)\big), \quad \text{ with } \quad 0\leq r,\phi<\infty, \quad 0\leq\theta<2\pi~.
\end{align}
This parameterization trivializes the $\theta^+(-\ell^2)$ function, but we still need to impose the constraint from the other $\theta^+$ function. To further simplify the analysis, we work in the center-of-mass frame,
\begin{align}
k_{12}=(k_{12}^0,0,0)~.
\end{align}
In this frame, requiring $k_{12}-\ell\in V_+$ implies
\begin{align}
0\leq r \leq e^{-\phi}k_{12}^0~.
\end{align}
Imposing these constraints, we find that the measure for the integrand becomes
\begin{align}
\int \frac{d^{3}\ell}{(2\pi)^{3}}\theta^+(-\ell^2)\theta^+(-(k_{12}-\ell)^2)= \frac{1}{(2\pi)^{3}}\int\limits_{0}^{2\pi} d\theta\int \limits_{0}^{\infty}d\phi \int\limits_{0}^{e^{-\phi}k_{12}^0}dr \hspace{.1cm} r^{2}\sinh(\phi)~,
\end{align}
where the factor of $r^{2}\sinh(\phi)$ comes from the Jacobian. To compute the OPE limit, we make the change of variables $r=k_{12}^{0}r'$ and then expand at small $k_{12}^0$ for fixed $r'$. Performing the $r'$, $\phi$, and $\theta$ integrals in this limit gives
\begin{align}
-2~\Re W'_{\f_c,\text{bubble}}(k_1,\ldots,k_4)\approx\frac{ g^2 (k_{12}^0)^5}{45 \pi^2(|k_1|+|k_2|)^3 (|k_3|+|k_4|)^3}~.
\end{align}
In the OPE limit, the exchange of an operator with dimension $\Delta$ leads to the overall scaling $(-k_{12}^2)^{\Delta-d/2}$. Here $d=3$ and $\Delta_c=2$, and we recognize that the overall $(k_{12}^0)^5$ dependence comes from the exchange of the scalar double-trace operator of dimension $4$, $[\f_c\f_c]_{0,0}$. This agrees exactly with the previous result for the bubble in the OPE limit (\ref{eq:OPEBubble}) using the K\"all\'en-Lehmann spectral representation. By expanding to higher orders in $k_{12}^0$ one can capture sub-leading terms in the OPE limit.
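The coefficient can also be checked by direct numerical integration: after the rescaling $r=k_{12}^{0}r'$, the $k_{12}\rightarrow0$ limit of \eqref{eq:bubbleOPEPt1} reduces to $(4g^{2}/\pi^{2})\,I\,(k_{12}^{0})^{5}/(E_L^{3}E_R^{3})$, with $I$ the dimensionless integral below, so the quoted result requires $I=1/180$ (a \texttt{scipy} sketch of ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

# I = int_0^oo dphi sinh(phi) int_0^{exp(-phi)} dr r^3 sqrt(1 - 2 r cosh(phi) + r^2);
# the square root is |k12 - l| / k12^0 after the rescaling r = k12^0 r'.
f = lambda r, phi: np.sinh(phi) * r**3 * np.sqrt(
        max(1.0 - 2.0*r*np.cosh(phi) + r*r, 0.0))
I, err = dblquad(f, 0.0, 25.0, lambda phi: 0.0, lambda phi: np.exp(-phi))
print(I, 1.0/180.0)   # should agree to integration accuracy
\end{verbatim}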
Finally, we will recover the cut bubble diagram in flat space from the corresponding AdS diagram. As with the four-point exchange diagram, using the regulated $\delta^{\epsilon}$-functions will make the total energy pole manifest. We will check that the flat space limit holds directly at the level of the integrand rather than working with the full integrated diagram. This approach makes manifest that in the flat space limit a $p$ integral becomes the $(d+1)^{\text{th}}$ component of the flat-space loop integral \cite{Raju:2012zr}.
To evaluate (\ref{eq:bubbleIntegrand}), we first extend the $p_{1,2}$ integrals to the entire real line. Then we can evaluate these integrals via a contour analysis. As shown in \cite{Raju:2012zr}, the total energy pole in $E_L+E_R$ comes from poles pinching the $p_i$ contours. Here we can see that closing the $p_2$ contour on the poles explicitly written in the denominator of (\ref{eq:bubbleIntegrand}) will yield the total energy pole:
\begin{align}
\hspace{-.3in}\lim\limits_{E_R\rightarrow -E_L}-2(E_L+E_R)\Re W'_{\f_c,\text{bubble}}(k_1,\ldots,k_4)=&\int\limits_{-\infty}^{\infty} dp_1 \int \frac{d^3\ell}{(2\pi)^{3}}2\pi g^2 \theta(\ell^0)\theta(k_1^0+k_2^0-\ell^0)
\nonumber \\ &\delta^{\epsilon} (\ell^{2}+p_1^2)\delta^{\epsilon} ((k_{12}-\ell)^2+(iE_L+p_1)^2)~.
\end{align}
If we identify the $4$-dimensional external momenta as $\tilde{k}_i=(k_i,i |k_i|)$ and the internal $4$-dimensional momenta as $\tilde{\ell}=(\ell,p_1)$, we find
\begin{align}
\hspace{-.3in}\lim\limits_{E_R\rightarrow -E_L}-2(E_L+E_R)\Re W'_{\f_c,\text{bubble}}(k_1,\ldots,k_4)=\int \frac{d^4\tilde{\ell}}{(2\pi)^{4}}& (2\pi)^2 g^2 \delta (\tilde{\ell}^{2})\delta((\tilde{k}_1+\tilde{k}_2-\tilde{\ell})^2)
\nonumber \\ &\theta^+(-\tilde{\ell}^2)\theta^+(-(\tilde{k}_1+\tilde{k}_2-\tilde{\ell})^2)~.
\end{align}
This agrees with the cut flat-space bubble diagram exactly.
\section{Conclusion}
\label{sec:Discussion}
\subsection{Discussion}
In this work, we derived and applied the AdS Cutkosky rules. Together with the Lorentzian inversion formula, these cutting rules furnish a holographic unitarity method for AdS$_{d+1}$/CFT$_d$. In the process, we also provided the cutting rules for weakly-coupled CFTs. We used basic properties of Lorentzian QFTs to derive these rules, and so the results can be generalized to study QFT in other curved spaces.
The proof of the CFT Cutkosky rules relies on the CFT optical theorem \eqref{eq:opticalV0} in combination with constraints from positivity of the spectrum and causality. Using positivity, we showed that for the restricted set of momenta \eqref{eq:kinematics},
\begin{equation}
-2\hspace{.1cm}\Re \< T[\f \f \f \f] \> = \<[\f,\f]_{A}[\f,\f]_{R}\>. \label{eq:summaryReT}
\end{equation}
This statement of CFT unitarity allows us to relate two seemingly different objects, the real part of a time-ordered correlator and a causal double-commutator. The right-hand side is the same double-commutator that appears in the Lorentzian inversion and CFT dispersion formulas \cite{Caron-Huot:2017vep,ssw,Kravchuk:2018htv,Carmi:2019cub}. The left-hand side is a natural generalization of $\Im(\mathcal{T})$, but now at the level of the off-shell correlation function. Like $\Im(\mathcal{T})$, the cutting rules for the real part can be derived by using the largest-time equation. Using \eqref{eq:summaryReT} and analyticity in momentum space, we can then derive the cutting rules for the double-commutator. The derivation of these rules relied on using Lorentzian momentum space, but they can also be studied using other representations of the correlator, e.g. by working in position or Mellin space.
Our method for CFT correlators is a direct generalization of the flat-space S-matrix method. In both cases, a cut replaces a time-ordered propagator with the corresponding Wightman, or on-shell, propagator and therefore factorizes the diagram into a product of on-shell sub-diagrams. Dispersion formulas can then be used to reconstruct the full diagram from its cuts. Moreover, we checked in explicit examples that the AdS unitarity cuts reduce to the usual S-matrix cuts in the flat space limit.
The identity \eqref{eq:summaryReT} gives a notion of factorization in CFT$_d$: one can always insert a complete set of states in the right-hand side to find an infinite sum over three-point functions. The non-trivial feature of holographic CFTs is that the right-hand side can be rewritten as a phase-space integral over two AdS Witten diagrams. In other words, for holographic CFTs we have a stronger notion of factorization that comes from the locality of the bulk dual. The rules presented here make bulk locality manifest and are complementary to the previous work \cite{Meltzer:2019nbs}, where different bulk rules were derived to compute the conformal block expansion of the double-commutator. These two methods make different properties of AdS/CFT manifest -- bulk factorization and the perturbative structure of the boundary OPE -- and open new windows into $1/N$ perturbation theory via unitarity.
\subsection{Future work}
There are many open questions in the broader study of unitarity methods for CFT correlators. The appearance of the double-commutator in the real part of a time-ordered momentum-space correlator provides a hint that momentum space may be useful in the study of the CFT dispersion formula \cite{Carmi:2019cub}. We expect that the real/imaginary part of even/odd-point time-ordered correlators with spacelike external momenta will provide natural generalizations of the double-commutator. Such correlators factorize into partially time-ordered correlators and can also be computed via the cutting rules. Using momentum space may therefore clarify the structure of the higher-point inversion formula and the larger problem of bootstrapping general $n$-point functions.
It is also important to develop efficient ways of using the cutting rules in practice to determine a one-loop correlator. In this work we have given a set of rules to compute the double-commutator, but we did not introduce new tools to evaluate the dispersion formula. One possible avenue is to use the dispersion formula directly in momentum space. Another potentially useful approach is to use generalized unitarity to fix the one-loop correlator by allowing for more general cuts \cite{Bern:2011qt}. This has already been done for correlation functions in weakly-coupled $\mathcal{N}=4$ SYM \cite{Engelund:2012re}, but its application to more general weakly-coupled CFTs, such as the $O(N)$ vector models, appears to be less explored. While our work gives natural candidates for the relevant cuts, generalized unitarity has not yet been studied in AdS, and we expect its development will teach us more about a rich class of observables and theories. For example, at tree level in type IIB supergravity on AdS$_{5}\times$S$^{5}$ there exists a fascinating hidden $10d$ conformal symmetry \cite{Caron-Huot:2018kta}.\footnote{See \cite{Rastelli:2019gtj,Giusto:2020neo} for a generalization to $AdS_3\times S^3$.} This symmetry explains the simplicity of Mellin amplitudes and anomalous dimensions of the CFT dual \cite{Rastelli:2016nze,Rastelli:2017udc,Aprile:2018efk}. Recursion relations and generalized unitarity can help clarify to what extent this symmetry continues to hold at higher points and at loop level.
Studying cutting rules for holographic CFTs in Mellin space may also provide new insight. The Mellin amplitude shares important similarities with a scattering amplitude, but it also encodes the OPE in a simple way \cite{Mack:2009gy,Penedones:2010ue,Fitzpatrick:2011ia}. This simplicity continues to hold beyond tree level in supersymmetric theories \cite{Alday:2018pdi,Alday:2018kkw,Binder:2019jwn,Chester:2019pvm,Chester:2020dja,Alday:2019nin,Alday:2020tgi,Drummond:2019hel,Bissi:2020wtv}. While we have derived the cutting rules in momentum space, it would be interesting to study their application to one-loop Mellin amplitudes. Relatedly, while most recent work on holographic correlators focuses on bootstrapping the full, integrated correlator, much of the recent progress in the study of scattering amplitudes comes from studying the integrand \cite{Elvang:2013cua,Henn:2014yza,Arkani-Hamed:2016byb}. To import this technology into AdS, it may prove useful to understand the structure of AdS integrands using Mellin space ideas. This may also help determine the class of functions that can appear in holographic correlators \cite{Aprile:2017bgs,Aprile:2017qoy,Aprile:2019rep,Drummond:2019hel}.
The cutting rules derived here contribute to the larger program of bootstrapping weakly-coupled theories in curved space via unitarity methods. Understanding unitarity constraints directly in the bulk of AdS opens up applications to other spacetimes, from deformed versions of AdS to the study of inflationary observables relevant for cosmology. We anticipate that by further generalizing S-matrix methods, we can open new avenues into this broader class of theories.
\section*{Acknowledgments}
We thank Soner Albayrak, Simon Caron-Huot, Clifford Cheung, Savan Kharel, Per Kraus, Julio Parra-Martinez, Eric Perlmutter, and David Simmons-Duffin for discussions. We also thank Julio Parra-Martinez for comments on the draft. AS thanks the Walter Burke Institute for Theoretical Physics for hospitality while this work was in progress. The research of DM is supported by Simons Foundation grant 488657, the Walter Burke Institute for Theoretical Physics and the Sherman Fairchild Foundation. AS is supported by the College of Arts and Sciences of the University of Kentucky.
\section{HSIC estimation in the self-supervised setting}
\label{app:estimator}
Estimators of HSIC typically assume i.i.d. data, which is not the case for self-supervised learning -- the positive examples are not independent. Here we show how to adapt our estimators to the self-supervision setting.
\subsection{Exact form of HSIC(Z, Y)}
Starting with $\mathrm{HSIC}(Z, Y)$, we assume that the ``label'' $y$ is a one-hot encoding of the data point, and all $N$ data points are sampled with the same probability $1/N$. With a one-hot encoding, any kernel that is a function of $y_i^\top y_j$ or $\|y_i - y_j\|$ (e.g. linear, Gaussian or IMQ) has the form
\begin{equation}
l(y_i, y_j) = \begin{cases}
l_1 & y_i = y_j,\\
l_0 & \mathrm{otherwise}
\end{cases}
\equiv \Delta l\,\mathbb{I}(y_i=y_j) + l_0
\label{app:eq:y_kernel_form}
\end{equation}
for some $\Delta l = l_1 - l_0$.
\begin{theorem}
\label{app:th:hsic_z_y}
For a dataset with $N$ original images sampled with probability $1/N$, and a kernel over image identities defined as in \eqref{app:eq:y_kernel_form}, $\mathrm{HSIC}(Z, Y)$ takes the form
\begin{equation}
\mathrm{HSIC}(Z, Y) =\frac{\Delta l}{N}\,\mathbb{E}_{Z,Z'\sim \mathrm{pos}}\,\left[k(Z, Z')\right] - \frac{\Delta l}{N}\mathbb{E}\, \left[k(Z, Z')\right]\,,
\label{app:eq:hsic_z_y}
\end{equation}
where $Z,Z'\sim \mathrm{pos}$ means $p_{\mathrm{pos}}(Z, Z')=\sum_i p(i)p(Z|i)p(Z'|i)$ for image probability $p(i)=1/N$.
\end{theorem}
\begin{proof}
We compute HSIC (defined in \eqref{eq:hsic-pop}) term by term. Starting with the first term, and denoting independent copies of $(Z, Y)$ by $(Z', Y')$ and $(Z'', Y'')$,
\begin{align*}
\mathbb{E}\,\left[k(Z, Z') l(Y, Y')\right] \,&= \Delta l\,\mathbb{E}\,\left[ k(Z, Z')\mathbb{I}[Y=Y']\right] + l_0\,\mathbb{E}\, \left[k(Z, Z')\right]\\
&=\Delta l\,\sum_{i=1}^N\sum_{j=1}^N\mathbb{E}_{Z|y_i,Z'|y_j}\, \left[ \frac{1}{N^2} k(Z, Z')\mathbb{I}[y_i=y_j] \right]+ l_0\,\mathbb{E}\, \left[k(Z, Z')\right]\\
&=\frac{\Delta l}{N}\,\sum_{i=1}^N\mathbb{E}_{Z|y_i,Z'|y_i}\, \left[\frac{1}{N} k(Z, Z')\right] + l_0\,\mathbb{E}\, \left[k(Z, Z')\right]\\
&=\frac{\Delta l}{N}\,\mathbb{E}_{Z,Z'\sim\mathrm{pos}}\, \left[ k(Z, Z')\right] + l_0\,\mathbb{E}\,\left[ k(Z, Z') \right]\,,
\end{align*}
where $\mathbb{E}_{Z,Z'\sim \mathrm{pos}}$ is the expectation over positive examples (with $Z$ and $Z'$ sampled independently conditioned on the ``label'').
The second term, due to the independence between $Z'$ and $Y''$, becomes
\begin{align*}
\mathbb{E}\,& \left[k(Z, Z') l(Y, Y'')\right]=\mathbb{E}_{Z Y}\mathbb{E}_{Z'} \left[k(Z, Z') \brackets{\frac{\Delta l}{N} + l_0}\right] =\brackets{\frac{\Delta l}{N} + l_0} \mathbb{E}\, \left[k(Z, Z')\right]\,.
\end{align*}
And the last term becomes identical to the second one,
\begin{align*}
\mathbb{E}\,\left[k(Z, Z')\right] \mathbb{E}\,\left[l(Y, Y')\right]=\brackets{\frac{\Delta l}{N} + l_0} \mathbb{E}\, \left[ k(Z, Z') \right]\,.
\end{align*}
Therefore, we can write $\mathrm{HSIC}(Z, Y)$ as
\begin{align*}
\mathrm{HSIC}(Z, Y) \,&= \mathbb{E}\,\left[k(Z, Z') l(Y, Y')\right] -2\,\mathbb{E}\, \left[k(Z, Z') l(Y, Y'')\right] + \mathbb{E}\,\left[k(Z, Z')\right] \mathbb{E}\,\left[l(Y, Y')\right]\\
&=\frac{\Delta l}{N}\,\mathbb{E}_{Z,Z'\sim \mathrm{pos}}\,\left[k(Z, Z')\right] - \frac{\Delta l}{N}\mathbb{E}\, \left[k(Z, Z')\right]\,,
\end{align*}
as the terms proportional to $l_0$ cancel each other out.
\end{proof}
The final form of $\mathrm{HSIC}(Z, Y)$ shows that the $Y$ kernel and dataset size come in only as pre-factors. To make the term independent of the dataset size (as long as it is finite), we can assume $\Delta l=N$, such that
\begin{align*}
\mathrm{HSIC}(Z, Y) = \mathbb{E}_{Z,Z'\sim \mathrm{pos}}\,\left[k(Z, Z')\right] - \mathbb{E}\, \left[ k(Z, Z') \right] \,.
\end{align*}
\subsection{Estimator of HSIC(Z, Y)}
\begin{theorem}
\label{app:th:hsic_z_y_estimator}
In the assumptions of \cref{app:th:hsic_z_y}, additionally scale the $Y$ kernel to have $\Delta l = N$, and the $Z$ kernel to be $k(z, z)=1$. Assume that the batch is sampled as follows: $B < N$ original images are sampled without replacement, and for each image $M$ positive examples are sampled independently (i.e., the standard sampling scheme in self-supervised learning). Then denoting each data point $z_{i}^{p}$ for ``label'' $i$ and positive example $p$,
\begin{align}
\label{app:eq:hsic_z_y_unbiased}
\widehat\mathrm{HSIC}(Z, Y)\,&= \brackets{\frac{M}{M-1} + \frac{N-1}{N(B-1)}-\frac{M}{N(M-1)}}\frac{1}{BM^2}\sum_{ipl} k(z_{i}^p, z_{i}^l) \\
&- \frac{B(N-1)}{(B-1)N}\frac{1}{B^2M^2}\sum_{ijpl} k(z_{i}^p, z_{j}^l) - \frac{N-1}{N(M-1)}
\end{align}
is an unbiased estimator of \eqref{app:eq:hsic_z_y}.
\end{theorem}
While we assumed that $k(z, z)=1$ for simplicity, any change in the scaling would only affect the constant term (which is irrelevant for gradient-based learning).
Recalling that $\lvert k(z, z') \rvert \le \max( k(z, z), k(z', z') )$,
we can then obtain a slightly biased estimator from \cref{app:th:hsic_z_y_estimator} by simply discarding small terms:
\begin{corollary}
If $|k(z, z')| \leq 1$ for any $z, z'$, then
\begin{align}
\label{app:eq:hsic_z_y_biased}
\widehat\mathrm{HSIC}(Z, Y)\,&= \frac{1}{BM(M-1)}\sum_{ipl} k(z_i^p, z_i^l)- \frac{1}{B^2M^2}\sum_{ijpl} k(z_i^p, z_j^l) - \frac{1}{M-1}
\end{align}
has a $O(1/B)$ bias.
\end{corollary}
\begin{proof}[Proof of \cref{app:th:hsic_z_y_estimator}]
To derive an unbiased estimator, we first compute expectations of two sums: one over all positives examples (same $i$) and one over all data points.
Starting with the first,
\begin{align}
\label{app:eq:hsic_z_y_unbiased_sum_1}
\mathbb{E}\, \left[\frac{1}{BM^2}\sum_{ipl} k(z_i^p, z_i^l)\right] \,&= \mathbb{E}\, \left[\frac{1}{BM^2}\sum_{ip,l\neq p} k(z_i^p, z_i^l)\right] + \mathbb{E}\, \left[\frac{1}{BM^2}\sum_{ip} k(z_i^p, z_i^p)\right]\\
&=\frac{M-1}{M}\mathbb{E}_{Z,Z'\sim \mathrm{pos}}\,\left[k(Z, Z')\right] + \frac{1}{M}\,.
\end{align}
As for the second sum,
\begin{align*}
\mathbb{E}\,\left[\frac{1}{B^2M^2}\sum_{ijpl} k(z_i^p, z_j^l) \right]\,&= \mathbb{E}\, \left[\frac{1}{B^2M^2}\sum_{i,j\neq i,pl} k(z_i^p, z_j^l)\right] + \mathbb{E}\, \left[\frac{1}{B^2M^2}\sum_{ipl} k(z_i^p, z_i^l)\right]\,.
\end{align*}
The first term is tricky: $\mathbb{E}\, \left[k(z_i^p, z_j^l)\right] \neq \mathbb{E}\, \left[k(Z, Z')\right]$ because we sample without replacement.
But we know that $p(y,y')=p(y)p(y'|y)=1/(N(N-1))$, therefore for $i\neq j$
\begin{align}
\label{app:eq:expect_i_not_j}
\mathbb{E}\, k(z_i^p, z_j^l) \,& =
\sum_{y, y'\neq y} \frac{1}{N(N-1)}\mathbb{E}_{Z|y, Z'|y'}\,k(Z, Z')\\
&=\sum_{yy'} \frac{1}{N(N-1)}\mathbb{E}_{Z|y, Z'|y'}\,k(Z, Z') - \sum_{y} \frac{1}{N(N-1)}\mathbb{E}_{Z|y, Z'|y}k(Z, Z')\\
&=\frac{N}{N-1}\mathbb{E}\, k(Z, Z') - \frac{1}{N-1}\mathbb{E}_{Z,Z'\sim \mathrm{pos}}k(Z, Z')\,.
\end{align}
Using the expectations for $ipl$ and $ijpl$,
\begin{align}
\label{app:eq:hsic_z_y_unbiased_sum_2}
\mathbb{E}\, &\frac{1}{B^2M^2}\sum_{ijpl} k(z_i^p, z_j^l) = \mathbb{E}\, \frac{1}{B^2M^2}\sum_{i,j\neq i,pl} k(z_i^p, z_j^l) + \mathbb{E}\,\frac{1}{B^2M^2}\sum_{ipl} k(z_i^p, z_i^l)\\
&=\frac{B-1}{B(N-1)}\brackets{N\,\mathbb{E}\, k(Z,Z') - \mathbb{E}_{Z,Z'\sim \mathrm{pos}}\,k(Z, Z')} + \frac{M - 1}{BM}\mathbb{E}_{Z,Z'\sim \mathrm{pos}}\,k(Z, Z') + \frac{1}{BM}\\
&=\frac{(B-1)N}{B(N-1)}\mathbb{E}\, k(Z, Z')- \frac{B-1}{B(N-1)} \mathbb{E}_{Z,Z'\sim \mathrm{pos}}\,k(Z, Z')+ \frac{M-1}{BM}\mathbb{E}_{Z,Z'\sim \mathrm{pos}}k(Z, Z') + \frac{1}{BM}\\
&=\frac{(B-1)N}{B(N-1)}\mathbb{E}\, k(Z, Z') + \frac{1}{B}\brackets{\frac{M-1}{M} - \frac{B-1}{N-1}}\mathbb{E}_{Z,Z'\sim \mathrm{pos}}\,k(Z, Z') + \frac{1}{BM}\,.
\end{align}
Combining \eqref{app:eq:hsic_z_y_unbiased_sum_1} and \eqref{app:eq:hsic_z_y_unbiased_sum_2} shows that \eqref{app:eq:hsic_z_y_unbiased} is indeed an unbiased estimator.
\end{proof}
It is worth noting that the i.i.d. estimator \eqref{eq:biased-hsic} is flawed for $\mathrm{HSIC}(Z,Y)$ for two reasons: first, it misses the $1/N$ scaling of $\mathrm{HSIC}(Z,Y)$ (although this is easy to fix by rescaling); second, it misses the $1/(M(M-1))$ correction for the $ipl$ sum. As we typically have $M=2$, the latter would result in a large bias for the (scaled) i.i.d. estimator.
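For concreteness, the estimator \eqref{app:eq:hsic_z_y_biased} is simple to evaluate directly. The following minimal NumPy sketch (the helper names and the choice of IMQ kernel are ours, for illustration) takes features arranged as a $(B, M, d)$ array:
\begin{minted}[breaklines=true,fontsize=\scriptsize]{python}
import numpy as np

def imq_kernel(x, y, c=1.0):
  # Inverse multiquadric kernel, bounded by 1 as the corollary assumes.
  sq_dists = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
  return c / np.sqrt(c ** 2 + sq_dists)

def hsic_zy_biased(z):
  # z: (B, M, d) array -- B images, M positive views each.
  B, M, _ = z.shape
  K = imq_kernel(z.reshape(B * M, -1), z.reshape(B * M, -1)).reshape(B, M, B, M)
  pos = np.einsum('ipil->', K) / (B * M * (M - 1))   # positive pairs (i == j)
  all_pairs = K.sum() / (B * M) ** 2                 # all pairs
  return pos - all_pairs - 1.0 / (M - 1)

rng = np.random.default_rng(0)
print(hsic_zy_biased(rng.normal(size=(128, 2, 16))))
\end{minted}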
\subsection{Estimator of HSIC(Z, Z)}
Before discussing estimators of $\mathrm{HSIC}(Z,Z)$, note that it takes the following form:
\begin{align*}
\mathrm{HSIC}(Z,Z) = \mathbb{E}\, \left[k(Z, Z')^2\right] - 2 \mathbb{E}_Z\, \left[\mathbb{E}_{Z'}\, \left[k(Z, Z')\right]^2\right] + \brackets{\mathbb{E}\, \left[k(Z, Z')\right]}^2\,.
\end{align*}
This is because $X$ and $Y$ in $\mathrm{HSIC}(X,Y)$ become the \textit{same} random variable, so $p(X,Y)=p_Z(X)\delta(X-Y)$ (see \cite{pogodin2020kernelized}, Appendix A).
\begin{theorem}
\label{app:th:hsic_z_z_estimator}
Assuming $k(z,z')\leq 1$ for any $z, z'$, the i.i.d. HSIC estimator by \citet{gretton2005measuring},
\begin{align*}
\widehat\mathrm{HSIC}(Z, Z) = \frac{1}{(BM-1)^2} \mathrm{Tr}(KHKH)\,,
\end{align*}
where $H = I - \frac{1}{BM} \bb 1 \bb 1^\top$, has a $O(1/B)$ bias for the self-supervised sampling scheme.
\end{theorem}
\begin{proof}
First, observe that
\begin{align*}
\mathrm{Tr}(KHKH) \,& = \mathrm{Tr}(KK) - \frac{2}{BM}\bb 1^\top K K \bb 1 + \frac{1}{B^2M^2}\brackets{\bb 1^\top K\bb 1}^2\,.
\end{align*}
Starting with the first term, and using again the result of \eqref{app:eq:expect_i_not_j} for sampling without replacement,
\begin{align*}
\mathbb{E}\, \left[ \mathrm{Tr}(KK) \right]\,&= \mathbb{E}\, \left[\sum_{ijpl}k(z_{i}^p,z_{j}^l)^2\right] = \mathbb{E} \left[\sum_{i,j\neq i,pl}k(z_{i}^p,z_{j}^l)^2\right] + \mathbb{E}\, \left[\sum_{ipl}k(z_{i}^p,z_{i}^l)^2\right]\\
&=\frac{B(B-1)M^2}{N-1}\brackets{N\,\mathbb{E}\, k(Z, Z')^2 - \mathbb{E}_{Z,Z'\sim \mathrm{pos}}k(Z, Z')^2} + \mathbb{E}\, \sum_{ipl}k(z_{i}^p,z_{i}^l)^2\\
&=B^2M^2\,\mathbb{E}\, k(Z, Z')^2 + O(BM^2)\,.
\end{align*}
Similarly, the expectation of the second term is
\begin{align*}
\mathbb{E}\, \bb 1^\top K K \bb 1 \,&= \mathbb{E}\,\sum_{ijq pld}k(z_i^p, z_q^d)k(z_j^l, z_q^d)\\
&= \mathbb{E}\,\sum_{i,j\neq i,q\neq \{i,j\}, pld}k(z_i^p, z_q^d)k(z_j^l, z_q^d) + O(B^2M^3)\,.
\end{align*}
Here we again need to take sampling without replacement into account, and again it will produce a very small correction term. For $i\neq j\neq q$, repeating the calculation in \eqref{app:eq:expect_i_not_j},
\begin{align*}
\mathbb{E}\, k(z_i^p, z_j^l)k(z_i^p, z_q^d) \,& =
\sum_{y, y'\neq y, y''\neq \{y, y'\}} \frac{1}{N(N-1)(N-2)}\mathbb{E}_{Z|y, Z'|y', Z''|y''}\,k(Z, Z')k(Z, Z'')\\
&=\mathbb{E}\, k(Z, Z')k(Z, Z'') + O(1/N)\,.
\end{align*}
As $B<N$, we obtain that
\begin{align*}
\mathbb{E}\, \bb 1^\top K K \bb 1 = B(B-1)(B-2)M^3\, \mathbb{E}\, k(Z, Z')k(Z, Z'') + O(B^2M^3)\,.
\end{align*}
Finally, repeating the same argument for sampling without replacement,
\begin{align*}
\mathbb{E}\,\brackets{\bb 1^\top K\bb 1}^2 \,&= \mathbb{E}\sum_{ijqr pldf} k(z_i^p, z_j^l)k(z_q^d, z_r^f)\\
&=\mathbb{E}\sum_{i,j\neq i,q\neq \{i, j\},r\neq\{i,j,q\}, pldf} k(z_i^p, z_j^l)k(z_q^d, z_r^f) + O(B^3M^4)\\
&=B(B-1)(B-2)(B-3)M^4\,\mathbb{E}\,k(Z, Z') k(Z'', Z''') + O(B^3M^4)\,.
\end{align*}
Combining all terms together, and expressing $B(B-1)$ (and similar) terms in big-O notation,
\begin{align*}
\mathbb{E}\frac{\mathrm{Tr}(KHKH)}{(BM-1)^2} \,& =\mathbb{E}\brackets{k(Z, Z')^2 - 2\, k(Z, Z')k(Z, Z'') + k(Z, Z')k(Z'', Z''')} + O\brackets{\frac{1}{B}}\\
&=\mathbb{E}\, k(Z, Z')^2 - 2 \mathbb{E}_Z\, \brackets{\mathbb{E}_{Z'}\, k(Z, Z')}^2 + \brackets{\mathbb{E}\, k(Z, Z')}^2 + O\brackets{\frac{1}{B}}\\
&= \mathrm{HSIC}(Z,Z)+ O\brackets{\frac{1}{B}}\,.
\end{align*}
\end{proof}
Essentially, having $M$ positive examples for the batch size of $BM$ changes the bias from $O(1/(BM))$ (i.i.d. case) to $O(1/B)$. Finally, note that even if $\widehat \mathrm{HSIC}(Z, Z)$ is unbiased, its square root is not.
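For reference, a minimal sketch of this estimator (the function name is ours), taking a precomputed $BM\times BM$ kernel matrix over all views in the batch:
\begin{minted}[breaklines=true,fontsize=\scriptsize]{python}
import numpy as np

def hsic_zz_biased(K):
  # K: (BM, BM) kernel matrix over all views; returns Tr(KHKH) / (BM - 1)^2.
  n = K.shape[0]
  H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
  KH = K @ H
  return np.trace(KH @ KH) / (n - 1) ** 2
\end{minted}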
\section{Theoretical properties of SSL-HSIC}
\label{app:section:theory}
\subsection{InfoNCE connection}
\label{app:section:infonce_connection}
To establish the connection with InfoNCE, define it in terms of expectations:
\begin{equation}
\mathcal{L}_{\mathrm{InfoNCE}}(\theta)=\mathbb{E}_{Z} \left[ \log\mathbb{E}_{Z'} \left[ \exp\brackets{k(Z, Z')}\right] \right] - \mathbb{E}_{Z,Z'\sim\mathrm{pos}} \left[k(Z,Z') \right]\,.
\label{app:eq:infonce}
\end{equation}
To clarify the reasoning in the main text, we can Taylor expand the exponential in \eqref{app:eq:infonce} around $\mu_1\equiv\mathbb{E}_{Z'}\,\left[k(Z, Z')\right]$. For $k(Z,Z')\approx \mu_1$,
\begin{align*}
& \mathbb{E}_{Z}\left[\log\mathbb{E}_{Z'} \left[\exp\brackets{k(Z,Z')}\right]\right] \\ \,&\approx\mathbb{E}_{Z}\left[\mu_1\right]+\mathbb{E}_{Z}\left[\log\mathbb{E}_{Z'}\left[1 + k(Z,Z') - \mu_1 + \frac{(k(Z,Z') - \mu_1)^2}{2} \right]\right]\\
&=\mathbb{E}_{Z}\left[\mu_1\right]+\mathbb{E}_{Z}\left[\log\mathbb{E}_{Z'}\left[1 + \frac{(k(Z,Z') - \mu_1)^2}{2} \right]\right]\,.
\end{align*}
Now expanding $\log(1+x)$ around zero,
\begin{align*}
\mathbb{E}_{Z}\left[\log\mathbb{E}_{Z'} \left[\exp\brackets{k(Z,Z')}\right]\right] \,&\approx \mathbb{E}_{Z}\left[\mu_1\right] + \mathbb{E}_{Z}\mathbb{E}_{Z'}\left[\frac{(k(Z,Z') - \mu_1)^2}{2}\right]\\
&=\mathbb{E}_{Z}\mathbb{E}_{Z'} \left[k(Z,Z') \right] +\frac{1}{2} \mathbb{E}_{Z} \left[\mathbb{V}\mathrm{ar}_{Z'}\left[k(Z,Z')\right] \right]\,.
\end{align*}
The approximate equality comes from dropping expectations over higher-order moments. The expression nevertheless gives the required intuition behind the loss: when the variance of $k(Z,Z')$ w.r.t. $Z'$ is small, InfoNCE combines $-\mathrm{HSIC}(Z,Y)$ and a variance-based penalty. In general, we can always write (assuming $\Delta l=N$ in $\mathrm{HSIC}(Z, Y)$ as before)
\begin{equation}
\mathcal{L}_{\mathrm{InfoNCE}}(\theta) = -\mathrm{HSIC}(Z, Y) + \mathbb{E}_{Z}\left[\log\mathbb{E}_{Z'}\left[\exp\brackets{k(Z,Z') - \mu_1}\right]\right]\,.
\label{app:eq:infonce_hsic_formulation}
\end{equation}
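Note that \eqref{app:eq:infonce_hsic_formulation} is an exact rearrangement, which can be verified numerically with plug-in estimates; the toy features and kernel in the sketch below are illustrative only.
\begin{minted}[breaklines=true,fontsize=\scriptsize]{python}
import numpy as np

rng = np.random.default_rng(0)
B, M, d = 64, 2, 8
z = rng.normal(size=(B, M, d))
z /= np.linalg.norm(z, axis=-1, keepdims=True)
flat = z.reshape(B * M, d)
K = flat @ flat.T / 0.1                                # cosine kernel, tau = 0.1

pos = np.einsum('ipil->', K.reshape(B, M, B, M)) / (B * M * M)  # plug-in E_pos[k]
mu1 = K.mean(axis=1, keepdims=True)                    # plug-in mu_1(Z), per row

lhs = np.mean(np.log(np.mean(np.exp(K), axis=1))) - pos
rhs = (K.mean() - pos) + np.mean(np.log(np.mean(np.exp(K - mu1), axis=1)))
assert np.allclose(lhs, rhs)   # the identity holds exactly for any sample
\end{minted}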
In the small variance regime, InfoNCE also bounds an HSIC-based loss. To show this, we will need a bound on $\exp(x)$:
\begin{lemma}
\label{app:lemma:exp_bound}
For $0 < \alpha \leq 1/4$ and $x \geq - \brackets{1 + \sqrt{1-4\alpha}} / (2\alpha)$,
\begin{equation}
\label{app:eq:exp_bound}
\exp(x) \geq 1 + x + \alpha\, x^2 \,.
\end{equation}
\end{lemma}
\begin{proof}
The quadratic $1 + x + \alpha\, x^2$ has two roots ($x_1 \leq x_2$):
\begin{align*}
x_{1,2} = \frac{\pm\sqrt{1-4\alpha} -1}{2\alpha}\,.
\end{align*}
Both roots are real, as $\alpha \leq 1/4$. Between $x_1$ and $x_2$, \eqref{app:eq:exp_bound} holds trivially as the rhs is negative.
For $x \geq -2$ and $\alpha \leq 1/4$,
\begin{align*}
\exp(x) \geq 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} \geq 1 + x + \alpha\, x^2\,.
\end{align*}
The first bound always holds; the second follows from
\begin{align*}
\frac{x}{3!} + \frac{x^2}{4!} + \frac{x^3}{5!} \geq \alpha - \frac{1}{2}\,,
\end{align*}
as the lhs is monotonically increasing and equals $-7/30$ at $x=-2$, while the rhs is at most $-1/4 < -7/30$. As $x_2 \geq -2$ (due to $\alpha \leq 1/4$), \eqref{app:eq:exp_bound} holds for all $x \geq x_1$.
\end{proof}
We can now lower-bound InfoNCE:
\begin{theorem}
\label{app:th:infonce_bound}
Assume that the kernel over $Z$ is bounded as $|k(z, z')| \leq k^{\mathrm{max}}$ for any $z,z'$, and that the kernel over $Y$ satisfies $\Delta l=N$ (defined in \eqref{app:eq:y_kernel_form}). Then for $\gamma$ satisfying $\min\{-2, -2 k^{\mathrm{max}}\} = - \brackets{1 + \sqrt{1-4\gamma}} / (2\gamma)$,
\begin{align*}
&-\mathrm{HSIC}(Z, Y) +\gamma\, \mathrm{HSIC}(Z,Z)\leq \mathcal{L}_{\mathrm{InfoNCE}}(\theta) + \mathbb{E}_{Z}\frac{\brackets{\gamma\mathbb{V}\mathrm{ar}_{Z'}\left[k(Z,Z')\right]}^2}{1 + \gamma\mathbb{V}\mathrm{ar}_{Z'}\left[k(Z,Z')\right]}\,.
\end{align*}
\end{theorem}
\begin{proof}
As we assumed the kernel is bounded, $k(Z,Z') - \mu_1 \geq -2k^{\mathrm{max}}$ almost surely (the factor of 2 comes from centering by $\mu_1$). Now if we choose $\gamma$ that satisfies $\min\{-2, -2 k^{\mathrm{max}}\} = - \brackets{1 + \sqrt{1-4\gamma}} / (2\gamma)$ (the minimum is to apply our bound even for $k^{\mathrm{max}} < 1$), then (almost surely) by \cref{app:lemma:exp_bound},
\begin{align*}
\exp\brackets{k(Z,Z') - \mu_1} \geq 1 + k(Z,Z') - \mu_1 + \gamma\, \brackets{k(Z,Z') - \mu_1}^2\,.
\end{align*}
Therefore, we can take the expectation w.r.t. $Z'$, and obtain
\begin{align*}
\mathbb{E}_{Z}\log\mathbb{E}_{Z'}\exp\brackets{k(Z,Z') - \mu_1} \geq \mathbb{E}_{Z}\log\brackets{1 + \gamma\,\mathbb{V}\mathrm{ar}_{Z'}\left[k(Z,Z')\right]}\,.
\end{align*}
Now we can use that $\log(1+x) \geq x / (1+x) = x - x^2 / (1+x)$ for $x > -1$, resulting in
\begin{align*}
\mathbb{E}_{Z}\log\mathbb{E}_{Z'}\exp\brackets{k(Z,Z') - \mu_1} \,&\geq \gamma\,\mathbb{E}_{Z}\mathbb{V}\mathrm{ar}_{Z'}\left[k(Z,Z')\right] - \mathbb{E}_{Z}\frac{\brackets{\gamma\,\mathbb{V}\mathrm{ar}_{Z'}\left[k(Z,Z')\right]}^2}{1 + \gamma\,\mathbb{V}\mathrm{ar}_{Z'}\left[k(Z,Z')\right]}\,.
\end{align*}
Using \eqref{app:eq:infonce_hsic_formulation}, we obtain that
\begin{align*}
\mathcal{L}_{\mathrm{InfoNCE}}(\theta) \,&= -\mathrm{HSIC}(Z, Y) + \mathbb{E}_{Z}\log\mathbb{E}_{Z'}\exp\brackets{k(Z,Z') - \mu_1}\\
&\geq -\mathrm{HSIC}(Z, Y) +\gamma\,\mathbb{E}_{Z}\mathbb{V}\mathrm{ar}_{Z'}\left[k(Z,Z')\right] - \mathbb{E}_{Z}\frac{\brackets{\gamma\mathbb{V}\mathrm{ar}_{Z'}\left[k(Z,Z')\right]}^2}{1 + \gamma\mathbb{V}\mathrm{ar}_{Z'}\left[k(Z,Z')\right]}\,.
\end{align*}
Finally, noting that by Cauchy-Schwarz
\begin{align*}
\mathrm{HSIC}(Z, Z) \,&= \mathbb{E}_{Z,Z'}k(Z,Z')^2 - 2\,\mathbb{E}_{Z}\brackets{\mathbb{E}_{Z'}k(Z,Z')}^2 + \brackets{\mathbb{E}_{Z,Z'}k(Z,Z')}^2\\
&\leq \mathbb{E}_{Z,Z'}k(Z,Z')^2 - \mathbb{E}_{Z}\brackets{\mathbb{E}_{Z'}k(Z,Z')}^2 = \mathbb{E}_{Z}\mathbb{V}\mathrm{ar}_{Z'}\left[k(Z,Z')\right]\,,
\end{align*}
we get the desired bound.
\end{proof}
\cref{app:th:infonce_bound} works for any bounded kernel, because $- \brackets{1 + \sqrt{1-4\gamma}} / (2\gamma)$ takes values in $(-\infty, -2]$ for $\gamma \in (0, 1/4]$. For the inverse-temperature-scaled cosine similarity kernel $k(z,z')=z^\top z' / (\tau\|z\|\|z'\|)$, we have $k^{\mathrm{max}}=1 / \tau$. For $\tau=0.1$ (used in SimCLR \cite{chen2020simple}), we get $\gamma=0.0475$.
For the Gaussian and the IMQ kernels, $k(z,z') - \mu_1 \geq -\mu_1 \geq -1$ (both kernels are non-negative and bounded by 1), so we can replace $\gamma$ with $\frac{1}{3}$ due to the following inequality: for $x \geq a$,
\begin{align*}
\exp(x) \geq 1 + x + \frac{x^2}{2} + \frac{x^3}{6} \geq 1 + x + \frac{a + 3}{6} x^2\,,
\end{align*}
where the first inequality is always true.
\subsection{MMD interpretation of HSIC(Z,Y)}
\label{app:mmd}
The special label structure of the self-supervised setting allows us to understand $\mathrm{HSIC}(Z, Y)$ in terms of the maximum mean discrepancy (MMD). Denoting labels as $i$ and $j$ and corresponding mean feature vectors (in the RKHS) as $\mu_i$ and $\mu_j$,
\begin{align*}
\mathrm{MMD}^2(i, j) \,&= \|\mu_i-\mu_j\|^2 = \dotprod{\mu_i}{\mu_i} + \dotprod{\mu_j}{\mu_j} - 2\dotprod{\mu_i}{\mu_j}\,.
\end{align*}
Therefore, the average over all labels becomes
\begin{align*}
\frac{1}{N^2}\sum_{ij}\mathrm{MMD}^2(i, j) \,&= \frac{2}{N}\sum_i\dotprod{\mu_i}{\mu_i} - \frac{2}{N^2}\sum_{ij}\dotprod{\mu_i}{\mu_j}\\
&=\frac{2}{N}\sum_i\dotprod{\mu_i}{\mu_i} - \frac{2}{N^2}\dotprod{\sum_i \mu_i}{\sum_j \mu_j}\\
&=2\mathbb{E}_i\mathbb{E}_{Z|i, Z'|i}\dotprod{\phi(Z)}{\phi(Z')} - 2\dotprod{\mathbb{E}_i\mathbb{E}_{Z|i} \phi(Z)}{\mathbb{E}_j\mathbb{E}_{Z'|j} \phi(Z')}\\
&=2\mathbb{E}_{Z,Z'\sim \mathrm{pos}}\left[k(Z,Z')\right] - 2\mathbb{E}_{Z}\mathbb{E}_{Z'} \left[k(Z,Z')\right]\,,
\end{align*}
where the second-to-last line uses that all labels have the same probability $1/N$, and the last line takes the expectation out of the dot product and uses $k(Z,Z') = \dotprod{\phi(Z)}{\phi(Z')}$.
Therefore,
\begin{align*}
\frac{1}{2 N^2}\sum_{ij}\mathrm{MMD}^2(i, j) \,&= \frac{N}{\Delta l}\mathrm{HSIC}(Z, Y)\,.
\end{align*}
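This identity can be checked numerically with V-statistic (plug-in) estimates of the mean embeddings; the setup below is illustrative.
\begin{minted}[breaklines=true,fontsize=\scriptsize]{python}
import numpy as np

rng = np.random.default_rng(1)
N, M, d = 32, 4, 8
z = rng.normal(size=(N, M, d))
flat = z.reshape(N * M, d)
sq = np.sum((flat[:, None] - flat[None]) ** 2, axis=-1)
K = (1.0 / np.sqrt(1.0 + sq)).reshape(N, M, N, M)      # IMQ kernel, c = 1

inner = np.einsum('ipjl->ij', K) / M ** 2              # <mu_i, mu_j> estimates
diag = inner.diagonal()
mmd_sq = diag[:, None] + diag[None, :] - 2 * inner     # MMD^2(i, j)
lhs = mmd_sq.sum() / (2 * N ** 2)
rhs = diag.mean() - inner.mean()                       # E_pos[k] - E[k]
assert np.allclose(lhs, rhs)
\end{minted}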
\subsection{Centered representation assumption for clustering}
\label{app:section:features}
In \Cref{subsec:clustering}, we make the assumption that the features are centered and argue that the assumption is valid for BYOL. Here we show empirical evidence of centered features. First, we train BYOL for 1000 epochs, reaching a top-1 accuracy of 74.5\%, similar to the result reported in \cite{grill2020bootstrap}. Next, we extract feature representations (predictor and target projector outputs followed by re-normalization) under training data augmentations for a batch of 4096 images. A one-sample Z-test is carried out on the feature representations with $H_0:\mu=0$ and $H_1:\mu \neq 0$. The null hypothesis is not rejected at the $\alpha=0.025$ level.
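A minimal sketch of one way to carry out such a per-dimension test is given below (using \texttt{scipy} for the normal tail probability; the names are ours, and details may differ from the exact procedure we used).
\begin{minted}[breaklines=true,fontsize=\scriptsize]{python}
import numpy as np
from scipy import stats

def one_sample_z_test(features):
  # features: (n, d) array of representations; tests H0: mean = 0 per dimension.
  n = features.shape[0]
  z_stat = features.mean(axis=0) / (features.std(axis=0, ddof=1) / np.sqrt(n))
  p_values = 2 * stats.norm.sf(np.abs(z_stat))   # two-sided p-values
  return z_stat, p_values
\end{minted}
The null hypothesis is retained for a dimension whenever its p-value exceeds the chosen threshold.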
\section{Random Fourier Features (RFF)}
\label{app:rff}
\subsection{Basics of RFF}
Random Fourier features were introduced by \citet{rahimi2007random} to reduce computational complexity of kernel methods. Briefly, for translation-invariant kernels $k(z-z')$ that satisfy $k(0)=1$, Bochner's theorem gives that
\begin{align*}
k(z-z') = \int p(\omega)e^{i\omega^\top(z-z')} d^n\omega =\mathbb{E}_{\omega} \left[e^{i\omega^\top z}\brackets{e^{i\omega^\top z'}}^*\right]\,,
\end{align*}
where the probability distribution $p(\omega)$ is the $n$-dimensional Fourier transform of $k(z-z')$.
As both the kernel and $p(\omega)$ are real-valued, we only need the real part of the complex exponential. Therefore, for $b\sim \mathrm{Uniform}[0, 2\pi]$,
\begin{align*}
k(z-z') = \mathbb{E}_{\omega,b} \left[2\cos(\omega^\top z+b)\cos(\omega^\top z' + b)\right]\,.
\end{align*}
For $N$ data points, we can draw $D$ frequencies $\omega_d$ from $p(\omega)$ (each with its own offset $b_d$), construct RFF for each point $z_i$, put them into matrix $R\in \mathbb{R}^{N\times D}$, and approximate the kernel matrix as
\begin{align*}
K \approx R R^\top,\ R_{id} = \sqrt{\frac{2}{D}} \cos(\omega_d^\top z_i+b_d)\,,
\end{align*}
\end{align*}
and $\mathbb{E}\, R R^\top = K$.
For the Gaussian kernel, $k(z-z')=\exp(-\|z-z'\|^2 / 2)$, we have $p(\omega) = (2\pi)^{-n/2}\exp(-\|\omega\|^2 / 2)$ \cite{rahimi2007random}. We are not aware of literature on the RFF representation of the inverse multiquadratic (IMQ) kernel; we derive it below using standard methods.
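As an illustration, the following sketch (names ours) builds Gaussian-kernel random features and compares $RR^\top$ with the exact kernel matrix:
\begin{minted}[breaklines=true,fontsize=\scriptsize]{python}
import numpy as np

def gaussian_rff(z, num_features, rng):
  # RFF for k(z - z') = exp(-||z - z'||^2 / 2); p(omega) is standard normal.
  n = z.shape[-1]
  omega = rng.normal(size=(num_features, n))
  b = rng.uniform(0.0, 2 * np.pi, size=num_features)
  return np.sqrt(2.0 / num_features) * np.cos(z @ omega.T + b)

rng = np.random.default_rng(0)
z = rng.normal(size=(5, 3))
R = gaussian_rff(z, 4096, rng)
sq = np.sum((z[:, None] - z[None]) ** 2, axis=-1)
print(np.max(np.abs(R @ R.T - np.exp(-sq / 2))))   # small approximation error
\end{minted}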
\subsection{RFF for the IMQ kernel}
\begin{theorem}
For the inverse multiquadratic (IMQ) kernel,
\begin{align*}
k(z, z') \equiv k(z-z') = \frac{c}{\sqrt{c^2 + \|z-z'\|^2}}\,,
\end{align*}
the distribution of random Fourier features $p(\omega)$ is proportional to the following (for $s = \|\omega\|$),
\begin{equation}
p(\omega) \equiv \hat h(s) \propto \frac{K_{\frac{n-2}{2} + \frac{1}{2}}(cs)}{s^{\frac{n-2}{2} + \frac{1}{2}}} = \frac{K_{\frac{n-1}{2}}(cs)}{s^{\frac{n-1}{2}}}\,,
\label{app:eq:imq_rff_full_form}
\end{equation}
where $K_{\nu}$ is the modified Bessel function (of the second kind) of order $\nu$.
\end{theorem}
\begin{proof}
To find the random Fourier features, we need to take the Fourier transform of this kernel,
\begin{align*}
\hat k(\omega) = \int e^{-i \omega^\top z} k(z) d^nz\,.
\end{align*}
As the IMQ kernel is radially symmetric, meaning that $k(z, z') = h(r)$ for $r=\|z-z'\|$, its Fourier transform can be written in terms of the Hankel transform \cite[Section B.5]{grafakos2008classical}
(with $\|\omega\|=s$)
\begin{align*}
\hat k(\omega)=\hat h(s)=\frac{(2\pi)^{n/2}}{s^{\frac{n-2}{2}}}H_{\frac{n-2}{2}}\left[r^{\frac{n-2}{2}}h(r) \right](s)\,.
\end{align*}
The Hankel transform of order $\nu$ is defined as
\begin{align*}
H_\nu[g(t)](s)=\int_0^{\infty}J_\nu(st) g(t)t\,dt\,,
\end{align*}
where $J_\nu(s)$ is the Bessel function (of the first kind) of order $\nu$.
As $h(r) = c / \sqrt{c^2+r^2}$,
\begin{align*}
H_{\frac{n-2}{2}}\left[r^{\frac{n-2}{2}}h(r) \right](s) = c \frac{\sqrt{2}c^{\frac{n-2}{2} + 1/2}}{\sqrt{s}\Gamma(\frac{1}{2})} K_{\frac{n-2}{2} + \frac{1}{2}}(cs)\,,
\end{align*}
where $K_{\nu}$ is a modified Bessel function (of the second kind) of order $\nu$.
Therefore, by using a table of Hankel transforms \cite[Chapter 9, Table 9.2]{piessens2000hankel},
\begin{align*}
\hat h(s) \propto \frac{K_{\frac{n-2}{2} + \frac{1}{2}}(cs)}{s^{\frac{n-2}{2} + \frac{1}{2}}} = \frac{K_{\frac{n-1}{2}}(cs)}{s^{\frac{n-1}{2}}}\,.
\end{align*}
\end{proof}
\subsubsection{How to sample}
To sample random vectors from \eqref{app:eq:imq_rff_full_form}, we can first sample their directions as uniformly distributed unit vectors $d / \|d\|$, and then their amplitudes $s$ from $\hat h(s) s^{n-1}$ (the multiplier comes from the change to spherical coordinates).
Sampling unit vectors is easy, as for $d\sim\mathcal{N}(0, I)$, $d / \|d\|$ is a uniformly distributed unit vector.
To sample the amplitudes, we numerically evaluate
\begin{equation}
\tilde p(s) = \hat h(s) s^{n-1} = K_{\frac{n-1}{2}}(cs) s^{\frac{n-1}{2}}
\label{app:eq:imq_rff_ampltudes}
\end{equation}
on a grid, normalize it to get a valid probability distribution, and sample from this approximation. Since $K_{\nu}$ attains very large values for large orders, we use mpmath \cite{mpmath}, an arbitrary precision floating-point arithmetic library for Python. As we only need to evaluate $\tilde p(s)$ on the grid once during training, this adds a negligible computational overhead.
Finally, note that for any IMQ bias $c$, we can sample $s$ from \eqref{app:eq:imq_rff_ampltudes} for $c=1$, and then use $\tilde s = s / c$ to rescale the amplitudes. This is because
\begin{align*}
P(s/c \leq x) = P(s \leq cx) = C \int_0^{cx} K_{\frac{n-1}{2}}(t) t^{\frac{n-1}{2}} dt = C c^{\frac{n-1}{2}} \int_0^{x} K_{\frac{n-1}{2}}(c\tilde t) \tilde t^{\frac{n-1}{2}} cd\tilde t\,.
\end{align*}
In practice, we evaluate $\tilde p(s)$ for $c=1$ on a uniform grid over $[10^{-12}, 100]$ with $10^4$ points, and rescale for other $c$ (for output dimensions of more than 128, we use a larger grid; see \cref{app:sec:pseudo-code}).
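The sketch below (illustrative; the full pipeline appears in \cref{app:sec:pseudo-code}) evaluates the $c=1$ amplitude distribution \eqref{app:eq:imq_rff_ampltudes} on a grid and rescales the samples for a general bias $c$:
\begin{minted}[breaklines=true,fontsize=\scriptsize]{python}
import numpy as np
import mpmath

def amplitude_probs(n, grid):
  # Unnormalized density K_{(n-1)/2}(s) s^{(n-1)/2} for c = 1, on a grid.
  p = np.array([float(mpmath.besselk((n - 1) / 2, s) * mpmath.power(s, (n - 1) / 2))
                for s in grid])
  return p / p.sum()

n, c = 16, 2.0
grid = np.linspace(1e-12, 100, 10000)
rng = np.random.default_rng(0)
s = rng.choice(grid, size=10000, p=amplitude_probs(n, grid))
s_rescaled = s / c   # amplitudes for IMQ bias c, by the change of variables above
\end{minted}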
\subsection{RFF for SSL-HSIC}
To apply RFF to SSL-HSIC, we will discuss the $\mathrm{HSIC}(Z,Y)$ and the $\mathrm{HSIC}(Z,Z)$ terms separately. We will use the following notation:
\begin{align*}
k(z_i^p, z_j^l) \approx \sum_{d=1}^Dr_{d}^{ip} r_{d}^{jl}
\end{align*}
for $D$-dimensional RFF $r^{ip}$ and $r^{jl}$.
Starting with the first term, we can re-write \eqref{app:eq:hsic_z_y_biased} as
\begin{align*}
\widehat\mathrm{HSIC}(Z, Y)_{\mathrm{RFF}}\,&= \frac{1}{BM(M-1)}\sum_{ipld} r_{d}^{ip} r^{il}_d- \frac{1}{B^2M^2}\sum_{ijpld} r_d^{ip} r_d^{jl} - \frac{1}{M-1}\\
&=\frac{1}{BM(M-1)}\sum_{id} \brackets{\sum_p r_{d}^{ip}}^2 - \frac{1}{B^2M^2}\sum_d \brackets{\sum_{ip} r_d^{ip}}^2 - \frac{1}{M-1}\,.
\end{align*}
The last term in the equation above is why we use RFF: instead of computing $\sum_{ijpl}$ in $O(B^2M^2)$ operations, we compute $\sum_{ip}$ in $O(BM)$ and then sum over $d$, resulting in $O(BMD)$ operations (as we use large batches, typically $BM > D$). As $\mathrm{HSIC}(Z,Y)$ is linear in $k$, $\mathbb{E}_{\omega, b}\, \widehat\mathrm{HSIC}(Z, Y)_{\mathrm{RFF}} = \widehat\mathrm{HSIC}(Z, Y)$.
To estimate $\widehat\mathrm{HSIC}(Z, Z)$, we need to sample RFF twice. This is because
\begin{align*}
\widehat\mathrm{HSIC}(Z, Z) = \frac{1}{(BM-1)^2}\mathrm{Tr} \brackets{KHKH}\,,
\end{align*}
therefore we need the first $K$ to be approximated by $RR^\top$, and the second -- by an independently sampled $\tilde R\tilde R^\top$. This way, we will have $\mathbb{E}_{\omega, b, \tilde{\omega}, \tilde b}\, \widehat\mathrm{HSIC}(Z, Z)_{\mathrm{RFF}} = \widehat\mathrm{HSIC}(Z, Z)$.
Therefore, we have (noting that $HH=H$)
\begin{align*}
\widehat\mathrm{HSIC}(Z, Z)_{\mathrm{RFF}} \,&= \frac{1}{(BM-1)^2}\mathrm{Tr} \brackets{RR^\top H\tilde R \tilde R^\top H} = \frac{1}{(BM-1)^2}\|R^\top H\tilde R\|^2_F\\
& = \frac{1}{(BM-1)^2}\|R^\top HH\tilde R\|^2_F \\
&=\frac{1}{(BM-1)^2}\sum_{d_1, d_2} \brackets{\sum_{ip} \brackets{r_{d_1}^{ip}- \frac{1}{BM}\sum_{jl} r_{d_1}^{jl}} \brackets{r_{d_2}^{ip} - \frac{1}{BM}\sum_{jl} r_{d_2}^{jl}}}^2\,.
\end{align*}
To summarize the computational complexity of this approach, computing $D$ random Fourier features for a $Q$-dimensional $z$ takes $O(DQ)$ operations (sampling a $D\times Q$ Gaussian matrix, normalizing it, sampling $D$ amplitudes, computing $\omega_d^\top z$ $D$ times), therefore $O(BMDQ)$ for $BM$ points. After that, computing $\mathrm{HSIC}(Z, Y)$ takes $O(BMD)$ operations, and $\mathrm{HSIC}(Z, Z)$ -- $O(BMD^2)$ operations. The resulting complexity per batch is $O(BMD(Q + D))$. Note that we sample new features every batch.
In contrast, computing SSL-HSIC directly would cost $O(Q)$ operations per entry of $K$, resulting in $O((BM)^2Q)$ operations. Computing HSIC would then be quadratic in batch size, and the total complexity would stay $O((BM)^2Q)$.
In the majority of experiments, $B=4096$, $M=2$, $Q=128$ and $D=512$, and the RFF approximation is faster (with little change in accuracy; see \cref{tab:ablation-rff}).
\section{Experiment Details}
\label{app:section:experiments}
\subsection{ImageNet Pretraining}
\label{app:experiment-pretrain}
\subsubsection{Data augmentation}
We follow the same data augmentation scheme as BYOL \cite{grill2020bootstrap} with exactly the same parameters. For completeness, we list the augmentations applied and parameters used:
\begin{itemize}
\item random cropping: randomly sample an area of $8\%$ to $100\%$ of the original image with an aspect ratio logarithmically sampled from $3/4$ to $4/3$. The cropped image is resized to $224 \times 224$ with bicubic interpolation;
\item flip: optionally flip the image with a probability of $0.5$;
\item color jittering: with probability $0.8$, adjust brightness, contrast, saturation and hue in a random order, with maximum strengths of $0.4$, $0.4$, $0.2$ and $0.1$ respectively;
\item color dropping: optionally converting to grayscale with a probability of $0.2$;
\item Gaussian blurring: Gaussian kernel of size $23 \times 23$ with a standard deviation uniformly sampled over $[0.1, 2.0]$;
\item solarization: optionally apply color transformation $x \mapsto x \cdot 1_{x < 0.5} + (1 - x) \cdot 1_{x \geq 0.5}$ for pixels with values in $[0, 1]$. Solarization is only applied for the second view, with a probability of $0.2$.
\end{itemize}
\subsubsection{Optimizing kernel parameters}
\label{app:kernel-param}
Since we use radial basis function kernels, we can express the kernel $k(s)$ in terms of the squared distance $s= \lVert z_i-z_j \rVert^2$. The entropy of the kernel distance $k_{\sigma}(s_{ij})$ can be expressed as follows:
\begin{align*}
H[k] & = -\int p(k) \log p(k) dk \\
& = -\int q(s) \log \left( q(s) \left\lvert \frac{ds}{dk} \right\rvert \right) ds \\
& = H[s] + \int q(s) \log \left\lvert \frac{dk}{ds} \right\rvert ds \\
& = \E\left[ \log \lvert k_{\sigma}'(s)\rvert \right] + \text{const} \\
& \propto \E\left[ \log \lvert k_{\sigma}'(s) \rvert^2 \right] + \text{const}
.\end{align*}
We use the kernel distance entropy to automatically tune kernel parameters: at every batch, we update the kernel parameter $\sigma$ to maximize $\E\left[ \log \lvert k_{\sigma}'(s) \rvert^2 \right]$ (for IMQ, we optimize the bias $c$). This procedure makes sure the kernel remains sensitive to data variations as representations move closer to each other.
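As an illustration, for the IMQ kernel written as $k_c(s) = c/\sqrt{c^2 + s}$ with $s$ the squared distance, the regularizer has the closed form $\log \lvert k_c'(s) \rvert^2 = 2\log(c/2) - 3\log(c^2+s)$; a minimal JAX sketch (names ours) of the per-batch objective is:
\begin{minted}[breaklines=true,fontsize=\scriptsize]{python}
import jax.numpy as jnp

def imq_entropy_reg(z, c):
  # E[log |k_c'(s)|^2] over the batch, to be *maximized* w.r.t. the bias c.
  sq_dists = jnp.sum((z[:, None] - z[None]) ** 2, axis=-1)
  return jnp.mean(2 * jnp.log(c / 2) - 3 * jnp.log(c ** 2 + sq_dists))
\end{minted}
In training, the negative of this quantity would be added to the loss, so that gradient descent maximizes it.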
\subsection{Evaluations}
\label{app:experiment-eval}
\subsubsection{ImageNet linear evaluation protocol}
After pretraining with SSL-HSIC, we retain the encoder weights and train a linear layer on top of the frozen representation. The original ImageNet training set is split into a training set and a local validation set with $10000$ data points. We train the linear layer on the training set. Spatial augmentations are applied during training, i.e., random crops with resizing to $224 \times 224$ pixels, and random flips. For validation, images are resized to $256$ pixels along the shorter side using bicubic resampling, after which a $224 \times 224$ center crop is applied. We use \texttt{SGD} with Nesterov momentum and train over 90 epochs, with a batch size of 4096 and a momentum of 0.9. We sweep over the learning rate and weight decay, and choose the hyperparameters with the best top-1 accuracy on the local validation set. With the best hyperparameter setting, we report the final performance on the original ImageNet validation set.
\subsubsection{ImageNet semi-supervised learning protocol}
We use the same ImageNet 1\% and 10\% subsets as SimCLR \cite{chen2020simple}. During training, we initialize the weights to the pretrained weights, then fine-tune them on the ImageNet subsets. We use the same training procedure for augmentation and optimization as the linear evaluation protocol.
\subsubsection{Linear evaluation protocol for other classification datasets}
\label{app:semi-supervised_protocol}
We use the same dataset splits and follow the same procedure as BYOL \cite{grill2020bootstrap} to evaluate classification performance on other datasets, i.e. 12 natural image datasets and Pascal VOC 2007. The frozen features are extracted from the frozen encoder. We learn a linear layer using logistic regression in \texttt{sklearn} with an $\ell_2$ penalty and \texttt{LBFGS} for optimization. We use the same local validation set as BYOL \cite{grill2020bootstrap} and tune hyperparameters on this local validation set. Then, we train on the full training set using the chosen weight of the $\ell_2$ penalty and report the final result on the test set.
\subsubsection{Fine-tuning protocol for other classification datasets}
Using the same dataset splits described in \Cref{app:semi-supervised_protocol}, we initialize the weights of the network to the pretrained weights and fine-tune on various classification tasks. The network is trained using \texttt{SGD} with Nesterov momentum for $20000$ steps.
The momentum parameter for the batch normalization statistics is set to $\max(1 -10/s, 0.9)$ where $s$ is the number of steps per epoch.
We sweep the weight decay and learning rate, and choose hyperparameters that give the best score on the local validation set. Then we use the selected weight decay and learning rate to train on the whole training set to report the test set performance.
\subsubsection{Transfer to semantic segmentation}
In semantic segmentation, the goal is to classify each pixel. The head architecture is a fully-convolutional network (FCN)-based \cite{DBLP:journals/corr/LongSD14} architecture as in \cite{DBLP:journals/corr/abs-1911-05722,grill2020bootstrap}. We train on the \texttt{train\_aug2012} set and report results on \texttt{val2012}. Hyperparameters are selected on a held-out validation set of $2119$ images, the same as in \cite{grill2020bootstrap}. A standard per-pixel softmax cross-entropy loss is used to train the FCN. Training uses random scaling (by a ratio in $[0.5, 2.0]$), cropping (crop size 513), and horizontal flipping for data augmentation. Testing is performed on the $[513, 513]$ central crop. We train for $30000$ steps with a batch size of $16$ and weight decay $10^{-4}$. We sweep the base learning rate on the local validation set, then use the best learning rate to train on the whole training set and report on the test set. During training, the learning rate is multiplied by $0.1$ at the $70$th and $90$th percentiles of training. The final result is reported as the average over 5 seeds.
\subsubsection{Transfer to depth estimation}
The network is trained to predict the depth map of a given scene. We use the same setup as BYOL \cite{grill2020bootstrap} and report it here for completeness. The architecture is composed of a ResNet-50 backbone and a task head which takes the $conv5$ features into 4 upsampling blocks with respective filter sizes 512, 256, 128, and 64. The reverse Huber loss function is used for training.
The frames are down-sampled from $[640, 480]$ by a factor 0.5 and center-cropped to size $[304, 228]$. Images are randomly flipped and color transformations are applied: greyscale with a probability of 0.3; brightness adjustment with a maximum difference of 0.1255; saturation with a saturation factor randomly picked in the interval $[0.5, 1.5]$; hue adjustment with a factor randomly picked in the interval $[-0.2, 0.2]$.
We train for 7500 steps with batch size 256, weight decay 0.001, and learning rate 0.05.
\subsubsection{Transfer to object detection}
We follow the same setup for evaluating COCO object detection tasks as in DetCon \cite{DBLP:journals/corr/abs-2103-10957}. The architecture used is a Mask-RCNN \cite{DBLP:journals/corr/HeGDG17} with feature pyramid networks \cite{DBLP:journals/corr/LinDGHHB16}. During training, the images are randomly flipped and resized to $(1024\cdot s)\times (1024\cdot s)$ where $s \in [0.8, 1.25]$, then cropped or padded to $1024\times1024$. We fine-tune the model for 12 epochs ($1\times$ schedule \cite{DBLP:journals/corr/abs-1911-05722}) using \texttt{SGD} with a learning rate of 0.3 and momentum 0.9. The learning rate increases linearly for the first 500 iterations and drops twice by a factor of 10, after $2/3$ and $8/9$ of the total training time. We apply a weight decay of $4\times 10^{-5}$ and train with a batch size of 64.
\section{SSL-HSIC pseudo-code}
\label{app:sec:pseudo-code}
\begin{minted}[breaklines=true,fontsize=\scriptsize]{python}
import jax
import jax.numpy as jnp
import mpmath
import numpy as np
def ssl_hsic_loss(hiddens, kernel_param, num_rff_features, gamma, rng):
  """Compute SSL-HSIC loss."""
  hsic_yz = compute_hsic_yz(hiddens, num_rff_features, kernel_param, rng)
  hsic_zz = compute_hsic_zz(hiddens, num_rff_features, kernel_param, rng)
  return - hsic_yz + gamma * jnp.sqrt(hsic_zz)


def compute_hsic_yz(hiddens, num_rff_features, kernel_param, rng):
  """Compute RFF approximation of HSIC_YZ."""
  # B - batch size; M - number of transformations.
  B = hiddens[0].shape[0]
  M = len(hiddens)
  rff_hiddens = jnp.zeros((B, num_rff_features))
  mean = jnp.zeros((1, num_rff_features))
  for hidden in hiddens:
    # The same RFF draw is shared across views, keeping the kernel approximation consistent.
    rff_features = imq_rff_features(hidden, num_rff_features, kernel_param, rng)
    rff_hiddens += rff_features
    mean += rff_features.sum(0, keepdims=True)
  # The constant term -1 / (M - 1) is dropped, as it does not affect gradients.
  return (rff_hiddens ** 2).sum() / (B * M * (M - 1)) - (mean ** 2).sum() / (B * M) ** 2


def compute_hsic_zz(hiddens, num_rff_features, kernel_param, rng):
  """Compute RFF approximation of HSIC_ZZ."""
  # Two independent RFF draws approximate the two kernel matrices in Tr(KHKH).
  rng_1, rng_2 = jax.random.split(rng, num=2)
  B = hiddens[0].shape[0]
  M = len(hiddens)
  z1_rffs = []
  z2_rffs = []
  center_z1 = jnp.zeros((1, num_rff_features))
  center_z2 = jnp.zeros((1, num_rff_features))
  for hidden in hiddens:
    z1_rff = imq_rff_features(hidden, num_rff_features, kernel_param, rng_1)
    z2_rff = imq_rff_features(hidden, num_rff_features, kernel_param, rng_2)
    z1_rffs.append(z1_rff)
    center_z1 += z1_rff.mean(0, keepdims=True)
    z2_rffs.append(z2_rff)
    center_z2 += z2_rff.mean(0, keepdims=True)
  center_z1 /= M
  center_z2 /= M
  z = jnp.zeros(shape=(num_rff_features, num_rff_features), dtype=jnp.float32)
  for z1_rff, z2_rff in zip(z1_rffs, z2_rffs):
    z += jnp.einsum('ni,nj->ij', z1_rff - center_z1, z2_rff - center_z2)
  return (z ** 2).sum() / (B * M - 1) ** 2


def imq_rff_features(hidden, num_rff_features, kernel_param, rng):
  """Random Fourier features of IMQ kernel."""
  d = hidden.shape[-1]
  # Separate keys for amplitudes, directions and offsets.
  rng1, rng2, rng3 = jax.random.split(rng, num=3)
  amp, amp_probs = amplitude_frequency_and_probs(d)
  amplitudes = jax.random.choice(rng1, amp, shape=[num_rff_features, 1], p=amp_probs)
  directions = jax.random.normal(rng2, shape=(num_rff_features, d))
  b = jax.random.uniform(rng3, shape=(1, num_rff_features)) * 2 * jnp.pi
  w = directions / jnp.linalg.norm(directions, axis=-1, keepdims=True) * amplitudes
  z = jnp.sqrt(2 / num_rff_features) * jnp.cos(jnp.matmul(hidden / kernel_param, w.T) + b)
  return z


def amplitude_frequency_and_probs(d):
  """Returns frequencies and probabilities of amplitude for RFF of IMQ kernel."""
  # Heuristics for increasing the upper limit with the feature dimension.
  if d >= 4096:
    upper = 200
  elif d >= 2048:
    upper = 150
  elif d >= 1024:
    upper = 120
  else:
    upper = 100
  x = np.linspace(1e-12, upper, 10000)
  p = compute_prob(d, x)
  return x, p


def compute_prob(d, x_range):
  """Returns probabilities associated with the frequencies."""
  # Computed with mpmath, as K_nu overflows float64 for large orders.
  prob = [mpmath.besselk((d - 1) / 2, x) * mpmath.power(x, (d - 1) / 2) for x in x_range]
  normalizer = sum(prob)
  return np.array([float(x / normalizer) for x in prob])
\end{minted}
\section{Background}
\subsection{Self-supervised learning}
Recent developments in self-supervised learning, such as contrastive learning, try to ensure that features of two random views of an image are more associated with each other than with random views of other images.
Typically, this is done through some variant of a classification loss, with one ``positive'' pair and many ``negatives.'' Other methods can learn solely from ``positive'' pairs, however.
There have been many variations of this general framework in the past few years.
\Citet{oord2018representation} first formulated the InfoNCE loss, which estimates a lower bound of the mutual information between the feature and the context.
SimCLR \cite{chen2020simple, chen2020big} carefully investigates the contribution of different data augmentations, and scales up the training batch size to include more negative examples.
MoCo \cite{DBLP:journals/corr/abs-1911-05722} increases the number of negative examples by using a memory bank.
BYOL \cite{grill2020bootstrap} learns solely on positive image pairs, training so that representations of one view match that of the other under a moving average of the featurizer.
Instead of the moving average, SimSiam \cite{DBLP:journals/corr/abs-2011-10566} suggests a stop-gradient on one of the encoders is enough to prevent BYOL from finding trivial solutions.
SwAV \cite{caron2020unsupervised} clusters the representation online, and uses distance from the cluster centers rather than computing pairwise distances of the data.
Barlow Twins \cite{DBLP:journals/corr/abs-2103-03230} uses an objective related to the cross-correlation matrix of the two views, motivated by redundancy reduction. It is perhaps the work in the literature most closely related to ours (and their covariance matrix can be connected to HSIC \cite{tsai2021note}), but our method measures dependence more directly. While Barlow Twins decorrelates the components of the final representations, we maximize the dependence between the image's abstract identity and its transformations.
On the theory side, InfoNCE is proposed as a variational bound on Mutual Information between the representation of two views of the same image \cite{oord2018representation,DBLP:journals/corr/abs-1905-06922}. \Citet{tschannen2019mutual} observe that InfoNCE performance cannot be explained solely by the properties of the mutual information, however, but is influenced more by other factors, such as the formulation of the estimator and the architecture of the feature extractor. Essentially, representations with the same MI can have drastically different representational qualities.
\begin{wrapfigure}{r}{0.32\textwidth}
\centering
\includegraphics[width=.98\linewidth]{figures/mi-bad-example.pdf}
\vspace*{-2mm}
\caption{Three distributions of positive examples for two classes (green and purple) that have the same mutual information, but drastically different quality for downstream learners.}
\label{fig:mi-bad}
\end{wrapfigure}
To see this, consider a problem with two inputs, $A$ and $B$ (\cref{fig:mi-bad}, green and purple), and a one-dimensional featurizer, parameterized by the integer $M$, which maps $A$ to $\mathrm{Uniform}(\{0, 2, \dots, 2M\})$ and $B$ to $\mathrm{Uniform}(\{1, 3, \dots, 2M + 1\})$. When $M=0$, the inputs are encoded into linearly separable features $A=0$ and $B=1$ (\cref{fig:mi-bad}, bottom). Otherwise when $M>0$, they are interspersed like $ABABABAB$ -- a representation which is much harder to work with for downstream learners. Nevertheless, the mutual information between the features of any two augmentations of the same input (a positive pair) is independent of $M$, that is $H[Z_1] - H[Z_1|Z_2]=\log 2$
for any $M$. Note that InfoNCE would strongly prefer $M=0$, indeed behaving very differently from MI.
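This independence from $M$ is easy to verify numerically; the sketch below (illustrative) computes $I(Z_1; Z_2)$ for the toy featurizer directly from its joint distribution.
\begin{minted}[breaklines=true,fontsize=\scriptsize]{python}
import numpy as np

def mutual_information(M):
  # Inputs A/B equally likely; A -> even features, B -> odd features.
  p = np.zeros((2 * M + 2, 2 * M + 2))
  for offset in (0, 1):
    support = np.arange(offset, 2 * M + 2, 2)
    p[np.ix_(support, support)] += 0.5 / (M + 1) ** 2
  pz = p.sum(axis=1)
  mask = p > 0
  return np.sum(p[mask] * np.log(p[mask] / np.outer(pz, pz)[mask]))

print([round(mutual_information(M), 4) for M in range(4)])  # all log(2) ~ 0.6931
\end{minted}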
Later theories suggest that contrastive losses balance alignment of individual features and uniformity of the feature distribution \cite{DBLP:journals/corr/abs-2005-10242}, or in general alignment and some loss-defined distribution \cite{chen2020intriguing}. We propose to interpret the contrastive loss through the lens of statistical dependence, and relate it to metric learning, which naturally leads to alignment and uniformity.
\subsection{Hilbert-Schmidt Independence Criterion (HSIC)}
The Hilbert-Schmidt Independence Criterion (HSIC) \cite{gretton2005measuring} is a kernel-based measure of dependence between probability distributions.
Like mutual information, for a wide range of kernels $\mathrm{HSIC}(X, Y) = 0$ if and only if $X$ and $Y$ are independent \citep{hsic-characteristic}, and large values of the measure correspond to ``more dependence.''
Unlike mutual information,
HSIC incorporates a notion of geometry (via the kernel choice),
and is both statistically and computationally easy to estimate.
It has been used in a variety of applications,
particularly for independence testing \citep{gretton2007kernel},
but it has also been maximized in applications such as
feature selection \citep{song2012feature},
clustering \cite{cluhsic,BG09:taxonomies},
active learning \citep{jain2020information},
and as a classification loss called HSIC Bottleneck \citep{ma2019hsic,pogodin2020kernelized} (similar ideas were expressed in \cite{wu2018dependency, nokland2019training}).
HSIC measures the dependence between two random variables by first taking a nonlinear feature transformation of each,
say $\phi : \mathcal X \to \mathcal F$
and $\psi : \mathcal Y \to \mathcal G$
(with $\mathcal F$ and $\mathcal G$ reproducing kernel Hilbert spaces, RKHSes\footnote{%
In a slight abuse of notation, we use $\phi(x) \psi(y)^\top$ for the tensor product $\phi(x) \otimes \psi(y) \in \mathcal F \otimes \mathcal G$.}),
and then evaluating the norm of the cross-covariance between those features:
\begin{gather}
\mathrm{HSIC}(X, Y) = \left\lVert
\E[\phi(X) \, \psi(Y)^\top]
- \E[\phi(X)] \, \E[\psi(Y)]^\top
\right\rVert^2_\mathit{HS}
.\end{gather}
Here $\lVert \cdot \rVert_\mathit{HS}$ is the Hilbert-Schmidt norm, which in finite dimensions is the usual Frobenius norm.
HSIC measures the scale of the correlation in these nonlinear features,
which allows it to identify nonlinear dependencies between $X$ and $Y$ with appropriate features $\phi$ and $\psi$.
Inner products in an RKHS are by definition \emph{kernel functions}: $k(x,x') = \left\langle \phi(x), \phi(x') \right\rangle_{\mathcal{F}}$ and $l(y,y') = \left\langle \psi(y), \psi(y') \right\rangle_{\mathcal{G}}$.
Let $(X',Y')$, $(X'', Y'')$ be independent copies of $(X, Y)$; this gives
\begin{equation} \label{eq:hsic-pop}
\mathrm{HSIC}(X, Y) =
\E\left[k(X,X') l(Y,Y')\right]
-2\E\left[ k(X,X') l(Y, Y'') \right]
+ \E\left[ k(X, X') \right] \E\left[l(Y, Y')\right]
.
\end{equation}
HSIC is also straightforward to estimate: given samples $\{(x_1,y_1), \dots, (x_N,y_N)\}$ drawn i.i.d.\ from the joint distribution of $(X, Y)$, \citet{gretton2005measuring} propose an estimator
\begin{align}
\widehat\mathrm{HSIC}(X, Y) &= \frac{1}{(N-1)^2} \mathrm{Tr}(KHLH)\,,
\label{eq:biased-hsic}
\end{align}
where $K_{ij}=k(x_i, x_j)$ and $L_{ij}=l(y_i, y_j)$ are the kernel matrices,
and $H = I - \frac{1}{N} \bb 1 \bb 1^\top$ is called the centering matrix.
This estimator has an $O(1/N)$ bias, which is not a concern for our uses; however, an unbiased estimator with the same computational cost is available \cite{song2012feature}.
\section{Conclusions}
We introduced SSL-HSIC, a loss function for self-supervised representation learning based on kernel dependence maximization. We provided a unified view on various self-supervised learning losses: we proved that InfoNCE, a lower bound of mutual information, actually approximates SSL-HSIC with a variance-based regularization, and we can also interpret SSL-HSIC as metric learning where the cluster structure is imposed by the self-supervised label, of which the BYOL objective is a special case. We showed that training with SSL-HSIC achieves performance on par with the state-of-the-art on the standard self-supervised benchmarks.
Although using the image identity as self-supervised label provides a good inductive bias, it might not be wholly satisfactory; we expect that some image pairs are in fact more similar than others, based e.g.\ on their ImageNet class label. It will be interesting to explore methods that combine label structure discovery with representation learning (as in SwAV \cite{caron2020unsupervised}). In this paper, we only explored learning image representations, but in future work SSL-HSIC can be extended to learning structure for $Y$ as well, building on existing work \cite{BG09:taxonomies,BZG:better-taxonomies}.
\section*{Broader impact}
Our work concentrates on providing a more theoretically grounded and interpretable loss function for self-supervised learning. A better understanding of self-supervised learning, especially through more interpretable learning dynamics, is likely to lead to better and more explicit control over societal biases of these algorithms. SSL-HSIC yields an alternative, clearer understanding of existing self-supervised methods. As such, it is unlikely that our method introduces further biases than those already present for self-supervised learning.
The broader impact of the self-supervised learning framework is an area that has not been studied by the AI ethics community, but we think it calls for closer inspection. An important concern for fairness of ML algorithms is dataset bias. ImageNet is known for a number of problems, such as offensive annotations, non-visual concepts and lack of diversity, in particular for underrepresented groups. Existing works and remedies typically focus on label bias. Since SSL doesn't use labels, however, the type and degree of bias could be very different from that of supervised learning. To mitigate the risk of dataset bias, one could employ dataset re-balancing to correct sampling bias \cite{yang2020towards} or completely exclude human images from the dataset while achieving the same performance \cite{asano2021pass}.
A new topic to investigate for self-supervised learning is how the bias/unbiased representation could be transferred to downstream tasks. We are not aware of any work in this direction. Another area of concern is security and robustness. Compared to supervised learning, self-supervised learning typically involves more intensive data augmentation such as color jittering, brightness adjustment, etc. There is some initial evidence suggesting self-supervised learning improves model robustness \cite{hendrycks2019using}. However, since data augmentation can either be beneficial \cite{sablayrolles2019white} or detrimental \cite{yu2021does} depending on the type of adversarial attacks, more studies are needed to assess its role for self-supervised learning.
\section{Experiments}
In this section, we present our experimental setup, where we assess the performance of the representation learned with SSL-HSIC both with and without a target network. First, we train a model with a standard ResNet-50 backbone using SSL-HSIC as objective on the training set of ImageNet ILSVRC-2012 \cite{russakovsky2015imagenet}. For evaluation, we retain the backbone as a feature extractor for downstream tasks. We evaluate the representation on various downstream tasks including classification, object segmentation, object detection and depth estimation.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{figures/architecture_2.pdf}
\caption{Architecture and SSL-HSIC objective. A self-supervised label $y$ -- an indicator of the image identity -- is associated with an image $x$.
Image transformation functions $t$ are sampled and applied to the original image, resulting in views $t^1(x)$ and $t^2(x)$.
Features $z^1$ and $z^2$ are obtained after passing the augmented views through encoder ($f$), projector ($g$), and possibly predictor ($q$) networks, while label $y$ is retained.
Kernel matrices, $K$ for the latents and $L$ for the labels, are computed on the mini-batch of data;
SSL-HSIC is estimated with $K$ and $L$ as in \eqref{eq:ssl_hsic_batch}. The blue boxes reflect two potential options: when using a target network, $\xi$ is a moving average of $\theta$, and a predictor network $q$ is added;
without the target network, $q$ is removed and $\xi$ is simply equal to $\theta$. }
\label{fig:architecture}
\end{figure}
\subsection{Implementation}
\textbf{Architecture}
\Cref{fig:architecture} illustrates the architecture we used for SSL-HSIC in this section. To facilitate comparison between different methods, our encoder $f_{\theta}$ uses the standard ResNet-50 backbone without the final classification layer. The output of the encoder is a 2048-dimensional embedding vector, which is the representation used for downstream tasks. As in BYOL \cite{grill2020bootstrap}, our projector $g$ and predictor $q$ networks are 2-layer MLPs with $4096$ hidden dimensions and $256$ output dimensions. The outputs of the networks are batch-normalized and rescaled to unit norm before computing the loss. We use an inverse multiquadric (IMQ) kernel for the latent representation (approximated with 512 random Fourier features that are resampled at each step; see \cref{app:rff} for details) and a linear kernel for labels. $\gamma$ in \eqref{eq:ssl_hsic} is set to $3$. When training without a target network, unlike SimSiam \cite{DBLP:journals/corr/abs-2011-10566}, we do not stop gradients for either branch. If the target network is used, its weights are an exponential moving average of the online network weights. We employ the same schedule as BYOL \cite{grill2020bootstrap}, $\tau = 1 - 0.01 \cdot (\cos(\pi t/T) + 1)/2$, with $t$ the current step and $T$ the total number of training steps.
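For concreteness, a minimal sketch of the target-network update implied by this schedule is given below (framework-agnostic Python with hypothetical helper names; not our exact implementation):
\begin{verbatim}
import math

def tau(t, T):
    # Cosine schedule from the text: tau rises from 0.99 at t = 0 to 1 at t = T.
    return 1.0 - 0.01 * (math.cos(math.pi * t / T) + 1.0) / 2.0

def update_target(online_params, target_params, t, T):
    # Exponential moving average: xi <- tau * xi + (1 - tau) * theta.
    m = tau(t, T)
    return [m * xi + (1.0 - m) * th
            for xi, th in zip(target_params, online_params)]
\end{verbatim}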
\textbf{Image augmentation}
Our method uses the same data augmentation scheme as BYOL (see \cref{app:experiment-pretrain}). Briefly, we first draw a random patch from the original image and resize it to $224 \times 224$. Then, we apply a random horizontal flip, followed by color jittering, consisting of a random sequence of brightness, contrast, saturation, hue adjustments, and an optional grayscale conversion. Finally Gaussian blur and solarization are applied,
and the view is normalized with ImageNet statistics.
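As a rough illustration, the pipeline can be sketched as follows (assuming a recent torchvision; the probabilities and magnitudes below are placeholders, with the exact values given in \cref{app:experiment-pretrain}):
\begin{verbatim}
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply(
        [transforms.ColorJitter(0.4, 0.4, 0.2, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=23),
    transforms.RandomSolarize(threshold=128, p=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
\end{verbatim}
Two independent draws of \texttt{augment} applied to the same image produce the two views $t^1(x)$ and $t^2(x)$.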
\textbf{Optimization}
We train the model with a batch size of 4096 on 128 Cloud TPU v4 cores. Again, following \cite{chen2020simple,grill2020bootstrap}, we use the LARS optimizer \cite{DBLP:journals/corr/abs-1708-03888} with a cosine decay learning rate schedule over 1000 epochs. The base learning rate for all of our experiments is $0.4$, scaled linearly \cite{DBLP:journals/corr/GoyalDGNWKTJH17} with the batch size: $lr = 0.4 \times batch\_size / 256$. All experiments use a weight decay of $10^{-6}$.
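A minimal sketch of this learning-rate rule (ignoring any warm-up details):
\begin{verbatim}
import math

def learning_rate(step, total_steps, batch_size, base_lr=0.4):
    # Linear scaling: lr = 0.4 * batch_size / 256, then cosine decay to 0.
    lr = base_lr * batch_size / 256.0
    return 0.5 * lr * (1.0 + math.cos(math.pi * step / total_steps))
\end{verbatim}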
\textbf{Learning kernel parameters}
We use a linear kernel for labels, since the type of kernel only scales \eqref{eq:ssl_hsic_batch}. Our inverse multiquadric kernel for the latent $Z$ has an additional kernel scale parameter. We optimize this along with all other parameters, but regularize it to maximize the entropy of the distribution $k_{\sigma}(s)$, where $s_{ij}= \lVert z_i - z_j \rVert^2$; this amounts to maximizing $\log \lVert k_{\sigma}'(s) \rVert^2$ (\cref{app:kernel-param}).
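As a sketch, assuming the illustrative IMQ parameterization $k_{\sigma}(s)=\sigma/\sqrt{\sigma^{2}+s}$ (the exact form we use is given in \cref{app:kernel-param}), the regularizer can be computed as:
\begin{verbatim}
import numpy as np

def kernel_scale_regularizer(Z, sigma):
    # Pairwise squared distances s_ij = ||z_i - z_j||^2.
    sq = np.sum(Z ** 2, axis=1)
    s = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    # For k_sigma(s) = sigma / sqrt(sigma^2 + s), the derivative in s is:
    dk = -0.5 * sigma * (sigma ** 2 + s) ** (-1.5)
    # Maximizing log ||k'(s)||^2 means adding its negative to the loss.
    return -np.log(np.sum(dk ** 2))
\end{verbatim}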
\subsection{Evaluation Results}
\textbf{Linear evaluation on ImageNet}
Learned features are evaluated with the standard linear evaluation protocol commonly used in evaluating self-supervised learning methods \cite{oord2018representation,bachman2019learning,henaff2020data,chen2020simple,DBLP:journals/corr/abs-1911-05722,DBLP:journals/corr/abs-2003-04297,grill2020bootstrap,caron2020unsupervised,DBLP:journals/corr/abs-2103-03230}. \Cref{tab:linear-imagenet} reports the top-1 and top-5 accuracies obtained with SSL-HSIC on the ImageNet validation set, and compares to previous self-supervised learning methods. Without a target network, our method reaches 72.2\% top-1 and 90.7\% top-5 accuracies. Unlike BYOL, the SSL-HSIC objective prevents the network from finding trivial solutions, as explained in \cref{subsec:clustering}. Adding the target network, our method outperforms most previous methods, achieving a top-1 accuracy of $74.8\%$ and a top-5 accuracy of $92.2\%$. The fact that we see performance gains from adopting a target network suggests that its effect is not yet well understood, although note the discussion in \cite{grill2020bootstrap}, which points to its stabilizing effect.
\begin{table}[htbp]
\begin{minipage}[t]{.38\linewidth}
\centering
\scriptsize
\caption{Linear evaluation on the ImageNet validation set.}
\label{tab:linear-imagenet}
\begin{tabular}[b]{@{}lcc@{}}
\toprule
& Top-1(\%) & Top-5(\%) \\
\midrule
Supervised \cite{DBLP:journals/corr/abs-1905-03670} & 75.9 & 92.8\\
\midrule
SimCLR \cite{chen2020simple} & 69.3 & 89.0 \\
MoCo v2 \cite{DBLP:journals/corr/abs-2003-04297} & 71.1 & 90.1 \\
BYOL \cite{grill2020bootstrap} & 74.3 & 91.6 \\
SwAV \cite{caron2020unsupervised} & \textbf{75.3} & - \\
Barlow Twins \cite{DBLP:journals/corr/abs-2103-03230} & 73.2 & 91.0 \\
\rowcolor{LightBlue}
SSL-HSIC (w/o target) & 72.2 & 90.7 \\
\rowcolor{LightBlue}
SSL-HSIC (w/ target) & 74.8 & \textbf{92.2} \\
\bottomrule
\end{tabular}
\end{minipage}\hspace{\fill}%
\begin{minipage}[t]{0.58\linewidth}
\centering
\scriptsize
\caption{Fine-tuning on 1\%, 10\% and 100\% of the ImageNet training set and evaluating on the validation set.}
\label{tab:finetune-imagenet}
\begin{tabular}[b]{@{}lcccccc@{}}
\toprule
& \multicolumn{3}{c}{Top-1(\%)} & \multicolumn{3}{c}{Top-5(\%)}\\
\cmidrule(r){2-4}
\cmidrule(r){5-7}
& 1\% & 10\% & 100\% & 1\% & 10\% & 100\% \\
\midrule
Supervised \cite{DBLP:journals/corr/abs-1905-03670} & 25.4&56.4&75.9&48.4&80.4&92.8\\
\midrule
SimCLR \cite{chen2020simple} & 48.3&65.6&76.0&75.5&87.8&93.1\\
BYOL \cite{grill2020bootstrap} & 53.2&68.8&\textbf{77.7}&78.4&89.0&\textbf{93.9}\\
SwAV \cite{caron2020unsupervised} & 53.9&\textbf{70.2}&-&78.5&\textbf{89.9}&-\\
Barlow Twins \cite{DBLP:journals/corr/abs-2103-03230} & \textbf{55.0}&69.7&-&\textbf{79.2}&89.3&-\\
\rowcolor{LightBlue}
SSL-HSIC (w/o target)& 45.3&65.5&76.4&72.7&87.5&93.2\\
\rowcolor{LightBlue}
SSL-HSIC (w/ target) & 52.1&67.9&77.2&77.7&88.6&93.6\\
\bottomrule
\end{tabular}
\end{minipage}
\end{table}
\textbf{Semi-supervised learning on ImageNet}
We fine-tune the network pretrained with SSL-HSIC on 1\%, 10\% and 100\% of ImageNet, using the same ImageNet splits as SimCLR \cite{chen2020simple}. \Cref{tab:finetune-imagenet} summarizes the semi-supervised learning performance. Our method, with or without a target network, has competitive performance across all data regimes. The target network has the most impact in the small-data regime, with 1\% of labels.
\begin{table}[htbp]
\centering
\setlength\tabcolsep{2pt}
\scriptsize
\caption{Comparison of transfer learning performance on 12 image datasets. Supervised-IN is trained on ImageNet with supervised pretraining. Random init trains on each individual dataset from randomly initialized weights. MPCA refers to mean per-class accuracy; AP50 is average precision at IoU=0.5.}
\begin{tabular}{@{}lcccccccccccc@{}}
\toprule
Dataset & Birdsnap & Caltech101 & Cifar10 & Cifar100 & DTD & Aircraft & Food & Flowers & Pets & Cars & SUN397 & VOC2007 \\
Metric & Top-1 & MPCA & Top-1 & Top-1 & Top-1 & MPCA & Top-1 & MPCA & MPCA & Top-1 & Top-1 & AP50 \\
\midrule
\textit{Linear:}\\
\midrule
Supervised-IN \cite{chen2020simple} &53.7 & \textbf{94.5} & \textbf{93.6} & 78.3 & 74.9 & \textbf{61.0} & 72.3 & 94.7 & \textbf{91.5} & \textbf{67.8} & \textbf{61.9} & 82.8 \\
SimCLR \cite{chen2020simple} &37.4&90.3&90.6&71.6&74.5&50.3&68.4&90.3&83.6&50.3&58.8&80.5\\
BYOL \cite{grill2020bootstrap}&57.2&94.2&91.3&\textbf{78.4}&75.5&60.6&75.3&\textbf{96.1}&90.4&66.7&62.2&82.5\\
\rowcolor{LightBlue}
SSL-HSIC (w/o target) & 50.6&92.3&91.5&75.9&75.3&57.9&73.6&95.0&88.2&59.3&61.0&81.4\\
\rowcolor{LightBlue}
SSL-HSIC (w/ target)&\textbf{57.8}&93.5&92.3&77.0&\textbf{76.2}&58.5&\textbf{75.6}&95.4&91.2&62.6&61.8&\textbf{83.3}\\
\midrule
\textit{Fine-tuned:}\\
\midrule
Supervised-IN \cite{chen2020simple} &75.8 & 93.3 & 97.5 & \textbf{86.4} & 74.6 & 86.0 & 88.3 & \textbf{97.6} & \textbf{92.1} & \textbf{92.1} & \textbf{94.3} & 85.0 \\
Random init \cite{chen2020simple} & \textbf{76.1} & 72.6 & 95.9 & 80.2 & 64.8 & 85.9 & 86.9 & 92.0 & 81.5 & 91.4 & 53.6 & 67.3 \\
SimCLR \cite{chen2020simple} & 75.9 & 92.1 & 97.7 & 85.9 & 73.2 & 88.1 & 88.2 & 97.0 & 89.2 & 91.3 & 63.5 & 84.1 \\
BYOL \cite{grill2020bootstrap} & 76.3 & \textbf{93.8} & \textbf{97.8} & 86.1 & \textbf{76.2} & 88.1 & \textbf{88.5} & 97.0 & 91.7 & 91.6 & 63.7 & \textbf{85.4} \\
\rowcolor{LightBlue}
SSL-HSIC (w/o target) & 73.1 & 91.5 & 97.4 & 85.3 & 75.3 & 87.1 & 87.5 & 96.4 & 90.6 & 91.6 & 62.2 & 84.1 \\
\rowcolor{LightBlue}
SSL-HSIC (w/ target) & 74.9 & \textbf{93.8} & \textbf{97.8} & 84.7 & 75.4 & \textbf{88.9} & 87.7 & 97.3 & 91.7 & 91.8 & 61.7 & 84.1 \\
\bottomrule
\end{tabular}
\label{tab:linear-classification}
\end{table}
\textbf{Transfer to other classification tasks}
To investigate the generality of the representation learned with SSL-HSIC, we evaluate the transfer performance for classification on 12 natural image datasets \cite{fei2004learning,Krizhevsky09learningmultiple,DBLP:journals/corr/CimpoiMKMV13,DBLP:journals/corr/MajiRKBV13,bossard2014food,nilsback2008automated,parkhi12a,Krause13collectinga,xiao2010sun,everingham2010pascal} using the same procedure as \cite{chen2020simple, grill2020bootstrap, DBLP:journals/corr/abs-1805-08974}. \Cref{tab:linear-classification} shows the top-1 accuracy of the linear evaluation and fine-tuning performance on the test sets. SSL-HSIC achieves state-of-the-art performance on 3 of the classification tasks and strong performance on the others in this benchmark, indicating that the learned representations are robust under transfer learning.
\textbf{Transfer to other vision tasks}
To test the ability to transfer to tasks other than classification, we fine-tune the network on semantic segmentation, depth estimation and object detection tasks. We use the Pascal VOC2012 dataset \cite{everingham2010pascal} for semantic segmentation, the NYU v2 dataset \cite{Silberman:ECCV12} for depth estimation and COCO \cite{DBLP:journals/corr/LinMBHPRDZ14} for object detection. Object detection outputs either bounding boxes or object segmentations (instance segmentation). Details of the evaluation setup are in \cref{app:experiment-eval}. \Cref{tab:finetune-segmentation} and \cref{tab:object-detection} show that SSL-HSIC achieves competitive performance on all three vision tasks.
\begin{table}[htbp]
\begin{minipage}[t]{0.65\linewidth}
\centering
\caption{Fine-tuning performance on semantic segmentation and depth estimation. Mean Intersection over Union (mIoU) is reported for semantic segmentation. Relative error (rel), root mean squared error (rms), and the percent of pixels (pct) where the error is below $1.25^n$ thresholds are reported for depth estimation.}
\label{tab:finetune-segmentation}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{}lcccccc@{}}
\toprule
& VOC2012 & \multicolumn{5}{c}{NYU v2}\\
\cmidrule(r){3-7}
Method & mIoU & pct.$<1.25$ &pct.$<1.25^2$ &pct.$<1.25^3$&rms&rel\\
\midrule
Supervised-IN&74.4&81.1&95.3&98.8&0.573&\textbf{0.127}\\
SimCLR&75.2&83.3&96.5&99.1&0.557&0.134\\
BYOL&\textbf{76.3}&\textbf{84.6}&96.7&99.1&0.541&0.129\\
\rowcolor{LightBlue}
SSL-HSIC(w/o target)&74.9 &84.1&96.7&\textbf{99.2}&\textbf{0.539}&0.130\\
\rowcolor{LightBlue}
SSL-HSIC(w/ target)&76.0&83.8&\textbf{96.8}&99.1&0.548&0.130\\
\bottomrule
\end{tabular}
}
\end{minipage}\hspace{\fill}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\caption{Fine-tuning performance on COCO object detection tasks. Precision, averaged over 10 IoU (Intersection over Union) thresholds, is reported for both bounding box and object segmentation.}
\label{tab:object-detection}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{}lcc@{}}
\toprule
Method & AP$^{bb}$ & AP$^{mk}$ \\
\midrule
Supervised&39.6&35.6\\
SimCLR&39.7&35.8\\
MoCo v2&40.1&36.3\\
BYOL&\textbf{41.6}&37.2\\
SwAV&\textbf{41.6}&\textbf{37.8}\\
\rowcolor{LightBlue}
SSL-HSIC(w/o target) &40.5&36.3\\
\rowcolor{LightBlue}
SSL-HSIC(w/ target) &41.3&36.8\\
\bottomrule
\end{tabular}
}
\end{minipage}
\end{table}
\section{Ablation Studies}
\label{sec:ablations}
We present ablation studies to gain more intuition on SSL-HSIC. Here, we use a ResNet-50 backbone trained for 100 epochs on ImageNet, and evaluate with the linear protocol unless otherwise specified.
\textbf{ResNet architectures}
In this ablation, we investigate the performance of SSL-HSIC with wider and deeper ResNet architectures. \Cref{fig:arch-width} and \cref{tab:larger-net} show our main results. The performance of SSL-HSIC improves with larger networks. We use the supervised baseline from \cite{grill2020bootstrap}, on which our training framework is based (\cite{chen2020simple} reports lower performance). The performance gap between SSL-HSIC and the supervised baseline diminishes with larger architectures. In addition, \cref{tab:larger-net-semi} presents the semi-supervised learning results with 1\%, 10\% and 100\% subsets of the ImageNet data.
\begin{table}[htbp]
\begin{minipage}[t]{.5\linewidth}
\centering
\scriptsize
\caption{Top-1 and top-5 accuracies for different ResNet architectures using linear evaluation protocol.}
\label{tab:larger-net}
\begin{tabular}{@{}l>{\columncolor{LightBlue}}c>{\columncolor{LightBlue}}ccccc@{}}
\toprule
&\multicolumn{2}{c}{SSL-HSIC}&\multicolumn{2}{c}{BYOL\cite{grill2020bootstrap}}&\multicolumn{2}{c}{Sup.\cite{grill2020bootstrap}}\\%& Sup.\cite{chen2020simple}\\
\cmidrule(l){2-3}
\cmidrule(l){4-5}
\cmidrule(l){6-7}
ResNet & Top1 & Top5 & Top1 & Top5 & Top1 & Top5\\%&Top1\\
\midrule
50 (1x)&74.8&92.2&74.3&91.6&76.4&92.9\\%&76.5\\
50 (2x)&77.9&94.0&77.4&93.6&79.9&95.0\\%&77.8\\
50 (4x)&79.1&94.5&78.6&94.2&80.7&95.3\\%&78.9\\
200 (2x)&79.6&94.8&79.6&94.9&80.1&95.2\\%&-\\
\bottomrule
\end{tabular}
\end{minipage}\hspace{\fill}%
\begin{minipage}[t]{0.45\linewidth}
\centering
\scriptsize
\caption{Top-1 and top-5 accuracies for different ResNet architectures using semi-supervised fine-tuning.}
\label{tab:larger-net-semi}
\begin{tabular}{@{}lcccccc@{}}
\toprule
&\multicolumn{3}{c}{Top1}&\multicolumn{3}{c}{Top5}\\
\cmidrule(l){2-4}
\cmidrule(l){5-7}
ResNet & 1\% & 10\% & 100\% & 1\% & 10\% & 100\%\\
\midrule
50 (1x)&52.1&67.9&77.2&77.7&88.6&93.6\\
50 (2x)&61.2&72.6&79.3&83.8&91.2&94.7\\
50 (4x)&67.0&75.4&79.7&87.4&92.5&94.8\\
200(2x)&69.0&76.3&80.5&88.3&92.9&95.2\\
\bottomrule
\end{tabular}
\end{minipage}
\end{table}
\textbf{Regularization term}
We compare the performance of InfoNCE with SSL-HSIC in \Cref{tab:ablation-reg}, since they can be seen as approximating the same $\mathrm{HSIC}(Z, Y)$ objective but with different forms of regularization. We reproduce the InfoNCE result in our codebase, using the same architecture and data augmentation as for SSL-HSIC. Trained for 100 epochs (without a target network), InfoNCE achieves 66.0\% top-1 and 86.9\% top-5 accuracies, which is better than the result reported in \cite{chen2020simple}. For comparison, SSL-HSIC reaches 66.7\% top-1 and 87.6\% top-5 accuracies. This suggests that the regularization employed by SSL-HSIC is more effective.
\textbf{Kernel type}
We investigate the effect of using different kernels on the latents $Z$. Training without a target network or random Fourier feature approximation, the top-1 accuracies for linear, Gaussian, and inverse multiquadric (IMQ) kernels are $65.27\%$, $66.67\%$ and $66.72\%$ respectively. %
Non-linear kernels indeed improve the performance; Gaussian and IMQ kernels reach very similar performance at 100 epochs. We choose the IMQ kernel for longer runs because its heavy tails can capture more signal when points are far apart.
\begin{table}[htbp]
\caption{Linear evaluation results when varying different hyperparameters.}
\begin{subtable}[t]{0.23\linewidth}
\centering
\scriptsize
\captionof{table}{Regularization}
\label{tab:ablation-reg}
\begin{tabular}{@{}lcc@{}}
\toprule
& Top-1 & Top-5\\ %
\midrule
SSL-HSIC&66.7&87.6\\
InfoNCE&66.0&86.9\\
\bottomrule
\end{tabular}
\end{subtable}\hspace{\fill}%
\begin{subtable}[t]{0.23\linewidth}
\centering
\scriptsize
\captionof{table}{\# Fourier features}
\label{tab:ablation-rff}
\begin{tabular}[t]{@{}lcc@{}}
\toprule
\# RFFs & Top-1(\%)\\
\midrule
64&66.0\\
128&66.2\\
256&66.2\\
512&66.4\\
1024&66.5\\
2048&66.5\\
No Approx.&66.7\\
\bottomrule
\end{tabular}
\end{subtable}\hspace{\fill}%
\begin{subtable}[t]{0.3\linewidth}
\centering
\scriptsize
\captionof{table}{Batch size}
\label{tab:ablation-batch}
\begin{tabular}[t]{@{}lcc@{}}
\toprule
& \multicolumn{2}{c}{Top-1(\%)}\\
\cmidrule(r){2-3}
Batch Size & SSL-HSIC & SimCLR\\
\midrule
256&63.7&57.5\\
512&65.6&60.7\\
1024&66.7&62.8\\
2048&67.1&64.0\\
4096&66.7&64.6\\
\bottomrule
\end{tabular}
\end{subtable}\hspace{\fill}%
\begin{subtable}[t]{0.24\linewidth}
\centering
\scriptsize
\captionof{table}{Projector/predictor size}
\label{tab:ablation-proj}
\begin{tabular}[t]{@{}lc@{}}
\toprule
Output Dim & Top-1(\%)\\
\midrule
64&65.4\\
128&66.0\\
256&66.4\\
512&66.6\\
1024&66.6\\
\bottomrule
\end{tabular}
\end{subtable}
\end{table}
\textbf{Number of RFF Features}
\Cref{tab:ablation-rff} shows the performance of SSL-HSIC with different numbers of Fourier features. The RFF approximation has a minor impact on the overall performance, as long as we resample the features; fixed sets of features performed poorly.
Our main results use 512 features, which gives substantial computational savings at a minor loss in accuracy.
\textbf{Batch size}
Similar to most self-supervised learning methods \cite{chen2020simple,grill2020bootstrap}, SSL-HSIC benefits from using a larger batch size during training. However, as shown in \Cref{tab:ablation-batch}, the drop in performance from using a smaller batch size is not as pronounced as in SimCLR \cite{chen2020simple}.
\textbf{Projector and predictor output size}
\Cref{tab:ablation-proj} shows the performance when using different output dimensions for the projector/predictor networks. The performance saturates at 512 dimensions.
\section{Introduction}
Learning general-purpose visual representations without human supervision is a long-standing goal of machine learning.
Specifically, we wish to find a feature extractor that captures the image semantics of a large unlabeled collection of images,
so that e.g.\ various image understanding tasks can be achieved with simple linear models.
One approach takes the latent representation of a likelihood-based generative model
\citep{hinton2006reducing,vincent2010stacked,higgins2016beta,DBLP:journals/corr/abs-2005-14165,coates2012learning,DBLP:journals/corr/abs-1711-00937,GCBD:ffjord};
such models, though, solve a harder problem than necessary since semantic features need not capture low-level details of the input.
Another option is to train a \emph{self-supervised} model for a ``pretext task,'' such as predicting the position of image patches, identifying rotations, or image inpainting \citep{DBLP:journals/corr/DoerschGE15,noroozi2016unsupervised,gidaris2018unsupervised,zhang2016colorful,larsson2016learning,pathak2016context}.
Designing good pretext tasks, however, is a subtle art, with little theoretical guidance available.
Recently, a class of models based on contrastive learning \cite{oord2018representation,bachman2019learning,henaff2020data,chen2020simple,DBLP:journals/corr/abs-1911-05722,DBLP:journals/corr/abs-2003-04297,caron2020unsupervised,DBLP:journals/corr/abs-2103-03230} has seen substantial success:
dataset images are cropped, rotated, color shifted, etc.\ into several \emph{views},
and features are then trained to pull together representations of the ``positive'' pairs of views of the same source image,
and push apart those of ``negative'' pairs (from different images).
These methods are either understood from an information theoretic perspective as estimating the mutual information between the ``positives'' \citep{oord2018representation}, or explained as aligning features subject to a uniformity constraint \cite{DBLP:journals/corr/abs-2005-10242}. Another line of research \cite{grill2020bootstrap,DBLP:journals/corr/abs-2011-10566} attempts to learn representations without the ``negative'' pairs, but requires either a target network or a stop-gradient operation to avoid collapsing.
\begin{figure}[t]
\centering
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[width=.98\linewidth]{figures/arch_width.pdf}
\vspace*{-1mm}
\captionof{figure}{Top-1 accuracies with linear evaluation for different ResNet architecture and methods: supervised (as in \cite{grill2020bootstrap}), SSL-HSIC (with a target network; ours), BYOL \cite{grill2020bootstrap}, SwAV \cite{caron2020unsupervised}, SimCLR \cite{chen2020simple}, MoCo v2 \cite{DBLP:journals/corr/abs-2003-04297} and Barlow Twins \cite{DBLP:journals/corr/abs-2103-03230}.}
\label{fig:arch-width}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[width=.98\linewidth]{figures/ssl-hsic.pdf}
%
\captionof{figure}{Statistical dependence view of contrastive learning: representations of transformed images should highly depend on image identity. Measuring dependence with HSIC, this pushes different images' representation distributions apart (black arrows) and pulls representations of the same image together (colored shapes).}
\label{fig:hsic-z-y}
\end{minipage}
\end{figure}
We examine the contrastive framework from a statistical dependence point of view: feature representations for a given transformed image should be highly dependent on the image identity (\cref{fig:hsic-z-y}).
To measure dependence, we turn to the
Hilbert-Schmidt Independence Criterion (HSIC) \citep{gretton2005measuring},
and propose a new loss for self-supervised learning which we call SSL-HSIC.
Our loss is inspired by HSIC Bottleneck \citep{ma2019hsic,pogodin2020kernelized}, an alternative to Information Bottleneck \citep{DBLP:journals/corr/TishbyZ15}, where we use the image identity as the label, but change the regularization term. %
Through the dependence maximization perspective, we present a unified view of various self-supervised losses.
Previous work \cite{tschannen2019mutual} has shown that the success of InfoNCE cannot be solely attributed to properties of mutual information,
in particular because mutual information (unlike kernel measures of dependence) has no notion of geometry in feature space:
for instance, \emph{all} invertible encoders achieve maximal mutual information, but they can output dramatically different representations with very different downstream performance \citep{tschannen2019mutual}.
Variational bounds on mutual information
do impart notions of locality that allow them to succeed in practice,
departing from the mutual information quantity that they try to estimate.
We prove that InfoNCE, a popular such bound, in fact approximates SSL-HSIC with a variance-based regularization.
Thus, InfoNCE can be thought of as working because it implicitly estimates a kernel-based notion of dependence.
We additionally show SSL-HSIC is related to metric learning, where the features learn to align to the structure induced by the self-supervised labels. This perspective is closely related to the objective of BYOL \citep{grill2020bootstrap}, and can explain properties such as alignment and uniformity \cite{DBLP:journals/corr/abs-2005-10242} observed in contrastive learning.
Our perspective brings additional advantages, in computation and in simplicity of the algorithm, compared with existing approaches.
Unlike the indirect variational bounds on mutual information \cite{oord2018representation,DBLP:journals/corr/abs-1801-04062,DBLP:journals/corr/abs-1905-06922}, SSL-HSIC can be directly estimated from mini-batches of data.
Unlike ``negative-free'' methods, the SSL-HSIC loss itself penalizes trivial solutions,
so techniques such as target networks are not needed for reasonable outcomes.
Using a target network does improve the performance of our method, however,
suggesting target networks have other advantages that are not yet well understood.
Finally, we employ random Fourier features \citep{rahimi2007random} in our implementation, resulting in cost
linear in batch size.
Our main contributions are as follows:
\begin{itemize}[leftmargin=*,noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt]
\item We introduce SSL-HSIC, a principled self-supervised loss using kernel dependence maximization.
\item We present a unified view of contrastive learning through dependence maximization, by establishing relationships between SSL-HSIC, InfoNCE, and metric learning.
\item %
%
Our method achieves top-1 accuracy of 74.8\% and top-5 accuracy of 92.2\% with linear evaluations (see \cref{fig:arch-width} for a comparison with other methods), top-1 accuracy of 80.2\% and Top-5 accuracy of 94.7\% with fine-tuning, and competitive performance on a diverse set of downstream tasks.
\end{itemize}
\section{Self-supervised learning with Kernel Dependence Maximization}
\label{sec:method}
Our method builds on the self-supervised learning framework used by most of the recent self-supervised learning approaches \cite{bachman2019learning,chen2020simple,DBLP:journals/corr/abs-1911-05722,grill2020bootstrap,caron2020unsupervised,DBLP:journals/corr/abs-2103-03230,DBLP:journals/corr/abs-2011-10566}. For a dataset with $N$ points $x_i$, each point goes through a random transformation
$t^{p}(x_i)$ (e.g. random crop), and then forms a feature representation $z^{p}_i=f_\theta(t^p(x_i))$ with an encoder network $f_\theta$.
We associate each image $x_i$ with its identity $y_i$, which works as a one-hot encoded label: $y_i\in \mathbb{R}^{N}$ and $(y_i)_d=1$ iff $d=i$ (and zero otherwise). To match the transformations and image identities, we maximize the dependence between $z_i$ and $y_i$ such that $z_i$ is predictive of its original image.
To build representations suitable for downstream tasks, we also need to penalize high-variance representations. These ideas come together in our HSIC-based objective for self-supervised learning, which we term SSL-HSIC:
\begin{equation}
\mathcal{L}_{\mathrm{SSL-HSIC}}(\theta) = -\mathrm{HSIC}\brackets{Z, Y} + \gamma\,\sqrt{ \mathrm{HSIC}\brackets{Z, Z}}\,.
\label{eq:ssl_hsic}
\end{equation}
Unlike contrastive losses, which make the $z_{i}^p$ from the same $x_i$ closer and those from different $x_j$ more distant, we propose an alternative way to match different transformations of the same image with its \textit{abstract identity} (e.g.\ position in the dataset).
Our objective also resembles the HSIC bottleneck for supervised learning \cite{ma2019hsic} (in particular, the version of \cite{pogodin2020kernelized}), but ours uses a square root for $\mathrm{HSIC}(Z,Z)$.
The square root makes the two terms on the same scale: $\mathrm{HSIC}(Z, Y)$ is effectively a dot product, and $\sqrt{\mathrm{HSIC}(Z, Z)}$ a norm, so that e.g.\ scaling the kernel by a constant does not change the relative amount of regularization;\footnote{Other prior work on maximizing HSIC \citep{BG09:taxonomies,BZG:better-taxonomies} used $\mathrm{HSIC}(Z, Y) / \sqrt{\mathrm{HSIC}(Z, Z) \, \mathrm{HSIC}(Y, Y)}$, or equivalently \citep{SSGF:dist-hsic} the distance correlation \citep{SRB:distance-correlation}; the kernel-target alignment \citep{CSEK:kernel-target,cortes2012algorithms} is also closely related. Here, the overall scale of either kernel does not change the objective. Our $\mathrm{HSIC}(Y, Y)$ is constant (hence absorbed in $\gamma$), and we found an additive penalty to be more stable in optimization than dividing the estimators.}
this also gives better performance in practice.
Due to the one-hot encoded labels, we can re-write $\mathrm{HSIC}\brackets{Z, Y}$ as (see \cref{app:estimator})
\begin{align}
\mathrm{HSIC}(Z,Y) &\propto \mathbb{E}_{z_1, z_2 \sim pos}\left[k(z_1, z_2)\right] - \mathbb{E}_{z_1} \mathbb{E}_{z_2}\left[k(z_1, z_2)\right]\,,
\label{eq:ssl-hsic-kernels}
\end{align}
where the first expectation is over the distribution of ``positive'' pairs (those from the same source image), and the second is over all pairs of images, including their transformations. The first term in \eqref{eq:ssl-hsic-kernels} pushes representations belonging to the same image identity together, while the second term keeps the mean representations of different identities apart (as in \cref{fig:hsic-z-y}). The scaling of $\mathrm{HSIC}(Z, Y)$ depends on the choice of the kernel over $Y$, and is irrelevant to the optimization.
This form also reveals three key theoretical results.
\Cref{subsec:infonce_connection} shows that InfoNCE is better understood as an HSIC-based loss than a mutual information between views.
\Cref{subsec:clustering} reveals that the dependence maximization in $\mathrm{HSIC}(Z, Y)$ can also be viewed as a form of distance metric learning, where the cluster structure is defined by the labels.
Finally, $\mathrm{HSIC}(Z,Y)$ is proportional to the average kernel-based distance between the distribution of views for each source image (the maximum mean discrepancy, MMD; see \cref{app:mmd}).
\subsection{Connection to InfoNCE}
\label{subsec:infonce_connection}
In this section we show the connection between InfoNCE and our loss; see \cref{app:section:theory} for full details. We first write InfoNCE in its infinite-sample-size limit (see \citep{DBLP:journals/corr/abs-2005-10242} for a derivation) as
\begin{equation}
\mathcal{L}_{\mathrm{InfoNCE}}(\theta)=- \mathbb{E}_{z_1,z_2\sim\mathrm{pos}}\left[k(z_1, z_{2})\right] + \mathbb{E}_{z_1}\log\mathbb{E}_{z_2} \left[ \exp \brackets{k(z_{1}, z_{2})} \right] \,,
\label{eq:infoNCE_mean}
\end{equation}
where the last two expectations are taken over all points, and the first is over the distribution of positive pairs. The kernel $k(z_{1}, z_{2})$ was originally formulated as a scoring function in the form of a dot product \cite{oord2018representation}, and later as a scaled cosine similarity \cite{chen2020simple}. Both functions are valid kernels.
Now assume that $k(z_1, z_2)$ doesn't deviate much from $\mathbb{E}_{z_2} \left[k(z_1, z_2)\right]$, Taylor-expand the exponent in \eqref{eq:infoNCE_mean} around $\mathbb{E}_{z_2} \left[k(z_1, z_2)\right]$, then expand $\log(1+\mathbb{E}_{z_2}(\dots))\approx \mathbb{E}_{z_2}(\dots)$.
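Concretely, writing $\mu(z_{1}):=\mathbb{E}_{z_{2}}\left[k(z_{1}, z_{2})\right]$, the second term of \eqref{eq:infoNCE_mean} becomes
\begin{align*}
\log\mathbb{E}_{z_2} \left[ e^{k(z_{1}, z_{2})} \right]
&= \mu(z_1) + \log\mathbb{E}_{z_2} \left[ e^{k(z_{1}, z_{2}) - \mu(z_1)} \right]\\
&\approx \mu(z_1) + \log\left(1 + \tfrac{1}{2}\mathbb{V}\mathrm{ar}_{z_2}\left[k(z_1, z_2)\right]\right)
\approx \mu(z_1) + \tfrac{1}{2}\mathbb{V}\mathrm{ar}_{z_2}\left[k(z_1, z_2)\right],
\end{align*}
since the first-order term vanishes in expectation.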
We obtain an $\mathrm{HSIC}(Z, Y)$-based objective:
\begin{equation}
\mathcal{L}_{\mathrm{InfoNCE}}(\theta) \approx \underbrace{- \mathbb{E}_{z_1,z_2\sim \mathrm{pos}}\left[k(z_1, z_2)\right] + \mathbb{E}_{z_1}\mathbb{E}_{z_2} \left[k(z_1, z_2) \right]}_{\propto-\mathrm{HSIC}(Z, Y)} +\frac{1}{2} \underbrace{\mathbb{E}_{z_1} \left[\mathbb{V}\mathrm{ar}_{z_2}\left[k(z_1, z_2)\right] \right]}_{\textrm{variance penalty}}\,.
\label{eq:infonce_taylor}
\end{equation}
Since the scaling of $\mathrm{HSIC}(Z,Y)$ is irrelevant to the optimization, we choose the scaling of the label kernel so that $\propto$ can be replaced with $=$. In the small variance regime, we can show that for the right $\gamma$,
\begin{equation}
-\mathrm{HSIC}\brackets{Z, Y} + \gamma\,\mathrm{HSIC}\brackets{Z, Z} \leq \mathcal{L}_{\mathrm{InfoNCE}}(\theta) + o(\mathrm{variance})\,.
\end{equation}
For $\mathrm{HSIC}(Z, Z) \leq 1$, we also have that
\begin{equation}
-\mathrm{HSIC}\brackets{Z, Y} + \gamma\,\mathrm{HSIC}\brackets{Z, Z} \leq \mathcal{L}_{\mathrm{SSL-HSIC}}(\theta)
\end{equation}
due to the square root. InfoNCE and SSL-HSIC in general don't quite bound each other, due to the discrepancy in the variance terms, but in practice the difference is small. %
Why should we prefer the HSIC interpretation of InfoNCE?
Initially, InfoNCE was suggested as a variational approximation to the mutual information between two views \cite{oord2018representation}.
It has been observed, however, that using tighter estimators of mutual information leads to worse performance \cite{tschannen2019mutual}.
It is also simple to construct examples where InfoNCE finds different representations while the underlying MI remains constant \cite{tschannen2019mutual}.
Alternative theories suggest that InfoNCE balances alignment of ``positive'' examples and uniformity of the overall feature representation \cite{DBLP:journals/corr/abs-2005-10242}, or that (under strong assumptions) it can identify the latent structure in a hypothesized data-generating process, akin to nonlinear ICA \citep{zimmermann2021cl}. Our view is consistent with these theories, but doesn't put restrictive assumptions on the input data or learned representations. In \cref{sec:ablations} (summarized in \cref{tab:ablation-reg}), we show that our interpretation gives rise to a better objective in practice. %
\subsection{Connection to metric learning}
\label{subsec:clustering}
Our SSL-HSIC objective is closely related to kernel alignment \cite{CSEK:kernel-target}, especially centered kernel alignment \cite{cortes2012algorithms}. As a kernel method for distance metric learning, kernel alignment measures the agreement between a kernel function and a target function. Intuitively, the self-supervised labels $Y$ imply a cluster structure, and $\mathrm{HSIC}(Z,Y)$ estimates the degree of agreement between the learned features and this cluster structure in the kernel space.
This relationship with clustering is also established in \cite{cluhsic,BG09:taxonomies,BZG:better-taxonomies}, where labels are learned rather than features. The clustering perspective is more evident when we assume linear kernels over both $Z$ and $Y$, and $Z$ is unit length and centered:\footnote{Centered $Z$ is a valid assumption for BYOL, as the target network keeps representations of views with different image identities away from each other. For high-dimensional unit vectors, this can easily lead to orthogonal representations. We also observe centered representations empirically: see \cref{app:section:features}.}
\begin{multline}
-\mathrm{HSIC}(Z,Y)
\propto - \frac{1}{M} Tr(Y^\top Z^\top Z Y) + Tr(Z^\top Z)-NM \\ %
= - \frac{1}{M} \sum_{i=1}^N \left\lVert \sum_{p=1}^M z_{i}^{p} \right\rVert_2^2
+ \sum_{i=1}^N\sum_{p=1}^M \left\lVert z_{i}^p \right\rVert_2^2 - NM
= \sum_{i=1}^N \sum_{p=1}^M \left\lVert z_i^p - \bar{z_i} \right\rVert_2^2 -NM,
\label{eq:clustering}
\end{multline}
with $M$ the number of augmentations per image and $\bar{z_i}=\sum_p z_i^p / M$ the average feature vector of the augmented views of $x_i$.
We emphasize, though, that \eqref{eq:clustering} assumes centered, normalized data with linear kernels;
the right-hand side of \eqref{eq:clustering} could be optimized by setting all $z_i^p$ to the same vector for each $i$, but this does not actually optimize $\mathrm{HSIC}(Z, Y)$.
\cref{eq:clustering} shows that we recover the spectral formulation \cite{zha2001spectral} and sum-of-squares loss used in the k-means clustering algorithm from the kernel objective. Moreover, the self-supervised label imposes that the features from transformations of the same image are gathered in the same cluster.
\cref{eq:clustering} also allows us to connect SSL-HSIC to non-contrastive objectives such as BYOL, although the connection is subtle because of its use of predictor and target networks. If each image is augmented with two views, we can compute \eqref{eq:clustering} using $\bar{z_i}\approx(z_i^1+z_i^2) / 2$, so the clustering loss becomes $\propto \sum_i ||z_i^1 - z_i^2||_2^2$. This is exactly the BYOL objective, except that $z_i^2$ in BYOL comes from a target network.
The assumption of centered and normalized features for \eqref{eq:clustering} is important in the case of BYOL: without it, BYOL can find trivial solutions where all the features are collapsed to the same feature vector far away from the origin. The target network is used to prevent the collapse. SSL-HSIC, on the other hand, rules out such a solution by building the centering into the loss function, and therefore can be trained successfully without a target network or stop gradient operation.
\subsection{Estimator of SSL-HSIC}
\label{sec:estimator}
To use SSL-HSIC, we need to correctly and efficiently estimate \eqref{eq:ssl_hsic}. Both points are non-trivial: the self-supervised framework implies non-i.i.d.\ batches (due to positive examples), while the estimator in \eqref{eq:biased-hsic} assumes i.i.d.\ data; moreover, the time to compute \eqref{eq:biased-hsic} is quadratic in the batch size.
First, for $\mathrm{HSIC}(Z, Z)$ we use the biased estimator in \eqref{eq:biased-hsic}. Although the i.i.d.\ estimator \eqref{eq:biased-hsic} results in an $O(1/B)$ bias for $B$ original images in the batch (see \cref{app:estimator}), the batch size $B$ is large in our case and the bias is therefore negligible. For $\mathrm{HSIC}(Z, Y)$ the situation is more delicate: the i.i.d.\ estimator needs re-scaling, and its bias depends on the number of positive examples $M$, which is typically very small (usually 2). We propose the following estimator:
\begin{align}
\widehat{\mathrm{HSIC}}(Z, Y)\,&= \frac{\Delta l}{N}\brackets{\frac{1}{BM(M-1)}\sum_{ipl} k(z^{p}_{i}, z^{l}_{i}) - \frac{1}{B^2M^2}\sum_{ijpl} k(z^{p}_{i}, z^{l}_{j}) - \frac{1}{M-1}}\,,
\end{align}
where $i$ and $j$ index original images, and $p$ and $l$ their random transformations; $k$ is the kernel used for latent $Z$, $l$ is the kernel used for the labels, and $\Delta l=l(i, i) - l(i, j)$ ($l$ for same labels minus $l$ for different labels).
Note that due to the one-hot structure of self-supervised labels $Y$, the standard (i.i.d.-based) estimator would miss the $1/N$ scaling and the $M-1$ correction (the latter is important in practice, as we usually have $M=2$). See \cref{app:estimator} for the derivations.
For convenience, we assume $\Delta l=N$ (any scaling of $l$ can be subsumed by $\gamma$), and optimize
\begin{equation}
\widehat{\mathcal{L}}_{\mathrm{SSL-HSIC}}(\theta) = - \widehat{\mathrm{HSIC}}(Z, Y) +\gamma\,\sqrt{ \widehat{\mathrm{HSIC}}(Z, Z)}\,.
\label{eq:ssl_hsic_batch}
\end{equation}
The computational complexity of the proposed estimators is $O(B^2M^2)$ for each mini-batch of size $B$ with $M$ augmentations.
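For concreteness, a minimal NumPy sketch of this quadratic-cost loss for $M=2$ views is given below, assuming for illustration a Gaussian kernel on $Z$ and $\Delta l=N$ (our experiments instead use an IMQ kernel with the random Fourier features discussed next):
\begin{verbatim}
import numpy as np

def gaussian_kernel(Z, sigma=1.0):
    # Pairwise Gaussian kernel matrix for the rows of Z.
    sq = np.sum(Z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def ssl_hsic_loss(Z1, Z2, gamma=3.0):
    # Z1, Z2: (B, D) features of two views of the same B images (M = 2).
    B = Z1.shape[0]
    K = gaussian_kernel(np.concatenate([Z1, Z2], axis=0))  # (2B, 2B)
    # hat{HSIC}(Z, Y): mean kernel over positive pairs, minus the mean
    # over all pairs, minus 1/(M-1) = 1.
    pos = (np.trace(K[:B, B:]) + np.trace(K[B:, :B])) / (2.0 * B)
    hsic_zy = pos - K.mean() - 1.0
    # Biased estimator of HSIC(Z, Z): tr(KHKH)/(n-1)^2, H the centering matrix.
    n = 2 * B
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    hsic_zz = np.trace(Kc @ Kc) / (n - 1) ** 2
    return -hsic_zy + gamma * np.sqrt(hsic_zz)
\end{verbatim}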
We can reduce the complexity to $O(BM)$ by using random Fourier features (RFF) \cite{rahimi2007random}, which approximate the kernel $k(z_1, z_2)$ with a carefully chosen random $D$-dimensional approximation $R(z_1)^\top R(z_2)$ for $R(z): \mathbb{R}^{D_z}\rightarrow \mathbb{R}^{D}$, such that
$
k(z_1, z_2) = \mathbb{E} \left[R (z_1)^\top R(z_2)\right]
.$
Fourier frequencies are sampled independently for the two kernels on $Z$ in $\mathrm{HSIC}(Z, Z)$ at each training step. We leave the details on how to construct $R(z)$ for the kernels we use to Appendix \ref{app:rff}.
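For illustration, a minimal sketch of such a feature map in the Gaussian-kernel case follows (the IMQ kernel is completely monotone in the squared distance, so its random features can be obtained analogously by first sampling a scale from its Gaussian mixture representation; see \cref{app:rff}):
\begin{verbatim}
import numpy as np

def rff(Z, D=512, sigma=1.0, rng=None):
    # Random features with E[R(z1)^T R(z2)] = exp(-||z1 - z2||^2 / (2 sigma^2)).
    rng = np.random.default_rng() if rng is None else rng
    W = rng.normal(scale=1.0 / sigma, size=(Z.shape[1], D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(Z @ W + b)
\end{verbatim}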
\section{Introduction}
We consider a time-homogeneous, real-valued, non-explosive, \textcolor[rgb]{0,0,0}{c\`{a}dl\`{a}g}
Markov process $X=(X_{t})_{t\geq0}$ with state space $\mathbb{R}$
\footnote{The state space can \textcolor[rgb]{0.0,0.0,0.0}{sometimes be relaxed} to
an open interval of $%
\mathbb{R}
$
\textcolor[rgb]{0.0,0.0,0.0}{(e.g., (0,+$\infty$) for geometric Brownian motions)}.
It is also possible to treat some general state space with complex boundary
behaviors. However, for simplicity, we choose $%
\mathbb{R}
$ as the state space of $X$ in this paper.} defined on a filtered probability
space $(\Omega,\mathcal{F},\boldsymbol{F}=(\mathcal{F}_{t})_{t\geq
0},\mathbb{P})$
\textcolor[rgb]{0.0,0.0,0.0}{with a complete and right-continuous filtration.}
\textcolor[rgb]{0.0,0.0,0.0}{Throughout, we silently assume that $X$ satisfies the strong Markov property (see Section III.8,9 of Rogers and Williams \cite{RW00}), and exclude Markov processes with monotone paths.}
The first passage time of $X$ above (below) a level $x\in%
\mathbb{R}
$ is denoted by
\[
\textcolor[rgb]{0.0,0.0,0.0}{T_{x}^{+(-)}=\inf\left\{ t\geq0:X_{t}>(<)x\right\},}
\]
with the common convention that $\inf\emptyset=\infty$.
The drawdown process of $X$ (also known as the reflected process of $X$ at its
supremum) is denoted by $Y=(Y_{t})_{t\geq0}$ with $Y_{t}=M_{t}-X_{t},$ where
$M_{t}=\sup_{0\leq s\leq t}X_{s}$. Let $\tau_{a}=\inf\{t>0:Y_{t}>a\}$ be the
first time the magnitude of drawdowns exceeds a given threshold $a>0$. Note
that $\left( \textcolor[rgb]{0.0,0.0,0.0}{\sup_{0\leq s\leq t}}Y_{s}%
>a\right) =\left( \tau_{a}\leq t\right) $ $\mathbb{P}$-a.s. Hence, the
distributional study of the maximum drawdown of $X$ is equivalent to the study
of the stopping time $\tau_{a}$. Similarly, the drawup process of $X$ is
defined as $\hat{Y}_{t}=X_{t}-m_{t}$ for $t\geq0,$ where $m_{t}=\inf_{0\leq
s\leq t}X_{s}$. However, given that the drawup of $X$ can be investigated via
the drawdown of $-X$, we exclusively focus on the drawdown process $Y$ in this paper.
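For instance, if $X$ is a standard Brownian motion, then by L\'{e}vy's theorem the drawdown process $Y$ has the same law as $|X|$, so that $\tau_{a}$ is equal in law to the first passage time of $|X|$ to the level $a$ and
\[
\mathbb{E}\left[ e^{-q\tau_{a}}\right] =\frac{1}{\cosh\left( a\sqrt
{2q}\right) },\qquad q\geq0;
\]
this classical identity can also be recovered from the joint Laplace transform of Taylor \cite{T75}.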
Applications of drawdowns can be found in many areas. For instance, drawdowns
are widely used by mutual funds and commodity trading advisers to quantify
downside risks. Interested readers are referred to Schuhmacher and Eling
\cite{SE11} for a review of drawdown-based performance measures. An extensive
body of literature exists on the assessment and mitigation of drawdown risks
(e.g., Grossman and Zhou \cite{GZ93}, Carr et al. \cite{CZH11}, Cherny and
Obloj \cite{CO13}, and Zhang et al. \cite{ZLH13}). Drawdowns are also closely
related to many problems in mathematical finance, actuarial science and
statistics such as the pricing of Russian options (e.g., Shepp and Shiryaev
\cite{SS93}, Asmussen et al. \cite{AAP04} and Avram et al. \cite{AKP04}),
\textcolor[rgb]{0,0,0}{De Finetti's} dividend problem\ (e.g., Avram et al.
\cite{APP07} and Loeffen \cite{L08}), loss-carry-forward taxation models
(e.g., Kyprianou and Zhou \cite{KZ09} and Li et al. \cite{LTZ13}), and
change-point detection methods (e.g., Poor and Hadjiliadis \cite{PH09}). \textcolor[rgb]{0,0,0}{More specifically, in De Finetti's dividend problem under a fixed dividend barrier $a>0$, the underlying surplus process with dividend payments is a process obtained from reflecting $X$ at a fixed barrier $a$ (the reflected process' dynamics may be different than the drawdown process $Y$ when the underlying process $X$ is not spatial homogeneous). However, the distributional study of ruin quantities in De Finetti's dividend problem can be transformed to the study of drawdown quantities for the underlying surplus process; see Kyprianou and Palmowski \cite{KP07} for a more detailed discussion. Similarly, ruin problems in loss-carry-forward taxation models can also be transformed to a generalized drawdown problem for classical models without taxation, where the generalized drawdown process is defined in the form of $Y_t=\gamma(M_t)-X_t$ for some measurable function $\gamma(\cdot)$.}
The distributional study of drawdown quantities is not only of theoretical
interest, but also plays a fundamental role in the aforementioned
applications. Early distributional studies on drawdowns date back to Taylor
\cite{T75} on the joint Laplace transform of $\tau_{a}$ and $M_{\tau_{a}}$ for
Brownian motions. This result was later generalized by Lehoczky \cite{L77} to
time-homogeneous diffusion processes. Douady et al. \cite{DSY00} and Magdon et
al. \cite{MAPA04} derived infinite series expansions for the distribution of
$\tau_{a}$ for a standard Brownian motion and a drifted Brownian motion,
respectively. For spectrally negative L\'{e}vy processes, Mijatovic and
Pistorius \cite{MP12} obtained a sextuple formula for the joint Laplace
transform of $\tau_{a}$ and the last reset time of the maximum prior to
$\tau_{a}$, together with the joint distribution of the running maximum, the
running minimum, and the overshoot of $Y$ at $\tau_{a}$. For some studies on
the joint law of drawdown and drawup of spectrally negative L\'{e}vy processes
or diffusion processes, please refer to Pistorius \cite{P04}, Pospisil et al.
\cite{PVH09}, Zhang and Hadjiliadis \cite{ZH10}, and Zhang \cite{Z15}.
As mentioned above, L\'{e}vy processes\footnote{Most often, one-sided L\'{e}vy
processes (an exception to this is Baurdoux \cite{B09} for general L\'{e}vy
processes)} and time-homogeneous diffusion processes are two main classes of
Markov processes for which various drawdown problems have been extensively
studied. The treatment of these two classes of Markov processes has typically
been considered distinctly in the literature. For L\'{e}vy processes,
It\^{o}'s excursion theory is a powerful approach to handle drawdown problems
(e.g., Avram et al. \cite{AKP04}, Pistorius \cite{P04}, and Mijatovic and
Pistorius \cite{MP12}). However, the excursion-theoretic approach is somewhat
specific to the underlying model, and additional care is required when a more
general class of Markov processes is considered. On the other hand, for
time-homogeneous diffusion processes, Lehoczky \cite{L77} introduced an
ingenious approach which has recently been generalized by many researchers
(e.g., Zhou \cite{Z07}, Li et al. \cite{LTZ13}, and Zhang \cite{Z15}). Here
again, Lehoczky's approach relies on the continuity of the sample path of the
underlying model, and hence is not applicable for processes with upward jumps.
Also, other general methodologies (such as the martingale approach in, e.g.,
Asmussen \cite{AAP04} and the occupation density approach in, e.g., Ivanovs
and Palmowski \cite{IP12}) are well documented in the literature but they
strongly depend on the specific structure of the underlying process. To the
best of our knowledge, no unified treatment of drawdowns (drawups) for general
Markov processes has been proposed in the literature.
In this paper, we propose a general and unified approach to study the joint
law of $(\tau_{a},M_{\tau_{a}},Y_{\tau_{a}})$ for time-homogeneous Markov
processes with possibly two-sided jumps. Under mild regularity conditions, the
joint law is expressed as the solution to an integral equation which involves
two-sided exit quantities of the underlying process $X$. The uniqueness of the
integral equation for the joint law is also investigated. In particular, the
joint law possesses explicit forms when $X$ has only one-sided jumps or is a
L\'{e}vy process (possibly with two-sided jumps). In general, our main result
reduces the drawdown problem to fundamental two-sided exit quantities.
The main idea of our proposed approach is briefly summarized below. By
analyzing the evolution of sample paths over a short time period following
time $0$ and using renewal arguments, we first establish tight upper and lower
bounds for the joint law of $(\tau_{a},M_{\tau_{a}},Y_{\tau_{a}})$ in terms of
the two-sided exit quantities. Then, under mild regularity conditions, we use
a version of Fatou's lemma for varying measures to show that the upper and lower bounds
converge when the length of the time interval approaches $0$. This leads to an
integro-differential equation satisfied by the desired joint law. Finally, we
reduce the integro-differential equation to an integral equation. When $X$ is
a spectrally negative Markov process or a general L\'{e}vy process, the
integral equation can be solved and the joint law of $(\tau_{a},M_{\tau_{a}%
},Y_{\tau_{a}})$ is hence explicitly expressed in terms of two-sided exit quantities.
The rest of the paper is organized as follows. In Section 2, we introduce some
fundamental two-sided exit quantities and present several preliminary results.
In Section 3, we derive the joint law of $(\tau_{a},Y_{\tau_{a}},M_{\tau_{a}%
})$ for general time-homogeneous Markov processes. Several Markov processes
for which the proposed regularity conditions are met are further discussed.
\textcolor[rgb]{0,0,0}{Some numerical examples are investigated in more detail in Section 4}.
Some technical proofs are postponed to the Appendix.
\section{Preliminary}
For ease of notation, we adopt the following conventions throughout the paper.
We denote by $\mathbb{P}_{x}$ the law of $X$ given $X_{0}=x\in%
\mathbb{R}
$ and write $\mathbb{P}\equiv\mathbb{P}_{0}$ for brevity. We write $u\wedge
v=\min\{u,v\}$, $%
\mathbb{R}
_{+}=[0,\infty)$, and $\int_{x}^{y}\cdot\mathrm{d}z$ for an integral on the
open interval $z\in(x,y)$.
For $q,s\geq0$, $u\leq x\leq v$ and $z>0$, we introduce the following
two-sided exit quantities of $X$:
\begin{align*}
B_{1}^{(q)}(x;u,v) & :=\mathbb{E}_{x}\left[ e^{-qT_{v}^{+}}1_{\left\{T_{v}^{+}<\infty,
T_{v}^{+}<T_{u}^{-},X_{T_{v}^{+}}=v\right\} }\right] ,\\
B_{2}^{(q)}(x,\mathrm{d}z;u,v) & :=\mathbb{E}_{x}\left[ e^{-qT_{v}^{+}%
}1_{\left\{T_{v}^{+}<\infty, T_{v}^{+}<T_{u}^{-},X_{T_{v}^{+}}-v\in\mathrm{d}z\right\}
}\right] ,\\
C^{(q,s)}(x;u,v) & :=\mathbb{E}_{x}\left[ e^{-qT_{u}^{-}-s(u-X_{T_{u}^{-}%
})}1_{\left\{ T_{u}^{-}<\infty, T_{u}^{-}<T_{v}^{+}\right\} }\right] .
\end{align*}
We also define the joint Laplace transform
\begin{equation}
B^{(q,s)}(x;u,v):=\mathbb{E}_{x}\left[ e^{-qT_{v}^{+}-s(X_{T_{v}^{+}}%
-v)}1_{\left\{ T_{v}^{+}<\infty, T_{v}^{+}<T_{u}^{-}\right\} }\right] =B_{1}^{(q)}%
(x;u,v)+B_{2}^{(q,s)}(x;u,v), \label{BBB}%
\end{equation}
where $B_{2}^{(q,s)}(x;u,v):=\int_{0}^{\infty}e^{-sz}B_{2}^{(q)}%
(x,\mathrm{d}z;u,v)$.
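For instance, if $X$ is a standard Brownian motion, then by path continuity $B_{2}^{(q)}(x,\mathrm{d}z;u,v)\equiv0$ and $X_{T_{u}^{-}}=u$ on $\{T_{u}^{-}<\infty\}$, and the classical two-sided exit identities give, for $q>0$,
\[
B_{1}^{(q)}(x;u,v)=\frac{\sinh\left( \sqrt{2q}(x-u)\right) }{\sinh\left(
\sqrt{2q}(v-u)\right) },\qquad C^{(q,s)}(x;u,v)=\frac{\sinh\left( \sqrt
{2q}(v-x)\right) }{\sinh\left( \sqrt{2q}(v-u)\right) },
\]
so that $C^{(q,s)}$ does not depend on $s$ in this case.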
The following pathwise inequalities are central to the construction of tight
bounds for the joint law of the triplet $(\tau_{a},M_{\tau_{a}},Y_{\tau_{a}})$.
\begin{proposition}
\label{prop path}For $q,s\geq0$, $x\in\mathbb{R}$ and $\varepsilon\in(0,a)$,
we have $\mathbb{P}_{x}$-a.s.
\begin{equation}
1_{\{T_{x+\varepsilon}^{+}<\infty, T_{x+\varepsilon}^{+}<T_{x+\varepsilon-a}^{-}\}}\leq1_{\{T_{x+\varepsilon}^{+}<\infty, T_{x+\varepsilon
}^{+}<\tau_{a}\}}\leq1_{\{T_{x+\varepsilon}^{+}<\infty, T_{x+\varepsilon}^{+}<T_{x-a}^{-}\}}, \label{eq.up}%
\end{equation}
and
\begin{align}
e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)}1_{\left\{\tau_{a}<\infty, \tau_{a}<T_{x+\varepsilon}%
^{+}\right\} } & \geq e^{-qT_{x-a}^{-}-s(x-a-X_{T_{x-a}^{-}})-s\varepsilon
}1_{\{T_{x-a}^{-}<\infty, T_{x-a}^{-}<T_{x+\varepsilon}^{+}\}},\label{eq.down1}\\
e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)}1_{\left\{ \tau_{a}<\infty, \tau_{a}<T_{x+\varepsilon}%
^{+}\right\} } & \leq e^{-qT_{x+\varepsilon-a}^{-}%
-s(x-a-X_{T_{x+\varepsilon-a}^{-}})}1_{\{T_{x+\varepsilon-a}^{-}<\infty, T_{x+\varepsilon-a}^{-}%
<T_{x+\varepsilon}^{+}\}}. \label{eq.down2}%
\end{align}
\end{proposition}
\begin{proof}
By analyzing the sample paths of $X$, it is easy to see that $\mathbb{P}_{x}\{\tau_{a}\leq T_{x-a}^{-}\}=1$ (indeed, $M_{t}\geq x$ while $X_{t}<x-a$ at times arbitrarily close to $T_{x-a}^{-}$ from the right, so $Y$ exceeds $a$ by time $T_{x-a}^{-}$), so
\begin{equation}
(T_{x+\varepsilon}^{+}<\infty, T_{x+\varepsilon}^{+}<\tau_{a})=(T_{x+\varepsilon}^{+}<\infty, T_{x+\varepsilon
}^{+}<\tau_{a}\le T_{x-a}^{-})\subset(T_{x+\varepsilon}^{+}<\infty, T_{x+\varepsilon}^{+}<T_{x-a}^{-})\quad\mathbb{P}_{x}\text{-a.s.}\nonumber
\end{equation}
and similarly, $\mathbb{P}_x$-a.s.%
\begin{equation}
(T_{x+\varepsilon}^{+}<\infty, T_{x+\varepsilon}^{+}<T_{x+\varepsilon-a}^-)=(T_{x+\varepsilon}^{+}<\infty, T_{x+\varepsilon
}^{+}< T_{x+\varepsilon-a}^{-}, T_{x+\varepsilon
}^{+}< \tau_a)\subset(T_{x+\varepsilon}^{+}<\infty, T_{x+\varepsilon}^{+}<\tau_a),\nonumber%
\end{equation}
which immediately implies (\ref{eq.up}). On the other hand, by using the same argument, we have
\begin{equation}
(T_{x-a}^{-}<\infty, T_{x-a}^{-}<T_{x+\varepsilon}^{+})=(T_{x-a}^{-}<\infty, \tau_{a}\leq T_{x-a}^{-}<T_{x+\varepsilon
}^{+})\subset(\tau_{a}<\infty, \tau_{a}<T_{x+\varepsilon}^{+})\quad\mathbb{P}_{x}\text{-a.s.}
\label{w1}%
\end{equation}
and%
\begin{equation}
(\tau_{a}<\infty, \tau_{a}<T_{x+\varepsilon}^{+})=(\tau_{a}<\infty, T_{x+\varepsilon-a}^{-}\leq\tau
_{a}<T_{x+\varepsilon}^{+})\subset(T_{x+\varepsilon-a}^{-}<\infty, T_{x+\varepsilon-a}^{-}<T_{x+\varepsilon
}^{+})\quad\mathbb{P}_{x}\text{-a.s.} \label{w2}%
\end{equation}
For any path $\omega\in(T_{x-a}^{-}<\infty, T_{x-a}^{-}<T_{x+\varepsilon}^{+})$, we know
from (\ref{w1}) that $\omega\in(T_{x-a}^{-}<\infty, \tau_{a}\leq T_{x-a}^{-}<T_{x+\varepsilon}%
^{+})$. This implies $M_{\tau_{a}}(\omega)\leq x+\varepsilon$ and $X_{\tau
_{a}}(\omega)\geq X_{T_{x-a}^{-}}(\omega)$, which further entails that
$Y_{\tau_{a}}(\omega)=M_{\tau_{a}}(\omega)-X_{\tau_{a}}(\omega)\leq
x+\varepsilon-X_{T_{x-a}^{-}}(\omega)$. Therefore, by the above analysis and
the second inequality of (\ref{eq.up}),
\[
e^{-qT_{x-a}^{-}-s(x+\varepsilon-X_{T_{x-a}^{-}})}1_{\left\{ T_{x-a}^{-}<\infty, T_{x-a}%
^{-}<T_{x+\varepsilon}^{+}\right\} }\leq e^{-q\tau_{a}-sY_{\tau_{a}}%
}1_{\left\{\tau_{a}<\infty, \tau_{a}<T_{x+\varepsilon}^{+}\right\} }\quad\mathbb{P}%
_{x}\text{-a.s.}%
\]
which naturally leads to (\ref{eq.down1}).
Similarly, for any sample path $\omega\in(\tau_{a}<\infty, \tau_{a}<T_{x+\varepsilon}^{+})$,
we know from (\ref{w2}) that $\omega\in(\tau_{a}<\infty, T_{x+\varepsilon-a}^{-}\leq\tau
_{a}<T_{x+\varepsilon}^{+})$, which implies that $x-X_{T_{x+\varepsilon
-a}^{-}}(\omega)\leq Y_{T_{x+\varepsilon-a}^{-}}(\omega)\leq Y_{\tau_{a}%
}(\omega).$ Therefore, by the first inequality of (\ref{eq.up}), we obtain
\[
e^{-q\tau_{a}-sY_{\tau_{a}}}1_{\left\{ \tau_{a}<\infty, \tau_{a}<T_{x+\varepsilon}%
^{+}\right\} }\leq e^{-qT_{x+\varepsilon-a}^{-}-s(x-X_{T_{x+\varepsilon
-a}^{-}})}1_{\{T_{x+\varepsilon-a}^{-}<\infty, T_{x+\varepsilon-a}^{-}<T_{x+\varepsilon}^{+}\}}\quad
\mathbb{P}_{x}\text{-a.s.}%
\]
This implies (\ref{eq.down2}).\bigskip
\end{proof}
By Proposition \ref{prop path}, we easily obtain the following useful estimates.
\begin{corollary}
\label{cor bd}For $q,s\geq0$, $x\in%
\mathbb{R}
,z>0$ and $\varepsilon\in(0,a)$,%
\begin{align*}
B_{1}^{(q)}(x;x+\varepsilon-a,x+\varepsilon) & \leq\mathbb{E}_{x}\left[
e^{-qT_{x+\varepsilon}^{+}}1_{\{T_{x+\varepsilon}^{+}<\infty, T_{x+\varepsilon}^{+}<\tau_{a}%
,X_{T_{x+\varepsilon}^{+}}=x+\varepsilon\}}\right] \leq B_{1}^{(q)}%
(x;x-a,x+\varepsilon),\\
B_{2}^{(q)}(x,\mathrm{d}z;x+\varepsilon-a,x+\varepsilon) & \leq
\mathbb{E}_{x}\left[ e^{-qT_{x+\varepsilon}^{+}}1_{\{T_{x+\varepsilon}^{+}<\infty, T_{x+\varepsilon}%
^{+}<\tau_{a},X_{T_{x+\varepsilon}^{+}}-x-\varepsilon\in\mathrm{d}z\}}\right]
\leq B_{2}^{(q)}(x,\mathrm{d}z;x-a,x+\varepsilon),
\end{align*}
and%
\[
e^{-s\varepsilon}C^{(q,s)}(x;x-a,x+\varepsilon)\leq\mathbb{E}_{x}\left[
e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)}1_{\left\{\tau_{a}<\infty, \tau_{a}<T_{x+\varepsilon}%
^{+}\right\} }\right] \leq e^{s\varepsilon}C^{(q,s)}(x;x+\varepsilon
-a,x+\varepsilon).
\]
\end{corollary}
\begin{remark}
\normalfont It is not difficult to check that the results of Proposition
\ref{prop path} and Corollary \ref{cor bd} still hold if the first passage
times and the drawdown times are only observed discretely or randomly (such as
the Poisson observation framework in Albrecher et al. \cite{AIZ16} for the
latter). Further, an explicit relationship between Poisson-observed first
passage times and Poisson-observed drawdown times (similar to Theorem
\ref{thm markov} below) can be found by following the same approach as laid
out in this paper.
\end{remark}
The later analysis involves the weak convergence of measures which is recalled
here. Consider a metric space $S$ with the Borel $\sigma$-algebra on it. We
say a sequence of finite measures $\{\mu_{n}\}_{n\in%
\mathbb{N}
}$ is weakly convergent to a finite measure $\mu$ as $n\rightarrow\infty$ if
\[
\lim_{n\rightarrow\infty}\int_{S}\phi(z)\mathrm{d}\mu_{n}(z)=\int_{S}%
\phi(z)\mathrm{d}\mu(z),
\]
for any bounded and continuous function $\phi(\cdot)$ on $S$.
In the next lemma, we show some forms of Fatou's lemma for varying measures
under weak convergence. Similar results are proved in Feinberg et al.
\cite{FKZ14} for probability measures. For completeness, a proof for general
finite measures is provided in the Appendix.
\begin{lemma}
\label{lem fatou}Suppose that $\{\mu_{n}\}_{n\in%
\mathbb{N}
}$ is a sequence of finite measures on $S$ which is weakly convergent to a
finite measure $\mu$, and $\{\phi_{n}\}_{n\in%
\mathbb{N}
}$ is a sequence of uniformly bounded and nonnegative functions on $S$. Then,%
\begin{equation}
\int_{S}\liminf_{n\rightarrow\infty,w\rightarrow z}\phi_{n}(w)\mathrm{d}%
\mu(z)\leq\liminf_{n\rightarrow\infty}\int_{S}\phi_{n}(z)\mathrm{d}\mu
_{n}(z)\text{,} \label{inf}%
\end{equation}
and
\begin{equation}
\int_{S}\limsup_{n\rightarrow\infty,w\rightarrow z}\phi_{n}(w)\mathrm{d}%
\mu(z)\geq\limsup_{n\rightarrow\infty}\int_{S}\phi_{n}(z)\mathrm{d}\mu_{n}(z).
\label{sup}%
\end{equation}
\end{lemma}
\section{Main results}
In this section, we study the joint law of $(\tau_{a},M_{\tau_{a}},Y_{\tau
_{a}})$ for a general Markov process with possibly two-sided jumps. We impose
the following assumptions on the two-sided exit quantities of $X$; they are
sufficient (but not necessary) conditions for the applicability of our
proposed methodology. Weaker assumptions may suffice
for special Markov processes; see, for instance, Remark \ref{rk levy} and
Corollary \ref{cor snm} below.
\begin{condition}
For all $q,s\geq0$, $z>0$ and $x>X_{0}$, we assume the following limits exist
and identities hold:
\begin{align*}
\text{\textbf{(A1)} }b_{a,1}^{(q)}(x) & :=\lim_{\varepsilon\downarrow0}%
\frac{1-B_{1}^{(q)}(x;x-a,x+\varepsilon)}{\varepsilon}=\lim_{\varepsilon
\downarrow0}\frac{1-B_{1}^{(q)}(x;x+\varepsilon-a,x+\varepsilon)}{\varepsilon
}\\
& =\lim_{\varepsilon\downarrow0}\frac{1-B_{1}^{(q)}(x-\varepsilon
;x-a,x)}{\varepsilon}=\lim_{\varepsilon\downarrow0}\frac{1-B_{1}%
^{(q)}(x-\varepsilon;x-\varepsilon-a,x)}{\varepsilon},
\end{align*}
and $\int_{x}^{y}b_{a,1}^{(q)}(w)\mathrm{d}w<\infty$ for any $x,y\in%
\mathbb{R}
$;%
\begin{align*}
\text{\textbf{(A2)} }b_{a,2}^{(q,s)}(x) & :=\lim_{\varepsilon\downarrow
0}\frac{1}{\varepsilon}B_{2}^{(q,s)}(x;x-a,x+\varepsilon)=\lim_{\varepsilon
\downarrow0}\frac{1}{\varepsilon}B_{2}^{(q,s)}(x;x+\varepsilon-a,x+\varepsilon
)\\
& =\lim_{\varepsilon\downarrow0}\frac{1}{\varepsilon}B_{2}^{(q,s)}%
(x-\varepsilon;x-a,x)=\lim_{\varepsilon\downarrow0}\frac{1}{\varepsilon}%
B_{2}^{(q,s)}(x-\varepsilon;x-\varepsilon-a,x),
\end{align*}
and $s\longmapsto b_{a,2}^{(q,s)}(x)$ is right continuous at $s=0$;%
\begin{align*}
\text{\textbf{(A3)} }c_{a}^{(q,s)}(x) & :=\lim_{\varepsilon\downarrow0}%
\frac{C^{(q,s)}(x;x-a,x+\varepsilon)}{\varepsilon}=\lim_{\varepsilon
\downarrow0}\frac{C^{(q,s)}(x;x+\varepsilon-a,x+\varepsilon)}{\varepsilon}\\
& =\lim_{\varepsilon\downarrow0}\frac{C^{(q,s)}(x-\varepsilon;x-a,x)}%
{\varepsilon}=\lim_{\varepsilon\downarrow0}\frac{C^{(q,s)}(x-\varepsilon
;x-\varepsilon-a,x)}{\varepsilon}.
\end{align*}
\end{condition}
Under Assumptions (\textbf{A1}) and (\textbf{A2}), it follows from (\ref{BBB})
that
\begin{equation}
b_{a}^{(q,s)}(x):=\lim_{\varepsilon\downarrow0}\frac{1-B^{(q,s)}%
(x;x-a,x+\varepsilon)}{\varepsilon}=b_{a,1}^{(q)}(x)-b_{a,2}^{(q,s)}(x).
\label{bbb}%
\end{equation}
\begin{remark}
\label{rmk31} \normalfont Due to the general structure of $X$, it is difficult
to refine Assumptions \textbf{(A1)}-\textbf{(A3)} unless a specific structure
for $X$ is given. \textcolor[rgb]{0,0,0}{A necessary condition for
Assumptions \textbf{(A1)}-\textbf{(A3}) to hold is that,
\[
T_{x}^{+}=0\text{ and }X_{T_{x}^{+}}=x,\text{ }\mathbb{P}_{x}\text{-a.s. for
all }x\in\mathbb{R}\text{.}\]
In other words, $X$ must be upward regular and creeping upward at every
$x$.}\footnote{See page 142 and page 197 of \cite{K14} for definitions of
regularity and creeping for L\'{e}vy processes.} In the later part of this
section, we provide some examples of Markov processes which satisfy
Assumptions \textbf{(A1)}-\textbf{(A3)}, including spectrally negative
L\'{e}vy processes, linear diffusions, piecewise exponential Markov processes,
and jump diffusions.
\end{remark}
\begin{remark}
\normalfont\label{rk weak}By Theorem 5.22 of Kallenberg \cite{K02} or
Proposition 7.1 of Landriault et al. \cite{LLZ16}, we know that Assumption
(\textbf{A2}) implies that the measures $\frac{1}{\varepsilon}B_{2}%
^{(q)}(x,\mathrm{d}z;x-a,x+\varepsilon)$, $\frac{1}{\varepsilon}B_{2}%
^{(q)}(x,\mathrm{d}z;x+\varepsilon-a,x+\varepsilon)$, $\frac{1}{\varepsilon
}B_{2}^{(q)}(x-\varepsilon,\mathrm{d}z;x-a,x)$ and $\frac{1}{\varepsilon}%
B_{2}^{(q)}(x-\varepsilon,\mathrm{d}z;x-\varepsilon-a,x)$ weakly converge to
the same measure on $%
\mathbb{R}
_{+}$, denoted as $b_{a,2}^{(q)}(x,\mathrm{d}z)$, such that $\int_{%
\mathbb{R}
_{+}}e^{-sz}b_{a,2}^{(q)}(x,\mathrm{d}z)=b_{a,2}^{(q,s)}(x)$. We point out
that it is possible that $b_{a,2}^{(q)}(x,\{0\})>0$, though the measure
$B_{2}^{(q)}(x,\mathrm{d}z;u,v)$ is only defined on $z\in(0,\infty)$.
\end{remark}
We are now ready to present the main result of this paper related to the joint
law of $(\tau_{a},Y_{\tau_{a}},M_{\tau_{a}})$.
\begin{theorem}
\label{thm markov}Consider a general time-homogeneous Markov process $X$
satisfying Assumptions (\textbf{A1})-(\textbf{A3}). For $q,s\geq0$ and
$K\in\mathbb{R}$, let
\[
h(x)=\mathbb{E}_{x}\left[ e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)}1_{\{\tau_a<\infty, M_{\tau_{a}%
}\leq K\}}\right] ,\quad x\leq K.
\]
Then $h(\cdot)$ is differentiable in $x<K$ and solves the following integral
equation
\begin{equation}
h(x)=\int_{x}^{K}e^{-\int_{x}^{y}b_{a,1}^{(q)}(w)\mathrm{d}w}\left(
c_{a}^{(q,s)}(y)+\int_{[0,K-y)}h(y+z)b_{a,2}^{(q)}(y,\mathrm{d}z)\right)
\mathrm{d}y\text{,}\quad x\leq K. \label{triple LT}%
\end{equation}
\end{theorem}
\begin{proof}
By the strong Markov property of $X$, for any $X_{0}=x\leq y<K$ and
$0<\varepsilon<(K-y)\wedge a$, we have
\begin{align*}
h(y) & =\mathbb{E}_{y}\left[ e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)}1_{\left\{\tau_{a}<\infty,
\tau_{a}<T_{y+\varepsilon}^{+}\right\} }\right] +\mathbb{E}_{y}\left[
e^{-qT_{y+\varepsilon}^{+}}1_{\{T_{y+\varepsilon}^{+}<\infty, T_{y+\varepsilon}^{+}<\tau_{a}%
,X_{T_{y+\varepsilon}^{+}}=y+\varepsilon\}}\right] h(y+\varepsilon)\\
& +\int_{0}^{K-y-\varepsilon}\mathbb{E}_{y}\left[ e^{-qT_{y+\varepsilon}%
^{+}}1_{\{T_{y+\varepsilon}^{+}<\infty, T_{y+\varepsilon}^{+}<\tau_{a},X_{T_{y+\varepsilon}^{+}%
}-y-\varepsilon\in\mathrm{d}z\}}\right] h(y+\varepsilon+z).
\end{align*}
By Corollary \ref{cor bd}, it follows that%
\begin{align}
h(y+\varepsilon)-h(y) & \geq-e^{s\varepsilon}C^{(q,s)}(y;y+\varepsilon
-a,y+\varepsilon)+\left( 1-B_{1}^{(q)}(y;y-a,y+\varepsilon)\right)
h(y+\varepsilon)\nonumber\\
& -\int_{0}^{K-y-\varepsilon}h(y+\varepsilon+z)B_{2}^{(q)}(y,\mathrm{d}%
z;y-a,y+\varepsilon), \label{down}%
\end{align}
and%
\begin{align}
h(y+\varepsilon)-h(y) & \leq-e^{-s\varepsilon}C^{(q,s)}(y;y-a,y+\varepsilon
)+\left( 1-B_{1}^{(q)}(y;y+\varepsilon-a,y+\varepsilon)\right)
h(y+\varepsilon)\nonumber\\
& -\int_{0}^{K-y-\varepsilon}h(y+\varepsilon+z)B_{2}^{(q)}(y,\mathrm{d}%
z;y+\varepsilon-a,y+\varepsilon). \label{up}%
\end{align}
By Assumptions (\textbf{A1})-(\textbf{A3}) and $h(\cdot)\in\lbrack0,1]$, it is
clear that both the lower bound of $h(y+\varepsilon)-h(y)$ in (\ref{down}) and
the upper bound in (\ref{up}) vanish as $\varepsilon\downarrow0$. Hence,
$h(y)$ is right continuous for $y\in\lbrack x,K)$. Replacing $y$ by
$y-\varepsilon$ in (\ref{down}) and (\ref{up}), and using Assumptions
(\textbf{A1})-(\textbf{A3}) again, it follows that $h(y)$ is also left
continuous for $y\in(x,K]$ with $h(K)=0$. Therefore, $h(y)$ is continuous for
$y\in\lbrack x,K]$ (left continuous at $x$ and right continuous at $K$).
Next, to show the differentiability, we divide inequalities
(\ref{down}) and (\ref{up}) by $\varepsilon$. It follows from Assumptions
(\textbf{A1})-(\textbf{A3}), Remark \ref{rk weak}, Lemma \ref{lem fatou} and
the continuity of $h$ that
\begin{align*}
& \liminf_{\varepsilon\downarrow0}\frac{h(y+\varepsilon)-h(y)}{\varepsilon}\\
& \geq-c_{a}^{(q,s)}(y)+b_{a,1}^{(q)}(y)h(y)-\limsup_{\varepsilon\downarrow
0}\int_{0}^{K-y-\varepsilon}h(y+\varepsilon+z)\frac{B_{2}^{(q)}(y,\mathrm{d}%
z;y-a,y+\varepsilon)}{\varepsilon}\\
& \geq-c_{a}^{(q,s)}(y)+b_{a,1}^{(q)}(y)h(y)-\int_{[0,K-y)}h(y+z)b_{a,2}%
^{(q)}(y,\mathrm{d}z)\text{,}%
\end{align*}
and similarly,
\[
\limsup_{\varepsilon\downarrow0}\frac{h(y+\varepsilon)-h(y)}{\varepsilon}%
\leq-c_{a}^{(q,s)}(y)+b_{a,1}^{(q)}(y)h(y)-\int_{[0,K-y)}h(y+z)b_{a,2}%
^{(q)}(y,\mathrm{d}z).
\]
Since the two limits coincide, one concludes that $h(y)$ is right
differentiable for $y\in(x,K)$. Moreover, by replacing $y$ by $y-\varepsilon$
in (\ref{down}) and (\ref{up}), and using similar arguments, we can show that
$h(y)$ is also left differentiable for $y\in(x,K)$. Since the left and right
derivatives coincide, we conclude that $h(y)$ is differentiable for any
$y\in(x,K)$ and solves the following ordinary integro-differential equation
(OIDE),%
\begin{equation}
h^{\prime}(y)-b_{a,1}^{(q)}(y)h(y)=-c_{a}^{(q,s)}(y)-\int_{[0,K-y)}%
h(y+z)b_{a,2}^{(q)}(y,\mathrm{d}z). \label{h'}%
\end{equation}
Multiplying both sides of (\ref{h'}) by $e^{-\int_{x}^{y}b_{a,1}%
^{(q)}(w)\mathrm{d}w}$, integrating the resulting equation (with respect to
$y$) from $x$ to $K$, and using $h(K)=0$, this completes the proof of Theorem
\ref{thm markov}.\bigskip
\end{proof}
When the Markov process $X$ is spectrally negative (i.e., with no upward
jumps), the upward overshooting density $b_{a,2}^{(q)}(x,\mathrm{d}z)$ is
trivially $0$. Theorem \ref{thm markov} reduces to the following corollary.
\begin{corollary}
\label{cor snm}Consider a spectrally negative time-homogeneous Markov process
$X$ satisfying Assumptions (\textbf{A1}) and (\textbf{A3}). For $q,s\geq0$ and
$K>0$, we have%
\[
\mathbb{E}_{x}\left[ e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)}1_{\{\tau_{a}<\infty, M_{\tau_{a}}\leq
K\}}\right] =\int_{x}^{K}e^{-\int_{x}^{y}b_{a,1}^{(q)}(w)\mathrm{d}w}%
c_{a}^{(q,s)}(y)\mathrm{d}y\text{,}\quad x\leq K.
\]
\end{corollary}
When $X$ is a general L\'{e}vy process (possibly with two-sided jumps), we
have the following result for the joint Laplace transform of the triplet
$(\tau_{a},Y_{\tau_{a}},M_{\tau_{a}})$. Note that Corollary \ref{cor levy}
should be compared to Theorem 4.1 of Baurdoux \cite{B09}, in which, under the
L\'{e}vy framework, the resolvent density of $Y$ is expressed in terms of the
resolvent density of $X$ using excursion theory.
\begin{corollary}
\label{cor levy}Consider a L\'{e}vy process $X$ satisfying Assumptions
(\textbf{A1})-(\textbf{A3}). For $q,s,\delta\geq0$, we have\footnote{For L\'evy processes $\mathbb{P}\{\tau_a<\infty\}=1$ as long as $X$ is not monotone.}
\begin{equation}
\mathbb{E}\left[ e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)-\delta M_{\tau_{a}}}\right]
=\frac{c_{a}^{(q,s)}(0)}{\delta+b_{a}^{(q,\delta)}(0)}. \label{levy}%
\end{equation}
\end{corollary}
\begin{proof}
By the spatial homogeneity of the L\'{e}vy process $X$, Eq. (\ref{triple LT})
at $x=0$ reduces to
\begin{equation}
h(0)=\frac{c_{a}^{(q,s)}(0)}{b_{a,1}^{(q)}(0)}\left( 1-e^{-b_{a,1}^{(q)}%
(0)K}\right) +\int_{0}^{K}e^{-b_{a,1}^{(q)}(0)y}\int_{[0,K-y)}h(y+z)b_{a,2}%
^{(q)}(0,\mathrm{d}z)\mathrm{d}y. \label{h}%
\end{equation}
Let
\[
\hat{h}(0):=\mathbb{E}\left[ e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)-\delta
M_{\tau_{a}}}\right] =\mathbb{E}\left[ e^{-q\tau_{a}-s(Y_{\tau_{a}}%
-a)}1_{\{M_{\tau_{a}}\leq e_{\delta}\}}\right] \text{,}%
\]
where $e_{\delta}$ is an independent exponential random variable with finite
mean $1/\delta>0$. Multiplying both sides of (\ref{h}) by $\delta e^{-\delta
K}$, integrating the resulting equation (with respect to $K$) from $0$ to
$\infty$, and using integration by parts, one obtains
\begin{align*}
\hat{h}(0) & =\frac{c_{a}^{(q,s)}(0)}{\delta+b_{a,1}^{(q)}(0)}+\int
_{0}^{\infty}\delta e^{-\delta K}\int_{0}^{K}e^{-b_{a,1}^{(q)}(0)y}%
\int_{[0,K-y)}h(y+z)b_{a,2}^{(q)}(0,\mathrm{d}z)\mathrm{d}y\mathrm{d}K\\
& =\frac{c_{a}^{(q,s)}(0)}{\delta+b_{a,1}^{(q)}(0)}+\int_{0}^{\infty
}e^{-b_{a,1}^{(q)}(0)y}\mathrm{d}y\int_{%
\mathbb{R}
_{+}}b_{a,2}^{(q)}(0,\mathrm{d}z)\int_{z+y}^{\infty}\delta e^{-\delta
K}\mathbb{E}\left[ e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)}1_{\{M_{\tau_{a}}\leq
K-y-z\}}\right] \mathrm{d}K\\
& =\frac{c_{a}^{(q,s)}(0)}{\delta+b_{a,1}^{(q)}(0)}+\hat{h}(0)\frac{\int_{%
\mathbb{R}
_{+}}e^{-\delta z}b_{a,2}^{(q)}(0,\mathrm{d}z)}{\delta+b_{a,1}^{(q)}(0)}.
\end{align*}
Solving for $\hat{h}(0)$ and using (\ref{bbb}), it follows that%
\[
\hat{h}(0)=\frac{c_{a}^{(q,s)}(0)}{\delta+b_{a,1}^{(q)}(0)-\int_{%
\mathbb{R}
_{+}}e^{-\delta z}b_{a,2}^{(q)}(0,\mathrm{d}z)}=\frac{c_{a}^{(q,s)}(0)}%
{\delta+b_{a}^{(q,\delta)}(0)}.
\]
It follows from the monotone convergence theorem that (\ref{levy}) also holds
for $\delta=0$.\bigskip
\end{proof}
\begin{remark}
\label{rk levy} \normalfont
We point out that Assumptions (\textbf{A1})-(\textbf{A3}) are not necessary to
yield (\ref{levy}) in the L\'{e}vy framework. In fact, by the spatial
homogeneity of $X$, similar to (\ref{down}) and (\ref{up}), we have
\[
\frac{e^{-(s+\delta)\varepsilon}C^{(q,s)}(0;-a,\varepsilon)}{1-e^{-\delta
\varepsilon}B^{(q,\delta)}(0;\varepsilon-a,\varepsilon)}\leq\mathbb{E}\left[
e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)-\delta M_{\tau_{a}}}\right] \leq
\frac{e^{s\varepsilon}C^{(q,s)}(0;\varepsilon-a,\varepsilon)}{1-e^{-\delta
\varepsilon}B^{(q,\delta)}(0;-a,\varepsilon)},
\]
for any $\varepsilon\in(0,a)$. Suppose that the following condition holds:
\[
\lim_{\varepsilon\downarrow0}\frac{C^{(q,s)}(0;-a,\varepsilon)}{1-e^{-\delta
\varepsilon}B^{(q,\delta)}(0;\varepsilon-a,\varepsilon)}=\lim_{\varepsilon
\downarrow0}\frac{C^{(q,s)}(0;\varepsilon-a,\varepsilon)}{1-e^{-\delta
\varepsilon}B^{(q,\delta)}(0;-a,\varepsilon)}=:D_{a}^{(q,s,\delta)}.%
\]
Then,
\[
\mathbb{E}\left[ e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)-\delta M_{\tau_{a}}}\right]
=D_{a}^{(q,s,\delta)}.
\]
\end{remark}
Theorem \ref{thm markov} shows that the joint law $\mathbb{E}_{x}\left[
e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)}1_{\{\tau_{a}<\infty, M_{\tau_{a}}\leq K\}}\right] $ is a
solution to Eq. (\ref{triple LT}). Furthermore, the following theorem shows
that Eq. (\ref{triple LT}) admits a unique solution.
\begin{theorem}
Suppose that Assumptions (\textbf{A1})-(\textbf{A3}) hold. For $q,s\geq0$ and
$K>0$, Eq. (\ref{triple LT}) admits a unique solution.
\end{theorem}
\begin{proof}
From Theorem \ref{thm markov}, we know that $h(x):=\mathbb{E}_{x}\left[
e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)}1_{\{\tau_{a}<\infty, M_{\tau_{a}}\leq K\}}\right] $ is a
solution of (\ref{triple LT}). We also notice that any continuous solution to
(\ref{triple LT}) must vanish when $x\uparrow K$. For any fixed
$L\in(-\infty,K)$, we define a metric space $(\mathbb{A}_{L},\boldsymbol{d}%
_{L})$, where $\mathbb{A}_{L}=\left\{ f\in C[L,K],f(K)=0\right\} $ and the
metric $\boldsymbol{d}_{L}(f,g)=\sup_{x\in\lbrack L,K]}|f(x)-g(x)|$ for
$f,g\in\mathbb{A}_{L}$. We then define a mapping $\mathcal{L}$ on
$\mathbb{A}_{L}$ by
\[
\mathcal{L}f(x)=\int_{x}^{K}e^{-\int_{x}^{y}b_{a,1}^{(q)}(w)\mathrm{d}%
w}\left( c_{a}^{(q,s)}(y)+\int_{[0,K-y)}f(y+z)b_{a,2}^{(q)}(y,\mathrm{d}%
z)\right) \mathrm{d}y,\text{\quad}x\in\lbrack L,K],
\]
where $f\in\mathbb{A}_{L}$. It is clear that $\mathcal{L}(\mathbb{A}%
_{L})\subset\mathbb{A}_{L}$.
Next we show that $\mathcal{L}:\mathbb{A}_{L}\rightarrow\mathbb{A}_{L}$ is a
contraction mapping. By the definitions of the two-sided exit quantities, for
any $y\in%
\mathbb{R}
$, it follows that%
\begin{equation}
C^{(q,s)}(y;y-a,y+\varepsilon)+\int_{%
\mathbb{R}
_{+}}B_{2}^{(q)}(y,\mathrm{d}z;y-a,y+\varepsilon)\leq1-B_{1}^{(q)}%
(y;y-a,y+\varepsilon). \label{C1B}%
\end{equation}
Dividing each term in (\ref{C1B}) by $\varepsilon\in(0,a)$ and letting
$\varepsilon\downarrow0$, it follows from Assumptions (\textbf{A1}%
)-(\textbf{A3}) that%
\begin{equation}
0\leq c_{a}^{(q,s)}(y)+\int_{%
\mathbb{R}
_{+}}b_{a,2}^{(q)}(y,\mathrm{d}z)\leq b_{a,1}^{(q)}(y),\quad y\in%
\mathbb{R}
. \label{ineq}%
\end{equation}
By (\ref{ineq}), we have for any $f,g\in\mathbb{A}_{L}$,
\begin{align*}
\boldsymbol{d}_{L}\left( \mathcal{L}f,\mathcal{L}g\right) & \leq\sup
_{t\in\lbrack L,K]}\left\vert f(t)-g(t)\right\vert \sup_{x\in\lbrack L,K]}%
\int_{x}^{K}e^{-\int_{x}^{y}b_{a,1}^{(q)}(w)\mathrm{d}w}\int_{%
\mathbb{R}
_{+}}b_{a,2}^{(q)}(y,\mathrm{d}z)\mathrm{d}y\\
& \leq\boldsymbol{d}_{L}(f,g)\sup_{L\leq x\leq K}\int_{x}^{K}e^{-\int_{x}%
^{y}b_{a,1}^{(q)}(w)\mathrm{d}w}b_{a,1}^{(q)}(y)\mathrm{d}y\\
& \leq\boldsymbol{d}_{L}(f,g)\left( 1-e^{-\int_{L}^{K}b_{a,1}^{(q)}%
(w)\mathrm{d}w}\right) .
\end{align*}
Since $\int_{L}^{K}b_{a,1}^{(q)}(w)\mathrm{d}w<\infty$ by Assumption
(\textbf{A1}), one concludes that $\mathcal{L}:\mathbb{A}_{L}\rightarrow
\mathbb{A}_{L}$ is a contraction mapping. By Banach fixed point theorem, there
exists a unique fixed point in $\mathbb{A}_{L}$. By a restriction of domain,
it is easy to see that $\mathbb{A}_{L_{1}}\subset\mathbb{A}_{L_{2}}$ for
$-\infty<L_{1}<L_{2}<K$. By the arbitrariness of $L$, the uniqueness holds for
the space $\cap_{L<K}\mathbb{A}_{L}$. This completes the proof.
\end{proof}
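The contraction-mapping argument is constructive: iterating $\mathcal{L}$ from any starting point in $\mathbb{A}_{L}$ converges geometrically to the unique solution of (\ref{triple LT}). The following Python sketch implements this Picard iteration on a uniform grid. The constant coefficients and the unit-exponential overshoot density below are placeholders of our own choosing (satisfying $c_{a}^{(q,s)}+\int_{\mathbb{R}_{+}}b_{a,2}^{(q)}(\cdot,\mathrm{d}z)\leq b_{a,1}^{(q)}$, in line with (\ref{ineq})); in applications they are to be replaced by the model-specific quantities.
\begin{verbatim}
import numpy as np

K, L0, N = 5.0, 0.0, 400
b1, c0, beta = 1.0, 0.6, 0.3        # placeholders; c0 + beta <= b1, cf. (ineq)
x = np.linspace(L0, K, N)
dx = x[1] - x[0]
trap = lambda f: dx * (f.sum() - 0.5 * (f[0] + f[-1]))  # uniform trapezoid

def apply_L(h):
    # inner[j] = int over z in [0, K - x_j) of h(x_j + z) * beta * exp(-z) dz
    inner = np.array([trap(h[j:] * beta * np.exp(-(x[j:] - x[j])))
                      for j in range(N)])
    out = np.zeros(N)
    for i in range(N):
        out[i] = trap(np.exp(-b1 * (x[i:] - x[i])) * (c0 + inner[i:]))
    return out

h, err = np.zeros(N), 1.0
for it in range(60):
    h_new = apply_L(h)
    err, h = np.max(np.abs(h_new - h)), h_new
    if err < 1e-12:
        break
print(it, err, h[0])   # geometric convergence, as the contraction predicts
\end{verbatim}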
For the remainder of this section, we state several examples of Markov
processes satisfying Assumptions (\textbf{A1})-(\textbf{A3}). Note that the
joint laws of drawdown quantities in Examples \ref{eg SNLP} and
\ref{eg diffusion} were derived by Mijatovic and Pistorius \cite{MP12} and
Lehoczky \cite{L77}, respectively (using different approaches). Assumption
verifications for Examples \ref{eg PEMP} and \ref{eg JD} are postponed to the Appendix.
\begin{example}
[Spectrally negative L\'{e}vy processes]\label{eg SNLP} \normalfont Consider a
spectrally negative L\'{e}vy process $X$. Let $\psi(s):=\frac{1}{t}%
\log\mathbb{E}[e^{sX_{t}}]$ $\left( s\geq0\right) $ be the Laplace exponent
of $X$. Further, let $W^{(q)}:%
\mathbb{R}
\rightarrow\lbrack0,\infty)$ be the well-known $q$-scale function of $X$; see,
for instance Chapter 8 of Kyprianou \cite{K14}. The second scale function is
defined as $Z^{(q)}(x)=1+q\int_{0}^{x}W^{(q)}(y)\mathrm{d}y$. Under some mild
conditions (e.g., Lemma 2.4 of Kuznetsov et al. \cite{KKR12}), the scale
functions are continuously differentiable which further implies that
Assumptions (\textbf{A1}) and (\textbf{A3}) hold with
\begin{equation}
b_{a,1}^{(q)}(0)=\frac{W^{(q)\prime}(a)}{W^{(q)}(a)}\text{ and }c_{a}%
^{(q,s)}(0)=e^{sa}\frac{Z_{s}^{(p)}(a)W_{s}^{(p)\prime}(a)-Z_{s}^{(p)\prime
}(a)W_{s}^{(p)}(a)}{W_{s}^{(p)}(a)}, \label{bc}%
\end{equation}
where $p=q-\psi(s)$, and $W_{s}^{(p)}$ ($Z_{s}^{(p)}$) is the (second) scale
function of $X$ under a new probability measure\ $\mathbb{P}^{s}$ defined by
the Radon-Nikodym derivative process $\left. \frac{\mathrm{d}\mathbb{P}^{s}%
}{\mathrm{d}\mathbb{P}}\right\vert _{\mathcal{F}_{t}}=e^{sX_{t}-\psi(s)t}$ for
$t\geq0$. Therefore, by Corollary \ref{cor levy} and (\ref{bc}), we have%
\[
\mathbb{E}\left[ e^{-q\tau_{a}-s(Y_{\tau_{a}}-a)-\delta M_{\tau_{a}}}\right]
=\frac{e^{sa}W^{(q)}(a)}{\delta W^{(q)}(a)+W^{(q)\prime}(a)}\frac{Z_{s}%
^{(p)}(a)W_{s}^{(p)\prime}(a)-pW_{s}^{(p)}(a)^{2}}{W_{s}^{(p)}(a)},
\]
which is consistent with Theorem 3.1 of Landriault et al. \cite{LLZ16}, and
\textcolor[rgb]{0.0,0.0,0.0}{Theorem 1 of
Mijatovic and Pistorius \cite{MP12}}.
\end{example}
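As a quick sanity check of \eqref{levy} (our own, not carried out in the paper), one may specialize to a Brownian motion with drift, $X_{t}=\mu t+\sigma B_{t}$, for which the scale functions are explicit: $W^{(q)}(x)=(\mathrm{e}^{\lambda_{+}x}-\mathrm{e}^{\lambda_{-}x})/\sqrt{\mu^{2}+2q\sigma^{2}}$ with $\lambda_{\pm}$ the roots of $\psi(\lambda)=q$. Since paths are continuous, $Y_{\tau_{a}}=a$ and the overshoot term vanishes, so \eqref{levy} reads $\mathbb{E}[\mathrm{e}^{-q\tau_{a}-\delta M_{\tau_{a}}}]=c_{a}^{(q,0)}(0)/(\delta+b_{a,1}^{(q)}(0))$. The Python sketch below (parameters chosen arbitrarily) evaluates this and compares it with a crude Euler Monte Carlo estimate.
\begin{verbatim}
import numpy as np

mu, sigma, a, q, delta = 0.5, 1.0, 1.0, 0.05, 0.2
disc = np.sqrt(mu**2 + 2 * q * sigma**2)
lp, lm = (-mu + disc) / sigma**2, (-mu - disc) / sigma**2  # roots of psi = q

W  = lambda y: (np.exp(lp * y) - np.exp(lm * y)) / disc
Wp = lambda y: (lp * np.exp(lp * y) - lm * np.exp(lm * y)) / disc
Z  = lambda y: 1 + q / disc * ((np.exp(lp * y) - 1) / lp
                               - (np.exp(lm * y) - 1) / lm)

b1 = Wp(a) / W(a)                          # b_{a,1}^{(q)}(0), cf. (bc)
c  = (Z(a) * Wp(a) - q * W(a)**2) / W(a)   # c_a^{(q,0)}(0),  cf. (bc), s = 0
print("formula    :", c / (delta + b1))    # right-hand side of (levy)

# Crude Euler scheme for (tau_a, M_{tau_a}); discretization bias is O(sqrt(dt)).
rng, dt, n = np.random.default_rng(0), 1e-3, 20000
X = np.zeros(n); M = np.zeros(n); t = np.zeros(n)
out = np.zeros(n); alive = np.ones(n, dtype=bool)
while alive.any():
    m = alive.sum()
    X[alive] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(m)
    M[alive] = np.maximum(M[alive], X[alive]); t[alive] += dt
    hit = alive & (M - X >= a)
    out[hit] = np.exp(-q * t[hit] - delta * M[hit]); alive &= ~hit
print("Monte Carlo:", out.mean())
\end{verbatim}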
\begin{example}
[Refracted L\'{e}vy processes]\label{eg refracted} \normalfont Consider a
refracted spectrally negative L\'{e}vy process $X$ of the form
\begin{equation}
X_{t}=U_{t}-\lambda\int_{0}^{t}1_{\{X_{s}>b\}}\mathrm{d}s, \label{RLEVY}%
\end{equation}
where $\lambda\geq0$, $b>0$, and $U$ is a spectrally negative L\'{e}vy process
(see Kyprianou and Loeffen \cite{KL10}). Let $W^{(q)}$ ($Z^{(q)}$) be the
(second) $q$-scale function of $U$, and $\mathbb{W}^{(q)}$ be the $q$-scale
function of the process $\{U_{t}-\lambda t\}_{t\geq0}$. Similar to Example
\ref{eg SNLP}, all the scale functions are continuously differentiable under
mild conditions.
For simplicity, we only consider the quantity
\textcolor[rgb]{0.0,0.0,0.0}{$\mathbb{E}_{x}\left[
e^{-q\tau_{a}}1_{\{\tau_a<\infty, M_{\tau_a}\le K\}}\right] $} with $b>x-a$ (otherwise the
problem reduces to Example \ref{eg SNLP} for $X_{t}=U_{t}-\lambda t$). By
Theorem 4 of Kyprianou and Loeffen \cite{KL10}, one can verify that
Assumptions (\textbf{A1}) and (\textbf{A3}) hold. For $b>x$, from (\ref{bc})
with $s=0$, we have
\[
b_{a,1}^{(q)}(x)=\frac{W^{(q)\prime}(a)}{W^{(q)}(a)}\text{ and }c_{a}%
^{(q,0)}(x)=\frac{Z^{(q)}(a)W^{(q)\prime}(a)-Z^{(q)\prime}(a)W^{(q)}%
(a)}{W^{(q)}(a)}.
\]
For $x>b>x-a$,%
\[
b_{a,1}^{(q)}(x)=\frac{\left( 1+\lambda\mathbb{W}^{(q)}(0)\right)
W^{(q)\prime}(a)+\lambda\int_{b-x+a}^{a}\mathbb{W}^{(q)\prime}%
(a-y)W^{(q)\prime}(y)\mathrm{d}y}{W^{(q)}(a)+\lambda\int_{b-x+a}^{a}%
\mathbb{W}^{(q)}(a-y)W^{(q)\prime}(y)\mathrm{d}y}%
\]
and
\[
c_{a}^{(q,0)}(x)=\frac{k_{a}^{(q)}(x)}{W^{(q)}(a)+\lambda\int_{b-x+a}%
^{a}\mathbb{W}^{(q)}(a-y)W^{(q)\prime}(y)dy},
\]
where%
\begin{align*}
k_{a}^{(q)}(x) & =(1+\lambda\mathbb{W}^{(q)}(0))\left( Z^{(q)}%
(a)W^{(q)\prime}(a)-qW^{(q)}(a)^{2}\right) \\
& +\lambda q(1+\lambda\mathbb{W}^{(q)}(0))\int_{b-x+a}^{a}\mathbb{W}%
^{(q)}(a-y)\left( W^{(q)\prime}(a)W^{(q)}(y)-W^{(q)}(a)W^{(q)\prime
}(y)\right) \mathrm{d}y\\
& -\lambda q\left[ W^{(q)}(a)+\lambda\int_{b-x+a}^{a}\mathbb{W}%
^{(q)}(a-y)W^{(q)\prime}(y)\mathrm{d}y\right] \int_{b-x+a}^{a}\mathbb{W}%
^{(q)\prime}(a-y)W^{(q)}(y)\mathrm{d}y\\
& +\lambda\left[ Z^{(q)}(a)+\lambda q\int_{b-x+a}^{a}\mathbb{W}%
^{(q)}(a-y)W^{(q)}(y)\mathrm{d}y\right] \int_{b-x+a}^{a}\mathbb{W}%
^{(q)\prime}(a-y)W^{(q)\prime}(y)\mathrm{d}y.
\end{align*}
By Corollary \ref{cor snm}, we obtain
\[
\mathbb{E}_{x}\left[ e^{-q\tau_{a}}1_{\{\tau_{a}<\infty, M_{\tau_{a}}\leq K\}}\right]
=\int_{x}^{K}e^{-\int_{x}^{y}b_{a,1}^{(q)}(w)\mathrm{d}w}c_{a}^{(q,0)}%
(y)\mathrm{d}y\text{,}\quad x\leq K,
\]
which is a new result for the refracted L\'{e}vy process (\ref{RLEVY}).
\end{example}
\begin{example}
[Linear diffusion processes]\label{eg diffusion} \normalfont Consider a linear
diffusion process $X$ of the form
\[
\mathrm{d}X_{t}=\mu(X_{t})\mathrm{d}t+\sigma(X_{t})\mathrm{d}W_{t},
\]
where $(W_{t})_{t\geq0}$ is a standard Brownian motion, and the drift term
$\mu(\cdot)$ and local volatility $\sigma(\cdot)>0$ satisfy the usual
Lipschitz continuity and linear growth conditions. As a special case of the
jump diffusion process of Example \ref{eg JD}, it will be shown later that
Assumptions (\textbf{A1}) and (\textbf{A3}) hold for linear diffusion
processes. By Corollary \ref{cor snm}, we obtain
\[
\mathbb{E}_{x}\left[ e^{-q\tau_{a}}1_{\{\tau_a<\infty, M_{\tau_{a}}\leq K\}}\right]
=\int_{x}^{K}e^{-\int_{x}^{y}b_{a,1}^{(q)}(w)\mathrm{d}w}c_{a}^{(q,0)}%
(y)\mathrm{d}y\text{,}\quad x\leq K,
\]
which is consistent with Eq. (4) of Lehoczky \cite{L77}.
\end{example}
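For $q=0$ the limits in Assumptions (\textbf{A1}) and (\textbf{A3}) can be made explicit through the scale function $s$ of the diffusion: since $B_{1}^{(0)}(x;u,v)=(s(x)-s(u))/(s(v)-s(u))$, a short computation (our own sketch) gives $b_{a,1}^{(0)}(x)=c_{a}^{(0,0)}(x)=s'(x)/(s(x)-s(x-a))$, and Corollary \ref{cor snm} collapses to Lehoczky's formula $\mathbb{P}_{x}\{M_{\tau_{a}}\leq K\}=1-\exp\big(-\int_{x}^{K}s'(u)/(s(u)-s(u-a))\,\mathrm{d}u\big)$. The following Python snippet evaluates this for an Ornstein--Uhlenbeck process (an illustrative choice of ours).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Ornstein-Uhlenbeck diffusion dX = -theta X dt + sigma dW, with scale
# density s'(u) = exp(theta u^2 / sigma^2); parameters are illustrative.
theta, sigma, a, K, x = 1.0, 1.0, 0.5, 2.0, 0.0
sp = lambda u: np.exp(theta * u**2 / sigma**2)   # scale density s'
s  = lambda u: quad(sp, 0.0, u)[0]               # scale function (s(0) = 0)
rate = lambda u: sp(u) / (s(u) - s(u - a))       # b_{a,1}^{(0)} = c_a^{(0,0)}
prob, _ = quad(rate, x, K)
print(1 - np.exp(-prob))                         # P_x( M_{tau_a} <= K )
\end{verbatim}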
\begin{example}
[Piecewise exponential Markov processes]\label{eg PEMP} \normalfont Consider a
piecewise exponential Markov process (PEMP) $X$ of the form
\begin{equation}
\mathrm{d}X_{t}=\mu X_{t}\mathrm{d}t+\mathrm{d}Z_{t}, \label{PEMP}%
\end{equation}
where $\mu>0$ is the drift coefficient and $Z=(Z_{t})_{t\geq0}$ is a compound
Poisson process given by $Z_{t}=\sum_{i=1}^{N_{t}}J_{i}$. Here, $(N_{t}%
)_{t\geq0}$ is a Poisson process with intensity $\lambda>0$ and $J_{i}$'s are
iid copies of a real-valued random variable $J$ with cumulative distribution
function $F$.
\textcolor[rgb]{0,0,0}{We also assume the initial value $X_0\geq a$ which ensures that $X_t\geq 0$ for all $t<\tau_a$. In this case, as discussed in Remark \ref{rmk31}, $X$ is upward regular and creeps upward before $\tau_a$.}
The first passage times of $X$ have been extensively studied in applied
probability; see, e.g., Tsurui and Osaki \cite{TO76} and Kella and Stadje
\cite{KS01}. For the PEMP (\ref{PEMP}), semi-explicit expressions for the
two-sided exit quantities $B_{1}^{(q)}(\cdot)$, $B_{2}^{(q)}(\cdot,\cdot)$ and
$C^{(q,s)}(\cdot)$ are given in Section 6 of Jacobsen and Jensen \cite{JJ07}.
As will be shown in Section \ref{ver pemp}, Assumptions (\textbf{A1}%
)-(\textbf{A3}) and Theorem \ref{thm markov} hold for the PEMP $X$ with a
continuous jump size distribution $F$.
\end{example}
\begin{example}
[Jump diffusion]\label{eg JD} \normalfont Consider a jump diffusion process
$X$ of the form
\begin{equation}
\mathrm{d}X_{t}=\mu(X_{t})\mathrm{d}t+\sigma(X_{t})\mathrm{d}W_{t}%
+\int_{-\infty}^{\infty}\gamma(X_{t-},z)N(\mathrm{d}t,\mathrm{d}z), \label{JD}%
\end{equation}
where $\mu(\cdot)$ and $\sigma(\cdot)>0$ are functions on $\mathbb{R}$,
$(W_{t})_{t\geq0}$ is a standard Brownian motion, $\gamma(\cdot,\cdot)$ is a
real-valued function on $\mathbb{R}^{2}$ modeling the jump size, and
$N(\mathrm{d}t,\mathrm{d}z)$ is an independent Poisson random measure on
$\mathbb{R}_{+}\times\mathbb{R}$ with a finite intensity measure
\textcolor[rgb]{0.0,0.0,0.0}{$\mathrm{d}t\times\nu(\mathrm{d}z)$}. For
specific $\mu(\cdot)$ and $\sigma(\cdot)$, the jump diffusion (\ref{JD}) can
be used to model the surplus process of an insurer with investment in risky
assets; see, e.g., Gjessing and Paulsen \cite{GP97} and Yuen et al.
\cite{YWN04}. We assume the same conditions as Theorem 1.19 of \O ksendal and
Sulem-Bialobroda \cite{OS07} so that (\ref{JD}) admits a unique c\`{a}dl\`{a}g
adapted solution. Under this setup, we show in Section \ref{ver JD} that
Assumptions (\textbf{A1})--(\textbf{A3}) and thus Theorem \ref{thm markov}
hold for the jump diffusion (\ref{JD}).
\end{example}
\section{Numerical examples}
\textcolor[rgb]{0,0,0}{The main results of Section 3 rely on the analytic tractability of the
two-sided exit quantities. To further illustrate their applicability, we now
consider the numerical evaluation of the joint law of $(Y_{\tau_{a}}%
,M_{\tau_{a}})$ for two particular spatially inhomogeneous Markov processes with
(positive) jumps through Theorem \ref{thm markov}. For simplicity, we assume
that the discount rate $q=0$ throughout this section.}
\subsection{PEMP}
\textcolor[rgb]{0,0,0}{In this section, we consider the PEMP $X$ in Example \ref{eg PEMP} with
$\mu=1$, $\lambda=3$, and the generic jump size $J$ with density
\begin{equation}
p(x)=\left\{
\begin{array}
[c]{ll}%
\frac{1}{3}e^{-x}, & x>0,\\
\frac{1}{3}(e^{x}+2e^{2x}), & x<0.
\end{array}
\right. \label{p1}%
\end{equation}
We follow Section 6 of Jacobsen and Jensen \cite{JJ07} to first solve for the
two-sided exit quantities. Define the integral kernel
\[
\psi_{0}(z):=\frac{1}{z(z+1)(z-1)(z-2)},\quad z\in%
\mathbb{C}
,
\]
and the linearly independent functions%
\[%
\begin{array}
[c]{ll}%
g_{1}(x):=\frac{1}{2\pi\sqrt{-1}}\int_{\Gamma_{1}}\psi_{0}(z)e^{-xz}%
dz=\frac{1}{6}e^{-2x}, & g_{2}(x):=\frac{1}{2\pi\sqrt{-1}}\int_{\Gamma_{2}%
}\psi_{0}(z)e^{-xz}dz=-\frac{1}{2}e^{-x},\\
g_{3}(x):=\frac{1}{2\pi\sqrt{-1}}\int_{\Gamma_{3}}\psi_{0}(z)e^{-xz}%
dz=\frac{1}{2}, & g_{4}(x):=\frac{1}{2\pi\sqrt{-1}}\int_{\Gamma_{4}}\psi
_{0}(z)e^{-xz}dz=-\frac{1}{6}e^{x},
\end{array}
\]
for $x>0,$ where $\Gamma_{i}$ ($i=1,2,3,4$) is a small counterclockwise circle
centered at the pole $\mu_{i}=3-i$ of $\psi_{0}(z)$. Moreover, for $0<u<v$, we
consider the matrix-valued function
\[
(M_{i,k}({u,v}))_{1\leq i,k\leq4}:=%
\begin{pmatrix}
-\frac{1}{3}e^{-2u}(u+\frac{11}{6}) & \frac{e^{-2u}}{6} & \frac{e^{-2v}}{18} &
g_{1}(v)\\
e^{-u} & \frac{e^{-u}}{2}(u+\frac{1}{2}) & -\frac{e^{-v}}{4} & g_{2}(v)\\
-\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & g_{3}(v)\\
\frac{e^{u}}{9} & \frac{e^{u}}{12} & \frac{e^{v}}{6}(v-\frac{11}{6}) &
g_{4}(v)
\end{pmatrix}
,
\]
where the entries of the matrix $M$ are computed according to
\[
\left\{
\begin{array}
[c]{l}%
M_{i,k}(u,v)=\frac{\mu_{k}}{2\pi\sqrt{-1}}\int_{\Gamma_{i}}\frac{\psi_{0}%
(z)}{z-\mu_{k}}e^{-uz}\mathrm{d}z,\quad1\leq i\leq4, k=1,2,\\
M_{i,3}(u,v)=\frac{|\mu_{4}|}{2\pi\sqrt{-1}}\int_{\Gamma_{i}}\frac{\psi
_{0}(z)}{z-\mu_{4}}e^{-vz}\mathrm{d}z,\quad1\leq i\leq4.
\end{array}
\right.
\]
Let $(N_{k,j}(u,v))_{1\leq k,j\leq4}$ be the inverse of $(M_{i,k}(u,v))_{1\leq
i,k\leq4}$.
Combining Eq. (46) and a generalized Eq. (48) of Jacobsen and Jensen
\cite{JJ07} (with $\zeta=s\geq0$ and $\rho\geq0$), we obtain the linear system
of equations
\begin{equation}
(c_{1},c_{2},c_{3},c_{4})(M_{i,k})=\left( -\frac{2\underline{C}}{s+2}%
,-\frac{\underline{C}}{s+1},\frac{\overline{C}}{\rho+1},f(v)\right) ,
\label{CCCC}%
\end{equation}
where $\underline{C}$ and $\overline{C}$ are constants specified later, and
$f(x)$ could stand for any of $B_{1}^{(0)}(x;u,v)$, $B_{2}^{(0,\rho)}(x;u,v)$,
or $C^{(0,s)}(x;u,v)$ and has the representation
\[
f(x)=\sum\limits_{i=1}^{4}c_{i}g_{i}(x),\quad x\in\lbrack u,v].
\]
}
\textcolor[rgb]{0,0,0}{To solve for $B_{1}^{(0)}(x;u,v)$, $B_{2}^{(0,\rho)}(x;u,v)$, or
$C^{(0,s)}(x;u,v)$, we only need to solve (\ref{CCCC}) with different assigned
values of $\underline{C}$, $\overline{C}$, and $f(v)$ according to Eq. (45) of
Jacobsen and Jensen \cite{JJ07}. By letting $\underline{C}=\overline{C}=0$ and
$f(v)=1$, we obtain
\[
B_{1}^{(0)}(x;u,v)=\sum_{i=1}^{4}N_{4,i}(u,v)g_{i}(x).
\]
Similarly, by letting $\underline{C}=f(v)=0$ and $\overline{C}=1$, for
$\rho\geq0$, we obtain
\[
B_{2}^{(0,\rho)}(x;u,v)=\frac{1}{1+\rho}\sum_{i=1}^{4}N_{3,i}(u,v)g_{i}(x).
\]
A Laplace inversion with respect to $\rho$ yields, for $z>0$,
\[
B_{2}^{(0)}(x,\mathrm{d}z;u,v)=e^{-z}\sum_{i=1}^{4}N_{3,i}(u,v)g_{i}%
(x)\mathrm{d}z.
\]
By letting $\underline{C}=1$ and $\overline{C}=f(v)=0$, for $s\geq0$, we
obtain
\[
C^{(0,s)}(x;u,v)=\sum_{i=1}^{4}\left( \frac{-2}{s+2}N_{1,i}(u,v)+\frac
{-1}{s+1}N_{2,i}(u,v)\right) g_{i}(x).
\]
By the definitions, we have
\begin{align*}
b_{a,1}^{(0)}(x) & =-\sum_{i=1}^{4}D_{4,i}(x-a,x)g_{i}(x),\\
b_{a,2}^{(0)}(x,\mathrm{d}z) & =e^{-z}\left( \sum_{i=1}^{4}D_{3,i}%
(x-a,x)g_{i}(x)\right) \mathrm{d}z,\\
c_{a}^{(0,s)}(x) & =\sum_{i=1}^{4}\left( \frac{-2}{s+2}D_{1,i}%
(x-a,x)+\frac{-1}{s+1}D_{2,i}(x-a,x)\right) g_{i}(x),
\end{align*}
where we denote $D_{k,j}(u,v):=\frac{\partial}{\partial v}N_{k,j}(u,v)$.}
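\textcolor[rgb]{0,0,0}{In practice the derivatives $D_{k,j}$ need not be obtained by differentiating the inverse entries one by one: from $N(u,v)M(u,v)=I$ one gets $\partial_{v}N=-N(\partial_{v}M)N$, so a single matrix inversion together with the explicit $\partial_{v}M$ suffices. The following Python snippet verifies this identity numerically; the smooth matrix family below is an arbitrary choice of ours for illustration only, and for the PEMP one substitutes $(M_{i,k}(x-a,x))$.}
\begin{verbatim}
import numpy as np

def M(v):   # arbitrary smooth invertible matrix family, illustration only
    return np.array([[1.0,        v,   v**2],
                     [np.exp(-v), 2.0, v   ],
                     [np.sin(v),  1.0, 3.0 ]])

v, eps = 0.7, 1e-6
N = np.linalg.inv(M(v))
dM = (M(v + eps) - M(v - eps)) / (2 * eps)                    # dM/dv
lhs = (np.linalg.inv(M(v + eps)) - np.linalg.inv(M(v - eps))) / (2 * eps)
print(np.max(np.abs(lhs - (-N @ dM @ N))))                    # ~1e-9
\end{verbatim}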
\textcolor[rgb]{0,0,0}{In Figure \ref{fig1} below, we use \textsf{Mathematica} to numerically solve
the integral equation (\ref{triple LT}).}
\begin{figure}[H]
\centering
{{\includegraphics[width=220pt]{plot_laplace.eps} }}
\caption{Plot of the probability $h(x)=\mathbb{P}_{x}\{M_{\tau_{a}}\le K\}$ for PEMP \eqref{PEMP} with $q=0,\mu=1,\lambda
=3,a=1,K=20$ and jump size distribution given in \eqref{p1}}%
\label{fig1}%
\end{figure}
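\textcolor[rgb]{0,0,0}{As an independent cross-check of Figure \ref{fig1} (our own, not in the paper), note that the PEMP can be simulated exactly: between jumps $X$ grows deterministically as $X\mathrm{e}^{\mu s}$ (recall $X_{0}=x\geq a$ keeps $X$ positive before $\tau_{a}$), so the drawdown can reach the level $a$ only at a downward jump instant. This yields the following event-driven Monte Carlo sketch.}
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
mu, lam, a, K = 1.0, 3.0, 1.0, 20.0

def sample_jump():                       # jump density (p1)
    u = rng.random()
    if u < 1 / 3:
        return rng.exponential(1.0)      # +Exp(1)
    if u < 2 / 3:
        return -rng.exponential(1.0)     # -Exp(1)
    return -rng.exponential(0.5)         # -Exp(2)

def one_path(x):                         # requires x >= a
    X = M = x
    while True:
        X *= np.exp(mu * rng.exponential(1 / lam))  # flow to next jump time
        M = max(M, X)                    # running maximum along the flow
        X += sample_jump()
        if M - X >= a:                   # drawdown time tau_a reached
            return M <= K
        M = max(M, X)

for x in (1.0, 2.0, 4.0):
    n = 20000
    print(x, sum(one_path(x) for _ in range(n)) / n)  # compare with Figure 1
\end{verbatim}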
\subsection{A jump diffusion model}
\textcolor[rgb]{0,0,0}{In this section, we consider a generalized PEMP $(X_{t})_{t\geq0}$ with
diffusion whose dynamics is governed by
\begin{equation}
\mathrm{d}X_{t}=X_{t}\mathrm{d}t+\sqrt{2}\mathrm{d}W_{t}+\mathrm{d}Z_{t},\quad
t>0, \label{eq:jd}%
\end{equation}
where the initial value $X_{0}=x\in%
\mathbb{R}
$, $(W_{t})_{t\geq0}$ is a standard Brownian motion, and $(Z_{t})_{t\geq0}$ is
an independent compound Poisson process with a unit jump intensity and a unit
mean exponential jump distribution. The two-sided exit quantities of this
generalized PEMP can also be solved using the approach described in Sections 6
and 7 of Jacobsen and Jensen \cite{JJ07}.}
\textcolor[rgb]{0,0,0}{We define an integral kernel
\[
\psi_{1}(z)=\frac{e^{\frac{z^{2}}{2}}}{z(z+1)},\quad z\in%
\mathbb{C}
\text{.}%
\]
Let $\Gamma_{i}$ $\left( i=1,2\right) $ be small counterclockwise circles
around the simple poles $\mu_{1}=0$ and $\mu_{2}=-1$, respectively, and define
the linearly independent functions%
\begin{align*}
g_{1}(x) & :=\frac{1}{2\pi\sqrt{-1}}\int_{\Gamma_{1}}\psi_{1}(z)e^{-xz}%
dz=1,\\
g_{2}(x) & :=\frac{1}{2\pi\sqrt{-1}}\int_{\Gamma_{2}}\psi_{1}(z)e^{-xz}%
dz=-e^{x+\frac{1}{2}},
\end{align*}
for $x\in%
\mathbb{R}
$. To find another linearly independent partial eigenfunction, we consider the
vertical line $\Gamma_{3}=\{1+t\sqrt{-1},t\in%
\mathbb{R}
\}$ and define
\begin{equation}
g_{3}(x):=\frac{1}{2\pi\sqrt{-1}}\int_{\Gamma_{3}}\psi_{1}(z)e^{-xz}%
\mathrm{d}z. \label{eq:G3last}%
\end{equation}
Next we derive an explicit expression for $g_{3}(x)$. We know from
(\ref{eq:G3last}) that $\lim_{x\rightarrow\infty}g_{3}(x)=0$ and $g_{3}$ is
continuously differentiable with
\begin{equation}
g_{3}^{\prime}(x)=-\frac{1}{2\pi\sqrt{-1}}\int_{\Gamma_{3}}\frac
{e^{\frac{z^{2}}{2}}}{z+1}e^{-xz}\mathrm{d}z. \label{eq:last1}%
\end{equation}
Notice that the bilateral Laplace transform functions (e.g., Chapter VI of
\cite{W46}) of a standard normal random variable $U_{1}$ and an independent
unit mean exponential random variable $U_{2}$ are given respectively by
\[
\int_{-\infty}^{\infty}e^{-zy}\cdot\frac{1}{\sqrt{2\pi}}e^{-\frac{y^{2}}{2}%
}\mathrm{d}y=e^{\frac{z^{2}}{2}},\quad\int_{0}^{\infty}e^{-zy}\cdot
e^{-y}\mathrm{d}y=\frac{1}{z+1},
\]
for all complex $z$ such that $\Re(z)\geq0$. Hence, the bilateral Laplace
transform of the density function of $U_{1}+U_{2}$, i.e.,
\[
\int_{0}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{(x-y)^{2}}{2}}e^{-y}%
\mathrm{d}y
\]
is given by $e^{\frac{z^{2}}{2}}/(z+1)$ for all complex $z$ such that
$\Re(z)\geq0$. Since the right hand side of (\ref{eq:last1}) is just the
Bromwich integral for the inversion of the bilateral Laplace transform
$-e^{\frac{z^{2}}{2}}/(z+1)$, evaluated at $-x$, we deduce that
\[
g_{3}^{\prime}(x)=-\int_{0}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{(x+y)^{2}%
}{2}}e^{-y}\mathrm{d}y.
\]
It follows that
\[
g_{3}(x)=-\int_{x}^{\infty}g_{3}^{\prime}(y)\mathrm{d}y=1-\int_{0}^{\infty
}N(x+y)e^{-y}\mathrm{d}y,
\]
where $N(\cdot)$ is the cumulative distribution function of the standard normal distribution.}
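\textcolor[rgb]{0,0,0}{This derivation of $g_{3}$ can be verified numerically: the Python sketch below (ours) compares the closed form with a direct quadrature of the contour integral \eqref{eq:G3last} along $\Gamma_{3}$, parametrized as $z=1+t\sqrt{-1}$.}
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def g3_contour(x):  # (eq:G3last) along z = 1 + i t; the integrand decays
    psi1 = lambda z: np.exp(z**2 / 2) / (z * (z + 1))   # like exp(-t^2/2)
    f = lambda t: (psi1(1 + 1j * t) * np.exp(-x * (1 + 1j * t))).real
    return quad(f, -np.inf, np.inf)[0] / (2 * np.pi)

def g3_closed(x):   # g3(x) = 1 - int_0^inf N(x+y) e^{-y} dy
    return 1 - quad(lambda y: norm.cdf(x + y) * np.exp(-y), 0, np.inf)[0]

for x in (-1.0, 0.0, 0.5, 2.0):
    print(x, g3_contour(x), g3_closed(x))   # the two columns agree
\end{verbatim}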
\textcolor[rgb]{0,0,0}{For any fixed $-\infty<u<v<\infty$, we define a matrix-valued function
\[
(M_{i,k}(u,v))_{1\leq i,k\leq3}:=%
\begin{pmatrix}
1 & g_{1}(v) & g_{1}(u)\\
ve^{v+\frac{1}{2}} & g_{2}(v) & g_{2}(u)\\
1-\int_{0}^{\infty}N(v+y)ye^{-y}\mathrm{d}y & g_{3}(v) & g_{3}(u)
\end{pmatrix}
,
\]
where the first row is computed according to
\[
M_{i,1}(u,v)=\frac{1}{2\pi\sqrt{-1}}\int_{\Gamma_{i}}\frac{\psi_{1}(z)}%
{z+1}e^{-vz}\mathrm{d}z.
\]
Notice that $M_{3,1}(u,v)$ can be calculated in the same way as $g_{3}(x)$. We
also denote by $(N_{k,j}(u,v))_{1\leq k,j\leq3}$ the inverse of $(M_{i,k}%
(u,v))_{1\leq i,k\leq3}$.}
\textcolor[rgb]{0,0,0}{By Eq. (46) and a generalized Eq. (48) of Jacobsen and Jensen \cite{JJ07}
(with $\zeta=s=0$ and $\rho\geq0$), we obtain the linear system of equations
\begin{equation}
(c_{1},c_{2},c_{3})(M_{i,k})=\left( \frac{\overline{C}}{\rho+1}%
,f(v),f(u)\right) , \label{CCC}%
\end{equation}
where $\overline{C}$ is a constant specified later, and $f(x)$ could stand for
any of $B_{1}^{(0)}(x;u,v)$, $B_{2}^{(0,\rho)}(x;u,v)$, or $C^{(0,0)}(x;u,v)$
and has the representation
\[
f(x)=\sum\limits_{i=1}^{3}c_{i}g_{i}(x),\quad x\in\lbrack u,v].
\]
By letting (1) $\overline{C}=f(u)=0$ and $f(v)=1$, (2) $\overline{C}=1$ and
$f(v)=f(u)=0$, (3) $\overline{C}=f(v)=0$ and $f(u)=1$, for any $\rho\geq0$ and
$z>0$, and solving the linear system (\ref{CCC}), we respectively obtain
\begin{align*}
B_{1}^{(0)}(x;u,v) & =\sum_{i=1}^{3}N_{2,i}(u,v)g_{i}(x),\\
B_{2}^{(0,\rho)}(x;u,v) & =\frac{1}{1+\rho}\sum_{i=1}^{3}N_{1,i}%
(u,v)g_{i}(x),\quad B_{2}^{(0)}(x,\mathrm{d}z;u,v)=e^{-z}\sum_{i=1}^{3}%
N_{1,i}(u,v)g_{i}(x)\mathrm{d}z,\\
C^{(0,0)}(x;u,v) & =\sum_{i=1}^{3}N_{3,i}(u,v)g_{i}(x).
\end{align*}
Furthermore, this implies
\begin{align*}
b_{a,1}^{(0)}(x) & =-\sum_{i=1}^{3}D_{2,i}(x-a,x)g_{i}(x),\\
b_{a,2}^{(0)}(x,\mathrm{d}z) & =e^{-z}\left( \sum_{i=1}^{3}D_{1,i}%
(x-a,x)g_{i}(x)\right) \mathrm{d}z,\\
c_{a}^{(0,0)}(x) & =\sum_{i=1}^{3}D_{3,i}(x-a,x)g_{i}(x),
\end{align*}
where we denote $D_{k,j}(u,v)=\frac{\partial}{\partial v}N_{k,j}(u,v)$.}
\textcolor[rgb]{0,0,0}{In Figure 2 below, we plot $h(x)=\mathbb{P}_{x}\{M_{\tau_{a}}\leq K\}$ by
numerically solving the integral equation (\ref{triple LT}) using
\textsf{Mathematica}.}
\begin{figure}[H]
\centering
{\includegraphics[width=220pt]{df_plot.eps}}\caption{Plot of the probability
$h(x)=\mathbb{P}_{x}\{M_{\tau_{a}}\le K\}$ for the jump diffusion in
\eqref{eq:jd} with $K=6$ and $a=1$.}%
\label{fig2}%
\end{figure}
\section{Acknowledgments}
The authors would like to thank two anonymous referees for their helpful
comments and suggestions. Support from grants from the Natural Sciences and
Engineering Research Council of Canada is gratefully acknowledged by David
Landriault and Bin Li (grant numbers 341316 and 05828, respectively). Support
from a start-up grant from the University of Waterloo is gratefully
acknowledged by Bin Li, as is support from the Canada Research Chair Program
by David Landriault.
\section{Introduction}
In the last four decades, the proof of the Strauss conjecture concerning the critical exponent of the initial value problem for the semilinear wave equation with power nonlinearity required the effort of many mathematicians worldwide. Nowadays, we know that the critical exponent for the Cauchy problem
\begin{align*}
\begin{cases} v_{tt} - \Delta v=|v|^p & x\in \mathbb{R}^n, \ t>0, \\
v(0,x)= \varepsilon v_0(x) & x\in \mathbb{R}^n, \\
v_t(0,x)= \varepsilon v_1(x) & x\in \mathbb{R}^n,
\end{cases}
\end{align*} is the so -- called \emph{Strauss exponent} $p_{\mathrm{Str}}(n)$ (cf. \cite{Joh79,Kato80,Gla81-g,Gla81-b,Sid84,Sch85,Zhou95,LS96,GLS97,YZ06,Zhou07}), that is, the positive root of the quadratic equation
\begin{align*}
(n-1)p^2-(n+1)p-2=0.
\end{align*}
We are interested not only in the critical exponent but also in the lifespan, that is, the maximal existence time of the solution, when global in time existence cannot be expected.
See the introduction of \cite{IKTW19} for the complete picture of the lifespan estimates for the classical semilinear wave equation with power nonlinearity.
While the situation is completely understood in the Euclidean case with flat metric on $\mathbb{R}^n$, in the last years several papers have been devoted to study the semilinear wave equation in the spacetime $\mathbb{R}^{1+n}_+$ equipped with different Lorentzian metrics. The semilinear wave equation in Schwarzschild has been investigated in \cite{CG06,MMTT10,LMSTW14,LLM20} in the $1+3$ dimensional case. Moreover, the wave (or Klein-Gordon) equation in de Sitter and anti -- de Sitter spacetimes have been investigated in the linear and semilinear case in \cite{Yag09,YagGal09,Yag12,GalYag2017,ER18,Gal18} and \cite{Gal03,YagGal08,YG09,YG09Rend}, respectively. Finally, the wave equation in Einstein -- de Sitter spacetime has been considered in \cite{GalKinYag10,GalYag14,GalYag17EdS}. In this paper, we shall examine the semilinear wave equation with power nonlinearity in a \emph{generalized Einstein -- de Sitter spacetime}. More precisely, let us consider the semilinear equation with singular coefficients
\begin{align} \label{Semi EdeS k Psi}
\varphi_{tt} -t^{-2k} \Delta \varphi +2t^{-1} \varphi_t =|\varphi|^p,
\end{align} where $k\in [0,1)$ and $p>1$. We call this model the semilinear wave equation in a generalized EdeS spacetime since for $k=2/3$ and $n=3$ Equation \eqref{Semi EdeS k Psi} is the semilinear wave equation in Einstein -- de Sitter (EdeS) spacetime with power nonlinearity.
In \cite[Theorem 1.3]{GalYag17EdS} the authors proved that for $$1<p<\max\big\{p_0(n,k),p_1(n,k)\big\}$$ a local in time solution to the corresponding Cauchy problem (with initial data prescribed at the initial time $t=1$) blows up in finite time, provided that the initial data fulfill certain integral sign conditions. Here $p_0(n,k)$ is the positive root of the quadratic equation
\begin{align}\label{intro equation critical exponent general case}
\left((1-k)n +1\right)p^2- \left((1-k)n +3+2k\right)p -2(1-k)=0,
\end{align} while
\begin{align} \label{intro def p1}
p_1(n,k) \doteq 1+ \frac{2}{(1-k)n}.
\end{align}
Furthermore, in \cite{GalYag17EdS} it is also shown that, for the semilinear wave equation in EdeS spacetime, the blow -- up is the effect of the semilinear term. For this reason we shall focus our analysis on the effect of the nonlinear term, prescribing the Cauchy data at the initial time $t=1$.
Performing the transformation $u=t \varphi$ (so that $u_{tt}=t\varphi_{tt}+2\varphi_t$, while the nonlinearity becomes $t\,|\varphi|^p=t\cdot t^{-p}|u|^p=t^{1-p}|u|^p$), \eqref{Semi EdeS k Psi} becomes equivalent to the following semilinear equation for $u$
\begin{align} \label{Semi EdeS k u}
u_{tt} -t^{-2k} \Delta u =t^{1-p}|u|^p.
\end{align}
In this paper, we investigate the blow -- up dynamic for \eqref{Semi EdeS k u} and, in particular, we will focus on the upper bound estimates for the lifespan and on the treatment of the critical case $p=\max\{p_0(n,k),p_1(n,k)\}$. More precisely, in the next sections we are going to provide a complete picture of the upper bound estimates for the lifespan of local in time solutions to \eqref{Semi EdeS k u} when $1<p\leqslant \max\{p_0(n,k),p_1(n,k)\}$.
In the subcritical case, we employ a Kato -- type lemma on the blow -- up dynamic for a second order ordinary differential inequality. On the other hand, in the critical case an iteration argument combined with a slicing procedure is applied. More precisely, for $p=p_0(n,k)$ we adapt the approach from \cite{WakYor18,WakYor18Damp} to the time -- dependent semilinear model \eqref{Semi EdeS k u}.
\subsection{Notations}
Throughout the paper we will employ the following notations: $\phi_k(t)\doteq \frac{t^{1-k}}{1-k}$ denotes a distance function produced by the speed of propagation $a_k(t)=t^{-k}$, while the amplitude of the light cone is given by the function
\begin{align} \label{def A k}
A_k(t)\doteq \int_1^t \tau^{-k} \mathrm{d}\tau = \phi_k(t) -\phi_k(1);
\end{align} the ball with radius $R$ around the origin is denoted $B_R$; $f\lesssim g$ means that there exists a positive constant $C$ such that $f\leqslant C g$ and, similarly, for $f\gtrsim g$; $\mathrm{I}_\nu$ and $\mathrm{K}_\nu$ denote the modified Bessel function of first and second kind of order $\nu$, respectively; finally,
\begin{align} \label{def N(k)}
N(k) \doteq \frac{1-2k+\sqrt{4k^2-4k+9}}{2(1-k)}
\end{align} denotes the threshold for the spatial dimension in determining the dominant exponent between $p_0(n,k)$ and $p_1(n,k)$ (more specifically, $p_0(n,k)>p_1(n,k)$ if and only if $n>N(k)$, while $p_0(n,k)\leqslant p_1(n,k)$ for $n\leqslant N(k)$); indeed, equating $p_0(n,k)$ and $p_1(n,k)$ shows that $N(k)$ is the positive root of $(1-k)^2N^2-(1-2k)(1-k)N-2=0$.
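This characterization is easily double-checked numerically; the following Python sketch (ours, with sample values of $n$ and $k$) computes $p_0(n,k)$ as the positive root of \eqref{intro equation critical exponent general case}, compares it with $p_1(n,k)$, and verifies the threshold $N(k)$.
\begin{verbatim}
import numpy as np

def p0(n, k):  # positive root of ((1-k)n+1)p^2 - ((1-k)n+3+2k)p - 2(1-k) = 0
    A, B, C = (1 - k) * n + 1, (1 - k) * n + 3 + 2 * k, -2 * (1 - k)
    return (B + np.sqrt(B**2 - 4 * A * C)) / (2 * A)

p1 = lambda n, k: 1 + 2 / ((1 - k) * n)
N  = lambda k: (1 - 2 * k + np.sqrt(4 * k**2 - 4 * k + 9)) / (2 * (1 - k))

for k in (0.0, 2 / 3, 0.9):
    for n in (1, 2, 3, 5, 8):
        assert (p0(n, k) > p1(n, k)) == (n > N(k)), (n, k)
    print(f"k = {k:.3f}:  N(k) = {N(k):.4f}")
# e.g. N(0) = 2 (with p0(2,0) = p1(2,0) = 2), and N(2/3) ~ 3.772, so in the
# Einstein--de Sitter case k = 2/3, n = 3 the exponent p1 dominates.
\end{verbatim}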
\subsection{Main results}
The main results of this work are the following blow -- up results that combined together provide a full picture of the critical case $p=\max\{p_0(n,k),p_1(n,k)\}$ for the Cauchy problem
\begin{align}\label{Semi EdeS k}
\begin{cases} u_{tt} - t^{-2k}\Delta u= t^{1-p}|u|^p & x\in \mathbb{R}^n, \ t\in (1,T), \\
u(1,x)= \varepsilon u_0(x) & x\in \mathbb{R}^n, \\
u_t(1,x)= \varepsilon u_1(x) & x\in \mathbb{R}^n,
\end{cases}
\end{align} where $p>1$, $\varepsilon>0$ is a parameter describing the size of initial data and $k\in [0,1)$.
Before stating the main results, let us introduce the notion of energy solution to the semilinear Cauchy problem \eqref{Semi EdeS k}.
\begin{definition} \label{Def energy sol} Let $u_0\in H^1(\mathbb{R}^n)$ and $u_1\in L^2(\mathbb{R}^n)$. We say that
\begin{align*}
u\in \mathcal{C} \big([1,T), H^1(\mathbb{R}^n)\big) \cap \mathcal{C}^1 \big([1,T), L^2(\mathbb{R}^n)\big)\cap L^p_{\mathrm{loc}}\big([1,T)\times \mathbb{R}^n\big)
\end{align*} is an energy solution to \eqref{Semi EdeS k} on $[1,T)$ if $u$ fulfills $u(1,\cdot) = \varepsilon u_0$ in $H^1(\mathbb{R}^n)$ and the integral relation
\begin{align}
& \int_{\mathbb{R}^n} \partial_t u(t,x) \psi (t,x) \, \mathrm{d}x -\varepsilon \int_{\mathbb{R}^n} u_1(x) \psi (1,x) \, \mathrm{d}x
-\int_1^t\int_{\mathbb{R}^n} \big( \partial_t u (s,x) \psi_s(s,x) -s^{-2k} \nabla u(s,x) \cdot \nabla \psi (s,x)\big) \mathrm{d}x \, \mathrm{d}s \notag \\
& \quad = \int_1^t\int_{\mathbb{R}^n} s^{1-p} |u(s,x)|^p \psi (s,x) \, \mathrm{d} x \, \mathrm{d} s \label{integral identity def energy sol}
\end{align} for any $\psi\in \mathcal{C}_0^\infty([1,T)\times \mathbb{R}^n)$ and any $t\in (1,T)$.
\end{definition}
We point out that performing a further step of integration by parts in \eqref{integral identity def energy sol}, we find the integral relation
\begin{align}
& \int_{\mathbb{R}^n} \big( \partial_t u(t,x) \psi (t,x)-u(t,x)\psi_s(t,x)\big) \mathrm{d}x -\varepsilon \int_{\mathbb{R}^n} \big( u_1(x) \psi (1,x)- u_0(x)\psi_s(1,x)\big) \mathrm{d}x \notag \\ & \qquad
+\int_1^t\int_{\mathbb{R}^n} u (s,x) \big( \psi_{ss}(s,x) -s^{-2k} \Delta \psi (s,x)\big) \mathrm{d}x \, \mathrm{d}s = \int_1^t\int_{\mathbb{R}^n} s^{1-p} |u(s,x)|^p \psi (s,x) \, \mathrm{d} x \, \mathrm{d} s \label{integral identity weak sol}
\end{align} for any $\psi\in \mathcal{C}_0^\infty([1,T)\times \mathbb{R}^n)$ and any $t\in (1,T)$.
\begin{remark}\label{Remark support} Let us stress that if the Cauchy data are compactly supported, say $\mathrm{supp}\, u_j \subset B_R$ for $j=0,1$ and for some $R>0$, then, for any $t\in (1,T)$ a local solution $u$ to \eqref{Semi EdeS k} satisfies the support condition $$\mathrm{supp} \, u(t, \cdot) \subset B_{R+A_k(t)} ,$$ where $A_k$ is defined by \eqref{def A k}. Therefore, in Definition \ref{Def energy sol} we may also consider test functions which are not compactly supported, namely, $\psi \in \mathcal{C}^\infty([1,T)\times \mathbb{R}^n)$.
\end{remark}
\begin{theorem} \label{Theorem critical case p0}
Let $n \in \mathbb{N}^*$ such that $n>N(k)$ and $p=p_0(n,k)$. Let us assume that $u_0\in H^1(\mathbb{R}^n)$ and $u_1\in L^2(\mathbb{R}^n)$ are nonnegative, nontrivial and compactly supported functions with supports contained in $B_R$ for some $R>0$. Let $$u\in \mathcal{C} \big([1,T), H^1(\mathbb{R}^n)\big) \cap \mathcal{C}^1 \big([1,T), L^2(\mathbb{R}^n)\big)\cap L^p_{\mathrm{loc}}\big([1,T)\times \mathbb{R}^n\big)$$ be an energy solution to \eqref{Semi EdeS k} according to Definition \ref{Def energy sol} with lifespan $T=T(\varepsilon)$ and satisfying the support condition $\mathrm{supp} \, u(t,\cdot)\subset B_{A_k(t)+R}$ for any $t\in (1,T)$.
Then, there exists a positive constant $\varepsilon_0=\varepsilon_0(u_0,u_1,n,p,k,R)$ such that for any $\varepsilon\in (0,\varepsilon_0]$ the energy solution $u$ blows up in finite time. Moreover, the upper bound estimate for the lifespan
\begin{align*}
T(\varepsilon)\leqslant \exp\left(C\varepsilon^{-p(p-1)}\right)
\end{align*} holds, where the constant $C>0$ is independent of $\varepsilon$.
\end{theorem}
\begin{theorem} \label{Theorem critical case p1}
Let $n \in \mathbb{N}^*$ such that $n\leqslant N(k)$ and $p=p_1(n,k)$. Let us assume that $u_0\in H^1(\mathbb{R}^n)$ and $u_1\in L^2(\mathbb{R}^n)$ are nonnegative, nontrivial and compactly supported functions with supports contained in $B_R$ for some $R>0$. Let $$u\in \mathcal{C} \big([1,T), H^1(\mathbb{R}^n)\big) \cap \mathcal{C}^1 \big([1,T), L^2(\mathbb{R}^n)\big)\cap L^p_{\mathrm{loc}}\big([1,T)\times \mathbb{R}^n\big)$$ be an energy solution to \eqref{Semi EdeS k} according to Definition \ref{Def energy sol} with lifespan $T=T(\varepsilon)$ and satisfying the support condition $\mathrm{supp} \, u(t,\cdot)\subset B_{A_k(t)+R}$ for any $t\in (1,T)$.
Then, there exists a positive constant $\varepsilon_0=\varepsilon_0(u_0,u_1,n,p,k,R)$ such that for any $\varepsilon\in (0,\varepsilon_0]$ the energy solution $u$ blows up in finite time. Moreover, the upper bound estimate for the lifespan
\begin{align*}
T(\varepsilon)\leqslant \exp\left(C\varepsilon^{-(p-1)}\right)
\end{align*} holds, where the constant $C>0$ is independent of $\varepsilon$.
\end{theorem}
The remaining part of the paper is organized as follows: in Section \ref{Section critical case p0} we prove Theorem \ref{Theorem critical case p0} by using the approach introduced in \cite{WakYor18}; then, in Section \ref{Section subcritical case} we provide a complete overview on upper bound estimates for the subcritical case (cf. Proposition \ref{Proposition lifespan subcrit}), while in Section \ref{Section critical case p1} we show the proof of Theorem \ref{Theorem critical case p1}; finally, in Appendix \ref{Appendix solutions y''-lambda^2 t^(-4/3)y=0} we provide a different proof of Proposition \ref{Proposition representations y0 and y1} in the special case of Einstein -- de Sitter spacetime.
\section[Semilinear wave equation in EdeS spacetime: critical case $p=p_0(n,k)$]{Semilinear wave equation in EdeS spacetime: 1st critical case} \label{Section critical case p0}
Our goal is to prove a blow -- up result in the critical case $p=p_0(n,k)$, where $p_0(n,k)$ is the greatest root of the quadratic equation
\begin{align}\label{equation critical exponent general case}
\left(\tfrac{n-1}{2}+\tfrac{2-k}{2(1-k)}\right)p^2-
\left(\tfrac{n+1}{2}+\tfrac{2+3k}{2(1-k)}\right)p-1=0.
\end{align}
The approach that we will follow is based on the technique introduced in \cite{WakYor18} and subsequently applied to different wave models (cf. \cite{WakYor18Damp,PalTak19,PalTak19mix,LinTu19,ChenPal19MGT,ChenPal19SWENM}).
We are going to introduce a time -- dependent functional that depends on a local in time solution to \eqref{Semi EdeS k} and study its blow -- up dynamic. In particular, the blow -- up result will be obtained by applying the so -- called \emph{slicing procedure} in an iteration argument to show a sequence of lower bound estimates for the above mentioned functional.
The section is organized as follows: in Section \ref{Subsection Aux functions} we determine a pair of auxiliary functions which have a fundamental role in the definition of the time -- dependent functional and in the determination of the iteration frame, while in Section \ref{Subsection estimates auxiliary functions} we establish some fundamental estimates for these functions; then, in Section \ref{Subsection iteration frame} we establish the iteration frame for the functional and, finally, in Section \ref{Subsection iteration procedure} we prove the blow -- up result by using an iteration procedure.
\subsection{Auxiliary functions} \label{Subsection Aux functions}
In this section, we are going to introduce two auxiliary functions (see $\xi_q$ and $\eta_q$ below) analogously to the corresponding functions introduced in \cite{WakYor18}, which represent, in turn, a generalization of the solution to the classical free wave equation given in \cite{Zhou07}. Those auxiliary functions are defined by using the remarkable function
\begin{align}\label{def Yordanov-Zhang function}
\varphi (x) \doteq \begin{cases} \int_{\mathbb{S}^{n-1}} \mathrm{e}^{x\cdot \omega} \mathrm{d} \sigma_\omega & \mbox{if} \ n\geqslant 2, \\ \cosh x
& \mbox{if} \ n=1 \end{cases}
\end{align} introduced in \cite{YZ06}. Let us recall briefly the main properties of this function: $\varphi$ is a positive and smooth function that satisfies $\Delta \varphi =\varphi$ and asymptotically behaves like $|x|^{-\frac{n-1}{2}}\mathrm{e}^{|x|}$ as $|x|\to \infty$ up to a positive multiplicative constant.
In order to introduce the definition of the auxiliary functions, let us begin by determining the solutions $y_j=y_j(t,s;\lambda,k)$, $j\in\{0,1\}$, of the non-autonomous, parameter-dependent, ordinary Cauchy problems
\begin{align}\label{CP yj(t,s;lambda,k)}
\begin{cases} \partial_t^2 y_j(t,s;\lambda,k) - \lambda^2 t^{-2k} y_j(t,s;\lambda,k)= 0, & t>s, \\
y_j(s,s;\lambda,k)= \delta_{0j}, \\
\partial_t y_j(s,s;\lambda,k)= \delta_{1j},
\end{cases}
\end{align} where $\delta_{ij}$ denotes the Kronecker delta, $s\geqslant 1$ is the initial time and $\lambda>0$ is a real parameter.
Let us recall that we denote by $\phi_k(t)= \tfrac{t^{1-k}}{1-k}$ a primitive of the speed of propagation $a_k(t) = t^{-k}$ for the wave equation in \eqref{Semi EdeS k}. In order to find a system of independent solutions to
\begin{align}\label{equation y}
\frac{\mathrm{d}^2 y}{\mathrm{d} t^2} -\lambda^2 t^{-2k}y=0
\end{align} we perform first a change of variables. Let $\tau= \tau(t;\lambda,k)\doteq \lambda \phi_k(t)$. Since
\begin{align*}
\frac{\mathrm{d} y}{\mathrm{d} t} &= \lambda t^{-k} \frac{\mathrm{d} y}{\mathrm{d} \tau}, \qquad \frac{\mathrm{d}^2 y}{\mathrm{d} t^2} = \lambda^2 t^{-2k} \frac{\mathrm{d}^2 y}{\mathrm{d} \tau^2}-\lambda k t^{-k-1} \frac{\mathrm{d} y}{\mathrm{d} \tau},
\end{align*} then, $y$ solves \eqref{equation y} if and only if it solves
\begin{align}\label{equation y tau}
\tau \frac{\mathrm{d}^2 y}{\mathrm{d} \tau^2} -\frac{k}{1-k} \frac{\mathrm{d} y}{\mathrm{d} \tau}-\tau y=0.
\end{align} Next, we carry out the transformation $y(\tau)=\tau^\nu w(\tau)$ with $\nu\doteq \tfrac{1}{2(1-k)}$. Therefore, $y$ solves \eqref{equation y tau} if and only if $w$ solves the modified Bessel equation of order $\nu$
\begin{align}\label{Bessel equation w}
\tau^2\frac{\mathrm{d}^2 w}{\mathrm{d} \tau^2} +\tau\frac{\mathrm{d} w}{\mathrm{d} \tau}-\left(\nu^2+\tau^2 \right) w=0,
\end{align} where we applied the straightforward relations
\begin{align*}
\frac{\mathrm{d} y}{\mathrm{d} \tau}= \nu \tau^{\nu-1} w(\tau) +\tau^\nu \frac{\mathrm{d} w}{\mathrm{d} \tau} ,\qquad \frac{\mathrm{d}^2 y}{\mathrm{d} \tau^2} = \nu (\nu-1) \tau^{\nu-2} w+2\nu \tau^{\nu-1} \frac{\mathrm{d} w}{\mathrm{d} \tau}+\tau^\nu \frac{\mathrm{d}^2 w}{\mathrm{d} \tau^2}.
\end{align*} If we employ as independent solutions to \eqref{Bessel equation w} the modified Bessel function of first and second kind of order $\nu$, denoted, respectively, by $\mathrm{I}_\nu(\tau)$ and $\mathrm{K}_\nu(\tau)$, then, the pair of functions
\begin{align*}
V_0(t;\lambda,k) &\doteq \tau ^\nu \mathrm{I}_\nu (\tau) = (\lambda \phi_k(t))^\nu \mathrm{I}_\nu (\lambda \phi_k(t)), \\
V_1(t;\lambda,k) & \doteq \tau ^\nu \mathrm{K}_\nu (\tau) = (\lambda \phi_k(t))^\nu \mathrm{K}_\nu (\lambda \phi_k(t))
\end{align*} is a basis of the space of solutions to \eqref{equation y}.
\begin{proposition} \label{Proposition representations y0 and y1} The functions
\begin{align}
y_0(t,s;\lambda,k) &\doteq \lambda \left(t/s\right)^{1/2}\phi_k(s)\big[\mathrm{I}_{\nu-1}(\lambda \phi_k (s))\, \mathrm{K}_{\nu}(\lambda \phi_k (t))+\mathrm{K}_{\nu-1}(\lambda \phi_k (s))\,\mathrm{I}_{\nu}(\lambda \phi_k (t))\big], \label{def y0(t,s;lambda,k)} \\
y_1(t,s;\lambda,k) &\doteq (1-k)^{-1} (st)^{1/2} \big[\mathrm{K}_{\nu}(\lambda \phi_k (s))\, \mathrm{I}_{\nu}(\lambda \phi_k (t))-\mathrm{I}_{\nu}(\lambda \phi_k (s))\,\mathrm{K}_{\nu}(\lambda \phi_k (t))\big], \label{def y1(t,s;lambda,k)}
\end{align} solve the Cauchy problems \eqref{CP yj(t,s;lambda,k)} for $j=0$ and $j=1$, respectively, where $\nu= 1/(2(1-k))$, $\phi_k(t)= t^{1-k}/(1-k)$ and $\mathrm{I}_\nu,\mathrm{K}_\nu$ denote the modified Bessel function of order $\nu$ of the first and second kind, respectively.
\end{proposition}
\begin{proof}
We have seen that $V_0,V_1$ form a system of independent solutions to \eqref{equation y}. Therefore, we may express the solutions of \eqref{CP yj(t,s;lambda,k)} as linear combinations of $V_0,V_1$ as follows:
\begin{align} \label{representation yj with aj and bj}
y_j(t,s;\lambda,k) = a_j(s;\lambda,k) V_0(t;\lambda,k)+ b_j(s;\lambda,k) V_1(t;\lambda,k)
\end{align} for suitable coefficients $a_j(s;\lambda,k), b_j(s;\lambda,k)$, $j\in\{0,1\}$. Using the initial conditions $\partial^i_t y_j(s,s;\lambda,k)=\delta_{ij}$, we find the system
\begin{align*}
\left(\begin{array}{cc}
V_0(s;\lambda,k) & V_1(s;\lambda,k) \\
\partial_t V_0(s;\lambda,k) & \partial_t V_1(s;\lambda,k)
\end{array} \right) \left(\begin{array}{cc}
a_0(s;\lambda,k) & a_1(s;\lambda,k) \\
b_0(s;\lambda,k) & b_1(s;\lambda,k)
\end{array} \right) = I,
\end{align*} where $I$ denotes the identity matrix. So, in order to determine the coefficients in \eqref{representation yj with aj and bj}, we have to calculate explicitly the inverse matrix
\begin{align}\label{inverse matrix}
\left(\begin{array}{cc}
V_0(s;\lambda,k) & V_1(s;\lambda,k) \\
\partial_t V_0(s;\lambda,k) & \partial_t V_1(s;\lambda,k)
\end{array} \right)^{-1} = \left(\mathcal{W}(V_0,V_1)(s;\lambda,k)\right)^{-1}\left(\begin{array}{cc}
\partial_t V_1(s;\lambda,k) & -V_1(s;\lambda,k) \\
-\partial_t V_0(s;\lambda,k) & V_0(s;\lambda,k)
\end{array} \right),
\end{align} where $\mathcal{W}(V_0,V_1)$ is the Wronskian of $V_0,V_1$. Clearly, we need a more suitable expression for $\mathcal{W}(V_0,V_1)$. Let us calculate the $t$ -- derivatives of $V_0,V_1$. Recalling that $\phi_k(t)=t^{1-k}/(1-k)$ and $\nu=1/(2(1-k))$, we find
\begin{align*}
\partial_t V_0(t;\lambda,k) & =\nu (\lambda\phi_k(t))^{\nu-1} \lambda \phi_k'(t) \, \mathrm{I}_\nu(\lambda \phi_k(t))+ (\lambda\phi_k(t))^{\nu} \, \mathrm{I}_\nu'(\lambda \phi_k(t)) \lambda \phi_k'(t) \\
& =\tfrac{1}{2t} (\lambda\phi_k(t))^{\nu} \, \mathrm{I}_\nu(\lambda \phi_k(t))+ (\lambda\phi_k(t))^{\nu} (\lambda \phi_k'(t)) \, \mathrm{I}_\nu'(\lambda \phi_k(t))
\end{align*} and, analogously,
\begin{align*}
\partial_t V_1(t;\lambda,k) & =\tfrac{1}{2t} (\lambda\phi_k(t))^{\nu} \, \mathrm{K}_\nu(\lambda \phi_k(t))+ (\lambda\phi_k(t))^{\nu} (\lambda \phi_k'(t)) \, \mathrm{K}_\nu'(\lambda \phi_k(t)) .
\end{align*} Consequently, we can express $\mathcal{W}(V_0,V_1)$ as follows:
\begin{align*}
\mathcal{W}(V_0,V_1)(t;\lambda,k) & = (\lambda \phi_k(t))^{2\nu} (\lambda \phi_k'(t)) \big[\mathrm{K}_\nu'(\lambda \phi_k(t))\, \mathrm{I}_\nu(\lambda \phi_k(t)) -\mathrm{I}_\nu'(\lambda \phi_k(t))\, \mathrm{K}_\nu(\lambda \phi_k(t)) \big] \\
& = (\lambda \phi_k(t))^{2\nu} (\lambda \phi_k'(t)) \mathcal{W}(\mathrm{I}_\nu,\mathrm{K}_\nu) (\lambda\phi_k(t)) = -(\lambda \phi_k(t))^{2\nu-1} (\lambda \phi_k'(t))\\
& = -\lambda^{2\nu} (\phi_k(t))^{2\nu-1} \phi_k'(t) = -c_k^{-1} \lambda^{2\nu},
\end{align*} where $c_k\doteq (1-k)^{k/(1-k)}$ and in the third equality we used the value of the Wronskian of $\mathrm{I}_\nu,\mathrm{K}_\nu$
\begin{align*}
\mathcal{W}(\mathrm{I}_\nu,\mathrm{K}_\nu) (z)= \mathrm{I}_\nu (z) \partial_z\mathrm{K}_\nu(z)- \partial_z \mathrm{I}_\nu (z)\mathrm{K}_\nu (z) =- \frac1z.
\end{align*} Let us underline that $\mathcal{W}(V_0,V_1)(t;\lambda,k)$ does not actually depend on $t$, due to the absence of the first order term in \eqref{equation y}.
Plugging the previous representation of $\mathcal{W}(V_0,V_1)$ in \eqref{inverse matrix}, we get
\begin{align*}
\left(\begin{array}{cc}
a_0(s;\lambda,k) & a_1(s;\lambda,k) \\
b_0(s;\lambda,k) & b_1(s;\lambda,k)
\end{array} \right) = -c_k \lambda^{-2\nu} \left(\begin{array}{cc}
\partial_t V_1(s;\lambda,k) & -V_1(s;\lambda,k) \\
-\partial_t V_0(s;\lambda,k) & V_0(s;\lambda,k)
\end{array} \right).
\end{align*}
Let us begin by proving \eqref{def y0(t,s;lambda,k)}. Using the above representation of $a_0(s;\lambda,k),b_0(s;\lambda,k)$ in \eqref{representation yj with aj and bj}, we obtain
\begin{align*}
y_0(t,s;\lambda,k) &= c_k \lambda^{-2\nu} \big\{\partial_tV_0(s;\lambda,k) V_1(t;\lambda,k)-\partial_tV_1(s;\lambda,k) V_0(t;\lambda,k)\big\} \\
& = c_k \lambda^{-2\nu}(\lambda\phi_k(s))^{\nu} (\lambda\phi_k(t))^{\nu}\Big\{ \big[\tfrac{1}{2s}\, \mathrm{I}_\nu(\lambda \phi_k(s))+ (\lambda \phi_k'(s)) \, \mathrm{I}_\nu'(\lambda \phi_k(s)) \big] \mathrm{K}_\nu(\lambda \phi_k(t)) \\
& \qquad \phantom{ c_k \lambda^{-2\nu}(\phi_k(s))^{\nu} (\phi_k(t))^{\nu}\big\{ }- \big[\tfrac{1}{2s}\, \mathrm{K}_\nu(\lambda \phi_k(s))+ (\lambda \phi_k'(s)) \, \mathrm{K}_\nu'(\lambda \phi_k(s)) \big] \mathrm{I}_\nu(\lambda \phi_k(t)) \Big\} \\
& = c_k (\phi_k(s)\phi_k(t))^{\nu} (2s)^{-1} \big\{ \mathrm{I}_\nu(\lambda \phi_k(s))\, \mathrm{K}_\nu(\lambda \phi_k(t)) -\mathrm{K}_\nu(\lambda \phi_k(s))\, \mathrm{I}_\nu(\lambda \phi_k(t)) \big\} \\
& \quad + c_k \lambda (\phi_k(s)\phi_k(t))^{\nu} \phi_k'(s) \big\{ \mathrm{I}'_\nu(\lambda \phi_k(s))\, \mathrm{K}_\nu(\lambda \phi_k(t)) -\mathrm{K}'_\nu(\lambda \phi_k(s))\, \mathrm{I}_\nu(\lambda \phi_k(t)) \big\}.
\end{align*}
Applying the recursive relations for the derivatives of the modified Bessel functions
\begin{align*}
\frac{\partial \,\mathrm{I}_\nu}{\partial z}(z) & = -\frac{ \nu}{z} \, \mathrm{I}_\nu(z)+ \mathrm{I}_{\nu-1}(z), \\
\frac{\partial \,\mathrm{K}_\nu}{\partial z}(z) & = -\frac{ \nu}{z} \, \mathrm{K}_\nu(z)- \mathrm{K}_{\nu-1}(z),
\end{align*} to the last relation, we arrive at
\begin{align}
y_0(t,s;\lambda,k) &= c_k (\phi_k(s)\phi_k(t))^{\nu} \underbrace{ \left[ (2s)^{-1} -\tfrac{\nu \lambda \phi_k'(s)}{ \lambda \phi_k(s)} \right]}_{=0}\big\{ \mathrm{I}_\nu(\lambda \phi_k(s))\, \mathrm{K}_\nu(\lambda \phi_k(t)) -\mathrm{K}_\nu(\lambda \phi_k(s))\, \mathrm{I}_\nu(\lambda \phi_k(t)) \big\}\notag \\
& \quad + c_k \lambda (\phi_k(s)\phi_k(t))^{\nu} \phi_k'(s) \big\{ \mathrm{I}_{\nu-1}(\lambda \phi_k(s))\, \mathrm{K}_\nu(\lambda \phi_k(t)) +\mathrm{K}_{\nu-1}(\lambda \phi_k(s))\, \mathrm{I}_\nu(\lambda \phi_k(t)) \big\} \notag \\
& = c_k \lambda (\phi_k(s)\phi_k(t))^{\nu} \phi_k'(s) \big\{ \mathrm{I}_{\nu-1}(\lambda \phi_k(s))\, \mathrm{K}_\nu(\lambda \phi_k(t)) +\mathrm{K}_{\nu-1}(\lambda \phi_k(s))\, \mathrm{I}_\nu(\lambda \phi_k(t)) \big\}. \label{intermediate representation y0}
\end{align} Since $c_k (\phi_k(s)\phi_k(t))^{\nu} \phi_k'(s) = (1-k)^{-1} (st)^{1/2} s^{-k} =(t/s)^{1/2} \phi_k(s)$, \eqref{intermediate representation y0} yields immediately \eqref{def y0(t,s;lambda,k)}. Let us now prove the representation \eqref{def y1(t,s;lambda,k)}. Plugging the previously determined expressions for $a_1(s;\lambda,k),b_1(s;\lambda,k)$ in \eqref{representation yj with aj and bj}, we have
\begin{align}
y_1(t,s;\lambda,k) &= c_k \lambda^{-2\nu} \big\{V_1(s;\lambda,k) V_0(t;\lambda,k)-V_0(s;\lambda,k) V_1(t;\lambda,k)\big\} \notag \\
& = c_k \lambda^{-2\nu}(\lambda\phi_k(s))^{\nu} (\lambda\phi_k(t))^{\nu}\big\{ \mathrm{K}_\nu(\lambda \phi_k(s)) \, \mathrm{I}_\nu(\lambda \phi_k(t)) -\mathrm{I}_\nu(\lambda \phi_k(s)) \, \mathrm{K}_\nu(\lambda \phi_k(t))\big\} \notag \\
& = c_k (\phi_k(s) \phi_k(t))^{\nu}\big\{ \mathrm{K}_\nu(\lambda \phi_k(s)) \, \mathrm{I}_\nu(\lambda \phi_k(t)) -\mathrm{I}_\nu(\lambda \phi_k(s)) \, \mathrm{K}_\nu(\lambda \phi_k(t))\big\} . \label{intermediate representation y1}
\end{align} Thus, using $c_k (\phi_k(s) \phi_k(t))^{\nu} =(st)^{1/2}/(1-k)$, \eqref{def y1(t,s;lambda,k)} follows from \eqref{intermediate representation y1}. This concludes the proof.
\end{proof}
\begin{remark} In the special case $k=2/3$, $y_0(t,s;\lambda,k)$ and $y_1(t,s;\lambda,k)$ can be expressed in terms of elementary functions. Indeed, by using the explicit representations
\begin{align*}
\mathrm{I}_{\frac{1}{2}}(z) & = \sqrt{\frac{2}{\pi}} \, \frac{\sinh z}{z^{1/2}}, \quad \mathrm{I}_{\frac{3}{2}}(z) \ = \sqrt{\frac{2}{\pi}} \, \frac{z\cosh z -\sinh z}{z^{3/2}}, \\ \mathrm{K}_{\frac{1}{2}}(z) & = \sqrt{\frac{\pi}{2}} \, \frac{\mathrm{e}^{-z}}{z^{1/2}}, \quad \ \ \mathrm{K}_{\frac{3}{2}}(z) = \sqrt{\frac{\pi}{2}} \, \frac{\mathrm{e}^{-z}(z+1)}{z^{3/2}},
\end{align*} we can derive the following representations
\begin{align}
y_0(t,s;\lambda,2/3) &= \left(\frac{t}{s}\right)^{1/3} \cosh\big( 3\lambda \big(t^{1/3}-s^{1/3}\big)\big) -\frac{1}{3\lambda s^{1/3}} \sinh\big( 3\lambda \big(t^{1/3}-s^{1/3}\big)\big), \label{def y0(t,s;lambda,2/3)} \\
y_1(t,s;\lambda,2/3) &= \left[\frac{\left(st\right)^{1/3}}{\lambda} -\frac{1}{9\lambda^3}\right] \sinh\big( 3\lambda \big(t^{1/3}-s^{1/3}\big)\big) +\frac{1}{3\lambda^2} \big(t^{1/3}-s^{1/3}\big) \cosh\big( 3\lambda \big(t^{1/3}-s^{1/3}\big)\big). \label{def y1(t,s;lambda,2/3)}
\end{align} Actually, in this case it is possible to derive the representations of $y_0(t,s;\lambda,2/3),y_1(t,s;\lambda,2/3)$ by reducing \eqref{equation y} to a confluent hypergeometric equation instead of a modified Bessel equation. For a detailed proof see Appendix \ref{Appendix solutions y''-lambda^2 t^(-4/3)y=0}.
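As a consistency check, evaluating \eqref{def y0(t,s;lambda,2/3)} and \eqref{def y1(t,s;lambda,2/3)} at $t=s$ gives $y_0(s,s;\lambda,2/3)=1$ and $y_1(s,s;\lambda,2/3)=0$, while differentiating with respect to $t$ and evaluating at $t=s$ yields
\begin{align*}
\partial_t y_0(s,s;\lambda,2/3)= \tfrac{1}{3s}-\tfrac{1}{3\lambda s^{1/3}}\, \lambda s^{-2/3}=0, \qquad \partial_t y_1(s,s;\lambda,2/3)= \Big[\tfrac{s^{2/3}}{\lambda}-\tfrac{1}{9\lambda^3}\Big]\lambda s^{-2/3}+\tfrac{1}{9\lambda^2 s^{2/3}}=1,
\end{align*}
in accordance with the initial conditions $\partial^i_t y_j(s,s;\lambda,k)=\delta_{ij}$.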
\end{remark}
\begin{lemma} Let $y_0$, $y_1$ be the functions defined in \eqref{def y0(t,s;lambda,k)} and \eqref{def y1(t,s;lambda,k)}, respectively. Then, the following identities are satisfied for any $t\geqslant s\geqslant 1$
\begin{align}
& \frac{\partial y_1}{\partial s}(t,s;\lambda,k)= -y_0(t,s;\lambda,k), \label{dy1/ds= -y0} \\
& \frac{\partial^2 y_1}{\partial s^2}(t,s;\lambda,k) -\lambda^2 s^{-2k}y_1(t,s;\lambda,k)= 0. \label{y1 adjoiunt equation}
\end{align}
\end{lemma}
\begin{remark} As the operator $(\partial_t^2 -\lambda^2t^{-2k})$ is formally self-adjoint, in particular \eqref{dy1/ds= -y0} and \eqref{y1 adjoiunt equation} tell us that $y_1$ also solves, with respect to $s$, the adjoint problem to \eqref{equation y} with final conditions $(0,-1)$.
\end{remark}
\begin{proof} Let us introduce the pair of independent solutions to \eqref{equation y}
\begin{align*}
z_0(t;\lambda,k) & \doteq y_0(t,1;\lambda,k), \\
z_1(t;\lambda,k) & \doteq y_1(t,1;\lambda,k).
\end{align*} By standard computations, we may show the representations
\begin{align*}
y_0(t,s;\lambda,k) & = z_1'(s;\lambda,k) z_0(t;\lambda,k)- z_0'(s;\lambda,k) z_1(t;\lambda,k),\\
y_1(t,s;\lambda,k) & = z_0(s;\lambda,k) z_1(t;\lambda,k)- z_1(s;\lambda,k) z_0(t;\lambda,k).
\end{align*} Indeed, both sides of each of these identities solve \eqref{equation y} as functions of $t$ and attain the same data at $t=s$; here we used that the Wronskian of $z_0,z_1$ is identically 1, being constant in $t$ and equal to 1 at $t=1$. First we prove \eqref{dy1/ds= -y0}. Differentiating the second of the previous representations with respect to $s$ and then using the first one, we get immediately
\begin{align*}
\frac{\partial y_1}{\partial s}(t,s;\lambda,k) & = z_0'(s;\lambda,k) z_1(t;\lambda,k)- z_1'(s;\lambda,k) z_0(t;\lambda,k) = - y_0(t,s;\lambda,k) .
\end{align*} Moreover, since $z_0,z_1$ are solutions of \eqref{equation y},
\begin{align*}
\frac{\partial^2 y_1}{\partial s^2}(t,s;\lambda,k) & = z_0''(s;\lambda,k) z_1(t;\lambda,k)- z_1''(s;\lambda,k) z_0(t;\lambda,k) \\ & = \lambda^2 s^{-2k} z_0(s;\lambda,k) z_1(t;\lambda,k)- \lambda^2 s^{-2k} z_1(s;\lambda,k) z_0(t;\lambda,k) = \lambda^2 s^{-2k} y_1(t,s;\lambda,k) .
\end{align*} This proves \eqref{y1 adjoiunt equation} as well.
\end{proof}
\begin{proposition} \label{Proposition integral relation test function} Let $u_0\in H^1(\mathbb{R}^n)$ and $u_1\in L^2(\mathbb{R}^n)$ be functions such that $\mathrm{supp}\, u_j \subset B_R$ for $j=0,1$ and for some $R>0$ and let $\lambda>0$ be a parameter. Let $u$ be a local in time energy solution to \eqref{Semi EdeS k} on $[1,T)$ according to Definition \ref{Def energy sol}. Then, the following integral identity is satisfied for any $t\in [1,T)$
\begin{align}
\int_{\mathbb{R}^n} u(t,x) \varphi_\lambda (x) \, \mathrm{d}x & = \varepsilon \, y_0(t,1;\lambda,k) \int_{\mathbb{R}^n} u_0(x) \varphi_\lambda(x) \, \mathrm{d}x + \varepsilon \, y_1(t,1;\lambda,k) \int_{\mathbb{R}^n} u_1(x) \varphi_\lambda(x) \, \mathrm{d}x \notag \\ & \quad +\int_1^t s^{1-p} y_1(t,s;\lambda,k) \int_{\mathbb{R}^n} |u(s,x)|^p \varphi_\lambda (x) \, \mathrm{d}x \, \mathrm{d}s, \label{fundametal integral equality}
\end{align} where $\varphi_\lambda(x)\doteq \varphi(\lambda x)$ and $\varphi$ is defined by \eqref{def Yordanov-Zhang function}.
\end{proposition}
\begin{proof} Since we assumed $u_0,u_1$ compactly supported, we may consider a test function $\psi\in \mathcal{C}^\infty([1,T)\times\mathbb{R}^n)$ in Definition \ref{Def energy sol} according to Remark \ref{Remark support}. Therefore, we consider $\psi(s,x)=y_1(t,s;\lambda,k)\varphi_\lambda(x)$ (here $t,\lambda$ can be considered fixed parameters). Hence, $\psi$ satisfies
\begin{align*}
\psi(t,x) &=y_1(t,t;\lambda,k) \varphi_\lambda(x)=0, \quad \psi_s(t,x) = \partial_s y_1(t,t;\lambda,k) \varphi_\lambda(x) =- y_0(t,t;\lambda,k) \varphi_\lambda(x) =- \varphi_\lambda(x), \\
\psi(1,x) &=y_1(t,1;\lambda,k) \varphi_\lambda(x), \phantom{=0} \quad \psi_s(1,x) = \partial_s y_1(t,1;\lambda,k) \varphi_\lambda(x) =- y_0(t,1;\lambda,k) \varphi_\lambda(x),
\end{align*} and
\begin{align*}
\psi_{ss}(s,x) -s^{-2k} \Delta \psi(s,x) =\left( \partial_s^2 y_1(t,s;\lambda,k) -\lambda^2 s^{-2k} y_1(t,s;\lambda,k) \right) \varphi_\lambda(x)=0,
\end{align*}
where we used \eqref{dy1/ds= -y0}, \eqref{y1 adjoiunt equation} and the property $\Delta \varphi=\varphi$, which implies $\Delta \varphi_\lambda(x)=\lambda^2 (\Delta\varphi)(\lambda x)=\lambda^2\varphi_\lambda(x)$.
Hence, employing this $\psi$ in \eqref{integral identity weak sol}, we find immediately \eqref{fundametal integral equality}.
\end{proof}
\begin{proposition} \label{Proposition lower bound estimates y0 and y1} Let $y_0$, $y_1$ be the functions defined in \eqref{def y0(t,s;lambda,k)} and \eqref{def y1(t,s;lambda,k)}, respectively. Then, the following estimates are satisfied for any $t\geqslant s\geqslant 1$
\begin{align}
& y_0(t,s;\lambda,k)\geqslant \cosh \big(\lambda (\phi_k(t)-\phi_k(s))\big) , \label{lower bound estimate y0(t,s;lambda,k)} \\
& y_1(t,s;\lambda,k)\geqslant (st)^{\frac{k}{2}} \frac{\sinh \big(\lambda (\phi_k(t)-\phi_k(s))\big) }{\lambda}.\label{lower bound estimate y_1(t,s;lambda,k)}
\end{align}
\end{proposition}
\begin{proof} The proof of the inequalities \eqref{lower bound estimate y0(t,s;lambda,k)} and \eqref{lower bound estimate y_1(t,s;lambda,k)} is based on the following minimum type principle: \\
\emph{ let $w=w(t,s;\lambda,k)$ be a solution of the Cauchy problem }
\begin{align}\label{CP y}
\begin{cases} \partial_t^2 w -\lambda^2 t^{-2k} w =h, & \mbox{for} \ t>s \geqslant 1, \\ w(s)=
w_0, \ \partial_t w(s)=w_1, \end{cases}
\end{align} \emph{where $h=h(t,s;\lambda,k)$ is a continuous function; if $h\geqslant 0$ and $w_0=w_1=0$ (i.e. $w$ is a \emph{supersolution} of the homogeneous problem with trivial initial conditions), then, $w(t,s;\lambda,k)\geqslant 0$ for any $t>s$}.
In order to prove this minimum principle, we apply the continuous dependence on initial conditions (note that for $t\geqslant 1$ the coefficient $t^{-2k}$ is smooth). Indeed, if we denote by $w_\epsilon$ the solution to \eqref{CP y} with $w_0=\epsilon>0$ and $w_1=0$, then $w_\epsilon$ solves the integral equation
\begin{align*}
w_\epsilon(t,s;\lambda,k) = \epsilon +\int_s^t\int_s^\tau \big(\lambda^2\sigma^{-2k} w_\epsilon(\sigma,s;\lambda,k)+h(\sigma,s;\lambda,k)\big) \mathrm{d}\sigma\, \mathrm{d}\tau.
\end{align*} One can easily prove by contradiction that $w_\epsilon(t,s;\lambda,k)>0$ for any $t>s$: if $t^*>s$ were the first zero of $w_\epsilon(\cdot,s;\lambda,k)$, evaluating the integral equation at $t=t^*$ would yield $0=w_\epsilon(t^*,s;\lambda,k)\geqslant \epsilon>0$, since the integrand is nonnegative on $[s,t^*]$. Hence, by the continuous dependence on initial data, letting $\epsilon\to 0$, we find that $w(t,s;\lambda,k)\geqslant 0$ for any $t>s$.
Note that if $w_0,w_1\geqslant 0$ and $w_0+w_1>0$, then, the positivity of $w$ follows straightforwardly from the corresponding integral equation via a contradiction argument, rather than working with the family $\{w_\epsilon\}_{\epsilon>0}$. Nevertheless, in what follows we consider exactly the limit case $w_0=w_1=0$, for this reason the previous digression was necessary.
Let us prove the validity of \eqref{lower bound estimate y_1(t,s;lambda,k)}. We denote by $w_1=w_1(t,s;\lambda,k)$ the function on the right -- hand side of \eqref{lower bound estimate y_1(t,s;lambda,k)} (with a harmless abuse of notation with respect to the initial data in \eqref{CP y}). It is easy to see that $w_1(s,s;\lambda,k)=0$ and $\partial_t w_1(s,s;\lambda,k)=1$. Moreover,
\begin{align*}
\partial_t^2 w_1(t,s;\lambda,k) &= \lambda^{-1} s^{\frac{k}{2}} \Big[ \tfrac k2 \left(\tfrac k2 -1\right) t^{\frac{k}{2}-2} \sinh \big(\lambda (\phi_k(t)-\phi_k(s))\big)+ k t^{\frac k2 -1 } \cosh \big(\lambda (\phi_k(t)-\phi_k(s))\big) \lambda \phi'_k(t) \\
& \qquad \phantom{\lambda^{-1} s^{\frac{k}{2}} \Big[} +t^{\frac{k}{2}} \sinh \big(\lambda (\phi_k(t)-\phi_k(s))\big) (\lambda \phi'_k(t))^2+t^{\frac{k}{2}} \cosh \big(\lambda (\phi_k(t)-\phi_k(s))\big) \lambda \phi''_k(t) \Big] \\
& = \lambda^{-1} s^{\frac{k}{2}} \Big[ \tfrac k2 \left(\tfrac k2 -1\right) t^{\frac{k}{2}-2} \sinh \big(\lambda (\phi_k(t)-\phi_k(s))\big) +t^{\frac{k}{2}} \sinh \big(\lambda (\phi_k(t)-\phi_k(s))\big) (\lambda t^{-k})^2 \Big] \\
& \leqslant \lambda^{-1} (s t)^{\frac{k}{2}} \sinh \big(\lambda (\phi_k(t)-\phi_k(s))\big) (\lambda t^{-k})^2 = \lambda^2 t^{-2k} w_1(t,s;\lambda,k).
\end{align*}
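For completeness, the cancellation of the two terms with $\cosh$ in the second equality follows from $\phi_k'(t)=t^{-k}$ and $\phi_k''(t)=-k\, t^{-k-1}$, since
\begin{align*}
k t^{\frac k2 -1}\lambda \phi_k'(t)+t^{\frac k2}\lambda\phi_k''(t)= \lambda k\, t^{-\frac k2-1}-\lambda k\, t^{-\frac k2-1}=0,
\end{align*}
while the final inequality is obtained simply by dropping the term with the nonpositive factor $\tfrac k2\left(\tfrac k2-1\right)$.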
Therefore, $y_1-w_1$ is a supersolution of \eqref{CP y} with $h=0$ and $w_0=w_1=0$. Thus, applying the minimum principle we have that $(y_1-w_1)(t,s;\lambda,k)\geqslant 0$ for any $t>s$, that is, we showed \eqref{lower bound estimate y_1(t,s;lambda,k)}.
In a completely analogous way, one can prove \eqref{lower bound estimate y0(t,s;lambda,k)}, repeating the previous argument based on the minimum principle with $w_0(t,s;\lambda,k)\doteq \cosh \big(\lambda (\phi_k(t)-\phi_k(s))\big)$ in place of $w_1(t,s;\lambda,k)$ and $y_0$ in place of $y_1$, respectively.
\end{proof}
After the preliminary results that we have proved so far in this section, we can now introduce the definition of the following \emph{auxiliary function}
\begin{align}
\xi_q(t,s,x;k) &\doteq \int_0^{\lambda_0} \mathrm{e}^{-\lambda (A_k(t)+R)} \cosh \big(\lambda (\phi_k(t)-\phi_k(s))\big) \varphi_\lambda(x) \lambda^q \,\mathrm{d}\lambda \label{def xi q}, \\
\eta_q(t,s,x;k) & \doteq (st)^{k/2}\int_0^{\lambda_0} \mathrm{e}^{-\lambda (A_k(t)+R)} \frac{\sinh \big(\lambda (\phi_k(t)-\phi_k(s))\big) }{\lambda (\phi_k(t)-\phi_k(s)) }\, \varphi_\lambda(x) \lambda^q \,\mathrm{d}\lambda\label{def eta q},
\end{align} where $q>-1$, $\lambda_0>0$ is a fixed parameter and $A_k$ is defined by \eqref{def A k}.
\begin{remark} For $k=0$ the functions $\xi_q$ and $\eta_q$ coincide with the corresponding ones given in \cite{WakYor18}, provided of course that we shift the initial time in the Cauchy problem from 0 to 1.
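For instance, recalling that $A_k(t)=\phi_k(t)-\phi_k(1)$, for $k=0$ we have $\phi_0(t)=t$ and $A_0(t)=t-1$, so that \eqref{def xi q} becomes
\begin{align*}
\xi_q(t,s,x;0)=\int_0^{\lambda_0} \mathrm{e}^{-\lambda (t-1+R)} \cosh\big(\lambda (t-s)\big)\, \varphi_\lambda(x)\, \lambda^q\, \mathrm{d}\lambda,
\end{align*}
and analogously for \eqref{def eta q}.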
\end{remark}
Combining the results from Propositions \ref{Proposition integral relation test function} and \ref{Proposition lower bound estimates y0 and y1}, we may finally derive a fundamental inequality, whose role will be crucial in the next sections in order to prove the blow -- up result.
\begin{corollary} \label{Corollary fund ineq} Let $u_0\in H^1(\mathbb{R}^n)$ and $u_1\in L^2(\mathbb{R}^n)$ be such that $\mathrm{supp}\, u_j \subset B_R$ for $j=0,1$ and for some $R>0$. Let $u$ be a local in time energy solution to \eqref{Semi EdeS k} on $[1,T)$ according to Definition \ref{Def energy sol}. Let $q>-1$ and let $\xi_q(t,s,x;k),\eta_q(t,s,x;k)$ be the functions defined by \eqref{def xi q} and \eqref{def eta q}, respectively. Then,
\begin{align}
\int_{\mathbb{R}^n} u(t,x) \, \xi_q(t,t,x;k) \, \mathrm{d}x & \geqslant \varepsilon \int_{\mathbb{R}^n} u_0(x) \, \xi_q(t,1,x;k) \, \mathrm{d}x
+ \varepsilon \, (\phi_k(t)-\phi_k(1)) \int_{\mathbb{R}^n} u_1(x) \, \eta_q(t,1,x;k) \, \mathrm{d}x \notag \\ & \quad +\int_1^t (\phi_k(t)-\phi_k(s)) \, s^{1-p} \int_{\mathbb{R}^n} |u(s,x)|^p \eta_q(t,s,x;k) \, \mathrm{d}x \, \mathrm{d}s \label{fundamental inequality functional mathcalU}
\end{align} for any $t\in [1,T)$.
\end{corollary}
\begin{proof}
Combining \eqref{fundametal integral equality} and the lower bound estimates \eqref{lower bound estimate y0(t,s;lambda,k)}, \eqref{lower bound estimate y_1(t,s;lambda,k)}, we find
\begin{align*}
\int_{\mathbb{R}^n} u(t,x) \varphi_\lambda (x) \, \mathrm{d}x & \geqslant \varepsilon \, \cosh \big(\lambda (\phi_k(t)-\phi_k(1))\big) \int_{\mathbb{R}^n} u_0(x) \varphi_\lambda(x) \, \mathrm{d}x \\
& \quad + \varepsilon \, t^{\frac{k}{2}} \lambda^{-1} \sinh \big(\lambda (\phi_k(t)-\phi_k(1))\big) \int_{\mathbb{R}^n} u_1(x) \varphi_\lambda(x) \, \mathrm{d}x \\ & \quad +\int_1^t s^{1-p} (st)^{\frac{k}{2}} \lambda^{-1} \sinh \big(\lambda (\phi_k(t)-\phi_k(s))\big) \int_{\mathbb{R}^n} |u(s,x)|^p \varphi_\lambda (x) \, \mathrm{d}x \, \mathrm{d}s.
\end{align*} Multiplying both sides of the previous inequality by $\mathrm{e}^{-\lambda (A_k(t)+R)}\lambda^q$, integrating with respect to $\lambda$ over $[0,\lambda_0]$ and applying Fubini's theorem, we get \eqref{fundamental inequality functional mathcalU}.
\end{proof}
\subsection{Properties of the auxiliary functions} \label{Subsection estimates auxiliary functions}
In this subsection, we determine lower and upper bound estimates for the auxiliary functions $\xi_q,\eta_q$ under suitable assumptions on $q$.
Let us begin with the lower bound estimates.
\begin{lemma} \label{Lemma lower bound estimates xi q and eta q} Let $n\geqslant 1$ and $\lambda_0>0$. If we assume $q>-1$, then, for $t\geqslant s\geqslant 1$ and $|x|\leqslant A_k(s) +R$ the following lower bound estimates hold:
\begin{align}
\xi_q (t,s,x;k) &\geqslant B_0 \langle A_k(s)\rangle ^{-q-1}; \label{lower bound xi q}\\
\eta_q (t,s,x;k) & \geqslant B_1 (st)^{\frac{k}{2}} \langle A_k(t)\rangle ^{-1}\langle A_k(s)\rangle ^{-q}. \label{lower bound eta q}
\end{align}
Here $B_0,B_1$ are positive constants depending only on $\lambda_0,q,R,k$ and we employ the notation $\langle y\rangle \doteq 3+|y|$.
\end{lemma}
\begin{proof} We follow the main ideas of the proof of Lemma 3.1 in \cite{WakYor18}. Since
\begin{align} \label{asymptotic YordanovZhang function}
\langle |x| \rangle^{-\frac{n-1}{2}}\mathrm{e}^{|x|}\lesssim \varphi(x) \lesssim\langle |x| \rangle^{-\frac{n-1}{2}}\mathrm{e}^{|x|}
\end{align} holds for any $x\in \mathbb{R}^n$, we can find a constant $B=B(\lambda_0,R,k)>0$ independent of $\lambda$ and $s$ such that
$$B\leqslant \inf_{\lambda \in \left[\frac{\lambda_0}{\langle A_k(s)\rangle}, \frac{2\lambda_0}{\langle A_k(s)\rangle}\right]} \inf_{|x|\leqslant A_k(s)+R} \mathrm{e}^{-\lambda(A_k(s)+R)}\varphi_\lambda(x).$$
Let us begin with \eqref{lower bound xi q}. Shrinking the domain of integration in \eqref{def xi q} to $\left[\frac{\lambda_0}{\langle A_k(s)\rangle}, \frac{2\lambda_0}{\langle A_k(s)\rangle}\right]$ and applying the previous inequality, we get
\begin{align*}
\xi_q (t,s,x;k) &\geqslant \int_{\lambda_0/\langle A_k(s)\rangle} ^{2\lambda_0/\langle A_k(s)\rangle} \mathrm{e}^{-\lambda (A_k(t)-A_k(s))} \cosh \big(\lambda (\phi_k(t)-\phi_k(s))\big) \mathrm{e}^{-\lambda (A_k(s)+R)} \varphi_\lambda(x) \lambda^q \,\mathrm{d}\lambda \\
&\geqslant B \int_{\lambda_0/\langle A_k(s)\rangle} ^{2\lambda_0/\langle A_k(s)\rangle} \mathrm{e}^{-\lambda (A_k(t)-A_k(s))} \cosh \big(\lambda (\phi_k(t)-\phi_k(s))\big) \lambda^q \,\mathrm{d}\lambda \\
& = B/2 \int_{\lambda_0/\langle A_k(s)\rangle} ^{2\lambda_0/\langle A_k(s)\rangle} \left(1+\mathrm{e}^{-2\lambda (\phi_k(t)-\phi_k(s))}\right) \lambda^q \,\mathrm{d}\lambda \\ & \geqslant B/2 \int_{\lambda_0/\langle A_k(s)\rangle} ^{2\lambda_0/\langle A_k(s)\rangle} \lambda^q \,\mathrm{d}\lambda = \frac{B(2^{q+1}-1) \lambda_0^{q+1}}{2(q+1)} \langle A_k(s)\rangle^{-q-1}.
\end{align*} where in the third step we used $A_k(t)-A_k(s)=\phi_k(t)-\phi_k(s)$ together with the identity $\mathrm{e}^{-\sigma}\cosh \sigma =(1+\mathrm{e}^{-2\sigma})/2$. We prove now \eqref{lower bound eta q}. Repeating similar steps as before, we arrive at
\begin{align*}
\eta_q (t,s,x;k) &\geqslant (st)^{\frac{k}{2}} \int_{\lambda_0/\langle A_k(s)\rangle} ^{2\lambda_0/\langle A_k(s)\rangle} \mathrm{e}^{-\lambda (A_k(t)-A_k(s))} \frac{\sinh \big(\lambda (\phi_k(t)-\phi_k(s))\big) }{\lambda (\phi_k(t)-\phi_k(s)) }\, \mathrm{e}^{-\lambda (A_k(s)+R)} \varphi_\lambda(x) \lambda^q \,\mathrm{d}\lambda \\
&\geqslant \tfrac{B}{2} (st)^{\frac{k}{2}} \int_{\lambda_0/\langle A_k(s)\rangle} ^{2\lambda_0/\langle A_k(s)\rangle} \frac{1-\mathrm{e}^{-2\lambda (\phi_k(t)-\phi_k(s))}}{\phi_k(t)-\phi_k(s) }\, \lambda^{q-1} \,\mathrm{d}\lambda \\
&\geqslant \tfrac{B}{2} (st)^{\frac{k}{2}} \frac{1-\mathrm{e}^{-2\lambda_0 \frac{\phi_k(t)-\phi_k(s)}{\langle A_k(s)\rangle}}}{\phi_k(t)-\phi_k(s) } \int_{\lambda_0/\langle A_k(s)\rangle} ^{2\lambda_0/\langle A_k(s)\rangle} \lambda^{q-1} \,\mathrm{d}\lambda \\
& = \frac{B(2^q-1)\lambda_0^q}{2q} \, (st)^{\frac{k}{2}} \langle A_k(s)\rangle^{-q} \, \frac{1-\mathrm{e}^{-2\lambda_0 \frac{\phi_k(t)-\phi_k(s)}{\langle A_k(s)\rangle}}}{\phi_k(t)-\phi_k(s) } .
\end{align*} The previous inequality implies \eqref{lower bound eta q}, provided that $$\frac{1-\mathrm{e}^{-2\lambda_0 \frac{\phi_k(t)-\phi_k(s)}{\langle A_k(s)\rangle}}}{\phi_k(t)-\phi_k(s) } \gtrsim \langle A_k(t)\rangle^{-1} $$ holds. Let us prove this last inequality. For $\phi_k(t)-\phi_k(s)\geqslant \frac{1}{2\lambda_0}\langle A_k(s)\rangle $, we have $$1-\mathrm{e}^{-2\lambda_0 \frac{\phi_k(t)-\phi_k(s)}{\langle A_k(s)\rangle}} \geqslant 1-\mathrm{e}^{-1}$$ and, consequently,
\begin{align*}
\frac{1-\mathrm{e}^{-2\lambda_0 \frac{\phi_k(t)-\phi_k(s)}{\langle A_k(s)\rangle}}}{\phi_k(t)-\phi_k(s) } & \gtrsim \big(\phi_k(t)-\phi_k(s) \big)^{-1} \geqslant A_k(t)^{-1} \geqslant \langle A_k(t)\rangle^{-1}.
\end{align*} On the other hand, in the case $\phi_k(t)-\phi_k(s)\leqslant \frac{1}{2\lambda_0}\langle A_k(s)\rangle $, employing the inequality $1-\mathrm{e}^{-\sigma}\geqslant \sigma/2$ for $\sigma\in [0,1]$, we find immediately
\begin{align*}
\frac{1-\mathrm{e}^{-2\lambda_0 \frac{\phi_k(t)-\phi_k(s)}{\langle A_k(s)\rangle}}}{\phi_k(t)-\phi_k(s) } & \geqslant \frac{\lambda_0}{\langle A_k(s)\rangle} \geqslant \frac{\lambda_0}{\langle A_k(t)\rangle}.
\end{align*} This completes the proof of \eqref{lower bound eta q} as well.
\end{proof}
Next we prove an upper bound estimate in the special case $s=t$.
\begin{lemma} \label{Lemma lupper bound estimate xi q t=s} Let $n\geqslant 1$ and $\lambda_0>0$. If we assume $q> (n-3)/2$, then, for $t\geqslant 1$ and $|x|\leqslant A_k(t) +R$ the following upper bound estimate holds:
\begin{align}
\xi_q (t,t,x;k) &\leqslant B_2 \langle A_k(t)\rangle ^{-\frac{n-1}{2}} \langle A_k(t) - |x|\rangle ^{\frac{n-3}{2}-q}. \label{upper bound xi q t=s}
\end{align}
Here $B_2$ is a positive constant depending only on $\lambda_0,q,R,k$ and $\langle y\rangle$ denotes the same function as in the statement of Lemma \ref{Lemma lower bound estimates xi q and eta q}.
\end{lemma}
\begin{proof} We follow the proof of Lemma 3.1 (iii) in \cite{WakYor18}. Applying \eqref{asymptotic YordanovZhang function}, we get
\begin{align*}
\xi_q(t,t,x;k) & = \int_0^{\lambda_0} \mathrm{e}^{-\lambda (A_k(t)+R)} \varphi_\lambda(x) \lambda^q \,\mathrm{d}\lambda \lesssim \int_0^{\lambda_0} \langle \lambda |x| \rangle^{-\frac{n-1}{2}} \mathrm{e}^{-\lambda (A_k(t)+R-|x|)} \lambda^q \,\mathrm{d}\lambda.
\end{align*} Let us consider separately two different cases. If $|x|\leqslant (A_k(t)+R)/2$, then,
\begin{align*}
\xi_q(t,t,x;k) & \lesssim \int_0^{\lambda_0} \mathrm{e}^{-\lambda (A_k(t)+R-|x|)} \lambda^q \,\mathrm{d}\lambda \lesssim \int_0^{\lambda_0} \mathrm{e}^{-\lambda (A_k(t)+R)/2} \lambda^q \,\mathrm{d}\lambda \\
& \lesssim (A_k(t)+R)^{-q-1} \int_0^{\infty} \mathrm{e}^{-\mu/2} \mu^q \,\mathrm{d}\mu \lesssim (A_k(t)+R)^{-q-1}
\lesssim \langle A_k(t)\rangle ^{-q-1} \\
& \lesssim \langle A_k(t)\rangle ^{-\frac{n-1}{2}} \langle A_k(t) - |x|\rangle ^{\frac{n-3}{2}-q}.
\end{align*} In particular, in the last estimate we used the inequality $\langle A_k(t) - |x|\rangle \lesssim \langle A_k(t)\rangle$, which follows trivially from $|A_k(t) - |x|| \leqslant A_k(t)$ for $ |x|\leqslant A_k(t)$ and from $\langle A_k(t) - |x|\rangle \lesssim 1$ for $A_k(t)\leqslant |x|\leqslant (A_k(t)+R)/2$.
On the other hand, for $|x|\geqslant (A_k(t)+R)/2$, we may estimate
\begin{align}
\xi_q(t,t,x;k) & \lesssim (A_k(t)+R)^{-\frac{n-1}{2}} \int_0^{\lambda_0} \mathrm{e}^{-\lambda (A_k(t)+R-|x|)} \lambda^{q-\frac{n-1}{2}} \,\mathrm{d}\lambda \notag \\
& \lesssim \langle A_k(t)\rangle ^{-\frac{n-1}{2}} (A_k(t)+R-|x|)^{-q+\frac{n-3}{2}} \int_0^{\infty} \mathrm{e}^{-\mu } \mu^{q-\frac{n-1}{2}} \,\mathrm{d}\mu \notag \\
& \lesssim \langle A_k(t)\rangle ^{-\frac{n-1}{2}} (A_k(t)+R-|x|)^{-q+\frac{n-3}{2}} . \label{upper bound xiq(t,t,x) intermediate}
\end{align} When $(A_k(t)+R)/2\leqslant |x|\leqslant A_k(t)$, thanks to the inequality $A_k(t)+R-|x|\gtrsim \langle A_k(t)-|x|\rangle$, \eqref{upper bound xi q t=s} follows easily from \eqref{upper bound xiq(t,t,x) intermediate}; for $A_k(t)\leqslant |x|\leqslant A_k(t)+R$, as $ \langle A_k(t)-|x|\rangle\approx 1$, the estimate
\begin{align*}
\xi_q(t,t,x;k) & \lesssim (A_k(t)+R)^{-\frac{n-1}{2}} \int_0^{\lambda_0} \mathrm{e}^{-\lambda (A_k(t)+R-|x|)} \lambda^{q-\frac{n-1}{2}} \,\mathrm{d}\lambda \\& \lesssim \langle A_k(t)\rangle ^{-\frac{n-1}{2}} \int_0^{\lambda_0} \lambda^{q-\frac{n-1}{2}} \,\mathrm{d}\lambda \lesssim \langle A_k(t)\rangle ^{-\frac{n-1}{2}}
\end{align*} is sufficient to conclude \eqref{upper bound xi q t=s}. This completes the proof.
\end{proof}
\subsection{Derivation of the iteration frame} \label{Subsection iteration frame}
In this subsection, we introduce the time -- dependent functional whose dynamics is studied in order to prove the blow -- up result. Then, we derive the iteration frame for this functional and a first lower bound estimate of logarithmic type.
Let us introduce the functional
\begin{align}\label{def functional mathcalU}
\mathcal{U}(t) \doteq t^{-\frac{k}{2}} \int_{\mathbb{R}^n} u(t,x) \, \xi_q(t,t,x;k) \, \mathrm{d} x
\end{align} for $t\geqslant 1$ and for some $q>(n-3)/2$.
From \eqref{fundamental inequality functional mathcalU} and from \eqref{lower bound xi q}, \eqref{lower bound eta q} applied with $s=1$ (so that $\langle A_k(1)\rangle =3$), it follows
\begin{align*}
\mathcal{U}(t) \gtrsim B_0 \varepsilon \, t^{-\frac{k}{2}} \int_{\mathbb{R}^n} u_0(x) \, \mathrm{d}x + B_1 \varepsilon \, \frac{A_k(t)}{\langle A_k(t)\rangle } \int_{\mathbb{R}^n} u_1(x) \, \mathrm{d}x.
\end{align*} If we assume that $u_0,u_1$ are both nonnegative and nontrivial, then, we find that
\begin{align}\label{mathcalU > epsilon}
\mathcal{U}(t)\gtrsim \varepsilon
\end{align} for any $t\in [1,T)$, where the unexpressed multiplicative constant depends on $u_0,u_1$.
In the next proposition, we derive the iteration frame for the functional $\mathcal{U}$.
\begin{proposition} \label{Proposition iteration frame} Suppose that the assumptions in Corollary \ref{Corollary fund ineq} are satisfied and let $q=(n-1)/2-1/p$. If $\mathcal{U}$ is defined by \eqref{def functional mathcalU}, then, there exists a positive constant $C=C(n,p,R,k)$ such that
\begin{align} \label{iteration frame}
\mathcal{U}(t)\geqslant C \langle A_k(t)\rangle^{-1}\int_1^t \frac{\phi_k(t)-\phi_k(s)}{s} \big(\log \langle A_k(s)\rangle\big)^{-(p-1)} (\mathcal{U}(s))^p\, \mathrm{d}s
\end{align} for any $t\in (1,T)$.
\end{proposition}
\begin{proof}
By the definition of the functional \eqref{def functional mathcalU}, applying H\"older's inequality we get
\begin{align*}
s^{\frac k2} \mathcal{U} (s) \leqslant \left(\int_{\mathbb{R}^n}|u(s,x)|^p \eta_q(t,s,x;k) \, \mathrm{d}x\right)^{1/p} \left(\int_{B_{R+A_k(s)}} \frac{\big(\xi_q(s,s,x;k)\big)^{p'}}{\big(\eta_q(t,s,x;k)\big)^{p'/p}} \, \mathrm{d}x\right)^{1/p'},
\end{align*} where $1/p+1/p'=1$. Therefore,
\begin{align} \label{intermediate lower bound int |u|^p eta_q}
\int_{\mathbb{R}^n}|u(s,x)|^p \eta_q(t,s,x;k) \, \mathrm{d}x \geqslant \big(s^{\frac k2} \mathcal{U} (s)\big)^p \left(\int_{B_{R+A_k(s)}} \frac{\big(\xi_q(s,s,x;k)\big)^{p/(p-1)}}{\big(\eta_q(t,s,x;k)\big)^{1/(p-1)}} \, \mathrm{d}x\right)^{-(p-1)}.
\end{align} Let us determine now an upper bound estimate for the integral on the right -- hand side of \eqref{intermediate lower bound int |u|^p eta_q}. By using \eqref{upper bound xi q t=s} and \eqref{lower bound eta q}, we obtain
\begin{align*}
& \int_{B_{R+A_k(s)}} \frac{\big(\xi_q(s,s,x;k)\big)^{p/(p-1)}}{\big(\eta_q(t,s,x;k)\big)^{1/(p-1)}} \, \mathrm{d}x \\
& \qquad \leqslant B_1^{-\frac{1}{p-1}} B_2^{\frac{p}{p-1}} \langle A_k(s)\rangle ^{-\frac{n-1}{2}\frac{p}{p-1} } (st)^{-\frac{k}{2(p-1)}} \langle A_k(t)\rangle ^{\frac{1}{p-1}}\langle A_k(s)\rangle ^{\frac{q}{p-1}}\int_{B_{R+A_k(s)}} \langle A_k(s) - |x|\rangle ^{(\frac{n-3}{2}-q)\frac{p}{p-1}} \mathrm{d}x \\
& \qquad \leqslant B_1^{-\frac{1}{p-1}} B_2^{\frac{p}{p-1}} (st)^{-\frac{k}{2(p-1)}} \langle A_k(t)\rangle ^{\frac{1}{p-1}}\langle A_k(s)\rangle ^{\frac{1}{p-1}(-\frac{n-1}{2}p+\frac{n-1}{2}-\frac{1}{p})}\int_{B_{R+A_k(s)}} \langle A_k(s) - |x|\rangle ^{-1} \mathrm{d}x \\
& \qquad \leqslant B_1^{-\frac{1}{p-1}} B_2^{\frac{p}{p-1}} (st)^{-\frac{k}{2(p-1)}} \langle A_k(t)\rangle ^{\frac{1}{p-1}}\langle A_k(s)\rangle ^{\frac{1}{p-1}(-\frac{n-1}{2}p+\frac{n-1}{2}-\frac{1}{p})+n-1} \log \langle A_k(s) \rangle ,
\end{align*} where in the second step we used
$q=(n-1)/2-1/p$ to get exactly $-1$ as the power of the integrand.
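More precisely, with this choice of $q$ a direct computation gives
\begin{align*}
\left(\tfrac{n-3}{2}-q\right)\tfrac{p}{p-1}=\left(\tfrac{n-3}{2}-\tfrac{n-1}{2}+\tfrac 1p\right)\tfrac{p}{p-1}=-\tfrac{p-1}{p}\cdot \tfrac{p}{p-1}=-1.
\end{align*}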
Hence, from \eqref{intermediate lower bound int |u|^p eta_q} we get
\begin{align*}
\int_{\mathbb{R}^n}|u(s,x)|^p \eta_q(t,s,x;k) \, \mathrm{d}x &\gtrsim \big(s^{\frac k2} \mathcal{U} (s)\big)^p (st)^{\frac{k}{2}} \langle A_k(t)\rangle ^{-1}\langle A_k(s)\rangle ^{\frac{n-1}{2}(p-1)+\frac{1}{p}-(n-1)(p-1)} \big(\log \langle A_k(s) \rangle\big)^{-(p-1)} \\
& \gtrsim t^{\frac{k}{2}} \langle A_k(t)\rangle ^{-1} s^{\frac {k}{2}(p+1)} \langle A_k(s)\rangle ^{-\frac{n-1}{2}(p-1)+\frac{1}{p}} \big(\log \langle A_k(s) \rangle\big)^{-(p-1)} \big(\mathcal{U} (s)\big)^p.
\end{align*}
If we combine the previous lower bound estimate and \eqref{fundamental inequality functional mathcalU}, we have
\begin{align*}
\mathcal{U}(t) & \geqslant t^{-\frac k2} \int_1^t (\phi_k(t)-\phi_k(s)) \, s^{1-p} \int_{\mathbb{R}^n} |u(s,x)|^p \eta_q(t,s,x;k) \, \mathrm{d}x \, \mathrm{d}s \\
& \gtrsim \langle A_k(t)\rangle ^{-1} \int_1^t (\phi_k(t)-\phi_k(s)) \, s^{1-p+\frac {k}{2}(p+1)} \langle A_k(s)\rangle ^{-\frac{n-1}{2}(p-1)+\frac{1}{p}} \big(\log \langle A_k(s) \rangle\big)^{-(p-1)} \big(\mathcal{U} (s)\big)^p\, \mathrm{d}s \\
& \gtrsim \langle A_k(t)\rangle ^{-1} \int_1^t (\phi_k(t)-\phi_k(s)) \langle A_k(s)\rangle ^{\frac{1-p}{1-k}+\frac {k(p+1)}{2(1-k)}-\frac{n-1}{2}(p-1)+\frac{1}{p}} \big(\log \langle A_k(s) \rangle\big)^{-(p-1)} \big(\mathcal{U} (s)\big)^p\, \mathrm{d}s\\
& \gtrsim \langle A_k(t)\rangle ^{-1} \int_1^t (\phi_k(t)-\phi_k(s)) \langle A_k(s)\rangle ^{-\left(\frac{n-1}{2}+\frac{2-k}{2(1-k)}\right)p+\left(\frac{n-1}{2}+\frac{2+k}{2(1-k)}\right)+\frac 1p} \big(\log \langle A_k(s) \rangle\big)^{-(p-1)} \big(\mathcal{U} (s)\big)^p\, \mathrm{d}s,
\end{align*} where in the third step we used $s= (1-k)^{\frac{1}{1-k}}(A_k(s)+\phi_k(1))^{\frac{1}{1-k}}\approx \langle A_k(s)\rangle^{\frac{1}{1-k}}$ for $s\geqslant 1$. Since $p=p_0(n,k)$ solves \eqref{equation critical exponent general case}, it follows that
\begin{align}\label{equation critical exponent intermediate}
-\left(\tfrac{n-1}{2}+\tfrac{2-k}{2(1-k)}\right)p+\left(\tfrac{n-1}{2}+\tfrac{2+k}{2(1-k)}\right)+\tfrac 1p= -1-\tfrac{k}{1-k}=-\tfrac{1}{1-k},
\end{align} (indeed, after multiplication by $2(1-k)p$, \eqref{equation critical exponent intermediate} is equivalent to the quadratic equation $((1-k)n+1)p^2-((1-k)n+3+2k)p-2(1-k)=0$, whose positive root is $p_0(n,k)$); then, plugging \eqref{equation critical exponent intermediate} in the last lower bound estimate for $\mathcal{U}(t)$ we find
\begin{align*}
\mathcal{U}(t) & \gtrsim \langle A_k(t)\rangle ^{-1} \int_1^t (\phi_k(t)-\phi_k(s)) \langle A_k(s)\rangle ^{-\frac{1}{1-k}} \big(\log \langle A_k(s) \rangle\big)^{-(p-1)} \big(\mathcal{U} (s)\big)^p\, \mathrm{d}s \\
& \gtrsim \langle A_k(t)\rangle ^{-1} \int_1^t \frac{\phi_k(t)-\phi_k(s)}{s} \big(\log \langle A_k(s) \rangle\big)^{-(p-1)} \big(\mathcal{U} (s)\big)^p\, \mathrm{d}s,
\end{align*} which is precisely \eqref{iteration frame}. This completes the proof.
\end{proof}
\begin{lemma} \label{Lemma lower bound int |u|^p} Suppose that the assumptions in Corollary \ref{Corollary fund ineq} are satisfied. Then, there exists a positive constant $K=K(u_0,u_1,n,p,R,k)$ such that the lower bound estimate
\begin{align} \label{lower bound int |u|^p}
\int_{\mathbb{R}^n} |u(t,x)|^p \, \mathrm{d}x \geqslant K \varepsilon^p \langle A_k(t)\rangle^{(n-1)(1-\frac{p}{2})+\frac{kp}{2(1-k)}}
\end{align} holds for any $t\in (1,T)$.
\end{lemma}
\begin{proof} We adapt the proof of Lemma 5.1 in \cite{WakYor18} to our model. Let us fix $q>(n-3)/2 +1/p'$. Combining \eqref{def functional mathcalU}, \eqref{mathcalU > epsilon} and H\"older's inequality, we obtain
\begin{align*}
\varepsilon t^{\frac{k}{2}} \lesssim t^{\frac{k}{2}} \mathcal{U}(t) = \int_{\mathbb{R}^n} u(t,x) \, \xi_q(t,t,x;k) \, \mathrm{d} x \leqslant \left(\int_{\mathbb{R}^n}|u(t,x)|^p\, \mathrm{d}x\right)^{1/p} \left(\int_{B_{R+A_k(t)}}\big(\xi_q(t,t,x;k)\big)^{p'}\, \mathrm{d}x\right)^{1/p'}.
\end{align*} Hence,
\begin{align} \label{lower bound int |u|^p intermediate}
\int_{\mathbb{R}^n}|u(t,x)|^p\, \mathrm{d}x \gtrsim \varepsilon^p t^{\frac{kp}{2}} \left(\int_{B_{R+A_k(t)}}\big(\xi_q(t,t,x;k)\big)^{p'}\, \mathrm{d}x\right)^{-(p-1)}.
\end{align} Let us determine an upper bound estimate for the integral of $\big(\xi_q(t,t,x;k)\big)^{p'}$. By using \eqref{upper bound xi q t=s}, we have
\begin{align*}
\int_{B_{R+A_k(t)}}\big(\xi_q(t,t,x;k)\big)^{p'}\, \mathrm{d}x & \lesssim \langle A_k(t)\rangle ^{-\frac{n-1}{2}p'} \int_{B_{R+A_k(t)}} \langle A_k(t) - |x|\rangle ^{(n-3)p'/2-p'q} \, \mathrm{d}x \\
& \lesssim \langle A_k(t)\rangle ^{-\frac{n-1}{2}p'} \int_0^{R+A_k(t)} r^{n-1} \langle A_k(t) - r\rangle ^{(n-3)p'/2-p'q} \, \mathrm{d}r \\
& \lesssim \langle A_k(t)\rangle ^{-\frac{n-1}{2}p'+n-1} \int_0^{R+A_k(t)} \langle A_k(t) - r\rangle ^{(n-3)p'/2-p'q} \, \mathrm{d}r.
\end{align*} Performing the change of variable $A_k(t)-r=\varrho$, one gets
\begin{align*}
\int_{B_{R+A_k(t)}}\big(\xi_q(t,t,x;k)\big)^{p'}\, \mathrm{d}x & \lesssim \langle A_k(t)\rangle ^{-\frac{n-1}{2}p'+n-1} \int^{A_k(t)}_{-R} (3+|\varrho|)^{(n-3)p'/2-p'q} \, \mathrm{d}\varrho \\
& \lesssim \langle A_k(t)\rangle ^{-\frac{n-1}{2}p'+n-1}
\end{align*} since $(n-3)p'/2 -p'q<-1$, due to our choice of $q$.
Combining this upper bound estimate for the integral of $\big(\xi_q(t,t,x;k)\big)^{p'}$ with \eqref{lower bound int |u|^p intermediate} and using $t\approx\langle A_k(t)\rangle^{\frac{1}{1-k}}$ for $t\geqslant 1$, we arrive at \eqref{lower bound int |u|^p}. This completes the proof.
\end{proof}
In Proposition \ref{Proposition iteration frame}, we derived the iteration frame for $\mathcal{U}$. In the next result, we prove a first lower bound estimate for $\mathcal{U}$, which will be the base case of the inductive argument in Section \ref{Subsection iteration procedure}.
\begin{proposition} \label{Proposition first logarithmic lower bound mathcalU} Suppose that the assumptions in Corollary \ref{Corollary fund ineq} are satisfied and let $q=(n-1)/2-1/p$. Let $\mathcal{U}$ be defined by \eqref{def functional mathcalU}. Then, for $t\geqslant 3/2$ the functional $\mathcal{U}(t)$ fulfills
\begin{align} \label{first logarithmic lower bound mathcalU}
\mathcal{U}(t) \geqslant M \varepsilon^p \log\left(\tfrac{2t}{3}\right),
\end{align} where the positive constant $M$ depends on $u_0,u_1,n,p,R,k$.
\end{proposition}
\begin{proof}
From \eqref{fundamental inequality functional mathcalU} we know that
\begin{align*}
\mathcal{U}(t) & \geqslant t^{-\frac{k}{2}} \int_1^t (\phi_k(t)-\phi_k(s)) \, s^{1-p} \int_{\mathbb{R}^n} |u(s,x)|^p \eta_q(t,s,x;k) \, \mathrm{d}x \, \mathrm{d}s.
\end{align*} Consequently, applying \eqref{lower bound eta q} first and then \eqref{lower bound int |u|^p}, we find
\begin{align*}
\mathcal{U}(t) & \geqslant B_1 \langle A_k(t)\rangle ^{-1} \int_1^t (\phi_k(t)-\phi_k(s)) \, s^{1-p+\frac{k}{2}} \langle A_k(s)\rangle ^{-q} \int_{\mathbb{R}^n} |u(s,x)|^p \, \mathrm{d}x \, \mathrm{d}s \\
& \geqslant B_1 K \varepsilon^p \langle A_k(t)\rangle ^{-1} \int_1^t (\phi_k(t)-\phi_k(s)) \, s^{1-p+\frac{k}{2}} \langle A_k(s)\rangle^{-q+(n-1)(1-\frac{p}{2})+\frac{kp}{2(1-k)}} \, \mathrm{d}s \\
& \gtrsim \varepsilon^p \langle A_k(t)\rangle ^{-1} \int_1^t (\phi_k(t)-\phi_k(s)) \langle A_k(s)\rangle^{(1-p+\frac{k}{2})\frac{1}{1-k}-\frac{n-1}{2}+\frac{1}{p}+(n-1)(1-\frac{p}{2})+\frac{kp}{2(1-k)}} \, \mathrm{d}s \\
& \gtrsim \varepsilon^p \langle A_k(t)\rangle ^{-1} \int_1^t (\phi_k(t)-\phi_k(s)) \langle A_k(s)\rangle ^{-\left(\frac{n-1}{2}+\frac{2-k}{2(1-k)}\right)p+\left(\frac{n-1}{2}+\frac{2+k}{2(1-k)}\right)+\frac 1p} \, \mathrm{d}s \\
& \gtrsim \varepsilon^p \langle A_k(t)\rangle ^{-1} \int_1^t (\phi_k(t)-\phi_k(s)) \langle A_k(s)\rangle ^{-\frac{1}{1-k}} \, \mathrm{d}s \gtrsim \varepsilon^p \langle A_k(t)\rangle ^{-1} \int_1^t \frac{\phi_k(t)-\phi_k(s)}{s} \, \mathrm{d}s.
\end{align*} We estimate now the integral on the right -- hand side of the previous chain of inequalities. Integration by parts leads to
\begin{align*}
\int_1^t \frac{\phi_k(t)-\phi_k(s)}{s} \, \mathrm{d}s & = \big(\phi_k(t)-\phi_k(s)\big)\log s \, \Big|^{s=t}_{s=1} +\int_1^t \phi_k'(s) \log s \, \mathrm{d}s \\ &= \int_1^t s^{-k} \log s \, \mathrm{d}s \geqslant t^{-k} \int_1^t \log s \, \mathrm{d}s.
\end{align*} Therefore, for $t\geqslant 3/2$
\begin{align*}
\mathcal{U}(t) & \gtrsim \varepsilon^p \langle A_k(t)\rangle ^{-1} t^{-k} \int_1^t \log s \, \mathrm{d}s \geqslant \varepsilon^p \langle A_k(t)\rangle ^{-1} t^{-k} \int_{2t/3}^t \log s \, \mathrm{d}s \geqslant (1/3) \varepsilon^p \langle A_k(t)\rangle ^{-1} t^{1-k} \log (2t/3) \\
& \gtrsim \varepsilon^p \log (2t/3) ,
\end{align*} where in the last line we employed $t\approx \langle A_k(t)\rangle ^{\frac{1}{1-k}}$ for $t\geqslant 1$. Thus, the proof is complete.
\end{proof}
\subsection{Iteration argument} \label{Subsection iteration procedure}
In this subsection we prove the blow -- up result. More specifically, we are going to show a sequence of lower bound estimates for $\mathcal{U}$, from which we conclude that, for $t$ above a certain $\varepsilon$~--~dependent threshold, the functional $\mathcal{U}(t)$ cannot be finite.
Our goal is to show the validity of the sequence of lower bound estimates
\begin{align} \label{lower bound mathcalU j step}
\mathcal{U}(t)\geqslant C_j \big(\log\langle A_k(t)\rangle \big)^{-\beta_j} \left(\log \left(\frac{t}{\ell_j}\right)\right)^{\alpha_j} \qquad \mbox{for} \ t\geqslant \ell_j
\end{align} for any $j\in \mathbb{N}$, where the bounded sequence of parameters characterizing the slicing procedure is $\{\ell_j\}_{j\in\mathbb{N}}$ with $\ell_j\doteq 2-2^{-(j+1)}$ and $\{C_j\}_{j\in\mathbb{N}},\{\alpha_j\}_{j\in\mathbb{N}},\{\beta_j\}_{j\in\mathbb{N}}$ are sequences of positive numbers that we will determine throughout the iteration argument.
In order to show \eqref{lower bound mathcalU j step}, we apply an inductive argument. As we have already said, the crucial idea here is to apply a slicing procedure for the domain of integration in the iteration frame \eqref{iteration frame}, in order to increase the power of the second logarithmic term in \eqref{lower bound mathcalU j step} step by step. This idea was introduced for the first time in \cite{AKT00} and has since been applied successfully to study the blow -- up dynamics of semilinear wave models in critical cases, overcoming the difficulties arising in the application of Kato's lemma.
Since \eqref{lower bound mathcalU j step} is true in the base case $j=0$, provided that $C_0\doteq M\varepsilon^p$, $\alpha_0\doteq 1$ and $\beta_0\doteq 0$ (cf. Proposition \ref{Proposition first logarithmic lower bound mathcalU}), it remains to prove the inductive step. We assume that \eqref{lower bound mathcalU j step} holds for some $j\geqslant 0$ and prove it for $j+1$. Plugging \eqref{lower bound mathcalU j step} in \eqref{iteration frame}, for $t\geqslant \ell_{j+1}$ we get
\begin{align*}
\mathcal{U}(t) & \geqslant C C_j^p \langle A_k(t)\rangle^{-1}\int_{\ell_j}^t \frac{\phi_k(t)-\phi_k(s)}{s} \big(\log \langle A_k(s)\rangle\big)^{-(p-1) -\beta_jp} \left(\log \left(\tfrac{s}{\ell_j}\right)\right)^{\alpha_j p} \, \mathrm{d}s \\
& \geqslant C C_j^p \langle A_k(t)\rangle^{-1} \big(\log \langle A_k(t)\rangle\big)^{-(p-1) -\beta_jp}\int_{\ell_j}^t \frac{\phi_k(t)-\phi_k(s)}{s} \left(\log \left(\tfrac{s}{\ell_j}\right)\right)^{\alpha_j p} \, \mathrm{d}s.
\end{align*} Using integration by parts, we find
\begin{align*}
& \int_{\ell_j}^t \frac{\phi_k(t)-\phi_k(s)}{s} \left(\log \left(\tfrac{s}{\ell_j}\right)\right)^{\alpha_j p} \, \mathrm{d}s & \\ & \quad = \big(\phi_k(t)-\phi_k(s)\big) (\alpha_jp+1)^{-1}\left(\log \left(\tfrac{s}{\ell_j}\right)\right)^{\alpha_j p+1} \, \Big|^{s=t}_{s=\ell_j} + (\alpha_j p+1)^{-1} \int_{\ell_j}^t \phi'_k(s) \left(\log \left(\tfrac{s}{\ell_j}\right)\right)^{\alpha_j p+1} \, \mathrm{d}s \\
& \quad = (\alpha_j p+1)^{-1} \int_{\ell_j}^t s^{-k} \left(\log \left(\tfrac{s}{\ell_j}\right)\right)^{\alpha_j p+1} \, \mathrm{d}s \geqslant (\alpha_j p+1)^{-1} t^{-k} \int_{\ell_j}^t \left(\log \left(\tfrac{s}{\ell_j}\right)\right)^{\alpha_j p+1} \, \mathrm{d}s \\
& \quad \geqslant (\alpha_j p+1)^{-1} t^{-k} \int_{\tfrac{\ell_j t}{\ell_{j+1}}}^t \left(\log \left(\tfrac{s}{\ell_j}\right)\right)^{\alpha_j p+1} \, \mathrm{d}s \geqslant (\alpha_j p+1)^{-1} t^{1-k} \left(1-\tfrac{\ell_j}{\ell_{j+1}}\right) \left(\log \left(\tfrac{t}{\ell_{j+1}}\right)\right)^{\alpha_j p+1} \\
& \quad \geqslant (\alpha_j p+1)^{-1} 2^{-(j+3)} \gamma_k \langle A_k(t)\rangle \left(\log \left(\tfrac{t}{\ell_{j+1}}\right)\right)^{\alpha_j p+1} ,
\end{align*} where in the last step we applied $1-\ell_j/\ell_{j+1}>2^{-(j+3)}$ and $t^{1-k}\geqslant \gamma_k \langle A_k(t)\rangle $ for $t\geqslant 1$ with
\begin{align*}
\gamma_k \doteq \begin{cases} 1/3 & \mbox{if} \ \, k\in[0,2/3], \\ (1-k) & \mbox{if} \ \, k\in[2/3,1).\end{cases}
\end{align*}
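Both inequalities employed in the last step can be verified directly: on the one hand, $\ell_{j+1}-\ell_j=2^{-(j+2)}$ and $\ell_{j+1}<2$ yield
\begin{align*}
1-\frac{\ell_j}{\ell_{j+1}}=\frac{2^{-(j+2)}}{\ell_{j+1}}>2^{-(j+3)};
\end{align*}
on the other hand, setting $\tau\doteq t^{1-k}\geqslant 1$, the inequality $t^{1-k}\geqslant \gamma_k \langle A_k(t)\rangle=\gamma_k\big(3+\tfrac{\tau-1}{1-k}\big)$ reduces for $k\in[0,2/3]$ to $3+\tfrac{\tau-1}{1-k}\leqslant 3\tau$, which holds since $\tfrac{1}{1-k}\leqslant 3$, and for $k\in[2/3,1)$ to $3(1-k)\leqslant 1$.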
Therefore,
\begin{align*}
\mathcal{U}(t) & \geqslant C \gamma_k \, 2^{-(j+3)} (\alpha_j p+1)^{-1} C_j^p \big(\log \langle A_k(t)\rangle\big)^{-(p-1) -\beta_jp} \left(\log \left(\tfrac{t}{\ell_{j+1}}\right)\right)^{\alpha_j p+1}
\end{align*} for $t\geqslant \ell_{j+1}$, that is, we proved \eqref{lower bound mathcalU j step} for $j+1$, provided that
\begin{align*}
C_{j+1} \doteq C \gamma_k \, 2^{-(j+3)} (\alpha_j p+1)^{-1} C_j^p, \quad \alpha_{j+1}\doteq 1+p \alpha_j, \quad \beta_{j+1}\doteq p-1 +p \beta_j.
\end{align*}
Next we establish a lower bound estimate for $C_j$. For this purpose, we provide first an explicit representation of the exponents $\alpha_j$ and $\beta_j$. Employing recursively the relations $\alpha_j=1+p\alpha_{j-1}$ and $\beta_j= (p-1) + p \beta_{j-1}$ and the initial exponents $\alpha_0=1$, $\beta_0=0$, we obtain
\begin{align}\label{explit expressions aj and bj}
\alpha_j & = \alpha_0 p^j +\sum_{i=0}^{j-1} p^i = \tfrac{p^{j+1}-1}{p-1} \quad \mbox{and} \quad
\beta_j = p^j \beta_0 +(p-1) \sum_{i=0}^{j-1} p^i = p^j-1.
\end{align} In particular, $\alpha_{j-1}p+1= \alpha_j\leqslant p^{j+1}/(p-1) $ implies that
\begin{align} \label{lower bound Mj no.1}
C_j \geqslant D\, (2 p)^{-j} C^p_{j-1}
\end{align} for any $j\geqslant 1$, where $D\doteq {2^{-2}} C \gamma_k (p-1)/p$. Applying the logarithmic function to both sides of \eqref{lower bound Mj no.1} and using iteratively the resulting inequality, we find
\begin{align*}
\log C_j & \geqslant p \log C_{j-1} -j \log (2p)+\log D \\
& \geqslant \ldots \geqslant p^j \log C_0 -\Bigg(\sum_{i=0}^{j-1}(j-i)p^i \Bigg)\log (2p)+\Bigg(\sum_{i=0}^{j-1} p^i \Bigg)\log D \\
& = p^j \left(\log (M \varepsilon^p) -\frac{p\log (2p)}{(p-1)^2}+\frac{\log D }{p-1}\right)+\left( \frac{j}{p-1}+\frac{p}{(p-1)^2}\right)\log (2p)-\frac{\log D}{p-1},
\end{align*} where we used the identity
\begin{align} \label{identity sum (j-k)p^k}
\sum\limits_{i=0}^{j-1}(j-i)p^i = \frac{1}{p-1}\left(\frac{p^{j+1}-p}{p-1}-j\right).
\end{align} This identity follows by writing $\sum_{i=0}^{j-1}(j-i)p^i=\sum_{m=1}^{j}\sum_{i=0}^{m-1}p^i=\sum_{m=1}^{j}\tfrac{p^m-1}{p-1}$. Let us define $j_0=j_0(n,p,k)$ as the smallest nonnegative integer such that $$j_0\geqslant \frac{\log D}{\log (2p)}-\frac{p}{p-1}.$$ Hence, for any $j\geqslant j_0$ we have the estimate
\begin{align} \label{lower bound Mj no.2}
\log C_j & \geqslant p^j \left(\log (M \varepsilon^p) -\frac{p\log (2p)}{(p-1)^2}+\frac{\log D}{p-1}\right) = p^j \log (E \varepsilon^p),
\end{align} where $E\doteq M (2p)^{-p/(p-1)^2}D^{1/(p-1)}$.
Combining \eqref{lower bound mathcalU j step}, \eqref{explit expressions aj and bj} and \eqref{lower bound Mj no.2}, we arrive at
\begin{align*}
\mathcal{U}(t)&\geqslant \exp \left( p^j\log(E\varepsilon^p)\right) \left(\log\langle A_k(t)\rangle \right)^{-\beta_j} \left(\log \left(\tfrac t2\right)\right)^{\alpha_j} \\
&= \exp \left( p^j\log(E\varepsilon^p)\right) \left(\log\langle A_k(t)\rangle \right)^{-p^j+1} \left(\log \left(\tfrac t2\right)\right)^{(p^{j+1}-1)/(p-1)}
\\
&= \exp \left( p^j\log\left(E\varepsilon^p \left(\log\langle A_k(t)\rangle\right)^{-1}\left(\log \left(\tfrac t2\right)\right)^{p/(p-1)}\right) \right) \log\langle A_k(t)\rangle \left(\log \left(\tfrac t2\right)\right)^{-1/(p-1)}
\end{align*} for $t\geqslant 2$ and any $j\geqslant j_0$.
For $t\geqslant t_0(k) \doteq \max\big\{4,\gamma_k^{-1/k}\big\}$ the inequalities $$\log\langle A_k(t)\rangle \leqslant (1-k) \log t-\log \gamma_k \leqslant \log t \quad \mbox{and} \quad \log (\tfrac t2)\geqslant 2^{-1} \log t $$ hold true; therefore,
\begin{align}
\mathcal{U}(t)&\geqslant \exp \left( p^j\log\left(2^{-p/(p-1)}E\varepsilon^p \left(\log t\right)^{1/(p-1)}\right) \right) \log\langle A_k(t)\rangle \left(\log \left(\tfrac t2\right)\right)^{-1/(p-1)} \label{final lower bound G}
\end{align} for $t\geqslant t_0$ and any $j\geqslant j_0$. Let us denote $J(t,\varepsilon)\doteq 2^{-p/(p-1)}E\varepsilon^p \left(\log t\right)^{1/(p-1)}$.
If we choose $\varepsilon_0=\varepsilon_0(n,p,k,\lambda_0,R,u_0,u_1)$ sufficiently small so that
\begin{align*}
\exp \left(2^{p}E^{1-p}\varepsilon_0^{-p(p-1)}\right)\geqslant t_0,
\end{align*}
then, for any $\varepsilon\in (0,\varepsilon_0]$ and for $t> \exp \left(2^{p}E^{1-p}\varepsilon^{-p(p-1)}\right)$ we get $t\geqslant t_0$ and $J(t,\varepsilon)>1$. Consequently, for any $\varepsilon\in (0,\varepsilon_0]$ and for $t> \exp \left(2^{p}E^{1-p}\varepsilon^{-p(p-1)}\right)$, letting $j\to \infty$ in \eqref{final lower bound G}, the lower bound for $\mathcal{U}(t)$ diverges; hence, $\mathcal{U}(t)$ cannot be finite. Thus, we showed that $\mathcal{U}$ blows up in finite time and, moreover, we proved the upper bound estimate for the lifespan $$T(\varepsilon)\leqslant \exp \left(2^{p}E^{1-p}\varepsilon^{-p(p-1)}\right).$$
This completes the proof of Theorem \ref{Theorem critical case p0}.
\section{Semilinear wave equation in EdeS spacetime: subcritical case} \label{Section subcritical case}
As a byproduct of the approach developed in Section \ref{Section critical case p0}, we derive in this section the upper bound estimates for the lifespan of local in time solutions in the subcritical case $1<p<\max\{p_0(n,k),p_1(n,k)\}$. Our main tool will be the generalization of Kato's lemma containing the upper bound estimates for the lifespan proved in \cite{Tak15}, whose statement is recalled below for the ease of the reader.
\begin{lemma} \label{Kato's lemma} Let $p>1$, $a>0$, $q>0$ satisfy $$M\doteq \frac{p-1}{2}a-\frac{q}{2}+1>0.$$ Assume that $F\in \mathcal{C}^2([\tau,T))$ satisfies
\begin{align}
& F(t) \geqslant A t^a \ \ \, \qquad \qquad \qquad \qquad \mbox{for} \ \ t\geqslant T_0\geqslant \tau , \label{F lower bound Kato lemma} \\
& F''(t) \geqslant B (t+R)^{-q}|F(t)|^{p} \qquad \mbox{for} \ \ t\geqslant \tau, \label{F'' lower bound Kato lemma} \\
& F(\tau) \geqslant 0, \ \ F'(\tau)>0, \label{F(0), F'(0) conditions Kato lemma}
\end{align} where $A,B,R,T_0$ are positive constants. Then, there exists a positive constant $C_0=C_0(p,a,q,B,\tau)$ such that
\begin{align}
T< 2^{\frac{2}{M}}T_1 \label{upper bound T Kato lemma}
\end{align}
holds, provided that
\begin{align}
T_1\doteq \max\left\{T_0,\frac{F(\tau)}{F'(\tau)},R\right\} \geqslant C_0 A^{-\frac{p-1}{2M}}. \label{lower bound T1 Kato lemma}
\end{align}
\end{lemma}
In applying this generalization of Kato's lemma, we will recover some estimates already obtained in \cite{GalYag17EdS} in the treatment of the subcritical case, although the proofs leading to these estimates are different.
Let us assume that $u_0,u_1$ are nonnegative, nontrivial and compactly supported functions with supports contained in $B_R$ for some $R>0$. Let $u$ be a solution on $[1,T)$ of \eqref{Semi EdeS k} according to Definition \ref{Def energy sol} such that $$\mathrm{supp}\, u(t,\cdot) \subset B_{R+A_k(t)}$$ for any $t\in (1,T)$, where $T=T(\varepsilon)$ is the lifespan of $u$.
Hence, we introduce as time -- dependent functional the spatial average of $u$
\begin{align} \label{def U}
\mathrm{U}(t) \doteq \int_{\mathbb{R}^n} u(t,x) \, \mathrm{d}x.
\end{align} Choosing a test function $\psi$ such that $\psi = 1$ on $\{(s,x)\in [1,t]\times \mathbb{R}^n: |x|\leqslant R+A_k(s) \}$ in \eqref{integral identity def energy sol}, we get
\begin{align*}
\mathrm{U}'(t) = \mathrm{U}'(1) +\int_1^t \int_{\mathbb{R}^n} s^{1-p} |u(s,x)|^p \, \mathrm{d}x \, \mathrm{d}s.
\end{align*} Differentiating the previous identity with respect to $t$, we obtain
\begin{align} \label{equality U''}
\mathrm{U}''(t) = t^{1-p}\int_{\mathbb{R}^n} |u(t,x)|^p \, \mathrm{d}x.
\end{align} By using the support condition for $u$ and H\"older's inequality, from the above identity we obtain
\begin{align}
\mathrm{U}''(t) & \gtrsim t^{1-p} (R+A_k(t))^{-n(p-1)} |\mathrm{U}(t)|^p \notag \\ &\gtrsim (R+t)^{-((1-k)n+1)(p-1)} |\mathrm{U}(t)|^p \label{differential ineq U''}
\end{align} for any $t\in(1,T)$.
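For the reader's convenience, we record the standard estimate behind the first step: by H\"older's inequality and the support condition for $u(t,\cdot)$,
\begin{align*}
|\mathrm{U}(t)|^p\leqslant \big|B_{R+A_k(t)}\big|^{p-1}\int_{\mathbb{R}^n}|u(t,x)|^p\,\mathrm{d}x\lesssim \big(R+A_k(t)\big)^{n(p-1)}\int_{\mathbb{R}^n}|u(t,x)|^p\,\mathrm{d}x,
\end{align*}
while in the second step we used $R+A_k(t)\lesssim (R+t)^{1-k}$ and $t^{1-p}\geqslant (R+t)^{1-p}$ for $t\geqslant 1$.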
Let us derive now two estimates from below for $\mathrm{U}$.
On the one hand, thanks to the convexity of $\mathrm{U}$, we have immediately
\begin{align} \label{U > epsilon}
\mathrm{U}(t)\geqslant \mathrm{U}(1)+ (t-1) \mathrm{U}'(1) \gtrsim \varepsilon t
\end{align} for any $t\in (1,T)$, where we used that $u_0, u_1$ are nonnegative and nontrivial in the unexpressed multiplicative constant. Plugging \eqref{U > epsilon} in \eqref{differential ineq U''} and integrating twice, we get
\begin{align}\label{lower bound U convex}
\mathrm{U}(t)\gtrsim \varepsilon^p t^{-((1-k)n+1)(p-1) +p+2}
\end{align}
for any $t\in [T_0,T)$, where $T_0>1$. The first lower bound estimate for $\mathrm{U}$ in \eqref{lower bound U convex} has been obtained from the convexity of $\mathrm{U}$. On the other hand, from Lemma \ref{Lemma lower bound int |u|^p} and \eqref{equality U''}, integrating twice, we find a second lower bound estimate for $\mathrm{U}$, that is,
\begin{align} \label{lower bound U int |u|^p}
\mathrm{U}(t)\gtrsim \varepsilon^p t^{(1-k)(n-1)(1-\frac{p}{2})+\frac{kp}{2} +1-p+2}
\end{align} for any $t\in [T_0,T)$.
Next we apply Lemma \ref{Kato's lemma} to the functional $\mathrm{U}$. Since $u_0,u_1$ are nonnegative and nontrivial we have $\mathrm{U}(1),\mathrm{U}'(1)>0$, so \eqref{F(0), F'(0) conditions Kato lemma} is fulfilled. Moreover, \eqref{differential ineq U''} corresponds to \eqref{F'' lower bound Kato lemma} with $q\doteq((1-k)n+1)(p-1)$. Finally, combining \eqref{lower bound U convex} and \eqref{lower bound U int |u|^p} we have \eqref{F lower bound Kato lemma} with $a= \max\{a_1,a_2\}$, where
\begin{align*}
a_1 &\doteq -((1-k)n+1)(p-1) +p+2, \\ a_2 & \doteq (1-k)(n-1)(1-\tfrac{p}{2})+\tfrac{kp}{2} +1-p+2
\end{align*} and $A\approx \varepsilon^p$.
According to this choice we have two possible values for the quantity $M$ in Lemma \ref{Kato's lemma}: either we use \eqref{lower bound U convex}, that is, $a=a_1$ and, consequently,
\begin{align*}
M_1\doteq \tfrac{p-1}{2} a_1 - \tfrac q2 +1 = \tfrac p2 \left [ -(1-k)n (p-1) +2\right]
\end{align*} or we use \eqref{lower bound U int |u|^p}, that is, $a=a_2$ and, then,
\begin{align*}
M_2\doteq \tfrac{p-1}{2} a_2 - \tfrac q2 +1 = \tfrac 12 \left \{ -\left[(1-k) \tfrac{n-1}{2} +1-\tfrac{k}{2}\right] p^2+\left[ (1-k) \tfrac{n+1}{2}+1+\tfrac{3k}{2} \right]p+1-k\right\}.
\end{align*} Therefore, for $M\doteq \max\{M_1,M_2\}>0$ Lemma \ref{Kato's lemma} provides a blow -- up result and the upper bound estimate for the lifespan $$T\lesssim \varepsilon ^{-\frac{p(p-1)}{2M}}.$$ Let us make the condition $M>0$ more explicit. The condition $M_1>0$ is equivalent to $p<p_1(n,k)$, while the condition $M_2>0$ is equivalent to $p<p_0(n,k)$. Hence, Lemma \ref{Kato's lemma} implies the validity of a blow -- up result for \eqref{Semi EdeS k} in the subcritical case $1<p< \max\{p_0(n,k),p_1(n,k)\}$ (exactly as in \cite{GalYag17EdS}) and the upper bound estimates for the lifespan
\begin{align} \label{lifespan estimate subcrit case}
T(\varepsilon) \lesssim \begin{cases} \varepsilon ^{- \left(\frac{2}{p-1}-(1-k) n\right)^{-1}} & \mbox{if} \ p<p_1(n,k), \\ \varepsilon ^{- \frac{p(p-1)}{\theta(p,n,k)}} & \mbox{if} \ p<p_0(n,k),\end{cases}
\end{align} where
\begin{align}\label{def theta}
\theta(p,n,k) \doteq 1-k +\left[ (1-k) \tfrac{n+1}{2}+1+\tfrac{3k}{2} \right]p -\left[(1-k) \tfrac{n-1}{2} +1-\tfrac{k}{2}\right] p^2.
\end{align}
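For later reference, let us make explicit the algebra behind the exponents in \eqref{lifespan estimate subcrit case}: since $2M_1=p\big[2-(1-k)n(p-1)\big]$ and $2M_2=\theta(p,n,k)$, we have
\begin{align*}
\frac{p(p-1)}{2M_1}=\frac{p-1}{2-(1-k)n(p-1)}=\left(\frac{2}{p-1}-(1-k)n\right)^{-1} \quad \mbox{and} \quad \frac{p(p-1)}{2M_2}=\frac{p(p-1)}{\theta(p,n,k)}.
\end{align*}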
Furthermore, we point out that $a > 1$ (so, in particular, $a>0$ as it is required in the assumptions of Lemma \ref{Kato's lemma}) if and only if $1<p <\max\{p_1(n,k),p_2(n,k)\}$, where
\begin{align*}
p_2(n,k)\doteq 2+\frac{2k}{(1-k)n+1}.
\end{align*} We want to show now that the condition $a> 1$ is always fulfilled whenever $M>0$ holds.
For this purpose, we shall determine how to order the exponents $p_0,p_1,p_2$. Since $p_0(n,k)$ is defined through \eqref{intro equation critical exponent general case}, the inequality $p_0(n,k)>p_1(n,k)$ holds if and only if
\begin{align*}
& \left((1-k)n +1\right)p_1(n,k)^2- \left((1-k)n +3+2k\right)p_1(n,k) -2(1-k)<0.
\end{align*} By straightforward computations it follows that the last inequality is fulfilled if and only if $ n> N(k)$, where $N(k)$ is defined in \eqref{def N(k)}. Similarly, $p_0(n,k)>p_2(n,k)$ if and only if $n< N(k)$.
Summarizing,
\begin{equation}\label{order p0,p1,p2}
\begin{split}
p_2(n,k)<p_0(n,k)<p_1(n,k) & \qquad \mbox{if} \ \ n<N(k), \\
p_0(n,k)=p_1(n,k)=p_2(n,k) & \qquad \mbox{if} \ \ n=N(k), \\
p_1(n,k)<p_0(n,k)<p_2(n,k) & \qquad \mbox{if} \ \ n>N(k).
\end{split}
\end{equation} Consequently, for $n\geqslant N(k)$ the critical condition is $p=p_0(n,k)$, so if $p< p_0(n,k)$, in particular, the condition $p< p_2(n,k)$ is fulfilled (that is, $M_2>0$ implies $a_2>1$). On the other hand, for $n < N(k)$ we have $p_2(n,k)<p_1(n,k)$ and the conditions $M_1>0$ and $a_1>1$ are both equivalent to $p<p_1(n,k)$ (the critical condition is $p=p_1(n,k)$ in this case). Therefore, we actually proved that $M>0$ implies $a>1$.
\begin{remark} In \cite{GalYag17EdS} the condition in the subcritical case on $p$ under which a blow -- up result holds for \eqref{Semi EdeS k} is written in a slightly different but equivalent way. Indeed, combining \cite[Equation (1.9)]{GalYag17EdS} with \eqref{order p0,p1,p2}, we see immediately that the condition for $p$ in \cite[Theorem 1.3]{GalYag17EdS} is satisfied if and only if $1<p<\max\{p_1(n,k),p_0(n,k)\}$.
\end{remark}
Finally, we want to compare the upper bound estimates for the lifespan in \eqref{lifespan estimate subcrit case}. Clearly, the estimates
\begin{align*}
T(\varepsilon) \lesssim \begin{cases} \varepsilon ^{- \left(\frac{2}{p-1}-(1-k) n\right)^{-1}} & \mbox{if} \ n<N(k) \ \mbox{and} \ p\in [p_0(n,k),p_1(n,k)), \\ \varepsilon ^{- \frac{p(p-1)}{\theta(p,n,k)}} & \mbox{if} \ n>N(k) \ \mbox{and} \ p\in [p_1(n,k),p_0(n,k)),\end{cases}
\end{align*} cannot be improved, since in these ranges either $p\geqslant p_0(n,k)$ or $p\geqslant p_1(n,k)$ holds. Note that $p_2(n,k)$ plays no role in the determination of the upper bound estimate for the lifespan.
However, in the case $1<p<\min\{p_0(n,k),p_1(n,k)\}$ it is not clear which of the upper bounds in \eqref{lifespan estimate subcrit case} is better. Of course, in this case we have to compare $a_1$ and $a_2$. A straightforward computation shows that $a_1\geqslant a_2$ if and only if
\begin{align} \label{inequality p3}
((1-k)n-1)p\leqslant 2(1-k).
\end{align} If $n\leqslant \widetilde{N}(k)\doteq 1/(1-k)$, then, the previous inequality is always true. On the other hand, for $n>\widetilde{N}(k)$ we may introduce the further exponent $$p_3(n,k)\doteq \frac{2(1-k)}{(1-k)n-1}.$$ It turns out that $p_3(n,k)>1$ if and only if $\widetilde{N}(k) < n< \widehat{N}(k)\doteq 2+1/(1-k)$. Moreover, for $n>\widetilde{N}(k)$ the inequalities $p_1(n,k)<p_3(n,k)$ and $p_0(n,k)<p_3(n,k)$ are both satisfied if and only if $n< N(k)$.
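For completeness, we record the computation behind \eqref{inequality p3}:
\begin{align*}
a_1-a_2 &= -\big((1-k)n+1\big)(p-1)+p+2-(1-k)(n-1)\big(1-\tfrac{p}{2}\big)-\tfrac{kp}{2}-3+p \\ &= \tfrac{1}{2}\Big[2(1-k)-\big((1-k)n-1\big)p\Big],
\end{align*}
so that $a_1\geqslant a_2$ holds if and only if \eqref{inequality p3} does.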
In order to clarify the upper bound estimates in \eqref{lifespan estimate subcrit case}, we shall consider five different subcases depending on the range for the spatial dimension $n$.
\subsubsection*{Case $n\leqslant \widetilde{N}(k)$ }
In this case, \eqref{inequality p3} is always satisfied as the left -- hand side is nonpositive. So, $a_1\geqslant a_2$. Therefore, for any $1<p<p_1(n,k)$ the following upper bound estimate holds
\begin{align} \label{lifespan estimate subcrit a1}
T(\varepsilon) \lesssim \varepsilon ^{- \left(\frac{2}{p-1}-(1-k) n\right)^{-1}}.
\end{align}
\subsubsection*{Case $\widetilde{N}(k) < n <N(k)$ } In this case, \eqref{inequality p3} is satisfied for $p\leqslant p_3(n,k)$. Hence, by the ordering $1<p_0(n,k) <p_1(n,k) <p_3(n,k)$, we get that $a_1>a_2$ for exponents satisfying $1<p<p_1(n,k)$. Therefore, even in this case \eqref{lifespan estimate subcrit a1} is a better estimate than $T(\varepsilon) \lesssim \varepsilon ^{- \frac{p(p-1)}{\theta(p,n,k)}}$.
\subsubsection*{Case $n = N(k)$ } In this limit case, $p_0(n,k) =p_1(n,k) =p_3(n,k)$. So, for $1<p<p_1(n,k)=p_3(n,k)$ it holds $a_1>a_2$ and as in the previous case \eqref{lifespan estimate subcrit a1} is the best estimate.
\subsubsection*{Case $N(k) < n < \widehat{N}(k)$ } In this case, we have $1<p_3(n,k) <p_1(n,k)<p_0(n,k)$. So, for $1<p\leqslant p_3(n,k)$ it holds that $a=a_1$, while for $p_3(n,k) <p<p_0(n,k)$ we have $a=a_2$. Therefore,
\begin{align*}
T(\varepsilon) \lesssim \begin{cases} \varepsilon ^{- \left(\frac{2}{p-1}-(1-k) n\right)^{-1}} & \mbox{if} \ \ p\in (1, p_3(n,k)], \\ \varepsilon ^{- \frac{p(p-1)}{\theta(p,n,k)}} & \mbox{if} \ \ p\in (p_3(n,k),p_0(n,k)).\end{cases}
\end{align*}
\subsubsection*{Case $ n \geqslant \widehat{N}(k)$ } In this case, $p_3(n,k)\leqslant 1$ and $1<p_1(n,k)<p_0(n,k)$ so \eqref{inequality p3} is never satisfied for $p>1$. Hence, $a_2>a_1$ for any $1<p<p_0(n,k)$, that is,
\begin{align*}
T(\varepsilon) \lesssim \varepsilon ^{- \frac{p(p-1)}{\theta(p,n,k)}}
\end{align*} is a better estimate than \eqref{lifespan estimate subcrit a1}.
\subsection{Lifespan estimates in the subcritical case}
Summarizing what we established in the above subcases, we have proved the following proposition, which completes \cite[Theorem 1.3]{GalYag17EdS} with the estimates for the lifespan; Theorems \ref{Theorem critical case p0} and \ref{Theorem critical case p1} deal with the critical cases, which were not discussed in \cite{GalYag17EdS}.
\begin{proposition} \label{Proposition lifespan subcrit}
Let $n\geqslant 1$ and $1<p< \max\{p_0(n,k), p_1(n,k)\}$. Let us assume that $u_0\in H^1(\mathbb{R}^n)$ and $u_1\in L^2(\mathbb{R}^n)$ are nonnegative, nontrivial and compactly supported functions with supports contained in $B_R$ for some $R>0$. Let $$u\in \mathcal{C} \big([1,T), H^1(\mathbb{R}^n)\big) \cap \mathcal{C}^1 \big([1,T), L^2(\mathbb{R}^n)\big)\cap L^p_{\mathrm{loc}}\big([1,T)\times \mathbb{R}^n\big)$$ be an energy solution to \eqref{Semi EdeS k} according to Definition \ref{Def energy sol} with lifespan $T=T(\varepsilon)$ and fulfilling the support condition $\mathrm{supp} \, u(t,\cdot)\subset B_{A_k(t)+R}$ for any $t\in (1,T)$. Then, there exists a positive constant $\varepsilon_0=\varepsilon_0(u_0,u_1,n,p,k,R)$ such that for any $\varepsilon\in (0,\varepsilon_0]$ the energy solution $u$ blows up in finite time. Furthermore, the upper bound estimates for the lifespan
\begin{align*}
T(\varepsilon)\leqslant \begin{cases} C \varepsilon ^{- \left(\frac{2}{p-1}-(1-k) n\right)^{-1}} & \mbox{if} \ n\leqslant N(k) \ \mbox{and} \ p\in (1, p_1(n,k)), \\
C \varepsilon ^{- \left(\frac{2}{p-1}-(1-k) n\right)^{-1}} & \mbox{if} \ n\in( N(k) ,\widehat{N}(k) ) \ \mbox{and} \ \ p\in (1, p_3(n,k)], \\
C \varepsilon ^{- \frac{p(p-1)}{\theta(p,n,k)}} & \mbox{if} \ n\in( N(k) ,\widehat{N}(k) ) \ \mbox{and} \ \ p\in (p_3(n,k),p_0(n,k)), \\
C \varepsilon ^{- \frac{p(p-1)}{\theta(p,n,k)}} & \mbox{if} \ n\geqslant \widehat{N}(k) \ \mbox{and} \ \ p\in (1,p_0(n,k)),\end{cases}
\end{align*} hold, where the constant $C>0$ is independent of $\varepsilon$ and $\theta(p,n,k)$ is defined by \eqref{def theta}.
\end{proposition}
\section[Semilinear wave equation in EdeS spacetime: critical case $p=p_1(n,k)$]{Semilinear wave equation in EdeS spacetime: 2nd critical case} \label{Section critical case p1}
In Section \ref{Section subcritical case} we derived the upper bound for the lifespan in the subcritical case, while in Section \ref{Section critical case p0} we studied the critical case $p=p_0(n,k)$. We have already remarked that $p=p_0(n,k)$ is the critical exponent when $n >N(k)$. Therefore, it remains to consider the critical case $p=p_1(n,k)$, which occurs when $n\leqslant N(k)$. In this section, we prove a blow-up result in this critical case and derive the corresponding upper bound estimate for the lifespan. As in Section \ref{Section critical case p0}, our approach is based on a basic iteration argument combined with the slicing procedure.
As time-dependent functional we use the same one employed in Section \ref{Section subcritical case}, namely the function $\mathrm{U}$ defined in \eqref{def U}. Then, since $p=p_1(n,k)$ is equivalent to the condition
\begin{align}\label{condition p=p1 equiv}
((1-k)n+1)(p-1)=p+1,
\end{align}
we may rewrite \eqref{differential ineq U''} as
\begin{align}\label{iteration frame 2nd crit case}
\mathrm{U}(t) \geqslant C \int_1^t\int_1^s (R+\tau)^{-(p+1)}\big(\mathrm{U}(\tau)\big)^p \, \mathrm{d}\tau \, \mathrm{d}s
\end{align} for any $t\in (1,T)$ and for a suitable positive constant $C$. Let us point out that \eqref{iteration frame 2nd crit case} will serve as the iteration frame in the iteration procedure for the critical case $p=p_1(n,k)$.
We know that $\mathrm{U}(t)\geqslant K \varepsilon\, t$ for any $t\in (1,T)$, where $K$ is a suitable positive constant, provided that $u_0,u_1$ are nonnegative, nontrivial and compactly supported (cf. the estimate in \eqref{U > epsilon}). Therefore,
\begin{align}
\mathrm{U}(t) & \geqslant C K^p \varepsilon^p \int_1^t\int_1^s (R+\tau)^{-(p+1)}\tau ^p \, \mathrm{d}\tau \, \mathrm{d}s \geqslant C K^p (R+1)^{-(p+1)} \varepsilon^p \int_1^t\int_1^s \tau ^{-1} \, \mathrm{d}\tau \, \mathrm{d}s \notag \\
& = C K^p (R+1)^{-(p+1)} \varepsilon^p \int_1^t\log s \, \mathrm{d}s \geqslant C K^p (R+1)^{-(p+1)} \varepsilon^p \int_{2t/3}^t\log s \, \mathrm{d}s \notag \\ &\geqslant 3^{-1} C K^p (R+1)^{-(p+1)} \varepsilon^p \, t \log \left( \tfrac{2t}{3}\right) \label{1st lower bound U p=p1}
\end{align} for $t \geqslant \ell_0=3/2$, where we used $R+\tau\leqslant (R+1)\tau$ for $\tau\geqslant 1$.
Hence, by using recursively \eqref{iteration frame 2nd crit case}, we are going to prove now the sequence of lower bound estimates
\begin{align}\label{lower bound U j p=p1}
\mathrm{U}(t)\geqslant K_j \, t \left(\log \left(\frac{t}{\ell_j}\right)\right)^{\sigma_j} \qquad \mbox{for} \ t\geqslant \ell_j
\end{align} for any $j\in \mathbb{N}$, where the sequence of parameters $\{\ell_j\}_{j\in\mathbb{N}}$ is defined as in Section \ref{Subsection iteration frame}, i.e. $\ell_j=2-2^{-(j+1)}$, and $\{K_j\}_{j\in\mathbb{N}}$, $\{\sigma_j\}_{j\in\mathbb{N}}$ are sequences of positive reals that we shall determine afterwards.
We remark that for $j=0$ \eqref{lower bound U j p=p1} holds true thanks to \eqref{1st lower bound U p=p1}, provided that $K_0= (CK^p (R+1)^{-(p+1)} \varepsilon^p)/3$ and $\sigma_0=1$. Next we prove \eqref{lower bound U j p=p1} by an inductive argument. Assuming the validity of \eqref{lower bound U j p=p1} for some $j\geqslant 0$, we have to prove \eqref{lower bound U j p=p1} for $j+1$. For this purpose, we plug \eqref{lower bound U j p=p1} into \eqref{iteration frame 2nd crit case}; thus, after shrinking the domain of integration, we have
\begin{align*}
\mathrm{U}(t) & \geqslant CK_j^p \int_{\ell_j}^t\int_{\ell_j}^s (R+\tau)^{-(p+1)} \tau^p \left(\log \left( \tfrac{\tau}{\ell_j}\right)\right)^{\sigma_j p} \mathrm{d}\tau \, \mathrm{d}s \\
& \geqslant C(R+1)^{-(p+1)} K_j^p \int_{\ell_j}^t\int_{\ell_j}^s \tau^{-1} \left(\log \left( \tfrac{\tau}{\ell_j}\right)\right)^{\sigma_j p} \mathrm{d}\tau \, \mathrm{d}s \\ & = C(R+1)^{-(p+1)} K_j^p (\sigma_j p+1)^{-1}\int_{\ell_j}^t \left(\log \left( \tfrac{s}{\ell_j}\right)\right)^{\sigma_j p+1} \mathrm{d}s
\end{align*} for $t\geqslant \ell_{j+1}$. If we shrink the domain of integration to $[(\ell_j/\ell_{j+1})t,t]$ in the last $s$-integral, we get
\begin{align*}
\mathrm{U}(t) & \geqslant C (R+1)^{-(p+1)} K_j^p (\sigma_j p+1)^{-1}\int_{\tfrac{\ell_j t}{\ell_{j+1}}}^t \left(\log \left( \tfrac{s}{\ell_j}\right)\right)^{\sigma_j p+1} \mathrm{d}s \\ &\geqslant C (R+1)^{-(p+1)} K_j^p (\sigma_j p+1)^{-1} \left(1- \tfrac{\ell_j}{\ell_{j+1}}\right) t \left(\log \left( \tfrac{t}{\ell_{j+1}}\right)\right)^{\sigma_j p+1} \\
& \geqslant C (R+1)^{-(p+1)} 2^{-(j+3)}K_j^p (\sigma_j p+1)^{-1} t \left(\log \left( \tfrac{t}{\ell_{j+1}}\right)\right)^{\sigma_j p+1}
\end{align*} for $t\geqslant \ell_{j+1}$, where in the last step we applied the inequality $1-\ell_j/\ell_{j+1}>2^{-(j+3)}$. Thus, we have proved \eqref{lower bound U j p=p1} for $j+1$, provided that
\begin{align*}
K_{j+1}\doteq C (R+1)^{-(p+1)} 2^{-(j+3)} (\sigma_j p+1)^{-1} K_j^p \quad \mbox{and} \quad \sigma_{j+1} \doteq \sigma_j p+1.
\end{align*}
Next we determine a lower bound estimate for $K_j$. First we find the value of the exponent $\sigma_j$. Applying iteratively the relation $\sigma_j=1+p\sigma_{j-1}$ and the initial exponent $\sigma_0=1$, we get
\begin{align}\label{explit expressions sigmaj}
\sigma_j & = \sigma_0 p^j +\sum_{k=0}^{j-1} p^k = \tfrac{p^{j+1}-1}{p-1}.
\end{align} In particular, $\sigma_{j-1}p+1= \sigma_j\leqslant p^{j+1}/(p-1) $ implies that
\begin{align} \label{lower bound Kj no.1}
K_j \geqslant L\, (2 p)^{-j} K^p_{j-1}
\end{align} for any $j\geqslant 1$, where $L\doteq {2^{-2}} C (R+1)^{-(p+1)} (p-1)/p$. Applying the logarithmic function to both sides of \eqref{lower bound Kj no.1} and reusing the resulting inequality in an iterative way, we arrive at
\begin{align*}
\log K_j & \geqslant p \log K_{j-1} -j \log (2p)+\log L \\
& \geqslant \ldots \geqslant p^j \log K_0 -\Bigg(\sum_{k=0}^{j-1}(j-k)p^k \Bigg)\log (2p)+\Bigg(\sum_{k=0}^{j-1} p^k \Bigg)\log L \\
& = p^j \left(\log \left(3^{-1}CK^p (R+1)^{-(p+1)} \varepsilon^p\right) -\frac{p\log (2p)}{(p-1)^2}+\frac{\log L }{p-1}\right)+\left( \frac{j}{p-1}+\frac{p}{(p-1)^2}\right)\log (2p)-\frac{\log L}{p-1},
\end{align*} where we applied again the identity \eqref{identity sum (j-k)p^k}. Let us define $j_1=j_1(n,p,k)$ as the smallest nonnegative integer such that $$j_1\geqslant \frac{\log L}{\log (2p)}-\frac{p}{p-1}.$$ Hence, for any $j\geqslant j_1$ the estimate
\begin{align} \label{lower bound Kj no.2}
\log K_j & \geqslant p^j \left(\log \left( 3^{-1} CK^p (R+1)^{-(p+1)} \varepsilon^p\right) -\frac{p\log (2p)}{(p-1)^2}+\frac{\log L}{p-1}\right) = p^j \log ( N \varepsilon^p)
\end{align} holds, where $N\doteq 3^{-1}CK^p (R+1)^{-(p+1)} (2p)^{-p/(p-1)^2}L^{1/(p-1)}$.
Combining \eqref{lower bound U j p=p1}, \eqref{explit expressions sigmaj} and \eqref{lower bound Kj no.2}, we arrive at
\begin{align*}
\mathrm{U}(t)&\geqslant \exp \left( p^j\log(N\varepsilon^p)\right) t \left(\log \left(\tfrac t{\ell_j}\right)\right)^{\sigma_j} \\ & \geqslant \exp \left( p^j\log(N\varepsilon^p)\right) t \left( \tfrac 12 \log t \right)^{(p^{j+1}-1)/(p-1)} \\
&= \exp \left( p^j\log\left( 2^{-p/(p-1)}N\varepsilon^p \left(\log t \right)^{p/(p-1)}\right) \right) t \left( \tfrac 12 \log t \right)^{-1/(p-1)}
\end{align*} for $t\geqslant 4$ and for any $j\geqslant j_1$, where we applied the inequality $\log (t/ \ell_j)\geqslant \log(t/2) \geqslant (1/2) \log t$ for all $t\geqslant 4$.
If we denote $H(t,\varepsilon)\doteq 2^{-p/(p-1)}N\varepsilon^p \left(\log t\right)^{p/(p-1)}$, the last estimate may be rewritten as
\begin{align}\label{final lower bound U}
\mathrm{U}(t)&\geqslant \exp \big( p^j \log H(t,\varepsilon)\big) t \left( \tfrac 12 \log t \right)^{-1/(p-1)}
\end{align} for $t\geqslant 4$ and any $j\geqslant j_1$.
Let us fix $\varepsilon_0=\varepsilon_0(n,p,k,R,u_0,u_1)$ so that
\begin{align*}
\exp \left(2N^{(1-p)/p}\varepsilon_0^{-(p-1)}\right)\geqslant 4.
\end{align*}
Then, for any $\varepsilon\in (0,\varepsilon_0]$ and for $t> \exp \left(2N^{(1-p)/p}\varepsilon^{-(p-1)}\right)$ we get $t\geqslant 4$ and $H(t,\varepsilon)>1$. Thus, for any $\varepsilon\in (0,\varepsilon_0]$ and for $t> \exp \left(2N^{(1-p)/p}\varepsilon^{-(p-1)}\right)$, letting $j\to \infty$ in \eqref{final lower bound U} we find that the lower bound for $\mathrm{U}(t)$ diverges; consequently, $\mathrm{U}(t)$ cannot be finite either. Summarizing, we proved that $\mathrm{U}$ blows up in finite time and, moreover, we established the upper bound estimate for the lifespan $$T(\varepsilon)\leqslant \exp \left(2N^{(1-p)/p}\varepsilon^{-(p-1)}\right).$$
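For the reader's convenience, we note that the condition $H(t,\varepsilon)>1$ unwinds, directly from the definition of $H$, as
\begin{align*}
2^{-\frac{p}{p-1}}N\varepsilon^p \left(\log t\right)^{\frac{p}{p-1}} > 1
\ \Longleftrightarrow \ \log t > 2\, \big(N\varepsilon^{p}\big)^{\frac{1-p}{p}}
\ \Longleftrightarrow \ t > \exp \left(2N^{\frac{1-p}{p}}\varepsilon^{-(p-1)}\right),
\end{align*}
which is exactly the threshold used above.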
Altogether, we established Theorem \ref{Theorem critical case p1} in the critical case $p=p_1(n,k)$.
\begin{remark} Combining the results from Theorems \ref{Theorem critical case p0} and \ref{Theorem critical case p1} and Proposition \ref{Proposition lifespan subcrit}, we obtain a full picture of the upper bound estimates for the lifespan of local in time solutions to \eqref{Semi EdeS k} whenever $1<p\leqslant \max\{p_0(n,k),p_1(n,k)\}$, of course, under suitable sign, size and support assumptions on the initial data.
\end{remark}
\section{Final remarks}
Let us compare our results with the corresponding ones for the semilinear wave equation in the flat case. First, we point out that, due to the presence of the factor $t^{1-p}$ in the semilinear term in \eqref{Semi EdeS k u}, there is a competition between the two exponents $p_0, p_1$ for the role of critical exponent. For the classical semilinear wave equation with power nonlinearity this does not happen, since $p_{\mathrm{Str}}(n)\geqslant \frac{n+1}{n-1}$ for any $n\geqslant 2$. However, a similar situation has been observed when lower order terms with time-dependent coefficients in the \emph{scale-invariant case} are present, with a competition between a shift of the Fujita exponent and a shift of the Strauss exponent (cf. \cite{DLR15,DL15,NPR16,PalRei18,PT18,Pal18odd,Pal18even}).
On the other hand, the presence of the exponent $p_3$ for dimensions $n\in (N(k),\widehat{N}(k))$, which distinguishes between two different upper bounds for the lifespan depending on the range of $p$, is exactly what happens for the semilinear wave equation in spatial dimension $n=2$ (see \cite{Tak15,IKTW19}). Moreover, the situation for \eqref{Semi EdeS k} when $n\leqslant N(k)$ is completely analogous to what happens for the semilinear wave equation when $n=1$; see \cite{Zhou92} for the Euclidean case.
After the completion of the final version of this work, we became aware of the paper \cite{TW20}, where a more general model is considered. We point out that our approach in the critical case is completely different, and that we slightly improve their result in the special case of the semilinear wave equation in the generalized Einstein-de Sitter spacetime, by removing the assumption on the size of the support of the Cauchy data (cf. \cite[Theorem 2.3]{TW20}).
\section*{Acknowledgments}
A. Palmieri
is supported by the GNAMPA project `Problemi stazionari e di evoluzione nelle equazioni di campo nonlineari dispersive'. The author acknowledges Karen Yagdjian (UTRGV) and Hiroyuki Takamura (Tohoku Univ.) for valuable discussions on the model considered in this work.
We present \textsc{cudaclaw}\xspace{}, a high-performance data-parallel solution framework for 2D and 3D hyperbolic partial differential equation (PDE) systems. Our primary motivation for the development of this framework is to enable computational scientists to solve this broad and important class of problems efficiently, without having to write low-level code or worry about details of data layout and movement between different levels of the memory hierarchy, details that are essential for obtaining performance on GPUs. Our framework allows scientists to define the PDEs to be solved using a high-level GPU-independent description, and our proof-of-concept results show that the resulting simulations run at speeds comparable to manually tuned code.
Time-dependent hyperbolic systems of PDEs arise in a broad range of
application domains in engineering and science including acoustics,
elastodynamics, hydrodynamics, and optics. Computational scientists
and engineers are interested in solving these systems of equations efficiently
and reliably in order to capture complex nonlinear phenomena and understand
shock waves and other characteristics that appear in simulations.
In many cases, the computation of time-sensitive solutions demands higher
performance than what the current generation of workstations offers; for
instance, in the case of tsunami forecasting. Such forecasts are often
based on solution of the shallow water equations \eqref{eq:SW}, with initial
conditions determined from seismic data and buoys. In order to be useful,
the simulations must proceed much faster than real-time.
Scientists wishing to perform such numerical simulations on today's manycore
accelerators are faced with a dilemma. On one hand, the promise of
high-performance, inexpensive, multi-teraflop cards offers tantalizing
capabilities for routine use of high-fidelity simulations. On the other hand,
the gap between the mathematical or algorithmic problem descriptions and the
code that achieves high-performance is wide, making it impractical for
scientists to take advantage of the hardware capabilities. What is needed is a
new generation of systems, such as \cite{fenics,cvx}, that bridge the gap between the expressiveness of
high-level scientific programming languages and the lower-level languages
optimized for execution.
The \textsc{cudaclaw}\xspace{} framework we describe in this paper is an example of such
software. It is a programmable framework in the sense that it allows a
computational scientist to define a set of PDEs through a ``Riemann solver''
expressed at a high level, shielding the user from the details of data layout,
threads, warps, shared vs. global memory, and the many details that one
normally needs to attend to in order to achieve performance on GPUs. From the
user's point of view, the framework is simple and its use does not require
knowledge of the underlying data structures, nor experience in CUDA
programming. Yet, the code generated is tailored to the GPU architecture,
taking advantage of its arithmetic throughput, and exploiting its high memory
bandwidth and fast on-chip memories. Related efforts have been described in \cite{mint,manyclaw}.
The solution of hyperbolic PDEs by finite volume methods involves discretizing
the spatial domain of the simulation into cells, with each cell holding a
set of state variables representing solution values. The solution proceeds in time steps, whose size is adaptively
computed from the solution values at the previous time step. At every step, the state
variables in each cell are updated from the values of spatial neighbors. The
primary challenge for obtaining high-performance across the broad spectrum of
hyperbolic PDEs is to abstract the details of data layout and data movement to
shared memory from the arithmetic operations needed to update state variables.
It is this separation that allows the framework to orchestrate data movement
from global to shared memory in an efficient way without user intervention,
and allows the user to specify the arithmetic operations that need to be
performed, independently of how thread blocks operate on the data. A
significant fraction of the computations involved in finite volume methods are
memory-bound. In the roofline model, the maximum performance is achieved on the diagonal
bandwidth-limited portion of the graph. Therefore, GPU code optimizations
generally involve how shared memory is used and how sizes and shapes of
blocks are chosen to match data access patterns. Optimizations also
involve structuring the computations into kernels that are designed to maximize
their flops-to-byte ratios, thereby improving their performance.
In this paper, we describe the design of \textsc{cudaclaw}\xspace{} and the optimizations it
performs, and demonstrate it on the acoustic wave equation in 2D and
3D, and on the nonlinear shallow-water equations in 2D. The primary contributions of
the work are a set of GPU-performant algorithms for the solution of hyperbolic
PDEs; an example of a domain-specific scientific
computing system that allows users to customize it with a high level problem
description without sacrificing efficiency; and a practical system that is
accessible through a Python interface, PyClaw \cite{PyClaw}, to allow scientists to use GPU
acceleration routinely when solving hyperbolic PDEs. The system is available
on GitHub under the Clawpack organization. Results from our current prototype show that a sustained performance of more than 180 GFlops/s is achieved on the Nvidia Tesla C2050 GPU. Given the memory-bound
nature of the computations, a roofline analysis shows that this is around 50\%
of the maximum achievable performance on the C2050, and comparable to the performance of manually-tuned kernels~\cite{Rostrup2010}.
The rest of this paper is organized as follows. We briefly describe related prior work in section \ref{prior} and introduce hyperbolic PDEs and the numerical scheme we adopt in section \ref{background_hyperbolic_pde_sec}. In section \ref{cudaclaw_architecture_sec} we give an overview of the structure of the framework, followed by the key design decisions and optimizations in section \ref{cudaclaw_system_details_sec}. The Python interface to the system is briefly described in section \ref{pyclaw_integration_sec}. In section \ref{performance_analysis_sec} we analyze the performance of the framework using the roofline model. Section \ref{future_work_conclusion_sec} concludes.
\section{Prior Work}
\label{prior}
Because of the importance of hyperbolic PDEs in many engineering and scientific problem domains, substantial work has been conducted, from the early days of GPUs, on developing GPU-friendly algorithms and implementations for their acceleration.
Shallow water simulations have been performed by a number of researchers. Hagen et al. \cite{Hagen2005} implemented a 2nd order central-upwind scheme for 2D shallow water equations with bathymetry and dry states, achieving up to 30x speedup. In \cite{Lastra2009,Asuncion2010}, first-order schemes for one- and two-layer shallow water flow were implemented in CUDA. A third-order scheme based on a Roe solver combined with non-oscillatory polynomial reconstruction and Runge-Kutta time integration was implemented in \cite{Gallardo2011}, achieving speedups of more than 100x on a GTX260 card. This scheme is well-balanced but not capable of handling dry states. Further work by this group, including extension to unstructured triangular meshes, is reported in \cite{Castro2011}. Brodtkorb et al. \cite{Brodtkorb2011} implement shallow water schemes that handle dry states as well as friction, and provide direct, realistic visual output at interactive speeds. They also discuss tradeoffs in using single- or double-precision. In \cite{Rostrup2010}, a second-order scheme using a Roe solver, flux limiting, and dimensional splitting is implemented, and its performance on CELL and GPU architectures is compared.
Hagen et al. \cite{Hagen2006,hagen2007solve} describe implementations of first- and second-order schemes for 2D and 3D flows, with speedups of 10-20x for some standard test problems. Brandvik \cite{brandvik2007acceleration,brandvik2008acceleration} implements 2D and 3D solvers that are second order in space but only first order in time; speedups of 15-30x are reported. Kestener et al. \cite{Kestener2010} implement a first-order Godunov scheme and a second-order central scheme for the 2D Euler equations, achieving 30-70x speedup. Cohen \cite{cohen2010fast} solves the Boussinesq equations with a second-order
scheme and compares GPU acceleration with OpenMP acceleration. While most of these papers focus on standard test problems, Elsen \cite{ELSEN2008} implements a finite-difference scheme with multigrid for 3D unsteady RANS simulations to solve an industrial problem.
Our work differs from the above in several respects. Rather than focusing on a single set of equations and optimizing the code to take advantage of its particular structure, we are interested in building a framework to handle a wide variety of hyperbolic PDEs, which can vary significantly in the number of variables per cell and in the arithmetic intensity of their core computation. This requires the development of different optimizations for the various stages of the computations as described later. In addition, and in contrast to the usual stencil computations that access the neighboring cells in all spatial dimensions simultaneously in order to compute cell updates, we use dimensional splitting as mentioned in section \ref{background_hyperbolic_pde_sec}. Dimensional splitting allows the PDEs to be solved independently in each dimension, with the solutions of each dimension assembled together to give the final cell update. This strategy significantly enhances the overall memory and cache behavior as it allows data access in the separate spatial dimensions to be optimized more effectively.
\section{Hyperbolic PDEs and Clawpack}
\label{background_hyperbolic_pde_sec}
Hyperbolic PDEs arise in modeling across various disciplines, such as
engineering, medicine, and geology; they describe
wave motion.
The numerical methods in \textsc{cudaclaw}\xspace{} compute approximate solutions of
systems of hyperbolic conservation laws which we describe, for simplicity, in two dimensions:
\begin{align} \label{eq:conslaw}
\mathbf q_t + \mathbf f(\mathbf q)_x + \mathbf g(\mathbf q)_y & = 0.
\end{align}
Here $\mathbf q(\mathbf x,t)$ is a vector of conserved quantities (e.g., density,
momentum, energy) and $\mathbf f,\mathbf g$ represent the flux components.
Here we describe high-resolution shock capturing methods, which
are one of the most successful classes of numerical methods for solving \eqref{eq:conslaw}.
Computing solutions to nonlinear hyperbolic equations is often costly.
Solutions of \eqref{eq:conslaw} generically develop
singularities (shocks) in finite time, even if the initial data are
smooth. Accurate modeling of solutions with shocks or strong convective character requires computationally
expensive techniques, such as Riemann solvers and nonlinear limiters.
In a finite volume method, the unknowns at time level $t^n$ are taken to be the averages of $q$
over each cell:
\begin{align}
Q^n_{i,j} = \frac{1}{\Delta x\Delta y} \int_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}}\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}} q(x,y,t^n) \, dx \, dy,
\end{align}
where $(\Delta x,\Delta y)$ and $(i,j)$ are the local grid spacing and the
cell index, respectively.
The classic Clawpack algorithm is based on the second-order Lax-Wendroff difference scheme
that was later extended by LeVeque \cite{leveque1997,levequefvmbook}.
In two dimensions, it takes the form
\begin{align}
\begin{aligned} \label{LW-update}
Q_{i,j}^{*} = Q_{i,j}^{n} &
-\frac{\Delta t}{\Delta x}\left(\Aop^+\DQ_{i-1/2,j}+\Aop^-\DQ_{i+1/2,j}\right) \\
&-\frac{\Delta t}{\Delta x}\left(\tilde{F}_{i+1/2,j}-\tilde{F}_{i-1/2,j}\right) \\
Q_{i,j}^{n+1} = Q_{i,j}^{*} &
-\frac{\Delta t}{\Delta y}\left(\Bop^+\DQ^*_{i,j-1/2}+\Bop^-\DQ^*_{i,j+1/2}\right) \\
&-\frac{\Delta t}{\Delta y}\left(\tilde{G^*}_{i,j+1/2}-\tilde{G^*}_{i,j-1/2}\right).
\end{aligned}
\end{align}
Equation \eqref{LW-update} is a dimensionally split, second-order Godunov-type update of the cell average state $Q_{i,j}$
from time $t^n$ to an intermediate state $Q^{*}_{i,j}$ and then to $t^{n+1}$. The first two terms represent the effect
of fluxes in the horizontal direction (approximating the term $\mathbf f(\mathbf q)_x$)
while the third and fourth terms represent vertical fluxes (the term $\mathbf g(\mathbf q)_y$).
The latter two terms are computed using the intermediate cell states after the first two terms are applied.
The first and third update terms give a first-order update, while the
second and fourth terms represent a second-order correction that is computed
using a nonlinear {\em wave limiter}.
All of the terms in \eqref{LW-update} are computed by solving Riemann problems.
A Riemann problem consists of a hyperbolic PDE \eqref{eq:conslaw} together with
piecewise constant initial data composed of two states with a single discontinuity
between them. Conceptually, one may think of the finite volume solution as being
constant within each grid cell; then at each cell edge the local solution corresponds
to a Riemann problem. The method may be thought of as solving these Riemann problems,
evolving the solution over a short time $\Delta t$, and re-averaging the solution over
each grid cell. The most expensive step is the solution of the Riemann problems.
These Riemann problems are independent from one another, and provide the
key to the parallelism that we exploit with the GPU architecture.
The wave limiter is also a relatively expensive part of the computation; it involves
taking the waves computed by the Riemann solver and modifying them to add dissipation
in the vicinity of a discontinuity.
The time step size $\Delta t$ used in \eqref{LW-update} must be chosen carefully.
Typically, one wishes to take it as large as possible for computational
efficiency, but numerical stability requires that the step size satisfy
\begin{align} \label{CFL-condition}
\Delta t & \le C\frac{\Delta x}{s}
\end{align}
where $s$ is the magnitude of the fastest wave speed occurring in the problem
and $C$ is a constant depending on the numerical method. The restriction
\eqref{CFL-condition} is referred to as a {\em CFL condition}.
Clawpack{} is a very general tool in the sense that it is easily adapted
to solve any hyperbolic system of conservation laws.
The only specialized code required in order to solve a
particular hyperbolic system is the Riemann solver routine. A wide range of
Riemann solvers, including several for the most widely studied hyperbolic systems, have
been developed by Clawpack{} users and are also freely available.
Non-hyperbolic source terms ($\mathbf s(\mathbf q,\mathbf x)$) can be easily included via operator splitting.
For more examples and details regarding Clawpack,
see \cite{leveque1997} and \cite[Chapter 23]{levequefvmbook}.
To illustrate the use of our tool, we solve the acoustic wave equation
\begin{subequations} \label{eq:acoustics}
\begin{align}
p_t + \nabla \cdot \mathbf u & = 0 \\
\mathbf u_t + \nabla p & = 0
\end{align}
\end{subequations}
in 2D and 3D (here $p,u$ are the pressure and velocity), and the two-dimensional shallow water equations
\begin{subequations} \label{eq:SW}
\begin{align}
h_t + (hu)_x + (hv)_y & = 0 \\
(hu)_t + \left(hu^2 + \frac{1}{2}gh^2\right)_x + (huv)_y & = 0 \\
(hv)_t + (huv)_x + \left(hv^2 + \frac{1}{2}gh^2\right)_y & = 0.
\end{align}
\end{subequations}
Here $h$ denotes the water height while $u$ and $v$ denote the $x$- and $y$-component of the
velocity, respectively.
\section{CUDACLAW Architecture}
\label{cudaclaw_architecture_sec}
\begin{figure*}[t]
\begin{center}
\scalebox{0.75}{\includegraphics{figures/paper_figures/CUDACLAW_abstract_pipeline.png}}
\end{center}
\caption{CUDACLAW conceptual pipeline \label{abstract_pipeline_fig}}
\end{figure*}
Based on Clawpack, \textsc{cudaclaw}\xspace{} aims at exploiting the independence of the Riemann problems at cell interfaces, together with the split dimensional updates, to achieve performance on GPUs. Conceptually, \textsc{cudaclaw}\xspace{} implements the pipeline appearing in Figure \ref{abstract_pipeline_fig}. It orchestrates data movement between global and shared GPU memory and rearranges computations to eliminate otherwise unavoidable, expensive synchronization. It does this by combining stages of the pipeline into kernels with relatively high flop-to-byte ratios and suitable memory access patterns, for efficient execution. \textsc{cudaclaw}\xspace{} also minimizes the memory footprint of the computation by computing the full wave details in shared memory and avoiding their storage in global memory. In this section, we describe the overall computations involved; we elaborate on some of the design decisions in the following section.
\subsection{CUDACLAW Computational Pipeline}
As shown in Figure \ref{abstract_pipeline_fig}, our framework is composed of several stages: Boundary Conditions, Riemann Problem Solution, Flux Limiter Computation, State Update and Time Step Adjustment. This computational pipeline grants great flexibility to the framework by allowing boundary conditions, Riemann solvers and limiters to be swapped. Furthermore, it handles the second-order computations automatically, leaving the user to define only the Riemann solver and, optionally, additional limiter functions and boundary conditions.
The framework implements this pipeline as a set of GPU device kernels. The first applies the boundary conditions. The second combines all inner stages: Riemann solution, flux limiting, second-order corrections and state update. There are two (three) of these kernels for the 2D (3D) solver, one for each dimension; from this point on we refer to these kernels as `core kernels', and in the following section we explain the reason for this split. The third kernel is an auxiliary kernel that reduces over the horizontal and vertical wave speeds to find the largest absolute speed. The fourth kernel computes the CFL number and decides whether the time step should be reverted or the computation continued with a new time step, according to the condition \eqref{CFL-condition}.
In addition to updating the cells inside the domain, the solver must apply
prescribed boundary conditions at the edge of the domain.
Updating the boundaries is independent of the Riemann problems and is
therefore a separate stage and a separate kernel. Usually, boundary conditions are not computationally intensive;
since the boundaries of 2- and 3-dimensional domains are 1- and 2-dimensional
respectively, this stage is computationally inexpensive.
Once the Riemann problems at cell interfaces are solved, the framework will have enough data to
compute wave limiters, use them in the second-order corrections (lines 2,4 in \eqref{LW-update}) and finalizing the second-order update terms by adding them to the first order update terms (lines 1,3 in \eqref{LW-update}) using a limiter function. The full update terms are then applied to the cells. As mentioned earlier, \textsc{cudaclaw}\xspace{} solves multidimensional problems by dimensional splitting~\ref{LW-update}, solving one dimensional problems along each direction. This makes the inner stages of the pipeline ---Riemann solution, limiter and second order computation--- multistage, where they are repeated in each dimension in the corresponding dimension's core kernel.
With the finite volume scheme, the Courant, Friedrichs and Lewy (CFL) condition \eqref{CFL-condition} requires that the waves produced by the Riemann solutions not travel farther than the cells which generated them. This can be ensured by taking a time step small enough to limit the travel distance of the fastest wave to one cell. The framework therefore needs to know the speed of the fastest wave generated, which, on a parallel machine, is obtained with a reduction operation. Since such a step cannot be determined before the wave speeds are known, an estimate based on the wave speeds of the previous time step is used for the current step; if the CFL condition is found to be violated, the step is reverted and the computation redone with a more appropriate time step. The fastest wave speed can be obtained efficiently on a GPU by a reduction over all the wave speeds, but with a large number of waves the reduction can be time consuming. In addition, storing wave speeds in global memory limits the maximum simulation size we can run on the GPU. In \textsc{cudaclaw}\xspace, waves and wave speeds are generated and stored in shared memory (Figure \ref{solver_mem_interaction_fig}), where a local reduction on the wave speeds is performed
and only the local maximum is written to global memory. This greatly reduces the number of elements to be reduced in global memory.
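The following host-side sketch illustrates this adaptive time stepping with rollback. All names here are hypothetical stand-ins for the kernel launches described above, not the actual \textsc{cudaclaw}\xspace{} API:
\begin{verbatim}
#include <utility>
void take_step(float* q_new, float* q_old, float dt); // core kernels
float max_wave_speed();                  // local + global reductions
void advance(float* q_new, float* q_old, float dx, float t_end,
             float desired_cfl, float cfl_limit)
{
    float t = 0.0f;
    float s_max = max_wave_speed();      // initial speed estimate
    while (t < t_end) {
        // estimate dt from the previous step's fastest wave
        float dt = desired_cfl * dx / s_max;
        take_step(q_new, q_old, dt);     // writes only into q_new
        s_max = max_wave_speed();        // speeds of this step
        if (s_max * dt / dx > cfl_limit) {
            // CFL violated: discard q_new and retry with the
            // updated (larger) s_max, which yields a smaller dt
            continue;
        }
        t += dt;
        std::swap(q_new, q_old);         // advance the buffers
    }
}
\end{verbatim}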
\subsection {Point-wise Riemann Solver}
Unlike Clawpack's row-vectorized Riemann solvers, we use point-wise Riemann solvers, defined as functions that operate on two neighboring cells to generate the waves governed by the PDEs being solved. Riemann solvers are more naturally formulated as scalar functions, but are often coded as vectorized functions to improve computational performance. However, as shown in~\cite{manyclaw}, a well-designed computational framework allows application scientists to achieve the performance of architecture-tuned vectorized functions while enjoying the simplicity of working with scalar Riemann kernels that are independent of the underlying memory structure and of specific indexing arithmetic. Point-wise Riemann solvers are a natural fit for the GPU's thread-level parallelism. Figure \ref{point_wise_Riemann_fig} shows how threads can be mapped to solve Riemann problems at cell interfaces, essentially making the block of threads a vector solver. This also allows for further memory savings, as point-wise functions are not required to store the first-order update terms in global memory: these are computed using the generated waves and wave speeds available in local memory and immediately used for the second-order correction and update operations.
\begin{figure}[ht]
\begin{center}
\scalebox{0.33}{\includegraphics{figures/paper_figures/Point-wise_Riemann.png}}
\end{center}
\caption{A block of threads solving Riemann problems at individual interfaces, using point-wise Riemann solvers. Results are stored in shared memory\label{point_wise_Riemann_fig}}
\end{figure}
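To make the interface concrete, the following is a minimal sketch of what a point-wise Riemann solver for the linear acoustics system \eqref{eq:acoustics}, in the normal direction of an interface, might look like as a CUDA device function. The signature and names are illustrative only, not the exact \textsc{cudaclaw}\xspace{} API:
\begin{verbatim}
// Point-wise Riemann solver for 2D linear acoustics, q = (p, u, v).
// Decomposes the jump qr - ql into a left- and a right-going wave.
__device__ void acoustics_rp(const float ql[3], const float qr[3],
                             float waves[2][3], float speeds[2])
{
    const float c = 1.0f;      // sound speed (assumed constant)
    const float Z = 1.0f;      // acoustic impedance rho * c
    float dp = qr[0] - ql[0];  // jump in pressure
    float du = qr[1] - ql[1];  // jump in normal velocity
    float a1 = (-dp + Z * du) / (2.0f * Z);  // left-going strength
    float a2 = ( dp + Z * du) / (2.0f * Z);  // right-going strength
    waves[0][0] = -a1 * Z; waves[0][1] = a1; waves[0][2] = 0.0f;
    waves[1][0] =  a2 * Z; waves[1][1] = a2; waves[1][2] = 0.0f;
    speeds[0] = -c;
    speeds[1] =  c;
}
\end{verbatim}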
\subsection{GPU Technical Considerations}
The memory layout of the computational grid is one of the most important factors in maximizing memory bandwidth utilization and, as a consequence, computational throughput.
As the finest grain of parallelism on the GPU, the threads of a warp all execute the same instructions and therefore request memory at the same time. The hardware can satisfy all memory requests within a warp most efficiently if the requested addresses lie on a contiguous piece of memory and start at an aligned address, resulting in a \emph{coalesced} access. This characteristic of the GPU makes structures of arrays preferable to arrays of structures. With a multidimensional grid of multi-state cells, where the same state is accessed simultaneously across the grid, the best memory layout for a 2D/3D problem is to store each state as a separate 2D/3D grid in row-major order, resulting in the layout depicted in Figure \ref{data_distribution_fig}.
\begin{figure}[ht]
\begin{center}
\scalebox{0.44}{\includegraphics{figures/paper_figures/data_distribution2.png}}
\end{center}
\caption{CUDACLAW 2D data grid layout in GPU memory: each state is stored contiguously in row major order\label{data_distribution_fig}}
\end{figure}
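Concretely, indexing into this structure-of-arrays layout might look as follows (a sketch with illustrative names): state $m$ of cell $(i,j)$ on an \texttt{nx}-by-\texttt{ny} grid lives in its own row-major array, so warp threads with consecutive $i$ touch consecutive addresses, yielding coalesced accesses:
\begin{verbatim}
// Address of state m of cell (i,j); each of the meqn states is a
// separate nx-by-ny row-major array, as in the figure above.
__device__ __forceinline__
float& q_at(float* q, int i, int j, int m, int nx, int ny)
{
    return q[m * nx * ny + j * nx + i];
}
\end{verbatim}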
Another factor that determines the activity distribution of the GPU is thread work assignment. Threads can be assigned to compute the elements necessary to update a single cell, or can compute the Riemann solution at inter-cell interfaces and then cooperate to compute the update terms in \eqref{LW-update}. Although the former scheme, a one-to-one map between threads and cells, offers reduced synchronization and inter-thread communication, the requisite large number of registers per thread makes it infeasible. In this implementation, we opt to map every thread to an interface: each thread solves the Riemann problem at its interface and eventually updates a single cell. The details are discussed in the next section, where we describe how blocks and the threads within them are mapped to the computational grid, and the inner workings of the core kernels, specifically how the output of the Riemann solver is stored and used in the second-order computations and the update.
\section{CUDACLAW System Details}
\label{cudaclaw_system_details_sec}
\begin{figure}[t]
\begin{center}
\scalebox{0.35}{\includegraphics{figures/paper_figures/solver_mem_interact.png}}
\end{center}
\caption{CUDACLAW's communication patterns with global and fast memories; filled green arrows represent block synchronization}
\label{solver_mem_interaction_fig}
\end{figure}
In designing the core kernels, we targeted a minimal global memory footprint at the cost of redundant computations: performing a small fraction of redundant computation in local memories outweighs the communication and synchronization that would otherwise be required. Figure \ref{solver_mem_interaction_fig} shows how the stages of a core kernel are implemented, highlighting the heavy use and reuse of the fast memories, i.e., shared memory and registers. Each stage indicated in the figure depends on the output of the previous stages, allowing us to keep data local and avoid global memory. We now dissect the core kernels, emphasizing the ideas that make this structure possible.
\subsection {Independent Blocks}
One of the key ideas that allow us to take full advantage of the GPU is block
independence. Instead of mapping threads one-to-one to interfaces, we divide the
computational grid into cell subdomains, and assign each block to update a single
subdomain. The block is then allowed to do all computations needed to
advance the cells in its subdomain. This makes the block completely independent from
other blocks. Figure \ref{thread_map_fig} shows a block assigned to a subdomain.
With a second-order scheme, cell data depends on 5 cells or on 4 sets of waves
from the left and right surrounding interfaces. In the figure,
required wave information for the highlighted subdomain is indicated by orange
arrows, precisely the interfaces the threads operate on. As the block is
solving an independent subproblem, one can view it as operating on a full grid in miniature, with
shared memory playing the role of global memory. As shown in Figure \ref{point_wise_Riemann_fig}, Riemann solutions can be stored
in shared memory and used later in the kernel, without having to access global memory. Once the
Riemann problems are solved, the border threads idle while the rest proceed
to the limiter and second-order correction computations, since the border threads lack one of the two neighbouring sets of waves needed in shared memory.
Such a block map incurs redundant computations at the interfaces of adjacent subdomains, as shown in Figure \ref{thread_map2_fig}; however, we can minimize this redundancy by careful choices of block shape and size (Figure \ref{block_overlap_fig}), and obtain better performance than we would without any redundant computations. In fact, having independent blocks frees the framework from handling inter-block communication to ensure correctness. Such communication can only happen at the global memory level, which would require Riemann solutions to be stored in global memory. Storing the wave structure in global memory not only reduces the GPU's capability to solve large problems, it also reduces throughput through additional memory traffic and expensive synchronization, as discussed in the analysis section.
\begin{figure}[t]
\begin{center}
\scalebox{0.35}{\includegraphics{figures/paper_figures/thread_map1.png}}
\end{center}
\caption{A single block is mapped to the highlighted subdomain, threads solve the Riemann problem on interfaces that participate in the update of the subdomain \label{thread_map_fig}}
\end{figure}
Given the wave speeds in shared memory, the block can perform a local reduction and write a single local maximum speed back to global memory. This reduces the final size of the global reduction by a factor equal to the size of the block times the number of states the problem has.
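A minimal sketch of this per-block reduction is shown below (names are illustrative). Each thread is assumed to have stored the absolute speed of its interface in \texttt{smem[tid]}, and \texttt{nthreads} is assumed to be a power of two:
\begin{verbatim}
// Tree reduction over the block's wave speeds in shared memory;
// thread 0 writes the block maximum to this block's global slot.
__device__ void block_max_speed(float* smem, float* block_max,
                                int tid, int nthreads, int block_id)
{
    __syncthreads();           // all speeds written before reducing
    for (int stride = nthreads / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            smem[tid] = fmaxf(smem[tid], smem[tid + stride]);
        __syncthreads();
    }
    if (tid == 0)
        block_max[block_id] = smem[0];
}
\end{verbatim}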
\begin{figure}[t]
\begin{center}
\scalebox{0.35}{\includegraphics{figures/paper_figures/thread_map2.png}}
\end{center}
\caption{Computational and data overlap of two blocks with adjacent subdomains\label{thread_map2_fig}}
\end{figure}
\subsection {Split Kernel Dimension Computation}
Solving both horizontal and vertical Riemann problems in a unified kernel would allow the kernel to read less data. On the GPU, however, such a kernel requires too many registers per thread and too much shared memory per block, since we intend to store the waves and their speeds in shared memory. Moreover, blocks launched for a unified kernel would have to overlap with other blocks on all sides, greatly reducing the number of active threads in the later stages of the solution, namely the limiter and second-order computations. Furthermore, it disallows certain optimizations that are possible with kernels dedicated to a single dimension. These drawbacks make such a kernel impractical; we therefore split the computations over the core kernels, each with its own optimizations for the memory transactions and computations of its dimension.
As each kernel is dedicated to either horizontal or vertical Riemann problems, we can choose the block shapes to minimize the overlap incurred by having independent blocks. Note that we use 2D blocks to launch the core kernels of any dimension, for both 2D and 3D problems, even though computations are done along a single dimension; blocks can therefore be viewed as parallel row-vectorized solvers. In terms of memory access patterns, with the grid states stored in row-major order, computations in the horizontal dimension require overlap along the fastest-moving index, as illustrated by the blocks in Figure \ref{block_overlap_fig}. This makes the blocks of the horizontal core kernel start at misaligned addresses; such misalignment is unavoidable in stencil computations of this nature and occurs in exactly one dimension, while the other kernels enjoy perfect alignment and hence a perfect memory-transaction-per-request ratio.
Figure \ref{block_overlap_fig} shows how a $3\times 5$ block gives twice as much overlap as a $2\times 9$ block. Careful tuning is required to get the best performance, as the computational redundancy is not the only factor at play. We found that choosing the block width so that the width times the data type size covers at least a complete cache line yields the best results.
\begin{figure}[t]
\begin{center}
\scalebox{0.33}{\includegraphics{figures/paper_figures/block_overlap.png}}
\end{center}
\caption{In this simplified example, a $3\times 5$ block gives twice as much overlap as a $2\times 9$ block, for the horizontal core kernel\label{block_overlap_fig}}
\end{figure}
\subsection {Single Stage Update}
For a thread to update a cell in a single stage, the update terms from both of the cell's interfaces must be available to it. A thread that has computed the Riemann solution at a single interface, and used neighbouring wave information to compute the wave limiters, holds two second-order update terms in its registers, one for each cell adjacent to the corresponding interface. To update its cell, a thread also needs the update term residing in its neighbour's registers: its left neighbour in the horizontal kernel and its bottom neighbour in the vertical one. However, a thread's registers cannot be read by any other thread, so this term has to be passed from the neighbour through memory; the closest memory space where such a transaction can happen is shared memory. By this stage all threads in the block are done with their previous computations, which is ensured by a block synchronization, so all wave data in shared memory is available for re-use. Each thread overwrites the shared-memory slot of its first Riemann-generated wave with the update term required by its right (top) neighbour. A thread is then able to update its cell with the proper update terms, one in its registers and one in shared memory.
By contrast, applying the update terms one at a time would require twice as much reading and writing of global memory, decreasing the flop-to-byte ratio of the computation and reducing performance.
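The exchange can be sketched as follows (a simplified fragment with illustrative names; \texttt{upd\_mine} and \texttt{upd\_pass} stand for the two update terms the thread holds for its interface):
\begin{verbatim}
// Pass one update term to the right neighbour through the slot of
// the first Riemann wave, then apply both terms to this cell.
__device__ void single_stage_update(float* q, float* smem,
                                    float upd_mine, float upd_pass,
                                    int cell, int tid, float dtdx)
{
    __syncthreads();       // wave data in smem is no longer needed
    smem[tid] = upd_pass;  // term required by the right neighbour
    __syncthreads();
    if (tid > 0)           // border threads do not update a cell
        q[cell] -= dtdx * (upd_mine + smem[tid - 1]);
}
\end{verbatim}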
\subsection{Display}
As the computational grid resides on the GPU and is operated on there, \textsc{cudaclaw}\xspace{} offers the option of viewing the solution of the defined hyperbolic PDE at interactive speeds. In the time-adaptive context, the display only outputs frames that reach a given time stamp, resulting in a smoothly displayed simulation progression. Figure \ref{shallow_water_example_fig} is a snapshot taken from a shallow water simulation, with reflective boundaries on three edges and a transmissive boundary at the bottom. High-performance graphics is possible via Nvidia's OpenGL-CUDA interoperability libraries, which make the computational buffer available to OpenGL pixel buffer objects (PBOs).
\begin{figure}[t]
\begin{center}
\scalebox{0.5}{\includegraphics{figures/paper_figures/Shallow_water_example.png}}
\end{center}
\caption{A sample shallow water simulation captured as the framework runs\label{shallow_water_example_fig}}
\end{figure}
\section{PyClaw Integration}
\label{pyclaw_integration_sec}
An important aspect of this work is integration into the PyClaw software framework. PyClaw is a Pythonic implementation of the Clawpack algorithms, with performant coupling into the original Fortran Riemann solver routines and a focus on accessibility, extensibility, and scalability. \textsc{cudaclaw}\xspace{} complements several other powerful extensions to the original Clawpack algorithm in PyClaw: SharpClaw and PyWeno provide auto-generated high-order weighted essentially non-oscillatory wave propagation, PetClaw provides performant distributed computing that scales to tens of thousands of processes, and PeanoClaw is a prototype extension that adds adaptive mesh refinement through the Peano framework.
Integration with an existing interface provides many advantages. First, we greatly increase the accessibility of our code and make it more readily deployable. For performance reasons, \textsc{cudaclaw}\xspace{} is necessarily implemented in CUDA C/C++, which requires expert knowledge to modify and to use for setting up customized simulations. PyClaw, on the other hand, is implemented in Python and provides a Pythonic interface. Python is a language designed to be friendly to beginners without sacrificing performance or expressiveness. Python scripts are easier to read and customize, and allow more users to have access to \textsc{cudaclaw}\xspace{}. In addition, PyClaw has an established user base, who can now take advantage of \textsc{cudaclaw}\xspace{} without learning a new system.
PyClaw also features an extensive test and application suite, including the acoustics and shallow water examples described in this paper, that users can rely on. Finally, the PyClaw interface allows users to switch seamlessly between platforms as their needs and access to hardware change.
A documented prototype PyClaw/\textsc{cudaclaw}\xspace{} interface is one of the software artifacts of this work. The prototype is available at \texttt{Clawpack/pyclaw/cudaclaw\_pull\_request}. The prototype features the ability to set up a problem in PyClaw, including the specification of initial and boundary value conditions, then evolve the problem using \textsc{cudaclaw}\xspace's time-stepping algorithm, and verify the computed solution at any subsequent time step. There are several limitations of the current interface prototype which will need to be addressed. Most notably, PyClaw currently does not support single-precision floating point computations, so the Python interface to \textsc{cudaclaw}\xspace{} is limited to double-precision, though it can still calculate in single-precision if desired. Additionally, the high-performance graphics available in \textsc{cudaclaw}\xspace{} are not yet available through the PyClaw interface.
\section{Performance Analysis}
\label{performance_analysis_sec}
\subsection{Roofline Model Analysis}
Analyzing the performance of algorithms on multicore machines is a subtle task, especially with the range
of different multicore architectures with their various computation and communication characteristics.
The roofline model was proposed as an insightful performance model for multicore systems in
\cite{Williams2009Roofline}. The model gives bounds on the possible performance of a given algorithm on a
machine with given characteristics, abstracting away the specifics of the architecture. The model rates
algorithms according to their arithmetic intensity, i.e., their floating point operations per byte ratio, and gives an
upper bound on the attainable Flops/s performance. The model captures the classification of algorithms as memory-
or compute-bound, where the former have low and the latter high arithmetic intensity. In this section,
we analyze the performance of \textsc{cudaclaw}\xspace using the roofline model.
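In its simplest form, the model bounds the attainable floating point throughput $P$ of a kernel with operational intensity $I$ (in Flops per byte) on a machine with peak throughput $P_{\mathrm{peak}}$ and peak memory bandwidth $B$ by
\begin{align*}
P \le \min\left(P_{\mathrm{peak}},\, B \cdot I\right),
\end{align*}
so that memory-bound kernels fall on the sloped segment $B \cdot I$ and compute-bound kernels on the flat segment $P_{\mathrm{peak}}$.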
Our experiments are performed on the Nvidia Tesla C2050 GPU. The theoretical and achievable roofs of the C2050 are shown in Figure \ref{C2050_roofline_achievable_acoustics_fig}. The achievable roof was measured with an artificial microbenchmark that performs solely multiply and add operations, which are executed by the fused multiply-add (FMA) units of the GPU, with all reads and writes performed through coalesced and aligned accesses.
\begin{figure}[t]
\begin{center}
\scalebox{0.44}{\includegraphics{figures/paper_figures/C2050_roofline_achievable_acoustics_2D.png}}
\end{center}
\caption{Achievable roofline, with achieved performance of the various components of an acoustics simulation\label{C2050_roofline_achievable_acoustics_fig}}
\end{figure}
\begin{figure}[t]
\begin{center}
\scalebox{0.44}{\includegraphics{figures/paper_figures/C2050_roofline_achievable_shallow_water_2D.png}}
\end{center}
\caption{Achievable roofline, with achieved performance of the various components of a shallow-water simulation\label{C2050_roofline_achievable_shallow_water_fig}}
\end{figure}
\begin{figure}[t]
\begin{center}
\scalebox{0.44}{\includegraphics{figures/paper_figures/C2050_roofline_achievable_acoustics_3D.png}}
\end{center}
\caption{Achievable roofline, with achieved performance of the various components of a 3D acoustics simulation\label{C2050_roofline_achievable_acoustics_3D_fig}}
\end{figure}
We use the linear acoustics and shallow water flow simulations as our tests to gauge the performance of the framework. We use Nvidia's Nsight analysis tools to measure the floating point operations done, and the amount of memory read and written between the L2 cache and global device memory. Tables \ref{Acoustics_full_kernel_analysis_tbl} and \ref{Shallow_w_full_kernel_analysis_tbl} show, for each of the directional updates of the acoustics and shallow water problems respectively, the total memory size transferred, the number of floating point operations done, and the weighted average of operational intensity, for a problem size of $1024\times 1024$.
\begin{table}[!h]
\caption{2D acoustics memory (MB) and floating point operations data, problem size $=1024\times 1024$}
\label{Acoustics_full_kernel_analysis_tbl}
{
\setlength{\extrarowheight}{1pt}
\begin{tabular}{ |l|c|c|c| }
\hline
Kernel &
Memory &
MFlops &
Operational Intensity (Flop/byte) \\
\hline
Horizontal & $35.1$ & $103$ & \multirow{2}{*}{$2.77$} \\\cline{1-3}
Vertical & $41.4$ & $118$ & \\\hline
\end{tabular}
}
\end{table}
\begin{table}[!h]
\caption{Shallow water memory (MB) and floating point operations data, problem size $=1024\times 1024$}
\label{Shallow_w_full_kernel_analysis_tbl}
{
\setlength{\extrarowheight}{1pt}
\begin{tabular}{ |l|c|c|c| }
\hline
Kernel &
Memory &
MFlops &
Operational Intensity (Flop/byte) \\
\hline
Horizontal & $27.0$ & $153$ & \multirow{2}{*}{$4.90$} \\\cline{1-3}
Vertical & $37.0$ & $175$ & \\\hline
\end{tabular}
}
\end{table}
\begin{table}[!h]
\caption{3D acoustics memory (MB) and floating point operations data, problem size $=96\times 96\times 96$}
\label{Acoustics3D_full_kernel_analysis_tbl}
{
\setlength{\extrarowheight}{1pt}
\begin{tabular}{ |l|c|c|c| }
\hline
Kernel &
Memory &
MFlops &
Operational Intensity (Flop/byte) \\
\hline
Horizontal & $34.5$ & $82$ & \multirow{3}{*}{$2.11$} \\\cline{1-3}
Vertical & $40.5$ & $86$ & \\\cline{1-3}
Depth & $42.5$ & $86$ & \\\hline
\end{tabular}
}
\end{table}
The operational intensity numbers in Tables \ref{Acoustics_full_kernel_analysis_tbl} and \ref{Shallow_w_full_kernel_analysis_tbl} give only an average count of operations over the whole simulation. \textsc{cudaclaw}\xspace's main solver kernels are composed of several stages: data reading, Riemann solution, limiter computation, first- and second-order flux computations, local reduction, shared-memory data passing, and update, each with a very different flop-to-byte ratio. Of these, only the Riemann solver and update stages read from and write to global memory, respectively. The other stages largely use registers and shared memory, as shown in Figure \ref{solver_mem_interaction_fig}.
To better assess the framework's performance, we isolate the parts which deal with global memory from the parts that do not, and measure their performance separately. The operational intensities of the Riemann solver portions of the core kernels, together with the corresponding state update, are shown in Tables \ref{Acoustics_stripped_kernel_analysis_tbl} and \ref{Shallow_w_stripped_kernel_analysis_tbl} for the 2D acoustics and shallow water simulations, and in Table \ref{Acoustics3D_stripped_kernel_analysis_tbl} for the 3D acoustics simulation. We situate the Riemann solvers, the second-order limiting and corrections, and the overall solver in Figures \ref{C2050_roofline_achievable_acoustics_fig}, \ref{C2050_roofline_achievable_shallow_water_fig}, and \ref{C2050_roofline_achievable_acoustics_3D_fig}, where the performance of the kernels is shown against the achievable roof of the Tesla C2050. The second-order corrections achieve higher performance, as they are primarily compute-bound and limited only by their floating point operation mix, while the Riemann solvers are memory-bound in all shown problems.
\begin{table}[!h]
\caption{Riemann solver acoustics horizontal and vertical kernels' memory (MB) and floating point operation data}
\label{Acoustics_stripped_kernel_analysis_tbl}
{
\setlength{\extrarowheight}{1pt}
\begin{tabular}{ |l|c|c|c| }
\hline
Kernel &
Memory &
MFlops &
Operation Intensity \\
\hline
Horizontal & $35.0$ & $52$ & \multirow{2}{*}{$1.425$} \\\cline{1-3}
Vertical & $41.3$ & $62$ & \\\hline
\end{tabular}
}
\end{table}
\begin{table}[!h]
\caption{Riemann solver for shallow water horizontal and vertical kernels' memory (MB) and floating point operation data}
\label{Shallow_w_stripped_kernel_analysis_tbl}
{
\setlength{\extrarowheight}{1pt}
\begin{tabular}{ |l|c|c|c| }
\hline
Kernel &
Memory &
MFlops &
Operation Intensity \\
\hline
Horizontal & $27.2$ & $76$ & \multirow{2}{*}{$2.49$} \\\cline{1-3}
Vertical & $36.8$ & $91$ & \\\hline
\end{tabular}
}
\end{table}
\begin{table}[!h]
\caption{Riemann solver for 3D acoustics horizontal, vertical and depth kernels' memory (MB) and floating point operation data}
\label{Acoustics3D_stripped_kernel_analysis_tbl}
{
\setlength{\extrarowheight}{1pt}
\begin{tabular}{ |l|c|c|c| }
\hline
Kernel &
Memory &
MFlops &
Operation Intensity \\
\hline
Horizontal & $34.5$ & $49$ & \multirow{3}{*}{$1.21$} \\\cline{1-3}
Vertical & $40.5$ & $52$ & \\\cline{1-3}
Depth & $42.5$ & $44$ & \\\hline
\end{tabular}
}
\end{table}
As can be seen from the plots, the various portions of \textsc{cudaclaw}\xspace{} achieve very respectable performance---near the peak performance achievable for their arithmetic intensity.
The observed performance gap between \textsc{cudaclaw}\xspace{} and the achievable roof can be attributed to a few factors. First, the achievable roof was measured by running an artificial kernel consisting only of addition and multiplication operations, which are executed together by the FMA units of the processors. This mix does not reflect the floating point composition of our kernels, of which about $51\%$ are simple addition or multiplication operations and $14\%$ are special functions (square root, division, \ldots). Although special functions are executed by the SFUs of the GPU and can in theory run simultaneously with the other compute units, in practice data dependencies force the other units to wait or stall. A separate test showed that even without data dependencies the throughput of special functions did not exceed $97$~GFlop/s; hence any kernel with a significant portion of such functions is severely limited by their low throughput. Second, all stages involve a large number of integer operations required for addressing, amounting to up to $60\%$ of all operations, and the kernels require inter-stage and intra-stage (local reduction) synchronizations, which increase their execution time. Third, a stencil-based access pattern of this nature must sacrifice memory coalescing in one dimension, in our case the horizontal direction, and therefore cannot use the full memory bandwidth. This shows in the number of transactions required by the horizontal kernel, which needs almost two memory transactions per request, whereas the vertical kernels achieve a perfect one transaction per request.
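For reference, the roof we compare against is the standard roofline bound: attainable throughput is the minimum of the peak floating point rate and the product of operational intensity and memory bandwidth. The sketch below evaluates this bound at the measured intensities; the peak and bandwidth values are the nominal single precision specifications of the Tesla C2050 and should be read as assumptions of this illustration.
\begin{verbatim}
# Roofline bound: attainable = min(peak, intensity * bandwidth).
# Nominal Tesla C2050 single precision figures (assumptions):
PEAK_GFLOPS = 1030.0   # GFlop/s
BANDWIDTH = 144.0      # GB/s global memory bandwidth

def roofline(intensity):
    """Attainable GFlop/s at a given operational intensity (Flop/byte)."""
    return min(PEAK_GFLOPS, intensity * BANDWIDTH)

for name, oi in [("2D acoustics solver", 2.77),
                 ("2D shallow water solver", 4.90),
                 ("acoustics Riemann part", 1.425),
                 ("shallow water Riemann part", 2.49)]:
    print("%-26s OI %.3f -> <= %.0f GFlop/s" % (name, oi, roofline(oi)))
# All four intensities lie left of the ridge point (about 7.2
# Flop/byte), so the ceiling here is set by memory bandwidth.
\end{verbatim}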
\subsection{Numerical Experiments}
For our experiments we use two CUDA graphics cards: the Tesla C2050 and the GTX 460. These cards cover the mid to high end and low to mid ranges respectively. Tables \ref{exp_system1} and \ref{exp_system2} summarize the systems on which we ran our experiments.
\begin{table}[!h]
\caption{Experimental system 1}
\label{exp_system1}
\begin{center}
\begin{tabular}{ |l|l|l| }
\hline
Component & Our System & Notes \\
\hline
CPU & Intel i7 950 3.07 GHz & - \\
GPU & Nvidia Tesla C2050 (3GB)& ECC Off\\
RAM & 4GB System RAM & - \\
OS & Windows 7 & -\\
CUDA & CUDA 5.0 & - \\
Platform & Microsoft VS 2010 & - \\
Debugger & Nvidia Nsight 2.2& - \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!h]
\caption{Experimental system 2}
\label{exp_system2}
\begin{center}
\begin{tabular}{ |l|l|l| }
\hline
Component & Our System & Notes \\
\hline
CPU & Intel i7 920 2.66 GHz & - \\
GPU 0 & Nvidia GTX 460 (1GB)& Display\\
GPU 1 & Nvidia GTX 460 (1GB)& Computation\\
RAM & 6GB System RAM & - \\
OS & Windows 7 & - \\
CUDA & CUDA 4.2 & - \\
Platform & Microsoft VS 2010 & - \\
Debugger & Nvidia Nsight 2.2& - \\
\hline
\end{tabular}
\end{center}
\end{table}
We measure the average time per step for acoustics and shallow water simulations of size $1024\times 1024$ on a CPU and on both of our GPUs, in both single and double precision. Comparative double precision results are shown in Figures \ref{acoustics_rel_perf_fig} and \ref{shallow_water_rel_perf_fig}. The double precision performance of the GTX 460 is lacking because it has a Fermi 2.1 SM with low double precision throughput. Single precision throughput of the GPUs is substantially higher, with the Tesla C2050 performing at 158 and 182 GFlop/s on the 2D acoustics and shallow water simulations respectively. In Figures \ref{acoustics_size_scaling_fig} and \ref{shallow_water_size_scaling_fig}, we show how the GPU handles problems of increasing size, with some superlinear scaling behaviour due to better work distribution and latency hiding at the larger sizes.
\begin{figure}[t]
\begin{center}
\scalebox{0.5}{\includegraphics{figures/paper_figures/Acoustics_rel_perf.png}}
\end{center}
\caption{Double precision comparison of an acoustics simulation between a dual quad-core Xeon X5550 running PyClaw, a GTX 460, and a Tesla C2050 running CUDACLAW.\label{acoustics_rel_perf_fig} Single-precision computations on the GPUs give a substantial speedup as described in the text.}
\end{figure}
\begin{figure}[t]
\begin{center}
\scalebox{0.5}{\includegraphics{figures/paper_figures/Shallow_water_rel_perf.png}}
\end{center}
\caption{Double precision comparison of a shallow water simulation between a dual quad-core Xeon X5550 running PyClaw, a GTX 460, and a Tesla C2050 running CUDACLAW.\label{shallow_water_rel_perf_fig} Single-precision computations on the GPUs give a substantial speedup as described in the text.}
\end{figure}
\begin{figure}[t]
\begin{center}
\scalebox{0.45}{\includegraphics{figures/paper_figures/size_scaling_acoustics.png}}
\end{center}
\caption{Problem size scaling achieved by CUDACLAW for 2D/3D acoustics simulations\label{acoustics_size_scaling_fig}}
\end{figure}
\begin{figure}[t]
\begin{center}
\scalebox{0.45}{\includegraphics{figures/paper_figures/size_scaling_shallow_water.png}}
\end{center}
\caption{Problem size scaling achieved by CUDACLAW for the shallow water simulation\label{shallow_water_size_scaling_fig}}
\end{figure}
We compare our numbers to the results obtained in \cite{Rostrup2010}, where the shallow water equations were solved using a manually tuned code on the Tesla C2050. On a $1000\times1000$ grid, their average time step takes 3.32 ms in single precision and 8.10 ms in double precision. In comparison, \textsc{cudaclaw}\xspace{} takes 1.9 ms per time step on average in single precision and 9.2 ms in double precision, showing that \textsc{cudaclaw}\xspace{} performs on par with manually tuned code. In addition, the reduced memory footprint of \textsc{cudaclaw}\xspace{} is evident from the fact that it can run these shallow water simulations on grids of size greater than $10,000\times 10,000$ (Figure \ref{shallow_water_size_scaling_fig}), while the simulation in \cite{Rostrup2010} can only reach grids smaller than $5,000\times 5,000$.
\section{Future Work}
\label{future_work_conclusion_sec}
We have plans to extend this work in a number of directions. One is the automatic generation of the Riemann solvers from a mathematical description of the PDEs, following the example of FEniCS~\cite{fenics} for elliptic PDEs. Even though we have eliminated the need to use CUDA constructs when writing Riemann solvers in \textsc{cudaclaw}\xspace{}, serial procedural code is still needed when solving a new set of PDEs. A module to generate this procedural code on demand from a declarative specification of the PDEs would likely make \textsc{cudaclaw}\xspace{} accessible to a much broader set of users. Similarly, boundary conditions, now limited to those provided by \textsc{cudaclaw}\xspace, should be specifiable by algebraic equations.
Adaptive spatial refinement is an area of significant practical importance. Our framework is currently limited by the initial
grid size at launch which could potentially be insufficient for certain problems that require high resolutions in localized
regions such as shock wave fronts. Using a high fixed resolution everywhere is wasteful in terms of memory usage and is
generally impractical. The solution is to use adaptive mesh refinement (AMR) where we detect regions where higher resolutions
would be needed, and refine the mesh in those regions. The adaptive nature of AMR requires dynamic memory structure creation
and manipulation. With the advent of the Kepler architecture and CUDA 5, launching kernels from within other kernels (dynamic parallelism) has
become possible, and adaptive mesh refinement can make direct use of this new feature.
Finally, multi-GPU implementations are needed to enable \textsc{cudaclaw}\xspace{} on modern supercomputers that integrate GPU
coprocessors in their nodes. Using multiple GPUs adds a layer of complications in dealing with data splitting, communication,
and load balancing. We project from our current experience that the best way to implement such a distributed computation is to
give each node an independent subproblem for some number of time steps. Although inter-node communication latency increases, multiple local time steps can keep the nodes busy for a longer period of time, and with careful balancing,
computation and communication can almost completely overlap.
\section*{Acknowledgements.} We would like to thank Rio Yokota for insights into the roofline model, Jeff Stuart for an early adaptation of Clawpack on the GPU, Wajih Bou Karam for his help with graphics inter-operability, and Andy Terrel for useful discussions on manycore architectures.
\addcontentsline{toc}{chapter}{References}
\bibliographystyle{plain}
\section{Introduction}
Load balancing is an important requisite for the efficient utilization of
computational resources in parallel and distributed systems. The aim is to
reallocate the load such that at the end each node has approximately the
same load. Load balancing problems have various applications, \emph{e.g.}, for
scheduling~\citep{Surana06}, routing~\citep{Cyb89}, and numerical computation~\citep{Zhanga09,Williams91}.
Typically, load balancing algorithms iteratively exchange load along edges of an
undirected connected graph. In the natural \emph{diffusion paradigm}, an arbitrary amount of load
can be sent along each edge at each step~\cite{RSW98,MuthukrishnanGS98}.
For the \emph{idealized} case of divisible
load, a popular diffusion algorithm is the first-order-scheme
by \citet{SubramanianScherson94} whose convergence rate is fairly well captured in terms of the spectral gap~\citep{Lovasz93random}.
However, for many applications the assumption of divisible load may be invalid.
Therefore, we consider the \emph{discrete} case where the load can only be decomposed
in indivisible unit-size tokens. It is a very natural question by how much this
integrality assumption decreases the efficiency of load balancing. In fact, finding
a precise quantitative relationship
between the discrete and the idealized case is an open problem posed by many authors, \emph{e.g.}, \citep{GhoshLMMPRRTZ99,GM96,LovaszWinkler95,MuthukrishnanGS98,
SubramanianScherson94,EMS06,FS09,RSW98}.
A simple method for approximating the idealized process was analyzed by \citet*{RSW98}.
Their approach (which we will call ``RSW-algorithm'')
is to round down the fractional flow of the idealized process.
They introduce a very useful parameter of the graph called \emph{local divergence}
and prove that it gives tight upper bounds on the deviation between the idealized process
and their discrete process. However,
one drawback of the RSW-algorithm is that it can end up in
rather unbalanced states (cf.~\proref{lowerbound}).
To overcome this problem, Friedrich and Sauerwald
analyzed a new algorithm based on randomized rounding \citep{FS09}. On many graphs, this algorithm approximates the idealized case much better than the always-round-down approach of the RSW-algorithm.
A natural question is whether this randomized algorithm can be derandomized
without sacrificing on its performance.
For the graphs considered in this work, we answer this question to the positive. We introduce a \emph{quasirandom load balancing algorithm} which rounds
up or down deterministically such that the accumulated rounding errors
on each edge are minimized.
Our approach follows the concept of quasirandomness as it deterministically imitates the
expected behavior of its random counterpart.
\paragraph{Our Results}
We focus on two network topologies: hypercubes and torus graphs.
Both have been intensively studied in the
context of load balancing (see \emph{e.g.},~\citep{RSW98,FS09,JH03,P89,GPR99}).
We measure the smoothness
of the load by the so-called \emph{discrepancy} (see \emph{e.g.}~\cite{RSW98,FS09,GhoshLMMPRRTZ99,EMS06})
which is the difference between the maximum
and minimum load among all nodes.
For \emph{$d$\nobreakdash-dimensional torus graphs}
we prove that
our quasirandom algorithm approximates the idealized process up to
an additive constant (\thmref{torus}).
More precisely, for all initial load distributions and
time steps, the load of any vertex in the discrete process differs from
the respective load in the idealized process only by a constant.
This holds even for non-uniform torus graphs with different side-lengths
(cf.~\defref{torus}).
For the uniform torus graph our results are
to be compared with a deviation
of $\Omega(\operatorname{polylog}(n))$
for the randomized rounding approach (\thmref{toruslower})
and
$\Omega(n^{1/d})$ for the RSW-algorithm (\proref{lowerbound}).
Hence, although our approach is deterministic, it even improves
over its random counterpart.
Starting with an initial discrepancy of $K$,
the idealized process reaches a constant discrepancy after $\mathcal{O}(n^{2/d}\,\log(Kn))$ steps
(cf.~\corref{ideal}).
Hence the same holds for our quasirandom algorithm, which
makes it the first algorithm for the discrete case which is optimal both
in time and discrepancy.
For the \emph{hypercube} we prove a deviation
of our quasirandom algorithm
from the idealized process of $\Theta(\log n)$
(\thmref{cube}). For this topology
we also show that the deviation of the random approach
is $\Omega(\log n)$
(\thmref{cuberandomlower})
while the deviation of the RSW-algorithm is $\Omega(\log^2 n)$
(\proref{lowerbound}).
Again, our quasirandom algorithm is at least as good as the randomized rounding
algorithm and substantially better than
the RSW-algorithm~\citep{RSW98}. In particular, we obtain that for any load vector with initial discrepancy~$K$, our quasirandom algorithm achieves a discrepancy of at most $\log n$ after $\mathcal{O}(\log n \, \log (Kn))$ steps.
\paragraph{Our Techniques}
Instead of analyzing our quasirandom algorithm directly,
we examine a new generic class of load balancing algorithms that
we call \emph{bounded error diffusion} (BED).
Roughly speaking, in a BED algorithm the \emph{accumulated} rounding error on each edge
is bounded by some constant at all times.
This class includes our quasirandom algorithm.
The starting point of \cite{RSW98} and \cite{FS09} as well as our paper is to express the deviation
from the idealized case by a certain sum of weighted rounding errors (\eq{StandardAnsatz}).
In this sum, the rounding errors are weighted by transition probabilities of a certain random walk.
Roughly speaking, \citet{RSW98} estimate this sum directly by adding up all transition probabilities.
In the randomized approach of \cite{FS09}, the sum is bounded by Chernoff-type inequalities relying
on independent rounding decisions. We take a completely different approach and prove that the
transition probabilities between two fixed vertices are unimodal in time (cf.~\thmref{cubeunimodal} for the hypercube).
This allows us to upper bound the complete sum by its maximal summand (\lemref{betragkleinerk}) for BED algorithms.
The intriguing combinatorial property of {\em unimodality} is the heart of our proof and seems to be
the main reason why we can outperform the previous approaches.
Even though unimodality has a one-line definition,
it has become apparent that proving it can be
a very challenging task requiring
intricate combinatorial constructions or refined mathematical tools~(see
\emph{e.g.}\ \citeauthor{Stanley1989}'s survey~\citep{Stanley1989}).
It turns out that this is also the case for the considered transition probabilities of torus graphs and hypercubes.
The reason is that explicit formulas seem to be intractable and
typical approximations (\emph{e.g.}\ Poissonization~\citep{DGM90})
are way too loose to compare consecutive transition probabilities.
For the $d$\nobreakdash-dimensional torus, we use a
local central limit theorem to approximate the transition probabilities by a
multivariate normal distribution which is known to be unimodal.
On hypercubes the above method fails as several inequalities for the torus
graph are only true for constant~$d$.
However, we can employ the additional symmetries
of the hypercube
to prove
unimodality of the transition probabilities by relating it to a random walk on a weighted path. Somewhat surprisingly,
this intriguing property was unknown before, although random walks on hypercubes have been intensively studied~(see \emph{e.g.}~\cite{DGM90,KLY93,MS04}).
We prove this unimodality result by establishing an interesting result
concerning first-passage probabilities
of a random walk
on paths with arbitrary transition
probabilities: If the loop probabilities are at least~$1/2$, then the
first-passage probability distribution can be expressed as a convolution of independent
geometric distributions. In particular, this implies that these probabilities
are log-concave. Reducing the random walk on a hypercube to a random walk on a
weighted path, we obtain that the transition probabilities on the hypercube are unimodal.
Estimating the maximum probabilities via a balls-and-bins-process, we finally obtain our upper bound on the deviation for the hypercube.
We believe that our probabilistic result for paths is of
independent interest, as random walks on the paths are among the most
extensively studied stochastic processes. Moreover, many analyses of randomized
algorithms can be reduced to such random walks (see \emph{e.g.}~\cite[Thm.~6.1]{MR95}).
\paragraph{Related Work}
In the approach of \citet{ES10}
certain interacting random walks are used to
reduce the load deviation.
This randomized algorithm achieves a constant additive error between the maximum and average load on hypercubes and torus graphs
in time $\mathcal{O}( \log (Kn)/(1-\lambda_2))$, where $\lambda_2$ is the second largest eigenvalue of the diffusion matrix. However, in contrast to our deterministic algorithm, this algorithm is less natural and more complicated (e.g., the nodes must know an accurate estimate of the average load).
\citet{AielloAMR93}
and \citet{GhoshLMMPRRTZ99} studied balancing algorithms where in each time step
at most one token is transmitted over each edge.
Due to this restriction, these
algorithms take substantially more time, i.e., they run in time at least linear in the initial
discrepancy~$K$.
Nonetheless, the best known bounds on the discrepancy are only
polynomial in~$n$ for the torus and $\Omega(\log^5 n)$ for the hypercube~\cite{GhoshLMMPRRTZ99}.
In another common model, nodes are only allowed to exchange load
with at most one neighbor in each time step, see \emph{e.g.},~\cite{GM96,RSW98,FS09}. In fact,
the afore-mentioned randomized rounding approach \cite{FS09} was analyzed in this model.
However,
the idea of randomly rounding the fractional flow such that the expected error is
zero naturally extends to our diffusive setting where a node may exchange load with all neighbors simultaneously.
\emph{Quasirandomness} describes a deterministic process which imitates certain
properties of a random process. Our quasirandom load balancing algorithm
imitates the property that rounding up and down the flow between two vertices
occurs roughly equally often by a deterministic process which minimizes these
rounding errors directly. This way, we keep the desired property that the ``expected''
accumulated rounding error is zero, but remove almost all of its (undesired) variance.
Similar concepts have been used for deterministic random walks~\cite{CooperSpencer},
external mergesort~\citep{BarveGV97},
and quasirandom rumor spreading~\cite{DFS08}. The latter work presents a quasirandom
algorithm which is able to broadcast a piece of information at least as fast as
its random counterpart on the hypercube and most random graphs. However,
in case of rumor spreading the quasirandom protocol is just slightly faster than the random protocol
while the quasirandom load-balancing algorithm presented here
substantially outperforms its random counterpart.
\paragraph{Organization of the paper}
In \secref{algorithms}, we give a description of our bounded error diffusion
(BED) model. For a better comparison, we present some results for
the previous algorithms of \cite{FS09} and \cite{RSW98} in \secref{others}.
In \secref{basic}, we introduce our basic method
which is used in \secrefs{cube}{torus} to analyze BED algorithms
on hypercubes and torus graphs, respectively.
\section{Model and algorithms}
\label{sec:algorithms}
We aim at balancing load on a connected, undirected graph $G=(V,E)$.
Denote by $\deg(i)$ the \emph{degree} of node $i\in V$ and let
$\Delta=\Delta(G)=\max_{i\in V} \deg(i)$
be the maximum degree of~$G$.
The balancing process is governed by an ergodic, doubly-stochastic diffusion matrix~$\ensuremath{\mathbf{P}}$
with
\[
\ensuremath{\mathbf{P}}_{i,j} =
\begin{cases}
\tfrac{1}{2 \Delta} & \text{if $\{i,j\} \in E$,}\\
1-\tfrac{\deg(i)}{2\Delta} & \text{if $i=j$,}\\
0 & \text{otherwise.}
\end{cases}
\]
Let $x^{(t)}$ be the load-vector of the vertices at step~$t$
(or more precisely, after the completion of the balancing procedure at step~$t$).
The \emph{discrepancy}
of such a (row) vector $x$ is $\max_{i,j} ( x_i - x_j )$,
and the discrepancy at step~$0$ is called initial discrepancy~$K$.
\paragraph{The idealized process}
In one time step
each pair $(i, j)$ of adjacent vertices shifts
divisible
tokens between~$i$ and~$j$.
We have the following iteration, $x^{(t)} = x^{(t-1)} \ensuremath{\mathbf{P}}$ and inductively,
$x^{(t)} = x^{(0)} \ensuremath{\mathbf{P}}^{t}$. Equivalently, for any edge $\{i,j\} \in E$ and step~$t$, the flow from~$i$ to~$j$ at step~$t$ is
$\ensuremath{\mathbf{P}}_{i,j} x_i^{(t-1)} - \ensuremath{\mathbf{P}}_{j,i} x_j^{(t-1)}$. Note that the symmetry of~$\ensuremath{\mathbf{P}}$ implies that for $t \rightarrow \infty$, $x^{(t)}$ converges towards the uniform vector $(\ensuremath{\bar{x}},\ensuremath{\bar{x}},\ldots,\ensuremath{\bar{x}})$, where $\ensuremath{\bar{x}}$ denotes the average load.
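In code, one step of the idealized process is a single matrix-vector product; a minimal sketch, assuming \texttt{numpy} and the diffusion matrix $\ensuremath{\mathbf{P}}$ defined above:
\begin{verbatim}
import numpy as np

def idealized(x0, P, t):
    """Idealized diffusion with divisible load: x^(t) = x^(0) P^t."""
    return np.asarray(x0) @ np.linalg.matrix_power(P, t)
\end{verbatim}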
\paragraph{The discrete process}
There are different ways how to handle non-divisible tokens.
We define the following \emph{bounded error diffusion} (BED) model.
Let $\Phi_{i,j}^{(t)}$ denote the integral flow from~$i$ to~$j$ at time~$t$.
As $\Phi_{i,j}^{(t)}=-\Phi_{j,i}^{(t)}$, we have
$x_i^{(t)} = x_i^{(t-1)} - \sum_{j\colon\{i,j\}\in E} \Phi_{i,j}^{(t)}$.
Let
$e_{i,j}^{(t)} := \big(\ensuremath{\mathbf{P}}_{i,j} x_i^{(t-1)} - \ensuremath{\mathbf{P}}_{j,i} x_j^{(t-1)}\big) - \Phi_{i,j}^{(t)}$
be the excess load (or lack of load) allocated to~$i$ as a result
of rounding on edge $\{i,j\}$ in time step~$t$.
Note that for all vertices~$i$,
$x_i^{(t)} = (x^{(t-1)} \ensuremath{\mathbf{P}})_i + \sum_{j\colon\{i,j\}\in E} e_{i,j}^{(t)}$.
Let now~$\Lambda$ be an upper bound for
the accumulated rounding errors (deviation from the idealized process),
that is,
$\big|\sum_{s=1}^t e_{i,j}^{(s)} \big|\leq \Lambda$ for all $t \in {\mathbb{N}}$
and $\{i,j\}\in E$.
All our bounds still hold if~$\Lambda$ is a function of~$n$ and/or~$t$,
but we only say that an algorithm is a \emph{BED algorithm} if~$\Lambda$
is a constant.
Our new \emph{quasirandom diffusion algorithm}
chooses for $\ensuremath{\mathbf{P}}_{i,j} x_i^{(t-1)} \geq \ensuremath{\mathbf{P}}_{j,i} x_j^{(t-1)}$
the flow $\Phi_{i,j}^{(t)}$ from~$i$ to~$j$ to be either
$\Phi_{i,j}^{(t)}=\big\lfloor \ensuremath{\mathbf{P}}_{i,j} x_i^{(t-1)} - \ensuremath{\mathbf{P}}_{j,i} x_j^{(t-1)} \big\rfloor$
or
$\Phi_{i,j}^{(t)}=\big\lceil \ensuremath{\mathbf{P}}_{i,j} x_i^{(t-1)} - \ensuremath{\mathbf{P}}_{j,i} x_j^{(t-1)} \big\rceil$
such that
$\big|\sum_{s=1}^t e_{i,j}^{(s)} \big|$
is minimized.
This yields a BED algorithm with $\Lambda\leq1/2$ and
can be implemented with $\lceil\log_2 \Delta\rceil$ storage per edge.
Note that one can imagine various other natural (deterministic or randomized)
BED algorithms. To do so, the algorithm only
has to ensure that the errors do not add up to more than a constant.
With above notation,
the \emph{RSW-algorithm} uses
$\Phi_{i,j}^{(t)}=\big\lfloor \ensuremath{\mathbf{P}}_{i,j} x_i^{(t-1)} - \ensuremath{\mathbf{P}}_{j,i} x_j^{(t-1)} \big\rfloor$,
provided that $\ensuremath{\mathbf{P}}_{i,j} x_i^{(t-1)} \geq \ensuremath{\mathbf{P}}_{j,i} x_j^{(t-1)}$.
In other words, the flow on each edge is always rounded down.
In our BED framework this would imply a~$\Lambda$ of order~$T$ after~$T$ time steps.
A simple \emph{randomized rounding diffusion algorithm}
chooses for $\ensuremath{\mathbf{P}}_{i,j} x_i^{(t-1)} \geq \ensuremath{\mathbf{P}}_{j,i} x_j^{(t-1)}$ the flow
$\Phi_{i,j}^{(t)}$ as the randomized rounding of
$\ensuremath{\mathbf{P}}_{i,j} x_i^{(t-1)} - \ensuremath{\mathbf{P}}_{j,i} x_j^{(t-1)}$, that is, it
rounds up with probability $(\ensuremath{\mathbf{P}}_{i,j} x_i^{(t-1)} - \ensuremath{\mathbf{P}}_{j,i} x_j^{(t-1)}) -
\big\lfloor \ensuremath{\mathbf{P}}_{i,j} x_i^{(t-1)} - \ensuremath{\mathbf{P}}_{j,i} x_j^{(t-1)} \big\rfloor$ and
rounds down otherwise.
This typically achieves an error
$\Lambda$ of order $\sqrt{T}$ after~$T$ time steps.
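To make the three rounding rules concrete, the following Python sketch (an illustration of the model, not a reference implementation) performs one discrete diffusion step under either rule; the dictionary \texttt{err} stores the accumulated rounding error $\sum_{s\leq t} e_{i,j}^{(s)}$ of every edge.
\begin{verbatim}
import math, random

def diffusion_step(x, P, edges, err, rule):
    """One discrete diffusion step. edges holds pairs (i, j) with
    i < j; err[(i, j)] accumulates the rounding errors e_{i,j}."""
    flow = {}
    for (i, j) in edges:
        f = P[i][j] * x[i] - P[j][i] * x[j]  # idealized flow i -> j
        lo, hi = math.floor(f), math.ceil(f)
        if rule == "rsw":            # round the positive direction down,
            phi = lo if f >= 0 else hi        # i.e. truncate towards zero
        elif rule == "randomized":   # round up with probability frac(f)
            phi = lo + (1 if random.random() < f - lo else 0)
        else:                        # quasirandom: keep |accumulated
            e = err[(i, j)]          # rounding error| minimal
            phi = lo if abs(e + f - lo) <= abs(e + f - hi) else hi
        err[(i, j)] += f - phi
        flow[(i, j)] = phi
    for (i, j), phi in flow.items():
        x[i] -= phi
        x[j] += phi
    return x

# Usage: err = {e: 0.0 for e in edges}. With rule="quasirandom",
# max(abs(v) for v in err.values()) never exceeds 1/2.
\end{verbatim}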
\paragraph{Handling Negative Loads}
Unless there is a lower bound on the minimum load of a vertex,
negative loads may occur during the balancing procedure. In the following, we describe a simple approach to cope with this problem.
Consider a graph~$G$ for which we can prove a deviation of at most
$\gamma$ from the idealized process. Let $x^{(0)}$ be the initial load vector
with an average load of~$\ensuremath{\bar{x}}$. Then at the beginning of the balancing
procedure, each node generates~$\gamma$ additional (virtual) tokens. During the
balancing procedure, these tokens are regarded as common tokens, but at the end
they are ignored. First observe that since the minimum load at each node in the idealized
process is at least~$\gamma$, it follows that at each step, every node has at
least a load of zero in the discrete process. Since each node has a load of
$\ensuremath{\bar{x}} + \mathcal{O}(\gamma)$ at the end, we end up with a load distribution where the
maximum load is still $\ensuremath{\bar{x}} + \mathcal{O}(\gamma)$ (ignoring the virtual tokens).
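A minimal sketch of this wrapper, where \texttt{run\_balancer} is a hypothetical stand-in for an arbitrary discrete balancing routine and ``ignoring'' the virtual tokens is realized by discounting~$\gamma$ per node at the end:
\begin{verbatim}
def balance_nonnegative(x, gamma, run_balancer):
    """Pad every node with gamma virtual tokens, balance, and discount
    them again at the end. run_balancer is a hypothetical stand-in for
    an arbitrary discrete balancing routine."""
    padded = [load + gamma for load in x]
    return [load - gamma for load in run_balancer(padded)]
\end{verbatim}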
\section{Basic method to analyze our quasirandom algorithm}
\label{sec:basic}
To bound runtime and discrepancy of a BED algorithm,
we always bound the deviation between the idealized model and the discrete model
which is an important measure in its own right.
For this, let $x_\ell^{(t)}$ denote the load on vertex~$\ell$ in step~$t$ in the discrete model and
$\xi_\ell^{(t)}$ denote the load on vertex~$\ell$ in step~$t$ in the idealized model. We assume that the discrete and idealized model start with the same initial load, that is, $x^{(0)}=\xi^{(0)}$.
As derived in \citet{RSW98}, their difference can be written as
\begin{align}
\label{eq:StandardAnsatz}
x_\ell^{(t)} - \xi_\ell^{(t)}
=
\sum_{s=0}^{t-1}
\sum_{[i:j]\in E}
e_{i,j}^{(t-s)}
(\ensuremath{\mathbf{P}}_{\ell,i}^s - \ensuremath{\mathbf{P}}_{\ell,j}^s).
\end{align}
where $[i:j]$ refers to an edge $\{i,j\}\in E$ with $i < j$, where ``$<$'' is some arbitrary but fixed ordering on the vertices~$V$.
It will be sufficient to bound \eq{StandardAnsatz}, as the convergence speed of the idealized process can be bounded in terms of the second largest eigenvalue.
\begin{thm}[{\emph{e.g.}, \cite[Thm.~1]{RSW98}}]\label{thm:ideal}
On all graphs with second largest eigenvalue in absolute value $\lambda_2=\lambda_2(\ensuremath{\mathbf{P}})$,
the idealized process
with divisible tokens
reduces
an initial
discrepancy~$K$
to~$\ell$ within
\[
\tfrac{2}{1-\lambda_2} \ln \big( \tfrac{K n^2}{\ell} \big)
\]
time steps.
\end{thm}
As $\lambda_2=1-\Theta(\log^{-1} n)$ for the hypercube and
$\lambda_2=1-\Theta(n^{-2/d})$ for the
$d$\nobreakdash-dimensional torus~\cite{GM96}, one immediately gets the following corollary.
\begin{cor}\label{cor:ideal}
The idealized process
reduces an initial
discrepancy of~$K$ to~$1$ within
\(
\O(n^{2/d} \, \log(K n) )
\)
time steps on the $d$\nobreakdash-dimensional torus
and within
\(
\O(\log n \, \log(K n) )
\)
time steps
on the hypercube.
\end{cor}
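These asymptotics are easy to check numerically; for the diffusion matrix $\ensuremath{\mathbf{P}}$ of the $d$\nobreakdash-dimensional hypercube one even expects $1-\lambda_2=1/d$ exactly. A sketch, assuming \texttt{numpy}:
\begin{verbatim}
import numpy as np

def hypercube_P(d):
    """Diffusion matrix of the d-dimensional hypercube: 1/2 on the
    diagonal and 1/(2d) for each of the d neighbours of a vertex."""
    n = 2 ** d
    P = 0.5 * np.eye(n)
    for i in range(n):
        for b in range(d):
            P[i, i ^ (1 << b)] = 1.0 / (2 * d)
    return P

for d in range(2, 8):
    lam2 = np.sort(np.abs(np.linalg.eigvalsh(hypercube_P(d))))[-2]
    print(d, 1.0 - lam2, 1.0 / d)  # the last two columns agree
\end{verbatim}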
An important property of all examined graph classes will be
unimodality or log-concavity of certain transition probabilities.
\begin{defi}
A function $f\colon{\mathbb{N}}\to {\mathbb{R}}_{\ge 0}$ is \emph{log-concave}
if $f(i)^2\ge f(i-1) \cdot f(i+1)$ for all $i\in{\mathbb{N}}_{>0}$.
\end{defi}
\begin{defi}
A function $f\colon{\mathbb{N}} \to {\mathbb{R}}$ is \emph{unimodal}
if there is a $s \in {\mathbb{N}}$ such that
$f|_{x \le s}$ as well as $f|_{x \ge s}$ are monotone.
\end{defi}
Log-concave functions are sometimes also called \emph{strongly unimodal}~\citep{KeilsonGerber1971}.
We summarize some classical results regarding log-concavity and unimodality.
\begin{fac}
\label{fac:unimodal}
\begin{enumerate}
\item Let~$f$ be a log-concave function.
Then, $f$ is also a unimodal function (e.g.~\citep{Keilson,KeilsonGerber1971}).
\item Hoggar's theorem~\citep{Hoggar1974}:
Let $f$ and~$g$ be log-concave functions.
Then their convolution
$(f*g)(k)=\sum_{i=0}^k f(i) \, g(k-i)$
is also log-concave.
\item Let $f$ be a log-concave function and $g$ be a unimodal function.
Then their convolution $f*g$ is a unimodal function~\citep{KeilsonGerber1971}.
\end{enumerate}
\end{fac}
Our interest in unimodality is based on the fact that an alternating sum over a
unimodal function can be bounded by their maximum. More precisely, for
a non-negative and unimodal function $f\colon X \to {\mathbb{R}}$
and $t_0, \ldots, t_k \in X$ with $t_0 \leq \cdots \leq t_k$, the following holds:
\[
\bigg| \sum_{i = 0}^k (-1)^i \, f(t_i)\bigg| \le \max_{x \in X} f(x).
\]
We generalize this well-known property in the following lemma.
\begin{lem}
\label{lem:betragkleinerk}
Let $f\colon X \to {\mathbb{R}}$ be non-negative with $X\subseteq{\mathbb{R}}$.
Let $A_0, \ldots, A_k \in {\mathbb{R}}$ and
$t_0, \ldots, t_k \in X$ such that $t_0 \leq \cdots \leq t_k$
and
$|\sum_{i=a}^{k} A_i| \leq k$ for all $0\leq a\leq k$.
If $f$ has $\ell$ local extrema, then
\[
\bigg| \sum_{i = 0}^k A_i \, f(t_i)\bigg| \
\le\ (\ell+1)\,k\,\max_{j=0}^k f(t_j).
\]
\end{lem}
\begin{proof}
Let us start with the assumption that $f(t_i)$, $0\leq i\leq k$, is monotone increasing.
With $f(t_{-1}):=0$, it is easy to see that then
\begin{align*}
\big|\textstyle\sum_{i = 0}^k A_i \, f(t_i)\big|
&= \big|\textstyle\sum_{i = 0}^k \sum_{j=0}^i A_i \, (f(t_j)-f(t_{j-1}))\big|\\
&= \big|\textstyle\sum_{j = 0}^k \sum_{i=j}^k A_i \, (f(t_j)-f(t_{j-1}))\big|\\
&\leq \textstyle\sum_{j = 0}^k \big| f(t_j)-f(t_{j-1})\big| \, \big| \sum_{i=j}^k A_i \big|\\
&\leq \textstyle\sum_{j = 0}^k \big| f(t_j)-f(t_{j-1}) \big|\, k\\
&= k\,\max_{j=0}^k f(t_j).
\end{align*}
The same holds if $f(t_i)$, $0\leq i\leq k$, is monotone decreasing.
If $f(x)$ has $\ell$ local extrema, we split the sum in $(\ell+1)$ parts
such that $f(x)$ is monotone on each part and apply above arguments.
\end{proof}
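The lemma is also easy to test numerically: draw coefficients $A_i$ whose suffix sums are bounded by some constant (called \texttt{bound} below, to avoid the clash with the index bound~$k$), pair them with a random unimodal sequence, and compare both sides. A sketch, for illustration only:
\begin{verbatim}
import random

def check_once(k=10, bound=2.0):
    # Random unimodal sequence f(t_0), ..., f(t_k): ascending then
    # descending values, so f has at most one local extremum.
    peak = random.randrange(k + 1)
    f = sorted(random.random() for _ in range(peak + 1)) + \
        sorted((random.random() for _ in range(k - peak)), reverse=True)
    # Random A_i whose suffix sums |sum_{i=a}^k A_i| are <= bound.
    suffix = [random.uniform(-bound, bound) for _ in range(k + 1)] + [0.0]
    A = [suffix[i] - suffix[i + 1] for i in range(k + 1)]
    lhs = abs(sum(a * fi for a, fi in zip(A, f)))
    rhs = 2 * bound * max(f)  # (ell + 1) * bound * max f with ell = 1
    assert lhs <= rhs + 1e-9, (lhs, rhs)

for _ in range(10 ** 4):
    check_once()
print("ok")
\end{verbatim}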
\paragraph{Random Walks}
To examine the diffusion process, it will be useful to define a random walk based on~$\ensuremath{\mathbf{P}}$.
For any pair of vertices $i,j$, $\ensuremath{\mathbf{P}}^{t}_{i,j}$ is
the probability that a random walk guided by~$\ensuremath{\mathbf{P}}$ starting from~$i$
is located at~$j$ at step~$t$.
In the following \secref{cube} it will be useful to set $\ensuremath{\mathbf{P}}_{i,j}(t):=\ensuremath{\mathbf{P}}^{t}_{i,j}$
and to denote with $\ensuremath{\mathbf{f}}_{i,j}(t)$ for $i \neq j$
the first-passage probabilities, that is,
the probability that a random walk starting from $i$ visits
the vertex $j$ at step~$t$ for the first time.
\section{Analysis on the hypercube}
\label{sec:cube}
We first give the definition of the hypercube.
\begin{defi}
A $d$\nobreakdash-dimensional hypercube with $n=2^d$ vertices
has vertex set $V=\{0,1\}^d$ and edge set
$E=\{ \{i,j\} \mid \text{$i$ and~$j$ differ in one bit}\}$.
\end{defi}
In this section we prove the following result.
\begin{thm}
\label{thm:cube}
For all initial load vectors
on the $d$\nobreakdash-dimensional hypercube with $n$~vertices,
the deviation between the idealized process and
a discrete process with accumulated rounding errors at most~$\Lambda$
is upper bounded by $4 \Lambda \log n$ at all times and there
are load vectors for which this deviation can be $(\log n)/2$.
\end{thm}
Recall that for BED algorithms $\Lambda=\mathcal{O}(1)$.
With \thmref{ideal} it follows that any BED algorithm (and in particular
our quasirandom algorithm) reduces the discrepancy
of any initial load vector with discrepancy~$K$ to $\mathcal{O}(\log n)$ within
$\mathcal{O}( \log n \, \log (K n) )$ time steps.
\newcommand{\path}{\mathcal{P}}
\subsection{Log-concave passage time on paths}
To prove \thmref{cube}, we first consider a discrete-time
random walk on a path $\mathcal{P}=(0,1,\ldots, d)$ starting
at node~$0$.
We make use of
a special generating function, called \emph{$z$\nobreakdash-transform}.
The $z$\nobreakdash-transform of a function $g\colon{\mathbb{N}}\mapsto {\mathbb{R}}_{\ge 0}$ is defined by
$\ensuremath{\mathcal{G}}(z)=\sum_{i=0}^{\infty} g(i)\cdot z^{-i}$.
We will use the fact that a convolution reduces to multiplication in the $z$\nobreakdash-plane.
Instead of the z-transform one could carry out a similar analysis using the
\emph{probability generating function}. We choose to use the z-transform here since it leads to
slightly simpler arithmetic expressions.
Our analysis also uses
the \emph{geometric distribution} with parameter~$p$, which is defined by $\mathsf{Geo}(p)(t)=(1-p)^{t-1}\,p$ for $t \in {\mathbb{N}} \setminus \{ 0 \}$
and $\mathsf{Geo}(p)(0)=0$. It is easy to check that $\mathsf{Geo}(p)$ is log-concave.
Moreover, the $z$\nobreakdash-transform of $\mathsf{Geo}(p)$ is
\[
\sum_{i=1}^{\infty} \mathsf{Geo}(p)(i) \cdot z^{-i} = \dfrac{p}{z-(1-p)}.
\]
\noindent
For each node $i\in\mathcal{P}$, let $\alpha_i$ be the loop probability at node~$i$ and
$\beta_i$ be the \emph{upward probability}, i.e., the probability to move to node $i+1$.
Then, the \emph{downward probability} at node~$i$ is $1-\alpha_i-\beta_i$.
We can assume that $\beta_i>0$ for all $i\in\path\setminus\{d\}$.
We are interested in the first-passage probabilities $\ensuremath{\mathbf{f}}_{0,d}(t)$. Observe that
\begin{align}
\label{eq:faltung}
\ensuremath{\mathbf{f}}_{0,d}(t)= (\ensuremath{\mathbf{f}}_{0,1} * \ensuremath{\mathbf{f}}_{1,2} * \cdots * \ensuremath{\mathbf{f}}_{d-1,d}) (t).
\end{align}
In the following, we will show that $\ensuremath{\mathbf{f}}_{0,d}(t)$ is \emph{log-concave}. Indeed, we show a much stronger result:
\begin{thm}
\label{thm:logconcavePath}
Consider a random walk on a path $\mathcal{P}=(0,1, \ldots, d)$ starting at node~$0$. If $\alpha_i\ge\frac{1}{2}$ for
all nodes $i\in\mathcal{P}$, then $\ensuremath{\mathbf{f}}_{0,d}$ can be expressed as convolution of $d$~independent
geometric distributions.
\end{thm}
As the geometric distribution is log-concave
and the convolution of log-concave functions is again log-concave (cf.~\facref{unimodal}),
we immediately get the following corollary.
\begin{cor}
\label{cor:logconcavePath}
Consider a random walk on a path $\mathcal{P}=(0,1, \ldots, d)$ starting at node~$0$. If $\alpha_i\ge\frac{1}{2}$ for
all nodes $i\in\mathcal{P}$, then $\ensuremath{\mathbf{f}}_{0,d}(t)$ is log-concave in~$t$.
\end{cor}
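The corollary can be checked numerically by computing $\ensuremath{\mathbf{f}}_{0,d}(t)$ with a first-passage dynamic program that absorbs the walk at node~$d$; in the sketch below the random parameters respect $\alpha_i\ge\frac{1}{2}$ and $\beta_0=1-\alpha_0$.
\begin{verbatim}
import random

def first_passage(alpha, beta, T):
    """f_{0,d}(t) for t = 0..T on the path 0..d, where alpha[i] is
    the loop and beta[i] the upward probability at node i; node d is
    made absorbing, so mass arriving there arrives for the first time."""
    d = len(alpha)
    p = [1.0] + [0.0] * d  # the walk starts at node 0
    f = [0.0]              # f(0) = 0
    for _ in range(T):
        q = [0.0] * (d + 1)
        for i in range(d):
            q[i] += alpha[i] * p[i]
            q[i + 1] += beta[i] * p[i]
            if i > 0:
                q[i - 1] += (1.0 - alpha[i] - beta[i]) * p[i]
        f.append(q[d])     # first arrivals at node d in this step
        q[d] = 0.0         # absorb before the next step
        p = q
    return f

random.seed(1)
alpha = [random.uniform(0.5, 0.9) for _ in range(5)]
beta = [random.uniform(0.1, 1.0) * (1 - a) for a in alpha]
beta[0] = 1 - alpha[0]     # node 0 has no downward move
f = first_passage(alpha, beta, 100)
assert all(f[t] * f[t] + 1e-15 >= f[t - 1] * f[t + 1]
           for t in range(1, len(f) - 1))
print("log-concave: ok")
\end{verbatim}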
\medskip
Note that \thmref{logconcavePath} follows directly from
Theorem 1.2 of \citet{Fill2009a}.
As \thmref{logconcavePath} is a crucial ingredient for proving our main result
\thmref{cube} for the hypercube, we give a simpler alternative proof of the statement.
While \citeauthor{Fill2009a}'s proof
is purely stochastic, our proof is elementary and
based on functional analysis of the $z$\nobreakdash-transform.
Our analysis for the discrete-time random walk should also
be compared with \citeauthor{Keilson}'s analysis of the
continuous-time process~\cite{Keilson}.
The continuous-time process was independently considered even
earlier by \citet{KarlinMcGregor59}.
Before proving the theorem, we will show how to obtain $\ensuremath{\mathbf{f}}_{0,d}(t)$ by a recursive argument.
For this, suppose we are at node~$i\in\path\setminus\{d\}$. The next step is a loop with probability~$\alpha_i$.
Moreover, the next subsequent non-loop move ends at $i+1$ with probability
$\frac{\beta_i}{1-\alpha_i}$ and at $i-1$ with probability
$\frac{1-\beta_i-\alpha_i}{1-\alpha_i}$.
Thus, for all $i\in \path\setminus\{d\}$,
\begin{align*}
\ensuremath{\mathbf{f}}_{i,i+1}(t) = \frac{\beta_i}{1-\alpha_i} \cdot \mathsf{Geo}(1-\alpha_i)(t)
+ \frac{1-\beta_i-\alpha_i}{1-\alpha_i}\cdot (\mathsf{Geo}(1-\alpha_i)*\ensuremath{\mathbf{f}}_{i-1,i}*\ensuremath{\mathbf{f}}_{i,i+1})(t),
\end{align*}
with corresponding $z$\nobreakdash-transform
\begin{align*}
\ensuremath{\mathcal{F}}_{i,i+1}(z) = \frac{\beta_i}{1-\alpha_i} \cdot \frac{1-\alpha_i}{z-\alpha_i}
+ \frac{1-\beta_i-\alpha_i}{1-\alpha_i} \cdot \frac{1-\alpha_i}{z-\alpha_i}
\cdot \ensuremath{\mathcal{F}}_{i-1,i}(z) \cdot \ensuremath{\mathcal{F}}_{i,i+1}(z).
\end{align*}
Rearranging terms yields
\begin{align}
\label{eq:f:recF}
\ensuremath{\mathcal{F}}_{i,i+1}(z) =\frac{ \beta_i}
{
z-\alpha_i-(1-\beta_i-\alpha_i)\cdot \ensuremath{\mathcal{F}}_{i-1,i}(z)
},
\end{align}
for all $i\in\path\setminus\{d\}$. So $ \ensuremath{\mathcal{F}}_{i,i+1}(z)$ is obtained recursively with
$
\ensuremath{\mathcal{F}}_{0,1}(z) = \frac{\beta_0}{z-(1-\beta_0)}.
$
Finally the $z$\nobreakdash-transform of \eq{faltung} is
\begin{align}
\label{eq:f:FF0N}
\ensuremath{\mathcal{F}}_{0,d}(z) = \ensuremath{\mathcal{F}}_{0,1}(z) \cdot \ensuremath{\mathcal{F}}_{1,2}(z) \cdot \ldots \cdot \ensuremath{\mathcal{F}}_{d-1,d}(z).
\end{align}
In the following, we prove some properties of $ \ensuremath{\mathcal{F}}_{i,i+1}(z)$ for $i\in\path\setminus\{d\}$.
\medskip
\begin{lem}
\label{lem:dec}
Except for singularities, $\ensuremath{\mathcal{F}}_{i,i+1}(z)$
is monotone decreasing in $z$,
for all $i\in\path\setminus\{d\}$.
\end{lem}
\begin{proof}
We will show the claim by induction on~$i$.
It is easy to see that the claim holds for the base case ($i=0$) since $\ensuremath{\mathcal{F}}_{0,1}(z) = \frac{\beta_0}{z-(1-\beta_0)}$. Assume inductively that the claim holds for $\ensuremath{\mathcal{F}}_{i-1,i}(z)$.
With $1-\beta_i-\alpha_i\ge 0$ this directly implies that the denominator
of \eq{f:recF}
is increasing in~$z$. The claim for $\ensuremath{\mathcal{F}}_{i,i+1}(z)$ follows.
\end{proof}
\begin{lem}
\label{lem:numpoles}
For all $i\in\path\setminus\{d\}$,
$\ensuremath{\mathcal{F}}_{i,i+1}(z)$ has exactly~$i+1$ poles which are all in the interval~$(0,1)$.
The poles of $\ensuremath{\mathcal{F}}_{i,i+1}(z)$
are distinct from the poles of $\ensuremath{\mathcal{F}}_{i-1,i}(z)$.
\end{lem}
\begin{proof}
Before proving the claims of the lemma, we will show that
$\ensuremath{\mathcal{F}}_{i,i+1}(0)\ge-1$ and $\ensuremath{\mathcal{F}}_{i,i+1}(1)=1$ for all $i\in\path\setminus\{d\}$. Observe, that
$\ensuremath{\mathcal{F}}_{0,1}(0)=\frac{\beta_0}{-(1-\beta_0)}=\frac{1-\alpha_0}{-\alpha_0}\ge -1$, since $\alpha_0\ge \frac{1}{2}$.
Also observe that $\ensuremath{\mathcal{F}}_{0,1}(1)=1$.
Assume, inductively that $\ensuremath{\mathcal{F}}_{i-1,i}(0)\ge-1$ and $\ensuremath{\mathcal{F}}_{i-1,i}(1)=1$. Then with \eq{f:recF},
$\ensuremath{\mathcal{F}}_{i,i+1}(0) \ge \frac{\beta_i}{-\alpha_i-(1-\beta_i-\alpha_i)\cdot(-1)}
=\frac{\beta_i}{1-2\alpha_i-\beta_i}\ge -1$, since $1-2\alpha_i\le 0$.
Moreover, $\ensuremath{\mathcal{F}}_{i,i+1}(1)=\frac{\beta_i}{1-\alpha_i-(1-\alpha_i-\beta_i)}=1$.
Thus, $\ensuremath{\mathcal{F}}_{i,i+1}(0)\ge-1$ and $\ensuremath{\mathcal{F}}_{i,i+1}(1)=1$ for all $i\in\path\setminus\{d\}$.
We will now show the claims of the lemma by induction.
For the base case observe that $\ensuremath{\mathcal{F}}_{0,1}(z) = \frac{\beta_0}{z-(1-\beta_0)}$ has one pole at
$z=1-\beta_0>0$ and $\ensuremath{\mathcal{F}}_{-1,0}$ is not defined. This implies the claim for $i=0$. Suppose the claim
holds for $\ensuremath{\mathcal{F}}_{i-1,i}(z)$ and let $z_1,z_2, \ldots z_i$ be the poles
of $\ensuremath{\mathcal{F}}_{i-1,i}(z)$. Without loss of generality, we assume $0<z_1<z_2<\cdots<z_i<1$.
Let $g_i(z)$ be the denominator of \eq{f:recF}, that is,
\begin{align*}
g_i(z):=z-\alpha_i-(1-\beta_i-\alpha_i)\cdot \ensuremath{\mathcal{F}}_{i-1,i}(z).
\end{align*}
Observe that
\begin{itemize}
\item[(i)]
$g_i(z)$ has the same set of poles as $\ensuremath{\mathcal{F}}_{i-1,i}(z)$,
\item[(ii)]
$\lim_{z\rightarrow -\infty} g_i(z)= - \infty$, and
\item[(iii)]
$\lim_{z\rightarrow \infty} g_i(z)= \infty$.
\end{itemize}
By \eq{f:recF}, $\ensuremath{\mathcal{F}}_{i,i+1}(z)$ has its poles at the zeros of
$g_i(z)$.
\lemref{dec} shows that in each interval
$(z_j,z_{j+1})$ with $1\le j \le i-1$,
$g_i(z)$ is increasing in~$z$. Using
fact (i)
this implies that $g_i(z)$ has exactly one zero in each interval $(z_j,z_{j+1})$.
Thus $\ensuremath{\mathcal{F}}_{i,i+1}(z)$ has exactly
one pole in each interval $(z_j,z_{j+1})$.
Similarly, \lemref{dec} with
facts (i),(ii) and (iii)
imply that $\ensuremath{\mathcal{F}}_{i,i+1}(z)$ has exactly
one pole, say $z'$, in the interval $[-\infty,z_1)$
and
one pole, say~$z''$, in the interval $(z_i,\infty]$.
This implies that $\ensuremath{\mathcal{F}}_{i,i+1}(z)$ has exactly $i+1$ poles
which are all distinct from the poles of $\ensuremath{\mathcal{F}}_{i-1,i}(z)$. It remains to
show that $z'>0$ and $z''<1$.
Since $\ensuremath{\mathcal{F}}_{i-1,i}(0)\ge-1$ and $\lim_{z\rightarrow -\infty}\ensuremath{\mathcal{F}}_{i-1,i}(z)=0^-$ it
follows with \lemref{dec} that $-1\le \ensuremath{\mathcal{F}}_{i-1,i}(z) \le 0$ for all real $z<0$. So
$g_i(z)<0$
for all real $z<0$. It follows that $z'>0$.
Similarly, since $\ensuremath{\mathcal{F}}_{i-1,i}(1)=1$ and $\lim_{z\rightarrow \infty}\ensuremath{\mathcal{F}}_{i-1,i}(z)=0^+$,
it follows with \lemref{dec} that $0\le \ensuremath{\mathcal{F}}_{i-1,i}(z) \le 1$ for all real $z> 1$.
So
$g_i(z)>0$ for all real $z>1$. It follows that $z''<1$.
This finishes the proof of our inductive step. The claim follows.
\end{proof}
\begin{lem}
\label{lem:factor}
Let $(b_{j,i})_{j=0}^{i}$ be the poles of $\ensuremath{\mathcal{F}}_{i,i+1}(z)$, $i\in\path\setminus\{d\}$, and
define $P_i(z)=\prod_{j=0}^{i} (z-b_{j,i})$.
Then $\ensuremath{\mathcal{F}}_{i,i+1}(z)=\beta_i \cdot \frac{P_{i-1}(z)}{P_{i}(z)}$ for all $i\in\path\setminus\{d\}$.
\end{lem}
\begin{proof}
Our proof is by induction on $i$.
For the base case ($i=0$), observe that $P_{-1}(z)=1$ and thus $\ensuremath{\mathcal{F}}_{0,1}(z)$ has the desired form.
Suppose the claim holds for $\ensuremath{\mathcal{F}}_{i-1,i}(z)$. Then \eq{f:recF} implies
\begin{align}
\ensuremath{\mathcal{F}}_{i,i+1}(z)
&= \frac{\beta_i}
{z-\alpha_i-(1-\beta_i-\alpha_i)\cdot \beta_{i-1}\cdot\frac{P_{i-2}(z)}{P_{i-1}(z)}}
\nonumber\\&
= \frac{\beta_i\cdot P_{i-1}(z)}
{(z-\alpha_i)\cdot P_{i-1}(z)- (1-\beta_i-\alpha_i)\cdot
\beta_{i-1}\cdot P_{i-2}(z)}\label{eq:f:factor1}.
\end{align}
Observe that $(z-\alpha_i)\cdot P_{i-1}(z)$ is a polynomial of degree~$i+1$ where
the leading term has a coefficient of~$1$. This also holds for the denominator
of \eq{f:factor1}, since there, we only subtract a polynomial of order $i-1$.
By \lemref{numpoles} we know that $\ensuremath{\mathcal{F}}_{i,i+1}(z)$ has exactly~$i+1$ real
positive poles. It follows that the denominator of \eq{f:factor1} is equal to
$P_{i}(z)$. The claim follows.
\end{proof}
We are now ready to prove \thmref{logconcavePath}.
\begin{proof}[Proof of \thmref{logconcavePath}]
By \eq{f:FF0N} and
\lemref{factor}, we get
\begin{align*}
\ensuremath{\mathcal{F}}_{0,d}(z)
= \prod_{i=0}^{d-1} \ensuremath{\mathcal{F}}_{i,i+1}(z)
= \prod_{i=0}^{d-1} \left(\beta_i \cdot \frac{P_{i-1}(z)}{P_{i}(z)}\right)
= \frac{\prod_{i=0}^{d-1} \beta_i}{P_{d-1}(z)}
= K_d
\cdot \prod_{i=0}^{d-1} \frac{1-b_{i,d-1}}{z-b_{i,d-1}},
\end{align*}
where $(b_{i,d-1})_{i=0}^{d-1}$ are the poles of
$\ensuremath{\mathcal{F}}_{d-1,d}(z)$ as defined in \lemref{factor} and
$K_d=\prod_{i=0}^{d-1}\frac{\beta_i}
{1-b_{i,d-1}}$.
By \lemref{numpoles}, $b_{i,d-1}\in (0,1)$ for all~$i$.
Now for each~$i$ the term $\frac{1-b_{i,d-1}}{z-b_{i,d-1}}$ is the $z$\nobreakdash-transform of
the geometric distribution with parameter $1-b_{i,d-1}$, i.e., $\mathsf{Geo}(1-b_{i,d-1})(t)$.
Thus, $\ensuremath{\mathbf{f}}_{0,d}(t)$ can be expressed as the convolution of~$d$ independent geometric distributions
\begin{align*}
\ensuremath{\mathbf{f}}_{0,d}(t) = K_d \cdot [\mathsf{Geo}(1-b_{0,d-1}) * \mathsf{Geo}(1-b_{1,d-1}) * \ldots * \mathsf{Geo}(1-b_{d-1,d-1})](t).
\end{align*}
Moreover, since $\ensuremath{\mathbf{f}}_{0,d}$ is a probability distribution over~$t$ and the convolution of probability
distributions is again a probability distribution, we have $K_d=1$.
The theorem follows.
\end{proof}
One should note that it follows from \cite[Theorem 1.2]{Fill2009a} that the
parameters $(b_{i,d-1})_{i=0}^{d-1}$ in the geometric distributions are the
eigenvalues of the underlying transition matrix.
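Continuing the sketch after \corref{logconcavePath} (reusing \texttt{alpha}, \texttt{beta}, and \texttt{first\_passage}), this remark can be verified numerically, assuming it refers to the chain restricted to the transient nodes $0,\ldots,d-1$: convolving the geometric distributions whose parameters come from these eigenvalues reproduces $\ensuremath{\mathbf{f}}_{0,d}$.
\begin{verbatim}
import numpy as np

# alpha, beta, first_passage as in the sketch above. Q is the chain
# restricted to the transient nodes 0..d-1 (node d removed).
d = len(alpha)
Q = np.zeros((d, d))
for i in range(d):
    Q[i, i] = alpha[i]
    if i + 1 < d:
        Q[i, i + 1] = beta[i]
    if i > 0:
        Q[i, i - 1] = 1.0 - alpha[i] - beta[i]
b = np.sort(np.linalg.eigvals(Q).real)  # candidate poles b_{i,d-1}
assert np.all((b > 0) & (b < 1))

T = 100
g = np.zeros(T + 1)
g[0] = 1.0
for bi in b:                             # convolve the d geometrics
    geo = np.zeros(T + 1)
    geo[1:] = (1.0 - bi) * bi ** np.arange(T)  # Geo(1 - b_i)(t), t >= 1
    g = np.convolve(g, geo)[:T + 1]

assert np.allclose(g, first_passage(alpha, beta, T))
print("poles are the eigenvalues: ok")
\end{verbatim}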
Recall that our aim is to prove unimodality for the function $\ensuremath{\mathbf{P}}_{0,j}^{t}$ (in $t$). Using the simple convolution formula $\ensuremath{\mathbf{P}}_{0,j} = \ensuremath{\mathbf{f}}_{0,j} * \ensuremath{\mathbf{P}}_{j,j}$ and the log-concavity of $\ensuremath{\mathbf{f}}_{0,j}$, it suffices to prove that $\ensuremath{\mathbf{P}}_{j,j}$ is unimodal (cf.~\facref{unimodal}). In the following, we will prove that $\ensuremath{\mathbf{P}}_{j,j}$ is even non-increasing in $t$.
\begin{lem}
\label{lem:monotone}
Let $\ensuremath{\mathbf{P}}$ be the $(d+1)\times(d+1)$-transition matrix defining an ergodic Markov chain on a path $\mathcal{P}=(0, \ldots, d)$.
If $\ensuremath{\mathbf{P}}_{ii}\ge\frac{1}{2}$ for all $0 \leq i \leq d$
then for all
$0 \leq i \leq d$, $ \ensuremath{\mathbf{P}}_{i,i}^{t} $ is non-increasing in $t$.
\end{lem}
\begin{proof}
It is well known that ergodic Markov chains on paths are time reversible (see e.g. Section 4.8 of \citet{Ro07}).
To see this let $\pi=(\pi_0, \ldots, \pi_d)$ be the stationary distribution. Then for all
$0\le i\le d-1$ the rate at which the process goes from $i$ to $i+1$ (namely, $\pi_i \ensuremath{\mathbf{P}}_{i,i+1}$) is
equal to the rate at which the process goes from $i+1$ to $i$ (namely, $\pi_{i+1} \ensuremath{\mathbf{P}}_{i+1,i}$).
Thus, $\ensuremath{\mathbf{P}}$ is time-reversible.
One useful property of a time-reversible matrix is that its eigenvalues are all real.
The Ger\u{s}gorin disc theorem states that every eigenvalue $\lambda_j$, $0\le j \le d$,
satisfies the condition
\[
|\lambda_j - \ensuremath{\mathbf{P}}_{ii}| \le 1-\ensuremath{\mathbf{P}}_{ii},
\]
for some $0 \le i \le d$.
Since $\ensuremath{\mathbf{P}}_{ii}\ge\frac{1}{2}$, this directly implies that
all eigenvalues are in the interval $[0,1]$.
It is well-known
that there is an orthonormal basis of ${\mathbb{R}}^{d+1}$ which is formed by the eigenvectors $v_{0},\,v_{1},\,\ldots,\,v_{d}$ (see e.g.~\cite{Gur00}).
Then for any vector $w \in {\mathbb{R}}^{d+1}$,
$w= \sum_{j=0}^{d} \langle w, v_j \rangle \, v_j$,
where $\langle \thinspace\cdot\thinspace,\thinspace\cdot\thinspace \rangle$
denotes the inner product.
Applying this to the $i$\nobreakdash-th unit vector $e_{i}$ and using $[\thinspace\cdot\thinspace]_{i}$
to denote the $i$\nobreakdash-th entry of a vector in ${\mathbb{R}}^{d+1}$ we obtain
\[
e_{i}
= \sum_{j=0}^{d} \langle e_i, v_j \rangle \, v_j
= \sum_{j=0}^d [v_{j}]_i v_{j}.
\]
Thus,
\begin{align*}
\ensuremath{\mathbf{P}}^{t} e_{i}
= \ensuremath{\mathbf{P}}^{t} \, \bigg( \sum_{j=0}^d [v_{j}]_i \, v_{j} \bigg)
= \sum_{j=0}^d [v_{j}]_i \, \ensuremath{\mathbf{P}}^{t} \, v_{j}
= \sum_{j=0}^d [v_{j}]_i \, \lambda^{t}_j v_j
\end{align*}
and finally
\begin{align*}
\ensuremath{\mathbf{P}}_{i,i}^{t}
= \left[ \ensuremath{\mathbf{P}}^{t} e_{i} \right]_{i}
= \sum_{j=0}^{d} [v_{j}]_i \, \lambda_j^{t} [v_j]_{i}
= \sum_{j=0}^d \lambda_j^t \, [v_j]_{i}^2,
\end{align*}
which is non-increasing in $t$ as $[v_j]_{i} \in \mathbb{R}$ and $0 \le \lambda_j \le 1$ for all $0\le j \le d$.
\end{proof}
\subsection{Unimodal transition probabilities on the hypercube}
Combining \lemref{monotone} and \thmref{logconcavePath} and then projecting the random walk
on the hypercube on a random walk on a path, we obtain the following result.
\begin{thm}\label{thm:cubeunimodal}
Let $i,j \in V$ be two vertices of a $d$\nobreakdash-dimensional hypercube. Then,
$\ensuremath{\mathbf{P}}_{i,j}(t)$ is unimodal.
\end{thm}
\begin{proof}
We use the following projection of a random walk on a $d$-dimensional hypercube with loop probability $1/2$ to a
random walk on a path with $d+1$ vertices, again with loop probability $1/2$.
The induced random walk is
obtained from the mapping $x \mapsto |x|_1$, that is, vertices in $\{0,1\}^d$ with
the same number of ones are equivalent. It is easy to check that this new random
walk is a random walk on a path with vertices $0,1,\ldots,d$ that moves right
with probability $\lambda_k=\frac{d-k}{2d}$, left with probability
$\mu_k=\frac{k}{2d}$, and loops with probability~$\frac{1}{2}$. (This process is also known as the Ehrenfest chain~\cite{GS01}.)
Consider now the random walk on the path with vertex set $\{ 0,1,\ldots,d \}$ and
let $j$~be an arbitrary number with $0 \leq j \leq d$.
Recall that $\ensuremath{\mathbf{P}}_{0,j}$ can be expressed as a convolution (cf.~\cite{GS01}) of~$\ensuremath{\mathbf{P}}$ and~$\ensuremath{\mathbf{f}}$ as follows,
\begin{align*}
\ensuremath{\mathbf{P}}_{0,j} = \ensuremath{\mathbf{f}}_{0,j} * \ensuremath{\mathbf{P}}_{j,j}.
\end{align*}
By \corref{logconcavePath},
$\ensuremath{\mathbf{f}}_{0,j}(t)$ is log-concave. Moreover, \lemref{monotone} implies that $\ensuremath{\mathbf{P}}_{j,j}(t)$ is non-increasing
in $t$ and hence unimodal.
As the convolution
of any log-concave function with any unimodal function is again unimodal~(cf.~\facref{unimodal}),
it follows that $\ensuremath{\mathbf{P}}_{0,j}(t)$ is unimodal in~$t$.
Now fix two vertices $i,j$ of the $d$\nobreakdash-dimensional hypercube. By symmetry, we may
assume that $i=0^d \equiv 0$. Conditioned on the event that the projected random
walk is located at a vertex with $|j|_1$ ones at step~$t$, every
vertex with $|j|_1$ ones is equally likely. This gives
$\ensuremath{\mathbf{P}}_{0,j}(t) = \ensuremath{\mathbf{P}}_{0,|j|_1}(t) / \binom{d}{|j|_1}$, and therefore the unimodality of
$\ensuremath{\mathbf{P}}_{0,|j|_1}(t)$ implies directly the unimodality of
$\ensuremath{\mathbf{P}}_{0,j}(t)$, as needed.
\end{proof}
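The projection also yields a convenient numerical check of the theorem: build the Ehrenfest chain, iterate it, and verify that each $\ensuremath{\mathbf{P}}_{0,\ell}(t)$ is unimodal in~$t$. A sketch, assuming \texttt{numpy}:
\begin{verbatim}
import numpy as np

def ehrenfest_P(d):
    """Projected hypercube walk on the path 0..d with loop probability
    1/2: from k ones, up with (d-k)/(2d) and down with k/(2d)."""
    P = np.zeros((d + 1, d + 1))
    for k in range(d + 1):
        P[k, k] = 0.5
        if k < d:
            P[k, k + 1] = (d - k) / (2.0 * d)
        if k > 0:
            P[k, k - 1] = k / (2.0 * d)
    return P

def is_unimodal(seq, tol=1e-12):
    decreasing = False   # once seq decreases it must not increase again
    for a, b in zip(seq, seq[1:]):
        if b < a - tol:
            decreasing = True
        elif b > a + tol and decreasing:
            return False
    return True

d, T = 8, 300
P = ehrenfest_P(d)
p = np.zeros(d + 1)
p[0] = 1.0
history = []
for t in range(T + 1):
    history.append(p.copy())
    p = p @ P
probs = np.array(history)                # probs[t, l] = P_{0,l}(t)
assert all(is_unimodal(probs[:, l]) for l in range(d + 1))
print("unimodal: ok")
\end{verbatim}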
With more direct methods, one can prove the following supplementary result
giving further insights into the distribution of $\ensuremath{\mathbf{P}}_{i,j}(t)$.
As the result is not required for our analysis,
the proof is given in the appendix.
\newcommand{\textpromonotonicity}{
Let $i,j\in V$ be two vertices of the $d$\nobreakdash-dimensional hypercube with $\operatorname{dist}(i,j) \geq d/2$. Then,
$\ensuremath{\mathbf{P}}_{i,j}(t)$ is monotone increasing.
}
\mypro{monotonicity}{\textpromonotonicity}
\subsection{Analysis of the discrete algorithm}
We are now ready to prove our main result
for hypercubes.
\begin{proof}[Proof of \thmref{cube}]
By symmetry, it suffices to bound the deviation at the vertex $0 \equiv 0^d$.
Hence by \eq{StandardAnsatz} we have to bound
\begin{align*}
\big| x_0^{(t)} - \xi_0^{(t)} \big|
&\leq \big|
\textstyle\sum_{s=0}^{t-1}
\textstyle\sum_{[i:j] \in E} e_{i,j}^{(t-s)} (\ensuremath{\mathbf{P}}_{0,i}(s) - \ensuremath{\mathbf{P}}_{0,j}(s)) \big| \\
&\leq \big|
\textstyle\sum_{s=0}^{t-1}
\textstyle\sum_{[i:j] \in E} e_{i,j}^{(t-s)} \ensuremath{\mathbf{P}}_{0,i}(s)
\big| +
\big|
\textstyle\sum_{s=0}^{t-1}
\textstyle\sum_{[i:j] \in E} e_{i,j}^{(t-s)} \ensuremath{\mathbf{P}}_{0,j}(s) \big|\\
&\leq
\textstyle\sum_{[i:j] \in E}
\big|
\textstyle\sum_{s=0}^{t-1} e_{i,j}^{(t-s)} \ensuremath{\mathbf{P}}_{0,i}(s)
\big| +
\textstyle\sum_{[i:j] \in E}
\big|
\textstyle\sum_{s=0}^{t-1}
e_{i,j}^{(t-s)} \ensuremath{\mathbf{P}}_{0,j}(s)
\big|.
\end{align*}
Using \thmref{cubeunimodal}, we know that the sequences $\ensuremath{\mathbf{P}}_{0,i}(s)$ and $\ensuremath{\mathbf{P}}_{0,j}(s)$ are unimodal in~$s$ and hence we can bound both summands by \lemref{betragkleinerk} (where $\ell=1$) to obtain that
\begin{align}
\big| x_0^{(t)} - \xi_0^{(t)} \big|
&\leq 2 \Lambda \, \textstyle\sum_{[i:j] \in E} \max_{s=0}^{t-1} \ensuremath{\mathbf{P}}_{0,i}(s) +
2 \Lambda \, \textstyle\sum_{[i:j] \in E} \max_{s=0}^{t-1} \ensuremath{\mathbf{P}}_{0,j}(s)\notag\\
&= 2 \Lambda \, d\, \textstyle\sum_{i\in V}
\max_{s=0}^{t-1} \ensuremath{\mathbf{P}}_{0,i}(s). \label{eq:cubefirst}
\end{align}
\noindent
To bound the last term, we view the random walk as the following process, similar to a balls-and-bins process. In
each step $t \in {\mathbb{N}}$ we choose a coordinate $i \in \{1,\ldots,d\}$ uniformly at
random. Then with probability~$1/2$ we flip the bit of this coordinate; otherwise
we keep it (equivalently, we set the bit to~$1$ with
probability~$1/2$ and to zero otherwise).
Now we partition the random walk's distribution at step~$t$ according to the
number of different coordinates chosen (not necessarily flipped) until step~$t$.
Consider $\ensuremath{\mathbf{P}}_{0,x}(t)$ for a vertex $x \in \{0,1\}^d$. Note that by the symmetry of the hypercube, $\ensuremath{\mathbf{P}}_{0,x}(t)$ is the same for all $x \in \{0,1\}^d$ with the same $|x|_1$. Hence let us fix a value~$\ell$ with $0 \leq \ell \leq d$ and consider $\ensuremath{\mathbf{P}}_{0,\ell}(t)$, the probability of reaching the vertex, say, $1^{\ell} 0^{d-\ell}$ from $0\equiv0^{d}$ at step~$t$.
Since (i) the $k$ chosen coordinates must contain
the $\ell$ ones and (ii) all $k$ chosen coordinates must be set to the
correct value, we have
\begin{equation}
\ensuremath{\mathbf{P}}_{0,\ell}(t) =
\textstyle\sum_{k=\ell}^{d} \Pro{ \mbox{exactly $k$ coordinates chosen in $t$ steps}}
\cdot 2^{-k}
\, \binom{d-\ell}{k-\ell}
\big/ \binom{d}{k}.
\label{eq:balls}
\end{equation}
Using this to estimate $\ensuremath{\mathbf{P}}_{0,i}(s)$, we can bound \eq{cubefirst} by
\begin{align*}
\big| x_0^{(t)} - \xi_0^{(t)} \big|
&\leq 2 \Lambda \, d \, \textstyle\sum_{\ell=0}^d \binom{d}{\ell} \max_{s=0}^{\infty} \ensuremath{\mathbf{P}}_{0,\ell}(s) \\
&= 2 \Lambda \, d \, \textstyle\sum_{\ell=0}^d \binom{d}{\ell} \max_{s=0}^{\infty}
\textstyle\sum_{k=\ell}^{d} \Pro{ \mbox{exactly $k$ coordinates chosen in $s$ steps}} \cdot \frac{ \binom{d-\ell}{k-\ell} }{ \binom{d}{k} } \cdot 2^{-k} \\
&\leq 2 \Lambda \, d \, \textstyle\sum_{\ell=0}^d \max_{k=\ell}^{d} \frac{ \binom{d-\ell}{k-\ell} \, \binom{d}{\ell} }{ \binom{d}{k} } \, 2^{-k}.
\end{align*}
The fraction in the last term corresponds to the probability of a
hyper-geometric distribution and is therefore trivially upper-bounded by~$1$. This allows us to conclude that
\begin{align*}
\big| x_0^{(t)} - \xi_0^{(t)} \big|
&\leq 2 \Lambda \, d \, \textstyle\sum_{\ell=0}^d 2^{-\ell}
\leq 4 \Lambda d
\end{align*}
and the first claim of the theorem follows.
The second claim follows by the following simple construction.
Define a load vector $x^{(0)}$ such that
$x^{(0)}_v := d$ for all vertices $v=(v_1,v_2,\ldots,v_d)\in\{0,1\}^d$ with $v_1=0$, and $x^{(0)}_v := 0$ otherwise.
Then for each edge $\{i,j\} \in E$ with $0 = i_1 \neq j_1$ the fractional flow at step $1$ is
$
\big(\ensuremath{\mathbf{P}}_{i,j} x_i^{(0)} - \ensuremath{\mathbf{P}}_{j,i} x_j^{(0)}\big) = +\frac{1}{2}.
$
Since no rounding errors have occurred before the first time step, each edge is allowed to round up or down arbitrarily. Hence we can let all these edges round towards~$j$, i.e.,
$\Phi_{i,j}^{(1)} := 1$ for each such edge $\{i,j\} \in E$. By definition, this implies for the corresponding rounding error, $e_{i,j}^{(1)} = -\frac{1}{2}$. Moreover, we have the following load distribution after step~$1$. We have $x^{(1)}_v = 0$ for all vertices $v$ if $v_1=0$,
and $x^{(1)}_v = d$ otherwise. Hence the fractional flow at step~$2$ for each edge $\{i,j\} \in E$ with $0 = i_1 \neq j_1$ is
$
\big(\ensuremath{\mathbf{P}}_{i,j} x_i^{(1)} - \ensuremath{\mathbf{P}}_{i,j} x_j^{(1)}\big) = -\frac{1}{2}.
$
Since $e_{i,j}^{(1)} = -\frac{1}{2}$, $|\sum_{s=1}^2 e_{i,j}^{(s)}|$ will be minimized if $e_{i,j}^{(2)} = \frac{1}{2}$. Hence we can set $\Phi_{i,j}^{(2)}:=-1$. This implies that we end up in exactly the same situation as at the beginning: the load vector is the same and also the sum over the previous rounding errors along each edge is zero. We conclude that there is an instance of the quasirandom algorithm for which $x^{(t)} = x^{(t \operatorname{mod} 2)}$. This gives the claim.
\end{proof}
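As a quick sanity check, \eq{balls} can be verified numerically against the exact lazy random walk on the hypercube. The following Python sketch (illustrative parameters; not part of the original analysis) computes the distribution of the number of distinct coordinates chosen via a simple recurrence and compares the resulting value of $\ensuremath{\mathbf{P}}_{0,\ell}(t)$ with the $t$-step distribution of the walk:
\begin{verbatim}
import numpy as np
from math import comb

d, t = 6, 10

# distribution of the number of distinct coordinates chosen in t steps
q = np.zeros(d + 1); q[0] = 1.0
for _ in range(t):
    nxt = np.zeros(d + 1)
    for k in range(d + 1):
        nxt[k] += q[k] * k / d                # re-choose an old coordinate
        if k < d:
            nxt[k + 1] += q[k] * (d - k) / d  # choose a fresh coordinate
    q = nxt

def P_balls(l):                               # right-hand side of eq:balls
    return sum(q[k] * 2.0**(-k) * comb(d - l, k - l) / comb(d, k)
               for k in range(l, d + 1))

# exact lazy random walk on {0,1}^d (loop prob. 1/2, 1/(2d) per neighbor)
n = 2**d
P = np.zeros((n, n))
for v in range(n):
    P[v, v] = 0.5
    for i in range(d):
        P[v, v ^ (1 << i)] = 1.0 / (2 * d)
p = np.zeros(n); p[0] = 1.0
for _ in range(t):
    p = p @ P

for l in range(d + 1):
    print(l, P_balls(l), p[(1 << l) - 1])     # vertex 1^l 0^(d-l)
\end{verbatim}
The two columns agree to machine precision, since the balls-and-bins process is an exact reformulation of the lazy walk.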
\section{Analysis on $d$-dimensional torus graphs}
\label{sec:torus}
We start this section with the formal definition of
a $d$\nobreakdash-dimensional torus. \begin{defi}
\label{def:torus}
A $d$\nobreakdash-dimensional torus $T(n_1,n_2,\ldots,n_d)$
with
$n=n_1\cdot n_2\cdot \ldots \cdot n_d$
vertices has vertex set
$V=\{0,1,\ldots,n_1-1\}\times
\{0,1,\ldots,n_2-1\}\times
\ldots
\times
\{0,1,\ldots,\linebreak[0]n_d-1\}$\linebreak[0] and every vertex
$(i_1,\linebreak[0]i_2,\linebreak[0]\ldots,\linebreak[0]i_d)\in V$ has $2d$ neighbors
$((i_1\pm 1)\bmod{n_1},\linebreak[0]i_2,\ldots,\linebreak[0]i_d)$,\linebreak[0]
$(i_1,(i_2\pm 1)\bmod{n_2},\linebreak[0]i_3,\ldots,i_d)$,
\ldots,\linebreak[0]
$(i_1,i_2,\linebreak[0]\ldots,i_{d-1},\linebreak[0](i_d\pm 1)\bmod{n_d})$. Henceforth, we will always assume that $d=\mathcal{O}(1)$.
We call a torus uniform if $n_1=n_2=\ldots=n_d=\oldsqrt[d]{n}$.
\end{defi}
Without loss of generality we will assume in the remainder that $n_1 \leq n_2
\leq \cdots \leq n_d$. By the symmetry of the torus this does not restrict our
results.
Recall that $\lambda_2$ denotes the
second largest eigenvalue in absolute value.
Before we analyze the deviation between the idealized and discrete process, we
estimate $(1-\lambda_2)^{-1}$ for general torus graphs.
\begin{lem}\label{lem:nonuniformtorus}
For a $d$\nobreakdash-dimensional torus $T=T(n_1,n_2,\ldots,n_d)$,
$(1-\lambda_2)^{-1} = \Theta \left( n_d^2 \right)$.
\end{lem}
\begin{proof}
Following the notation of \cite{Ch92},
for a $k$\nobreakdash-regular graph $G$, let $\ensuremath{\mathbf{L}}(G)$ be the matrix given by $\ensuremath{\mathbf{L}}_{u,u}(G) = 1$,
$\ensuremath{\mathbf{L}}_{u,v}(G)= -\frac{1}{k}$ if $\{u,v\} \in E(G)$ and $\ensuremath{\mathbf{L}}_{u,v}(G) = 0$ otherwise.
Let $C_{q}$ be a cycle with $q$ vertices.
As shown in \cite[Example~1.4]{Ch92}, the eigenvalues of $\ensuremath{\mathbf{L}}(C_{q})$
are $1 - \cos\big( \frac{2 \pi r}{q}\big)$ where $0 \leq r \leq q-1$.
In particular, the second smallest eigenvalue of $\ensuremath{\mathbf{L}}(C_q)$
denoted by $\tau$ is given by $1 - \cos\big( \frac{2 \pi}{q} \big)$.
Let $\times$ denote the Cartesian product of graphs, that is, for any two graphs
$G_1=(V_1,E_1)$, $G_2=(V_2,E_2)$ the graph $G:=G_1 \times G_2$ with $G=(V,E)$
is defined by
$V=V_1 \times V_2$ and
\begin{align*}
E := &\big\{ \big((u_1,u_2), (v_1,u_2)\big) \colon \, u_2\in V_2 \wedge \{u_1,v_1 \} \in E_1 \big\} \,\cup \\
&\big\{ \big((u_1,u_2), (u_1,v_2)\big) \colon \, u_1\in V_1 \wedge \{u_2,v_2 \} \in E_2 \big \}.
\end{align*}
It is straightforward to generalize this definition to the Cartesian product of
more than two graphs and it is then easy to check that $T(n_1,n_2,\ldots,n_d) =
C_{n_1} \times C_{n_2} \times \ldots \times C_{n_d}$. The following theorem
expresses the second smallest eigenvalue of the Cartesian product of graphs in
terms of the second smallest eigenvalue of the respective graphs.
\begin{thm}[{\cite[Theorem~2.12]{Ch92}}]\label{thm:cartesian}
Let $G_1, G_2, \ldots, G_d$ be $d$ graphs and let $\tau_1, \tau_2, \ldots,
\tau_d$ be the respective second smallest eigenvalue of $\ensuremath{\mathbf{L}}(G_1), \ensuremath{\mathbf{L}}(G_2), \ldots, \ensuremath{\mathbf{L}}(G_d)$.
Then the second smallest eigenvalue $\tau$ of $\ensuremath{\mathbf{L}}(G_1 \times G_2 \times \ldots \times G_d)$
satisfies
$\tau = \tfrac{1}{d} \, \min_{k=1}^d \tau_k$.
\end{thm}
\medskip
Applying this theorem to our setting, it follows that the second
smallest eigenvalue~$\tau$ of $\ensuremath{\mathbf{L}}(T)$ is
$\tau = \tfrac{1}{d} \, \big( 1 - \cos \bigl( \tfrac{2 \pi}{n_d} \bigr) \big)$.
As $n_d \geq \oldsqrt[d]{n}$, we have $ \cos \big(\frac{2 \pi}{n_d}\big)
= 1 - \Theta \big( \frac{1}{n_d^2} \big) $. Using this and the fact that $d$ is a constant, we obtain that
$
\tau = \Theta \big( \tfrac{1}{n_d^2} \big).
$
As $T$ is a $k$\nobreakdash-regular graph, the transition matrix~$\ensuremath{\mathbf{P}}(T)$ can be expressed as
$\ensuremath{\mathbf{P}}(T) = \mathbf{I} - \frac{1}{2} \ensuremath{\mathbf{L}}(T)$. This implies for the second
smallest eigenvalue of~$\ensuremath{\mathbf{L}}(T)$, $\tau$, and the second largest eigenvalue of
the transition matrix~$\ensuremath{\mathbf{P}}(T)$, $\lambda_2$, that
$
\lambda_2 = 1 - \frac{1}{2} \tau.
$
Hence
$\lambda_{2} = 1 - \Theta \big( \tfrac{1}{n_d^2} \big)$,
which completes the proof.
\end{proof}
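The statement of \lemref{nonuniformtorus} is also easy to check numerically. A minimal sketch (a small $4\times6$ torus is assumed purely for illustration) builds the transition matrix $\ensuremath{\mathbf{P}}(T)$ with loop probability $1/2$ and compares its second largest eigenvalue with the exact value $1-\frac{1}{2d}\big(1-\cos(2\pi/n_d)\big)$ obtained in the proof:
\begin{verbatim}
import numpy as np
import itertools

dims = (4, 6)                       # n_1 x n_2 torus, so d = 2, n_d = 6
d = len(dims)
n = int(np.prod(dims))
idx = {v: i for i, v in enumerate(itertools.product(*map(range, dims)))}

P = np.zeros((n, n))
for v, i in idx.items():
    P[i, i] = 0.5                   # loop probability 1/2
    for axis in range(d):
        for step in (+1, -1):
            u = list(v)
            u[axis] = (u[axis] + step) % dims[axis]
            P[i, idx[tuple(u)]] += 1.0 / (4 * d)

lam = np.sort(np.linalg.eigvalsh(P))  # P symmetric, eigenvalues in [0,1]
print(lam[-2], 1 - (1 - np.cos(2 * np.pi / max(dims))) / (2 * d))
\end{verbatim}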
Note that the corresponding results of \citep{RSW98,FS09} only
hold for uniform torus graphs while the following result
for our algorithm holds for general torus graphs.
\begin{thm}\label{thm:torus}
For all initial load vectors
on the (not necessarily uniform)
$d$\nobreakdash-dimensional torus graph with $n$~vertices,
the deviation between the idealized process and
a discrete process with accumulated rounding error at most~$\Lambda$
is $\mathcal{O}(\Lambda)$ at all times.
\end{thm}
For any torus graph, we know that $(1-\lambda_2)^{-1} = \Theta( n_d^{2})$ by \lemref{nonuniformtorus}.
With \thmref{ideal} it follows that any BED algorithm (and in particular
our quasirandom algorithm) reduces the discrepancy
of any initial load vector with discrepancy $K$ to~$\mathcal{O}(1)$ within
$\mathcal{O}( n_d^{2}\,\log (K n) )$ time steps (for uniform torus graphs, this number of time steps is $\mathcal{O}(n^{2/d} \, \log (K n ) )$).
\begin{proof}[Proof of \thmref{torus}]
By symmetry of the torus graph, we have $\ensuremath{\mathbf{P}}_{i,j} = \ensuremath{\mathbf{P}}_{0,i-j}$.
Hence we set $\ensuremath{\mathbf{P}}_{i} = \ensuremath{\mathbf{P}}_{0,i}$.
We will first reduce the random walk $\ensuremath{\mathbf{P}}_{i,j}$
on the finite $d$\nobreakdash-dimensional torus
to a random walk on the infinite grid~${\mathbb{Z}}^{d}$,
both with loop probability~$1/2$.
Let
$\ensuremath{\overline{\mathbf{P}}}_{i,j}$ be the transition probability from~$i$ to~$j$ on~${\mathbb{Z}}^{d}$
defined by $\ensuremath{\overline{\mathbf{P}}}_{i,j} = 1/(4d)$ if $|i-j|_1=1$,
$\ensuremath{\overline{\mathbf{P}}}_{i,i} = 1/2$,
and~$0$ otherwise.
For $i=(i_1,\ldots,i_d)\in V$
we set
\[
H(i):=(i_1+n_1\,{\mathbb{Z}},
i_2+n_2\,{\mathbb{Z}},
\ldots,
i_d+n_d\,{\mathbb{Z}}
)\subset{\mathbb{Z}}^{d}.
\]
With $\ensuremath{\overline{\mathbf{P}}}_{i}:=\ensuremath{\overline{\mathbf{P}}}_{0,i}$,
we observe
\[
\ensuremath{\mathbf{P}}_{i}^s = \sum_{k\in H(i)} \ensuremath{\overline{\mathbf{P}}}_{k}^s
\]
for all $s\geq0$ and $i\in V$.
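This folding identity can be verified directly; the following sketch (for $d=1$, with illustrative parameters) compares the lazy walk on a cycle with the folded lazy walk on ${\mathbb{Z}}$:
\begin{verbatim}
import numpy as np

q, s = 5, 12                        # cycle length and number of steps
P = np.zeros((q, q))                # lazy walk on the cycle C_q
for i in range(q):
    P[i, i] = 0.5
    P[i, (i + 1) % q] += 0.25
    P[i, (i - 1) % q] += 0.25
on_torus = np.linalg.matrix_power(P, s)[0]

R = s + 1                           # the walk cannot leave [-R, R]
pbar = np.zeros(2 * R + 1); pbar[R] = 1.0
for _ in range(s):
    nxt = 0.5 * pbar
    nxt[1:] += 0.25 * pbar[:-1]
    nxt[:-1] += 0.25 * pbar[1:]
    pbar = nxt
sites = np.arange(-R, R + 1)
folded = np.array([pbar[sites % q == i].sum() for i in range(q)])
print(np.max(np.abs(on_torus - folded)))   # ~ 1e-16
\end{verbatim}
The same bookkeeping carries over to $d>1$.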
We extend the definition of $e_{i,j}$ in the natural way
by setting
\[
e_{k,\ell}:=e_{i,j} \text{ for all $i,j\in V$ and $k\in H(i)$, $\ell\in H(j)$.}
\]
Let $\mathsf{ARR}=\{ \pm u_\ell \mid \ell\in\{1,\ldots,d\} \}\subset{\mathbb{Z}}^{d}$ with
$u_\ell$ being the $\ell$\nobreakdash-th unit vector.
Following \eq{StandardAnsatz} and using the fact that
by symmetry it suffices to bound the deviation at the vertex $0 := 0^d$,
we get
\begin{align*}
x_0^{(t)} - \xi_0^{(t)}
&=
\dfrac{1}{2}
\sum_{s=0}^{t-1}
\sum_{i\in V}
\sum_{z\in\mathsf{ARR}}
e_{i,i+z}^{(t-s)}
(\ensuremath{\mathbf{P}}_{i}^s - \ensuremath{\mathbf{P}}_{i+z}^s)
\\
&=
\dfrac{1}{2}
\sum_{s=0}^{t-1}
\sum_{i\in V}
\sum_{z\in\mathsf{ARR}}
e_{i,i+z}^{(t-s)}\,
\Bigg(\sum_{k\in H(i)} \ensuremath{\overline{\mathbf{P}}}_{k}^s - \sum_{\ell\in H(i+z)} \ensuremath{\overline{\mathbf{P}}}_{\ell}^s\Bigg)
\\
&=
\dfrac{1}{2}
\sum_{s=0}^{t-1}
\sum_{z\in\mathsf{ARR}}
\sum_{i\in V}
e_{i,i+z}^{(t-s)}\,
\Bigg(\sum_{k \in H(i)} \big( \ensuremath{\overline{\mathbf{P}}}_{k}^s - \ensuremath{\overline{\mathbf{P}}}_{k+z}^s \big)\Bigg)
\\
&=
\dfrac{1}{2}
\sum_{i\in V}
\sum_{z\in\mathsf{ARR}}
\sum_{k\in H(i)}
\sum_{s=0}^{t-1}
e_{k,k+z}^{(t-s)}\,
\big( \ensuremath{\overline{\mathbf{P}}}_{k}^s - \ensuremath{\overline{\mathbf{P}}}_{k+z}^s \big)
\end{align*}
\noindent
As ${\mathbb{Z}}^{d}=\bigcup_{i\in V} H(i)$ is a disjoint union, we can also write
\begin{align}
x_0^{(t)} - \xi_0^{(t)}
&=
\frac{1}{2}
\sum_{k\in {\mathbb{Z}}^{d}}
\sum_{z\in\mathsf{ARR}}
\sum_{s=0}^{t-1}
e_{k,k+z}^{(t-s)}\,
\big(\ensuremath{\overline{\mathbf{P}}}_{k}^s - \ensuremath{\overline{\mathbf{P}}}_{k+z}^s\big).
\label{eq:torus:inf}
\end{align}
\noindent
We now carefully break down the sums of \eq{torus:inf} and show that each part
can be bounded by~$\O(\Lambda)$.
For this, our main tool will be \lemref{betragkleinerk}.
As we cannot prove unimodality of $\big(\ensuremath{\overline{\mathbf{P}}}_{k}^s - \ensuremath{\overline{\mathbf{P}}}_{k+z}^s\big)$ directly,
we will use an appropriate local central limit theorem to
approximate the transition probabilities $\ensuremath{\overline{\mathbf{P}}}_k^s$ of~${\mathbb{Z}}^{d}$
with a multivariate normal distribution.
To derive the limiting distribution~$\ensuremath{\widetilde{\mathbf{P}}}_{k}^s$ of our random walk~$\ensuremath{\overline{\mathbf{P}}}_{i,j}$,
we follow \citet{LawlerLimic} and let $X=(X_1, \ldots, X_d)$ be a ${\mathbb{Z}}^d$-valued random variable
with $\Pro{X=z}=1/(4d)$ for all $z\in\mathsf{ARR}$ and
$\Pro{X=0^d}=1/2$.
Observe that $\Ex{X_j X_k}=0$ for $j\neq k$ since not both of them can be non-zero
simultaneously.
Moreover, $\Ex{X_j X_j} = \frac{1}{4d} (-1)^2 + \frac{1}{4d} (+1)^2 = \frac{1}{2d}$
for all
$1\leq j\leq d$.
Hence the covariance matrix is
\[
\Gamma
:= \Big[ \Ex{X_j X_k} \Big]_{1\leq j,k\leq d}
=(2d)^{-1} I.
\]
\noindent
From Eq.~(2.2) of \citet{LawlerLimic} we get
\begin{align}
\ensuremath{\widetilde{\mathbf{P}}}_{k}^s &=
\frac{1}{(2\pi)^d\,s^{d/2}} \,
\int_{{\mathbb{R}}^d}
\exp \left( \textswab{i}\, \frac{x \cdot k}{\sqrt{s}} \,\right) \,
\exp \left( -\frac{x \cdot \Gamma x}{2} \right) \,
d^d x\notag\\
\intertext{where $\textswab{i}=\sqrt{-1}$ denotes the imaginary unit. With this we can further conclude that}
\ensuremath{\widetilde{\mathbf{P}}}_{k}^s
&=
\frac{1}{(2\pi)^d\,s^{d/2}} \,
\int_{{\mathbb{R}}^d}
\exp \left( \textswab{i}\, \frac{x \cdot k}{\sqrt{s}} \,
-\frac{x \cdot \Gamma x}{2} \right) \,
d^d x\notag\\
&=
\frac{1}{(2\pi)^d\,s^{d/2}} \,
\int_{{\mathbb{R}}^d}
\exp \left( \textswab{i}\, \frac{x \cdot k}{\sqrt{s}} \,
-\frac{\|x\|_2^2}{4d} \right) \,
d^d x\notag\\
&=
\frac{1}{(2\pi)^d\,s^{d/2}} \,
\int_{{\mathbb{R}}^d}
\exp \left( -\frac{1}{4d} \left(
\|x\|_2^2\,
- 2 \textswab{i}\, \frac{2d}{\sqrt{s}} \, x \cdot k
\right) \right) \,
d^d x .\label{eq:integral1}
\end{align}
To evaluate the integral we complete the square, which yields
\begin{align}
\lefteqn{\int_{{\mathbb{R}}^d}
\exp \left( -\frac{1}{4d} \left(
\|x\|_2^2\,
- 2 \textswab{i}\, \frac{2d}{\sqrt{s}} \, x \cdot k
\right) \right) \,
d^d x}\notag \\
&= \int_{{\mathbb{R}}^d}\exp \left( -\frac{1}{4d} \left(
\|x\|_2^2\,
- 2 \textswab{i}\, \frac{2d}{\sqrt{s}} \, x \cdot k
-\frac{4d^2}{s} \|k\|_2^2
+\frac{4d^2}{s} \|k\|_2^2
\right) \right) \,
d^d x\notag \\
&= \exp \left(-\frac{d}{s}\|k\|_2^2\right)
\int_{{\mathbb{R}}^d}\exp \left( -\frac{1}{4d}
\left\|x-\textswab{i}\,\frac{2d}{\sqrt{s}}k\right\|_2^2
\right) \,
d^d x . \label{eq:integral2}
\end{align}
By substituting $z=x-\textswab{i}\,\frac{2d}{\sqrt{s}}k$ we get
\begin{align}
\lefteqn{\int_{{\mathbb{R}}^d}\exp \left( -\frac{1}{4d}
\left\|x-\textswab{i}\,\frac{2d}{\sqrt{s}}k\right\|_2^2
\right) \,
d^d x
}\notag\\
&=
\int_{{\mathbb{R}}^{d}}\exp \left( -\frac{1}{4d} \left(
\|z\|_2^2
\right) \right) \,
d^d z
\notag\\
&=
\idotsint_{{\mathbb{R}}^d}\exp \left( -\frac{1}{4d} \left(
\sum_{i=1}^d z_i^2
\right) \right) \,
d z_d \, \ldots \, d z_1
\notag\\
&=
\idotsint_{{\mathbb{R}}^{d-1}} \exp \left( -\frac{1}{4d} \left(
\sum_{i=1}^{d-1} z_i^2
\right) \right)
\,
\left(\int_{{\mathbb{R}}} \exp \left( -\frac{1}{4d} z_d^2\right)
d z_d \right) d z_{d-1}\, \ldots \, d z_1
\notag\\
&=
\left(2 \sqrt{\pi d}\right) \cdot
\idotsint_{{\mathbb{R}}^{d-1}}\exp \left( -\frac{1}{4d} \left(
\sum_{i=1}^{d-1} z_i^2
\right) \right) \,
d z_{d-1} \, \ldots \, d z_1
\notag\\
&=
\left(2 \sqrt{\pi d}\right)^{d} .
\label{eq:integral3}
\end{align}
Combining \eqss{integral1}{integral2}{integral3}, we get
\begin{align}
\ensuremath{\widetilde{\mathbf{P}}}_{k}^s
&= \frac{1}{(2\pi)^d\,s^{d/2}} \,
\exp \left(-\frac{d}{s}\|k\|_2^2\right) \,
\left(2 \sqrt{\pi d}\right)^{d}\notag\\
&= \left( \frac{d}{\pi s} \right) ^{d/2}
\exp\left(\frac{-d \,\|k\|_2^2}{s}\right).
\label{eq:multivariate}
\end{align}
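The accuracy of \eq{multivariate} can be probed numerically. Below is a minimal sketch for $d=2$ (the truncation of ${\mathbb{Z}}^2$ to a finite window is an approximation introduced here only for the check):
\begin{verbatim}
import numpy as np

d, s_max, R = 2, 60, 40
p = np.zeros((2 * R + 1, 2 * R + 1)); p[R, R] = 1.0
for _ in range(s_max):              # lazy walk on Z^2, loop prob. 1/2
    nxt = 0.5 * p
    nxt[1:, :]  += p[:-1, :] / (4 * d)
    nxt[:-1, :] += p[1:, :]  / (4 * d)
    nxt[:, 1:]  += p[:, :-1] / (4 * d)
    nxt[:, :-1] += p[:, 1:]  / (4 * d)
    p = nxt

def gauss(k, s):                    # eq:multivariate
    return (d / (np.pi * s))**(d / 2) * np.exp(-d * np.dot(k, k) / s)

for k in [(0, 0), (2, 1), (5, 0)]:
    print(k, p[R + k[0], R + k[1]], gauss(np.array(k), s_max))
\end{verbatim}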
It follows directly from Claims~4 and~5 of \citet{CooperSpencer} that
for all $k\in{\mathbb{Z}}^{d}$, $z\in\mathsf{ARR}$,
\begin{align}
&\ensuremath{\widetilde{\mathbf{P}}}_{k}^s - \ensuremath{\widetilde{\mathbf{P}}}_{k+z}^s = \mathcal{O}( \|k\|_2^{-(d+1)}) \text{ for all $s$},
\label{eq:multivariate:bound}
\\
(s\mapsto&\ensuremath{\widetilde{\mathbf{P}}}_{k}^s - \ensuremath{\widetilde{\mathbf{P}}}_{k+z}^s) \text{ has only a constant number of local extrema.}
\label{eq:multivariate:const}
\end{align}
\noindent
This gives the intuition that by approximating
$\big(\ensuremath{\overline{\mathbf{P}}}_{k}^s - \ensuremath{\overline{\mathbf{P}}}_{k+z}^s\big)$
with
$\big(\ensuremath{\widetilde{\mathbf{P}}}_{k}^s - \ensuremath{\widetilde{\mathbf{P}}}_{k+z}^s\big)$,
we can
bound
\eq{torus:inf}
for sufficiently large~$k$ and~$s$
by \lemref{betragkleinerk}.
This approximation is made precise by the following
local central limit theorem.
Theorem~2.3.6 of \citet{LawlerLimic} gives
for all $k\in{\mathbb{Z}}^{d}$, $z\in\mathsf{ARR}$, $s\geq0$,
\begin{align}
\label{eq:LCL1}
\big|
\big(\ensuremath{\overline{\mathbf{P}}}_{k}^s - \ensuremath{\overline{\mathbf{P}}}_{k+z}^s\big) -
\big(\ensuremath{\widetilde{\mathbf{P}}}_{k}^s - \ensuremath{\widetilde{\mathbf{P}}}_{k+z}^s\big)
\big|
&=
\O(s^{-(d+3)/2}).
\end{align}
\noindent
We first separate the case $k=0$ in \eq{torus:inf}.
With ${\mathbb{Z}^d_{\neq0}}:={\mathbb{Z}}^d\setminus\{0^d\}$
\begin{align}
\big|
x_0^{(t)} - \xi_0^{(t)}
\big|
&\leq
\frac{1}{2}
\overbrace{
\Bigg|
\sum_{z\in \mathsf{ARR}}
\sum_{s=0}^{t-1}
e_{0,0+z}^{(t-s)}\,
\left(\ensuremath{\overline{\mathbf{P}}}_{0}^s - \ensuremath{\overline{\mathbf{P}}}_{0+z}^s\right)
\Bigg|}^{(\ref{eq:torus:inf2}a)}
\notag\\
&\phantom{\leq}+
\frac{1}{2}
\underbrace{
\Bigg|
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\sum_{z\in \mathsf{ARR}}
\sum_{s=0}^{t-1}
e_{k,k+z}^{(t-s)}\,
\left(\ensuremath{\overline{\mathbf{P}}}_{k}^s - \ensuremath{\overline{\mathbf{P}}}_{k+z}^s\right)
\Bigg|
}_{(\ref{eq:torus:inf2}b)}
\label{eq:torus:inf2}
\end{align}
\noindent
Now we can apply the local central limit theorem given in \eq{LCL1} to (\ref{eq:torus:inf2}a) and get
\begin{align*}
(\ref{eq:torus:inf2}a)
&\leq
\Bigg|
\sum_{z\in \mathsf{ARR}}
\sum_{s=0}^{t-1}
e_{0,0+z}^{(t-s)}\,
\big(\ensuremath{\widetilde{\mathbf{P}}}_{0}^s - \ensuremath{\widetilde{\mathbf{P}}}_{0+z}^s\big)
\Bigg|
+
\Bigg|
\sum_{z\in \mathsf{ARR}}
\sum_{s=0}^{t-1}
\O(s^{-(d+3)/2})
\Bigg|
=
\O(\Lambda),
\end{align*}
where the last equality follows by
\lemref{betragkleinerk}
combined with
\eq{multivariate:const}
and
the property
$\big|\sum_{s=1}^t e_{i,j}^{(s)} \big|\leq \Lambda$.
We proceed by fixing
a cutoff point $T(k):=\tfrac{C \,\|k\|_2^2}{\ln^2(\|k\|_2)}$,
$k\in{\mathbb{Z}^d_{\neq0}}$, of the innermost sum
of (\ref{eq:torus:inf2}b)
for some sufficiently small constant $C>0$,
\begin{align}
(\ref{eq:torus:inf2}b)
&\leq
\overbrace{
\Bigg|
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\sum_{z\in \mathsf{ARR}}
\sum_{s=1}^{T(k)}
e_{k,k+z}^{(t-s)}\,
\left(\ensuremath{\overline{\mathbf{P}}}_{k}^s - \ensuremath{\overline{\mathbf{P}}}_{k+z}^s\right)
\Bigg|}^{(\ref{eq:torus:inf3}a)}
\notag\\
&\phantom{\leq}+
\underbrace{
\Bigg|
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\sum_{z\in \mathsf{ARR}}
\sum_{s=T(k)}^{t-1}
e_{k,k+z}^{(t-s)}\,
\left(\ensuremath{\overline{\mathbf{P}}}_{k}^s - \ensuremath{\overline{\mathbf{P}}}_{k+z}^s\right)
\Bigg|}_{(\ref{eq:torus:inf3}b)}.
\label{eq:torus:inf3}
\end{align}
Note that the summand with~$s=0$ is zero and can be ignored.
The first summand (\ref{eq:torus:inf3}a) can be bounded by
\begin{align}
(\ref{eq:torus:inf3}a)
&=
\O
\Bigg(
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\sum_{z\in \mathsf{ARR}}
\sum_{s=1}^{T(k)}
\Big(\ensuremath{\overline{\mathbf{P}}}_{k}^s + \ensuremath{\overline{\mathbf{P}}}_{k+z}^s\Big) \label{eq:secondlast}
\Bigg).
\end{align}
It is known from
\citet[Lem.~1.5.1(a)]{Lawler} that
for random walks on infinite grids,
$\sum_{\|k\|_2\geq\lambda\sqrt{s}} \ensuremath{\overline{\mathbf{P}}}_{k}^s = \O(e^{-\lambda})$
for all $s>0$ and $\lambda>0$. Hence also
\[
\ensuremath{\overline{\mathbf{P}}}_{k}^s
= \O\big(\exp\big(-\|k\|_2 / \sqrt{s} \big)\big)
\text{ for all $s>0$, $k\in{\mathbb{Z}}^{d}$.}
\]
With this we can now bound the term $\big(\ensuremath{\overline{\mathbf{P}}}_{k}^s + \ensuremath{\overline{\mathbf{P}}}_{k+z}^s\big)$ from \eq{secondlast}.
For $0<s\leq T(k)$,
$k\in{\mathbb{Z}^d_{\neq0}}$, $z\in\mathsf{ARR}$,
and sufficiently small $C > 0$,
\begin{align*}
\ensuremath{\overline{\mathbf{P}}}_{k}^s + \ensuremath{\overline{\mathbf{P}}}_{k+z}^s
&=\mathcal{O}\bigg( \exp\bigg(-\frac{\|k\|_2}{\sqrt{s}}\bigg) + \exp\bigg(-\frac{\|k + z\|_2}{\sqrt{s}}\bigg) \bigg) \\
&=\mathcal{O}\bigg( \exp\bigg(-\frac{\|k\|_2-\|z\|_2}{\sqrt{s}}\bigg) \bigg) \\
&=\mathcal{O}\bigg(\exp\bigg(-\ln(\|k\|_2)\frac{(\|k\|_2-1)}{\sqrt{C}\, \|k\|_2}\bigg)\bigg) \\
&=\mathcal{O}\big(\|k\|_2^{-(d+4)}\big).
\end{align*}
\noindent
Plugging this into \eq{secondlast}, we obtain that
\begin{align*}
(\ref{eq:torus:inf3}a)
=
\O\Bigg(
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
T(k)\,
\|k\|_2^{-(d+4)}
\Bigg)
=
\O\Bigg(
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\|k\|_2^{-(d+2)}
\,\ln^{-2}(\|k\|_2)
\Bigg)
=\O(1).
\end{align*}
To bound (\ref{eq:torus:inf3}b),
we approximate the transition probabilities of ${\mathbb{Z}}^{d}$
with the multivariate normal distribution of \eq{multivariate}
by the local central limit theorem stated in \eq{LCL1},
\begin{align}
(\ref{eq:torus:inf3}b) &= \Bigg|
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\sum_{z\in \mathsf{ARR}}
\sum_{s=T(k)}^{t-1}
e_{k,k+z}^{(t-s)}\,
\left(\ensuremath{\widetilde{\mathbf{P}}}_{k}^s - \ensuremath{\widetilde{\mathbf{P}}}_{k+z}^s\right)
\notag \\
&\phantom{\leq}+
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\sum_{z\in \mathsf{ARR}}
\sum_{s=T(k)}^{t-1}
e_{k,k+z}^{(t-s)}\,
\left(\ensuremath{\overline{\mathbf{P}}}_{k}^s - \ensuremath{\overline{\mathbf{P}}}_{k+z}^s\right) - \left(\ensuremath{\widetilde{\mathbf{P}}}_{k}^s - \ensuremath{\widetilde{\mathbf{P}}}_{k+z}^s\right)
\Bigg| \notag \\
&\leq
\overbrace{
\Bigg|
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\sum_{z\in \mathsf{ARR}}
\sum_{s=T(k)}^{t-1}
e_{k,k+z}^{(t-s)}\,
\left(\ensuremath{\widetilde{\mathbf{P}}}_{k}^s - \ensuremath{\widetilde{\mathbf{P}}}_{k+z}^s\right)
\Bigg|}^{(\ref{eq:megasum}a)}
\notag\\
&\phantom{\leq}+
\underbrace{
\Bigg|
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\sum_{z\in \mathsf{ARR}}
\sum_{s=T(k)}^{t-1}
e_{k,k+z}^{(t-s)}\,
\O(s^{-(d+3)/2})
\Bigg|}_{(\ref{eq:megasum}b)}.
\label{eq:megasum}
\end{align}
\noindent
We can bound the second term (\ref{eq:megasum}b) by
\begin{align*}
(\ref{eq:megasum}b)
&=
\O\Bigg(
d
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\sum_{s=T(k)}^{\infty}
\,s^{-(d+3)/2}
\Bigg)
\\
&=
\O\Bigg(
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
T(k)^{-(d+1)/2}
\Bigg) \\
&=
\O\Bigg(
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\frac{\ln^{d+1}(\|k\|_2) }{\|k\|_2^{d+1}}
\Bigg).
\intertext{As there are constants $C' > 0$ and $\epsilon > 0$ such that
$\ln^{d+1}(\|k\|_2) \leq C' \, \|k\|_2^{1-\epsilon}$ for all $k \in {\mathbb{Z}^d_{\neq0}}$ we obtain that}
(\ref{eq:megasum}b)
&=
\O\Bigg(
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\|k\|_2^{-(d+\epsilon)}
\Bigg).
\end{align*}
To see that this can be bounded by $\O(1)$, observe that
with ${\mathbb{N}^d_{\neq0}}:={\mathbb{N}}^{d}\setminus\{0^d\}$,
\begin{align*}
\sum_{k\in {\mathbb{Z}^d_{\neq0}}} \| k \|_2^{-(d+\epsilon)}
\leq 2^{d} \, \sum_{k\in {\mathbb{N}^d_{\neq0}}} (k_1^2 + \cdots + k_{d}^2)^{-(d+\epsilon)/2}.
\end{align*}
By convexity of $x \mapsto x^2$,
$k_1^{2} + \cdots + k_{d}^{2}
\geq \tfrac{1}{d} (k_1 + \cdots + k_{d})^2$, we then get
\begin{align*}
(\ref{eq:megasum}b)
&= \mathcal{O} \Bigg( \sum_{k\in {\mathbb{N}^d_{\neq0}}} (k_1 + \cdots + k_{d})^{-(d+\epsilon)}\Bigg)
= \mathcal{O} \Bigg( \sum_{x=1}^{\infty} \sum_{\substack{k\in {\mathbb{N}}^{d} \\ \|k\|_1=x}}
x^{-(d+\epsilon)} \Bigg) \\
&= \mathcal{O} \Bigg( \sum_{x=1}^{\infty} x^{d-1} \cdot x^{-(d+\epsilon)} \Bigg)
= \mathcal{O} \Bigg( \sum_{x=1}^{\infty} x^{-(1+\epsilon)} \Bigg)
= \mathcal{O}(1).
\end{align*}
\noindent
To finally bound (\ref{eq:megasum}a),
we apply
\eq{multivariate:const}.
We also use that $\ensuremath{\widetilde{\mathbf{P}}}_{k}^s - \ensuremath{\widetilde{\mathbf{P}}}_{k+z}^s$
can be bounded by $\mathcal{O}( \|k\|_2^{-(d+1)})$
according to \eq{multivariate:bound}.
As
$|\sum_{s=1}^t e_{i,j}^{(s)} |\leq \Lambda$,
applying \lemref{betragkleinerk} yields
\begin{align*}
(\ref{eq:megasum}a)
&=
\O\Bigg(
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\sum_{z\in \mathsf{ARR}}
\Lambda \,
\max_{s=T(k)}^{t-1}
\Big(\ensuremath{\widetilde{\mathbf{P}}}_{k}^s - \ensuremath{\widetilde{\mathbf{P}}}_{k+z}^s\Big)
\Bigg)\\
&=\O\Bigg(
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\sum_{z\in \mathsf{ARR}}
\Lambda \,
\|k\|_2^{-(d+1)}
\Bigg)
\\&
=
\O\Bigg( \Lambda \,
d
\sum_{k\in {\mathbb{Z}^d_{\neq0}}}
\|k\|_2^{-(d+1)}
\Bigg)
\\&
=
\O ( \Lambda ).
\end{align*}
Combining all above bounds,
we can conclude that
$
\big|x_0^{(t)} - \xi_0^{(t)}\big|
= \mathcal{O}(\Lambda),
$
meaning that the deviation between the idealized process and the discrete process at any time and
vertex is at most $\mathcal{O}(\Lambda)$.
\end{proof}
\section{Lower bounds for previous algorithms}
\label{sec:others}
For a better comparison with previous algorithms,
this section gives lower bounds for other discrete diffusion processes.
First, we observe the following general lower bound on the discrepancy for the RSW-algorithm.
\begin{pro}
\label{pro:lowerbound}
On all
graphs~$G$ with maximum degree~$\Delta$,
there is an initial load-vector $x^{(0)}$ with discrepancy $\Delta \operatorname{diam}(G)$
such that
for the RSW-algorithm,
$x^{(t)}=x^{(t-1)}$ for all $t \in {\mathbb{N}}$.
\end{pro}
\begin{proof}
Fix a pair of vertices $i$ and $j$ with $\operatorname{dist}(i,j)=\operatorname{diam}(G)$. Define an initial load-vector $x^{(0)}$ by
\[
x^{(0)}_k := \operatorname{dist}(k,i) \cdot \Delta.
\]
Clearly, the initial discrepancy is $x_j^{(0)}-x_i^{(0)} = \Delta \operatorname{diam}(G)$.
We claim that $x^{(1)}=x^{(0)}$. Consider an arbitrary edge $\{r,s\} \in E(G)$. Then,
\begin{align*}
\big| \ensuremath{\mathbf{P}}_{r,s}\,x_r^{(0)} - \ensuremath{\mathbf{P}}_{s,r} \,x_s^{(0)} \big|
= \frac{1}{2 \Delta} \big| x_r^{(0)} - x_s^{(0)} \big|
\leq \frac{1}{2 \Delta} \, \Delta
= \frac{1}{2}.
\end{align*}
Hence the integral flow on any edge $\{r,s\} \in E(G)$ is $\lfloor \frac{1}{2} \rfloor = 0$ and the load-vector remains unchanged. The claim follows.
\end{proof}
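The frozen instance in the proof above can be reproduced in a few lines; a minimal sketch on a cycle (so $\Delta=2$; the parameters are illustrative):
\begin{verbatim}
import numpy as np

n, Delta = 10, 2                  # cycle C_n, maximum degree Delta = 2
x = np.array([min(k, n - k) * Delta for k in range(n)])  # dist(k,0)*Delta
flows = [abs(x[k] - x[(k + 1) % n]) / (2 * Delta) for k in range(n)]
print(x, all(f <= 0.5 for f in flows))  # every flow rounds down to 0
\end{verbatim}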
In the remainder of this section we present two lower bounds for
the deviation between the randomized rounding diffusion
algorithm and the idealized process.
First, we prove a bound of $\Omega(\log n)$
for the hypercube. Together with \thmref{cube}
this implies that on hypercubes
the quasirandom approach is as good as the
randomized rounding diffusion algorithm.
\begin{thm}
\label{thm:cuberandomlower}
There is an initial load vector
of the $d$\nobreakdash-dimensional hypercube with $n=2^d$ vertices
such that the deviation between the randomized rounding diffusion
algorithm and the idealized process
is at least $\log n/4$ with probability $1-n^{-\Omega(1)}$.
\end{thm}
\begin{proof}
We define an initial load vector $x^{(0)}$ as follows. For every vertex
$v=(v_1,v_2,\ldots,v_{d}) \in \{0,1\}^{d}$ with even parity, i.e., $|v|_1$ even, we set
$x_{v}^{(0)}=\xi_{v}^{(0)}=0$, and for every vertex with odd parity
we set $x_{v}^{(0)}=\xi_{v}^{(0)}=d$.
Hence, the idealized process
will send a flow of $d/(2d)=1/2$ from
every odd-parity vertex to each of its $d$ even-parity neighbors. Thus for the idealized process, $\xi_v^{(1)}=(1/2)\,d$, that is, all
vertices have a load of $(1/2)\,d$ after one step
and the load is perfectly balanced.
Let us now consider the discrete
process. Let $V_0$ be the set of vertices with even parity.
Consider any node $v \in V_0$. Note that all $d$ neighbors of~$v$ have a load of~$d$, so the fractional flow on each edge incident to~$v$ is $1/2$ and the integral flow from any of those neighbors equals $1$ with probability $1/2$, independently. Hence the load of~$v$ after one step is a binomial random variable and
using the fact that $\binom{r}{s} \geq (r/s)^s$ we obtain
\begin{align*}
\Pro { x_{v}^{(1)} = \frac{3}{4} \, d } \geq \Pro { x_{v}^{(1)} \geq \frac{3}{4} \, d } &\geq \binom{d}{(3/4) d} 2^{-d} \geq \bigg( \frac{4}{3} \bigg)^{(3/4) d} 2^{-d} \geq n^{-1+C},
\end{align*}
for some constant $C > 0$ since $d=\log_2 n$. As the maximum degree of the graph is $\log n$ and the size of $V_0$ is $n/2$, it follows that there is a subset $S
\subseteq V_0$ of size $\Omega\big(\frac{n}{\log^4 n}\big)$ in the hypercube such that
every pair in $S$ has distance at least $4$. By construction,
the respective events $x_{v}^{(1)} \geq (3/4) d$ are independent for all vertices $v \in S$. Hence
\begin{align*}
\Pro { \exists v \in S \colon x_{v}^{(1)} \geq \frac{3}{4} \, d } &\geq 1 - \left( 1 - n^{-1+C} \right)^{\Omega\left(\frac{n}{\log^4 n}\right)} \geq 1 - n^{-C'},
\end{align*}
where $0 < C' < C$ is another constant. This means that with probability at least $1-n^{-C'}$ the load at some vertex $v \in S$ at step $1$ will be at least $(3/4)\,d$ in the discrete process, but equals $(1/2)\,d$ in the idealized process. This completes the proof.
\end{proof}
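The one-step binomial overload used in this proof is easy to reproduce numerically. A rough illustration follows (the independent sampling below ignores correlations between nearby vertices, which the proof handles via the spread-out subset $S$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 14                              # hypercube dimension, n = 2**d
# after one step, each even-parity vertex holds a Binomial(d, 1/2) load
loads = rng.binomial(d, 0.5, size=2**(d - 1))
print(d / 2, loads.max(), loads.max() - d / 2)   # deviation ~ d/4
\end{verbatim}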
It remains to give a lower bound for the deviation
between the randomized rounding and the idealized process
for torus graphs.
The following theorem proves a polylogarithmic lower bound
for the randomized rounding algorithm which
should be compared to the constant upper bound
for the quasirandom approach of \thmref{torus}.
Similar results can also be derived for non-uniform torus graphs.
\begin{thm}
\label{thm:toruslower}
There is an initial load vector
of the $d$\nobreakdash-dimensional uniform torus graph with $n$~vertices
such that the deviation between the randomized rounding diffusion algorithm
and the idealized process
is $\Omega( \operatorname{polylog}(n))$ with probability $1-o(1)$.
\end{thm}
\newcommand{\mathsf{G}}{\mathsf{G}}
\newcommand{\mathsf{T}}{\mathsf{T}}
\begin{proof}
Let $n$ be a sufficiently large integer and
$\mathsf{T}$ be a $d$\nobreakdash-dimensional torus graph with $n$~vertices
and side-length $\oldsqrt[d]{n}\in{\mathbb{N}}$.
Let $B_{k}(u) := \linebreak[0]\{ v \in V \colon \|v-u\|_\infty \leq k \}$ and
$\partial B_{k}(u) := \{ v \in V \colon \|v-u\|_\infty = k \}$.
For every vertex $v \in V(\mathsf{T})$, let $\ell:=(\log n)^{1/(4d)}$, so that
$|B_{\ell/2}(v)| = \ell^d = (\log n)^{1/4}$, where we assume w.l.o.g. that $\ell$ is an odd integer.
For $\ell':=(\log n)^{2/(3d)}$, define a set $S \subseteq V$ by
\[
S := \bigl \{
( x_1 \, \ell', x_2 \, \ell', \ldots, x_{d} \, \ell')
\, \bigm\vert \,
1 \leq x_1,x_2, \ldots,x_d < \oldsqrt[d]{n}/\ell' - 1
\bigr\},
\]
that is, every pair of distinct vertices in $S$ has a coordinate-wise
distance which is a multiple of $\ell'$.
Note that $|S|=\Omega(n/\ell'^{d})$.
Define the initial load vector as
$x_i^{(0)} = \xi_i^{(0)} := 2d \cdot \max\{0, \ell/2 - \operatorname{dist}(i,S) \}$,
$i\in V$.
Clearly, the initial discrepancy is $K=2d \cdot \ell/2$.
The idea is now to decompose $\mathsf{T}$ in smaller subgraphs
centered around $s\in S$, since
the upper bound on the convergence rate given by
\thmref{ideal} has a strong dependence on the size of the graph.
Then we relate the simultaneous convergence on each of the
smaller graphs to the convergence on the original graph. An illustration of
our decomposition of $\mathsf{T}$ can be found in \figref{torus}.
\begin{figure}[tb]
\center
\begin{tikzpicture}[auto,domain=0:4,x=1cm,y=1cm]
\pgftransformxscale{.7}
\pgftransformyscale{.7}
\draw (0,1) -- (9,1) -- (9,10) -- (0,10) -- (0,1);
\draw[snake=brace] (0.75,8) -- (0.75,9);
\draw[snake=brace] (2,9.25) -- (4,9.25);
\draw(0.75,8.5) node[left] {$\ell$};
\draw(3,9.25) node[above] {$\ell' - \ell$};
\setcounter{xcounter}{0}
\foreach \x in {1.08,1.16,...,2}
\foreach \y in {2.08,2.16,...,3}
{
\pgfmathsetcounter{cntShader}{100-190*max(\x-1.5,1.5-\x)-190*max(\y-2.5,2.5-\y)}
\filldraw [fill=black!\thecntShader, color=black!\thecntShader, line width=0pt] (\x-0.04,\y-0.04) rectangle (\x+0.04,\y+0.04);
}
\setcounter{xcounter}{0}
\foreach \x in {1.08,1.16,...,2}
\foreach \y in {5.08,5.16,...,6}
{
\pgfmathsetcounter{cntShader}{100-190*max(\x-1.5,1.5-\x)-190*max(\y-5.5,5.5-\y)}
\filldraw [fill=black!\thecntShader, color=black!\thecntShader, line width=0pt] (\x-0.04,\y-0.04) rectangle (\x+0.04,\y+0.04);
}
\setcounter{xcounter}{0}
\foreach \x in {1.08,1.16,...,2}
\foreach \y in {8.08,8.16,...,9}
{
\pgfmathsetcounter{cntShader}{100-190*max(\x-1.5,1.5-\x)-190*max(\y-8.5,8.5-\y)}
\filldraw [fill=black!\thecntShader, color=black!\thecntShader, line width=0pt] (\x-0.04,\y-0.04) rectangle (\x+0.04,\y+0.04);
}
\setcounter{xcounter}{0}
\foreach \x in {4.08,4.16,...,5}
\foreach \y in {2.08,2.16,...,3}
{
\pgfmathsetcounter{cntShader}{100-190*max(\x-4.5,4.5-\x)-190*max(\y-2.5,2.5-\y)}
\filldraw [fill=black!\thecntShader, color=black!\thecntShader, line width=0pt] (\x-0.04,\y-0.04) rectangle (\x+0.04,\y+0.04);
}
\setcounter{xcounter}{0}
\foreach \x in {4.08,4.16,...,5}
\foreach \y in {5.08,5.16,...,6}
{
\pgfmathsetcounter{cntShader}{100-190*max(\x-4.5,4.5-\x)-190*max(\y-5.5,5.5-\y)}
\filldraw [fill=black!\thecntShader, color=black!\thecntShader, line width=0pt] (\x-0.04,\y-0.04) rectangle (\x+0.04,\y+0.04);
}
\setcounter{xcounter}{0}
\foreach \x in {4.08,4.16,...,5}
\foreach \y in {8.08,8.16,...,9}
{
\pgfmathsetcounter{cntShader}{100-190*max(\x-4.5,4.5-\x)-190*max(\y-8.5,8.5-\y)}
\filldraw [fill=black!\thecntShader, color=black!\thecntShader, line width=0pt] (\x-0.04,\y-0.04) rectangle (\x+0.04,\y+0.04);
}
\setcounter{xcounter}{0}
\foreach \x in {7.08,7.16,...,8}
\foreach \y in {2.08,2.16,...,3}
{
\pgfmathsetcounter{cntShader}{100-190*max(\x-7.5,7.5-\x)-190*max(\y-2.5,2.5-\y)}
\filldraw [fill=black!\thecntShader, color=black!\thecntShader, line width=0pt] (\x-0.04,\y-0.04) rectangle (\x+0.04,\y+0.04);
}
\setcounter{xcounter}{0}
\foreach \x in {7.08,7.16,...,8}
\foreach \y in {5.08,5.16,...,6}
{
\pgfmathsetcounter{cntShader}{100-190*max(\x-7.5,7.5-\x)-190*max(\y-5.5,5.5-\y)}
\filldraw [fill=black!\thecntShader, color=black!\thecntShader, line width=0pt] (\x-0.04,\y-0.04) rectangle (\x+0.04,\y+0.04);
}
\setcounter{xcounter}{0}
\foreach \x in {7.08,7.16,...,8}
\foreach \y in {8.08,8.16,...,9}
{
\pgfmathsetcounter{cntShader}{100-190*max(\x-7.5,7.5-\x)-190*max(\y-8.5,8.5-\y)}
\filldraw [fill=black!\thecntShader, color=black!\thecntShader, line width=0pt] (\x-0.04,\y-0.04) rectangle (\x+0.04,\y+0.04);
}
\draw (1,9) -- (2,9) -- (2,8) -- (1,8) -- (1,9);
\draw (4,9) -- (5,9) -- (5,8) -- (4,8) -- (4,9);
\draw (1,6) -- (2,6) -- (2,5) -- (1,5) -- (1,6);
\draw (4,6) -- (5,6) -- (5,5) -- (4,5) -- (4,6);
\draw (7,9) -- (8,9) -- (8,8) -- (7,8) -- (7,9);
\draw (7,6) -- (8,6) -- (8,5) -- (7,5) -- (7,6);
\draw (1,3) -- (2,3) -- (2,2) -- (1,2) -- (1,3);
\draw (4,3) -- (5,3) -- (5,2) -- (4,2) -- (4,3);
\draw (7,3) -- (8,3) -- (8,2) -- (7,2) -- (7,3);
\end{tikzpicture}
\caption{Overview of the decomposition of $\mathsf{T}$ into various $\mathsf{T}'(s)$ for the two-dimensional case $d=2$. The inner rectangles represent the various smaller grids $\mathsf{T}'(s)$ with $s \in S$. The darkness indicates the amount of the initial load. Note that the initial load of vertices outside the $\mathsf{T}'(s)$'s is $0$.} \label{fig:torus}
\end{figure}
Fix some $s\in S$. Then the subgraph
induced by the vertices $B_{\ell/2}(s)$
is a $d$\nobreakdash-dimensional grid with exactly $n':=(\log n)^{1/4}$
vertices.
Let $\mathsf{T}'=\mathsf{T}'(s)$ denote the corresponding $d$\nobreakdash-dimensional torus graph with
the same vertices, but additional
wrap-around edges between vertices of
$\partial B_{\ell/2}(s)$.
W.l.o.g.\ we assume that the side-length $\oldsqrt[d]{n}$ of $\mathsf{T}$ is a multiple
of the side-length $\ell$
of $\mathsf{T}'(s)$.
Let $\ensuremath{\mathbf{P}}'$ be the diffusion matrix of $\mathsf{T}'(s)$.
Let us denote by $\xi'^{(0)}$ $(x'^{(0)})$ the projection
of the load vector $\xi^{(0)}$ $(x^{(0)})$ from $\mathsf{T}$ onto $\mathsf{T}'(s)$.
By \corref{ideal}, the idealized process reduces the discrepancy on $\mathsf{T}'(s)$
from~$K=d\,(\log n)^{1/(4d)}$ to~$1$ within
$t_0 := \mathcal{O}( (n')^{2/d}\,\log (K n') )
= \mathcal{O}( \log \log (n) \, (\log n)^{1/(2d)})$ time steps.
We now want to argue that this also happens on the original graph $\mathsf{T}$ with $n$~vertices.
Note that the convergence of the idealized process on $\mathsf{T}'(s)$ implies
\begin{equation}
\|\xi'^{(t_0)} - \overline{ \xi'} \|_{\infty}
= \| {\ensuremath{\mathbf{P}}'}^{t_0} \xi'^{(0)} -\overline{ \xi'} \|_{\infty} \leq 1. \label{eq:firstt}
\end{equation}
Furthermore, note that the average load $\overline{\xi'}$ in each $\mathsf{T}'(s)$ satisfies
\[
\overline{\xi'} \leq 2d \cdot \ell/4.
\]
\noindent
Our next observation is that for any two vertices $u,v \in \mathsf{T}'(s)$,
\begin{align}
\ensuremath{\mathbf{P}}_{u,v}^{t_0} &\leq {\ensuremath{\mathbf{P}}'}_{u,v}^{t_0} \label{eq:secondd}
\end{align}
as a random walk on $\mathsf{T}'(s)$ can be expressed as a projection of a random
walk on $\mathsf{T}$ (by assigning each vertex in $\mathsf{T}'(s)$ to a set of vertices
in $\mathsf{T}$). With the observations
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item for $v\in \mathsf{T}'(s)$: ${\xi}_v^{(0)}={\xi'_v}^{(0)}$,
\item for $v \in \mathsf{T}$ and $\ell/2 \leq \operatorname{dist}(v,s) \leq t_0$:
$\xi_v^{(0)}=0$
(as $t_0=o(\ell'-\ell/2))$,
\item for $v\in \mathsf{T}$ and $\operatorname{dist}(v,s)>t_0$: $\ensuremath{\mathbf{P}}_{v,s}^{t_0}=0$,
\end{itemize}
we obtain
\[
\xi_{s}^{(t_0)}
= \big( \ensuremath{\mathbf{P}}^{t_0} \, \xi^{(0)} \big)_s
= \sum_{v \in \mathsf{T}} \xi_v^{(0)} \ensuremath{\mathbf{P}}_{v,s}^{t_0}
= \sum_{v \in \mathsf{T}'(s)} {\xi'_v}^{(0)} \ensuremath{\mathbf{P}}_{v,s}^{t_0}.
\]
By first applying \eq{secondd} and then \eq{firstt}, we get
\[
\xi_{s}^{(t_0)}
\leq \sum_{v \in \mathsf{T}'(s)} {\xi'_v}^{(0)} {\ensuremath{\mathbf{P}}'}_{v,s}^{t_0}
= {\xi'_s}^{(t_0)} \leq \overline{ \xi'} + 1.
\]
\noindent
This means that the idealized process achieves after $t_0$ time steps
a good balancing at $s$.
On the other hand, the discrete process may fail within $t_0$ time steps if
there is an~$s$ such that all edges in $\mathsf{T}'(s)$ round towards $s$ at all
time steps $t\leq t_0$. (Note that by construction, no load from another
$\mathsf{T}'(s')$, $s' \in S \setminus \{s\}$, can reach $\mathsf{T}'(s)$ within $t_0$ steps,
since the distance between any vertex in $\mathsf{T}'(s)$ and $\mathsf{T}'(s')$ is $\ell' -
2 \ell \geq t_0$.)
Moreover, by definition of $x^{(0)}$, $|x^{(0)}_u -
x^{(0)}_v|\in \{0, 2d \}$ if $\{u,v\}\in E(\mathsf{T})$. Hence the fractional flow in
the first step is $\in \{0, \frac{1}{2} \}$
and for fixed $s$
the probability that $x^{(0)}_u=x^{(1)}_u$ for all $u\in \mathsf{T}'(s)$ is
at least
$2^{ -|B_{\ell/2}(s)| }$.
By induction, for fixed $s$ the probability that $x^{(0)}_u=x^{(t_0)}_u$
holds for all $u\in \mathsf{T}'(s)$ is at least
\[
2^{ -|B_{\ell/2}(s)| \, t_0}
= 2^{ -(\log n)^{1/4} \cdot \mathcal{O}( \log \log (n) \, (\log n)^{1/(2d)} )} \geq 2^{-(\log n)^{4/5}} .
\]
As we have $|S| = \Omega(n/\ell'^{d})=\Omega(\operatorname{poly}(n))$
independent events, it follows that
there is at least one $s \in S$ with $ x_{s}^{(t_0)} = x_s^{(0)} = \ell/2 \cdot 2d$
with probability
\[
1 - \Big(1 - 2^{-(\log n)^{4/5}} \Big)^{\Omega(\operatorname{poly}{(n)})} \geq 1- n^{-C},
\] where $C > 0$ is some constant.
If this happens, then the deviation between the discrete and idealized process at vertex $s \in S$ at step $t_0$ is at least
\[
\big| x_{s}^{(t_0)} - \xi_{s}^{(t_0)} \big|
\geq \big| 2d \cdot \ell/2 - (2d \cdot \ell/4 + 1) \big|
= \Omega( (\log n)^{1/(4d)}),
\]
and the claim follows.
\end{proof}
\section{Conclusions}
We propose and analyze a new deterministic algorithm for balancing
indivisible tokens. By achieving a constant discrepancy in optimal time
on all torus graphs, our algorithm improves upon all previous deterministic and random
approaches with respect to both running time and discrepancy.
For hypercubes we prove a discrepancy of $\Theta(\log n)$ which
is also significantly better than the (deterministic)
RSW-algorithm which achieves a discrepancy of $\Omega(\log^2 n)$.
On a concrete level, it would be interesting to extend these results
to other network topologies. From a higher perspective, our new algorithm provides
a striking example of quasirandomness in algorithmics. Devising and analyzing
similar algorithms for other tasks such as routing, scheduling or synchronization
remains an interesting open problem.
\section*{Acknowledgments}
This work was done while all three authors
were postdoctoral fellows at the International Computer Science Institute (ICSI)
in Berkeley, California supported
by the German Academic Exchange Service (DAAD).
We also want to thank Benjamin Doerr for suggesting
a simplified proof of \lemref{betragkleinerk}.
\newcommand{\FOCS}[2]{#1 Annual IEEE Symposium on Foundations of Computer Science (FOCS'#2)}
\newcommand{\STOC}[2]{#1 Annual ACM Symposium on Theory of Computing (STOC'#2)}
\newcommand{\SODA}[2]{#1 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'#2)}
\newcommand{\PODC}[2]{#1 Annual ACM Principles of Distributed Computing (PODC'#2)}
\newcommand{\SPAA}[2]{#1 ACM Symposium on Parallel Algorithms and Architectures (SPAA'#2)}
In the past three decades, much effort has been devoted to understanding non-equilibrium thermodynamics, focusing especially on mesoscopic-scale non-equilibrium phenomena, where fluctuations are strong~\cite{sekimoto2010stochastic,seifert2012stochastic}. These theoretical studies have led to interesting applications in biological systems, including kinetic proofreading~\cite{hopfield1974kinetic,Sartori2013Kinetic}, environment sensing and adaptation~\cite{lan2012energy,mehta2012energetic,Stefano2015Information}, oscillation maintenance within cells~\cite{cao2015oscillation}, and the efficiency of nanosized rotary motors~\cite{noji1997direct,toyabe2011thermodynamic}. Due to the intrinsic complexity of these molecular devices and machines, coarse-graining is inevitable to simplify the description.
Recently, there has been much interest in the effect of coarse-graining on the entropy production of a system~\cite{rahav2007fluctuation,puglisi2010entropy,celani2012anomalous,bo2014entropy,nakayama2015invariance,Chun2015fast,Esposito2015Stochastic,shouwen2015adaptation,chen2016Model}, a key thermodynamic quantity that measures how far the system is from equilibrium. For a system with two distinct time scales, coarse-graining is achieved by adiabatically eliminating the fast variables to obtain a simpler description in terms of the slow variables. Hondou \emph{et al.} first realized that, for a Brownian particle moving in a spatially modulated temperature field, the over-damped approximation gives the wrong entropy production rate compared with the underdamped Langevin description, which implies that Carnot efficiency cannot be achieved in such a Brownian heat engine~\cite{Derenyi1999efficiency,hondou2000unattainability}. Esposito made a systematic study of Markov systems with two time scales and found that the total entropy production can be decomposed into that at the coarse-grained level, that due only to the fast dynamics, and that due to the coupling between slow and fast dynamics. Surprisingly, the coupling term is non-negative and thus cannot be ignored in general~\cite{esposito2012stochastic}. The missing contribution at the coarse-grained level is referred to as ``hidden entropy production'' by Kawaguchi \emph{et al.}, who further showed that it satisfies a fluctuation theorem~\cite{kawaguchi2013fluctuation}. In our very recent paper, we showed that hidden entropy production reveals itself as a characteristic plateau in the violation spectrum of the \emph{fluctuation-response relation} (FRR) of a slow observable~\cite{Wang2016entropy}. Our discovery suggests a way to reveal the hidden fast dissipative processes by studying the trajectory of a slow variable with sufficient temporal resolution.
FRR is a fundamental property of an equilibrium system~\cite{kubo1966fluctuation}, and its violation can be exploited to characterize non-equilibrium systems. This has been applied to study active hair bundles~\cite{martin2001comparison}, active cytoskeletal networks~\cite{Mizuno2007cytoskeletonNetwork}, and molecular motors~\cite{toyabe2010nonequilibrium}. Furthermore, one may introduce an effective temperature as the ratio between correlation and response, and use its deviation from the bath temperature to quantify the deviation of the system from equilibrium. Although initially proposed by Cugliandolo \emph{et al.} to characterize glassy systems~\cite{LeticiaPREtemperature}, it has recently been used for small molecules driven out of equilibrium~\cite{dieterich2015single}. A more fundamental connection between FRR violation and dissipation was pointed out by Harada and Sasa a decade ago in the context of general Langevin systems~\cite{harada2005equality,harada2006energy}. Referred to as the Harada-Sasa equality, this relation has been confirmed experimentally in a driven colloid system, and has also been used to study the energetics of F1 ATPase~\cite{toyabe2007experimental}, a rotary motor with much higher complexity~\cite{toyabe2010nonequilibrium}. Although there is no general connection between FRR violation and dissipation in discrete Markov systems, Lippiello \emph{et al.} generalized the Harada-Sasa equality to \emph{diffusive} Markov jumping systems where the entropy production in the medium for each jump is relatively small~\cite{Lippiello2014fluctuation}. However, the requirements for this generalization are still not very clear.
In this paper, we systematically discuss the FRR violation of a Markov system with two distinct time scales and a finite state space. Its FRR violation spectra for both a slow and a fast observable are derived, and the connections to hidden entropy production and effective temperature are also discussed. The paper is organized as follows. Section~\ref{sect:HS} gives a brief introduction to the Harada-Sasa equality. Its generalization to Markov jumping systems is discussed in Section~\ref{sect:generalization_HS}. In Section~\ref{sect:CR}, we derive the analytical forms of the correlation and response spectra for Markov systems. Section~\ref{sect:time scale} introduces our finite Markov model with two time scales, followed by a perturbative analysis of this system and its FRR violation spectra for fast and slow observables, respectively. In Section~\ref{sect:discussion}, the connections to the entropy production partition and to effective temperature are discussed. Section~\ref{sect:adaptation} illustrates our main idea through an example of sensory adaptation in E.~coli. We conclude in Section~\ref{sect:conclusion}.
\section{The Harada-Sasa equality}
\label{sect:HS}
The FRR violation of a specific degree of freedom can be related to the dissipation rate of the same degree of freedom, as exemplified by the Harada-Sasa equality in the context of Langevin systems. Consider the following $N_0$-component Langevin equation
\begin{equation}
\gamma_j \dot{x}_j=F_j(\vec{x}(t),t)+\xi_j(t)+h_j(t),
\label{eq:general-Langevin}
\end{equation}
where $\gamma_j$ is the friction coefficient for the variable $x_j$, $F_j$ is a driving force that depends on the system configuration $\vec{x}=(x_1,x_2,...)$ and the external driving, and $\xi_j(t)$ is a zero-mean white Gaussian noise with variance $2\gamma_j T$. The Boltzmann constant $k_B$ is set to be 1 throughout this article. We assume that the external driving is such that the system reaches a non-equilibrium steady state (NESS) on time scales much larger than the characteristic operation time. $h_j(t)$ is a perturbative force that is applied only when we want to measure the linear response function of the system, defined as $R_{\dot{x}_j}(t-\tau)\equiv \delta \langle \dot{x}_j(t)\rangle/\delta h_j(\tau)$. Then, the average heat dissipation rate through the frictional motion of $x_j$ is given by $J_j\equiv \langle [\gamma_j\dot{x}_j(t)-\xi_j(t)]\circ \dot{x}_j(t)\rangle_{ss}$, where $\circ $ denotes the Stratonovich integral~\cite{sekimoto1998langevin} and $\langle \cdot\rangle_{ss}$ denotes the steady-state ensemble average.
\begin{equation}
J_j=\gamma_j \left \{ \langle \dot{x}_j \rangle_{ss}^2+ \int_{-\infty}^\infty \frac{d\omega}{2\pi} [\tilde{C}_{\dot{x}_j}(\omega)-2T\tilde{R}_{\dot{x}_j}'(\omega)] \right \}.
\label{eq:HS}
\end{equation}
Here, the prime denotes the real part, $\tilde{R}_{\dot{x}_j}(\omega)$ is the Fourier transform of the response function $R_{\dot{x}_j}(t-\tau)$, and $\tilde{C}_{\dot{x}_j}(\omega)$ is the Fourier transform of the correlation function $C_{\dot{x}_j}(t-\tau)\equiv \langle [\dot{x}_j(t)-\langle \dot{x}_j\rangle_{ss}][\dot{x}_j(\tau)-\langle \dot{x}_j\rangle_{ss}]\rangle_{ss}$. The Fourier transform of a general function $g(t)$ is defined as $\tilde{g}(\omega)\equiv \int_{-\infty}^\infty g(t)\exp(i\omega t)dt$, with $i$ being the imaginary unit. In the special case of equilibrium, $\tilde{C}_{\dot{x}_j}(\omega)=2T\tilde{R}_{\dot{x}_j}'(\omega)$ due to the FRR, and the mean drift $\langle \dot{x}_j\rangle_{ss}$ and the heat dissipation rate $J_j$ also vanish. In the steady state, the total entropy production rate $\sigma$ of the system is related to the heat dissipation rates through
\begin{equation}
\sigma=\frac{1}{T}\sum_j J_j.
\end{equation}
This is the basis for estimating the entropy production rate by analyzing the FRR violation for each channel $x_j$.
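As a simple illustration of Eq.~(\ref{eq:HS}), consider a freely driven overdamped particle with a constant force $F_j=f$. For this linear model the velocity response is exactly $\tilde{R}'_{\dot{x}}(\omega)=1/\gamma$, the violation integral vanishes, and $J=\gamma\langle\dot{x}\rangle_{ss}^2=f^2/\gamma$. The following minimal sketch (with illustrative parameter values; not part of the original text) estimates both sides of the equality from a simulated trajectory:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
gamma, T, f = 1.0, 1.0, 2.0
dt, n = 1e-2, 2**20
xi = rng.normal(0.0, np.sqrt(2 * gamma * T / dt), n)  # discretized noise
v = (f + xi) / gamma                                  # velocity samples
J_direct = f * v.mean()                               # <F o xdot>_ss

dv = v - v.mean()
C = np.abs(np.fft.rfft(dv))**2 * dt / n               # spectrum of v
viol = np.sum(C - 2 * T / gamma) * 2 / (n * dt)       # int (C-2TR') dw/2pi
J_HS = gamma * (v.mean()**2 + viol)
print(J_direct, J_HS, f**2 / gamma)                   # all close to 4.0
\end{verbatim}
Here the violation term is statistically zero; in general driven systems it is this term that picks up the additional non-equilibrium contribution.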
\section{Generalizing Harada-Sasa equality}
\label{sect:generalization_HS}
So far, the condition under which the generalized Harada-Sasa equality holds in Markov jumping systems is still not clear. Here, we present a systematic derivation of the generalized Harada-Sasa equality Eq.~(\ref{eq:GHSE-2}) and Eq.~(\ref{eq:GHS}), and identify their range of applicability.
\subsection{General equality concerning FRR violation}
Consider a general Markov process. The transition from state $n$ to state $m$ happens with rate $w_n^m$. We assume that if $w_n^m>0$, then the reverse rate $w_m^n>0$. The self-transition is prohibited, i.e., $w_n^n=0$. The probability $P_n(t)$ at state $n$ and time $t$ evolves according to the following master equation
\begin{equation}
\frac{d}{dt}P_n(t)=\sum_m M_{nm}P_m(t),
\label{eq:govMatrix}
\end{equation}
where $M$ is assumed to be an irreducible transition rate matrix determined by $M_{nm}=w_m^n-\delta_{nm}\sum_k w_n^k$. We consider that an external perturbation $h$ modifies the transition rate in the following way
\begin{equation}
\tilde{w}_m^n=w_m^n\exp\left[h\frac{\mathcal{Q}_n-\mathcal{Q}_m}{2T}\right],
\label{eq:perturbation}
\end{equation}
which is a generalization of the way Langevin system is perturbed~\cite{diezemann2005fluctuation,maes2009response_inter}. Here, $\mathcal{Q}_m$ is a conjugate variable to perturbation $h$.
The linear response of an arbitrary observable $A$ for this Markov system is defined as $R_{A}(t-\tau)\equiv \delta \langle A(t)\rangle/\delta h(\tau)$. In the last decade, much effort has been devoted to studying the relation between linear response and fluctuations in non-equilibrium steady states. Using a path-integral approach, Baiesi \emph{et al.} derived the following relation~\cite{baiesi2009fluctuations}:
\begin{equation}
R_{A}(t-\tau)=-\frac{\beta}{2}\langle \bar{v}(\tau) A(t)\rangle_{ss}+\frac{\beta}{2} \langle \dot{Q}(\tau) A(t)\rangle_{ss}.
\label{eq:response-NESS}
\end{equation}
Here, along a stochastic trajectory $n_t$, $Q(t)\equiv \mathcal{Q}_{n_t}$ and $\dot{Q}(t)$ is the corresponding instantaneous change rate of $Q(t)$. We define
\begin{equation}
\bar{\nu}_n\equiv \sum_{n'} \omega_n^{n'}(\mathcal{Q}_{n'}-\mathcal{Q}_n),
\label{eq:nu}
\end{equation}
which measures the average change rate of $Q(t)$ conditioned at the initial state $n$. Then, $\bar{v}(\tau)\equiv \bar{\nu}_{n_\tau}$. For equilibrium systems,
$\langle \dot{Q}(\tau) A(t)\rangle_{eq}=-\langle \bar{v}(\tau)A(t)\rangle_{eq}$ when $t>\tau$ and $\langle \dot{Q}(\tau) A(t)\rangle_{eq}=\langle \bar{v}(\tau) A(t)\rangle_{eq}$ when $t<\tau$. These relations reduce Eq.~(\ref{eq:response-NESS}) to FRR in equilibrium.
Now, we focus on a specific application of Eq.~(\ref{eq:response-NESS}) by choosing the observable $A(t)$ the same as $\dot{Q}(t)$ and setting $t=\tau$ to consider the immediate response $ R_{\dot{Q}}(0)$. After a little rearrangement, we derive
\begin{equation}
\langle \bar{v}(t) \dot{Q}(t)\rangle_{ss}= \langle \dot{Q}\rangle_{ss}^2+\int_{-\infty}^\infty [\tilde{C}_{\dot{Q}}(\omega)-2T \tilde{R}'_{\dot{Q}}(\omega)] \frac{d\omega}{2\pi},
\label{eq:GHSE-2}
\end{equation}
where the auto-correlation function $C_{\dot{Q}}(t-\tau)\equiv \langle (\dot{Q}(t)-\langle \dot{Q}\rangle_{ss})(\dot{Q}(\tau)-\langle \dot{Q}\rangle_{ss})\rangle_{ss}$. Assuming that the system jumps from state $n_{\tau_j^-}$ to $n_{\tau_j^+}$ at the transition time $\tau_j$, $\dot{Q}(t)= \sum_j \delta(t-\tau_j)[\mathcal{Q}_{n_{\tau_j^+}}-\mathcal{Q}_{n_{\tau_j^-}}]$, which takes non-zero values only at the transition times $\tau_j$. However, the observable $\bar{v}(t)\equiv \bar{\nu}_{n_t}$ is not well-defined at the transition times. This makes the evaluation of the correlation $\langle \bar{v}(t) \dot{Q}(t)\rangle$ nontrivial. Here, we define
\begin{equation}
\bar{v}(t)\equiv \frac{1}{2}[\bar{\nu}_{n_{t^+}}+\bar{\nu}_{n_{t^-}}],
\label{eq:bar_v}
\end{equation}
which takes the medium value at the transition. To evaluate $\langle \bar{v}(t) \dot{Q}(t)\rangle$, we only need to consider the transition events. For an ensemble of transition from state $n$ to $m$, $\bar{v}(t)$ gives $[\bar{\nu}_n+\bar{\nu}_m]/2$, while $\dot{Q}(t)$ gives $P_nw_n^m(\mathcal{Q}_n-\mathcal{Q}_m)$, with $P_nw_n^m$ the average rate for such transition to occur. Then, summing over all possible transitions, we derive
\begin{equation}
\langle \bar{v}(t) \dot{Q}(t) \rangle_{ss} = \frac{1}{4} \sum_{n,m} (\bar{\nu}_n+\bar{\nu}_m)[P_m\omega_m^n-P_n\omega_n^m](\mathcal{Q}_n-\mathcal{Q}_m) ,
\label{eq:evaluation}
\end{equation}
where we have symmetrized the result, which gives rise to an additional factor $1/2$. Here, $\langle \bar{v}(t) \dot{Q}(t) \rangle_{ss}$ is proportional to the net flux $P_m\omega_m^n-P_n\omega_n^m$, which vanishes at equilibrium, in agreement with our expectation. The derivation of Eq.~(\ref{eq:GHSE-2}) and Eq.~(\ref{eq:evaluation}) is an original contribution of this work. Note that while the rhs of Eq.~(\ref{eq:GHSE-2}) is very similar to that of the Harada-Sasa equality, the lhs of Eq.~(\ref{eq:GHSE-2}) cannot in general be interpreted as the heat dissipation associated with the degree of freedom $Q(t)$.
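Equation~(\ref{eq:evaluation}) is straightforward to test numerically. The following minimal sketch (a biased three-state ring with illustrative rates, not taken from the text) compares the ensemble expression with a direct Gillespie estimate that applies the medium value rule Eq.~(\ref{eq:bar_v}) at every jump:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 3
w = np.zeros((N, N))
for n in range(N):
    w[n, (n + 1) % N] = 2.0     # forward rates (biased ring)
    w[n, (n - 1) % N] = 1.0     # backward rates
Q = np.array([0.0, 1.0, 2.0])   # conjugate observable Q_n (state function)
nu = np.array([sum(w[n, m] * (Q[m] - Q[n]) for m in range(N))
               for n in range(N)])                        # eq:nu

M = w.T - np.diag(w.sum(axis=1))                          # master equation
evals, evecs = np.linalg.eig(M)
P = np.real(evecs[:, np.argmin(np.abs(evals))]); P /= P.sum()

lhs = 0.25 * sum((nu[n] + nu[m]) * (P[m] * w[m, n] - P[n] * w[n, m])
                 * (Q[n] - Q[m])
                 for n in range(N) for m in range(N) if w[n, m] > 0)

state, T_tot, acc = 0, 0.0, 0.0                           # Gillespie run
for _ in range(200000):
    rates = w[state]; R = rates.sum()
    T_tot += rng.exponential(1.0 / R)
    new = rng.choice(N, p=rates / R)
    acc += 0.5 * (nu[state] + nu[new]) * (Q[new] - Q[state])
    state = new
print(lhs, acc / T_tot)
\end{verbatim}
The two printed numbers agree within sampling error.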
\subsection{Connection FRR violation to heat dissipation rate}
Here, we justify our medium value interpretation of Eq.~(\ref{eq:bar_v}) by referring to the Langevin case.
Consider the observable $\dot{x}_j(t)$ for the Langevin equation~(\ref{eq:general-Langevin}). Here, for simplicity we assume that the force does not have explicit time-dependence, and the NESS is achieved by breaking detailed balance in other ways. The average change rate for $x_j$ at the position $\vec{x}$ is given by $\bar{\nu}_{\vec{x}}=F_j(\vec{x})/\gamma_j$. For a transition from $\vec{x}$ to $\vec{x}'=\vec{x}+2\delta \vec{x}$, our medium value interpretation implies that
\begin{equation}
\bar{v}=\frac{F_j(\vec{x})+F_j(\vec{x}')}{2\gamma_j }= \frac{1}{\gamma_j}F_j\left(\frac{\vec{x}+\vec{x}'}{2}\right)+O(\delta \vec{x}^2).
\end{equation}
The second line is obtained by Talyor expansion. For a trajectory with update time $\delta t$, $\delta \vec{x}^2\sim \delta t$ and $\dot{x}_j\sim \delta t^{-1/2}$. If we use the temporal average to approximate the ensemble average, we have
\begin{equation}
\langle \bar{v}(t) \dot{x}_j(t)\rangle_{ss}=\frac{1}{\gamma_j}\langle F_j(\vec{x}(t))\circ \dot{x}_j(t)\rangle_{ss}+O(\sqrt{\delta t}).
\end{equation}
Therefore, our medium value interpretation recovers the Stratonovich interpretation in the limit of $\delta t\to 0$, and Eq.~(\ref{eq:GHSE-2}) is reduced to the Harada-Sasa equality Eq.~(\ref{eq:HS}).
\begin{figure}
\includegraphics[width=6cm]{multi_hopping}
\caption{Illustration of a multi-dimensional hopping process. The black dots represent the remaining blocks. }
\label{fig:multi_hopping}
\end{figure}
This result can be easily generalized to a multi-dimensional hopping process, provided that the network admits a natural decomposition into different directions. Let us take FIG.~\ref{fig:multi_hopping} as an example, where the transitions can be decomposed into the inter-block direction, indicated by blue arrows, and the intra-block direction, labeled in red. The three-state block represents a more complicated subnetwork that can itself be decomposed into different directions.
Here, we first consider how to capture the dissipation induced by the blue transitions. The key is to choose a proper observable $\dot{Q}(t)$ that \emph{counts} the blue transitions with a weight $\bar{v}(t)$ related to the dissipation of the corresponding transition. To do so, we set the conjugate variable $\mathcal{Q}$ to be uniform within each block and make it change value by 1 when jumping to a neighboring block. Let $n^*$ denote a label for the states inside a block. For the state $(p, n^*)$, which is labeled in red in FIG.~\ref{fig:multi_hopping}, the average change rate of $\mathcal{Q}$ becomes
\begin{equation}
\bar{\nu}_{p,n^*}=w_p^{p+1}(n^*)-w_p^{p-1}(n^*)
\end{equation}
according to Eq.~(\ref{eq:nu}), which gives the inherent property of this state. Similar to the above discussion of Langevin processes, $\bar{\nu}_{p,n^*}$ can be related to some kind of force, or dissipation per jump. We denote
\begin{equation}
\Delta S(n^*,p)\equiv \ln w_p^{p+1}(n^*)/w_{p+1}^{p}(n^*)
\label{eq:Delta-S}
\end{equation}
as the entropy produced in the medium during the transition from state $(n^*,p)$ to $(n^*,p+1)$. Then $T\Delta S(n^*,p)$ gives the heat dissipation for this jump. Now, if we assume that all the blue transitions in FIG.~\ref{fig:multi_hopping} have transition rates of the form
\begin{subequations}\label{eq:interblock-rates}
\begin{eqnarray}
w_p^{p+1}(n^*)&=&\frac{1}{\tau_Q}\exp\Big((1-\theta) \Delta S(n^*,p) \Big)\\
w_{p+1}^{p}(n^*)&=&\frac{1}{\tau_Q}\exp\Big( - \theta \Delta S(n^*,p)\Big),
\end{eqnarray}
\end{subequations}
where $\tau_Q$ is a constant number and $\theta$ is a load sharing factor. Then,
\begin{equation}
\bar{\nu}_{p,n^*}=\frac{1}{\tau_Q}\left(\Delta S(p)+\theta [\Delta S(p-1)-\Delta S(p)]+O(\Delta S^2)\right),
\end{equation}
where we have suppressed for $\Delta S$ the dependence on state $(n^*,p)$ for simplicity. Assuming that $|\Delta S|\ll 1$ and that $\Delta S$ varies slowly along $p$, i.e., $[\Delta S(p-1)-\Delta S(p)]\ll 1$, we now successfully connect the average change rate of $\mathcal{Q}$ with the dissipation per jump:
\begin{equation}
\bar{\nu}_{p,n^*}\approx \frac{1}{\tau_Q} \Delta S(p,n^*).
\end{equation}
Combined with Eq.~(\ref{eq:evaluation}) and Eq.~(\ref{eq:Delta-S}), the integral of the FRR violation for this observable is given by
\begin{equation}
\langle \bar{v}(t) \dot{Q}(t)\rangle_{ss} \approx \frac{1}{\tau_Q}\sum_{n^*,p}(P_{p}^{ss}w_p^{p+1}-P_{p+1}^{ss}w_{p+1}^{p})\ln \frac{w_p^{p+1}}{w_{p+1}^{p}}.
\label{eq:GHS}
\end{equation}
Therefore, combined with Eq.~(\ref{eq:GHSE-2}), this equation implies that the FRR violation of the observable $\dot{Q}$ captures the dissipation rate due to the inter-block transitions, as indicated by the blue arrows. This equation, along with Eq.~(\ref{eq:GHSE-2}), constitutes our generalized Harada-Sasa equality. Here, $T\tau_Q$ plays the role of the friction coefficient $\gamma_Q$.
Now, we make some comments related to Eq.~(\ref{eq:GHS}).
(a) Key assumptions include that all the inter-block transitions share the same timescale $\tau_Q$ specified by Eq.~(\ref{eq:interblock-rates}), that the dissipation per jump is relatively small compared with the thermal energy $T$, and that the dissipation changes slowly for neighboring jumps between blocks. However, the load sharing factor $\theta$ is not required to be 1/2, which was assumed previously by Lippiello \emph{et al.}~\cite{Lippiello2014fluctuation}.
(b) The observable $Q$ is chosen to be a linear function along the block hopping direction, such that $\dot{Q}(t)$ counts the transitions. Even if the actual observable in the experiment is not of the linear form shown in FIG.~\ref{fig:multi_hopping}, we can map the observed trajectory $\dot{Q}_{old}$ to a new one $\dot{Q}_{new}$ associated with a properly designed observable at the stage of data analysis.
(c) To access the dissipation rates due to the inter-block transitions, we have made no assumptions about the transitions inside the blocks. However, in order to access the total dissipation rate, we need to devise other observables that count the transitions inside the blocks, and these transitions should satisfy similar constraints.
(d) In certain cases, although $\Delta S$ is not always small, the probability flux becomes dominant only around the transitions where $\Delta S$ is small. Therefore, Eq.~(\ref{eq:GHS}) may also be valid, as illustrated later by our example in FIG.~\ref{fig:HS-adaptation-2}.
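As a concrete numerical illustration of Eq.~(\ref{eq:GHS}), the following minimal Python sketch (our own construction; the ring size, bias, and rates are arbitrary choices) builds a ring of $K$ blocks with inter-block rates of the form Eq.~(\ref{eq:interblock-rates}) and compares the two sides of Eq.~(\ref{eq:GHS}), where $\langle \bar{v}(t)\dot{Q}(t)\rangle_{ss}$ is evaluated with the medium-value rule, i.e., the weight attached to a jump is taken to be the average of $\bar{\nu}$ before and after the jump (our reading of the prescription above).
\begin{verbatim}
# Toy check of Eq. (eq:GHS): a biased ring of K blocks with rates of the
# form Eq. (eq:interblock-rates); all numbers are illustrative choices.
import numpy as np

K, tau_Q, theta = 20, 1.0, 0.3
dS = 0.05 + 0.02 * np.sin(2 * np.pi * np.arange(K) / K)  # small, slowly varying
w_fwd = np.exp((1 - theta) * dS) / tau_Q                 # w_p^{p+1}
w_bwd = np.exp(-theta * dS) / tau_Q                      # w_{p+1}^{p}

# Rate matrix M[n, m] = rate m -> n; stationary distribution from its kernel.
M = np.zeros((K, K))
for p in range(K):
    M[(p + 1) % K, p] += w_fwd[p]
    M[p, (p + 1) % K] += w_bwd[p]
M -= np.diag(M.sum(axis=0))
vals, vecs = np.linalg.eig(M)
Pss = np.real(vecs[:, np.argmin(np.abs(vals))]); Pss /= Pss.sum()

nu = w_fwd - np.roll(w_bwd, 1)                 # nu_p = w_p^{p+1} - w_p^{p-1}
flux = Pss * w_fwd - np.roll(Pss, -1) * w_bwd  # net flux across bond p -> p+1
lhs = np.sum(flux * (nu + np.roll(nu, -1)) / 2)      # <v Qdot>_ss, medium-value rule
rhs = np.sum(flux * np.log(w_fwd / w_bwd)) / tau_Q   # rhs of Eq. (eq:GHS)
print(lhs, rhs)                                # agree up to O(dS^3) corrections
\end{verbatim}
Shrinking the amplitude of \texttt{dS} makes the two printed numbers approach each other, in line with the assumptions stated in comment (a).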
\section{Correlation and response in a general Markov system}
\label{sect:CR}
\subsection{Setup}
Here, assuming a general Markov process, we derive the velocity correlation spectrum Eq.~(\ref{eq:Cvelo_fre}) and the response spectrum Eq.~(\ref{eq:Rvelo_fre}) for a general observable, as well as its FRR violation spectrum Eq.~(\ref{eq:velo-FRR-vio}). Our strategy is to project these spectra onto the eigenspace of the evolution operator.
Consider a general Markov process similar to the one introduced above, except that it has only a finite number of states, say $N$ states. The $j$-th right and left eigenmodes, denoted as $x_j(n)$ and $y_j(n)$ respectively, satisfy the eigenvalue equations
\begin{subequations} \label{eq:xy}
\begin{eqnarray}
\sum_m M_{nm}x_j(m)&=&-\lambda_j x_j(n)\\
\sum_m y_j(m)M_{mn}&=&-\lambda_j y_j(n),
\end{eqnarray}
\end{subequations}
where the minus sign is introduced to have a positive ``eigenvalue'' $\lambda_j$~\cite{van1992stochastic}. These eigenvalues are arranged in ascending order of their real part, i.e., $\text{Re}(\lambda_1)\le \text{Re}(\lambda_2)\le \cdots$. This system reaches a unique stationary state associated with $\lambda_1=0$, where $y_1=1$ and $x_1(m)=P_{m}^{ss}$ due to probability conservation. With the proper normalization $\sum_m P_m^{ss}=1$, the eigenmodes satisfy the orthogonality relations $\sum_m x_j(m)y_{j'}(m)=\delta_{jj'}$ and the completeness relations $\sum_j x_j(n)y_j(m)=\delta_{nm}$.
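For readers who wish to reproduce the spectra derived below, the following minimal Python sketch (our own construction, with illustrative random rates and an assumed non-degenerate spectrum) extracts the eigenmodes of a small rate matrix in the convention of Eq.~(\ref{eq:xy}) and verifies the normalization, orthogonality, and completeness relations just stated.
\begin{verbatim}
# Eigenmodes of a random 4-state rate matrix in the convention of
# Eq. (eq:xy); rates are illustrative.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
N = 4
W = rng.uniform(0.5, 2.0, (N, N))        # W[n, m] = w_m^n, rate m -> n
np.fill_diagonal(W, 0.0)
M = W - np.diag(W.sum(axis=0))           # columns sum to zero

def modes(M):
    lam, VL, VR = eig(M, left=True, right=True)
    lam, X, Y = -lam, VR.astype(complex), VL.conj()
    o = np.argsort(lam.real)             # lambda_1 = 0 comes first
    lam, X, Y = lam[o], X[:, o], Y[:, o]
    Y[:, 0] = Y[:, 0] / Y[0, 0]          # y_1 = 1
    X[:, 0] = X[:, 0] / X[:, 0].sum()    # x_1 = P^ss, normalized to 1
    for j in range(1, len(lam)):         # sum_m x_j(m) y_j(m) = 1
        X[:, j] = X[:, j] / (Y[:, j] @ X[:, j])
    return lam, X, Y

lam, X, Y = modes(M)
assert np.allclose(M @ X, X * (-lam))    # right eigenvalue equation
assert np.allclose(Y.T @ X, np.eye(N))   # orthogonality relations
assert np.allclose(X @ Y.T, np.eye(N))   # completeness relations
\end{verbatim}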
\subsection{Correlation spectrum}
Consider first the autocorrelation function $C_Q(t-\tau)$ for the observable $Q(t)\equiv \mathcal{Q}_{n_t}$, which can be reformulated as
\[
C_{Q}(t-\tau)= \sum_{n,n'} \mathcal{Q}_{n}\mathcal{Q}_{n'} P(t-\tau;n,n')P_{n'}^{ss} -\langle Q\rangle_{ss}^2,
\]
where $P(t;n,n')$ is the probability for a system starting in state $n'$ to be found in state $n$ after a time $t$. An expansion in the eigenspace gives $P(t;n,n')=\sum_{j} y_j(n')\, e^{-\lambda_j t}\, x_j(n)$, which satisfies the master equation Eq.~(\ref{eq:govMatrix}) and the initial condition $P(0;n,n')=\delta_{nn'}$. Introducing the weighted averages of $\mathcal{Q}$ in the $j$-th eigenmode, i.e., $\alpha_j\equiv\sum_n \mathcal{Q}_nx_j(n)$ and $\beta_j \equiv \sum_{n}\mathcal{Q}_{n} y_j(n)P_n^{ss}$, the correlation function is expanded in the eigenspace as
\begin{equation}
C_{Q}(t-\tau) =\sum_{j=2}^N \alpha_j \beta_j e^{-\lambda_j |t-\tau|}.
\label{eq:Corr1}
\end{equation}
The contribution of the ground state ($j=1$) cancels the squared mean, i.e., $\langle Q\rangle^2_{ss}$. Note that stationarity leads to $C_Q(t-\tau)=C_Q(\tau-t)$, which is reflected in Eq.~(\ref{eq:Corr1}) through the absolute value $|t-\tau|$.
The correlation function $C_{\dot{Q}}(t-\tau)$ for the velocity observable $\dot{Q}(t)$ can be obtained by the transformation
\[
C_{\dot{Q}}(t-\tau)=\frac{\partial^2 C_{Q}(t-\tau)}{\partial \tau\partial t}.
\]
In the Fourier space, we have
\begin{equation}
\tilde{C}_{\dot{Q}}(\omega)=\sum_{j=2}^N 2\alpha_j\beta_j\lambda_j \Big[1-\frac{1}{1+(\omega/\lambda_j)^2}\Big] ,
\label{eq:Cvelo_fre}
\end{equation}
which is generally valid for a system in NESS.
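The expansion Eq.~(\ref{eq:Corr1}) and the spectrum Eq.~(\ref{eq:Cvelo_fre}) can be checked directly against the matrix exponential; a sketch follows (the setup and the \texttt{modes} helper are repeated from the previous snippet so that it runs on its own, and the observable $\mathcal{Q}_n=n$ is an arbitrary choice).
\begin{verbatim}
# Check Eq. (eq:Corr1) against direct propagation with expm, then evaluate
# the velocity spectrum Eq. (eq:Cvelo_fre); setup as in the previous sketch.
import numpy as np
from scipy.linalg import eig, expm

rng = np.random.default_rng(0)
N = 4
W = rng.uniform(0.5, 2.0, (N, N)); np.fill_diagonal(W, 0.0)
M = W - np.diag(W.sum(axis=0))

def modes(M):
    lam, VL, VR = eig(M, left=True, right=True)
    lam, X, Y = -lam, VR.astype(complex), VL.conj()
    o = np.argsort(lam.real)
    lam, X, Y = lam[o], X[:, o], Y[:, o]
    Y[:, 0] = Y[:, 0] / Y[0, 0]
    X[:, 0] = X[:, 0] / X[:, 0].sum()
    for j in range(1, len(lam)):
        X[:, j] = X[:, j] / (Y[:, j] @ X[:, j])
    return lam, X, Y

lam, X, Y = modes(M)
Pss = X[:, 0].real
Q = np.arange(N, dtype=float)            # illustrative observable Q_n = n
alpha = Q @ X                            # alpha_j = sum_n Q_n x_j(n)
beta = (Q * Pss) @ Y                     # beta_j = sum_n Q_n y_j(n) P_n^ss

t = 0.7                                  # Eq. (eq:Corr1) at one time point
C_eig = np.sum(alpha[1:] * beta[1:] * np.exp(-lam[1:] * t))
C_dir = Q @ expm(M * t) @ (Q * Pss) - (Q @ Pss) ** 2
assert np.allclose(C_eig, C_dir)

def C_vel(w):                            # Eq. (eq:Cvelo_fre); imaginary parts cancel
    r = w / lam[1:]
    return np.sum(2 * alpha[1:] * beta[1:] * lam[1:] * r**2 / (1 + r**2)).real

print(C_vel(1.0))
\end{verbatim}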
\subsection{Response spectrum}
The response spectrum can be obtained by studying the response of the system to a periodic perturbation. Consider $h=h_0 \exp(i \omega t )$, with $h_0$ a small amplitude and $i$ the imaginary unit. To first order in $h_0$,
the transition rate matrix $\tilde{M}$ is modified as
\[\tilde{M}=M+h_0 \partial_h \tilde{M} \exp(i\omega t )+O(h_0^2).\]
After a sufficiently long time, the system reaches a distribution with a periodic temporal component that has a time-independent amplitude:
\[\tilde{P}_m=P_m^{ss}+h_0 P^{(1)}_m\exp(i\omega t )+O(h_0^2).\]
Here, using the stationarity condition for the zeroth order term, i.e., $\sum_m M_{nm}P_m^{ss}=0$, the new master equation $d\tilde{P}_m/dt=\sum_n \tilde{M}_{mn}\tilde{P}_n$ determines the first order correction to the distribution
\[P^{(1)}=-\frac{1}{M-i\omega } \partial_h M P^{ss},\]
written in matrix form. By introducing
\begin{equation}
B_n\equiv\sum_{m}\partial_h \tilde{M}_{nm} P^{ss}_m,
\label{eq:general-Bn}
\end{equation}
and the weighted average of $B$ in the $j$-th eigenmode, i.e., $\phi_j \equiv \sum_{n} B_{n} y_j(n)$, the linear order variation is expressed as
\[
P^{(1)}_n=\sum_{j=2}^N \frac{1}{\lambda_j+i\omega} \phi_j x_j(n).
\]
The first mode disappears since $\phi_1=\sum_m B_m y_1(m)=\sum_m B_m=0$, which follows from probability conservation, $\sum_n \partial_h \tilde{M}_{nm}=0$: the ground state does not contain dynamical information.
Finally, for a state-dependent observable $Q(t)\equiv\mathcal{Q}_{n_t}$, its response spectrum is given by
\begin{equation}
\tilde{R}_Q(\omega)=\sum_n \mathcal{Q}_nP^{(1)}_n=\sum_{j=2}^N \frac{\alpha_j\phi_j}{\lambda_j+i\omega}.
\label{eq:Rq-general-fre}
\end{equation}
By using the transformation $ R_{\dot{Q}}(t)=dR_{Q}/dt$ or $\tilde{R}_{\dot{Q}}(\omega)=i\omega \tilde{R}_{Q}(\omega)$, we obtain the desired response spectrum $\tilde{R}_{\dot{Q}}(\omega)$ for the velocity observable $\dot{Q}(t)$, i.e.,
\begin{equation}
\tilde{R}_{\dot{Q}}(\omega)=\sum_{j=2}^N \alpha_j\phi_j\Big[1- \frac{1- i(\omega/\lambda_j) }{1+(\omega/\lambda_j)^2}\Big].
\label{eq:Rvelo_fre}
\end{equation}
For the perturbation form in Eq.~(\ref{eq:perturbation}), we have
\begin{equation}
B_n = \sum_m[ w_m^nP_m^{ss}+w_n^mP_n^{ss} ](\mathcal{Q}_n-\mathcal{Q}_m)/2T,
\label{eq:Bn-main}
\end{equation}
which gives the flux fluctuation of the conjugate variable $\mathcal{Q}$ at state $n$.
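As a consistency check (our own sketch, with the same illustrative setup as before and $T=1$), the eigen-expansion Eq.~(\ref{eq:Rq-general-fre}) can be compared with the direct resolvent expression $P^{(1)}=-(M-i\omega)^{-1}B$, with $B_n$ built from Eq.~(\ref{eq:Bn-main}):
\begin{verbatim}
# Check Eq. (eq:Rq-general-fre) against the resolvent P1 = -(M - i w)^{-1} B,
# with B from Eq. (eq:Bn-main) and T = 1; setup as in the previous sketches.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
N, T = 4, 1.0
W = rng.uniform(0.5, 2.0, (N, N)); np.fill_diagonal(W, 0.0)
M = W - np.diag(W.sum(axis=0))

def modes(M):
    lam, VL, VR = eig(M, left=True, right=True)
    lam, X, Y = -lam, VR.astype(complex), VL.conj()
    o = np.argsort(lam.real)
    lam, X, Y = lam[o], X[:, o], Y[:, o]
    Y[:, 0] = Y[:, 0] / Y[0, 0]
    X[:, 0] = X[:, 0] / X[:, 0].sum()
    for j in range(1, len(lam)):
        X[:, j] = X[:, j] / (Y[:, j] @ X[:, j])
    return lam, X, Y

lam, X, Y = modes(M)
Pss = X[:, 0].real
Q = np.arange(N, dtype=float)
B = np.array([sum((W[n, m] * Pss[m] + W[m, n] * Pss[n]) * (Q[n] - Q[m])
                  for m in range(N)) for n in range(N)]) / (2 * T)
alpha, phi = Q @ X, B @ Y
assert abs(phi[0]) < 1e-10               # ground-state projection vanishes

w = 2.0
R_eig = np.sum(alpha[1:] * phi[1:] / (lam[1:] + 1j * w))  # Eq. (eq:Rq-general-fre)
R_dir = -Q @ np.linalg.solve(M - 1j * w * np.eye(N), B)
assert np.allclose(R_eig, R_dir)
# velocity response: R_vel = 1j * w * R_eig, cf. Eq. (eq:Rvelo_fre)
\end{verbatim}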
\subsection{Useful relations}
We find that the coefficients always satisfy the following relation
\begin{equation}\label{eq:sumrule-general}
\sum_{j=2}^N \alpha_j (\lambda_j \beta_j-T\phi_j)=0,
\end{equation}
which we call the \emph{sum rule}. See Appendix~\ref{app:sum-rule} for the derivation. Combined with Eq.~(\ref{eq:Rvelo_fre}) and Eq.~(\ref{eq:Cvelo_fre}), this relation leads to the FRR in the high frequency domain ($\omega\gg\lambda_N$), i.e.,
\begin{equation}
\tilde{C}_{\dot{Q}}(\omega)= 2T\tilde{R}'_{\dot{Q}}(\omega).
\end{equation}
This is consistent with our intuition that when the frequency is much higher than the characteristic rates of the system, the correlation and response spectra reflect only the properties of the thermal bath, which is in equilibrium.
Combining Eq.~(\ref{eq:Cvelo_fre}) and Eq.~(\ref{eq:Rvelo_fre}), the FRR violation spectrum for a velocity observable $\dot{Q}$ can be generally written as
\begin{equation}
\tilde{C}_{\dot{Q}}(\omega)-2T\tilde{R}'_{\dot{Q}}(\omega)=2\sum_{j=2}^N \alpha_j \frac{T\phi_j-\beta_j\lambda_j}{1+(\omega/\lambda_j)^2}.
\label{eq:velo-FRR-vio}
\end{equation}
Although the coefficients and eigenvalues involved on the rhs may be complex numbers, the summation over all the eigenmodes guarantees a real violation spectrum, as shown in Appendix~\ref{app:realPart}. In the limit $\omega\to 0$, the FRR is also valid, since both the correlation and the response become zero, as evident from Eq.~(\ref{eq:Cvelo_fre}) and Eq.~(\ref{eq:Rvelo_fre}). The integral of the FRR violation, denoted as $\Delta_Q$, is given by
\begin{eqnarray}
\Delta_Q&=&\int_{-\infty}^\infty \left (\tilde{C}_{\dot{Q}}(\omega)-2T\tilde{R}'_{\dot{Q}}(\omega)\right) \frac{d\omega}{2\pi}\nonumber\\
&=&\sum_{j=2}^N \lambda_j\alpha_j \left(T\phi_j-\beta_j\lambda_j\right).
\label{eq:Delta-Q-expansion}
\end{eqnarray}
The Harada-Sasa equality suggests that this quantity is related to the dissipation through the motion of $Q$.
We also find that the detailed balance $w_n^mP_n^{eq}=w_m^nP_m^{eq}$ is equivalent to
\begin{equation} \label{eq:detailed-eigenmode}
\lambda_j\beta_j^{eq}=T\phi_j^{eq},
\end{equation}
which ensures that the FRR is satisfied over the whole frequency domain. This relation is a general result independent of the perturbation form proposed in Eq.~(\ref{eq:perturbation}), as proved in Appendix~\ref{app:FRReigenmode}. Therefore, we may also speak of detailed balance in the eigenspace, and an eigenmode contributes to dissipation only when it violates detailed balance, according to Eq.~(\ref{eq:Delta-Q-expansion}).
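Both relations are easy to test numerically; the sketch below (our construction; $T=1$, random rates, and a detailed-balance network built from random energies $E_n$ with $w_m^n=\mathrm{e}^{(E_m-E_n)/2T}$, an assumption made purely for illustration) checks the sum rule Eq.~(\ref{eq:sumrule-general}) for a generic NESS network and the mode-wise relation Eq.~(\ref{eq:detailed-eigenmode}) for the equilibrium one.
\begin{verbatim}
# Check the sum rule Eq. (eq:sumrule-general) for generic NESS rates and the
# mode-wise relation Eq. (eq:detailed-eigenmode) for detailed-balance rates
# built from random energies (our construction; T = 1).
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
N, T = 4, 1.0
Q = np.arange(N, dtype=float)

def modes(M):
    lam, VL, VR = eig(M, left=True, right=True)
    lam, X, Y = -lam, VR.astype(complex), VL.conj()
    o = np.argsort(lam.real)
    lam, X, Y = lam[o], X[:, o], Y[:, o]
    Y[:, 0] = Y[:, 0] / Y[0, 0]
    X[:, 0] = X[:, 0] / X[:, 0].sum()
    for j in range(1, len(lam)):
        X[:, j] = X[:, j] / (Y[:, j] @ X[:, j])
    return lam, X, Y

def coeffs(W):
    M = W - np.diag(W.sum(axis=0))
    lam, X, Y = modes(M)
    Pss = X[:, 0].real
    B = np.array([sum((W[n, m] * Pss[m] + W[m, n] * Pss[n]) * (Q[n] - Q[m])
                      for m in range(N)) for n in range(N)]) / (2 * T)
    return lam, Q @ X, (Q * Pss) @ Y, B @ Y

W = rng.uniform(0.5, 2.0, (N, N)); np.fill_diagonal(W, 0.0)
lam, alpha, beta, phi = coeffs(W)                    # generic NESS network
assert abs(np.sum(alpha[1:] * (lam[1:] * beta[1:] - T * phi[1:]))) < 1e-10

E = rng.normal(size=N)                               # equilibrium network
Weq = np.exp((E[None, :] - E[:, None]) / (2 * T))    # w_m^n = exp((E_m - E_n)/2T)
np.fill_diagonal(Weq, 0.0)
lam, alpha, beta, phi = coeffs(Weq)
assert np.allclose(lam[1:] * beta[1:], T * phi[1:])  # Eq. (eq:detailed-eigenmode)
\end{verbatim}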
\section{Markov processes with time scale separation }
\label{sect:time scale}
Now, we consider Markov processes with time scale separation. We assume that the state space can be grouped into $K$ different coarse-grained subspaces, denoted as $p$ or $q$. A microscopic state $k$ ($l$) within coarse-grained state $p$ ($q$) is denoted as $p_k$ ($q_l$), which serves as an alternative to our previous state notation $n$ or $m$. We assume fast relaxation ($\sim \tau_f$) within the subspace of a coarse-grained state and slow hopping ($\sim\tau_s $) to subspaces associated with a different coarse-grained state. The competition of these two processes defines a dimensionless parameter $\epsilon\equiv\tau_f/\tau_s$. A typical example of this Markov system is illustrated in FIG.~\ref{fig:general-model}. Such an assumption implies that the transition rate matrix of the system can be decomposed as
\begin{equation}
M_{p_kq_l}=\frac{1}{\epsilon}\delta_{pq}M^{p}_{kl}+M^{(0)}_{p_kq_l},
\label{eq:master-M}
\end{equation}
where $M^{p}_{kl}$ ($\sim 1$) describes a rescaled ``internal'' Markov process within the same coarse-grained state $p$, and $M^{(0)}_{p_kq_l}$ describes jumps connecting different coarse-grained states. The transition rate from state $k$ to $l$ for $M^p$ is denoted as $w_k^l(p)$ ($\sim 1$).
The goal of this section is to obtain Eq.~(\ref{eq:velo-FRR-vio-fast}), concerning the structure of the FRR violation spectrum for a general observable in this system, and also Eq.~(\ref{eq:velo-FRR-vio-slow}) for an observable that can only resolve different coarse-grained states, which is of particular interest. Below, we start by obtaining the eigenmodes of $M$ via perturbation theory.
\subsection{Perturbation analysis for eigenmodes}
\begin{figure}
\centering
\includegraphics[width=8cm]{modelSystem2}
\caption{(a) Our system with two time scales. One of the closed cycles that breaks time-reversal symmetry is indicated, which is responsible for hidden entropy production. (b) The corresponding effective dynamics. }
\label{fig:general-model}
\end{figure}
We write the eigenmodes and the corresponding eigenvalue as Taylor series in $\epsilon$, i.e.,
\begin{eqnarray*}
x_j&=&x_j^{(0)}+\epsilon x_j^{(1)}+O(\epsilon^2)\\
y_j&=&y_j^{(0)}+\epsilon y_j^{(1)}+O(\epsilon^2)\\
\lambda_j&=&\epsilon^{-1} \lambda_j^{(-1)}+\lambda_j^{(0)}+O(\epsilon),
\end{eqnarray*}
and substitute these expansions into the eigenvalue equations Eq.~(\ref{eq:xy}). The leading order equations in $\epsilon $ are given by
\begin{subequations}\label{eq:zero-xy}
\begin{eqnarray}
\sum_{l} M^{p}_{kl}x^{(0)}_j(p_l)&=&-\lambda_j^{(-1)}x_j^{(0)}(p_k)\\
\sum_{l}y^{(0)}_j(p_l) M^{p}_{lk}&=&-\lambda_j^{(-1)}y_j^{(0)}(p_k).
\end{eqnarray}
\end{subequations}
Therefore, the eigenmodes of the intra-block transition rate matrix $\sum_p M^p$ coincide with the leading order eigenmodes of $M$. For $\lambda_j^{(-1)}\neq 0$, these eigenmodes are in general non-degenerate and decay quickly on the time scale $\tau_f$; they are thus called the fast modes.
The remaining $K$ slow modes, corresponding to $\lambda_j^{(-1)}=0$, are degenerate at this order; the lifting of this degeneracy is due to the inter-block transitions, which requires the next order of perturbation analysis, i.e.,
\begin{eqnarray*}
\sum_l M^{p}_{kl}x^{(1)}_j(p_l)+ \sum_{q_l} M^{(0)}_{p_kq_l}x^{(0)}_j(q_l)&=&-\lambda_j^{(0)}x_j^{(0)}(p_k)\\
\sum_l y^{(1)}_j(p_l)M^{p}_{lk}+ \sum_{q_l}y^{(0)}_j(q_l)M^{(0)}_{q_lp_k}&=&-\lambda_j^{(0)}y_j^{(0)}(p_k).
\end{eqnarray*}
Although the lhs (left hand side) depends on the unknown first order correction of the eigenmodes, we can eliminate these unknown terms by projecting the first equation onto the left stationary mode of $M^p$, denoted as $y_1^p=1$, and projecting the second equation onto the right stationary mode of $M^p$, denoted as $P(k|p)$, which satisfies the normalization $\sum_k P(k|p)=1$ and
\begin{equation}
\sum_k M^p_{lk} P(k|p)=0.
\end{equation}
We also introduce the following ansatz for these slow modes
\begin{equation}
x^{(0)}_j(p_k)=\wh{x}_j(p)P(k|p),\quad y^{(0)}_j(p_k)=\wh{y}_j(p),
\label{eq:zero-th-xy}
\end{equation}
which simply means that the eigenmodes are stationary under the intra-block transitions but have a modulation at the inter-block level. Then, we obtain reduced eigenvalue equations for these modulation amplitudes, i.e.,
\begin{subequations}\label{eq:coarse-grain-xy}
\begin{eqnarray}
\sum_{q} \wh{M}_{pq}\wh{x}_j(q)&=&-\lambda_j^{(0)} \wh{x}_j(p) \\
\sum_{q}\wh{y}_j(q)\wh{M}_{qp}&=&-\lambda_j^{(0)}\wh{y}_j(p).
\end{eqnarray}
\end{subequations}
Here, the emergent transition rate matrix on the coarse-grained state space is given by
\begin{equation}
\wh{M}_{pq}\equiv\sum_{k,l} M_{p_kq_l}^{(0)}P(l|q),
\label{eq:effective-M}
\end{equation}
which corresponds exactly to a projection onto the left and right stationary modes of the intra-block transition rate matrix. The projection procedure is effectively a coarse-graining over the microscopic states within the same block. Therefore, the effective matrix $\wh{M}$ removes the $K$-fold degeneracy of the slow modes and determines their leading order behavior.
Now we summarize the non-degenerate leading order terms of the eigenmodes of $M$. These eigenmodes split into two classes according to their relaxation time scales: fast modes that relax on the time scale $\sim \tau_f$ and slow ones on the time scale $\sim \tau_s$, as illustrated in FIG.~\ref{fig:general-model}. For a fast mode, the leading order term is localized within a certain coarse-grained state $p$. We denote the non-stationary eigenmodes of $M^p$ as $x_j^p$ and $y_j^p$, and the corresponding eigenvalues as $\lambda_j^p$. They satisfy the following orthogonality relations
\begin{equation}
\sum_k x_j^q(k)=0,\qquad \sum_k y^q_j(k) P(k|q)=0.
\label{eq:orthogo}
\end{equation}
The first relation can be understood from probability conservation, while the second follows from the stationarity of the ground state. A fast mode can then be expressed as ($j>K$)
\begin{subequations}\label{eq:fast-mode-corresp}
\begin{eqnarray}
\lambda_j &=&\epsilon^{-1}\lambda_j^p+O(1)\\
x_j(q_k)&=&\delta_{pq}x_j^p(k)+ O(\epsilon)\\
y_j(q_k)&=&\delta_{pq}y_j^p(k)+ O(\epsilon).
\end{eqnarray}
\end{subequations}
To express the slow modes, we introduce the eigenmodes of $\wh{M}$ as $\wh{x}_{j}$ and $\wh{y}_{j}$, with corresponding eigenvalue $\wh{\lambda}_{j}$. Then, the slow modes become $(j\le K)$
\begin{subequations}\label{eq:slow-mode-corresp}
\begin{eqnarray}
\lambda_j&=&\wh{\lambda}_j +O(\epsilon)\\
x_j(p_k)&=&\wh{x}_j(p)P(k|p)+O(\epsilon)\\
y_j(p_k)&=&\wh{y}_j(p)+O(\epsilon).
\end{eqnarray}
\end{subequations}
In particular, the stationary distribution of the original system becomes
\begin{equation}
P^{ss}_{p_k}=\wh{P}^{ss}_pP(k|p)+O(\epsilon),
\end{equation}
with $\wh{P}^{ss}_p$ the normalized stationary distribution for the coarse-grained Markov system.
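A compact numerical check of this construction is given below (a two-block toy model of our own design; all rates are arbitrary). It builds $\wh{M}$ from Eq.~(\ref{eq:effective-M}) and verifies that the slow eigenvalues of $M$ and the factorized stationary distribution agree with the coarse-grained predictions up to $O(\epsilon)$.
\begin{verbatim}
# Two-block toy check of Eq. (eq:effective-M) and the slow-mode
# correspondence Eq. (eq:slow-mode-corresp); rates are arbitrary choices.
import numpy as np

eps = 1e-3
# intra-block generators M^p (columns sum to zero) and their kernels P(k|p)
Mp = [np.array([[-1.0, 2.0], [1.0, -2.0]]), np.array([[-3.0, 1.0], [3.0, -1.0]])]
Pk = [np.array([2.0, 1.0]) / 3, np.array([1.0, 3.0]) / 4]
# inter-block jump rates (global index n = 2*p + k)
W0 = np.zeros((4, 4))
W0[2, 0], W0[3, 1] = 0.8, 0.3          # block 0 -> block 1
W0[0, 2], W0[1, 3] = 0.5, 0.6          # block 1 -> block 0
M0 = W0 - np.diag(W0.sum(axis=0))

M = M0.copy()
M[:2, :2] += Mp[0] / eps
M[2:, 2:] += Mp[1] / eps

# effective coarse-grained matrix, Eq. (eq:effective-M)
Mhat = np.zeros((2, 2))
for p in range(2):
    for q in range(2):
        if p != q:
            Mhat[p, q] = W0[2*p:2*p+2, 2*q:2*q+2].sum(axis=0) @ Pk[q]
Mhat -= np.diag(Mhat.sum(axis=0))

lam_full = np.sort(-np.linalg.eigvals(M).real)
lam_hat = np.sort(-np.linalg.eigvals(Mhat).real)
print(lam_full[:2], lam_hat)           # slow eigenvalues agree to O(eps)

vals, vecs = np.linalg.eig(M)
Pss = np.real(vecs[:, np.argmin(np.abs(vals))]); Pss /= Pss.sum()
vals_h, vecs_h = np.linalg.eig(Mhat)
Phat = np.real(vecs_h[:, np.argmin(np.abs(vals_h))]); Phat /= Phat.sum()
print(Pss - np.concatenate([Phat[0] * Pk[0], Phat[1] * Pk[1]]))  # O(eps)
\end{verbatim}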
The leading order results are sufficient for the following discussion. The first order correction is discussed in Appendix~\ref{sect:higher-correction}. We note that the first order correction is in general rather complicated. The correction to the fast modes involves only one-step transitions to all the other eigenmodes, while for the slow modes the correction also involves two-step transitions, i.e., first to a fast mode and then back to a different slow mode. The latter is generic in degenerate perturbation theory~\cite{sakurai1985modern}.
\subsection{FRR violation spectrum for fast observables}
\label{sect:FRRvio}
For a general conjugate variable $\mathcal{Q}^f_{p_k}$ that depends on the microscopic states, the corresponding velocity observable $\dot{Q}^f$ moves on the fast time scale $\sim \tau_f$, and is thus classified as a fast variable, emphasized by the superscript $f$. To obtain the properties of its violation spectrum, we study the asymptotic behavior of the corresponding projection coefficients $\alpha_j^f$, $\beta_j^f$ and $\phi_j^f$ with the help of the eigenmodes obtained in the previous section.
We first estimate their magnitudes in the time scale separation limit $\epsilon\to 0$. Since the leading order terms of $\alpha_j^f$ and $\beta_j^f$ do not vanish for such a fast observable, we roughly obtain
\begin{equation}
\alpha_j^f\sim 1 ,\quad \beta_j^f\sim 1.
\end{equation}
More refined results, up to the first order correction, can be obtained immediately by using Eq.~(\ref{eq:fast-mode-corresp}) and Eq.~(\ref{eq:slow-mode-corresp}). To obtain the magnitude of $\phi_j^f\equiv\sum_{p_k} y_j(p_k) B_{p_k}$, we first need to understand $B_{p_k}$. It can be expanded as [Eq.~(\ref{eq:Bn-main})]
\begin{equation}
B_{p_k}=\epsilon^{-1}B^p_{k}+B_{p_k}^{(1)},
\label{eq:B-split}
\end{equation}
where the leading order term $\epsilon^{-1}B^p_{k}\sim \epsilon^{-1}$ describes the fast flux fluctuation within coarse-grained state $p$, as determined by $M^p$, while $B_{p_k}^{(1)}\sim 1$ gives the slow flux fluctuation due to inter-block transitions, as determined by $M^{(0)}$. The projection of $B_{p_k}$ on the fast modes generally has a large value, i.e.,
\begin{equation}
\phi_j^f\sim \epsilon^{-1},\quad j> K.
\end{equation}
However, its projection on the slow modes is greatly suppressed,
\begin{equation}
\phi_j^f\sim 1,\quad j\le K,
\end{equation}
because of the quasi-uniformity of the slow modes within a given coarse-grained state, i.e., $y_j(p_k)\approx \wh{y}_j(p)$, and the conservation of flux fluctuation within each coarse-grained state, i.e., $\sum_k B^p_k=0$. The magnitudes of these projection coefficients are listed in FIG.~\ref{fig:asymp}.
Since each fast mode has a counterpart from a certain internal transition rate matrix $M^q$, their projection coefficients also share this connection. For the Markov process described by $M^q$, we may also introduce the projection coefficients $\alpha_j^q\equiv\sum_k x^q_j(k)\mathcal{Q}_{q_k}^f$, $\beta_j^q\equiv\sum_k y^q_j(k)\mathcal{Q}_{q_k}^f P(k|q)$ and $\phi_j^q\equiv \sum_k y^q_j(k)B(k|q)$. Here, $B(k|q)$ is the flux fluctuation for this subsystem, which is found to satisfy the relation $B^q_k=\wh{P}^{ss}_qB(k|q)+O(\epsilon)$. We find that these coefficients are connected to those of the fast modes by ($j>K$)
\begin{subequations} \label{eq:eff-org-fast-2}
\begin{eqnarray}
\alpha_j^f&=&\alpha^q_j+O(\epsilon)\\
\beta_j^f&=&\wh{P}^{ss}_q\beta^q_j+O(\epsilon)\\
\phi_j^f&=&\epsilon^{-1} \wh{P}_q^{ss}\phi^q_j+O(1).
\end{eqnarray}
\end{subequations}
The FRR violation spectrum for this fast velocity observable $\dot{Q}^f$ can be split into the contributions of the fast internal Markov processes [Eq.~(\ref{eq:velo-FRR-vio})] and a correction due to the slow transitions across different coarse-grained states, i.e.,
\begin{eqnarray}
\tilde{C}_{\dot{Q}^f}-2T\tilde{R}'_{\dot{Q}^f}&=&\frac{2}{\epsilon}\sum_{j>K} \wh{P}^{ss}_q \left[\alpha_j^q \frac{T\phi_j^q-\beta_j^q\lambda_j^q}{1+(\epsilon\omega/ \lambda_j^q)^2}\right]+V_f(\omega),\nonumber\\
\label{eq:velo-FRR-vio-fast}
\end{eqnarray}
where the summation runs first over all the non-stationary modes of $M^q$, and then over all the coarse-grained states $q$. The correction term is given by
\begin{equation*}
V_f(\omega)=2\sum_{j\ge 2} \alpha_j^f\frac{T\phi_j^f-\beta_j^f\lambda_j}{1+[\omega/\lambda_j]^2}-\frac{2}{\epsilon}\sum_{j>K} \wh{P}^{ss}_q \left[\alpha_j^q \frac{T\phi_j^q-\beta_j^q\lambda_j^q}{1+(\epsilon\omega/ \lambda_j^q)^2}\right].
\end{equation*}
The two terms on the rhs have the same divergence of order $\epsilon^{-1}$, which cancels due to the mapping relations Eq.~(\ref{eq:eff-org-fast-2}). Therefore, $ V_f(\omega)$ is of order 1 and is well-defined in the time scale separation limit $\epsilon\to 0$.
Generally, $V_f$ vanishes in the high frequency region $\omega\gg\lambda_N$, as expected from the sum rule. It also vanishes in the low frequency region for our setup with a finite state space. In the intermediate frequency region $\tau_s^{-1}\ll \omega\ll \tau_f^{-1}$, the contribution from the slow modes is negligible and
\begin{equation}
V_f\simeq 2\sum_{j>K} \alpha_j^f \left[T\phi_j^f-\beta_j^f\lambda_j\right]\xrightarrow{\epsilon\to 0} \text{const}.
\end{equation}
The finite limit is reached due to the mapping relations Eq.~(\ref{eq:eff-org-fast-2}).
The FRR violation integral $\Delta_{Q^f}$ also splits into two terms
\begin{equation}
\Delta_{Q^f}=\frac{1}{\epsilon} \sum_q \wh{P}^{ss}_q \Delta_{Q^f}^q+ \Delta_{Q^f}^{(1)},
\end{equation}
where $\Delta_{Q^f}^q$ is contributed by the FRR violation integral within coarse-grained state $q$, associated with $M^q$, and $\Delta_{Q^f}^{(1)}$ is the correction term contributed by integrating over $V_f(\omega)$. Both $\Delta_{Q^f}^q$ and $\Delta_{Q^f}^{(1)}$ scale as $\epsilon^{-1}$, since the violation plateau spans up to the frequency $1/\tau_f\sim \epsilon^{-1}$.
Therefore, the leading order term $\frac{1}{\epsilon} \sum_q \wh{P}^{ss}_q \Delta_{Q^f}^q\sim \epsilon^{-2}$ dominates, and $ \Delta_{Q^f}^{(1)}$ is a negligible correction. Indeed, this leading order term implies a diverging dissipation rate of the system, which is not quite realistic. It is then natural to assume that detailed balance is satisfied within each coarse-grained state, i.e., $T\phi_j^q=\beta_j^q\lambda_j^q$, or equivalently $\Delta_{Q^f}^q=0$. The FRR violation spectrum of this fast observable is then the same as $V_f(\omega)$, as illustrated in FIG.~\ref{fig:illustration_FRRviolation}(a).
\begin{figure}
\centering
\includegraphics[width=6cm]{magnitude}
\caption{Overview of the asymptotic behavior of the key projection coefficients of the correlation and response spectra in a Markov system with time scale separation. }
\label{fig:asymp}
\end{figure}
\subsection{FRR violation spectrum for slow observables}
Consider a special conjugate variable $\mathcal{Q}_{p_k}^s=\mathcal{Q}_p^s$ that is uniform within the same coarse-grained state. It defines a velocity observable $\dot{Q}^s(t)=\frac{d}{dt}\mathcal{Q}^s_{p_t}$ which is non-zero only when a slow transition to a neighboring coarse-grained state takes place; it is thus classified as a slow observable, emphasized by the superscript $s$. Below, we study the asymptotic behavior of its FRR violation spectrum for $\epsilon\to 0$. The main result has already been announced in our previous paper~\cite{Wang2016entropy}, especially in its supplemental material. Here, we provide more details.
First, we estimate the magnitudes of the corresponding projection coefficients $\alpha_j^s$, $\beta_j^s$, and $\phi_j^s$ in the limit $\epsilon\to 0$. The projection on the slow modes gives a constant value, i.e., ($j\le K$)
\begin{equation}
\alpha_j^s\sim 1,\quad \beta_j^s\sim 1,
\end{equation}
which is not surprising given their definitions. However, the projection on the fast modes is vanishingly small, i.e., ($j>K$)
\begin{equation}
\alpha_j^s\sim \epsilon,\quad \beta_j^s\sim \epsilon.
\label{eq:fast-mode-slow-ob}
\end{equation}
This is due to the localization of the fast modes within certain coarse-grained states, and can be verified by using Eq.~(\ref{eq:orthogo}) and Eq.~(\ref{eq:fast-mode-corresp}). The slow observable $\dot{Q}^s$ is insensitive to flux fluctuations within coarse-grained states, which gives $B_k^p=0$ in Eq.~(\ref{eq:B-split}). Therefore, we always have
\begin{equation}
\phi_j^s\sim 1,
\end{equation}
whether it is projected on the slow or the fast modes. The magnitudes of these projection coefficients are listed in FIG.~\ref{fig:asymp}.
Intuitively, the correlation and response of such a slow observable are also well described at the coarse-grained level in terms of the effective Markov process $\wh{M}$, which involves another set of projection coefficients $\wh{\alpha}_j^s\equiv\sum_p \wh{x}_j(p)\mathcal{Q}_p^s$, $\wh{\beta}_j^s\equiv\sum_p \wh{y}_j(p)\mathcal{Q}_p^s\wh{P}_p^{ss}$, and $\wh{\phi}_j^s\equiv \sum_p \wh{y}_j(p)\wh{B}_p$. Here, $\wh{B}_p$ is the flux fluctuation defined for $\wh{M}$, in the same spirit as $B_n$ for $M$ in Eq.~(\ref{eq:Bn-main}), which satisfies $\wh{B}_p=\sum_k B_{p_k}^{(1)}+O(\epsilon)$ due to the stationary distribution $P^{ss}_{p_k}=\wh{P}^{ss}_pP(k|p)+O(\epsilon)$. According to the mapping relations for the slow modes in Eq.~(\ref{eq:slow-mode-corresp}), the descriptions at the two levels are related by ($j\le K$)
\begin{subequations}\label{eq:mapping-Qs-slow}
\begin{eqnarray}
\alpha_j^s&=&\wh{\alpha}_j^s+O(\epsilon)\\
\beta_j^s&=&\wh{\beta}_j^s+O(\epsilon)\\
\phi_j^s&=&\wh{\phi}_j^s+O(\epsilon),
\end{eqnarray}
\end{subequations}
i.e., the two descriptions coincide at the leading order of the slow modes. This justifies the validity of coarse-graining if we are only interested in this slow observable.
The FRR violation spectrum of this slow observable can be split into the contribution from this coarse-grained description [Eq.~(\ref{eq:velo-FRR-vio})] and a correction from the underlying fast processes, i.e.,
\begin{equation}
\tilde{C}_{\dot{Q}^s}-2T\tilde{R}'_{\dot{Q}^s}=2\sum_{j=2}^K \wh{\alpha}_j^s \frac{T\wh{\phi}_j^s-\wh{\beta}_j^s\wh{\lambda}_j}{1+(\omega/\wh{\lambda}_j)^2}+\epsilon V_s(\omega).
\label{eq:velo-FRR-vio-slow}
\end{equation}
The correction term $ \epsilon V_s(\omega)$ is given by
\[
\epsilon V_s(\omega)=2\sum_{j\ge 2} \alpha_j^s\frac{T\phi_j^s-\beta_j^s\lambda_j}{1+(\omega/\lambda_j)^2}-2\sum_{j=2}^K \wh{\alpha}_j^s\frac{T\wh{\phi}_j^s-\wh{\beta}_j^s\wh{\lambda}_j}{1+(\omega/\wh{\lambda}_j)^2}.
\]
With the mapping relations [Eq.~(\ref{eq:mapping-Qs-slow})] for the slow modes and the magnitude estimates for the fast modes [FIG.~\ref{fig:asymp}], we find that $\epsilon V_s(\omega)$ is of order $\epsilon$. Similar to $V_f(\omega)$, $V_s(\omega)$ also vanishes for both $\omega\gg\lambda_N$ and $\omega\ll \lambda_2$. In the intermediate frequency region $\tau_s^{-1}\ll\omega\ll \tau_f^{-1}$, only the fast modes contribute, i.e.,
\begin{equation}
V_s\simeq \frac{2}{\epsilon} \sum_{j>K}\alpha_j^s (T\phi_j^s-\lambda_j\beta_j^s)\xrightarrow{\epsilon\to 0} \text{const},
\label{eq:slow-quasi-sumrule}
\end{equation}
where the limit is obtained by using the magnitude estimates for the fast modes [FIG.~\ref{fig:asymp}]. Alternatively, we may express this plateau in terms of the slow modes by using the sum rule [Eq.~(\ref{eq:sumrule-general})], i.e.,
\begin{equation}
V_s\simeq \frac{2}{\epsilon} \sum_{j=2}^K\alpha_j^s (\lambda_j\beta_j^s-T\phi_j^s)\xrightarrow{\epsilon\to 0} \text{const}.
\end{equation}
The mapping relations [Eq.~(\ref{eq:mapping-Qs-slow})] and the sum rule for the coarse-grained system, i.e., $\sum_{j=2}^K \wh{\alpha}_j(\wh{\lambda}_j\wh{\beta}_j-T\wh{\phi}_j)=0$, guarantee this finite limit. See FIG.~\ref{fig:illustration_FRRviolation}(b) for an illustration of $ \tilde{C}_{\dot{Q}^s}-2T\tilde{R}'_{\dot{Q}^s}$.
The FRR violation integral splits into two terms
\begin{equation}
\Delta_{Q^s}=\wh{\Delta}_{Q^s}+\Delta_{Q^s}^{(1)},
\end{equation}
where the leading term $\wh{\Delta}_{Q^s}$ comes from the effective system $\wh{M}$ and $\Delta_{Q^s}^{(1)}$ is the correction term from the integral of $\epsilon V_s(\omega)$. Although $\epsilon V_s(\omega)$ is small, it extends to the high frequency cutoff $1/\tau_f\sim \epsilon^{-1}$, which makes the integral $\Delta_{Q^s}^{(1)}\sim 1$, comparable to the leading term $\wh{\Delta}_{Q^s}$.
\begin{figure}
\centering
\includegraphics[width=8cm]{illustration_FRRviolation}
\caption{(a) Illustration of the FRR violation spectrum for a fast observable, assuming that detailed balance is satisfied within each coarse-grained state. This is equivalent to an illustration of $V_f(\omega)$, which is related to hidden entropy production. (b) Illustration of the FRR violation spectrum for a slow observable. The violation in the low frequency region, shaded by the orange dashed line, is due to the breaking of detailed balance at the level of the effective dynamics, while the small violation in the intermediate frequency region is related to hidden entropy production. }
\label{fig:illustration_FRRviolation}
\end{figure}
\section{Discussion}
\label{sect:discussion}
\subsection{Connection between the FRR violation plateau and hidden entropy production}
It was shown by Esposito that the steady state entropy production
\begin{equation}
\sigma\equiv \sum_{p_k, q_l} P_{p_k}^{ss} w_{p_k}^{q_l} \ln \frac{P_{p_k}^{ss} w_{p_k}^{q_l}}{P_{q_l}^{ss} w_{q_l}^{p_k}}
\end{equation}
for a Markov system with time scale separation ($\epsilon\to 0$) can be split into three parts~\cite{esposito2012stochastic}: the contribution from the coarse-grained dynamics
\begin{equation}
\sigma_1=\sum_{p,q} \wh{w}_p^q \wh{P}^{ss}_p \ln\frac{\wh{w}_p^q \wh{P}^{ss}_p}{\wh{w}_q^p \wh{P}^{ss}_q} ;
\end{equation}
that from the microscopic transitions within the same coarse-grained state
\begin{equation}
\sigma_2=\frac{1}{\epsilon} \sum_{p}\wh{P}^{ss}_p\sum_{k,l} P(k|p) w_k^l(p)\ln \frac{P(k|p)w_k^l(p)}{P(l|p)w_l^k(p)};
\end{equation}
and that from the coupling between fast and slow transitions
\begin{eqnarray}
\sigma_3&\equiv & \sigma-\sigma_1-\sigma_2\nonumber\\
&=&\sum_{p,q} \wh{w}_p^q \wh{P}^{ss}_p \sum_{k,l} f_{p_k}^{q_l}P(k|p)\ln \frac{f_{p_k}^{q_l}P(k|p)}{f_{q_l}^{p_k} P(l|q)},
\end{eqnarray}
where $f_{p_k}^{q_l}\equiv w_{p_k}^{q_l}/\wh{w}_p^q$ is the conditional transition rate between microscopic states $p_k$ and $q_l$, given that a transition between coarse-grained states $p$ and $q$ is already observed. All three contributions are non-negative. Furthermore, $\sigma_2$, if nonzero, diverges in the limit of time scale separation $\epsilon\to 0$.
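The decomposition can be made concrete on a small example. The sketch below (our construction: a ring of three coarse-grained states with two microstates each, chosen such that each block satisfies detailed balance internally and hence $\sigma_2=0$ exactly) computes $\sigma$, $\sigma_1$ and $\sigma_2$ from their definitions and obtains $\sigma_3$ as the remainder; all rates are arbitrary choices.
\begin{verbatim}
# Numerical decomposition sigma = sigma_1 + sigma_2 + sigma_3 for a 3-block
# ring with 2 microstates per block (illustrative rates).
import numpy as np

def stat(W):
    M = W - np.diag(W.sum(axis=0))
    vals, vecs = np.linalg.eig(M)
    P = np.real(vecs[:, np.argmin(np.abs(vals))])
    return P / P.sum()

def entropy_rate(W, P):
    # sum over n != m of P_m w_m^n ln[(P_m w_m^n)/(P_n w_n^m)], W[n,m] = w_m^n
    s = 0.0
    for n in range(len(P)):
        for m in range(len(P)):
            if n != m and W[n, m] > 0:
                s += P[m] * W[n, m] * np.log((P[m] * W[n, m]) / (P[n] * W[m, n]))
    return s

eps, K = 1e-3, 3
u, d = [1.0, 3.0, 2.0], [2.0, 1.0, 1.5]               # intra-block flip rates
fwd = np.array([[0.9, 0.4], [0.7, 0.2], [0.8, 0.5]])  # p -> p+1, microstate k
bwd = np.array([[0.3, 0.1], [0.2, 0.4], [0.1, 0.3]])  # p+1 -> p, microstate k
W = np.zeros((2 * K, 2 * K))                          # global index n = 2*p + k
for p in range(K):
    W[2*p + 1, 2*p], W[2*p, 2*p + 1] = u[p] / eps, d[p] / eps
    q = (p + 1) % K
    for k in range(2):
        W[2*q + k, 2*p + k], W[2*p + k, 2*q + k] = fwd[p, k], bwd[p, k]

Pss = stat(W)
sigma = entropy_rate(W, Pss)

Pk = [np.array([d[p], u[p]]) / (u[p] + d[p]) for p in range(K)]  # P(k|p)
What = np.zeros((K, K))                   # coarse-grained rates, Eq. (eq:effective-M)
for p in range(K):
    q = (p + 1) % K
    What[q, p], What[p, q] = fwd[p] @ Pk[p], bwd[p] @ Pk[q]
Phat = stat(What)
sigma1 = entropy_rate(What, Phat)
sigma2 = sum(Phat[p] * entropy_rate(
    np.array([[0, d[p]], [u[p], 0]]) / eps, Pk[p]) for p in range(K))
print(sigma, sigma1, sigma2, sigma - sigma1 - sigma2)  # remainder ~ sigma_3 >= 0
\end{verbatim}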
In the steady state, the total entropy production rate of the system equals the total heat dissipation rate divided by the temperature, where
\begin{equation}
J_{tot}\equiv T\sum_{p_k,q_l} P_{p_k}^{ss} w_{p_k}^{q_l} \ln \frac{ w_{p_k}^{q_l}}{ w_{q_l}^{p_k}}.
\end{equation}
The reason is that $\sigma-J_{tot}/T$ gives the change rate of the system entropy, i.e.,
\begin{eqnarray*}
\sigma-\frac{1}{T}J_{tot}&=&\sum_{p_k,q_l} P_{p_k}^{ss} w_{p_k}^{q_l} \ln \frac{ P_{p_k}^{ss}}{ P_{q_l}^{ss}}\\
&=& \sum_{p_k,q_l} [P_{p_k}^{ss} w_{p_k}^{q_l} -P_{q_l}^{ss} w_{q_l}^{p_k} ]\ln P_{p_k}^{ss}
\end{eqnarray*}
which vanishes in the steady state because $\sum_{q_l} [P_{p_k}^{ss} w_{p_k}^{q_l} -P_{q_l}^{ss} w_{q_l}^{p_k}]=0$. For the coarse-grained system, its total heat dissipation rate is given by
\begin{equation}
\wh{J}_{tot}=T\sum_{p,q} \wh{w}_p^q \wh{P}^{ss}_p \ln\frac{\wh{w}_p^q }{\wh{w}_q^p }.
\end{equation}
For a coarse-grained state $p$ that is assumed to be isolated from other coarse-grained states, its heat dissipation rate is given by
\begin{equation}
J_{tot}^p=\frac{T}{\epsilon}\sum_{k,l}P(k|p) w_k^l(p) \ln\frac{w_k^l(p) }{w_l^k(p) }.
\end{equation}
In parallel with $\sigma=J_{tot}/T$, we have
\begin{equation}
\sigma_1=\frac{1}{T}\wh{J}_{tot},\quad \sigma_2= \frac{1}{T}\sum_{p}\wh{P}^{ss}_p J_{tot}^p.
\end{equation}
Therefore,
\begin{equation}
\sigma_3=\frac{1}{T}\left [J_{tot}-\wh{J}_{tot}-\sum_{p}\wh{P}^{ss}_p J_{tot}^p\right].
\end{equation}
The above relations make it possible to evaluate $\sigma_1$, $\sigma_2$ and $\sigma_3$ by quantifying the dissipation rates through the FRR violation spectrum. However, a proper observable usually captures only part of the dissipation rate, as shown in Eq.~(\ref{eq:GHS}). Capturing the total dissipation rate requires a carefully designed set of ``orthogonal'' observables, each accounting for the dissipation rate from a complementary and non-overlapping subset of transitions in the network. This also requires the application of the generalized Harada-Sasa equality, which is only approximately true for Markov jumping systems with proper transition rates, as discussed before. Assuming that all these obstacles can be overcome, we can use the FRR violation integral for the effective dynamics, i.e., $\wh{\Delta}_{Q^s}$, to evaluate $\wh{J}_{tot}$, and $\Delta_{Q^f}^p$ for the fast dynamics within a coarse-grained state $p$ to estimate $J_{tot}^p$; the correction terms from the violation plateau, i.e., $\Delta_{Q^s}^{(1)}$ and $\Delta_{Q^f}^{(1)}$, then contain information about $\sigma_3$.
Now, we assume that detailed balance is satisfied within each coarse-grained state, which implies that $\sigma_2=0$ and $J_{tot}^p=0$. In this case, the potential FRR violations for slow and fast observables are illustrated in FIG.~\ref{fig:illustration_FRRviolation}. In FIG.~\ref{fig:illustration_FRRviolation}(b), for the slow observables, one can clearly distinguish the orange area that contributes to $\sigma_1$ from the purple area that contributes to $\sigma_3$. The plateau in FIG.~\ref{fig:illustration_FRRviolation}(a) only contributes to $\sigma_3$. Therefore, $\sigma_1$ comes from the FRR violation in the low frequency region $\omega \sim \tau_s^{-1}$, and $\sigma_3$ comes from the FRR violation in the high frequency region $\omega\sim \tau_f^{-1}$ (note that this plateau is plotted on a log scale and its integral is actually dominated by the region $\omega\sim \tau_f^{-1}$). It is then possible to quantify $\sigma_1$ and $\sigma_3$ only from measurable quantities of the original system. This will be illustrated later through a Markov jumping example.
Below, under the assumption that $\sigma_2=0$, we connect $\sigma_1$ and $\sigma_3$ with the FRR violation spectrum for the $N_0$-component Langevin equation~(\ref{eq:general-Langevin}). For this system, not only is the Harada-Sasa equality applicable, but the complete set of orthogonal observables is also well-defined, namely $\dot{x}_j$ for $j=1,2,\cdots, N_0$. We assume that the variables are indexed in such a way that $\gamma_j\sim 1$ for $j\le K_0$ and $\gamma_j\sim \epsilon $ for $j>K_0$. This means that $x_j$ is a slow variable for $j\le K_0$, with $K_0$ the number of slow variables in this system. We also assume that there is no net drift, i.e., $\langle x_j\rangle=0$. This multi-component Langevin system can be described by our general Markov system with time scale separation, where a coarse-grained state is any configuration specified by the slow variables, i.e., $\vec{x}^s\equiv (x_1,x_2,\cdots, x_{K_0})$, and a microscopic state within $\vec{x}^s$ is any configuration specified by all the fast variables.
With these assumptions, we have
\begin{subequations}
\begin{eqnarray}\label{eq:sigma-delta}
\sigma_1&=&\frac{1}{T}\wh{J}_{tot}=\frac{1}{T}\sum_{j=1}^{K_0} \gamma_j \wh{\Delta}_{x_j},\\
\sigma_3&=&\frac{1}{T}\sum_{j=1}^{K_0} \gamma_j \Delta_{x_j}^{(1)}+\frac{1}{T}\sum_{j=K_0+1}^{N_0} \gamma_j \Delta_{x_j}.
\end{eqnarray}
\end{subequations}
Here, $ \wh{\Delta}_{x_j}$ and $\Delta_{x_j}^{(1)}$, with $j=1,2,\cdots, K_0$, can be evaluated by the integral over the low and high frequency region of the FRR violation spectrum for this slow observable $\dot{x}_j$, respectively.
For many physical systems, time scale separation usually implies that not only $\sigma_2=0$ but also $\sigma_3=0$, i.e., no hidden entropy production at all. An interesting example is the potential switching model for molecular motors~\cite{kawaguchi2014nonequilibrium}, where chemical transition is fast and displacement is slow. In this scenario, the total dissipation rate could be extracted by only studying the FRR violation of the slow observables, as long as the slow transitions satisfy our assumptions that lead to Eq.~(\ref{eq:GHS}).
\subsection{Effective temperature for a fast observable}
Here, we define an effective temperature by naively assuming the FRR at all frequencies:
\begin{equation}
T_{eff}(\omega; \dot{Q})\equiv \frac{\tilde{C}_{\dot{Q}}(\omega)}{2\tilde{R}_{\dot{Q}}'(\omega)}.
\end{equation}
Note that this definition should be modified accordingly to apply to a displacement observable $Q(t)$, so as to give the same result. In general, $T_{eff}$ depends both on the frequency and on the observable considered. It converges to the bath temperature in the high frequency region $\omega\gg\lambda_N$.
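For completeness, the sketch below (same illustrative random network and observable as in the sketches of Sec.~\ref{sect:CR}, with $T=1$) evaluates $T_{eff}(\omega;\dot{Q})$ from Eq.~(\ref{eq:Cvelo_fre}) and Eq.~(\ref{eq:Rvelo_fre}) and shows the convergence to the bath temperature at high frequency.
\begin{verbatim}
# Frequency-resolved effective temperature from Eqs. (eq:Cvelo_fre) and
# (eq:Rvelo_fre); setup repeated from the earlier sketches (T = 1).
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
N, T = 4, 1.0
W = rng.uniform(0.5, 2.0, (N, N)); np.fill_diagonal(W, 0.0)
M = W - np.diag(W.sum(axis=0))

def modes(M):
    lam, VL, VR = eig(M, left=True, right=True)
    lam, X, Y = -lam, VR.astype(complex), VL.conj()
    o = np.argsort(lam.real)
    lam, X, Y = lam[o], X[:, o], Y[:, o]
    Y[:, 0] = Y[:, 0] / Y[0, 0]
    X[:, 0] = X[:, 0] / X[:, 0].sum()
    for j in range(1, len(lam)):
        X[:, j] = X[:, j] / (Y[:, j] @ X[:, j])
    return lam, X, Y

lam, X, Y = modes(M)
Pss = X[:, 0].real
Q = np.arange(N, dtype=float)
B = np.array([sum((W[n, m] * Pss[m] + W[m, n] * Pss[n]) * (Q[n] - Q[m])
                  for m in range(N)) for n in range(N)]) / (2 * T)
alpha, beta, phi = Q @ X, (Q * Pss) @ Y, B @ Y

def T_eff(w):
    r = w / lam[1:]
    C = np.sum(2 * alpha[1:] * beta[1:] * lam[1:] * r**2 / (1 + r**2)).real
    Rp = np.sum(alpha[1:] * phi[1:] * (r**2 + 1j * r) / (1 + r**2)).real
    return C / (2 * Rp)

for w in [0.1, 1.0, 10.0, 100.0]:
    print(w, T_eff(w))          # approaches the bath temperature T = 1 at high w
\end{verbatim}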
This concept became popular after Cugliandolo \emph{et al.} applied it to glassy systems in~\cite{LeticiaPREtemperature}, where they argued that the inverse effective temperature actually controls the direction of heat flow. For example, $1/T_{eff}(\omega,\dot{Q})<1/T$ implies that heat flows from the degree of freedom $Q(t)$ to the bath. Although $T_{eff}$ shares this desirable property with the temperature of an equilibrium system, its value is in general sensitive to the choice of observable~\cite{fielding2002observable}. Below, we discuss the properties of this definition for our system. Its physical meaning will be treated elsewhere.
We assume that $\sigma_2=0$, which implies that $T\phi_j^q=\beta_j^q\lambda_j^q$ for transitions within the same coarse-grained state. Then, the effective temperature of a general fast observable is given by
\begin{equation}
T_{eff}(\omega,\dot{Q}^f)=T+ \frac{V_f(\omega)}{2\tilde{R}'_{\dot{Q}^f}(\omega)}.
\end{equation}
In the frequency region $\omega\gg \tau_f^{-1/2}$, the response spectrum $\tilde{R}'_{\dot{Q}^f}(\omega)\sim \epsilon^{-1}$ is dominated by the fast modes, while for $\omega\ll \tau_f^{-1/2}$ we have $\tilde{R}'_{\dot{Q}^f}(\omega)\sim 1$, dominated by the slow modes. Besides, $V_f(\omega)$ changes from zero to a finite value around the frequency $\tau_s^{-1}$. Combining these, we find that the effective temperature takes a constant value in three frequency regions:
\begin{equation}
T_{eff}(\omega,\dot{Q}^f)=\begin{cases}
T+T_1^f, &\omega\ll \tau_s^{-1}\\
T+T_2^f,& \tau_s^{-1}\ll\omega\ll \tau_f^{-1/2}\\
T,&\omega\gg \tau_f^{-1/2}.
\end{cases}
\end{equation}
Here, $T_1^f$ and $T_2^f$ are two different numbers that satisfy, up to leading order,
\begin{subequations}
\begin{equation}
T_1^f= \frac{\sum_{j=2}^K \alpha_j^f (\beta_j^f \lambda_j-T\phi_j^f)/\lambda_j^2}{\sum_{j=2}^K \alpha_j^f\phi_j^f/\lambda_j^2}+O(\epsilon),
\label{eq:Teff_fast_low_1}
\end{equation}
\begin{equation}
T_2^f= \frac{\sum_{j=2}^K \alpha_j^f (\beta_j^f \lambda_j-T\phi_j^f)}{\sum_{j=2}^K \alpha_j^f\phi_j^f}+O(\epsilon).
\label{eq:Teff_fast_low_2}
\end{equation}
\end{subequations}
Note that $T_1^f$ has a structure similar to that of $T_2^f$, except for a weighting factor $1/\lambda_j^2$ that decreases with $j$, i.e., $1/\lambda_2^2\ge 1/\lambda_3^2\ge\cdots$. Both $T_1^f$ and $T_2^f$ pick up an eigenmode that contributes dominantly. However, $T_1^f$ favors a slower eigenmode due to the weighting factor. Only an eigenmode that violates detailed balance can contribute, i.e., one with $\beta_j^f \lambda_j\neq T\phi_j^f$.
Through quite a few non-trivial examples, we find that
\begin{equation}
T_1^f\approx T_2^f.
\label{eq:T1T2}
\end{equation}
This implies that both $T_1^f$ and $T_2^f$ pick up the slowest eigenmode. In other words, the slowest eigenmode ($j=2$) contributes more significantly to the violation of detailed balance than other slow modes, i.e., for $2<j\le K$,
\begin{equation}
|\alpha_2^f (\beta_2^f \lambda_2-T\phi_2^f)|>|\alpha_j^f (\beta_j^f \lambda_j-T\phi_j^f)|.
\end{equation}
This agrees with our intuition that slower modes take a longer time to relax to equilibrium and thus break detailed balance more easily when driven out of equilibrium. Eq.~(\ref{eq:T1T2}) implies that this two-timescale system has only two distinct temperatures, at the slow and fast time scales respectively, which are independent of the time scale separation index $\epsilon$.
\subsection{Effective temperature for a slow observable}
The effect of hidden fast processes on a slow observable can be captured by a renormalized (effective) rate matrix, Eq.~(\ref{eq:effective-M}), together with a fast colored noise with a small correlation time $\sim \tau_f$. We assume that the effective dynamics $\wh{M}$ on the slow time scale $\tau_s$ satisfies detailed balance, and we seek to capture the noise effect by an effective temperature.
The effective temperature for a slow observable $Q^s$ consists of a constant bath temperature $T$ and a frequency-dependent component from the colored noise:
\begin{equation}
T_{eff}(\omega,\dot{Q}^s)=T+\epsilon \frac{V_s(\omega)}{2\tilde{R}'_{\dot{Q}^s}(\omega)}.
\end{equation}
In general, $\tilde{R}'_{\dot{Q}^s}(\omega)\sim 1$ at all frequencies, dominated by the slow modes. Besides, $V_s(\omega)\sim 1$ and has two cutoffs, at the low frequency $\tau_s^{-1}$ and the high frequency $\tau_f^{-1}$, respectively. Therefore, we find that the effective temperature becomes constant in the low, intermediate, and high frequency regions:
\begin{equation}
T_{eff}(\omega;\dot{Q}^s)=
\begin{cases}
T+\epsilon T_1^s,&\quad \omega\ll \tau_s^{-1}\\
T+\epsilon T_2^s,&\quad \tau_s^{-1}\ll \omega\ll \tau_f^{-1}\\
T,&\quad \omega\gg \tau_f^{-1}.
\end{cases}
\end{equation}
Here, $T_1^s$ ($T_2^s$) has a structure similar to that of $T_1^f$ ($T_2^f$), i.e.,
\begin{equation}
T_1^s=\frac{1}{\epsilon} \frac{\sum_{j=2}^K \alpha_j^s (\beta_j^s \lambda_j-T\phi_j^s)/\lambda_j^2}{\sum_{j=2}^K \alpha_j^s\phi_j^s/\lambda_j^2}+O(\epsilon^2),
\label{eq:Teff_low}
\end{equation}
\begin{equation}
T_2^s=\frac{1}{\epsilon}\frac{\sum_{j=2}^K \alpha_j^s (\beta_j^s \lambda_j-T\phi_j^s)}{\sum_{j=2}^K \alpha_j^s\phi_j^s}+O(\epsilon^2),
\label{eq:Teff_medium}
\end{equation}
where $\beta_j^s \lambda_j-T\phi_j^s\sim \epsilon$ due to our assumption that the system effectively reaches equilibrium on the large time scale, which ensures that $T_1^s$ and $T_2^s$ are of order 1. The slow dynamics is frozen in the intermediate frequency region $ \tau_s^{-1}\ll \omega\ll \tau_f^{-1}$, while it evolves to a stationary distribution on time scales much larger than $\tau_s$. Since the strength of this colored noise generally depends on the value of the slow variable, it is renormalized into different noise strengths in the intermediate and low frequency regions, respectively. $T_1^s$ and $ T_2^s$ measure the strength of this noise, and thus the strength of the driving from the hidden fast processes. In the limit $\epsilon\to 0$, the effective temperature converges to the bath temperature.
We also find that
\begin{equation}
T_1^s\approx T_2^s.
\label{eq:T1T2-s}
\end{equation}
The underlying mechanism is similar to that of Eq.~(\ref{eq:T1T2}). Eq.~(\ref{eq:T1T2-s}) implies that we can roughly parameterize the active noise effect by a constant amplitude in the whole frequency region $\omega \ll\tau_f^{-1}$. In other words, the system has the same temperature as the bath on time scales smaller than $\tau_f$, and a slightly different temperature $T+\epsilon T_1^s$ on time scales larger than $\tau_f$.
\section{Example: sensory adaptation network}
\label{sect:adaptation}
\begin{figure}[!h]
\centering
\includegraphics[width=8.5cm]{adaptation_3}
\caption{Sensory adaptation network for a single membrane receptor in E.coli. The activity of this receptor is $a=1$ in the active state and $a=0$ when inactive. Its methylation level $m$ ranges from 0 to $m_0$; in fact, $m_0=4$ for this receptor in E.coli. Here $\alpha\ll 1$ and $\tau_s\gg\tau_f$ are required to achieve adaptation. Each methylation level is regarded as a coarse-grained state that contains two microscopic states with different activity. }
\label{fig:adaptation}
\end{figure}
Here, we study the Markov network illustrated in FIG.~\ref{fig:adaptation}, which describes sensory adaptation of the membrane receptor in E.coli~\cite{lan2012energy, shouwen2015adaptation,Pablo2015Adaptation}. This network contains two degrees of freedom: $a\,(=0,1)$ for the activity of this protein and $m \,(=0,1,\cdots,m_0)$ for its methylation level. Here, $a$ changes relatively fast on the time scale $\tau_f$ while $m$ changes on the slow time scale $\tau_s$.
For a fixed $m$, the activity reaches a local equilibrium distribution $P(1|m)/P(0|m)=\exp(-\Delta S(m))$, where $\Delta S(m)$ is a linear function given by $\Delta S(m)=e_0 (m_1-m)$. In general, the effect of extracellular ligand binding in NESS is captured by a shift of $m_1$. We may assume that the activation rate $w_0(m)$ and inactivation rate $w_1(m)$ satisfy
\begin{subequations}\label{eq:activationRate}
\begin{eqnarray}
w_0(m)&=&\frac{1}{\tau_f}\exp\left(-\frac{\Delta S(m)}{2T}\right),\\
w_1(m)&=&\frac{1}{\tau_f}\exp\left(\frac{\Delta S(m)}{2T}\right).
\end{eqnarray}
\end{subequations}
We assume that in the inactive (active) conformation the methylation (demethylation) rate is $r$, while the reverse transition rate is attenuated by a small factor $\alpha$, as illustrated in FIG.~\ref{fig:adaptation}. The time scale of methylation events is given by $\tau_s=1/r$. Here, $\alpha\ll 1$ is required for high sensory adaptation accuracy in E.coli. The time scale separation of this system is again captured by $\epsilon\equiv \tau_f/\tau_s$, which should be small to achieve adaptation. For more details of this model, please refer to~\cite{shouwen2015adaptation}.
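The model is small enough to be solved exactly; the following sketch (our implementation of the rates just stated, with $T=1$ and parameter values taken from the figure captions) builds the full rate matrix and its stationary state.
\begin{verbatim}
# Exact construction of the adaptation network of FIG. (fig:adaptation);
# T = 1 and all parameter values are illustrative.
import numpy as np

tau_f, r, alpha, e0, m0 = 0.1, 1.0, 0.1, 2.0, 4
m1 = m0 / 2
idx = lambda a, m: 2 * m + a                # state (a, m) -> global index

n = 2 * (m0 + 1)
W = np.zeros((n, n))                        # W[n', n] = rate n -> n'
for m in range(m0 + 1):
    dS = e0 * (m1 - m)
    W[idx(1, m), idx(0, m)] = np.exp(-dS / 2) / tau_f   # activation w_0(m)
    W[idx(0, m), idx(1, m)] = np.exp(+dS / 2) / tau_f   # inactivation w_1(m)
    if m < m0:
        W[idx(0, m + 1), idx(0, m)] = r                 # methylation (inactive)
        W[idx(1, m + 1), idx(1, m)] = alpha * r         # methylation (active)
        W[idx(0, m), idx(0, m + 1)] = alpha * r         # demethylation (inactive)
        W[idx(1, m), idx(1, m + 1)] = r                 # demethylation (active)

M = W - np.diag(W.sum(axis=0))
vals, vecs = np.linalg.eig(M)
Pss = np.real(vecs[:, np.argmin(np.abs(vals))]); Pss /= Pss.sum()
a_mean = sum(Pss[idx(1, m)] for m in range(m0 + 1))
print(a_mean)                               # adapted mean activity
\end{verbatim}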
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{adaptation-a}
\caption{(a) Correlation and response spectra for the fast observable $\dot{a}$ at various $\epsilon$. The inset shows the discrepancy of the two spectra in the low frequency region. (b) The effective temperature of this observable. (c) The corresponding FRR violation spectrum. (d) The dissipation rate $J_a$ due to changes of activity only, and the estimate $\gamma_a\Delta_a$ obtained by assuming the Harada-Sasa equality, with $\gamma_a=1.4\tau_fT$. Parameters: $r=1$ (thus, $\epsilon=\tau_f$), $\alpha=0.1$, $e_0=2$, $T=1$, $m_0=4$, and $m_1=m_0/2$. We vary $\tau_f$ to change $\epsilon$. For (d), we fix $\tau_f=0.1$ and change $\alpha$ instead. }
\label{fig:a-adaptation}
\end{figure}
First, we study the fast observable $\dot{a}$, the change rate of the activity. To obtain its response, we apply a perturbation $h$ that increases the activation rate $w_0$ by a factor $\exp(h/2T)$ and decreases the inactivation rate $w_1$ by a factor $\exp(-h/2T)$. Such a perturbation equivalently modifies $\Delta S(m)\to \Delta S(m)-h$, which can be realized by changing the extracellular ligand concentration slightly. According to the proposed perturbation form Eq.~(\ref{eq:perturbation}), the activity $a$ is the corresponding conjugate field.
FIG.~\ref{fig:a-adaptation}(a) shows the numerically exact velocity correlation spectrum $\tilde{C}_{\dot{a}}$ and the real part of the response spectrum $\tilde{R}_{\dot{a}}'$ at various $\epsilon$. We can see that the FRR is approximately satisfied in the high frequency region $\omega\sim \tau_f^{-1}$, but violated in the low frequency region $\omega\sim \tau_s^{-1}$. In particular, the response spectrum $\tilde{R}_{\dot{a}}'$ becomes negative in this low frequency region, while the correlation spectrum $\tilde{C}_{\dot{a}}$ remains positive, resulting in a negative effective temperature there. Still, the inverse effective temperature satisfies $1/T_{eff}(\omega,\dot{a})\le 1/T$ at all frequencies, and therefore heat flows from the degree of freedom $a$ to the bath even in this low frequency region. This is illustrated in FIG.~\ref{fig:a-adaptation}(b), where the effective temperature becomes a negative constant in the low frequency region $\omega\ll \tau_f^{-1/2}$ and reaches the bath temperature in the high frequency region $\omega\gg \tau_f^{-1/2}$. FIG.~\ref{fig:a-adaptation}(c) shows that the corresponding FRR violation spectrum has a plateau in the broad intermediate frequency region $\tau_s^{-1}\ll \omega\ll \tau_f^{-1}$ with an $\epsilon$-independent amplitude, which agrees with our general analysis. Although this plateau is of order $1$, it is much smaller than the correlation or response spectrum in the high frequency region, which is of order $\epsilon^{-1}$. Indeed, the frequency dependence of the effective temperature suggests that the FRR violation is much easier to detect in the low frequency region.
Next, we consider the slow observable $\dot{m}$, the change rate of the methylation level. To obtain its response, we use a perturbation $h$ that increases all the methylation rates (i.e., $r$ for the inactive state and $\alpha r$ for the active state) by a factor $\exp(h/2T)$, and decreases all the demethylation rates (i.e., $\alpha r$ for the inactive state and $r$ for the active state) by a factor $\exp(-h/2T)$. The conjugate field here is the methylation level $m$.
The numerically exact correlation and response spectra are shown in FIG.~\ref{fig:m-adaptation}(a). The violation of the FRR appears mainly in the intermediate frequency range, and this violation tends to vanish in the limit $\epsilon\to 0$, implying an equilibrium-like dynamics in the time scale separation limit. The non-equilibrium effect of the hidden fast variable can be captured by the extra effective temperature $T_{eff}-T$, as shown in FIG.~\ref{fig:m-adaptation}(b), which is almost constant in the region $\omega\ll \tau_f^{-1}$ and has a vanishingly small amplitude that scales linearly with $\epsilon$. The positivity of this extra effective temperature gives rise to a small heat flow from the degree of freedom $m$ to the bath at all frequencies. The FRR violation spectrum serves as another measure of the non-equilibrium effect of the hidden fast processes, as shown in FIG.~\ref{fig:m-adaptation}(c). It has a plateau in the frequency region $\tau_s^{-1}\ll\omega\ll \tau_f^{-1}$, also with a small amplitude of order $ \epsilon$. This behavior confirms our general analysis. This FRR violation can be quite difficult to detect experimentally.
\begin{figure}
\centering
\includegraphics[width=8cm]{adaptation-m}
\caption{ The analysis for the observable $\dot{m}$, similar to FIG.~\ref{fig:a-adaptation}. Here, $\gamma_m=T(r\sqrt{\alpha})^{-1}$. The parameters are the same as those in FIG.~\ref{fig:a-adaptation}. }
\label{fig:m-adaptation}
\end{figure}
The total dissipation rate of a Markov system is given by
\begin{equation}
J_{tot}=\frac{1}{2}\sum_{n,n'} (P^{ss}_nw_n^{n'}-P^{ss}_{n'} w_{n'}^n) \ln \frac{w_n^{n'}}{w_{n'}^n},
\end{equation}
which is always non-negative in the stationary state. In our bipartite network, the total dissipation rate can be decomposed into the dissipation for changes of activity, denoted as $J_a$, and that for changes of the methylation level, denoted as $J_m$. For example,
\begin{equation}
J_a=\sum_m \left[ P^{ss}_{1,m}w_1(m)-P^{ss}_{0,m}w_0(m)\right] \ln\frac{w_1(m)}{w_0(m)}.
\label{eq:Ja}
\end{equation}
Then, the total entropy production $\sigma$ in our system satisfies
\begin{equation}
\sigma=\sigma_3=\frac{1}{T}(J_a+J_m),
\end{equation}
where the first equality holds because our system has an equilibrium effective dynamics for the slow variable $m$, i.e., $\sigma_1=0$, and the fast dynamics for each given methylation level also satisfies detailed balance, i.e., $\sigma_2=0$.
The results for $J_a$ and $J_m$ are shown in FIG.~\ref{fig:a-adaptation}(d) and FIG.~\ref{fig:m-adaptation}(d), respectively. Here, we fix $\tau_f=0.01$ and change $\alpha$ to tune the dissipation rate of the system. Note that $J_m>0$ for $\alpha<1$ and $J_m<0$ for $1<\alpha<\exp(1)$, which implies that $\alpha=1$ is a critical point of the system~\cite{shouwen2015adaptation}.
The generality of Harada-Sasa equality in Langevin systems motivates us to check the following relation between FRR violation and dissipation in this bipartite network:
\begin{equation}
J_a\approx \gamma_a\Delta_a ,\qquad J_m\approx \gamma_m\Delta_m,
\label{eq:HS-adaptation}
\end{equation}
where $\Delta_a$ and $\Delta_m$ are the FRR violation integrals for the observables $\dot{a}$ and $\dot{m}$, respectively, and $\gamma_a$ and $\gamma_m$ are the corresponding effective friction coefficients, which are in general not well-defined in Markov systems. We may still derive $\gamma_m=T(r\sqrt{\alpha})^{-1}$ by focusing on the region where $\alpha$ is close to 1, where the methylation dynamics is almost diffusive and can be well approximated by a Langevin equation; see the Supplemental Material of~\cite{Wang2016entropy} for details. However, $\gamma_a$ cannot be derived by a similar method, because we do not have a Langevin analogue for this two-state network (at given $m$).
A naive way to overcome this difficulty is to define $\gamma_a=J_a/\Delta_a$ at a particular set of parameters, and then use this $\gamma_a$ to check whether $J_a\approx \gamma_a\Delta_a$ holds in a broader parameter region. In this way, we take $\gamma_a=1.4\tau_fT$, determined from $\tau_f=0.1$ and $\alpha=1$.
With these two effective friction coefficients, we numerically compare $J_x$ and $\gamma_x\Delta_x$ ($x=a,m$) in FIG.~\ref{fig:a-adaptation}(d) and FIG.~\ref{fig:m-adaptation}(d), by fixing $\tau_f$ and varying $\alpha$. Both agree very well in the region $\exp(-1)\le \alpha\le \exp(1)$, which is very non-trivial because the system displays qualitatively different behavior for $\alpha>1$ (no adaptation, with strong boundary effects from $m=0,m_0$) and for $\alpha<1$ (adaptation, with negligible boundary effects from $m=0,m_0$)~\cite{shouwen2015adaptation}. These relations also hold at various $\epsilon$, as shown in FIG.~\ref{fig:HS-adaptation-2}. The Harada-Sasa equality holds approximately for the observable $\dot{m}$ because the methylation dynamics satisfies the assumptions underlying our generalized Harada-Sasa equality~(\ref{eq:GHS}), in particular, relatively diffusive transition rates. However, the validity of $J_a\approx \gamma_a \Delta_a$ is more difficult to understand, because this observable has only two states (i.e., $a=0,1$) and the two-state dynamics is definitely not diffusive, since $\Delta S(m)$ varies broadly.
Below, we investigate in detail how the Harada-Sasa equality works for the observable $\dot{a}$. According to Eq.~(\ref{eq:GHSE-2}) and Eq.~(\ref{eq:evaluation}), we have
\begin{equation}
\Delta_a=\sum_m\left[ P^{ss}_{1,m}w_1(m)-P^{ss}_{0,m}w_0(m)\right] \frac{w_1(m)-w_0(m)}{2}.
\label{eq:gamma_aDetla_a}
\end{equation}
We define $\Delta_a(m)$ to be the contribution of the $m$-th methylation level to $\Delta_a$, so that $\Delta_a=\sum_m \Delta_a(m)$. Similarly, we define $J_a(m)$ to be the contribution of the $m$-th methylation level to $J_a$ in Eq.~(\ref{eq:Ja}), which satisfies $J_a=\sum_m J_a(m)$. FIG.~\ref{fig:HS-adaptation-2}(b) shows that $J_a(m)\approx \gamma_a\Delta_a(m)$, which is necessary to explain why $J_a\approx \gamma_a\Delta_a$ works over such a broad parameter region. Further analysis reveals that $\gamma_a(w_1(m)-w_0(m))/2\approx \ln [w_1(m)/w_0(m)]$ holds only in a narrow methylation region where the dissipation $|\Delta S(m)|$ is relatively small, as shown in FIG.~\ref{fig:HS-adaptation-2}(c). Fortunately, the activity flux $[P^{ss}_{1,m}w_1(m)-P^{ss}_{0,m}w_0(m)]$ is significant only around the region where $|\Delta S(m)|$ is also relatively small, as shown in FIG.~\ref{fig:HS-adaptation-2}(d). This explains why $J_a(m)\approx \gamma_a\Delta_a(m)$ holds.
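The quantities entering this comparison are easy to evaluate from the stationary state; the sketch below (our construction, repeating the network setup so that the snippet runs on its own; $\gamma_a=1.4\tau_fT$ as fixed above) computes $J_a$ from Eq.~(\ref{eq:Ja}) and $\Delta_a$ from Eq.~(\ref{eq:gamma_aDetla_a}).
\begin{verbatim}
# Test of J_a ~ gamma_a * Delta_a on the adaptation network (T = 1).
import numpy as np

tau_f, r, alpha, e0, m0, T = 0.1, 1.0, 0.1, 2.0, 4, 1.0
m1 = m0 / 2
idx = lambda a, m: 2 * m + a
ms = np.arange(m0 + 1)
w0 = np.exp(-e0 * (m1 - ms) / 2) / tau_f    # activation rates w_0(m)
w1 = np.exp(+e0 * (m1 - ms) / 2) / tau_f    # inactivation rates w_1(m)

n = 2 * (m0 + 1)
W = np.zeros((n, n))
for m in range(m0 + 1):
    W[idx(1, m), idx(0, m)], W[idx(0, m), idx(1, m)] = w0[m], w1[m]
    if m < m0:
        W[idx(0, m+1), idx(0, m)], W[idx(1, m+1), idx(1, m)] = r, alpha * r
        W[idx(0, m), idx(0, m+1)], W[idx(1, m), idx(1, m+1)] = alpha * r, r
M = W - np.diag(W.sum(axis=0))
vals, vecs = np.linalg.eig(M)
Pss = np.real(vecs[:, np.argmin(np.abs(vals))]); Pss /= Pss.sum()

P0 = Pss[[idx(0, m) for m in range(m0 + 1)]]
P1 = Pss[[idx(1, m) for m in range(m0 + 1)]]
flux = P1 * w1 - P0 * w0                    # activity flux at each m
J_a = np.sum(flux * np.log(w1 / w0))        # Eq. (eq:Ja)
Delta_a = np.sum(flux * (w1 - w0) / 2)      # Eq. (eq:gamma_aDetla_a)
gamma_a = 1.4 * tau_f * T
print(J_a, gamma_a * Delta_a)               # approximate agreement
\end{verbatim}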
\begin{figure}
\centering
\includegraphics[width=8.5cm]{test_HS_a}
\caption{ (a) Test of the generalized Harada-Sasa equality Eq.~(\ref{eq:HS-adaptation}) at different $\epsilon$. Major violation happens only at $\epsilon \approx 1$. Here, $\alpha=0.5$ and $\tau_f$ is varied to change $\epsilon$. Other parameters are the same as in FIG.~\ref{fig:a-adaptation}. (b) Comparison between $J_a(m)$ and $\gamma_a\Delta_a(m)$. (c) Distribution of the ratio between $\gamma_a(w_1(m)-w_0(m))/2$, an element of $\gamma_a \Delta_a(m)$, and $\ln [w_1(m)/w_0(m)]$, an element of $J_a(m)$. (d) Activity flux distribution. In (b), (c), and (d), the parameters are $\alpha=0.1$, $m_0=10$ and $\tau_f=0.1$. Other parameters are the same as in FIG.~\ref{fig:a-adaptation}. }
\label{fig:HS-adaptation-2}
\end{figure}
\section{Conclusion}
\label{sect:conclusion}
Here, we have made a systematic analysis for a general Markov system with two time scales, focusing on the FRR violation spectrum. The two characteristic time scales divide the frequency axis into three domains: the low, intermediate, and high frequency regions. Even when the fast processes satisfy detailed balance, the FRR violation for either a slow or a fast observable is characterized by a plateau in the intermediate frequency region. Generically, this plateau implies a finite hidden entropy production rate that results from the coupling between slow and fast processes. This connection is formulated precisely for general Langevin systems with two time scales. A very interesting Markov jumping system motivated by sensory adaptation in E.~coli also supports this connection. To quantify hidden entropy production from the FRR violation spectrum, we need to properly choose a complete set of orthogonal observables that capture all independent channels of dissipation, and measure the FRR violation plateau for each of them.
We have also studied a different measure of non-equilibrium dynamics: effective temperature. For a NESS driven only by the coupled motion between fast and slow processes, we find that the effective temperature for a fast observable approaches the room temperature in the high frequency region, while it deviates significantly from the room temperature in the low frequency region. This two-temperature, two-time-scale scenario is similar to that of glassy systems~\cite{LeticiaPREtemperature, Kurchan2005nonEQ}. However, the effective temperature for a slow observable approaches the bath temperature over the whole frequency range in the time scale separation limit, which is consistent with the emergence of an effective potential landscape. A tiny deviation of order $\epsilon$ from the bath temperature appears in the low and intermediate frequency regions, which is crucial to explain the finite dissipation rate of the slow variable. This extra deviation could be modeled by an extra fast noise, which would then capture the finite dissipation rate. The above results suggest that it is much easier to probe hidden entropy production by measuring the low frequency violation of the FRR for a fast observable.
The Harada-Sasa equality was originally derived only for Langevin dynamics with an infinite state space. Here, we also present a systematic discussion of the applicability of this equality to general Markov jumping systems. We find that the generalized Harada-Sasa equality not only requires a relatively \emph{diffusive} transition along the direction of the observable, but also that the prefactor of the transition rate, which quantifies its time scale, be homogeneous along this direction. Here, a transition is called diffusive when it produces only a small amount of entropy in the medium. These requirements allow one to lump all the transitions in this direction together, and therefore to access their dissipation rate by only monitoring the stochastic evolution projected along this direction. Other details such as the system size are not relevant. In some cases, although the relevant transitions are not always diffusive, the transitions that are more diffusive may take place much more frequently, which may again restore this equality, as supported by our sensory adaptation model. This generalized Harada-Sasa equality can be very useful to measure the total entropy production rate for a system with time scale separation, because the fast processes usually reach equilibrium, so that hidden entropy production is not so common in physical systems.
Our Markov system assumes a finite state space, which forbids a net drift of any observable. However, many interesting periodic systems, such as molecular motors, are able to reach a NESS and at the same time exhibit drifting motion. For such systems, another key difference is that the correlation and response spectra will not vanish in the low frequency limit, and consequently FRR violation in the low frequency limit is also possible. However, the other features identified in the current paper will remain unchanged. This discussion will be presented elsewhere.
In the near future, it would be interesting to apply our general framework to systems with hidden entropy production, such as (possibly) inefficient molecular motors~\cite{toyabe2015single}, active cytoskeletal networks~\cite{Mizuno2007cytoskeletonNetwork}, and repulsive self-propelled particles~\cite{Tailleur2008RunTumble}, where gas and liquid phases coexist.
\section*{Acknowledgements}
The authors thank M. Esposito and Y. Nakayama for fruitful discussions and comments.
The work was supported in part by the NSFC under Grant No. U1430237 and by the Research Grants Council of the Hong Kong Special Administrative Region (HKSAR) under Grant No. 12301514. It was also supported by KAKENHI (Nos. 25103002 and 26610115), and by the JSPS Core-to-Core program ``Non-equilibrium dynamics of soft-matter and information''.
This paper is a continuation of our earlier paper \cite{GtHJR1}, where Toeplitz-like operators with rational symbols which may have poles on the unit circle were introduced. While the aim of \cite{GtHJR1} was to determine the Fredholm properties of such Toeplitz-like operators, in the current paper we will focus on properties of the spectrum. For this purpose we further analyse this class of Toeplitz-like operators, specifically in the case where the operators are not Fredholm.
We start by recalling the definition of our Toeplitz-like operators.
Let $\Rat$ denote the space of rational complex functions. Write $\Rat(\mathbb{T})$ and $\Rat_0(\mathbb{T})$ for the subspaces of $\Rat$ consisting of the rational functions in $\Rat$ with all poles on $\BT$ and the strictly proper rational functions in $\Rat$ with all poles on the unit circle $\BT$, respectively. For $\om \in \Rat$, possibly having poles on $\BT$, we define a Toeplitz-like operator $T_\omega (H^p \rightarrow H^p)$, for $1 < p<\infty$, as follows:
\begin{equation}\label{Toeplitz}
\Dom(T_\omega)\!=\! \left\{ g\in H^p \! \mid \! \omega g = f + \rho \mbox{ with } f\!\in\! L^p\!\!,\, \rho \!\in\!\textup{Rat}_0(\mathbb{T})\right\},\
T_\omega g = \mathbb{P}f.
\end{equation}
Here $\BP$ is the Riesz projection of $L^p$ onto $H^p$.
In \cite{GtHJR1} it was established that this operator is a densely defined, closed operator which is Fredholm if and only if $\om$ has no zeroes on $\BT$. In case the symbol $\om$ of $T_\om$ is in $\Rat(\mathbb{T})$ with no zeroes on $\BT$, i.e., $T_\om$ Fredholm, explicit formulas for the domain, kernel, range and a complement of the range were also obtained in \cite{GtHJR1}. Here we extend these results to the case that $\om$ is allowed to have zeroes on $\BT$, cf., Theorem \ref{T:Rat(T)} below. By a reduction to the case of symbols in $\Rat(\mathbb{T})$, we then obtain for general symbols in $\Rat$, in Proposition \ref{P:injectdenserange} below, necessary and sufficient conditions for $T_\om$ to be injective or have dense range, respectively.
\paragraph{\bf Main results}
Using the fact that $\la I_{H^p}-T_{\om}=T_{\la-\om}$, our extended analysis of the operator $T_{\om}$ enables us to describe the spectrum of $T_\om$, and its various parts. Our first main result is a description of the essential spectrum of $T_\om$, i.e., the set of all $\la\in\BC$ for which $\la I_{H^p}-T_{\om}$ is not Fredholm.
\begin{theorem}\label{T:main1}
Let $\om\in\Rat$. Then the essential spectrum $\si_\textup{ess}(T_{\om})$ of $T_{\om}$ is an algebraic curve in $\BC$ which is given by
\[
\si_\textup{ess}(T_{\om})=\om(\BT):=\{\om(e^{i\theta}) \mid 0\leq \theta \leq 2\pi,\, \mbox{$e^{i\theta}$ not a pole of $\om$} \}.
\]
Furthermore, the map $\la\mapsto \Index (T_{\la-\om})$ is constant on connected components of $\BC\backslash \om(\BT)$ and the intersection of the point spectrum, residual spectrum and resolvent set of $T_\om$ with $\BC\backslash \om(\BT)$ coincides with sets of $\la\in\BC\backslash \om(\BT)$ with $\Index (T_{\la-\om})$ being strictly positive, strictly negative and zero, respectively.
\end{theorem}
Various examples, specifically in Section \ref{S:ExEssSpec}, show that the algebraic curve $\om(\BT)$, and thus the essential spectrum of $T_\om$, need not be connected in $\BC$.
Our second main result provides a description of the spectrum of $T_{\om}$ and its various parts. Here and throughout the paper $\cP$ stands for the subspace of $H^p$ consisting of all polynomials and $\cP_k$ for the subspace of $\cP$ consisting of all polynomials of degree at most $k$.
\begin{theorem}\label{T:main2}
Let $\om\in\Rat$, say $\om=s/q$ with $s,q\in\cP$ co-prime. Define
\begin{equation}\label{Kq0-}
\begin{aligned}
k_q&=\sharp\{\mbox{roots of $q$ inside $\overline{\BD}$}\}=\sharp\{\mbox{poles of $\la-\om$ inside $\overline{\BD}$}\},\\
k_\la^-&=\sharp\{\mbox{roots of $\la q-s$ inside $\BD$}\}=\sharp\{\mbox{zeroes of $\la-\om$ inside $\BD$}\},\\
k_\la^0&=\sharp\{\mbox{roots of $\la q-s$ on $\BT$}\}=\sharp\{\mbox{zeroes of $\la-\om$ on $\BT$}\},
\end{aligned}
\end{equation}
where in all these sets multiplicities of the roots, poles and zeroes are to be taken into account. Then the resolvent set $\rho(T_\om)$, point spectrum $\si_\textup{p}(T_\om)$, residual spectrum $\si_\textup{r}(T_\om)$ and continuous spectrum $\si_\textup{c}(T_\om)$ of $T_\om$ are given by
\begin{equation}\label{specparts}
\begin{aligned}
\rho(T_{\om})&=\{\la\in\BC \mid k_\la^0=0 \mbox{ and } k_q=k_\la^-\},\\
\si_\textup{p}(T_{\om})=\{\la\in\BC &\mid k_q>k_\la^-+k_\la^0\},\quad
\si_\textup{r}(T_{\om})=\{\la\in\BC \mid k_q<k_\la^-\},\\
\si_\textup{c}(T_{\om})&=\{\la\in\BC \mid k_\la^0>0 \mbox{ and } k_\la^- \leq k_q\leq k_\la^- + k_\la^0\}.
\end{aligned}
\end{equation}
Furthermore, $\si_\textup{ess}(T_\om)=\om(\BT)=\{\la\in\BC \mid k_\la^0>0\}$.
\end{theorem}
Again, in subsequent sections various examples are given that illustrate these results. In particular, examples are given where $T_{\om}$ has a bounded resolvent set, even with an empty resolvent set. This is in sharp contrast to the case where $\om$ has no poles on the unit circle $\BT$. For in this case the operator is bounded, the resolvent set is a nonempty unbounded set and the spectrum a compact set, and the essential spectrum is connected.
Both Theorems \ref{T:main1} and \ref{T:main2} are proven in Section \ref{S:Spectrum}.
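As an aside, the classification in Theorem \ref{T:main2} lends itself to a simple numerical test: all that is needed is to count roots of $q$ and of $\la q-s$ relative to $\BD$ and $\BT$. The following sketch (ours, not part of the formal development; it assumes Python with NumPy, represents polynomials by coefficient arrays with the highest degree first, and needs a tolerance, since a root lying exactly on $\BT$ is numerically fragile) locates a given $\la$:
\begin{verbatim}
import numpy as np

def classify(lam, s, q, tol=1e-9):
    # s, q: coefficients of the co-prime polynomials, highest
    # degree first; lam: the spectral parameter to be located.
    k_q = int(np.sum(np.abs(np.roots(q)) <= 1 + tol))       # poles in closed disc
    p = np.polysub(lam * np.asarray(q, dtype=complex), s)   # lam*q - s
    r = np.roots(p)
    k_min = int(np.sum(np.abs(r) < 1 - tol))                # zeroes in open disc
    k_zero = int(np.sum(np.abs(np.abs(r) - 1) <= tol))      # zeroes on the circle
    if k_zero == 0 and k_q == k_min:
        return "resolvent set"
    if k_q > k_min + k_zero:
        return "point spectrum"
    if k_q < k_min:
        return "residual spectrum"
    return "continuous spectrum"

# omega(z) = z/(z^2 + 1):  s = [1, 0], q = [1, 0, 1]
# classify(0.25, [1, 0], [1, 0, 1])   # -> 'point spectrum'
\end{verbatim}
Off the curve $\om(\BT)$ the same root counts give the Fredholm index $k_q-k_\la^-$ of $\la I-T_\om$, which is how the index pictures in the later sections can be produced.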
\paragraph{\bf Discussion of the literature}
In the case of a bounded selfadjoint Toeplitz operator on $\ell^2$, Hartman and Wintner in \cite{HW50} showed that the point spectrum is empty when the symbol is real and rational and posed the problem of specifying the spectral properties of such a Toeplitz operator. Gohberg in \cite{G52}, and more explicitly in \cite{G67}, showed that a bounded Toeplitz operator with continuous symbol is Fredholm exactly when the symbol has no zeroes on $\BT$, and in this case the index of the operator coincides with the negative of the winding number of the symbol with respect to zero. This implies immediately that the essential spectrum of a Toeplitz operator with continuous symbol is the image of the unit circle.
Hartman and Wintner in \cite{HW54} followed up their earlier question by showing that in the case where the symbol, $\varphi$, is a bounded real valued function on $\BT$, the spectrum of the Toeplitz operator on $H^2$ is contained in the interval bounded by the essential lower and upper bounds of $\varphi$ on $\BT$, as well as that the point spectrum is empty whenever $\varphi$ is not a constant. Halmos, after posing in \cite{H63} the question whether the spectrum of a Toeplitz operator is connected, showed with Brown in \cite{BH64} that the spectrum cannot consist of only two points. Widom, in \cite{W64}, established that bounded Toeplitz operators on $H^2$ have connected spectrum, and later extended the result to general $H^p$, with $1 \leq p \leq \infty$. That the essential (Fredholm) spectrum of a bounded Toeplitz operator in $H^2$ is connected was shown by Douglas in \cite{D98}. For the case of bounded Toeplitz operators in $H^p$ it is
posed as an open question in B\"ottcher and Silbermann in \cite[Page 70]{BS06} whether the essential (Fredholm) spectrum of a Toeplitz operator in $H^p$ is necessarily connected.
Clark, in \cite{C67}, established conditions on the argument of the symbol $\varphi$ in the case $\varphi\in L^q, q \geq 2$ that would give the kernel index of the Toeplitz operator with symbol
$\varphi$ on $L^p$, where $\frac{1}{p} + \frac{1}{q} = 1$, to be $m\in\BN$.
Janas, in \cite{J91},
discussed unbounded Toeplitz operators on the Bargmann-Siegel space and
showed that $\sigma_\tu{ess}(T_\varphi) \subset \cap_{R>0} \textrm{ closure } \{\varphi (z): \vert z\vert\geq R\}$.
\paragraph{\bf Overview} The paper is organized as follows. Besides the current introduction, the paper consists of five sections. In Section \ref{S:Review} we extend a few results concerning the operator $T_\om$ from \cite{GtHJR1} to the case where $T_\om$ need not be Fredholm. These results are used in Section \ref{S:Spectrum} to compute the spectrum of $T_{\om}$ and various parts of it, and by doing so we prove the main results, Theorems \ref{T:main1} and \ref{T:main2}. The remaining three sections contain examples that illustrate our main results and show in addition that the resolvent set can be bounded, even empty, and that the essential spectrum can be disconnected in $\BC$.
\paragraph{\bf Figures}
We conclude this introduction with a remark on the figures in this paper illustrating the spectrum and essential spectrum for several examples. The color coding in these figures is as follows: the white region is the resolvent set, the black curve is the essential spectrum, and the colors in the other regions codify the Fredholm index, where red indicates index $2$, blue indicates index $1$, cyan indicates index $-1$, magenta indicates index $-2$.
\section{Review and new results concerning $T_\omega$}\label{S:Review}
In this section we recall some results concerning the operator $T_\om$ defined in \eqref{Toeplitz} that were obtained in \cite{GtHJR1} and will be used in the present paper to determine spectral properties of $T_\om$. A few new features are added as well, specifically relating to the case where $T_\om$ is not Fredholm.
The first result provides necessary and sufficient conditions for $T_{\om}$ to be Fredholm, and gives a formula for the index of $T_\om$ in case $T_\om$ is Fredholm.
\begin{theorem}[Theorems 1.1 and 5.4 in \cite{GtHJR1}]\label{T:recall1}
Let $\om\in \Rat$. Then $T_\om$ is Fredholm if and only if $\om$ has no zeroes on $\BT$. In case $T_\om$ is Fredholm, the Fredholm index of $T_\om$ is given by
\[
\Index (T_\om) = \sharp \left\{\begin{array}{l}\!\!\!
\textrm{poles of } \om \textrm{ in }\overline{\BD} \textrm{ multi.}\!\!\! \\
\!\!\!\textrm{taken into account}\!\!\!
\end{array}\right\} -
\sharp \left\{\begin{array}{l}\!\!\! \textrm{zeroes of } \om\textrm{ in }\BD \textrm{ multi.}\!\!\! \\
\!\!\!\textrm{taken into account}\!\!\!
\end{array}\right\},
\]
and $T_\om$ is either injective or surjective. In particular, $T_\om$ is injective, invertible or surjective if and only if $\Index(T_\om)\leq 0$, $\Index(T_\om)=0$ or $\Index(T_\om)\geq 0$, respectively.
\end{theorem}
Special attention is given in \cite{GtHJR1} to the case where $\om$ is in $\Rat(\BT)$, since in that case the kernel, domain and range can be computed explicitly; for the domain and range this was done under the assumption that $T_\om$ is Fredholm. In the following result we collect various statements from Proposition 4.5 and Theorems 1.2 and 4.7 in \cite{GtHJR1} and extend to or improve some of the claims regarding the case that $T_\om$ is not Fredholm.
\begin{theorem}\label{T:Rat(T)}
Let $\om\in \Rat(\BT)$, say $\om=s/q$ with $s,q\in\cP$ co-prime. Factor $s=s_-s_0s_+$ with $s_-$, $s_0$ and $s_+$ having roots only inside, on, or outside $\BT$. Then
\begin{equation}\label{DomRanId}
\begin{aligned}
&\qquad \kernel (T_\omega) = \left\{r_0/s_+ \mid \deg(r_0) < \deg(q) - \deg(s_-s_0) \right\};\\
&\Dom(T_\om)=qH^p+\cP_{\deg(q)-1}; \quad
\Ran(T_\om)=s H^p+\wtil\cP,
\end{aligned}
\end{equation}
where $\wtil\cP$ is the subspace of $\cP$ given by
\begin{equation}\label{tilP}
\wtil\cP = \{ r\in\cP \mid r q = r_1 s + r_2 \mbox{ for } r_1,r_2\in\mathcal{P}_{\deg(q)-1}\}\subset \cP_{\deg(s)-1}.
\end{equation}
Furthermore, $H^p=\overline{\Ran(T_\om)} + \wtil{\cQ}$ forms a direct sum decomposition of $H^p$, where
\begin{equation}\label{tilQ}
\wtil\cQ=\cP_{k-1}\quad \mbox{with}\quad k=\max\{\deg(s_-)-\deg(q) , 0\},
\end{equation}
following the convention $\cP_{-1}:=\{0\}$.
\end{theorem}
The following result will be useful in the proof of Theorem \ref{T:Rat(T)}.
\begin{lemma}\label{L:closure}
Factor $s\in\cP$ as $s=s_-s_0s_+$ with $s_-$, $s_0$ and $s_+$ having roots only inside, on, or outside $\BT$. Then $sH^p =s_-s_0 H^p$ and $\overline{s H^p}= s_- H^p$.
\end{lemma}
\begin{proof}[\bf Proof]
Since $s_+$ has no roots inside $\overline{\BD}$, we have $s_+ H^p=H^p$. Furthermore, $s_0$ is an $H^\infty$ outer function (see, e.g., \cite{N}, Example 4.2.5)
so that $\overline{s_0 H^p}=H^p$.
Since $s_-$ has all its roots inside $\BD$, $T_{s_-}:H^p\to H^p$ is an injective operator with closed range. Consequently, we have
\[
\overline{s H^p}
=\overline{s_- s_0 s_+ H^p}
=\overline{s_- s_0 H^p}
=s_- \overline{ s_0 H^p}=s_- H^p,
\]
as claimed.
\end{proof}
\begin{proof}[\bf Proof of Theorem \ref{T:Rat(T)}]
In case $T_\om$ is Fredholm, i.e., $s_0$ constant, all statements follow from Theorem 1.2 in \cite{GtHJR1}. Without the Fredholm condition, the formula for $\kernel(T_\om)$ follows from \cite[Lemma 4.1]{GtHJR1} and for $\Dom(T_\om)$ and $\Ran(T_\om)$ Proposition 4.5 of \cite{GtHJR1} provides
\begin{equation}\label{DomRanIncl}
\begin{aligned}
qH^p+\cP_{\deg(q)-1}&\subset \Dom(T_\om);\\
T_\om (qH^p+\cP_{\deg(q)-1})&=s H^p+\wtil\cP\subset \Ran(T_\om).
\end{aligned}
\end{equation}
Thus in order to prove \eqref{DomRanId}, it remains to show that $\Dom(T_{\om})\subset q H^p +\cP_{\deg(q)-1}$.
Assume $g\in \Dom(T_{\om})$. Thus there exist $h\in H^p$ and $r\in \cP_{\deg(q)-1}$ so that $s g= q h +r$. Since $s$ and $q$ are co-prime, there exist $a,b\in\cP$ such that $s a+ q b\equiv 1$. Next write $ar=q r_1+r_2$ for $r_1,r_2\in\cP$ with $\deg(r_2)<\deg (q)$. Thus $sg=q h +r=q h+ q br + s ar=q(h+br+sr_1)+ s r_2$. Hence $g=q(h+br+sr_1)/s +r_2$. We are done if we can show that $\wtil{h}:=(h+br+sr_1)/s$ is in $H^p$.
The case where $g$ is rational is significantly easier, but still gives an idea of the complications that arise, so we include a proof. Hence assume $g\in\Rat\cap H^p$. Then $h=(s g-r)/q$ is also in $\Rat \cap H^p$, and $\wtil{h}$ is also rational. It follows that $q(h+br+sr_1)/s=q \wtil{h} =g-r_2\in \Rat \cap H^p$ and thus cannot have poles in $\overline{\BD}$. Since $q$ and $s$ are co-prime and $h$ cannot have poles inside $\overline{\BD}$, it follows that $\wtil{h}=(h+br+sr_1)/s$ cannot have poles in $\overline{\BD}$. Thus $\wtil{h}$ is a rational function with no poles in $\overline{\BD}$, which implies $\wtil{h}\in H^p$.
Now we prove the claim for the general case. Assume $q\wtil{h} +r_2=g\in H^p$, but $\wtil{h}=(h+br+sr_1)/s\not \in H^p$, i.e., $\wtil{h}$ is not analytic on $\BD$ or $\int_{\BT} |\wtil{h}(z)|^p\textup{d}z=\infty$. Set $\widehat{h}=h+br+sr_1\in H^p$, so that $\wtil{h}=\what{h}/s$. We first show $\wtil{h}$ must be analytic on $\BD$. Since $\wtil{h}=\what{h}/s$ and $\what{h}\in H^p$, $\wtil{h}$ is analytic on $\BD$ except possibly at the roots of $s$. However, if $\wtil{h}$ would not be analytic at a root $z_0\in\BD$ of $s$, then also $g= q \wtil{h}+r_2$ should not be analytic at $z_0$, since $q$ is bounded away from 0 on a neighborhood of $z_0$, using that $s$ and $q$ are co-prime. Thus $\wtil{h}$ is analytic on $\BD$. It follows that $\int_{\BT} |\wtil{h}(z)|^p\textup{d}z=\infty$.
Since $s$ and $q$ are co-prime, we can divide $\BT$ as $\BT_1\cup \BT_2$ with $\BT_1\cap \BT_2=\emptyset$ and each of $\BT_1$ and $\BT_2$ being nonempty unions of intervals, with $\BT_1$ containing all roots of $s$ on $\BT$ as interior points and $\BT_2$ containing all roots of $q$ on $\BT$ as interior points. Then there exist $N_1,N_2>0$ such that $|q(z)|> N_1$ on $\BT_1$ and $|s(z)|>N_2$ on $\BT_2$.
Note that
\begin{align*}
\int_{\BT_2}|\wtil{h}(z)|^p \textup{d}z
&=\int_{\BT_2}|\what{h}(z)/s(z)|^p \textup{d}z
\leq N_2^{-p} \int_{\BT_2}|\what{h}(z)|^p \textup{d}z
\leq N_2^{-p} \|\what{h}\|^p_{H^p}<\infty.
\end{align*}
Since $\int_{\BT}|\wtil{h}(z)|^p \textup{d}z=\infty$ and $\int_{\BT_2}|\wtil{h}(z)|^p \textup{d}z<\infty$, it follows that $\int_{\BT_1}|\wtil{h}(z)|^p \textup{d}z=\infty$. However, since $|q(z)|> N_1$ on $\BT_1$, this implies that
\begin{align*}
\|g-r_2\|^p_{H^p}&=\int_{\BT}|g(z)-r_2(z)|^p \textup{d}z
=\int_{\BT}|q(z)\wtil{h}(z)|^p \textup{d}z
\geq \int_{\BT_1}|q(z) \wtil{h}(z)|^p \textup{d}z\\
&\geq N_1^p \int_{\BT_1}|\wtil{h}(z)|^p \textup{d}z =\infty,
\end{align*}
in contradiction with the assumption that $g\in H^p$. Thus we can conclude that $\wtil{h}\in H^p$ so that $g=q \wtil{h}+r_2$ is in $q H^p+\cP_{\deg(q)-1}$.
It remains to show that $H^p=\overline{\Ran(T_\om)}+\wtil{\cQ}$ is a direct sum decomposition of $H^p$. Again, for the case that $T_\om$ is Fredholm this follows from \cite[Theorem 1.2]{GtHJR1}. By the preceding part of the proof we know, even in the non-Fredholm case, that $\Ran(T_\om)=s H^p+\wtil{\cP}$. Since $\wtil{\cP}$ is finite dimensional, and thus closed, we have
\[
\overline{\Ran(T_\om)}=\overline{s H^p}+\wtil{\cP}= s_- H^p + \wtil{\cP},
\]
using Lemma \ref{L:closure} in the last identity. We claim that
\[
\overline{\Ran(T_\om)}=s_- H^p + \wtil{\cP}=s_- H^p + \wtil{\cP}_-,
\]
where $\wtil{\cP}_-$ is defined by
\[
\wtil{\cP}_{-}:=\{r\in\cP \mid qr=r_1 s_- +r_2\mbox{ for }r_1,r_2\in\cP_{\deg(q)-1}\}\subset \cP_{\deg(s_-)-1}.
\]
Once the above identity for $\overline{\Ran(T_\om)}$ is established, the fact that $\wtil{\cQ}$ is a complement of $\overline{\Ran(T_\om)}$ follows directly by applying Lemma 4.8 of \cite{GtHJR1} to $s=s_-$.
We first show that $\overline{\Ran(T_\om)}=s_- H^p +\wtil{\cP}$ is contained in $s_- H^p+\wtil{\cP}_-$. Let $g=s_- h+r$ with $h\in H^p$ and $r\in\wtil{\cP}$, say $qr=r_1s +r_2$ with $r_1,r_2\in\cP_{\deg(q)-1}$. Write $r_1 s_0s_+=\wtil{r}_1 q + \wtil{r}_2$ with $\deg(\wtil{r}_2)<\deg(q)$. Then
\[
qr= r_1 s_- s_0s_+ +r_2=q\wtil{r}_1 s_-+\wtil{r}_2s_- +r_2,\mbox{ so that }
q(r-\wtil{r}_1s_-)= \wtil{r}_2s_- + r_2,
\]
with $r_2,\wtil{r_2}\in\cP_{\deg(q)-1}$. Thus $r-\wtil{r}_1s_-\in\wtil{\cP}_-$. Therefore, we have
\[
g=s_-(h+\wtil{r}_1)+(r-\wtil{r}_1s_-)\in s_- H^p +\wtil{\cP}_-,
\]
proving that $\overline{\Ran(T_\om)}\subset s_- H^p+\wtil{\cP}_-$.
For the reverse inclusion, assume $g=s_-h+r\in s_- H^p+\wtil{\cP}_-$. Say $qr=r_1 s_- + r_2$ with $r_1,r_2\in\cP_{\deg(q)-1}$. Since $s_0s_+$ and $q$ are co-prime and $\deg(r_1)<\deg(q)$ there exist polynomials $\wtil{r}_1$ and $\wtil{r}_2$ with $\deg(\wtil{r}_1)<\deg(q)$ and $\deg(\wtil{r}_2)<\deg(s_0s_+)$ that satisfy the B\'ezout equation $\wtil{r}_1 s_0 s_+ + \wtil{r}_2 q=r_1$. Then
\[
\wtil{r}_1 s+ r_2 = \wtil{r}_1 s_0s_+s_- + r_2= (r_1-\wtil{r}_2 q)s_- + r_2= r_1 s_- + r_2 -q \wtil{r}_2 s_- = q(r-\wtil{r}_2 s_-).
\]
Hence $r-\wtil{r}_2 s_-$ is in $\wtil{\cP}$, so that $g= s_- h+ r= s_-(h+\wtil{r}_2)+ (r-\wtil{r}_2 s_-)\in s_- H^p +\wtil{\cP}$. This proves the reverse inclusion, and hence completes the proof of Theorem \ref{T:Rat(T)}.
\end{proof}
The following result makes precise when $T_\om$ is injective and when $T_\om$ has dense range, even in the case where $T_\om$ is not Fredholm.
\begin{proposition}\label{P:injectdenserange}
Let $\om\in \Rat$. Then $T_\om$ is injective if and only if
\[
\sharp \left\{\begin{array}{l}\!\!\!
\textrm{poles of } \om \textrm{ in }\overline{\BD} \textrm{ multi.}\!\!\! \\
\!\!\!\textrm{taken into account}\!\!\!
\end{array}\right\} \leq
\sharp \left\{\begin{array}{l}\!\!\! \textrm{zeroes of } \om\textrm{ in }\overline{\BD} \textrm{ multi.}\!\!\! \\
\!\!\!\textrm{taken into account}\!\!\!
\end{array}\right\}.
\]
Moreover, $T_\om$ has dense range if and only if
\[
\sharp \left\{\begin{array}{l}\!\!\!
\textrm{poles of } \om \textrm{ in }\overline{\BD} \textrm{ multi.}\!\!\! \\
\!\!\!\textrm{taken into account}\!\!\!
\end{array}\right\} \geq
\sharp \left\{\begin{array}{l}\!\!\! \textrm{zeroes of } \om\textrm{ in }\BD \textrm{ multi.}\!\!\! \\
\!\!\!\textrm{taken into account}\!\!\!
\end{array}\right\}.
\]
In particular, $T_\om$ is injective or has dense range.
\end{proposition}
\begin{proof}[\bf Proof]
First assume $\om\in \Rat(\BT)$. By Corollary 4.2 in \cite{GtHJR1}, $T_\om$ is injective if and only if the number of zeroes of $\om$ inside $\overline{\BD}$ is greater than or equal to the number of poles of $\om$, in both cases with multiplicity taken into account. By Theorem \ref{T:Rat(T)}, $T_\om$ has dense range precisely when $\wtil{\cQ}$ in \eqref{tilQ} is trivial. The latter happens if and only if the number of poles of $\om$ is greater than or equal to the number of zeroes of $\om$ inside $\BD$, again taking multiplicities into account. Since in this case all poles of $\om$ are in $\BT$, our claim follows for $\om\in \Rat(\BT)$.
Now we turn to the general case, i.e., we assume $\om\in\Rat$. In the remainder of the proof, whenever we speak of numbers of zeroes or poles, this always means that the respective multiplicities are to be taken into account. Recall from \cite[Lemma 5.1]{GtHJR1} that we can factor $\om(z)= \om_-(z)z^\kappa \om_0(z) \om_+(z)$ with $\om_-,\om_0,\om_+\in\Rat$, $\om_-$ having no poles or zeroes outside $\BD$, $\om_+$ having no poles or zeroes inside $\overline{\BD}$ and $\om_0$ having poles and zeroes only on $\BT$, and $\kappa$ the difference between the number of zeroes of $\om$ in $\BD$ and the number of poles of $\om$ in $\BD$. Moreover, we have $T_\om=T_{\om_-}T_{z^\kappa \om_0} T_{\om_+}$ and $T_{\om_-}$ and $T_{\om_+}$ are boundedly invertible on $H^p$. Thus $T_\om$ is injective or has dense range if and only if $T_{z^\kappa\om_0}$ is injective or has dense range, respectively.
Assume $\kappa \geq 0$. Then $z^\kappa \om_0\in \Rat(\BT)$ and the results for the case that the symbol is in $\Rat(\BT)$ apply. Since the zeroes and poles of $\om_0$ coincide with the zeroes and poles of $\om$ on $\BT$, it follows that the number of poles of $z^\kappa \om_0$ is equal to the number of poles of $\om$ on $\BT$ while the number of zeroes of $z^\kappa \om_0$ is equal to $\kappa$ plus the number of zeroes of $\om$ on $\BT$ which is equal to the number of zeroes of $\om$ in $\overline{\BD}$ minus the number of poles of $\om$ in $\BD$. It thus follows that $T_{z^\kappa \om_0}$ is injective, and equivalently $T_\om$ is injective, if and only if the number of zeroes of $\om$ in $\overline{\BD}$ is greater than or equal to the number of poles of $\om$ in $\overline{\BD}$, as claimed. Similarly, by the first paragraph of the proof, $T_{z^\kappa \om_0}$, and equivalently $T_\om$, has dense range if and only if the number of poles of $z^\kappa\om_0$, i.e., the number of poles of $\om$ on $\BT$, is greater than or equal to the number of zeroes of $z^\kappa\om_0$ in $\BD$, which equals $\kappa$. Adding the number of poles of $\om$ in $\BD$ on both sides, this happens precisely when the number of poles of $\om$ in $\overline{\BD}$ is greater than or equal to the number of zeroes of $\om$ in $\BD$, which proves the dense range claim for $\kappa\geq0$.
Next, we consider the case where $\kappa<0$. In that case $T_{z^\kappa \om_0}=T_{z^\kappa}T_{\om_0}$, by Lemma 5.3 of \cite{GtHJR1}. We prove the statements regarding injectivity and $T_\om$ having dense range separately.
First we prove the injectivity claim for the case where $\kappa<0$. Write $\om_0 =s_0/q_0$ with $s_0,q_0\in\cP$ co-prime. Note that all the roots of $s_0$ and $q_0$ are on $\BT$. We need to show that $T_{z^\kappa \om_0}$ is injective if and only if $\deg(s_0) \geq \deg(q_0)-\kappa$ (recall, $\kappa$ is negative).
Assume $\deg(s_0)+\kappa \geq \deg(q_0)$. Then $\deg(s_0) > \deg(q_0)$, since $\kappa<0$, and thus $T_{\om_0}$ is injective. We have $\kernel(T_{z^{\kappa}})=\cP_{|\kappa|-1}$. So it remains to show $\cP_{|\kappa|-1} \cap \Ran (T_{\om_0})=\{0\}$. Assume $r\in\cP_{|\kappa|-1}$ is also in $\Ran (T_{\om_0})$. So, by Lemma 2.3 in \cite{GtHJR1}, there exist $g\in H^p$ and $r'\in\cP_{\deg(q_0)-1}$ so that $s_0 g=q_0 r+ r'$, i.e., $g=(q_0 r+ r')/s_0$. This shows that $g$ is in $\Rat(\BT)\cap H^p$, which can only happen in case $g$ is a polynomial. Thus, in the fraction $(q_0 r+ r')/s_0$, all roots of $s_0$ must cancel against roots of $q_0 r+ r'$. However, since $\deg(s_0)+\kappa \geq \deg(q_0)$, with $\kappa<0$, $\deg(r)\leq |\kappa|-1$ and $\deg(r')<\deg(q_0)$, we have $\deg(q_0 r + r')<\deg(s_0)$ and it is impossible that all roots of $s_0$ cancel against roots of $q_0 r + r'$, leading to a contradiction. This shows $\cP_{|\kappa|-1} \cap \Ran (T_{\om_0})=\{0\}$, which implies $T_{z^{\kappa}\om_0}$ is injective. Hence also $T_\om$ is injective.
Conversely, assume $\deg(s_0)+\kappa < \deg(q_0)$, i.e., $\deg(s_0)< \deg(q_0)+|\kappa|=:b$, since $\kappa<0$. Then
\[
s_0\in \cP_{b-1}=q_0 \cP_{|\kappa|-1} +\cP_{\deg(q_0)-1}.
\]
This shows there exist $r\in\cP_{|\kappa|-1}$ and $r'\in\cP_{\deg(q_0)-1}$ so that $s_0= q_0 r+ r'$. In other words, the constant function $g\equiv 1\in H^p$ is in $\Dom (T_{\om_0})$ and $T_{\om_0}g=r\in \cP_{|\kappa|-1}=\kernel (T_{z^\kappa})$, so that $g\in \kernel (T_{z^\kappa \om_0})$. This implies $T_\om$ is not injective.
Finally, we turn to the proof of the dense range claim for the case $\kappa<0$. Since $\kappa<0$ by assumption, $\om$ has more poles in $\overline{\BD}$ (and even in $\BD$) than zeroes in $\BD$. Thus to prove the dense range claim in this case, it suffices to show that $\kappa<0$ implies that $T_{z^\kappa\om_0}$ has dense range. We have $T_{z^\kappa \om_0}=T_{z^\kappa}T_{\om_0}$ and $T_{z^\kappa}$ is surjective. Also, $\om_0\in \Rat(\BT)$ has no zeroes inside $\BD$. So the proposition applies to $\om_0$, as shown in the first paragraph of the proof, and it follows that $T_{\om_0}$ has dense range. But then also $T_{z^\kappa \om_0}=T_{z^\kappa}T_{\om_0}$ has dense range, and our claim follows.
\end{proof}
\section{The spectrum of $T_\omega$}\label{S:Spectrum}
In this section we determine the spectrum and various subparts of the spectrum of $T_\om$ for the general case, $\om\in\Rat$, as well as some refinements for the case where $\om\in\Rat(\BT)$ is proper. In particular, we prove our main results, Theorems \ref{T:main1} and \ref{T:main2}.
Note that for $\om\in\Rat$ and $\la\in\BC$ we have $\la I-T_\om=T_{\la-\om}$. Thus we can relate questions on the spectrum of $T_\om$ to questions on injectivity, surjectivity, dense or closed range, etc.\ for Toeplitz-like operators with an additional complex parameter. By this observation, the spectrum of $T_\om$, and its various parts, can be determined using the results of Section \ref{S:Review}.
\begin{proof}[\bf Proof of Theorem \ref{T:main1}]
Since $\la I-T_\om=T_{\la-\om}$ and $T_{\la-\om}$ is Fredholm if and only if $\la-\om$ has no zeroes on $\BT$, by Theorem \ref{T:recall1}, it follows that $\la$ is in the essential spectrum if and only if $\la=\om(e^{i\theta})$ for some $0\leq \theta\leq 2\pi$. This shows that $\si_\textup{ess}(T_\om)$ is equal to $\om(\BT)$.
To see that $\om(\BT)$ is an algebraic curve, let $\omega=s/q$ with $s,q\in\cP$ co-prime. Then $\la=u+iv=\om(z)$ for $z=x+iy$ with $x^2+y^2=1$ if and only if $\la q(z)-s(z)=0$. Denote
$q(z)=q_1(x,y)+iq_2(x,y)$ and $s(z)=s_1(x,y)+is_2(x,y)$, where $z=x+iy$ and the functions
$q_1, q_2, s_1, s_2$ are real polynomials in two variables. Then $\la=u+iv$ is on the curve $\om(\BT)$ if and only if
\begin{align*}
q_1(x,y)u-q_2(x,y)v&=s_1(x,y),\\
q_2(x,y)u+q_1(x,y)v&=s_2(x,y),\\
x^2+y^2&=1.
\end{align*}
Solving for $u$ and $v$, this is equivalent to
\begin{align*}
(q_1(x,y)^2+q_2(x,y)^2)u-(q_1(x,y)s_1(x,y)+q_2(x,y)s_2(x,y))&=0,\\
(q_1(x,y)^2+q_2(x,y)^2)v-(q_1(x,y)s_2(x,y)-q_2(x,y)s_1(x,y))&=0,\\
x^2+y^2&=1.
\end{align*}
This describes an algebraic curve in the plane.
For $\lambda$ in the complement of the curve $\om(\BT)$ the operator $\lambda I -T_\om=T_{\la-\om}$ is
Fredholm, and according to Theorem \ref{T:recall1} the index is given by
$$
\Index (\lambda I-T_\om)=
\sharp\{\textrm{ poles of } \om \textrm{ in } \overline{\BD}\}-
\sharp\{\textrm{zeroes of } \om-\lambda \textrm{ inside }\BD\},
$$
taking the multiplicities of the poles and zeroes into account. Indeed, $\lambda - \om = \frac{\lambda q - s}{q}$ and since $q$ and $s$ are co-prime, $\lambda q - s$ and $q$ are also co-prime. Thus Theorem \ref{T:recall1} indeed applies to $T_{\la-\om}$. Furthermore, $\lambda - \om$ has the same poles as $\om$, i.e., the roots of $q$. Likewise, the zeroes of $\la-\om$ coincide with the roots of the polynomial $\lambda q - s$. Since the roots of this polynomial depend continuously on the parameter $\lambda$, the number of them inside $\BD$ is constant on connected components of the complement of the curve $\omega(\BT)$.
That the index is constant on connected components of the complement of the essential spectrum in fact holds for any unbounded densely defined operator (see \cite[Theorem VII.5.2]{S71}; see also \cite[Proposition XI.4.9]{C90} for the bounded case; for a much more refined analysis of this point see \cite{FK}).
Finally, the relation between the index of $T_{\la-\om}$ and $\la$ being in the resolvent set, point spectrum or residual spectrum follows directly by applying the last part of Theorem \ref{T:recall1} to $T_{\la-\om}$.
\end{proof}
Next we prove Theorem \ref{T:main2} using some of the new results on $T_\om$ derived in Section \ref{S:Review}.
\begin{proof}[\bf Proof of Theorem \ref{T:main2}]
That the two formulas for the numbers $k_q$, $k_\la^-$ and $k_\la^0$ coincide follows from the analysis in the proof of Theorem \ref{T:main1}, using the co-primeness of $\la q -s$ and $q$. By Theorem \ref{T:recall1}, $T_{\la-\om}$ is Fredholm if and only if $k_\la^0=0$, proving the formula for $\si_\textup{ess}(T_\om)$. The formula for the resolvent set follows directly from the fact that the resolvent set is contained in the complement of $\si_\textup{ess}(T_\om)$, i.e., $k_\la^0=0$, and that it there coincides with the set of $\la$'s for which the index of $T_{\la-\om}$ is zero, together with the formula for $\Index(T_{\la-\om})$ obtained in Theorem \ref{T:recall1}.
The formulas for the point spectrum and residual spectrum follow by applying the criteria for injectivity and dense range of Proposition \ref{P:injectdenserange} to $T_{\la-\om}$ together with the fact that $T_{\la-\om}$ must be either injective or have dense range.
For the formula for the continuous spectrum, note that $\si_\textup{c}(T_\om)$ must be contained in the essential spectrum, i.e., $k_\la^0>0$. The condition $k_\la^- \leq k_q\leq k_\la^- + k_\la^0$ excludes precisely that $\la$ is in the point or residual spectrum.
\end{proof}
For the case where $\om\in\Rat(\BT)$ is proper we can be a bit more precise.
\begin{theorem}\label{T:spectrum2}
Let $\om \in\textup{Rat}(\mathbb{T})$ be proper, say $\om=s/q$ with $s,q\in\cP$ co-prime. Thus $\degr( s) \leq \degr( q)$ and all roots of $q$ are on $\BT$. Let $a$ be the leading coefficient of $q$ and $b$ the coefficient of $s$ corresponding to the monomial $z^{\deg(q)}$, hence $b=0$ if and only if $\om$ is strictly proper. Then $\si_\textup{r}(T_\om)=\emptyset$, and the point spectrum is given by
\[
\si_\textup{p}(T_\om)=\om(\BC\backslash \overline{\BD}) \cup \{b/a\}.
\]
Here $\om(\BC\backslash \overline{\BD})=\{\om (z) \mid z\in \BC\backslash \overline{\BD}\}$.
In particular, if $\om$ is strictly proper, then $0=b/a$ is in $\si_\textup{p}(T_\om)$.
Finally,
\[
\si_\textup{c}(T_\om)=\{\lambda\in\mathbb{C} \mid k_\la^0 >0 \mbox{ and all roots of } \lambda q-s \mbox{ are in }\overline{\BD} \}.
\]
\end{theorem}
\begin{proof}[\bf Proof]
Let $\om = s/q\in\Rat(\BT)$ be proper with $s,q\in\cP$ co-prime. Then $k_q=\deg(q)$. Since $\degr(s) \leq \deg(q)$, for any $\la\in\BC$ we have
\[
k_\la^-+k_\la^0\leq \deg(\la q-s)\leq \deg(q)=k_q.
\]
It now follows directly from \eqref{specparts} that $\si_\textup{r}(T_\om)=\emptyset$ and $\si_\textup{c}(T_\om)=\{\lambda \in\mathbb{C}\mid k_\lambda^0 >0, k_\lambda^-+k_\lambda^0=\deg(q)\}$. To determine the point spectrum, again using \eqref{specparts}, one has to determine when strict inequality occurs. We have $\deg(\la q-s)<\deg(q)$ precisely when the leading coefficient of $\la q$ is cancelled in $\la q-s$ or if $\la=0$ and $\deg(s)<\deg(q)$. Both cases correspond to $\la=b/a$. For the other possibility of having strict inequality, $k_\la^-+k_\la^0<\deg(\la q-s)$, note that this happens precisely when $\la q-s$ has a root outside $\overline{\BD}$, or equivalently $\la=\om(z)$ for a $z\not\in \overline{\BD}$.
\end{proof}
\section{The spectrum may be unbounded, the resolvent set empty}
\label{S:Examples1}
In this section we present some first examples, showing that the spectrum can be unbounded
and the resolvent set may be empty.
\begin{example}\label{E:spectrum2}
Let $\om(z) = \frac{z - \alpha}{z - 1}$ for some $1\neq \alpha\in\BC$, say $\alpha=a+ib$, with $a$ and $b$ real. Let $L\subset\BC$ be the line given by
\begin{equation}\label{Line}
L=\{z=x+iy\in\BC \mid 2by = (a^2 + b^2 - 1) + (2 - 2a)x \}
\end{equation}
Then we have
\begin{align*}
\rho(T_\om)=\om(\BD),\quad & \sigma_\textup{ess} (T_\om)=\om(\BT)=L =\si_\tu{c}(T_\om), \\
\si_\tu{p}(T_\om)&=\om(\BC\backslash\overline{\BD}),\quad \si_\tu{r}(T_{\om})=\emptyset.
\end{align*}
Moreover, the point spectrum of $T_\om$ is the open half plane determined by $L$ that contains $1$ and the resolvent set of $T_\om$ is the other open half plane determined by $L$.\medskip
\begin{figure}
\begin{center}
\includegraphics[height=4cm]{figure_one}
\\
\caption
{Spectrum of $T_\om$ where $\om(z)=\frac{z-\alpha}{z-1}$, with $\alpha=-\frac{i}{2}$.}
\end{center}
\end{figure}
To see that these claims are true note that for $\lambda\not= 1$
\[
\lambda - \om(z) = \frac{z(\lambda - 1) + \alpha - \lambda}{z - 1} = \frac{1}{\lambda - 1}\frac{z + \frac{\alpha - \lambda}{\lambda - 1}}{z -1},
\]
while for $\lambda = 1$ we have $\lambda - \om(z) = \frac{\alpha - \lambda}{z - 1}$. Thus $\lambda = 1\in\sigma_\textup{p}(T_\om)$ for every $1\neq \alpha\in\BC$ as in that case $k_q=1> 0=k_\la^-+k_\la^0$. For $\la\neq 1$, $\la-\om$ has a zero at $\frac{\la-\al}{\la-1}$ of multiplicity one. For $\lambda = x + iy$ we have $\vert \alpha - \lambda \vert = \vert \lambda - 1 \vert $ if and only if $ (a - x)^2 + (b - y)^2 = (x - 1)^2 + y^2$, which in turn is equivalent to $2by = (a^2 + b^2 - 1) + (2 - 2a)x$. Hence the zero of $\la-\om$ is on $\BT$ precisely when $\la$ is on the line $L$. This shows $\si_\tu{ess}(T_\om)=L$. One easily verifies that the point spectrum and the resolvent set correspond to the two half planes indicated above, and that these coincide with the images $\om(\BC\backslash\overline{\BD})$ and $\om(\BD)$, respectively. Since $\la-\om$ can have at most one zero, it is clear from Theorem \ref{T:main2} that $\si_\tu{r}(T_\om)=\emptyset$, so that $\si_\tu{c}(T_\om)=L=\si_\tu{ess}(T_\om)$, as claimed.
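For a quick numerical confirmation of this picture (a sketch of ours, assuming Python with NumPy), one can sample $\la$ on the line $L$ and check that the zero of $\la-\om$ then indeed has modulus one:
\begin{verbatim}
import numpy as np

a, b = 0.0, -0.5                 # alpha = a + ib = -i/2 (Figure 1)
x = np.linspace(-5.0, 5.0, 11)
y = ((a*a + b*b - 1) + (2 - 2*a)*x) / (2*b)  # lam = x + iy on L
lam = x + 1j*y
z = (lam - (a + 1j*b)) / (lam - 1)           # the zero of lam - omega
print(np.allclose(np.abs(z), 1.0))           # True
\end{verbatim}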
\hfill$\Box$
\end{example}
\begin{example}\label{E:spectrum4a}
Let $\om(z) = \frac{1}{(z-1)^k}$ for some positive integer $k>1$. Then
\[
\si_\tu{p}(T_\om)=\si(T_\om)=\BC,\quad \si_\tu{r}(T_\om)=\si_\tu{c}(T_\om)=\rho(T_\om)=\emptyset,
\]
and the essential spectrum is given by
\[
\si_\tu{ess}(T_\om)=\om(\BT)=\{(it-\half)^k \mid t\in\BR \}.
\]
For $k=2$ the situation is as in Figure 2; one can check that the curve $\om(\BT)$ is the parabola $\re(z)=\frac{1}{4}-\im(z)^2$: writing $(it-\half)^2=\frac{1}{4}-t^2-it$ gives $\re(\om)=\frac{1}{4}-t^2$ and $\im(\om)=-t$. (Recall that different colors indicate different Fredholm index, as explained at the end of the introduction.)
\begin{figure}
\begin{center}
\includegraphics[height=4cm]{figure_two}
\caption{Spectrum of $T_\om$ where $\om(z)=\frac{1}{(z-1)^2}$}
\end{center}
\end{figure}
To prove the statements, we start with the observation that for $\vert z\vert = 1$, $\frac{1}{z-1}$ is of the form $it-\frac{1}{2} , t\in\BR$. Thus for $z\in\BT$ with $\frac{1}{z-1}=it-\frac{1}{2}$ we have
\[
\om(z) = \frac{1}{(z-1)^k} = (z-1)^{-k} = (it -\half)^k.
\]
This proves the formula for $\si_\tu{ess}(T_\om)$. For $\la=re^{i\theta}\neq 0$ we have
\[
\la-\om(z)=\frac{\la(z-1)^{k}-1}{(z-1)^k}.
\]
Thus $\la-\om(z)=0$ if and only if $(z-1)^k=\la^{-1}$, i.e., $z=1+r^{-1/k}e^{i(2\pi l-\theta)/k}$ for $l=0,\ldots,k-1$. Thus the zeroes of $\la-\om$ are $k$ equally spaced points on the circle with center $1$ and radius $r^{-1/k}$. Since these points differ from $1$ by the $k$-th roots of $\la^{-1}$, which sum to zero for $k>1$, we get $\sum_{l}|z_l|^2=k(1+r^{-2/k})>k$ for the zeroes $z_l$, so not all of them can lie inside $\overline{\BD}$. Hence $k_q> k_\la^{0}+k_\la^{-}$, and thus $\la\in\si_\tu{p}(T_\om)$. It follows directly from Theorem \ref{T:main2} that $0\in\si_\tu{p}(T_\om)$. Thus $\si_\tu{p}(T_\om)=\BC$, as claimed. The curve $\om(\BT)$ divides the plane into several regions on which the index is a positive constant integer, but the index may change between different regions.
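The escape of at least one zero from $\overline{\BD}$ is also easy to confirm numerically (a sketch of ours, assuming Python with NumPy; the principal value of $\la^{-1/k}$ suffices, since the remaining roots differ from it by $k$-th roots of unity):
\begin{verbatim}
import numpy as np

k = 5
rng = np.random.default_rng(0)
for lam in rng.standard_normal(4) + 1j * rng.standard_normal(4):
    zl = 1 + lam**(-1.0/k) * np.exp(2j * np.pi * np.arange(k) / k)
    print(np.max(np.abs(zl)) > 1)   # True: some zero escapes the disc
\end{verbatim}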
\hfill $\Box$
\end{example}
\section{The essential spectrum need not be connected}\label{S:ExEssSpec}
For a continuous function $\omega$ on the unit circle it is obviously the case that the
curve $\om(\BT)$ is a connected and bounded curve in the complex plane, and hence the
essential spectrum of $T_\omega$ is connected in this case. It was proved by Widom \cite{W64}
that also for $\omega$ piecewise continuous the essential spectrum of $T_\omega$ is connected,
and it is the image of a curve related to $\om(\BT)$ (roughly speaking, filling the jumps with
line segments). Douglas \cite{D98} proved that even for $\omega\in L^\infty$ the essential
spectrum of $T_\omega$ as an operator on $H^2$ is connected.
In \cite{BS06} the question is raised whether or not
the essential spectrum of $T_\omega$ as an operator on $H^p$ is always connected when
$\om \in L^\infty$.
Returning to our case, where $\omega$ is a rational function possibly with poles on the unit circle,
clearly when $\omega$ does have poles on the unit
circle it is not a-priori necessary that $\si_\tu{ess}(T_\om)=\om(\BT)$ is connected. We shall present examples that show that indeed the essential spectrum need not be connected, in contrast with the case where $\omega\in L^\infty$.
Consider $\om=s/q\in\Rat(\BT)$ with $s,q\in\cP$ with real coefficients. In that case $\overline{\om(z)}=\om(\overline{z})$, so that the essential spectrum is symmetric with respect to the real axis. In particular, if $\om(\BT)\cap \BR=\emptyset$, then the essential spectrum is disconnected. The converse direction need not be true, since the essential spectrum can consist of several disconnected parts on the real axis, as the following example shows.
\begin{example}\label{E:disconR}
Consider $\om(z)=\frac{z}{z^2+1}$. Then
\[
\si_\tu{ess}(T_\om)=\om(\BT)=(-\infty,-\half] \cup [\half,\infty)=\si_\tu{c}(T_\om),\quad
\si_\tu{p}(T_\om)=\BC\backslash \om(\BT),
\]
and thus $\si_\tu{r}(T_\om)=\rho(T_\om)=\emptyset$. Further, for $\la\not\in\om(\BT)$ the Fredholm index is 1.\medskip
Indeed, note that for $z=e^{i\theta}\in\BT$ we have
\[
\om(z)=\frac{1}{z+z^{-1}}=\frac{1}{2\,\re(z)}=\frac{1}{2\cos(\theta)}\in\BR.
\]
Letting $\theta$ run from $0$ to $2\pi$, one finds that $\om(\BT)$ is equal to the union of $(-\infty,-\half]$ and $[\half,\infty)$, as claimed. Since $\om$ is strictly proper, $\si_\tu{r}(T_\om)=\emptyset$ by Theorem \ref{T:spectrum2}. Applying Theorem \ref{T:recall1} to $T_\om$ we obtain that $T_\om$ is Fredholm with index 1. Hence $T_\om$ is not injective, so that $0\in\si_\tu{p}(T_\om)$. However, since $\BC\backslash \om(\BT)$ is connected, it follows from Theorem \ref{T:main1} that the index of $T_{\la-\om}$ is equal to 1 on $\BC\backslash \om(\BT)$, so that $\BC\backslash \om(\BT)\subset\si_\tu{p}(T_\om)$. However, for $\la$ on $\om(\BT)$ the function $\la-\om$ has two zeroes on $\BT$ as well as two poles on $\BT$. It follows that $\om(\BT)=\si_\tu{c}(T_\om)$, which shows all the above formulas for the spectral parts hold.
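A quick numerical confirmation (ours, assuming Python with NumPy): for real $\la$ with $|\la|\geq\half$ the zeroes of $\la-\om$ are the roots of $\la z^2-z+\la$, which have product $1$ and non-positive discriminant $1-4\la^2$, hence form a conjugate pair on $\BT$:
\begin{verbatim}
import numpy as np

for lam in [0.5, 0.75, -0.6, 5.0]:
    print(np.abs(np.roots([lam, -1.0, lam])))  # both moduli equal 1
\end{verbatim}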
\end{example}
As a second example we specify $q$ to be $z^2-1$ and determine a condition on $s$ that guarantees that $\si_\tu{ess}(T_\om)=\om(\BT)$ is not connected.
\begin{example}
Consider $\om(z)=\frac{s(z)}{z^2-1}$ with $s\in\cP$ a polynomial with real coefficients. Then for $z\in\BT$ we have
\[
\om(z)=\frac{\overline{z}s(z)}{z-\overline{z}}
=\frac{\overline{z}s(z)}{2i\,\im(z)}
=\frac{-i\overline{z}s(z)}{2\,\im(z)},\quad \mbox{so that}\quad
\im(\om(z))=-\frac{\re(\overline{z}s(z))}{2\,\im(z)}.
\]
Hence $\im(\om(z))=0$ if and only if $\re(\overline{z}s(z))=0$. Say $s(z)=\sum_{j=0}^k a_j z^j$. Then for $z\in\BT$ we have
\begin{align*}
\re(\overline{z}s(z)) & = \sum_{j=0}^k a_j \re(z^{j-1}).
\end{align*}
Since $|\re(z^j)|\leq 1$, we obtain that $|\re(\overline{z}s(z))|>0$ for all $z\in\BT$ in case $2|a_1|>\sum_{j=0}^k|a_j|$. Hence in that case $\om(\BT)\cap \BR=\emptyset$ and we find that the essential spectrum is disconnected in $\BC$.
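Both the sufficient condition $2|a_1|>\sum_{j=0}^k|a_j|$ and the resulting fact that $\om(\BT)$ misses the real axis are easy to test numerically; a small sketch of ours (assuming Python with NumPy):
\begin{verbatim}
import numpy as np

def min_abs_imag(s_coeffs, n=4096):
    # min |Im omega| over T for omega = s/(z^2 - 1), s with real
    # coefficients (highest degree first); the sample points avoid
    # the poles z = 1 and z = -1.
    th = (np.arange(n) + 0.5) * 2 * np.pi / n
    z = np.exp(1j * th)
    return np.min(np.abs((np.polyval(s_coeffs, z) / (z**2 - 1)).imag))

a = np.array([1.0, 0.0, 3.0, 1.0])       # s(z) = z^3 + 3z + 1
print(2 * abs(a[-2]) > np.abs(a).sum())  # sufficient criterion: True
print(min_abs_imag(a) > 0)               # omega(T) misses the real axis
\end{verbatim}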
We consider two concrete examples, where this criterion is satisfied.
Firstly, take $\omega(z)=\frac{z^3+3z+1}{z^2-1}$. Then
$$
\omega(e^{i\theta})= \frac{1}{2}(2\cos\theta -1) -\frac{i}{2}\frac{2(\cos\theta +1/4)^2+15/8}{\sin\theta},
$$
which is the curve shown in Figure 3; the figure also shows the spectrum and the resolvent set as well as the essential spectrum.
\begin{figure}
\includegraphics[height=4cm]{figure_three}
\caption{Spectrum of $T_\omega$, where $\om(z)=\frac{z^3+3z+1}{z^2-1}$}
\end{figure}
Secondly, take $\omega(z)=\frac{z^4+3z+1}{z^2-1}$. Figure 4 shows the spectrum and resolvent set and the essential spectrum. Observe that this is also a case where the resolvent set is bounded.
\begin{figure}
\includegraphics[height=4cm]{figure_four}
\caption{Spectrum of $T_\omega$, where $\om(z)=\frac{z^4+3z+1}{z^2-1}$}
\end{figure}
\end{example}
\section{A parametric example}\label{S:Examples2}
In this section we take $\om_k(z) = \frac{z^k + \alpha}{(z - 1)^2}$ for $\alpha\in\BC, \al\neq -1$ and for various integers $k\geq 1$. Note that the case $k=0$ was dealt with in Example \ref{E:spectrum4a} (after scaling with the factor $1+\al$). The zeroes of $\la-\om$ are equal to the roots of
\[
p_{\lambda,\alpha,k}(z)=\lambda q(z)- s(z) = \lambda (z-1)^2 - (z^k + \alpha).
\]
Thus, $\la$ is in the resolvent set $\rho(T_{\om_k})$ precisely when $p_{\lambda,\alpha,k}$ has exactly two roots in $\BD$ and no roots on $\BT$. Note that Theorem \ref{T:spectrum2} applies in case $k=1,2$. We discuss the first of these two cases in detail, and then conclude with some figures that contain possible configurations of other cases.
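The root counting behind these statements also shows how pictures like Figures 5 and 6 can be generated: evaluate, on a grid of $\la$, the number of roots of $p_{\lambda,\alpha,k}$ inside $\BD$. A sketch of ours (assuming Python with NumPy; the Fredholm index off $\om_k(\BT)$ is $2-k_\la^-$, and \texttt{nan} marks $\la$ on the essential spectrum):
\begin{verbatim}
import numpy as np

def index_map(alpha, k, lams, tol=1e-7):
    out = []
    for lam in lams:
        p = np.zeros(max(k, 2) + 1, dtype=complex)
        p[-3] += lam; p[-2] -= 2 * lam; p[-1] += lam  # lam*(z-1)^2
        p[len(p) - 1 - k] -= 1.0                      # - z^k
        p[-1] -= alpha                                # - alpha
        r = np.roots(p)
        if np.any(np.abs(np.abs(r) - 1.0) < tol):
            out.append(np.nan)                        # on omega_k(T)
        else:
            out.append(2 - int(np.sum(np.abs(r) < 1.0)))
    return out

# e.g. alpha = 1, k = 1 (Figure 5, top left):
# index_map(1.0, 1, [-0.5, 1.0j, 2.0])
\end{verbatim}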
\begin{example}\label{E:spectrum4b}
Let $\om(z)=\om_1(z) = \frac{z+\alpha}{(z-1)^2} $ for $\alpha\not = -1$. Then
\begin{equation}\label{EssSpecPara}
\si_\tu{ess}(T_\om)=\om(\BT)=\{(it-\half) + (1+\alpha)(it-\half)^2 \mid t\in\BR\}.
\end{equation}
Define the circle
\[
\BT(-\half,\half)=\{z\in\BC \mid |z+\half|=\half\},
\]
and write $\BD(-\half,\half)$ for the open disc formed by the interior of $\BT(-\half,\half)$ and $\BD^c(-\half,\half)$ for the open exterior of $\BT(-\half,\half)$.
For $\al\notin \BT(-\half,\half)$ the curve $\om(\BT)$ is equal to the parabola in $\BC$ given by
\begin{align*}
\om(\BT) &=\left\{ -(\al+1)(x(y)+i y) \mid y\in\BR \right\},\quad \mbox{ where } \\
x(y) &= \frac{|\al+1|^4}{(|\al|^2+\re(\al))^2}y^2+
\frac{(\re(\al)+1)\im(\al)}{(|\al|^2+\re(\al))^2}y+
\frac{|\al|^2(1-|\al|^2)}{4(|\al|^2+\re(\al))^2},
\end{align*}
while for $\al\in \BT(-\half,\half)$ the curve $\om(\BT)$ becomes the half line given by
\[
\om(\BT)=\left\{-(\al+1)r - \frac{(\al+1)(1+2\overline{\al})}{4(1-|\al|^2)} \mid r\geq 0 \right\}.
\]
As $\om$ is strictly proper, we have $\si_\tu{r}(T_\om)=\emptyset$. For the remaining parts of the spectrum we consider three cases.
\begin{itemize}
\item[(i)]
For $\al\in \BD(-\half,\half)$ the points $-\half$ and $0$ are separated by the parabola $\om(\BT)$ and the connected component of $\BC\backslash \om(\BT)$ that contains $-\half$ is equal to $\rho(T_{\om})$, while the connected component that contains 0 is equal to $\si_\tu{p}(T_\om)$. Finally, $\si_\tu{ess}(T_\om)=\om(\BT)=\si_\tu{c}(T_\om)$.
\item[(ii)]
For $\al\in \BT(-\half,\half)$ we have
\[
\rho(T_\om)=\emptyset,\quad \si_\tu{c}(T_\om)=\om(\BT)=\si_\tu{ess}(T_\om),\quad \si_\tu{p}(T_\om)=\BC\backslash \om(\BT),
\]
and for each $\la\in \om(\BT)$, $\la-\om$ has two zeroes on $\BT$.
\item[(iii)] For $\al\in \BD^c(-\half,\half)$ we have $\si_\tu{p}(T_\om)=\BC$, and hence $\rho(T_\om)=\si_\tu{c}(T_\om)=\emptyset$.
\end{itemize}
The proof of these statements will be separated into three steps.\smallskip
\paragraph{\it Step 1.}
We first determine the formula of $\om(\BT)$ and show this is a parabola. Note that
\[
\om(z) = \frac{z+\alpha}{(z-1)^2} = \frac{z-1}{(z-1)^2} + \frac{1+\alpha}{(z-1)^2} = \frac{1}{z-1} + (\alpha+1)\frac{1}{(z-1)^2}.
\]
Let $|z|=1$. Then $\frac{1}{z-1}$ is of the form $it-\half$ with $t\in\BR$. So $\om(\BT)$ is the curve
\begin{equation*}
\om(\BT)=\{(it-\half) + (\alpha+1)(it-\half)^2 \mid t\in\BR\}.
\end{equation*}
Thus \eqref{EssSpecPara} holds. Now observe that
\begin{align*}
& (it-\half) + (\alpha+1)(it-\half)^2=\\
&\qquad = -t^2(\alpha+1) + t(i - (\alpha+1)i) + (-\half + \mbox{$\frac{1}{4}$}(\alpha+1))\\
&\qquad = -t^2(\alpha+1) + (-\alpha i)t + (-\mbox{$\frac{1}{4}$} + \mbox{$\frac{1}{4}$}\alpha)\\
&\qquad = \displaystyle -(\alpha+1)\left(t^2 + t\frac{\alpha i}{\alpha+1} - \frac{1}{4}\left(\frac{\alpha - 1}{\alpha + 1}\right )\right ).
\end{align*}
The prefactor $-(1+\alpha)$ acts as a rotation combined with a real scalar multiplication, so $\om(\BT)$ is also given by
\begin{equation}\label{omTeq}
\om(\BT)=-(\al+1)\left\{t^2 + t\left(\frac{\alpha i}{\alpha+1}\right ) - \frac{1}{4}\left (\frac{\alpha-1 }{\alpha+1}\right ) \mid t\in\BR\right\}.
\end{equation}
Thus if the above curve is a parabola, so is $\om(\BT)$. Write
\begin{align*}
x(t) &= \re \left(t^2 + t\frac{\alpha i}{1+\alpha} - \frac{1}{4}\left(\frac{\alpha - 1}{\alpha + 1}\right )\right ),\\
y(t) &= \im \left(t^2 + t\frac{\alpha i}{1+\alpha} - \frac{1}{4}\left(\frac{\alpha - 1}{\alpha + 1}\right )\right ).
\end{align*}
Since
\[
\frac{\al i}{\al+1}=\frac{-\im(\al)+i(|\al|^2+\re(\al))}{|\al+1|^2}
\ands
\frac{\al-1}{\al+1}=\frac{(|\al|^2-1)+2i\im(\al)}{|\al+1|^2}
\]
we obtain that
\[
x(t) = t^2-\frac{\im(\al)}{|\al+1|^2}t-\frac{|\al|^2-1}{4|\al+1|^2},\quad
y(t) = \frac{|\al|^2+\re(\al)}{|\al+1|^2}t-\frac{\im(\al)}{2|\al+1|^2}.
\]
Note that $|\al+\half|^2=|\al|^2+\re(\al)+\frac{1}{4}$. Therefore, we have $|\al|^2+\re(\al)=0$ if and only if $|\al+\half|=\half$. Thus $|\al|^2+\re(\al)=0$ holds if and only if $\al$ is on the circle $\BT(-\half,\half)$.
In case $\al\notin \BT(-\half,\half)$, i.e., $|\al|^2+\re(\al)\neq 0$, we can express $t$ in terms of $y$, and feed this into the formula for $x$. One can then compute that
\[
x=\frac{|\al+1|^4}{(|\al|^2+\re(\al))^2}y^2+
\frac{(\re(\al)+1)\im(\al)}{(|\al|^2+\re(\al))^2}y+
\frac{|\al|^2(1-|\al|^2)}{4(|\al|^2+\re(\al))^2}.
\]
Inserting this formula into \eqref{omTeq}, we obtain the formula for $\om(\BT)$ for the case where $\al\notin \BT(-\half,\half)$.
In case $\al\in \BT(-\half,\half)$, i.e., $|\al|^2+\re(\al)= 0$, we have
\[
|\al+1|^2=1-|\al|^2=1+\re(\al), \quad \im(\al)^2=|\al|^2(1-|\al|^2)
\]
and using these identities one can compute that
\[
y(t)=\frac{-2\im(\al)}{4(1-|\al|^2)}\ands
x(t)=\left(t-\frac{\im(\al)}{2(1-|\al|^2)}\right)^2+\frac{1+2\re(\al)}{4(1-|\al|^2)}.
\]
Thus $\{x(t)+iy(t) \mid t\in\BR\}$ is a half line in $\BC$, parallel to the real axis, starting at $\frac{1+2\overline{\al}}{4(1-|\al|^2)}$ and moving in the positive direction. It follows that $\om(\BT)$ is the half line
\[
\om(\BT)=\left\{-(\al+1)r - \frac{(\al+1)(1+2\overline{\al})}{4(1-|\al|^2)} \mid r\geq 0 \right\},
\]
as claimed.\medskip
\paragraph{\it Step 2.} Next we determine the various parts of the spectrum in $\BC\backslash \om(\BT)$. Since $\om$ is strictly proper, Theorem \ref{T:spectrum2} applies, and we know $\si_\tu{r}(T_\om)=\emptyset$ and $\si_\tu{p}=\om(\BC\backslash \overline{\BD})\cup \{0\}$.
For $k=1$, the polynomial $p_{\la,\al}(z)=p_{\la,\al,1}(z)=\la z^2 -(1+2\la)z+\la-\al$ has roots
\[
\frac{(1+2\la)\pm \sqrt{1+4\la(1+\al)}}{2\la}.
\]
We consider three cases, depending on whether $\al$ is inside, on or outside the circle $\BT(-\half,\half)$.
Assume $\al\in\BD(-\half,\half)$. Then $\om(\BT)$ is a parabola in $\BC$. For $\la=-\half$ we find that $\la-\om$ has zeroes $\pm i\sqrt{1+2\al}$, which are both inside $\BD$, because of our assumption. Thus $-\half\in\rho(T_\om)$, so that $\rho(T_\om)\neq \emptyset$. Therefore the connected component of $\BC\backslash \om(\BT)$ that contains $-\half$ is contained in $\rho(T_\om)$, which must also contain $\om(\BD)$. Note that $0\in\om(\BT)$ if and only if $|\al|=1$. However, there is no intersection of the disc $\al\in\BD(-\half,\half)$ and the unit circle $\BT$. Thus 0 is in $\si_\tu{p}(T_\om)$, but not on $\om(\BT)$. Hence $0$ is contained in the connected component of $\BC\backslash \om(\BT)$ that does not contain $-\half$. This implies that the connected component containing $0$ is included in $\si_\tu{p}(T_\om)$. This proves our claims for the case $\al\in\BD(-\half,\half)$.
Now assume $\al\in\BT(-\half,\half)$. Then $\om(\BT)$ is a half line, and thus $\BC\backslash \om(\BT)$ consists of one connected component. Note that the intersection of the disc determined by $|\al+\half|<\half$ and the unit circle consists of $-1$ only. But $\al\neq -1$, so it again follows that $0\notin\om(\BT)$. Therefore the $\BC\backslash \om(\BT)=\si_\tu{p}(T_\om)$. Moreover, the reasoning in the previous case shows that $\la=-\half$ is in $\si_\tu{c}(T_{\om})$ since both zeroes of $-\half-\om$ are on $\BT$.
Finally, consider the case where $\al$ is in the exterior of $\BT(-\half,\half)$, i.e., $|\al+\half|>\half$. In this case, $|\al|=1$ is possible, so that $0\in\si_\tu{p}(T_\om)$ could be on $\om(\BT)$. We show that $\al=\om(0)\in\om(\BD)$ is in $\si_\tu{p}(T_\om)$. If $\al=0$, this is clearly the case. So assume $\al\neq0$. The zeroes of $\al-\om$ are then equal to $0$ and $\frac{1+2\al}{\al}$. Note that $|\frac{1+2\al}{\al}|> 1$ if and only if $|1+2\al|^2-|\al|^2>0$. Moreover, we have
\[
|1+2\al|^2-|\al|^2=3|\al|^2+4\re(\al)+1=3|\al+\mbox{$\frac{2}{3}$}|^2-\mbox{$\frac{1}{3}$}.
\]
Thus, the second zero of $\al-\om$ is outside $\overline{\BD}$ if and only if $|\al+\frac{2}{3}|^2>\frac{1}{9}$. Since the disc indicated by $|\al+\frac{2}{3}|\leq\frac{1}{3}$ is contained in the interior of $\BT(-\half,\half)$, it follows that for $\al$ satisfying $|\al+\half|>\half$ one zero of $\al-\om$ is outside $\overline{\BD}$, and thus $\om(0)=\al\in \si_\tu{p}(T_\om)$. Note that
\[
\BC=\om(\BC)=\om(\BD)\cup \om(\BT) \cup \om(\BC\backslash \overline{\BD}),
\]
and that $\om(\BD)$ and $\om(\BC\backslash \overline{\BD})$ are connected components, both contained in $\si_\tu{p}(T_\om)$. This shows that $\BC\backslash \om(\BT)$ is contained in $\si_\tu{p}(T_\om)$.\medskip
\paragraph{\it Step 3.} In the final part we prove the claim regarding the essential spectrum $\si_\tu{ess}(T_\om)=\om(\BT)$. Let $\la\in\om(\BT)$ and write $z_1$ and $z_2$ for the zeroes of $\la-\om$. One of the zeroes must be on $\BT$, say $|z_1|=1$. Then $\la\in\si_\tu{p}(T_\om)$ if and only if $|z_1z_2|=|z_2|>1$. From the form of $p_{\la,\al}$ determined above we obtain that
\[
\la z^2-(1+2\la)z+\la-\al=\la(z-z_1)(z-z_2).
\]
Comparing the constant terms on both sides shows that $\la z_1z_2=\la-\al$. Thus
\[
|z_2|=|z_1z_2|=\frac{|\la-\al|}{|\la|}.
\]
This shows that $\la\in\si_\tu{p}(T_\om)$ if and only if $|\la-\al| > |\la|$, i.e., $\lambda$ is in the half plane containing zero determined by the line through $\half\alpha$ perpendicular to the line segment from zero to $\alpha$.
Consider the line given by $|\la-\al| = |\la|$ and the parabola $\om(\BT)$, which is a half line in case $\al\in\BT(-\half,\half)$. We show that $\om(\BT)$ and the line intersect only for $\al\in\BT(-\half,\half)$, and that in the latter case $\om(\BT)$ is contained in the line. Hence for each value of $\al\neq -1$, the essential spectrum consists of either point spectrum or of continuous spectrum, and for $\al\in\BT(-\half,\half)$ both zeroes of $\la-\om$ are on $\BT$, so that $\om(\BT)$ is contained in $\si_\tu{c}(T_\om)$.
As observed in \eqref{EssSpecPara}, the parabola $\om(\BT)$ is given by the parametrization $(it-\half)^2(\alpha+1)+(it-\half)$ with $t\in\BR$, while the line is given by the parametrization
$\half\alpha +si\alpha$ with $s\in\BR$. Fix a $t\in\BR$ and assume the point on $\om(\BT)$ parameterized by $t$ lies on the line, i.e., assume there exists an $s\in\BR$ such that:
$$
(it-\half)^2(\alpha+1)+(it-\half)=\half\alpha +si\alpha,
$$
Thus
$$
(-t^2-it+\mbox{$\frac{1}{4}$})(\alpha+1)+(it-\half)=\half\alpha +si\alpha,
$$
which can be rewritten as
$$
i(-t(\alpha+1)+t-\alpha s)+((-t^2+\mbox{$\frac{1}{4}$})(\alpha+1)-\half -\half \alpha)=0,
$$
which yields
$$
-\alpha i(t+s)+(\alpha+1)(-t^2-\mbox{$\frac{1}{4}$})=0.
$$
Since $t^2+\mbox{$\frac{1}{4}$}>0$, this certainly cannot happen in case $\al=0$. So assume $\al\neq 0$.
Multiply both sides by $-\overline{\alpha}$ to arrive at
$$
|\alpha|^2i (t+s)+(|\alpha|^2+\overline{\alpha})(t^2+\mbox{$\frac{1}{4}$})=0.
$$
Separate the real and imaginary part to arrive at
\[
(|\alpha|^2+\re(\alpha))(t^2+\mbox{$\frac{1}{4}$})+ i(|\al|^2(t+s)-(t^2+\mbox{$\frac{1}{4}$})\im(\al))=0.
\]
Thus
\[
(|\alpha|^2+\re(\alpha))(t^2+\mbox{$\frac{1}{4}$})=0
\ands
|\al|^2(t+s)=(t^2+\mbox{$\frac{1}{4}$})\im(\al).
\]
Since $t^2+\mbox{$\frac{1}{4}$} >0$, the first identity yields $|\alpha|^2+\re(\alpha)=0$, which happens precisely when $\al\in\BT(-\half,\half)$. Thus there cannot be an intersection when $\al\notin\BT(-\half,\half)$. On the other hand, for $\al\in\BT(-\half,\half)$ the first identity always holds, while there always exists an $s\in\BR$ that satisfies the second equation. Thus, in that case, for any $t\in\BR$, the point on $\om(\BT)$ parameterized by $t$ lies on the line, and hence $\om(\BT)$ is contained in the line.
We conclude by showing that $\om(\BT)\subset \si_{\tu{p}}(T_\om)$ when $|\al +\half|>\half$ and that $\om(\BT)\subset \si_{\tu{c}}(T_\om)$ when $|\al +\half|<\half$. Recall that the two cases correspond to $|\al|^2+\re(\al)>0$ and $|\al|^2+\re(\al)<0$, respectively. Since $\om(\BT)$ does not intersect the line in these cases, it lies entirely in one of the two open half planes determined by the line, so it suffices to check a single point. We take the point on the parabola parameterized by $t=0$, i.e., $\la=\frac{1}{4}(\al+1)-\half=\frac{1}{4}(\al-1)$. Then $\la-\al=-\frac{1}{4}(3\al+1)$. So
\[
|\la-\al|^2=\mbox{$\frac{1}{16}$}(9|\al|^2+6\re(\al)+1) \ands
|\la|^2=\mbox{$\frac{1}{16}$}(|\al|^2-2\re(\al)+1).
\]
It follows that $|\la-\al|>|\la|$ if and only if
\[
\mbox{$\frac{1}{16}$}(9|\al|^2+6\re(\al)+1)> \mbox{$\frac{1}{16}$}(|\al|^2-2\re(\al)+1),
\]
or equivalently,
\[
8(|\al|^2+\re(\al))>0.
\]
This proves our claim for the case $|\al+\half|>\half$. The other claim follows by reversing the inequalities above.
Figure 5 presents some illustrations of the possible situations.
\begin{figure}
\includegraphics[width=12cm]{figure_five}
\caption{Spectrum of $T_\omega$, where $\om(z)=\frac{z+\alpha}{(z-1)^2}$, for some values of
$\alpha$: $\alpha = 1$ and $\alpha=0$ (top row, left and right), $\alpha=1/2$ and $\alpha=-2$ (middle row, left and right), $\alpha =-\frac{1}{2}+\frac{1}{4}i$ and $\alpha=-2+i$
(bottom row).}
\end{figure}
\hfill$\Box$
\end{example}
The case $k=2$ can be dealt with using the same techniques, and very similar results are obtained in that case.
The next examples deal with other cases of $\om_k$, now with $k>2$.
\begin{example}\label{E:spectrum4d}
Let $\om = \frac{z^3 + \alpha}{(z-1)^2}$. Then
{\small
\[
\om(z) = \frac{z^3 + \alpha}{(z-1)^2} =
(z-1) + 3 + \frac{3}{z-1} + \frac{1+\alpha}{(z-1)^2}.
\]
}
For $z\in\BT$, $\frac{1}{z-1}$ has the form $-\frac{1}{2} + ti$, $t\in\BR$, and so $\om(\BT)$ has the form
\[
\om(\BT)=\left\{
\frac{1}{-\frac{1}{2} + ti} + 3 + 3(-\frac{1}{2} + ti) + (1+\alpha)\left (- \frac{1}{2} + ti\right)^2 \mid t\in\BR\right\}.
\]
Also $\lambda - \om(z) = \frac{\lambda(z-1)^2 - z^3 - \alpha}{(z-1)^2}$ and so for invertibility we need the polynomial $p_{\lambda,\alpha}(z) = \lambda(z-1)^2 - z^3 - \alpha$ to have exactly two roots in $\BD$. Since this is a polynomial of degree $3$ the number of roots inside $\BD$ can be zero, one, two or three, and the index of $\lambda-T_\om$ correspondingly can be two, one, zero or minus one. Examples are given in Figure 6.
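For readers who wish to reproduce such pictures, the index computation reduces to root counting. The following is a minimal numerical sketch (the function name is ours, and we assume $\la-\om$ has no zero on $\BT$ itself).
\begin{verbatim}
import numpy as np

def fredholm_index(lam, alpha):
    """Index of lam - T_omega for omega(z) = (z^3 + alpha)/(z - 1)^2,
    computed as 2 minus the number of roots of
    p(z) = lam * (z - 1)^2 - z^3 - alpha in the open unit disc."""
    # p(z) = -z^3 + lam z^2 - 2 lam z + (lam - alpha)
    roots = np.roots([-1.0, lam, -2.0 * lam, lam - alpha])
    return 2 - int(np.sum(np.abs(roots) < 1))
\end{verbatim}
Evaluating this on a grid of $\la$ values should reproduce the colored index regions in Figure 6.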
\bigskip
\begin{figure}
\includegraphics[width=12cm]{figure_seven}
\caption{Spectrum of $T_\om$ where $\om(z)=\frac{z^3+\alpha}{(z-1)^2}$ for several values of $\alpha$, with $\alpha$ being (left to right and top to bottom) respectively, $-2, -1.05, -0.95, 0.3, 0.7, 1, 1.3, 2$.
}
\end{figure}
\end{example}
\begin{example}\label{E:spectrum5d}
To get some idea of possible other configurations we present some examples with other values of $k$.
For $\om (z)= \frac{z^4 }{(z-1)^2}$ (so $k=4$ and $\alpha =0$) the essential spectrum of $T_\om$ is the curve in Figure 7, the white region is the resolvent set, and color coding for the Fredholm index is as earlier in the paper. For $\om (z)= \frac{z^6 + 1.7}{(z-1)^2}$ (so $k=6$ and $\alpha =1.7$) see Figure 8, and as a final example Figure 9 presents the essential spectrum and spectrum for
$\om(z)=\frac{z^7+1.1}{(z-1)^2}$ and $\om(z)=\frac{z^7+0.8}{(z-1)^2}$. In the latter figure
color coding is as follows: the Fredholm index is $-3$ in the yellow region, $-4$ in the green region and $-5$ in the black region.
\begin{figure}
\includegraphics[height=4cm]{figure_eight}
\caption{The spectrum of $T_\om$, with $k=4$ and $\alpha=0$.}
\end{figure}
\begin{figure}
\includegraphics[height=4cm]{figure_nine}
\caption{The spectrum of $T_\om$ with $k=6$ and $\alpha=1.7$.}
\end{figure}
\begin{figure}
\includegraphics[height=4cm]{figure_ten}
\includegraphics[height=4cm]{figure_eleven}
\caption{The spectrum of $T_\om$ for $k=7$ and $\alpha=1.1$ (left) and $k=7$, $\alpha=0.8$ (right)}
\end{figure}
\end{example}
\paragraph{\bf Acknowledgement}
The present work is based on research supported in part by the National Research Foundation of South Africa. Any opinion, finding and conclusion or recommendation expressed in this material is that of the authors and the NRF does not accept any liability in this regard.
\section{Introduction}
\label{section:introduction}
Graphical models are widely used for real-world problems in a broad range of fields, including social science, economics, genetics, and computational neuroscience \citep{newman2002random,luscombe2004genomic, rubinov2010complex}. Scientists and practitioners aim to understand the underlying network structure behind large-scale datasets. For a high-dimensional random vector $\bX =(\bX_1, \cdots, \bX_d) \in \RR^d$, we let $\cG = (\cV, \cE)$ be an undirected graph, which encodes the conditional dependence structure among $\bX$. Specifically, each component of $\bX$ corresponds to some vertex in $\cV=\{1,2\cdots, d\}$, and $(j,k) \notin \cE$ if and only if $\bX_j$ and $\bX_k$ are conditionally independent given the rest of variables. Many existing works in the literature seek to learn the structure of $\cG$ via estimating the weight matrix $\bTheta$. For example, \citet{meinshausen2006high, yuan2007model, friedman2008sparse, rothman2008sparse, peng2009partial, lam2009sparsistency, ravikumar2011high, cai2011constrained, shen2012likelihood} focus on estimating the precision matrix in a Gaussian graphical model. Further, there is also a line of work developing methodology and theory to assess the uncertainty of edge estimation, i.e., constructing hypothesis tests and confidence intervals on the network edges, see \citet{cai2013optimal, gu2015local, ren2015asymptotic, cai2016inference, jankova2017honest, yang2018semiparametric, feng2019high, ding2020estimation}. Recently, simultaneously testing multiple hypotheses on edges of the graphical models has received increasing attention \citep{liu2013ggmfdr, cai2013two, xia2015testing, xia2018multiple,li2019ggm, eisenach2020high}.
Most of the aforementioned works formulate the testing problems based on continuous parameters and local properties. For example, \citet{liu2013ggmfdr} proposes a method to select edges in Gaussian graphical models with asymptotic FDR control guarantees. Testing the existence of edges concerns the local structure of the graph. Under certain modeling assumptions, its null hypothesis can be translated into a single point in the continuous parameter space, for example, $\bTheta_{jk}=0$ where $\bTheta$ is the precision matrix or the general weight matrix. However, for many scientific questions involving network structures, we need to detect and infer discrete and combinatorial signals in the networks, a task that does not follow from single edge testing. For example, in the study of social networks, it is interesting to discover active and impactful users, usually called ``hub users,'' as they are connected to many other nodes in the social network \citep{ilyas2011distributed,lee2019discovering}. In gene co-expression network analysis, identifying central regulators/hub genes \citep{yuan2017co,liu2019bioinformatics,liu2019identification} is known to be extremely useful for the study of progression and prognosis of certain cancers and can inform future treatment. In neuroscience, researchers are interested in identifying the cerebral areas which are intensively connected to other regions \citep{shaw2008neurodevelopmental,van2013network,power2013evidence} during certain cognitive processes. The discovery of such central/hub areas can provide scientists with a better understanding of the mechanisms of human cognition.
Motivated by these applications in various areas, in this paper we consider the hub node selection problem for network models. Specifically, given a graph $\cG = (\cV, \cE)$, where $\cV$ is the vertex set and $\cE \subseteq \cV \times \cV$ is the edge set, we consider multiple hypotheses on whether the degree of some node $j \in \cV$ exceeds a given threshold $k_\tau$:
\begin{equation}\nonumber
H_{0j}: \text{degree of node } j < k_{\tau} \text{ v.s. } H_{1j}: \text{degree of node } j \ge k_{\tau}.
\end{equation}
Throughout the paper, these nodes with large degrees will be called hub nodes. For each $j\in [d]$, let $\psi_j = 1$ if $H_{0j}$ is rejected and $\psi_j = 0$ otherwise. When selecting hub nodes, we would like to control the false discovery rate, as defined below:
\[
{\rm FDR} = \EE{\frac{\sum_{j \in \mathcal{H}_0} \psi_j}{\max\big\{\sum_{j=1}^d \psi_j, 1\big\}}},
\]
where $\mathcal{H}_0 = \{j \mid \text{degree of node } j < k_{\tau} \}$. Remark that the hypotheses $H_{0j}, j \in [d]$, are not based on continuous parameters. They instead involve the degrees of the nodes, which are intrinsically discrete/combinatorial functionals. To the best of our knowledge, there is no existing literature studying such combinatorial variable selection problems. The most relevant work turns out to be \citet{lu2017adaptive}, which proposes a general framework for inference about graph invariants/combinatorial quantities on undirected graphical models. However, they study single hypothesis testing and have to decide which subgraph to test before running the procedure.
The combinatorial variable selection problem brings many new challenges. First, most existing works focus on testing continuous parameters \citep{liu2013ggmfdr, javanmard2013nearly,javanmard2014confidence,javanmard2014hypothesis,belloni2014inference,van2014asymptotically,xia2015testing, xia2018multiple, javanmard2019false, sur2019modern,zhao2020asymptotic}. For discrete functionals, it is more difficult to construct appropriate test statistics and estimate their quantiles accurately, especially in high dimensions.
Second, many multiple testing procedures rely on an independence assumption (or certain dependence assumptions) on the null p-values \citep{benjamini1995controlling,benjamini2001control,benjamini2010discovering}. However, the single hypothesis here is about the global property of the graph, which means that any reasonable test statistic has to involve the whole graph. Therefore, complicated dependence structures exist inevitably, which presents another layer of difficulty for controlling the false discoveries. Now we summarize the motivating question for this paper: how to develop a combinatorial selection procedure to discover nodes with large degrees on a graph with FDR control guarantees?
This paper introduces the StarTrek filter to select hub nodes. The filter is based on the maximum statistics, whose quantiles are approximated by the Gaussian multiplier bootstrap procedure.
Briefly speaking, the Gaussian multiplier bootstrap procedure estimates the distribution of a given maximum statistic of general random vectors with unknown covariance matrices by the distribution of the maximum of a sum of conditionally Gaussian random vectors. The validity of high dimensional testing procedures, such as family-wise error rate (FWER) control, relies on non-asymptotic bounds on the Kolmogorov distance between the true distribution of the maximum statistic and its Gaussian multiplier bootstrap approximation, established in \citet{chernozhukov2013gaussian}. However, in order to control the FDR in the context of combinatorial variable selection, a more refined characterization of the quantile approximation errors is required. Specifically, we need the so-called Cram\'er-type comparison bounds
quantifying the accuracy of the p-values in order to control the FDR in simultaneous testing procedures \citep{chang2016cramer}. In our context, consider two centered Gaussian random vectors $U,V\in \RR^{d}$ with different covariance matrices $\bSigma^U$, $\bSigma^V$ and denote the $\ell_{\infty}$ norms of $U,V$ by $\maxnorm{U},\maxnorm{V}$ respectively; then the Cram\'er-type comparison bounds aim to control the relative error $\left|\frac{\mathbb{P}(\maxnorm{U} > t)}{\mathbb{P}(\maxnorm{V} > t)}-1\right|$ over a certain range of $t$. Compared to the Kolmogorov distance $\sup_{t\in \RR}\left|{\mathbb{P}(\maxnorm{U} > t)}-{\mathbb{P}(\maxnorm{V} > t)}\right|$ \citep{chernozhukov2015comparison}, the Cram\'er-type comparison bound controls the relative error between the two tail distribution functions, which is necessary to guarantee FDR control. Specifically, we show in this paper a novel Cram\'er-type Gaussian comparison bound
\begin{equation}\label{eq:intro_ccb_max}
\sup_{0\le t \le C_0\sqrt{\log d}}\left|\frac{\mathbb{P}(\maxnorm{U} > t)}{\mathbb{P}(\maxnorm{V} > t)}-1\right|= O\rbr{ \min\Big\{(\log d)^{5/2}\Delta_{\infty}^{1/2}, \frac{\Delta_{0} \log d }{\fp}\Big\}},
\end{equation}
for some constant $C_0>0$, where $\Delta_{\infty}:= ||\bSigma^U-\bSigma^V||_{\max}$ is the entrywise maximum norm difference between the two covariance matrices, $\Delta_{0}:= ||\bSigma^U-\bSigma^V||_{0}$ with $\nbr{\cdot}_{0}$ being the entrywise $\ell_0$-norm of the matrix, and $\fp$ is the number of connected subgraphs in the graph whose edge set is $\cE = \{(j,k): \bSigma^U_{jk}\neq 0 \text{ or } \bSigma^V_{jk}\neq 0 \}$. The comparison bound in \eqref{eq:intro_ccb_max} characterizes the relative errors between Gaussian maxima via two types of rates: the $\ell_\infty$-norm rate in $\Delta_{\infty}$ and the $\ell_0$-norm rate in $\Delta_{0}$. This yields a new insight: the Cram\'{e}r-type bound between two Gaussian maxima is small as long as either the covariance matrices are uniformly close or they differ in only a sparse set of entries. As far as we know, the second type of rate in \eqref{eq:intro_ccb_max} has not been developed even in Kolmogorov distance results for high dimensional Gaussian maxima. In the study of FDR control, we need both types of rates: the $\Delta_\infty$ rate is used to show that the Gaussian multiplier bootstrap procedure accurately approximates the maximum statistic quantiles, and the $\Delta_0$ rate is used to quantify the complicated dependence structure of the p-values for the single tests on the degrees of graph nodes. In order to prove the Cram\'{e}r-type comparison bound in \eqref{eq:intro_ccb_max}, we develop two novel theoretical techniques to prove the two types of rates separately. For the $\Delta_\infty$ rate, we reformulate Slepian's interpolation \citep{slepian1962one} into an ordinary differential inequality such that the relative error can be controlled via Gr{\"o}nwall's inequality \citep{gronwall1919note}.
To control the $\Delta_0$ rate, the anti-concentration inequality for Gaussian maxima developed in \cite{chernozhukov2015comparison} is no longer sufficient, so we establish a new type of anti-concentration inequality for the derivatives of the soft-max of high dimensional Gaussian vectors. The existing works on Cram\'{e}r-type comparison bounds, such as \citet{liu2010cramer,liu2014phase,chang2016cramer}, do not cover high dimensional maximum statistics. Therefore, their techniques cannot be directly extended to our case. To the best of our knowledge, our paper is the first to prove Cram\'er-type Gaussian comparison bounds such as \eqref{eq:intro_ccb_max} for high dimensional Gaussian maxima.
In summary, our paper makes the following major contributions. First, we develop a novel StarTrek filter to select combinatorial statistical signals, namely hub nodes, with FDR control. This procedure involves maximum statistics and the Gaussian multiplier bootstrap for quantile estimation.
Second, in theory, the proposed method is shown to be valid for many different models with network structures. In this paper, we provide two examples: the Gaussian graphical model and the bipartite network in multitask linear models. Third, we prove a new Cram\'er-type Gaussian comparison bound with two types of rates, the maximum norm difference and the $\ell_0$ norm difference. These results are quite generic and are of independent interest in probability theory.
\subsection{Related work}
Canonical approaches to FDR control and multiple testing \citep{benjamini1995controlling,benjamini2001control,benjamini2010discovering} require that valid p-values are available, and they only allow for certain forms of dependence between these p-values. However, obtaining asymptotic p-values with sufficient accuracy is generally non-trivial for high dimensional hypothesis testing problems concerning continuous parameters \citep{javanmard2013nearly,javanmard2014confidence,javanmard2014hypothesis,belloni2014inference,van2014asymptotically,sur2019modern,zhao2020asymptotic}, let alone discrete/combinatorial functionals.
Recently, there is a line of work conducting variable selection without needing to act on a set of valid p-values, including \citet{barber2015controlling,barber2019knockoff,panning2019knockoff,xing2019controlling,dai2020false,dai2020scale}. These approaches take advantage of the symmetry of the null test statistics and establish FDR control guarantee. As their single hypothesis is often formulated as conditional independence testing, it is challenging to apply those techniques to select discrete signals for the problem studied in this paper.
Another line of work develops multiple testing procedures based on asymptotic p-values for specific high dimensional models \citep{liu2013ggmfdr,liu2014hypothesis,javanmard2019false,xia2015testing,xia2018multiple,liu2020integrative}. Among them, \citet{liu2013ggmfdr} studies the edge selection problem on Gaussian graphical models, which turns out to be the most relevant work to our paper. However, their single hypothesis is about the local property of the graph. Our problem of discovering nodes with large degrees concerns the global property of the whole network, therefore requiring far more work.
There exists some recent work on inferring combinatorial functionals. For example, the method proposed in \citet{ke2020estimation} provides a confidence interval for the number of spiked eigenvalues in a covariance matrix. \citet{jin2020estimating} focuses on estimating the number of communities in a network and yields confidence lower bounds. \citet{neykov2019combinatorial,lu2017adaptive} propose a general framework for conducting inference on graph invariants/combinatorial quantities, such as the maximum degree, the negative number of connected subgraphs, and the size of the longest chain of a given graph. \citet{shen2020combinatorial} develops methods for testing general community combinatorial properties of the stochastic block model. Regarding the hypothesis testing problem, all these works only deal with a single hypothesis and establish asymptotic type-I error rate control. Simultaneously testing such combinatorial hypotheses is also very interesting and arises naturally in many practical problems.
\subsection{Outline}
In Section \ref{sec:method}, we set up the general testing framework and introduce the StarTrek filter for selecting hub nodes. In Section \ref{sec:cramer_theory}, we present our core probabilistic tools: Cram\'er-type Gaussian comparison bounds in terms of maximum norm difference and $\ell_0$ norm difference. To offer a relatively simpler illustration of our generic theoretical results, we first consider the hub selection problem on a bipartite network (multitask regression with linear models). Specifically, the input of the general StarTrek filter is chosen to be the estimators and quantile estimates described in Section \ref{sec:bipartite_selection}. Applying the probabilistic results under this model, we establish FDR control guarantees under certain conditions. Then we move to the Gaussian graphical model in Section \ref{sec:hub_selection}. In Section \ref{sec:simul}, we demonstrate StarTrek's performance through empirical simulations and a real data application.
\subsection{Notations}
Let $\phi(x),\Phi(x)$ be the probability density function (PDF) and the cumulative distribution function (CDF) respectively of the standard Gaussian distribution and denote $\bar{\Phi}(x) = 1 - \Phi(x)$. Let $\mathbf{1}_{d}$ be the vector of ones of dimension $d$. We use $\Indrbr{\cdot}$ to denote the indicator function of a set and $|\cdot|$ to denote the cardinality of a set. For two sets $A$ and $B$, denote their symmetric difference by $A \ominus B$, i.e., $A \ominus B = (A\setminus B) \cup (B\setminus A)$; let $A \times B$ be the Cartesian product. For two positive sequences $\{x_n\}_{n=1}^{\infty}$ and $\{y_n\}_{n=1}^{\infty}$, we say $x_n = O\rbr{y_n}$ if $x_n\le C y_n$ holds for any $n$ with some large enough $C>0$. And we say $x_n = o\rbr{y_n}$ if $x_n/y_n \rightarrow 0$ as $n\rightarrow \infty$. For a sequence of random variables $\{X_n\}_{n=1}^\infty$ and a scalar $a$, we say $X_n \le a + o_{\mathbb{P}}(1)$ if for all $\epsilon > 0$, $\lim_{n \rightarrow \infty} \PP{ X_n - a > \epsilon } = 0$.
Let $[d]$ denote the set $\{1,\dots,d\}$. The $\ell_{\infty}$ norm and the $\ell_{1}$ norm on $\RR^d$ are denoted by $\maxnorm{\cdot}$ and $\norm{\cdot}_1$ respectively. For a random vector $X$, let $\maxnorm{X}$ be its $\ell_{\infty}$ norm. For a matrix $\Ab \in \RR^{d_1\times d_2}$, we denote its minimal and maximal eigenvalues by $\lambda_{\min}(\Ab), \lambda_{\max}(\Ab)$ respectively, the elementwise max norm by $\nbr{\Ab}_{\max} = \max_{i\in [d_1],j\in [d_2]}|\Ab_{ij}|$ and the elementwise $\ell_0$ norm by $\nbr{\Ab}_{0} = \sum_{i\in [d_1],j\in [d_2]}\Indrbr{\Ab_{ij} \ne 0}$. Throughout this paper, $C, C',C'', C_0, C_1, C_2,\dots$ are used as generic constants whose values may vary across different places.
\section{Methodology}
\label{sec:method}
Before introducing our method, we set up the problem with more details. Specifically, we consider a graph $\cG = (\cV_1, \cV_2, \cE)$ with the node sets $\cV_1,\cV_2$ and the edge set $\cE$. Let $d_1=|\cV_1|$, $d_2=|\cV_2|$ and denote its weight matrix by $\bTheta \in \RR^{d_1\times d_2}$. In the undirected graph where $\cV_1 = \cV_2:=\cV$, $\bTheta$ is a square matrix and its element $\bTheta_{jk}$ is nonzero when there is an edge between node $j$ and node $k$, zero when there is no edge.
In a bipartite graph where $\cV_1 \ne \cV_2$, elements of $\bTheta$ describe the existence of an edge between node $j$ in $\cV_{1}$ and node $k$ in $\cV_{2}$. Without loss of generality, we focus on one of the node sets and denote it by $\cV$ with $|\cV|:=d$. We would like to select those nodes in $\cV$ whose degree exceeds a certain threshold $k_{\tau}$. The selection problem is equivalent to simultaneously testing $d$ hypotheses:
\begin{equation} \label{eq:problem_setup}
H_{0j}: \text{degree of node } j < k_{\tau} \text{ v.s. } H_{1j}: \text{degree of node } j \ge k_{\tau},
\end{equation}
for $j \in [d]$. Let $\psi_j = 1$ if $H_{0j}$ is rejected and $\psi_j = 0$ otherwise, then for some multiple testing procedure with output $\{\psi_j\}_{j\in[d]}$, the false discovery proportion (FDP) and FDR can be defined as below:
\[
{\rm FDP} = \frac{\sum_{j \in \mathcal{H}_0}^d \psi_j}{\maxof{1}{\sum_{j=1}^d \psi_j} },\quad {\rm FDR} := \EEE[{\rm FDP} ],
\]
where $\mathcal{H}_0 = \{j \mid \text{degree of node } j < k_{\tau} \}$. We aim to propose a multiple testing procedure such that the FDP or FDR can be controlled at a given level $0 < q < 1$.
We illustrate the above general setup in two specific examples. In multitask regression with linear models, we are working with the bipartite graph case, then the weight matrix $\bTheta$ corresponds to the parameter matrix whose row represents the linear coefficients for one given response variable. Given a threshold $k_{\tau}$, we want to select those rows (response variables) with $\ell_0$ norm being at least $k_{\tau}$. In the context of Gaussian graphical models where $\cV_1 = \cV_2$, $\bTheta$ represents the precision matrix, and we want to select those hub nodes i.e., whose degree is larger than or equal to $k_{\tau}$.
\subsection{StarTrek filter}
\label{sec:startrek}
Letting $\bTheta_{j}$ be the $j$-th row of $\bTheta$ and $\bTheta_{j,-j}$ be the vector $\bTheta_{j}$ excluding its $j$-th element, we formulate the testing problem for each single node as below,
\[
H_{0j}: \|\bTheta_{j,-j}\|_0 < k_{\tau} \text{ v.s. } H_{1j}: \|\bTheta_{j,-j}\|_0 \ge k_{\tau}.
\]
To test the above hypothesis, we need some estimator of the weight matrix $\bTheta$.
In the Gaussian graphical model, it is natural to use an estimator of the precision matrix. In the bipartite graph (multiple response model), an estimated parameter matrix suffices. Denoting this generic estimator by $\tilde{\bTheta}$ (without causing confusion in notation), the maximum test statistic over a given subset $E$ of $\cV \times \cV$ is
$$
T_{E}:= \max_{(j,k)\in E}\sqrt{n}\abr{\tilde{\bTheta}_{jk} }
$$
and its quantile is defined as
$
{c} (\alpha,E) = \inf \left\{ t\in \RR \; | \; \PPP \left( T_E \le t \right) \ge 1-\alpha \right\}
$, which is often unknown. Assuming it can be estimated by $\hat{c} (\alpha,E)$ through some procedure such as the Gaussian multiplier bootstrap, a generic method called the skip-down procedure can be used, which was originally proposed in \citet{lu2017adaptive} for testing a family of monotone graph invariants. When applied to the specific degree testing problem, it leads to the following algorithm.
\begin{algorithm}[htp]
\caption{Skip-down Method in \citet{lu2017adaptive} (for testing the degree of node $j$)}
\begin{algorithmic}\label{algo:skipdown}
\STATE \textbf{Input:} $\{ \tilde{\bTheta}_e \}_{e \in \cV \times \cV}$, degree threshold $k_\tau$, significance level $\alpha$.
\STATE Initialize $t = 0, E_0 = \{(j,k) : k \in [d], k \neq j\}$.
\REPEAT
\STATE $t \gets t+1$;
\STATE Select the rejected edges $\cR \gets \{(j,k) \in E_{t-1} \mid \sqrt{n}| \tilde{\bTheta}_{jk} | > \hat{c} (\alpha,E_{t-1}) \}$;
\STATE $E_t \gets E_{t-1} \backslash \cR$;
\UNTIL{$|E_t^{c}| \ge k_\tau$ or $\cR = \emptyset$}
\STATE \textbf{Output:} $\psi_{j,\alpha} = 1$ if $|E_t^c| \ge k_\tau$ and $\psi_{j,\alpha} = 0$ otherwise.
\end{algorithmic}
\end{algorithm}\\
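A minimal Python sketch of this procedure is given below; here \texttt{stat} and \texttt{chat} are hypothetical helpers supplying $\sqrt{n}|\tilde{\bTheta}_e|$ for each edge $e$ and the quantile estimate $\hat c(\alpha,E)$, respectively.
\begin{verbatim}
def skip_down(stat, chat, j, d, k_tau, alpha):
    """Sketch of Algorithm 1: returns psi_{j, alpha} for node j.
    stat[(j, k)] = sqrt(n) * |tilde Theta_{jk}|; chat(alpha, E) is an
    estimate of the (1 - alpha)-quantile of the maximum statistic over E."""
    E = {(j, k) for k in range(d) if k != j}
    n_rejected = 0
    while True:
        c = chat(alpha, E)
        R = {e for e in E if stat[e] > c}   # rejected edges in this round
        E -= R
        n_rejected += len(R)
        if n_rejected >= k_tau or not R:
            break
    return int(n_rejected >= k_tau)
\end{verbatim}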
To conduct node selection over the whole graph, we need to determine an appropriate threshold $\hat{\alpha}$ and then reject $H_{0j}$ if $\psi_{j,\hat{\alpha}} =1$. A desirable choice of $\hat{\alpha}$ should discover as many hub nodes as possible while keeping the FDR controlled under the nominal level $q$. For example, if the BHq procedure is considered, $\hat{\alpha}$ can be defined as follows:
\begin{equation}\label{eq:BHq_alpha}
\hat{\alpha} = \sup \left\{ \alpha\in (0,1) : \frac{ \alpha d }{\maxof{1}{\sum_{j\in [d]}{ \psi_{j,\alpha}}} } \le q \right\}.
\end{equation}
Since the above range of $\alpha$ is the whole interval $(0,1)$, an exhaustive search would be computationally expensive: for each candidate $\alpha$, we would have to recompute the quantiles $\hat{c} (\alpha,E)$ for many sets $E$.
We overcome this computational difficulty and propose an efficient procedure called the
StarTrek filter, which is presented in Algorithm \ref{algo:startrek}.
\begin{algorithm}[htp]
\caption{StarTrek Filter}
\begin{algorithmic}\label{algo:startrek}
\STATE \textbf{Input:} $\{\tilde{\bTheta}_e \}_{e \in \cV \times \cV}$, nominal FDR level $q$.
\FOR {$j \in [d]$}
\STATE We order the elements in $\{|\tilde{\bTheta}_{j\ell}|: \ell \neq j\}$ as
$
|\tilde{\bTheta}_{j, (1)}| \ge |\tilde{\bTheta}_{j, (2)}| \ge \ldots \ge |\tilde{\bTheta}_{j, (d-1)}|,
$ where $|\tilde{\bTheta}_{j, (\ell)}|$ is the $\ell$th largest entry.
Compute $\alpha_j = \max_{1 \le s \le k_{\tau}} \hat{c}^{-1}(\sqrt{n}|\tilde{\bTheta}_{j, (s)}|,E^{(s)}_j)$ where $E^{(s)}_j:=\{ (j,\ell):\ell \neq j, |\tilde{\bTheta}_{j\ell}| \le |\tilde{\bTheta}_{j, (s)}|\}$.
\ENDFOR
\STATE Order $\alpha_j$ as $\alpha_{(1)} \le \alpha_{(2)} \le \dots \le \alpha_{(d)} $ and set $\alpha_{(0)}=0$, let $j_{\max} = \max\{0\le j\le d:\alpha_{(j)} \le {qj}/{d}\}$.
\STATE \textbf{Output: $S =\{j: \alpha_j \le \alpha_{(j_{\max})}\}$} if $j_{\max}>0$; $S =\emptyset$ otherwise.
\end{algorithmic}
\end{algorithm}
Remark that it only involves estimating $k_{\tau}$ different quantiles of maximum statistics per node, which makes it computationally more efficient than the skip-down procedure \citep{lu2017adaptive}.
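For concreteness, a minimal Python sketch of Algorithm \ref{algo:startrek} is given below. The helper \texttt{pval\_max}$(t,j,\mathtt{cols})$, which should return an estimate of $\PPP_\xi ( T^{\cB}_{E} > t )$ for $E=\{(j,\ell):\ell\in \mathtt{cols}\}$ (i.e., the inverse quantile map $\hat c^{-1}$), is a placeholder; it can be implemented, e.g., with the Gaussian multiplier bootstrap of Section \ref{sec:quantile_accuracy} below.
\begin{verbatim}
import numpy as np

def startrek(theta_tilde, pval_max, k_tau, q, n):
    """Sketch of the StarTrek filter; returns the selected node set S."""
    d = theta_tilde.shape[0]
    alpha = np.empty(d)
    for j in range(d):
        cols = np.array([l for l in range(d) if l != j])
        order = cols[np.argsort(-np.abs(theta_tilde[j, cols]))]
        pv = []
        for s in range(min(k_tau, d - 1)):        # s-th largest, s = 1..k_tau
            t = np.sqrt(n) * np.abs(theta_tilde[j, order[s]])
            pv.append(pval_max(t, j, order[s:]))  # E_j^(s): entries <= s-th largest
        alpha[j] = max(pv)
    # BH step on the node-wise alpha_j
    srt = np.sort(alpha)
    hits = np.flatnonzero(srt <= q * np.arange(1, d + 1) / d)
    if hits.size == 0:
        return np.array([], dtype=int)
    return np.flatnonzero(alpha <= srt[hits[-1]])
\end{verbatim}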
\subsection{Accuracy of approximate quantiles}
\label{sec:quantile_accuracy}
Before diving into the theoretical results, we pause to give specific forms of the estimator of $\bTheta$ and of the estimated quantiles of the maximum statistic. Take the Gaussian graphical model as an example and suppose that $\bX_1,\ldots, \bX_n \stackrel{\mathrm{i.i.d.}}{\sim} N_d (0,\bSigma)$. Let $\bTheta = \bSigma^{-1}$, whose support coincides with the edge set of the graph, so that $\bTheta$ has the same elementwise $\ell_{0}$ norm as the adjacency matrix. Let $\eb_k$ denote the $k$th canonical basis vector in $\RR^d$; we consider the following one-step estimator of $\bTheta_{jk}$,
\begin{equation}
\label{eq:one_step}
\hat{\bTheta}^\debias_{jk} := \hat{\bTheta}_{jk} - \frac{\hat{\bTheta}_{j}^{\top} \left( \hat{\bSigma} \hat{\bTheta}_k - \eb_k\right)}{\hat{\bTheta}_j^{\top} \hat{\bSigma}_j},
\end{equation}
where $\hat{\bTheta}$ could be either the graphical Lasso (GLasso) estimator \citep{friedman2008sparse} or the CLIME estimator \citep{cai2011constrained}. Let
$\tilde{\bTheta}^\debias_{jk}:={\hat{\bTheta}^\debias_{jk}}/{\sqrt{\hat{\bTheta}^\debias_{jj}\hat{\bTheta}^\debias_{kk}}}$; this standardized version $\{\tilde{\bTheta}^\debias_{e}\}_{\cV \times \cV}$ will be the input $\{\tilde{\bTheta}_{e}\}_{\cV \times \cV}$ of Algorithm \ref{algo:startrek}.
Then the maximum test statistics (over the subset $E$) is defined as $T_{E}= \underset{(j,k)\in E}{\max} \; \sqrt{n} | \tilde{\bTheta}^\debias_{jk}|$.
To estimate its quantile, we construct the following Gaussian multiplier bootstrap
\begin{equation}\label{eq:gmb_quantile}
T^{\cB}_{E} := \underset{(j,k)\in E}{\max} \; \frac{1}{\sqrt{n~ {\hat{\bTheta}_{jj}\hat{\bTheta}_{kk}}}} \bigg|\sum_{i=1}^n {\hat{\bTheta}^{\top}_j \left( \bX_i \bX_i^{\top} \hat{\bTheta}_k - \eb_k \right)} \xi_i \bigg|,
\end{equation}
where $\xi_i \stackrel{\mathrm{i.i.d.}}{\sim} N(0,1)$, which produces
$
\hat{c} (\alpha,E) = \inf \left\{ t\in \RR : \PPP_\xi \left( T^{\cB}_{E} \le t \right) \ge 1-\alpha \right\}
$
as the quantile estimate. We also denote the standardized true precision matrix $(\bTheta_{jk}/\sqrt{\bTheta_{jj} \bTheta_{kk}})_{j,k \in [d]}$ by $\bTheta^\star$. The theoretical results for Gaussian multiplier bootstrap developed in \citet{chernozhukov2013gaussian} basically imply the above quantile estimates are accurate in the following sense:
\begin{lemma}\label{lem:quantile}
Suppose $\bTheta \in \cU(M,s, r_0)$ and $(\log (dn))^7/n + s^2 (\log (dn))^{4}/ {n} = o(1)$; then for any edge set $E \subseteq \cV \times \cV$, we have
\begin{equation}\label{eq:quantile-valid}
\lim_{(n,d)\rightarrow \infty} \sup_{\bTheta \in \cU(M,s, r_0)} \sup_{\alpha \in (0,1)}
\left|\PPP
\left( \max_{e \in E} \sqrt{n} |\tilde{\bTheta}^\debias_{e}-\bTheta^\star_e|> \hat{c}(\alpha, E)
\right) - \alpha
\right|=0.
\end{equation}
where $\tilde{\bTheta}^\debias_e$ is the standardized version of the one-step estimator \eqref{eq:one_step}.
\end{lemma}
Note that $\cU(M,s, r_0)$ denotes the parameter space of precision matrices and is defined as below:
\begin{equation}\nonumber
\begin{aligned}
\cU(M,s, r_0) &= \Big\{\bTheta \in \RR^{d \times d} \,\big|\, \lambda_{\min}(\bTheta) \ge 1/r_0, \lambda_{\max}(\bTheta) \le r_0, \max_{j \in [d]} \|\bTheta_{j}\|_{0} \le s, \|\bTheta \|_1 \le M \Big\}.
\end{aligned}
\end{equation}
The proof of Lemma \ref{lem:quantile} can be found in Appendix \ref{app:pf:lem:quantile}. However, Lemma \ref{lem:quantile} is not sufficient for our multiple testing problem. Generally speaking, the probabilistic bounds in \citet{chernozhukov2013gaussian} are in terms of Kolmogorov distance, which only provides a uniform characterization for the deviation behaviors. Their results can be used to establish FWER control for global testing problems based on the maximum test statistics.
However, in order to establish FDR control, we have to show that the estimate of the number of false discoveries is sufficiently accurate in the following sense, i.e., uniformly over a certain range of $\alpha$,
$$
\frac{ \alpha d_{0}}{ \sum_{j\in \mathcal{H}_0} \psi_{j,\alpha} } \rightarrow 1, \quad \text{ in probability}
$$
where $\mathcal{H}_0 = \{j: \|\bTheta_{j,-j}\|_0 < k_{\tau}\}$. The above result is different from the one needed for FWER control: $\EE{{\psi_{j,\alpha}}} = \alpha + o(1),j\in \mathcal{H}_0$. In the context of our node selection problem, it can be reduced to the following,
$$
\left |
\frac{\sum_{j\in \mathcal{H}_{0}}\Indrbr{ \max_{e \in E } \sqrt{n}|\tilde{\bTheta}^\debias_e -\bTheta^\star_e| \ge \hat c(\alpha,E) } }{d_{0}\alpha} - 1 \right |\rightarrow 0 ~~~ \text{in probability}
$$
uniformly over a certain range of $\alpha$ for some subset $E$. The above ratio is closely related to the ratio appearing in Cram\'{e}r-type moderate deviation results \citep{liu2010cramer,liu2014phase,liu2013ggmfdr}. To this end, we establish Cram\'{e}r-type deviation bounds for the Gaussian multiplier bootstrap procedure. This type of result is built on two types of Cram\'{e}r-type Gaussian comparison bounds, which are presented in Section \ref{sec:cramer_theory}.
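Before moving on, we record a minimal sketch of the multiplier bootstrap tail probability behind \eqref{eq:gmb_quantile}; this can serve as the \texttt{pval\_max} helper assumed in the sketch of Algorithm \ref{algo:startrek} above. We assume here that $\hat{\bTheta}$ has been symmetrized, so that $\hat{\bTheta}_j^{\top}\eb_k = \hat{\bTheta}_{jk}$; the number of bootstrap draws $B$ and the seed are arbitrary choices.
\begin{verbatim}
import numpy as np

def boot_pval_max(X, Theta_hat, j, cols, t, B=2000, seed=0):
    """Monte Carlo estimate of P_xi(T^B_E > t) for E = {(j, l) : l in cols}.
    X: (n, d) data matrix; Theta_hat: (d, d) GLasso/CLIME estimate."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    u = X @ Theta_hat[j]                # Theta_j^T X_i for each sample i
    V = X @ Theta_hat[:, cols]          # X_i^T Theta_k for each k in cols
    scores = u[:, None] * V - Theta_hat[j, cols][None, :]
    scale = np.sqrt(n * Theta_hat[j, j] * np.diag(Theta_hat)[cols])
    xi = rng.standard_normal((B, n))    # Gaussian multipliers
    T_boot = np.max(np.abs(xi @ scores) / scale[None, :], axis=1)
    return float(np.mean(T_boot > t))
\end{verbatim}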
\section{Cram\'{e}r-type comparison bounds for Gaussian maxima}
\label{sec:cramer_theory}
In this section, we present the theoretic results on the Cram\'{e}r-type comparison bounds for Gaussian maxima.
Let $U,V\in \RR^{d}$ be two centered Gaussian random vectors with different covariance matrices $\bSigma^U=(\sigma_{jk}^U)_{1\le j,k\le d},\bSigma^V=(\sigma_{jk}^V)_{1\le j,k\le d}$.
Recall that the maximal difference of the covariance matrices is $\Delta_{\infty}:= ||\bSigma^U-\bSigma^V||_{\max}$ and the elementwise $\ell_0$ norm difference of the covariance matrices is denoted by $\Delta_{0}:= \nbr{\bSigma^U-\bSigma^V}_{0} = \sum_{j,k\in [d]}\Indrbr{\sigma^U_{jk}\ne \sigma^V_{jk}}$.
The Gaussian maxima of $U$ and $V$ are denoted as $\maxnorm{U}$ and $\maxnorm{V}$.
Now we present a Cram\'{e}r-type comparison bound (CCB) between Gaussian maxima in terms of the maximum norm difference $\Delta_{\infty}$.
\begin{theorem}[CCB with maximum norm difference]\label{thm:ccb_max}
Suppose $(\log d)^{5}\Delta_{\infty} = O(1)$, then we have
\begin{equation}\label{eq:ccb_max}
\sup_{0\le t \le C_0\sqrt{\log d}}\left|\frac{\mathbb{P}(\maxnorm{U} > t)}{\mathbb{P}(\maxnorm{V} > t)}-1\right| = O \left( (\log d)^{5/2}\Delta_{\infty}^{1/2} \right),
\end{equation}
for some constant $C_0>0$.
\end{theorem}
\begin{remark}\label{rk:thm:ccb_max}
We can actually prove a more general form (see Theorem \ref{thm:ccb_max_general} in the appendix) of the above upper bound, without the assumption on $\Delta_{\infty}$. In fact, we bound the left hand side of \eqref{eq:ccb_max} by
$
M_3(\log d)^{3/2} A(\Delta_{\infty})e^{M_3(\log d)^{3/2} A(\Delta_{\infty})},
$
where $A(\Delta_{\infty})=M_1
\log d \Delta_{\infty}^{1/2} \exp{(M_2 \log^2 d \Delta_{\infty}^{1/2})}$ with the constants $M_1, M_2$ only depending on the variance terms $\min_{1\le j\le d}\{\sigma^U_{jj},\sigma^V_{jj}\},\max_{1\le j\le d}\{\sigma^U_{jj},\sigma^V_{jj}\}$ and $M_3$ being a universal constant.
\end{remark}
When applying Theorem \ref{thm:ccb_max} to the Gaussian multiplier bootstrap, $\Delta_{\infty}$ corresponds to the maximum entrywise difference between the true covariance matrix and the empirical covariance matrix. Based on the bound on $\Delta_{\infty}$, we can show that the Cram\'{e}r-type comparison bound in \eqref{eq:ccb_max} is $O((\log d)^{3/2}n^{-1/4})$ with high probability.
The proof can be found in Appendix \ref{app:pf:thm:ccbmax}. The above result bounds the relative difference between the distribution functions of the two Gaussian maxima. Compared with bounds in terms of the Kolmogorov distance, it provides a more refined characterization when $t$ is large,
which benefits from our iterative use of the Slepian interpolation. We denote the interpolation between $U$ and $V$ as $ W(s)= \sqrt{s}U + \sqrt{1-s}V, s\in[0,1]$ and let $Q_t(s)=\mathbb{P}(||W(s)||_{\infty}>t)$. Existing results \citep{chernozhukov2013gaussian,chernozhukov2014anti} quantify the difference between $Q_t(1)$ and $Q_t(0)$ uniformly over $t\in \RR$, which leads to a bound on the Kolmogorov distance between Gaussian maxima. Our main innovation is to consider $R_t(s) = Q_t(s)/Q_t(0)-1$ and show that for any given $t$, $\cR_t: s\in[0,1] \mapsto |R_t(s)|$ is a contraction mapping with $0$ being its fixed point. Specifically, we have the following upper bound on $|R_{t}(s)|$,
$$
|R_{t}(s)|\le AB \int_{0}^{s} |R_{t}(\mu)| d\mu + AB\cdot s + A,
$$
where $AB$ and $A$ can be controlled via the bound on the maximal difference of the covariance matrices $\Delta_{\infty}$ and converge to $0$ under certain conditions. By Gr\"{o}nwall's inequality \citep{gronwall1919note}, we then derive the bound on $R_t(1)$ explicitly in terms of $A$ and $B$, which finally leads to the desired Cram\'{e}r-type comparison bound in \eqref{eq:ccb_max}.
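Concretely, with $u(s)=|R_t(s)|$, the integral inequality above has the nondecreasing forcing term $a(s)=AB\,s + A$, so Gr\"{o}nwall's inequality yields
\[
|R_t(1)| \;\le\; (AB + A)\, e^{AB},
\]
which tends to zero as soon as $A\to 0$ and $AB\to 0$; this is consistent with the explicit form of the bound recorded in Remark \ref{rk:thm:ccb_max}.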
The above theorem is a key ingredient for deriving Cram\'{e}r-type deviation results for the Gaussian multiplier bootstrap procedure. However, in certain situations, comparison bounds in terms of the maximum norm difference may not be appropriate. There exist cases where the covariance matrices of two Gaussian random vectors are not uniformly close to each other but share many identical entries. In particular, for the combinatorial variable selection problem in this paper, there exist complicated dependence structures between the maximum statistics for different nodes, since each time the degree of a single node is tested, the statistic is computed based on the whole graph. Again, this highlights the challenge of the multiple testing problem in our paper. To establish FDR control, we need to deal with the dependence between the maximum statistics of pairs of non-hub nodes. By the definition of non-hub nodes, the covariance matrix difference between each pair of the involved Gaussian vectors actually has many zero entries. We would like to take advantage of this sparsity pattern when applying the comparison bound. However, the bound in \eqref{eq:ccb_max} is not sharp when $\Delta_{\infty}$ is not negligible but $\Delta_{0}$ is small. To this end, we develop a different version of the Cram\'{e}r-type comparison bound as below.
\begin{theorem}
[CCB with elementwise $\ell_{0}$-norm difference]\label{thm:ccb_sparse_unitvar}
Assume the Gaussian random vectors $U$ and $V$ have unit variances, i.e., $\sigma^U_{jj}=\sigma^V_{jj}=1, j \in [d]$ and
there exists some $\sigma_0<1$ such that $|\sigma^V_{jk} |\le \sigma_0, |\sigma^U_{jk} | \le \sigma_0$ for any $j\ne k$.
Suppose there exists a disjoint $\fp$-partition of nodes $\cup_{\ell=1}^{\fp}\cC_\ell = [d]$ such that $\sigma^U_{jk}=\sigma^V_{jk}=0$ when $j \in \cC_{\ell}$ and $k \in \cC_{\ell'}$ for some $\ell \neq \ell'$. We have
\begin{equation}\label{eq:ccb_sparse_unitvar}
\sup_{0\le t \le C_0 \sqrt{\log d}} \left|\frac{\mathbb{P}(\maxnorm{U} > t)}{\mathbb{P}(\maxnorm{V} > t)}-1\right| \le O\left(\frac{ \Delta_{0} \log d }{\fp}\right),
\end{equation}
for some constant $C_0 > 0$.
\end{theorem}
When applying the above result to our multiple degree testing problem, specifically to the covariance of the maximum test statistics for pairs of non-hub nodes, $\Delta_{0}$ can be bounded by $k_\tau^2$, which is of constant order. In Theorem \ref{thm:ccb_sparse_unitvar}, the quantity $\fp$ represents the number of connected subgraphs shared by the covariance matrix networks of $U$ and $V$.
We refer to Theorem \ref{thm:ccb_sparse} in the appendix for a generalized definition of $\fp$ to strengthen the results in \eqref{eq:ccb_sparse_unitvar}.
The $\fp$ in the denominator of the right hand side of the Cram\'er-type comparison bound in \eqref{eq:ccb_sparse_unitvar} is necessary: even when $\Delta_0$ is small, if $\fp$ remains bounded the Cram\'er-type Gaussian comparison bound need not converge to zero. For example, consider Gaussian vectors with unit variances
$U=(X_1, X_2, Z, \ldots,Z) \in \RR^d$,
$V = (Y_1, Y_2, Z, \ldots,Z)\in \RR^d$, where $\text{corr}(X_1, X_2) = 0.9, \text{corr}(Y_1, Y_2) = 0$ and $(X_1, X_2) \independent Z$, $(Y_1, Y_2) \independent Z$. For this case, the Cram\'er-type Gaussian comparison bound
\[
\sup_{0\le t \le C_0 \sqrt{\log d}} \left|\frac{\mathbb{P}(\maxnorm{U} > t)}{\mathbb{P}(\maxnorm{V} > t)}-1\right| = \sup_{0\le t \le C_0 \sqrt{\log d}} \left|\frac{\mathbb{P}(\max\{|X_1|,|X_2|, |Z|\} > t)}{\mathbb{P}(\max\{|Y_1|,|Y_2|,|Z| \} > t)}-1\right|
\]
does not converge to zero as $d$ goes to infinity, even though the corresponding $\Delta_0$ is of constant order while $\fp=2$.
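This lack of convergence is easy to verify numerically; the following is a quick Monte Carlo sketch (the sample size, grid of $t$ values and seed are arbitrary choices of ours).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_mc = 200_000
Z = rng.standard_normal(n_mc)
L = np.linalg.cholesky(np.array([[1.0, 0.9], [0.9, 1.0]]))
X = rng.standard_normal((n_mc, 2)) @ L.T   # corr(X1, X2) = 0.9
Y = rng.standard_normal((n_mc, 2))         # independent pair (Y1, Y2)
# the repeated coordinate Z contributes a single |Z| to both maxima
maxU = np.maximum(np.abs(X).max(axis=1), np.abs(Z))
maxV = np.maximum(np.abs(Y).max(axis=1), np.abs(Z))
for t in [0.5, 1.0, 1.5]:
    ratio = (maxU > t).mean() / (maxV > t).mean()
    print(t, ratio)   # stays bounded away from 1 at each fixed t
\end{verbatim}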
Compared with Theorem \ref{thm:ccb_max}, the above theorem provides a sharper comparison bound for large $\fp$ and small $\Delta_{0}$. The two theorems together describe an interesting phase transition phenomenon: the dependence of the Cram\'{e}r-type comparison bound on $\bSigma^U-\bSigma^V$ exhibits different behavior in the regime of large $\fp$ and small $\Delta_{0}$ versus the regime of small $\Delta_{\infty}$.
The proof of Theorem \ref{thm:ccb_sparse_unitvar} can be found in Appendix \ref{app:pf:thm:ccb_sparse_unitvar}. Our main technical innovation is to establish a new type of anti-concentration bound for ``derivatives'' of Gaussian maxima. Since both the indicator function and the maximum function are discontinuous, we follow the idea of using a smooth approximation as in the proof of Theorem \ref{thm:ccb_max}; specifically, we bound the following term
\begin{equation}\label{eq:varphi_anti}
\EEE[|\partial_j \partial_k \varphi(W(s))|\cdot \Indrbr{t-\epsilon \le ||W(s)||_{\infty}\le t +\epsilon} ],
\end{equation}
where $\varphi$ is the same smooth approximation of the indicator of the $\ell_{\infty}$-norm event as before, with smoothing parameter $\beta$. Note that $\EEE[\Indrbr{t-\epsilon \le ||W(s)||_{\infty}\le t +\epsilon} ]$ is the anti-concentration bound for Gaussian maxima \citep{chernozhukov2014anti}; a non-uniform version is established in \citet{arun2018cram}. The quantity \eqref{eq:varphi_anti} can be viewed as an anti-concentration bound on the second order partial derivatives of the smooth approximation function. When deriving the comparison bound in terms of the $\ell_0$ norm difference, we have to deal with terms like \eqref{eq:varphi_anti} whenever $\sigma^U_{jk}\ne \sigma^V_{jk}$. We show that \eqref{eq:varphi_anti} can be controlled as
$$
\EEE[|\partial_j \partial_k \varphi(W(s))|\cdot \Indrbr{t-\epsilon \le ||W(s)||_{\infty}\le t +\epsilon} ] \lesssim\frac{ \PP{\maxnorm{V}> t} (\log d)^2}{\epsilon \beta \fp}.
$$
The above anti-concentration bound is non-uniform and has only a logarithmic dependence on the dimension $d$. It provides a relatively sharp characterization when $t$ is large and the Gaussian graphical model is not highly connected (i.e., when the number of connected components $\fp$ is large).
\section{Discovering hub responses in multitask regression}
\label{sec:bipartite_selection}
The theoretical results presented in Section \ref{sec:cramer_theory} will be the cornerstone for establishing FDR control of the multiple testing problem described in Section \ref{sec:method}. As seen previously, the testing problem \eqref{eq:problem_setup} is set up in a quite general way: $\bTheta$ is a weight matrix, and we would like to select rows whose $\ell_0$ norm exceeds some threshold.
This section considers the specific application to multitask/multiple response regression, which turns out to be less involved. We take advantage of it and demonstrate how to utilize the probabilistic tools in Section \ref{sec:cramer_theory}. After that, the theoretical results on FDR control for the Gaussian graphical models are presented and discussed in Section \ref{sec:hub_selection}.
In the multitask regression problem, multiple response variables are regressed on a common set of predictors. We can view this example as a bipartite graph $\cG = (\cV_1, \cV_2, \cE), |\cV_1|=d_1, |\cV_2|=d_2$, where $\cV_1$ contains the response variables and $\cV_2$ represents the common set of predictors. Each entry of the weight matrix $\bTheta$ indicates whether a given predictor is non-null for a given response variable. In the case of a parametric model, $\bTheta \in \RR^{d_1 \times d_2}$ corresponds to the parameter matrix. One might be interested in identifying shared sparsity patterns across different response variables; this can be addressed by selecting a set of predictors that are non-null for all response variables \citep{obozinski2006multi,dai2016knockoff}. That selection problem is column-wise in the sense that one selects columns of $\bTheta$, denoted by $\bTheta_{\cdot j}$, such that $\norm{\bTheta_{\cdot j}}_0 = d_1$. It is also interesting to consider the row-wise selection problem formalized in \eqref{eq:problem_setup}. Under the multitask regression setup, we would like to select response variables with at least a certain number of non-null predictors. We call such response variables hub responses throughout this section. This has practical applications in real-world problems such as gene-disease networks.
Consider the multitask regression problem with linear models: we have $n$ i.i.d. pairs of the response vector and the predictor vector, denoted by $(\bY_1,\bX_1), (\bY_2,\bX_2), \dots, (\bY_n,\bX_n)$, where
$\bY_i \in \RR^{d_1},\bX_i \in \RR^{d_2}$ satisfy the following relationship,
\begin{eqnarray}\label{eq:mul_linear_model}
\bY_i = \bTheta \bX_i + \bE_i, \text{ where } \bE_i\sim \cN(0,\Db_{d_1\times d_1} ) \text{ and }\bX_i \independent \bE_i,
\end{eqnarray}
where $\bTheta \in \RR^{d_1 \times d_2}$ is the parameter matrix and $\Db$ is a $d_1$ by $d_1$ diagonal matrix whose $j$th diagonal element $\sigma_j^2$ is the noise variance of response variable $\bY^{(j)}$. Let $\bX$ be the design matrix with
rows $\bX_1^{\top},\dots, \bX_n^{\top}$, shared by different response variables, and assume the noise variables are independent conditional on the design matrix $\bX$.
Let $s = \max_{j\in [d_1]}\norm{\bTheta_{j}}_0$ be the sparsity level of the parameter matrix $\bTheta$. We want to select rows of the parameter matrix which have at least $k_{\tau}$ nonzero entries, i.e., to select nodes with large degree among $[d_1]$ in the bipartite graph $\cG = (\cV_1, \cV_2, \cE)$.
As mentioned in Section \ref{sec:method}, some estimator of the parameter matrix is needed to conduct hypothesis testing. Debiased Lasso is widely used for parameter estimation and statistical inference in high dimensional linear models \citep{javanmard2014confidence,javanmard2014hypothesis}. For each response variable $\bY^{(j)}, j \in [d_1]$, we compute the debiased Lasso estimator, denoted by $\tilde{\bTheta}^\debias_{j}$ as
\begin{align}\label{eq:dlasso}
\tilde{\bTheta}^\debias_j = \hat \bTheta_j + \frac{1}{n}\, \Mb \bX^{\top}(\bY^{(j)} - \bX \hat \bTheta_j),
\text{ where }\hat\bTheta_j= \arg\min_{\beta\in\RR^{d_2}}
\Big\{\frac{1}{2n}\|\bY^{(j)}-\bX\beta
\|^2_2+\lambda\|\beta\|_1\Big\}\, .
\end{align}
Note the above $\Mb$ is defined as $\Mb = (m_1,\dots,m_{d_2})^{\top}$ where
\begin{align}
\label{eq:optimization}
m_i = &\argmin_m m^{\top} \hat{\bSigma} m, \quad \text{s.t. } \|\hat{\bSigma} m - e_i \|_{\infty} \le \mu\,,
\end{align}
and here $\hat{\bSigma} = (\bX^{\top} \bX)/n$.
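A minimal sketch of \eqref{eq:dlasso} for a single response is given below, assuming $\Mb$ has already been obtained from a convex solver for \eqref{eq:optimization} (not shown); note that scikit-learn's \texttt{Lasso} minimizes exactly the $\frac{1}{2n}$-scaled objective appearing in \eqref{eq:dlasso}.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def debiased_lasso_row(X, y, M, lam):
    """Debiased Lasso for one response vector y: a Lasso fit followed by
    the one-step correction with the decorrelation matrix M."""
    n = X.shape[0]
    beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    return beta_hat + M @ X.T @ (y - X @ beta_hat) / n
\end{verbatim}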
Then the debiased estimator of the parameter matrix, defined by $\tilde{\bTheta}^\debias := (\tilde{\bTheta}^\debias_{1}, \cdots, \tilde{\bTheta}^\debias_{d_1})^\top$, will be used as the input $\{\tilde{\bTheta}_e\}_{e \in \cV_1 \times \cV_2}$ of Algorithm \ref{algo:startrek}. In addition, we also need to compute the quantile of the maximum statistics. There exist many works studying the asymptotic distribution of the debiased Lasso estimator. Among them, the results in \citet{javanmard2014confidence} (when translated into our multitask regression setup) imply that, for each response variable $\bY^{(j)}, j \in [d_1]$,
\begin{equation} \label{eq:dlasso_normal}
\sqrt{n} (\tilde{\bTheta}^\debias_j - \bTheta_j) = Z + \Xi, \quad Z |\bX \sim \cN(0,\sigma_j^2 \Mb\hat{\bSigma} \Mb^{\top}),
\end{equation}
under proper assumptions. Additionally, with a natural probabilistic model of the design matrix, the bias term can be shown to satisfy $\maxnorm{\Xi} =O(\frac{s \log d_2}{\sqrt{n}}) $ with high probability. As discussed in \citet{javanmard2014confidence}, the asymptotic normality result can be used for deriving confidence intervals and statistical hypothesis tests. Since the noise variance $\sigma_j$ is unknown, the scaled Lasso is used for its estimation \citep{javanmard2014confidence,sun2012scaled}, given by the following joint optimization problem,
\begin{equation}\label{eq:scaled_lasso}
\{\hat\bTheta_j,\hat\sigma_j \} = \arg \min_{\beta \in \RR^{d_2}, \sigma > 0}\Big\{\frac{1}{2\sigma n}\|\bY^{(j)}-\bX\beta
\|^2_2+ \frac{\sigma}{2} +
\lambda\|\beta\|_1\Big\}.
\end{equation}
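The joint problem \eqref{eq:scaled_lasso} can be solved by the standard alternating scheme of \citet{sun2012scaled}: a Lasso step with penalty $\sigma\lambda$ followed by the closed-form update $\sigma = \|\bY^{(j)}-\bX\beta\|_2/\sqrt{n}$. A minimal sketch follows (the iteration count and tolerance are our choices).
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def scaled_lasso(X, y, lam, n_iter=50, tol=1e-8):
    """Alternating minimization for the scaled Lasso."""
    n = X.shape[0]
    sigma = np.std(y)                       # initial noise level
    for _ in range(n_iter):
        beta = Lasso(alpha=sigma * lam, fit_intercept=False).fit(X, y).coef_
        sigma_new = np.linalg.norm(y - X @ beta) / np.sqrt(n)
        if abs(sigma_new - sigma) < tol:
            sigma = sigma_new
            break
        sigma = sigma_new
    return beta, sigma
\end{verbatim}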
Regarding our testing problem, intuitively we can use the quantile of the Gaussian maximum of $ \cN(0,\hat \sigma_j^2 \Mb\hat{\bSigma} \Mb^{\top})$ to approximate the quantile of the maximum statistic $T_{E} =\underset{(j,k)\in E}{\max}\sqrt{n} |\tilde{\bTheta}^\debias_{jk}|$ for some given subset $E$. Specifically, let $Z_j\mid \bX, \bY^{(j)}\sim \cN(0,\hat \sigma_j^2 \Mb\hat{\bSigma} \Mb^{\top})$ where $Z_j \in \RR^{d_2}$, and consider the subset $E \subset \{j\}\times \cV_2$; we approximate the quantile of $T_E$ by the following
\begin{equation}\label{eq:dlasso_chat}
T^{\cN}_{E} := \underset{(j,k)\in E}{\max} \; | Z_{jk} |,\quad \hat{c} (\alpha,E) = \inf \left\{ t\in \RR : \PPP_{Z} \left( T^{\cN}_{E} \le t \right) \ge 1-\alpha \right\}.
\end{equation}
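A Monte Carlo sketch of the quantile computation in \eqref{eq:dlasso_chat} is as follows; the small diagonal jitter and the number of draws are our choices, added for numerical stability.
\begin{verbatim}
import numpy as np

def chat_dlasso(alpha, cols, M, Sigma_hat, sigma_j, n_draws=100_000, seed=0):
    """(1 - alpha)-quantile of max_{k in cols} |Z_k| for
    Z ~ N(0, sigma_j^2 M Sigma_hat M^T) restricted to coordinates cols."""
    rng = np.random.default_rng(seed)
    cov = sigma_j ** 2 * (M @ Sigma_hat @ M.T)[np.ix_(cols, cols)]
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(cols)))
    Z = rng.standard_normal((n_draws, len(cols))) @ L.T
    return float(np.quantile(np.abs(Z).max(axis=1), 1 - alpha))
\end{verbatim}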
Indeed, under proper scaling conditions, a result similar to \eqref{eq:quantile-valid} can be established, i.e., as ${n,d_2 \rightarrow \infty}$,
\begin{equation}\label{eq:dlasso_quantile_valid}
\sup_{\alpha \in (0,1)}
\left|\PPP
\left( \max_{(j,k) \in E} \sqrt{n} |\tilde{\bTheta}^\debias_{jk} -{\bTheta}_{jk}|> \hat{c}(\alpha, E)
\right) - \alpha
\right|\rightarrow 0.
\end{equation}
The above result is based on two ingredients: the asymptotic normality result and the control of the bias term $\Xi$. Below we list the required assumptions for those two ingredients, i.e., \eqref{eq:dlasso_normal} and $\maxnorm{\Xi} =O(\frac{s \log d_2}{\sqrt{n}})$.
\begin{assumption}[Debiased Lasso with random designs]\label{asp:dlasso}
The following assumptions are from the ones of Theorems 7 and 8 in \citet{javanmard2014confidence}.
\begin{itemize}
\item Let $\bSigma = \EE{\bX_1 \bX_1^\top}\in\RR^{d_2\times d_2}$
be such that $\sigma_{\min}(\bSigma) \ge C_{\min} > 0$, and
$\sigma_{\max}(\bSigma) \le C_{\max} < \infty$,
and $\max_{j\in [d_2]}\bSigma_{jj}\le 1$.
Assume $\bX\bSigma^{-1/2}$ have independent subgaussian rows, with
zero mean and subgaussian norm $\|\bSigma^{-1/2} \bX_i\|_{\psi_2} = \kappa$, for some
constant $ \kappa \in(0, \infty)$.
\item $\mu =a\sqrt{(\log d_2)/n}$, and $n\ge \max(\nu_0s\log (d_2/s), \nu_1\log d_2)$,
$\nu_1 = \max(1600\kappa^4, a/4)$,
and
$\lambda = \sigma\sqrt{(c^2\log
d_2)/n}$.
\end{itemize}
\end{assumption}
Remark that there may exist other ways of obtaining a consistent estimator of $\bTheta$ and sufficiently accurate quantile estimates under different assumptions. Since this is not the main focus of this paper, we will not elaborate on it. As mentioned before, the Kolmogorov-type result in \eqref{eq:dlasso_quantile_valid} can be immediately applied to the global testing problem to guarantee FWER control. However, it is not sufficient for FDR control of the multiple testing problem in this paper, and this is where the Cram\'{e}r-type comparison bounds for Gaussian maxima established in Section \ref{sec:cramer_theory} play their role. In addition, a signal strength condition is needed.
Recall that $\mathcal{H}_0 = \{j\in [d_1]: \norm{\bTheta_{j}}_0 < k_\tau\}$ with $d_0 = |\mathcal{H}_0|$, we consider the following rows of $\bTheta$,
\begin{equation}\label{eq:strong_Y_set}
\cB:=\{j\in \mathcal{H}_{0}^{c}:\forall k \in \text{supp}(\bTheta_j),|\bTheta_{jk}|>c\sqrt{{\log d_2}/{n}}\},
\end{equation}
and define the proportion of such rows as $\rho = |\cB|/d_1$.
In the context of multitask regression, $\rho$ measures the proportion of hub response variables whose non-null parameter coefficients all exceed a certain threshold, and thus characterizes the overall signal strength. Below we present our result on FDP/FDR control under appropriate assumptions.
\begin{theorem}[FDP/FDR control]\label{thm:fdr_linear}
Under Assumption \ref{asp:dlasso} and the scaling condition
$
\frac{d_2\log d_2 + d_0}{d_0 d_2 \rho} + \frac{s \log^2 d_2 }{n^{1/2}} +
\frac{\log^2 d_2}{(n \rho)^{1/5}}
= o(1)$,
if we implement the StarTrek procedure in Algorithm \ref{algo:startrek} with $\bTheta$ estimated by \eqref{eq:dlasso} and the quantiles approximated by \eqref{eq:dlasso_chat}, as $(n,d_1, d_2)\rightarrow \infty$, we have
\begin{equation}\label{eq:fdp_linear}
\text{FDP} \le q\frac{d_0}{d_1} + o_{\mathbb{P}}(1) \quad \text{and}
\quad \lim_{(n,d_1, d_2)\rightarrow \infty}\text{FDR} \le q\frac{d_{0}}{d_1}.
\end{equation}
\end{theorem}
The proof of Theorem \ref{thm:fdr_linear} can be found in Appendix \ref{pf:thm:fdr_linear}.
Note that signal strength conditions, which require that some entries of the parameter matrix $\bTheta$ have magnitudes exceeding $c\sqrt{\log d_2/n}$, are usually assumed in existing works studying the FDR control problem for high dimensional models \citep{liu2013ggmfdr,liu2014phase,liu2014hypothesis,xia2015testing,xia2018multiple,javanmard2019false}.
\section{Discovering hub nodes in Gaussian graphical models}
\label{sec:hub_selection}
This section focuses on the hub node selection problem for Gaussian graphical models. Recall that in Section \ref{sec:method} we first compute the one-step estimator $\{\hat{\bTheta}^\debias_{e}\}_{e\in \cV \times \cV}$ in \eqref{eq:one_step} and then take its standardized version $\{\tilde{\bTheta}^\debias_{e}\}_{e\in \cV \times \cV}$ as the input of Algorithm \ref{algo:startrek}, i.e.,
\begin{equation}
\label{eq:ggm_tdTheta}
\hat{\bTheta}^\debias_{jk} := \hat{\bTheta}_{jk} - \frac{\hat{\bTheta}_{j}^{\top} \left( \hat{\bSigma} \hat{\bTheta}_k - \eb_k\right)}{\hat{\bTheta}_j^{\top} \hat{\bSigma}_j},
\quad
\tilde{\bTheta}^\debias_{jk}:={\hat{\bTheta}^\debias_{jk}}/{\sqrt{\hat{\bTheta}^\debias_{jj}\hat{\bTheta}^\debias_{kk}}}.
\end{equation}
Our StarTrek filter selects nodes with large degrees based on the maximum statistics $T_{E}= \underset{(j,k)\in E}{\max} \; \sqrt{n} | \tilde{\bTheta}^\debias_{jk}|$ over certain subset $E$. We use the Gaussian multiplier bootstrap \eqref{eq:gmb_quantile} to approximate the quantiles, specifically,
\begin{equation}\label{eq:ggm_chat}
\hat{c} (\alpha,E) = \inf \left\{ t\in \RR : \PPP_\xi \left( T^{\cB}_{E} \le t \right) \ge 1-\alpha \right\}.
\end{equation}
\citet{chernozhukov2013gaussian} shows that this quantile approximation is accurate enough for FWER control in modern high dimensional simultaneous testing problems. Their results are based on non-asymptotic bounds in the Kolmogorov distance. \citet{lu2017adaptive} also takes advantage of this result to test single hypotheses on graph properties and to derive confidence bounds on graph invariants.
However, in order to conduct combinatorial variable selection with FDR control guarantees, we need more refined studies about the accuracy of the quantile approximation. This is due to the ratio nature of the definition of FDR, as explained in Section \ref{sec:quantile_accuracy}. Compared with the results in \citet{chernozhukov2013gaussian}, we provide a Cram\'er-type control on the approximation errors of the Gaussian multiplier bootstrap procedure. This is built on the probabilistic tools in Section \ref{sec:cramer_theory}, in particular, the Cram\'er-type Gaussian comparison bound with max norm difference in Theorem \ref{thm:ccb_max}.
Due to the dependence structure behind the hub selection problem in graphical models, we also have to utilize Theorem \ref{thm:ccb_sparse_unitvar}. In a bit more detail, computing the maximum test statistic for testing each node actually involves the whole graph, resulting in complicated dependence among the test statistics. The non-differentiability of the maximum function makes it very difficult to track this dependence. Also note that this type of difficulty cannot be easily circumvented by alternative methods, due to the discrete nature of the combinatorial inference problem. However, the Cram\'er-type Gaussian comparison bound with $\ell_0$ norm difference turns out to play an important role in handling this challenge.
In general, the sparsity/density of the graph is closely related to the dependence level of the multiple testing problem on graphical models. For example, \citet{liu2013ggmfdr,xia2015testing,xia2018multiple} make certain assumptions on the sparsity level and control the dependence of the test statistics when testing multiple hypotheses on graphical models/networks. For the hub node selection problem in this paper, a new quantity is introduced below, and we will explain why it is suitable.
Recall that we define the set of non-hub response variables in Section \ref{sec:bipartite_selection}. Similarly, the set of non-hub nodes is denoted by $\mathcal{H}_0 = \{j\in [d]: \norm{\bTheta_{j}}_0 < k_\tau\}$ with $d_{0}=|\mathcal{H}_{0}|$. Now we consider the following set,
\begin{equation}\label{eq:dep_term_set}
S = \{(j_1,j_2,k_1,k_2): j_1,j_2\in \mathcal{H}_0, j_1\ne j_2,k_1 \ne k_2 , \bTheta_{j_1 j_2} = \bTheta_{j_1 k_1} = \bTheta_{j_2 k_2} = 0, \bTheta_{j_1 k_2} \ne 0, \bTheta_{j_2 k_1}\ne 0\}.
\end{equation}
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.75\linewidth]{plots/S_demo.pdf}
\caption{Left panel: a graphical demonstration of the definition of $S$ via four examples of a $4$-vertex graph; Right panel: four different graph patterns with $6$ vertices. Calculating $|S|$ yields $10,15,24,51$ for (a),(b),(c),(d) respectively.}
\label{fig:demo}
\end{figure}
Remark that in the above definition, $k_1$ can be the same as $j_2$ and $k_2$ can be the same as $j_1$.
If there exists a large number of nodes which are connected to neither $j_1$ nor $j_2$, then we do not need to worry much about the dependence between the test statistics for non-hub nodes. Therefore, $|S|$ measures the dependence level by checking how pairs of non-hub nodes interact through other nodes. \citet{liu2013ggmfdr,cai2013two} also examine the connection structures in $4$-vertex graphs and control the dependence level by carefully bounding the number of $4$-vertex graphs with different numbers of edges.
We provide a graphical demonstration of $S$ and show what $|S|$ looks like for certain graph patterns via some simple examples.
Though the definition of $S$ does not exclude the possibility of $(j_1,j_2,k_1,k_2)$ spanning a graph with only $2$ or $3$ distinct vertices, we only draw $4$-vertex graphs in Figure \ref{fig:demo} for convenience. In the left panel of Figure \ref{fig:demo}, we consider four different cases of the $4$-vertex graph: the upper two belong to the set $S$, while the lower two do not.
In the right panel, we consider four graphs which all have $6$ vertices but different graph patterns. For example, (a) clearly has a hub structure: all of the non-hub nodes are only connected to the hub node. In (d), on the other hand, the edges are evenly distributed and each node is connected to its two nearest neighbours. For each graph, we count the value of $|S|$ and obtain $10,15,24,51$ respectively, which shows an increasing trend of $|S|$. This matches our intuition that it is relatively easier to discover hub nodes on graph (a) than on graph (d). See more evidence in the empirical results of Section \ref{sec:simul}.
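For intuition, $|S|$ can be counted by brute force from an adjacency matrix. The following Python sketch does so under two conventions adopted only for this illustration: the pair $(j_1,j_2)$ is counted as unordered, and non-hub nodes are those whose degree (excluding the diagonal) is below $k_\tau=3$. With these conventions it reproduces the counts $10$ and $51$ for patterns (a) and (d) of Figure \ref{fig:demo} (interpreting (d) as the $6$-cycle):
\begin{verbatim}
import itertools
import numpy as np

def count_S(adj, k_tau=3):
    d = adj.shape[0]
    supp = (adj + np.eye(d, dtype=int)) > 0   # precision diagonal is non-zero
    H0 = [j for j in range(d) if adj[j].sum() < k_tau]
    count = 0
    for j1, j2 in itertools.combinations(H0, 2):
        if supp[j1, j2]:
            continue                          # need Theta_{j1 j2} = 0
        for k1, k2 in itertools.permutations(range(d), 2):
            if (not supp[j1, k1]) and (not supp[j2, k2]) \
                    and supp[j1, k2] and supp[j2, k1]:
                count += 1
    return count

hub = np.zeros((6, 6), dtype=int)             # pattern (a): centre plus 5 leaves
hub[0, 1:] = hub[1:, 0] = 1
cyc = np.zeros((6, 6), dtype=int)             # pattern (d): the 6-cycle
for j in range(6):
    cyc[j, (j + 1) % 6] = cyc[(j + 1) % 6, j] = 1
print(count_S(hub), count_S(cyc))             # 10 and 51
\end{verbatim}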
In addition to $|S|$, we also characterize the dependence level via the connectivity of the graph; specifically, let $p$ be the number of connected components. Similarly to Section \ref{sec:bipartite_selection}, we define $\rho$ to measure the signal strength, i.e., $\rho = |\cB|/d$, where $\cB:=\{j\in \mathcal{H}_{0}^{c}:\forall k \in \text{supp}(\bTheta_j),|\bTheta_{jk}|>c\sqrt{{\log d}/{n}}\}$. In the following, we list the assumptions needed for FDR control.
\begin{assumption}\label{asp:tradeoff_fdp}
{Suppose that $\bTheta \in \cU(M,s, r_0)$ and the following conditions hold:}
\begin{enumerate}[(i)]
\item Signal strength and scaling condition.
\begin{eqnarray}\label{eq:cond1}
\frac{\log d }{\rho}\rbr{
\frac{(\log d)^{19/6}}{n^{1/6}} + \frac{ (\log d)^{11/6}}{\rho^{1/3} n^{1/6}} +
\frac{{s(\log d)^{3}}}{{n}^{1/2}}
} = o(1).
\end{eqnarray}
\item Dependency and connectivity condition.
\begin{eqnarray}\label{eq:cond2}
\frac{\log d}{\rho d_0} + \frac{({\log d})^{2}|S|}{\rho d_0^2 p} = o(1).
\end{eqnarray}
\end{enumerate}
\end{assumption}
In the above assumption, \eqref{eq:cond1} places conditions on the signal strength and scaling. The first and second terms come from the Cram\'er-type large deviation bounds in the high dimensional CLT setting \citep{arun2018cram} and the Cram\'er-type Gaussian comparison bound established in Theorem \ref{thm:ccb_max}. The third term comes from the fact that the relevant test statistics arise as maxima of approximate averages instead of exact averages, so the approximation error needs to be controlled; see similar discussions in \citet{chernozhukov2013gaussian}. Remark that the signal strength condition is mild here, for reasons similar to the discussion in Section \ref{sec:bipartite_selection}. Regarding \eqref{eq:cond2}, there is a trade-off between the dependence level and the connectivity level of the topological structure: $|S|/d_0^2$ characterizes how the test statistics of non-hub nodes are correlated with each other on average, while $p$ by definition describes the level of connectivity. Under condition \eqref{eq:cond2}, larger signal strength generally makes the hub selection problem easier, and when $|S|/d_0^2$ is small, the graph is allowed to be more connected. When there exist more sub-graphs, we allow higher correlations between the non-hub nodes. Note that the cardinality of $S$ is directly related to the $\ell_0$ norm covariance matrix difference term $\Delta_0$, and arises from the application of Theorem \ref{thm:ccb_sparse_unitvar}.
In the following, we present our core theoretical result on FDP/FDR control for hub selection using the StarTrek filter on Gaussian graphical models.
\begin{theorem}[FDP/FDR control]\label{thm:fdr_hub}
Under Assumption \ref{asp:tradeoff_fdp},
the StarTrek procedure in Algorithm \ref{algo:startrek} with \eqref{eq:ggm_tdTheta} as input and the quantiles approximated by \eqref{eq:ggm_chat} satisfies:
as $(n,d)\rightarrow \infty$,
\begin{equation}\label{eq:fdp_hub}
\text{FDP} \le q\frac{d_0}{d} + o_{\mathbb{P}}(1) \quad \text{and}
\quad \lim_{(n,d)\rightarrow \infty}\text{FDR} \le q\frac{d_{0}}{d}.
\end{equation}
\end{theorem}
The proof can be found in Appendix \ref{app:pf:thm:fdr_hub}. Remark that control of the FDR does not prohibit the FDP from varying; therefore our result on FDP provides a stronger guarantee on controlling the false discoveries. See clear empirical evidence in Section \ref{sec:synthetic}. To the best of our knowledge, the proposed StarTrek filter in Section \ref{sec:method} and the above FDP/FDR control result are the first algorithm and theoretical guarantee for the problem of simultaneously selecting hub nodes. Existing work such as \cite{liu2013ggmfdr, liu2014hypothesis, xia2015testing,xia2018multiple,javanmard2019false} focuses on the discovery of continuous signals, and its tools are not applicable to the problem here.
\section{Numerical results}
\label{sec:simul}
\subsection{Synthetic data}
\label{sec:synthetic}
In this section, we apply the StarTrek filter to synthetic data and demonstrate the performance of our method. The synthetic datasets are generated from Gaussian graphical models. The corresponding precision matrices are specified based on four different types of graphs. Given the number of nodes $d$ and the number of connected components $p$, we randomly assign the nodes into $p$ groups. Within each group (sub-graph), the way of assigning edges for the different graph types is explained in detail below. After determining the adjacency matrix of the graph, we follow \citet{zhao2012huge} to construct the precision matrix: we set the off-diagonal elements along the edges to the value $v$, which controls the magnitude of the partial correlations and is closely related to the signal strength. In order to ensure positive-definiteness, we add the value $u$ plus the absolute value of the minimal eigenvalue to the diagonal terms. In the following simulations, $v$ and $u$ are set to $0.4$ and $0.1$ respectively. Now we explain how to determine the edges within each group (sub-graph) for the four different graph patterns; a code sketch of this construction follows the list below.
\begin{itemize}
\item \textbf{Hub graph.} We randomly pick one node as the hub node of the sub-graph, then the rest of the nodes are made to connect with this hub node. There is no edge between the non-hub nodes.
\item \textbf{Random graph.} This is the Erd\"os-R\'enyi random graph: an edge is placed between each pair of nodes independently with a certain probability. In the following simulations, we set this probability to $0.15$ unless stated otherwise.
\item \textbf{Scale-free graph.} In this type of graph, the degree distribution follows a power law. We construct it with the Barab\'asi-Albert algorithm: starting with two connected nodes, each new node is connected to exactly one node in the existing graph, with probability proportional to the degree of each existing node. The number of edges equals the number of nodes.
\item \textbf{K-nearest-neighbor (knn) graph.} For a given number $k$, we add edges such that each node is connected to another $k$ nodes. In our simulations, $k$ is sampled from $\{1,2,3,4\}$ with probability mass $\{0.4,0.3,0.2,0.1\}$.
\end{itemize}
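The following Python code illustrates this construction for one hub-graph component, in the spirit of \citet{zhao2012huge}; the exact scaling conventions of that package may differ, so this is only a minimal sketch with $v=0.4$ and $u=0.1$:
\begin{verbatim}
import numpy as np

def precision_from_adjacency(adj, v=0.4, u=0.1):
    # off-diagonal entries v on the edges; diagonal lifted by u + |lambda_min|
    omega = v * adj.astype(float)
    lam_min = np.linalg.eigvalsh(omega).min()
    return omega + (abs(lam_min) + u) * np.eye(adj.shape[0])

adj = np.zeros((10, 10), dtype=int)           # one hub component: node 0 is hub
adj[0, 1:] = adj[1:, 0] = 1
Omega = precision_from_adjacency(adj)
assert np.linalg.eigvalsh(Omega).min() > 0    # positive definite
X = np.random.default_rng(1).multivariate_normal(
    np.zeros(10), np.linalg.inv(Omega), size=200)   # synthetic Gaussian sample
\end{verbatim}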
See a visual demonstration of the above four graph patterns in Appendix \ref{app:graph_pattern}. Throughout the simulated examples, we fix the number of nodes $d$ to be $300$ and vary other quantities such as the sample size $n$ or the number of connected components $p$. To estimate the precision matrix, we run the graphical Lasso algorithm with 5-fold cross-validation. Then we obtain the standardized debiased estimator as described in \eqref{eq:one_step}. To obtain the quantile estimates, we use the Gaussian multiplier bootstrap with {4000} bootstrap samples. The threshold $k_\tau$ for determining hub nodes is set to $3$, and all results (FDR and power) are averaged over 64 independent replicates.
As we can see from Table \ref{tb:FDR}, the FDRs of the StarTrek filter for the different types of graphs are well controlled below the nominal levels. In the hub graph, the FDRs are relatively small but the power is still quite good. A similar phenomenon was observed for the multiple edge testing problem \citep{liu2013ggmfdr}, and in the context of node testing it is also unsurprising. These empirical results match our demonstration of $|S|$ in Figure \ref{fig:demo}: hub graphs have a relatively weaker dependence structure (smaller $|S|$ values), which makes it easier to discover true hub nodes without making many errors.
\begin{table}[hptb]
\addtolength{\tabcolsep}{-4pt}
\begin{center}
\caption{{ Empirical FDR}}
\begin{tabular}{
@{\hspace{1.2em}}c
@{\hspace{1.2em}}c
@{\hspace{1em}}c
@{\hspace{1em}}c|
@{\hspace{1em}}c
@{\hspace{1em}}c
@{\hspace{1em}} c
@{\hspace{1em}} c
@{\hspace{1em}} c
@{\hspace{1.2em}} }
\toprule
$d= 300~$ &\multicolumn{3}{c}{$q=0.1$}&\multicolumn{3}{c}{$q=0.2$} \\
\hline
$n$ &200 &300 &400
&200 &300 &400\\
\hline & \multicolumn{5}{c}{$\quad\quad p=20$}&\multicolumn{1}{c}{} \\
\hline
hub & 0.0000 & 0.0000 & 0.0007 & 0.0000 & 0.0000 & 0.0029 \\
random & 0.0255 & 0.0383 & 0.0467 & 0.0521 & 0.0770 & 0.0833 \\
scale-free & 0.0093 & 0.0211 & 0.0282 & 0.0352 & 0.0486 & 0.0581 \\
knn & 0.0101 & 0.0296 & 0.0370 & 0.0228 & 0.0620 & 0.0769 \\
\hline & \multicolumn{5}{c}{$\quad\quad p=30$}&\multicolumn{1}{c}{} \\
\hline
hub & 0.0013 & 0.0000 & 0.0016 & 0.0027 & 0.0054 & 0.0036 \\
random & 0.0347 & 0.0359 & 0.0568 & 0.0725 & 0.0753 & 0.0963 \\
scale-free & 0.0215 & 0.0335 & 0.0317 & 0.0521 & 0.0624 & 0.0584 \\
knn & 0.0297 & 0.0420 & 0.0563 & 0.0504 & 0.0857 & 0.1030 \\
\toprule
\end{tabular}
\label{tb:FDR}
\end{center}
\vspace{-20pt}
\end{table}
The power performance of the StarTrek filter is shown in Table \ref{tb:Power}. As the sample size grows, the power increases for all four types of graphs. When $p$ is larger, there are more hub nodes in general due to the way the graphs are constructed, and we find the power is higher. Among the different types of graphs, the power in the hub and scale-free graphs is higher than in the random and knn graphs, since the latter two are relatively denser and have more complicated topological structures.
\begin{table}[htbp]
\addtolength{\tabcolsep}{-4pt}
\begin{center}
\caption{{Power}}
\begin{tabular}{
@{\hspace{1.2em}}c
@{\hspace{1.2em}}c
@{\hspace{1em}}c
@{\hspace{1em}}c|
@{\hspace{1em}}c
@{\hspace{1em}}c
@{\hspace{1em}} c
@{\hspace{1em}} c
@{\hspace{1em}} c
@{\hspace{1.2em}} }
\toprule
$d= 300~$ &\multicolumn{3}{c}{$q=0.1$}&\multicolumn{3}{c}{$q=0.2$} \\
\hline
$n$ &200 &300 &400
&200 &300 &400\\
\hline & \multicolumn{5}{c}{$\quad\quad p=20$}&\multicolumn{1}{c}{} \\
\hline
hub & 0.7109 & 0.9453 & 0.9898 & 0.7805 & 0.9648 & 0.9938 \\
random & 0.3343 & 0.7815 & 0.9408 & 0.4520 & 0.8514 & 0.9604 \\
scale-free & 0.4524 & 0.8145 & 0.9363 & 0.5281 & 0.8614 & 0.9568 \\
knn & 0.0905 & 0.5306 & 0.8067 & 0.1634 & 0.6511 & 0.8630 \\
\hline & \multicolumn{5}{c}{$\quad\quad p=30$}&\multicolumn{1}{c}{} \\
\hline
hub & 0.6848 & 0.9244 & 0.9706 & 0.7588 & 0.9459 & 0.9784 \\
random & 0.4882 & 0.8863 & 0.9790 & 0.5770 & 0.9225 & 0.9870 \\
scale-free & 0.6472 & 0.9047 & 0.9810 & 0.7197 & 0.9331 & 0.9870 \\
knn & 0.2409 & 0.6841 & 0.8922 & 0.3298 & 0.7706 & 0.9241 \\
\toprule
\end{tabular}
\label{tb:Power}
\end{center}
\vspace{-20pt}
\end{table}
In Figures \ref{fig:p20q0.1} and \ref{fig:p30q0.1}, we demonstrate the performance of our method on the random graph with different parameters. Specifically, we vary the connecting probability from $0.1$ to $0.3$ along the x-axis. In those plots, the FDRs are all well controlled below the nominal level $q=0.1$. As the connecting probability of the random graph grows, the graph gets denser, resulting in more hub nodes. Thus the height of the short blue solid lines (representing $q d_0/d$) decreases. Based on our results in Theorem \ref{thm:fdr_hub}, the target level of FDP/FDR control is $q d_0/d$. This is why the mean and median of each box-plot get smaller as the connecting probability increases (hence $d_0$ decreases).
The box-plots and the jittered points show that our StarTrek procedure not only controls the FDR but also prohibits it from varying too much, as implied by the theoretical results on FDP control in Section \ref{sec:hub_selection}. Regarding the power plots, the power is smaller when the graph is denser, since the hub selection problem becomes more difficult with more disturbing factors. Plots with nominal FDR level $q=0.2$ are deferred to Appendix \ref{app:fdp_power_plots}.
\subsection{Application to gene expression data}
\label{sec:data}
We also apply our method to the Genotype-Tissue Expression (GTEx) data studied in \citet{lonsdale2013genotype}. Beginning with a 2.5-year pilot phase, the GTEx project established a large database and associated tissue bank for studying the relationship between genetic variation and gene expression in human tissues. The original dataset involves 54 non-diseased tissue sites across 549 research subjects. Here we only focus on analyzing the breast mammary tissues. It is of great interest to identify hub genes in the gene expression network.
\begin{figure}[t]
\centering
\includegraphics[width = 0.8\linewidth]{plots/simul/StarTrek_cluster_n300d300p20q10.pdf}
\caption{FDP and power plots for the StarTrek filter in the random graph. The connecting probability is varied on the x-axis. The number of samples $n$ is chosen to be $300$ and the number of connected components $p$ equals 20. The nominal FDR level is set to be $q=0.1$; the short blue solid lines correspond to $q{d_0}/{d}$, calculated by averaging over the $64$ replicates. For both panels, the box plots are plotted with the black points representing the outliers. Colored points are jittered around, demonstrating how the FDP and power distribute.}
\label{fig:p20q0.1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width = 0.8\linewidth]{plots/simul/StarTrek_cluster_n300d300p30q10.pdf}
\caption{FDP and power plots for the StarTrek filter in the random graph. The other setups are the same as in Figure \ref{fig:p20q0.1} except that $p = 30$.}
\label{fig:p30q0.1}
\end{figure}
\begin{figure*}
\centering
\begin{tabular}{m{1em}ccccc}
\rot{{$\quad\quad\quad\quad\quad\quad\quad$Male}}&
\includegraphics[width=3cm]{plots/real/breast_igraphM1.pdf}&
\includegraphics[width=3cm]{plots/real/breast_igraphM2.pdf}&
\includegraphics[width=3cm]{plots/real/breast_igraphM3.pdf}&
\includegraphics[width=3cm]{plots/real/breast_igraphM4.pdf}\\[-40pt]
\rot{{$\quad\quad\quad\quad\quad\quad$Female}}&
\includegraphics[width=3cm]{plots/real/breast_igraphF1.pdf}&
\includegraphics[width=3cm]{plots/real/breast_igraphF2.pdf}&
\includegraphics[width=3cm]{plots/real/breast_igraphF3.pdf}&
\includegraphics[width=3cm]{plots/real/breast_igraphF4.pdf}\\
\end{tabular}
\vskip-60pt
\caption{The above graphs are based on the estimated precision matrices (the left two plots). The adjacency matrices of the other six plots are based on the standardized estimated precision matrices thresholded at $0.025, 0.05, 0.075$ respectively. Blue vertices represent the selected hub genes. }
\label{fig:breast_gr}
\end{figure*}
First we calculate the variances of the gene expression data and focus on the top $100$ genes in the following analysis. The data involve $n=291$ samples for male individuals and $n=168$ samples for female individuals. The original count data are log-transformed and scaled. We then obtain the estimator of the precision matrix by the graphical Lasso with 2-fold cross-validation. As for the hub node criterion, we set $k_\tau$ as the 50\% quantile of the node degrees in the estimated precision matrix. We run the StarTrek filter with $2000$ bootstrap samples and nominal FDR level $q=0.1$ to select hub genes for both the male and female datasets.
\begin{figure}[!htbp]
\centering
\includegraphics[width = 0.8\linewidth]{plots/real/pvalue.pdf}
\caption{Plots of the sorted p-values ($\alpha_j,j\in [d]$) in Algorithm \ref{algo:startrek}. Those blue points correspond to selected hub genes. The blue line is the rejection line of the BHq procedure. The coordinates of the plots are flipped. We abbreviate the names of the 100 genes and only show selected ones with blue colored text. The upper panel and the lower panel are based on male and female data respectively.}
\label{fig:breast_pval}
\end{figure}
Figure \ref{fig:breast_gr} shows that the hub genes selected by the StarTrek filter also have large degrees in the estimated gene networks (based on the estimated precision matrices). In Figure \ref{fig:breast_pval}, the results for the male and female datasets agree with each other, except that the number of selected hub genes on the female dataset is smaller due to a much smaller sample size. The selected hub genes are found to play an important role in breast-related molecular processes, either as central regulators or because their abnormal expression is considered a cause of breast cancer initiation and progression; see relevant literature in genetic research such as \citet{hellwig2016epsin,blein2015targeted,chen2016systematic,li2019association,lou2020overexpression,mohamed2014promoter,bai2019screening,sirois2019unique,marino2020upregulation,malvia2019study}. Therefore, our proposed method for selecting hub nodes can be applied to the hub gene identification problem, and it may improve our understanding of the mechanisms of breast cancer and provide valuable prognosis and treatment signatures.
\bibliographystyle{ims}
\section{Introduction}
Since the seminal works of Donaldson~\cite{DonaldsonChambers} and Freedman~\cite{Freedman}, it has
been known that closed, simply-connected four-manifolds can support
exotic smooth structures. In fact, for many homeomorphism classes,
gauge theory tools (Donaldson invariants and Seiberg-Witten
invariants) have been very successful at proving the existence of
infinitely many smooth structures, see for
example~\cite{DonaldsonPolynomials}, \cite{GompfMrowka},
\cite{FriedmanMorgan}, \cite{FSKnots}. However, exotic examples with
small Euler characteristics are much more difficult to find. For a
long time, the smallest known example was the Barlow
surface~\cite{Kotschick}, which has Euler characteristic $11$ and
which is homeomorphic, but not diffeomorphic, to $\CP{2}\# 8 \mCP^2$.
Recently, in a remarkable paper, Park~\cite{Park} constructs a symplectic
manifold $P$ with Euler characteristic $10$ using the rational
blow-down operation of Fintushel and Stern~\cite{QBD}, and proves that
it is homeomorphic, but not diffeomorphic to $\CP{2}\# 7\mCP^2$.
In this note, we compute the Seiberg-Witten invariants of $P$ and
prove the following:
\begin{theorem}
\label{thm:Park1}
Park's example $P$ does not contain any smoothly embedded two-spheres with
self-intersection number $-1$; equivalently, it is not the blow-up of
another smooth four-manifold.
\end{theorem}
In a similar manner, Park also constructs a symplectic four-manifold $Q$
which is homeomorphic, but not diffeomorphic, to $\CP{2}\# 8\mCP^2$.
We prove here the following:
\begin{theorem}
\label{thm:Park2}
The manifold $Q$ contains no smoothly embedded two-sphere with
self-intersection number $-1$, and in particular $Q$ is not
diffeomorphic to $P\# \mCP^2$.
\end{theorem}
Note that $Q$ and the Barlow surface have the same Seiberg-Witten invariants,
and we do not know whether or not they are diffeomorphic.
\noindent{\bf Acknowledgements} The authors wish to thank Jongil Park, Jacob Rasmussen, and Andr{\'a}s Stipsicz
for interesting conversations during the course of this work.
\section{Seiberg-Witten theory}
We will deal in this paper with Seiberg-Witten theory for
four-manifolds $X$ with $b_2^+(X)=1$ (and $b_1(X)=0$). For the
reader's convenience, we recall the basic aspects of this theory, and
refer the reader to~\cite{KMthom}, \cite{Morgan} for more in-depth
discussions.
The Seiberg-Witten equations can be written down on any four-manifold
equipped with a ${\mathrm{Spin}}^c$ structure and a Riemannian metric. We
identify here ${\mathrm{Spin}}^c$ structures over $X$ with characteristic classes
for the intersection form of $X$, by taking the first Chern class of
the ${\mathrm{Spin}}^c$ structure. This induces a one-to-one correspondence in
the case where $H^2(X;\mathbb{Z})$ has no two-torsion. Taking a suitable
signed count of solutions, one obtains a smooth invariant of $X$ when
$b_2^+(X)>1$. In the case where $b_2^+(X)=1$, the invariant depends on
the choice of the Riemannian metric through the cohomology class of
its induced self-dual two-form (compare
also~\cite{DonaldsonChambers}).
Formally, then, for a fixed two-dimensional cohomology class $H\in
H^2(X;\mathbb{R})$ with $H^2>0$ and a characteristic vector $K\in H^2(X;\mathbb{Z})$
with $K.H\neq 0$, the Seiberg-Witten invariant $SW_{X,H}(K)$ is a
well-defined integer. This integer
vanishes whenever $$K^2< 2\chi(X)+3\sigma(X).$$ For fixed $H$, then,
the $H$-basic classes are those characteristic cohomology classes $K$
for which $SW_{X,H}(K)\neq 0$. The quantity
$K^2-2\chi(X)-3\sigma(X)$ is four times the formal dimension of the
moduli space of solutions to the Seiberg-Witten equations over $X$ in
the ${\mathrm{Spin}}^c$ structure whose first Chern class is $K$. The
Seiberg-Witten invariant vanishes when this formal dimension is
negative; when it is positive, one cuts down the moduli space by a
suitable two-dimensional cohomology class to obtain an integer-valued
invariant.
More precisely, a Riemannian metric on $X$ induces a Seiberg-Witten
moduli space. The signed count of the solutions in this moduli space
depends only on the cohomology class of the induced self-dual two-form
$\omega_g$, which in the above case was denoted by $H$. The
dependence on $H$ is captured by the wall-crossing
formula~\cite{KMthom}, \cite{LiLiu}: if $X$ is a four-manifold with
$b_1(X)=0$, and $H$ and $H'$ are two cohomology classes with
positive square and $H.H'>0$,
then $$SW_{X,H}(K)=SW_{X,H'}(K) +\left\{\begin{array}{ll} 0
&{\text{if $K.H$ and $K.H'$ have the same sign}}
\\
\pm 1 &{\text{otherwise.}}
\end{array}\right.
$$
It follows readily from the compactness result for the moduli space of
solutions to the Seiberg-Witten equations that for any $H$, there are
only finitely many $H$-basic classes.
It is interesting to note that the wall-crossing formula together with
the dimension formula (which states that $SW_{X,H}(K)=0$ when
$K^2-2\chi(X)-3\sigma(X)<0$), ensures that if $X$ is a four-manifold
with $b_2^+(X)=1$ but $b_2(X)\leq 9$, there is only one chamber.
\subsection{Rational blow-downs}
In~\cite{QBD}, Fintushel and Stern introduce a useful
operation on smooth four-manifolds, and calculate how the
Seiberg-Witten invariants transform under this
operation. Specifically, let $C_p$ be the four-manifold which is a
regular neighborhood of a chain of two-spheres $\{S_0,...,S_p\}$ where
$S_0$ has self-intersection number $-4-p$, and $S_i$ has
self-intersection number $-2$ for all $i>0$. The boundary of this
chain (the lens space $L((p+2)^2, p+1)$) also bounds a four-manifold
$B$ with $H^2(B;\mathbb{Q})=0$. If $X$ is a smooth, oriented four-manifold
with $b_2^+(X)>1$ which contains $C_p$, then we can trade $C_p$ for
the rational ball $B$ to obtain a new four-manifold $X'$. Clearly,
$H^2(X')$ is identified with the orthogonal complement to
$[S_i]_{i=0}^p$ in $H^2(X)$.
For each ${\mathrm{Spin}}^c$ structure over $L((p+2)^2,p+1)$ which extends over
$B$, there is an extension (as a characteristic vector $K_0$) over
$C_p$ with the property that $K_0^2+p+1=0$.
Fintushel and Stern show that for any characteristic vector $K$ for
the intersection form of $X'$, $$SW_{X'}(K)=SW_{X}({\widetilde
K}),$$ where ${\widetilde K}$ is obtained from $K$, by extending over
the boundary by the corresponding characteristic vector $K_0$ as
above.
In the case where $b_2^+(X)=1$, the relation is expressed by choosing
a chamber for $X$ (and induced chamber for $X'$) whose metric form $H$
is orthogonal to each sphere in the configuration $C_p$.
\section{The four-manifold $P$}
We review Park's construction of $P$ briefly. Start with a rational
elliptic surface with an ${\widetilde{E_6}}$ singularity (a
configuration of $-2$ spheres arranged in a star-like pattern, with a
central node and three legs of length two). There is a model of the
rational elliptic surface with the property that there are four nodal curves
in a complement of this singularity. Blowing up the nodal curves, one
obtains four spheres of square $-4$. A section of the rational
elliptic surface meets all four of these spheres, and also one of the
leaves in the ${\widetilde{E_6}}$ singularity. Adding the section and
the four $-4$-spheres, one obtains a sphere $R_0$ with
self-intersection number $-9$ and then inside the ${\widetilde{E_6}}$
singularity, this can be extended to a chain of embedded spheres with
self-intersection $-2$ $\{R_i\}_{i=1}^5$. Park's example $P$ is
obtained by performing a rational blow-down, in the sense of Fintushel
and Stern~\cite{QBD}, on the chain of spheres $\{R_i\}_{i=0}^5$.
Since the spheres are all symplectic, a result of
Symington~\cite{Symington} guarantees that $P$ is symplectic.
Theorem~\ref{thm:Park1} follows from the following refinement:
\begin{theorem}
Let $K$ denote the canonical class of $P$. Then, the Seiberg-Witten basic
classes of $P$ are $\{\pm K\}$.
\end{theorem}
It follows at once that $P$ is minimal. Specifically, if one could
write $P\cong Y \#\mCP^2$, then according to the blow-up
formula~\cite{FSthom}, the basic classes of $P$ come in pairs of the
form $K_0\pm E$ where $K_0$ runs over the basic classes of $Y$, and
$E$ denotes the exceptional curve in $\mCP^2$. But this is impossible
since $K^2=2$.
\begin{figure}
\mbox{\vbox{\epsfbox{ParkConfig.eps}}}
\caption{\label{fig:ParkConfig}
We have illustrated here a basis of two-spheres for $S^2\times S^2 \# 12\mCP^2$.}
\end{figure}
We find it convenient to describe the manifold $P$ in a concrete model.
Specifically, consider the four-manifold $X=S^2\times S^2 \# 12\mCP^2$,
with the basis of two-spheres $A$, $B$, $\{E_i\}_{i=1}^{12}$. Here,
$A$ and $B$ are supported in the $S^2\times S^2$ factor, so that
$A=\{a\}\times S^2$ and $B=S^2\times \{b\}$, while $E_i$ is the
``exceptional sphere'' (sphere of square $-1$) in the $i^{th}$ $\mCP^2$
summand. Alternatively, this manifold can be thought of as the blowup
of rational elliptic surface with an ${\widetilde{E_6}}$ singularity, and a
complementary singularity consisting of three $-1$-spheres arranged in
a triangular pattern, which is then blown up four times, to give a
tree-like configuration of spheres with a central sphere of of square
$-4$, and three legs consisting of a chain of a $-1$ sphere and
another $-4$ sphere. See Figure~\ref{fig:ParkConfig} for an illustration.
More precisely, consider the elliptic surface singularity which can be
described by three $-1$-framed unknots, each of which links the other
two in one point apiece. Denote the corresponding two-dimensional
homology classes by $A$, $B$, and $C$. It is well-known,
c.f.~\cite{HarerKasKirby} that this singularity can be perturbed into
four nodal curves. By blowing up the four double-points, we obtain
four disjoint $-1$-spheres. In fact, the homology class of the fiber
is represented by the homology class of the fiber $A+B+C$. Thus,
the four $-4$ spheres can written in the basis of homology as
$$\{A+B+C-2E_i\}_{i=1}^4,$$ where $E_i$ are the newly-introduced exceptional spheres.
Armed with this principle, the chain of spheres in $X$
which are to be
rationally blown down can be written homologically as:
\begin{eqnarray*}
R_0&=&10A+8B-6E_1-4E_2-4E_3-4E_4-4E_5-4E_6 \\ && -3E_7-4E_8-4E_9-2E_{10}-2E_{11}-2E_{12} \\
R_1&=&B-E_1-E_4 \\
R_2&=&A-E_2-E_3 \\
R_3&=&E_3-E_6 \\
R_4&=&E_6-E_9 \\
R_5&=&E_4-E_7
\end{eqnarray*}
Note that we are using here $E_7$ as our section, which is to be added
to the four $-4$-spheres coming from the complement of the ${\widetilde{E_6}}$
singularity. The four exceptional spheres in the
complementary singularity are represented by the spheres
$A-E_1$, $E_{10}$, $E_{11}$,
$E_{12}$.
Let $P$ denote the Park manifold obtained by rationally blowing down
the configuration $R_0,...,R_5$ in $X$. ${\mathrm{Spin}}^c$ structures
over $P$ (labelled by characteristic vectors $K$)
correspond to characteristic vectors (labelled by characteristic vectors ${\widetilde K}$)
over $X$ whose evaluations on the configuration $\{R_i\}$
take one of the following
seven forms:
\begin{equation}
\label{eq:QBD}
\begin{array}{lrrrrr}
(7,& 0,& 0,& 0,& 0,& 0) \\
(-1,& 0,& -2,& 0,& 0,& 0) \\
(5,& 0,& 0,& 0,& 0,& -2) \\
(-3,& -2,& 0,& 0,& 0,& 0) \\
(3,& 0,& 0,& 0,& -2,& 0) \\
(-7,& 0,& 0,& 0,& 0,& 0) \\
(1,& 0,& 0,& -2,& 0,& 0) \\
\end{array}
\end{equation}
According to the rational blow-down formula~\cite{QBD},
$$SW_{P}(K)=SW_{X,H}({\widetilde K}),$$ where here $H\in H^2(X;\mathbb{R})$
is any real two-dimensional cohomology class with $H^2>0$ and $H.H'>0$ and which is
orthogonal to all the $\{R_i\}$. Moreover, according to the
wall-crossing formula, combined with the fact that
$X$ admits a metric of positive scalar curvature and hence has
trivial invariants in a suitable chamber
(c.f.~\cite{Witten}), it follows that
$$SW_{X,H}({\widetilde
K})=\left\{
\begin{array}{ll}
0 &{\text{if ${\widetilde K}^2+4<0$ or ${\widetilde K}.H$ and ${\widetilde K}.H'$ have the same sign}} \\
\pm 1 &{\text{otherwise,}}
\end{array}\right.
$$
where here $H'=\PD(A+B)$.
(The first condition for vanishing
is the dimension formula for the moduli space, while the second
condition comes from the wall-crossing formula.)
Explicitly, then, we see that the basic classes $K$ for $P$ are
precisely those for which the extension ${\widetilde K}$ (by one of
the vectors from the list in Equation~\eqref{eq:QBD}) satisfies:
${\widetilde K}^2+4\geq 0$ and also $\sgn({\widetilde K}.H)\neq
\sgn({\widetilde K}.H')$, where here $H$ is any (real) cohomology
class with $H^2>0$ and $H.H'>0$ and which is orthogonal to all the
$\{R_i\}_{i=0}^5$. For example, we could use the vector $$H=(105, 92,
-67, -51, -41, -38, -36, -41, -38, -36, -41, -18, -18, -18)$$
(written here with respect to the basis Poincar\'e dual to
$\{A,B,E_1,...,E_{12}\}$).
In order to make this a finite computation, we proceed as follows.
Suppose that $Z$ is a smooth four-manifold with $b_2^+(Z)>1$, and we
have homology classes ${\mathbf C}=\{C_i\}_{i=1}^n$ with negative
self-intersection number $C_i\cdot C_i = -p_i<0$. A cohomology class
$K\in H^2(Z;\mathbb{Q})$ is called ${\mathbf C}$-adjunctive if for each $i$
$\langle K,[C_i]\rangle$ is integral, and indeed the following two
conditions are satisfied:
\begin{eqnarray*}
|\langle K, [C_i] \rangle |&\leq& p_i \\
\langle K,[C_i]\rangle &\equiv& p_i\pmod{2}.
\end{eqnarray*}
Clearly, the set of ${\mathbf C}$-adjunctive cohomology classes
has size $\prod_{i=1}^n (p_i+1)$.
\begin{lemma}
\label{lemma:Adjunctive}
Let ${\mathbf S}=\{S_i\}_{i=1}^n$ be a collection of embedded
spheres in $X$ whose homology classes are orthogonal to the
$\{R_i\}_{i=0}^5$. Let ${\mathbf C}=\{C_i\}_{i=1}^n$
denote their induced homology classes in $H_2(P)$. If every
${\mathbf C}$-adjunctive basic class for $P$ is
zero-dimensional, then in fact every basic class for $P$ is
${\mathbf C}$-adjunctive.
\end{lemma}
\begin{proof}
If $P$ has a basic class $L_0$ which is not ${\mathbf C}$-adjunctive, then
by the rational blow-down formula,
$X$ has a basic class $L_1$
and a smoothly embedded sphere $S_i$ for which $|\langle L_1,[S_i]\rangle|> -S_i\cdot S_i$, where
we can use any metric whose period point $H'$ is perpendicular to the configuration
$\{R_i\}_{i=0}^5$. By fixing $H'$ to be also perpendicular to $S_i$, and using
the adjunction formula for spheres of negative square~\cite{FSthom} we get
another basic class $L_2=L_1\pm 2\PD[S_i]$ of $X$. Applying the blowdown formula once more we get a
basic class $L_3$ of $P$, where the dimension of $L_3$ is bigger than the dimension
of $L_0$. Since $P$ has only finitely many basic classes, this process has to stop, which means that
the final class $L_{3k}$ is ${\mathbf C}$-adjunctive. However, it is also positive-dimensional,
contradicting the hypothesis and thus proving the lemma.
\end{proof}
Our next goal, then, is to find a collection of embedded spheres
$\{S_i\}_{i=1}^{8}$ in $X$ which, together with the $\{R_i\}_{i=0}^5$
form a basis for
$H^2(X;\mathbb{Q})$. To this end, we use the spheres:
\begin{eqnarray*}
S_1 &=& E_5-E_8 \\
S_2 &=& E_{12}-E_{10} \\
S_3 &=& E_{11}-E_{12} \\
S_4 &=& A-E_1-E_{11} \\
S_5 &=& A+B-E_1-E_2-E_5-E_8 \\
S_6 &=& -E_5+E_{10}+E_{11} \\
S_7 &=& 2 E_7+2E_4 -2A+E_{11} \\
S_8 &=& E_6+E_9+E_3-E_2 -2E_5.
\end{eqnarray*}
The spheres $\{S_i\}_{i=1}^5$ have square $-2$, while $S_6$, $S_7$, and $S_8$
have squares $-3$, $-9$, and $-8$ respectively.
It is easy to see that these classes are all orthogonal to the homology
classes generated by the spheres $\{R_i\}_{i=0}^5$.
It is easy to see, now, that there are $612360$ $\{S_i\}$-adjunctive
vectors in $H^2(X;\mathbb{Q})$ with integral evaluations on each of the $S_i$,
and whose extension over the blow-down configuration is one of the
seven choices enumerated in Equation~\eqref{eq:QBD}. Of these, $12498$
correspond to characteristic cohomology classes in $H^2(X;\mathbb{Z})$. Of
these, $8960$ have square $\geq -4$ (i.e. satisfying
$K^2-(2\chi+3\sigma)\geq 0$). Finally, only two of these have the
property that the evaluations against $H$ and $H'$ have opposite signs. Indeed,
these classes are the canonical class $K$ and also $-K$. Since these
classes have dimension zero, it follows from
Lemma~\ref{lemma:Adjunctive} that these are the only two basic classes
for $P$.
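The finite search described above is straightforward to mechanize. The following Python sketch illustrates one way to do so; it is not the code used for the computations quoted above (in particular, the floating-point linear algebra and the integrality tolerance are simplifying assumptions). It enumerates the $7\cdot 3^5\cdot 4\cdot 10\cdot 9=612360$ adjunctive evaluation vectors and applies the integrality, characteristic, dimension, and sign tests; the two survivors should be $\pm K$.
\begin{verbatim}
import itertools
import numpy as np

# Basis (A, B, E1, ..., E12); intersection form: hyperbolic plus twelve (-1)'s
Q = np.zeros((14, 14)); Q[0, 1] = Q[1, 0] = 1.0; Q[2:, 2:] = -np.eye(12)

R = np.array([   # homology coefficients of R_0, ..., R_5
  [10, 8, -6, -4, -4, -4, -4, -4, -3, -4, -4, -2, -2, -2],
  [0, 1, -1, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0],
  [1, 0, 0, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
  [0, 0, 0, 0, 1, 0, 0, -1, 0, 0, 0, 0, 0, 0],
  [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, -1, 0, 0, 0],
  [0, 0, 0, 0, 0, 1, 0, 0, -1, 0, 0, 0, 0, 0]], float)
S = np.array([   # homology coefficients of S_1, ..., S_8
  [0, 0, 0, 0, 0, 0, 1, 0, 0, -1, 0, 0, 0, 0],
  [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 1],
  [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1],
  [1, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0],
  [1, 1, -1, -1, 0, 0, -1, 0, 0, -1, 0, 0, 0, 0],
  [0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 1, 1, 0],
  [-2, 0, 0, 0, 0, 2, 0, 0, 2, 0, 0, 0, 1, 0],
  [0, 0, 0, -1, 1, 0, -2, 1, 0, 0, 1, 0, 0, 0]], float)

Minv = np.linalg.inv(np.vstack([R, S]))  # maps evaluation vectors back to K
H = np.array([105, 92, -67, -51, -41, -38, -36, -41, -38, -36, -41,
              -18, -18, -18], float)     # period point (PD-basis coefficients)
Hp = np.zeros(14); Hp[0] = Hp[1] = 1.0   # H' = PD(A + B)

R_pats = [(7, 0, 0, 0, 0, 0), (-1, 0, -2, 0, 0, 0), (5, 0, 0, 0, 0, -2),
          (-3, -2, 0, 0, 0, 0), (3, 0, 0, 0, -2, 0), (-7, 0, 0, 0, 0, 0),
          (1, 0, 0, -2, 0, 0)]
pS = [2, 2, 2, 2, 2, 3, 9, 8]            # p_i = -S_i . S_i

basic = []
for r_ev in R_pats:
    for s_ev in itertools.product(*[range(-p, p + 1, 2) for p in pS]):
        kappa = Minv @ np.array(r_ev + s_ev, float)  # evaluations on (A,B,E_i)
        k = np.round(kappa)
        if np.max(np.abs(kappa - k)) > 1e-6:
            continue                     # not an integral cohomology class
        if k[0] % 2 or k[1] % 2 or np.any(k[2:] % 2 == 0):
            continue                     # not characteristic
        if k @ Q @ k < -4:
            continue                     # negative formal dimension
        if (k @ H) * (k @ Hp) < 0:       # opposite signs against H and H'
            basic.append(k.astype(int))
print(len(basic))                        # expect 2
\end{verbatim}
The analogous search for $Q$ below differs only in the basis vectors, the five extension patterns, and the dimension bound.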
\section{The four-manifold $Q$}
The manifold $Q$ is constructed as follows. Start with a rational
surface with an ${\widetilde{E_6}}$ singularity as before, except
now blow up only three of the nodes. In a manner similar to the
previous construction, one finds now a sphere of self-intersection
number $-7$ (gotten by resolving a section and the three $-4$
spheres). This is then completed by a chain of three $-2$ spheres in
the ${\widetilde{E_6}}$ singularity. Forming the rational blow-down,
one obtains a second manifold $Q$ which is homeomorphic to
$\CP{2}\#8\mCP^2$.
For $Q$, we prove the following:
\begin{theorem}
Let $K$ denote the canonical class of $Q$. Then, the Seiberg-Witten basic
classes of $Q$ are $\{\pm K\}$.
\end{theorem}
The second construction starts again with a rational surface. For
this surface, we can take the previous one, only blow down the curve
$E_{12}$.
Again, we use the section $E_7$; now the three $-4$ spheres which are to be
added are represented by $E_1-E_4-E_7-E_{10}$, $B-E_2-E_5-E_8-E_{11}$,
and $A-E_1-E_{10}-E_{11}$. Thus, our configuration which is to be rationally blown down consists of:
\begin{eqnarray*}
R_0&=&7A+6B-4E_1-3E_2-3E_3-3E_4-3E_5-3E_6\\
&&-2E_7-3E_8-3E_9-2E_{10}-2E_{11} \\
R_1&=&E_4-E_7 \\
R_2&=&B-E_1-E_4 \\
R_3&=&A-E_2-E_3.
\end{eqnarray*}
The vector
$H=(229, 226, -143, -113, -113, -86, -87, -87, -86, -87, -87, -58, -58)$
has positive square, and is orthogonal to all the $\{R_i\}_{i=0}^3$.
A rational basis for the cohomology of $(S^2\times S^2) \# 11\mCP^2$
is gotten by completing $R_0$, $R_1$, $R_2$, and $R_3$ with the following
set of spheres with negative square:
\begin{eqnarray*}
S_1&=&E_{10}-E_{11} \\
S_2&=&E_5-E_6 \\
S_3&=&E_8-E_9 \\
S_4&=&E_5-E_8 \\
S_5&=& E_2-E_3 \\
S_6 &=& A-E_1-E_{10} \\
S_7&=&A+B-E_1-E_2-E_5-E_8 \\
S_8&=&2A-2E_4-2E_7-E_{11} \\
S_9&=&2A+2B-E_1-E_2-E_3-E_4-E_7-2E_5-E_6-E_{10}.
\end{eqnarray*}
For this case, the ${\mathrm{Spin}}^c$ structures over $L(25,4)$
which extend over the rational ball can be uniquely extended over
the configuration of spheres in one of the five possible ways:
\begin{equation}
\begin{array}{lrrr}
(5,& 0,& 0,& 0) \\
(-1,& -2,& 0,& 0) \\
(3, &0, &0, &-2) \\
(-5, &0, &0, &0) \\
(1,& 0,& -2,& 0).
\end{array}
\end{equation}
Again, there are $437400$ $\{S_i\}$-adjunctive vectors in $H^2$ with
rational coefficients, which have integral evaluations on each sphere
and which extend over the configuration of spheres $\{R_i\}_{i=0}^3$
as above. Of these, $17496$ correspond to (integral) characteristic
cohomology classes. Of these, 3754 have square $\geq -3$. Finally, of
these, exactly two ($K$ and $-K$) have the evaluations with opposite
sign against $H$ and $H'$, hence correspond to basic classes for
$Q$. Arguing as in Lemma~\ref{lemma:Adjunctive}, we see that these are
the only two basic classes for $Q$.
\bibliographystyle{plain}
\section{Introduction}
\subsection{Motivation and state of the art}
\IEEEPARstart{T}{he} increasing penetration of renewable energy in the traditional power system, and particularly the massive integration of offshore wind farms, whose installed capacity exceeded 18 GW in 2017, will call for the future deployment of multi-terminal HVDC (MT-HVDC) grids. MT-HVDC grids increase the efficiency of power transfer over long distances compared to HVAC solutions, and the reliability of the power supply compared to point-to-point interconnections. Moreover, the use of the Voltage Source Converter (VSC) technology, with independent control of active and reactive power at the AC terminals, is considered an advantage to support weak power systems
\cite{chinaHVDC}.\\
The deployment of large interconnected HVDC grids requires specific power system analyses, similar to those traditionally applied to AC power systems, which however need to be adapted to describe the specific components and operation of MT-HVDC grids.\\
Classic stability analysis in power systems requires three main steps: modelling, load flow calculation and dynamic analysis. In the first step, the grid is modelled in order to capture the main dynamics of the real system. Typically, an accurate yet simplified model is required to reduce the computational complexity, particularly in the case of large and complex systems. In the second step, a load flow analysis is performed in order to obtain the equilibrium point. Numerical algorithms are required in this step due to the non-linear nature of the equations. Finally, the dynamic analysis is performed around the equilibrium point. It is important to highlight that normally the existence of the equilibrium point and convergence of the numerical algorithm are taken for granted, despite the non-linear nature of the problem.
This paper addresses this aspect by identifying exact conditions for the existence and uniqueness of the solution. \\
The related problem of the existence of the equilibrium in electric systems including constant power loads has been previously studied mostly in dc microgrids and distribution-level grid applications \cite{Pflowmit,koguiman,adhoc}. The equilibrium existence was ensured upon compliance with a certain inequality condition for an electric grid with constant power terminals in \cite{koguiman}. In \cite{adhoc} the stability and power flow of dc microgrids were analyzed considering a worst-case scenario. Such contributions focus on the stability of microgrids, and the same approach can be extended to the study of MT-HVDC systems, where not only constant power loads, but also constant power generators and droop-controlled units are present.\\
On the other hand, the use of linearized models for the stability assessment of MT-HVDC systems has been specifically addressed in the literature (e.g. \cite{JonAre}), also including the effect of droop controllers \cite{drops}. Transient stability studies based on extensive numerical simulation have also been proposed for MT-HVDC systems \cite{transient}, as well as power flow analyses (e.g. \cite{classico}). Despite being debated for a long time, the latter still represents an open research subject, as proved, for example, by \cite{pedro2} and \cite{multiport}.\\
However, to the authors' knowledge, all the studies on MT-HVDC grids presented so far take the existence of an equilibrium point for granted without a formal demonstration.
This paper aims at filling this gap by proposing a method to assess the existence of an equilibrium point in an MT-HVDC network, exploiting the analogy with the power flow convergence problem. The equilibrium is represented by a set of non-linear algebraic equations whose solution requires a successive approximations method. The Banach fixed point theorem is used in order to guarantee convergence of the power flow and uniqueness of the solution. The stability study of the equilibrium point is also presented, with the evaluation of a Lyapunov function obtained by Krasovskii's method \cite{sastry,khalil}. The paper represents a significant extension of the contributions included in \cite{ourmthvdc,MMCPItune}: it introduces a grid representation based on graph theory which includes the frequency dependence of transmission line/cable models, and it presents a more realistic model of the Modular Multilevel Converter together with conditions for its simplification, as well as a procedure to find the Lyapunov function with Krasovskii's method based on the solution of a linear matrix inequality problem.
The rest of the paper is organized as follows: in Section \ref{sec:dynamicmodel} the dynamical model of the MT-HVDC grid is presented, including an accurate model of the HVDC lines. Next, in Section \ref{sec:op} the equilibrium point of the grid is studied. After that, the stability of the grid is analyzed using the classic Lyapunov theorem and Krasovskii's method in Section \ref{sec:stability}. Finally, numerical results are presented in Section \ref{sec:computa}, followed by conclusions in Section \ref{se:conclu}.
\section{Dynamical model}\label{sec:dynamicmodel}
\subsection{Model of the Modular Multilevel Converter}
\begin{figure}
\centering
\includegraphics[scale=0.5]{mmc.pdf}
\caption{MMC dynamic model}
\label{fig:mmc}
\end{figure}
The modular multilevel converter is the most promising converter technology for MT-HVDC systems in offshore applications. For the purpose of power system studies, its dynamics are described by the continuous model presented in \cite{MMC1,MMC2,elimpublicable}. Initially, the $n$ submodule dynamic equivalents depicted in Fig. \ref{fig:mmc} are aggregated into an equivalent circuit per arm. The equivalent capacitor is defined as $C_{eq}=C_i/n$, and the current flowing into it is expressed with the Hadamard product (denoted by $\circ$):
\[C_{eq}\frac{d v_{eq}}{dt}=m_{}\circ i_{};\]
where $v_{eq}=(v_{eq,ua},v_{eq,ub},v_{eq,uc},v_{eq,la},v_{eq,lb},v_{eq,lc})^T$, $v_{eq}\in \mathbb{R}^{6\times 1}$, is the vector of aggregated submodule capacitor voltages, $i=(i_{ua},i_{ub},i_{uc},i_{la},i_{lb},i_{lc})^T$ is the vector of arm currents, the sub-indices $\{u,l\}$ denote the upper and lower arms, respectively, and the sub-indices $\{a,b,c\}$ represent the phases. The dynamics of the equivalent arm can be controlled through the modulation index $m=(m_{ua},m_{ub},m_{uc},m_{la},m_{lb},m_{lc})^T$. Moreover, the equivalent arm voltage $v_{ul}=(v_{ua},v_{ub},v_{uc},v_{la},v_{lb},v_{lc})^T$ is proportional to the equivalent capacitor voltage
\[v_{ul}=m\circ v_{eq}.\]
The model in \eqref{eq:uppv} describes the voltage loop of the upper arms, with $v_{u}=(v_{ua},v_{ub},v_{uc})^T$, $v=(v_{a},v_{b},v_{c})^T$, $i_{u}=(i_{ua},i_{ub},i_{uc})^T$, and $1_x$ an all-ones vector of size $3\times 1$.
\begin{equation}
\label{eq:uppv}
L_{a}\frac{di_{u}}{dt}=-v-v_{u}+1_x\frac{v_{dc}}{2}-R_{a}i_{u},
\end{equation}
where $L_a$ and $R_a$ are the arm inductance and resistance, respectively, $v_{dc}$ is the dc voltage, and $v$ is the output voltage. The dynamics of the lower arm are described by \eqref{eq:lowv}, with $v_{l}=(v_{la},v_{lb},v_{lc})^T$ and $i_{l}=(i_{la},i_{lb},i_{lc})^T$.
\begin{equation}
\label{eq:lowv}
L_{a}\frac{di_{l}}{dt}=v_{}-v_{l}+{1_x}\frac{v_{dc}}{2}-R_{a}i_{l}.
\end{equation}
With the addition and subtraction of the upper and lower currents, a new set of differential equations is obtained to represent the dynamics of the converter; this new set facilitates the implementation of the control strategy. First, the grid current is defined as:
\begin{equation}
i_{g}=i_{u}-i_{l}
\end{equation}
where $i_{g}=(i_{ga},i_{gb},i_{gc})^T$ is the current flowing into the grid at each phase, as depicted in Fig.~\ref{fig:mmc}. The addition of the upper and lower currents produces the circulating current (defined as $i_{\Sigma}=(i_{\Sigma, a},i_{\Sigma, b},i_{\Sigma, c})^T$).
\begin{equation}
\label{eq:icirc}
i_{\Sigma}=\frac{1}{2}(i_{u}+i_{l}).
\end{equation}
Following the same procedure for the voltages, the difference of the lower and upper arm voltages produces the voltage that drives the grid current ($e=(e_{a},e_{b},e_{c})^T$); this variable is written in terms of the upper and lower voltages in \eqref{eq:ex}.
\begin{equation}
\label{eq:ex}
e_{}=\frac{1}{2}(-v_{u}+v_{l})
\end{equation}
The voltage $u_{\Sigma}=(u_{\Sigma,a},u_{\Sigma,b},u_{\Sigma,c})^T$ that drives the circulating current is shown in \eqref{eq:usx}.
\begin{equation}
\label{eq:usx}
u_{\Sigma}=\frac{1}{2}(v_{u}+v_l)
\end{equation}
Therefore, the differential equations that model the system per phase are \eqref{eq:ig} and \eqref{eq:is}.
\begin{eqnarray}
\label{eq:ig}
\frac{L_a}{2}\frac{di_{g}}{dt}&=&-\frac{R_a}{2}i_{g}+e-v
\\
\label{eq:is}
L_a \frac{i_{\Sigma}}{dt}&=&-R_a i_{\Sigma}-u_{\Sigma}+{1_x}\frac{v_{dc}}{2}
\end{eqnarray}
The direct- and quadrature-axis representation of the grid currents is obtained by applying the Park transform to the set \eqref{eq:ig} (see the Park transformation in \cite{MMC2}). The system is represented in vector form with the grid current vector $i_{gdq}^T=\left(i_{gd},i_{gq}\right)$, the voltage vector $v_{dq}^T=\left(v_{d},v_{q}\right)$ and the voltage $e_{dq}^T=\left(e_d,e_q\right)$; the electrical angular frequency is $\omega$.
\begin{eqnarray}
\label{eq:igdq}
L_a \frac{di_{gdq}}{dt}=-R_ai_{gdq}+L_aJ_{\omega}i_{gdq}+e_{dq}-v_{dq},
\end{eqnarray}
where $J_{\omega}$ is a skew-symmetric matrix defined as
\[J_{\omega}=\left(
\begin{array}{cc}
0 &-\omega\\
\omega &0
\end{array}
\right).
\]
The circulating currents in the direct and quadrature axes can be obtained by applying the Park transform to a reference frame rotating at $-2\omega$. The vector form of the circulating current is $i_{\Sigma dq}^T=\left(i_{\Sigma d},i_{\Sigma q}\right)$ and the zero-sequence circulating current is $i_{\Sigma z}$. The vector form of the circulating voltage is $u_{\Sigma dq}^T=\left(u_{\Sigma d},u_{\Sigma q}\right)$ and its zero-sequence component is $u_{\Sigma z}$.
\begin{eqnarray}
L_a\frac{di_{\Sigma dq}}{dt}&=&-R_ai_{\Sigma dq}+2L_aJ_{\omega}i_{\Sigma dq}-u_{\Sigma dq},\\
L_a\frac{di_{\Sigma z}}{dt}&=&-R_ai_{\Sigma z}-u_{\Sigma z}+\frac{v_{dc}}{2}.
\end{eqnarray}
\subsubsection{Currents control strategy}
The control of the MMC is realized by applying PI controllers to the grid-side currents, a circulating current suppression controller, and a PI controller for the zero-sequence circulating current. Without loss of generality, a first-order current subsystem modeled as \eqref{eq:igen} is regulated by the general controller described by \eqref{eq:erru} and \eqref{eq:ugen}.
\begin{eqnarray}
\label{eq:igen}
l \frac{di}{dt}&=&-ri+u+d\\
\label{eq:erru}
\frac{d\phi}{dt}&=&i_{ref}-i\\
\label{eq:ugen}
u&=&k_{p}(i_{ref}-i)+k_i\phi-d
\end{eqnarray}
where $i$ is the current variable to be controlled, $\phi$ is the controller state variable, $u$ is the control variable, $d$ is the disturbance of the system, and $l$ and $r$ are the generic inductance and resistance, respectively. The controller proportional gain is $k_{p}$ and the integral gain is $k_i$.
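As a quick sanity check of the closed loop formed by \eqref{eq:igen}--\eqref{eq:ugen}, the following Python sketch integrates the system with forward Euler; all numerical values are placeholders chosen only for illustration.
\begin{verbatim}
import numpy as np

l, r = 5e-3, 0.1           # generic inductance [H] and resistance [Ohm]
kp, ki = 2.0, 50.0         # PI gains
dt, T = 1e-5, 0.3          # Euler step and horizon [s]
i_ref, d = 100.0, 10.0     # current reference [A] and disturbance

i, phi = 0.0, 0.0
for _ in range(int(T / dt)):
    err = i_ref - i
    u = kp * err + ki * phi - d     # eq. (eq:ugen): PI plus cancellation of d
    i += dt * (-r * i + u + d) / l  # eq. (eq:igen): current dynamics
    phi += dt * err                 # eq. (eq:erru): integrator state
print(i)                            # converges to i_ref = 100 A
\end{verbatim}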
\subsubsection{Zero sequence energy model and control}
It has been shown in \cite{MMC2} that, with the appropriate control of the zero\footnote{Without loss of generality, the sub-index $z$ is only used in this subsection to represent the zero sequence of electrical systems.} sequence energy of the MMC, the system balances the power between the AC grid and the DC grid. Moreover, the variable responsible for the DC power side is $i_{\Sigma z}$. Therefore, this paper uses the zero sequence energy model $w_{\Sigma z}$ and its control.
\begin{equation}
\label{eq:wz}
\frac{dw_{\Sigma z}}{dt}\approx 2 u_{\Sigma z} i_{\Sigma z}-\frac{1}{2}(e_{d}i_{gd}+e_q i_{gq})
\end{equation}
The controller calculates the reference current $i_{ref,\Sigma z}$ and is described in \eqref{eq:wu} and \eqref{eq:werr}.
\begin{align}
\label{eq:wu}
i_{ref,\Sigma z}&=\frac{1}{2u_{\Sigma z}}\left(k_{pw}(w_{ref,\Sigma z}-w_{\Sigma z})+k_{iw}\phi_w+P_{d}\right)
\\
\frac{d\phi_{w}}{dt}&=w_{ref,\Sigma z}-w_{\Sigma z}
\label{eq:werr}
\end{align}
where $w_{ref,\Sigma z}$ is the reference zero-sequence energy, $\phi_w$ is the state of the corresponding controller, and $P_{d}=(e_{d}i_{gd}+e_q i_{gq})/2$ is the disturbance of the system \eqref{eq:wz}. The controller proportional and integral gains are $k_{pw}$ and $k_{iw}$, respectively.
\textcolor{black}{
\subsection{Conditions for the use of an approximated model for the MMC}
It is a common procedure in power system analysis to approximate the fast transient behaviour of power electronic converters by an active power injection model, as presented in \cite{milanovic,KOTB201679,lewisugrid,HVDC_EMTP_stab}. This approximation is based on the reasonable assumptions that the converters are tightly regulated, the harmonic distortion is negligible, dc faults are outside the scope of the analysis, the transient response is typically in the range of a few milliseconds, losses can be neglected, and a generic model can be used to represent the dynamics. The control strategy presented above allows the MMC to act as an active power injection system. It is important to notice that the internal MMC energy is kept constant in order to control the converter as an active power source. \\
Therefore, under the assumption that there is perfect regulation of the MMC, the converter model for the network is reduced to \eqref{eq:pinpout}.
\begin{equation}
\label{eq:pinpout}
\tau \frac{di_{pj}}{dt}=-i_{pj}+i_{ref}
\end{equation}
where $i_{pj}$ is the DC current at the power injection nodes, $\tau$ is the time constant that approximates the effect of the current controllers, and $i_{ref}$ is the input reference of the converter.
}
\subsection{Model of the grid}
Let us consider an MT-HVDC grid represented by the terminals $\mathcal{N} = \left\{1,2,...,N \right\}$. Each terminal has a converter with either constant voltage control or constant power control with voltage droop; this is represented by the non-empty disjoint sets $\mathcal{V}$ and $\mathcal{P}$ such that $\mathcal{N}=\left\{ \mathcal{V},\mathcal{P}\right\}$. Each transmission cable is characterized by the model depicted in Fig.~\ref{fig:modelo_linea}, which, according to \cite{CableJS}, is an accurate frequency-dependent model of an HVDC cable. The model can include several parallel RL elements (the figure depicts a model with only three RL branches) together with the shunt capacitance $C_s$ and conductance $G_{ci}$.
\begin{figure}
\centering
\footnotesize
\ctikzset{bipoles/length=0.8cm}
\begin{circuitikz}[scale=0.4]
\draw (0,6) -- (-3,6);
\draw (8,6) -- (11,6);
\draw[very thick] (-3,5.5) -- (-3,6.5) node[above] {$n_1$};
\draw[very thick] (11,5.5) -- (11,6.5) node[above] {$n_2$};
\draw (0,0) node[ground] {} to[C,l_=$C_s$] (0,4) -- (2,4) to [L=$L_3$] (4,4) to [R=$R_3$] (6,4) -- (8,4) to [C,l_=$C_s$] (8,0) node[ground] {};
\draw (-1,0) node[ground]{} to[R=$G_{ci}$](-1,4)-- (0,4);
\draw (9,0) node[ground]{} to[R,l_=$G_{ci}$](9,4)-- (8,4);
\draw (0,4) |- (2,6) to [L=$L_2$] (4,6) to [R=$R_2$] (6,6) -| (8,4) ;
\draw (0,4) |- (2,8) to [L=$L_1$] (4,8) to [R=$R_1$] (6,8) -| (8,4) ;
\end{circuitikz}
\caption{Improved approximation model with a multi-branch $\pi$ model of and HVDC cable connecting nodes $n1$ and $n2$.}
\label{fig:modelo_linea}
\end{figure}
Let $\mathcal{B}$ be the set of RL sub-branches of each HVDC line. Then the grid can be represented by a uniform hypergraph $\mathcal{HG} = \left\{\mathcal{N},\mathcal{E}\right\}$, where $\mathcal{E} \subseteq (\mathcal{N}\times\mathcal{N})\times\mathcal{B}$ is the set of hyper-edges and each hyper-edge contains $\mathcal{B}$ branches, as depicted in Figure \ref{fig:hypergraph}
\begin{figure}
\footnotesize
\centering
\begin{tikzpicture}[x=0.8mm,y=0.8mm]
\fill (0,0) circle (1);
\fill (0,60) circle (1);
\fill (60,0) circle (1);
\fill (60,60) circle (1);
\node at (-5,0) {$n_1$};
\node at (-5,60) {$n_2$};
\node at (65,0) {$n_4$};
\node at (65,60) {$n_3$};
\draw (0,0) to[out=0, in=180] +(20,5) -- +(20,0) to[out=0, in=180] +(40,-5);
\draw (0,0) to[out=0, in=180] +(20,10) -- +(20,0) to[out=0, in=180] +(40,-10);
\draw (0,0) to[out=0, in=180] +(20,15) -- +(20,0) to[out=0, in=180] +(40,-15);
\node at (30,3) {(14)(b3)};
\node at (30,8) {(14)(b2)};
\node at (30,13) {(14)(b1)};
\draw[dashed,rounded corners] (10,0) rectangle +(40,17);
\node at (30,19) {hyper-edge (14) };
\begin{scope}[rotate=90]
\draw (0,0) to[out=0, in=180] +(20,5) -- +(20,0) to[out=0, in=180] +(40,-5);
\draw (0,0) to[out=0, in=180] +(20,10) -- +(20,0) to[out=0, in=180] +(40,-10);
\draw (0,0) to[out=0, in=180] +(20,15) -- +(20,0) to[out=0, in=180] +(40,-15);
\node at (30,3)[ rotate=90] {(12)(b3)};
\node at (30,8) [ rotate=90]{(12)(b2)};
\node at (30,13) [ rotate=90]{(12)(b1)};
\draw[dashed,rounded corners] (10,0) rectangle +(40,17);
\node at (30,19) [rotate=90]{hyper-edge (12) };
\end{scope}
\begin{scope}[yshift=138]
\draw (0,0) to[out=0, in=180] +(20,5) -- +(20,0) to[out=0, in=180] +(40,-5);
\draw (0,0) to[out=0, in=180] +(20,10) -- +(20,0) to[out=0, in=180] +(40,-10);
\draw (0,0) to[out=0, in=180] +(20,15) -- +(20,0) to[out=0, in=180] +(40,-15);
\node at (30,3)[ ] {(23)(b3)};
\node at (30,8) []{(23)(b2)};
\node at (30,13) []{(23)(b1)};
\draw[dashed,rounded corners] (10,0) rectangle +(40,17);
\node at (30,19) []{hyper-edge (23) };
\end{scope}
\node at (30,50) {$\mathcal{N} = \left\{ n_1,n_2,n_3,n_4\right\}$};
\node at (30,45) {$\mathcal{B} = \left\{ (b1),(b2),(b3)\right\}$};
\node at (30,40) {$\mathcal{E} = \left\{ (12)(\mathcal{B}),(14)(\mathcal{B}),(23)(\mathcal{B})\right\}$};
\end{tikzpicture}
\caption{Example of a uniform hypergraph which represents an MT-HVDC grid with four nodes and three HVDC lines.}
\label{fig:hypergraph}
\end{figure}
Let us define the branch-to-node oriented incidence matrix $A = [A_{\mathcal{V}},A_{\mathcal{P}}] \in \mathbb{R}^{\mathcal{E}\times\mathcal{N}}$ as the matrix with $a_{ij} = 1$ if there is a hyper-edge connecting the nodes $i$ and $j$ in the direction $ij$, $a_{ij} = -1$ if there is a hyper-edge connecting the nodes $i$ and $j$ in the direction $ji$, and $a_{ij}=0$ if there is no hyper-edge between $i$ and $j$. The current in each RL sub-branch requires a triple sub-index which represents the sending node, the receiving node and the sub-branch itself.
Let $1_{\mathcal{B}}$ be the all-ones vector of size $\mathcal{B} \times 1$. Therefore, the currents and voltages are given by
\begin{eqnarray}
V_{\mathcal{E}}& =& ( 1_\mathcal{B}\otimes A_{\mathcal{V}}) \cdot V_{\mathcal{V}} + (1_\mathcal{B}\otimes A_{\mathcal{P}}) \cdot V_{\mathcal{P}} \\
I_{\mathcal{V}} &=& ( 1_\mathcal{B}\otimes A_{\mathcal{V}})^{T} \cdot I_{\mathcal{E}} \\
I_{\mathcal{P}} &= &(1_\mathcal{B}\otimes A_{\mathcal{P}})^{T} \cdot I_{\mathcal{E}}
\end{eqnarray}
where $\otimes$ represents the Kronecker product, $V_{\mathcal{V}}\in\mathbb{R}^{\mathcal{V}}$ is the voltage at the voltage-controlled terminals (a value given by the tertiary control), $A_{\mathcal{P}} \in \mathbb{R}^{\mathcal{E}\times\mathcal{P}}$ and $A_{\mathcal{V}} \in \mathbb{R}^{\mathcal{E}\times\mathcal{V}}$ are the sub-matrices of the incidence matrix, and $I_{\mathcal{E}}\in\mathbb{R}^{\mathcal{E}\mathcal{B}\times 1}$ collects the currents in each HVDC line; it is defined as $I_{\mathcal{E}}=(I_{\mathcal{E} 1}^T,...,I_{\mathcal{E} \mathcal{B}}^T)^T$, where the sub-indices represent the sub-branches of the hyper-edges.
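For concreteness, the following Python sketch builds the incidence matrix of the four-node example of Figure \ref{fig:hypergraph} and evaluates the first of the three relations above. The choice of node 3 as the voltage-controlled terminal and the numerical voltages are placeholders introduced purely for the illustration:
\begin{verbatim}
import numpy as np

# hyper-edges (12), (14), (23) of Figure "hypergraph"; rows follow ij
A = np.array([[ 1, -1,  0,  0],
              [ 1,  0,  0, -1],
              [ 0,  1, -1,  0]])
B = 3                                  # RL sub-branches per hyper-edge
ones_B = np.ones((B, 1))

p_idx, v_idx = [0, 1, 3], [2]          # power nodes 1,2,4; voltage node 3
A_P, A_V = A[:, p_idx], A[:, v_idx]

V_P  = np.array([1.0014, 1.0001, 0.9998])   # pu, placeholder values
V_Vc = np.array([1.0])
V_E  = np.kron(ones_B, A_P) @ V_P + np.kron(ones_B, A_V) @ V_Vc
print(V_E.shape)    # (9,): one voltage difference per RL sub-branch
\end{verbatim}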
\begin{figure}[tb]
\footnotesize
\centering
\begin{tikzpicture}[x=1mm, y = 1mm]
\begin{scope} [yshift = -70]
\fill[ left color=green!30, right color = green!50!blue!30, thick] (27,-4) rectangle +(26,22);
\node[right,green!50!blue] at (27,-2) {converter};
\node[right] at (-20,17) {power control node};
\draw[green!50!blue, left color = blue!60!green, right color = blue!30, thick] (11,5) rectangle +(14,5);
\node[white] at (18,7.5) {$\frac{1}{\mathcal{V}_{p}}$};
\draw[green!50!blue, fill = blue!60!green, thick] (4,7.5) circle (2);
\draw[white] (4,7.5) -- +(45:2);
\draw[white] (4,7.5) -- +(-45:2);
\draw[white] (4,7.5) -- +(135:2);
\draw[white] (4,7.5) -- +(-135:2);
\node[white] at (4,6.3) {\tiny{\ }};
\node[white] at (2.8,7.5) {\tiny{+}};
\node[white] at (4,8.8) {\tiny{+}};
\draw[green!50!blue, thick,-latex] (-9,7.5) node[left] {$\mathcal{V}_{p}-V_{ref}$} -- +(11,0);
\draw[green!50!blue, left color = blue!60!green, right color = blue!30, thick] (-7,5) rectangle +(5,5);
\node[align = center, white] at (-4.5,7.5) {$k_p$};
\node[green!50!blue] at (-4.5,12.5) {droop};
\draw[green!50!blue, thick,-latex] (4,14.5) node[right] {$P_{ref}$} -- +(0,-5);
\draw[green!50!blue, thick,-latex] (6,7.5) -- +(5,0);
\draw[green!50!blue, thick,-latex] (25,7.5) -- +(5,0);
\draw[green!50!blue, left color = blue!60!green, right color = blue!30, thick] (30,5) rectangle +(10,5);
\node[align = center, white] at (35,7.5) {$\frac{1}{1+\tau s}$};
\draw[green!50!blue, thick,-latex] (40,7.5) -- +(5,0);
\draw[-, thick] (63,-3) -| + (-15,20) -- +(0,20);
\draw[-, thick] (58,-3) -- +(0,9);
\draw[-, thick] (58,17) -- +(0,-9);
\draw[-, very thick] (56,6) -- +(4,0);
\draw[-, very thick] (56,8) -- +(4,0);
\node at (55,3) {C};
\node at (62,12) {$+$};
\node at (62,7) {$V_{\mathcal{P}}$};
\node at (62,3) {$-$};
\node at (45,5) {$I_{c}$};
\draw[-,fill=white] (45,7.5) -- +(3,-3) -- +(6,0) -- +(3,3) -- cycle;
\draw[-latex] (48,5.5) -- +(0,4);
\draw[-latex] (59,18.5) node[left] {$I_{\mathcal{P}}$} -- +(5,0);
\end{scope}
\end{tikzpicture}
\caption{Converter model based on active power injection for a generic MT-HVDC grid; power control with droop.}
\label{fig:componentes}
\end{figure}
Assuming the converter can be modeled as the active power injection model with a first order system \eqref{eq:pinpout} as shown in Fig. \ref{fig:componentes}, it is possible to write the complete network as:
\begin{align}\label{eq:ie2}
L\frac{d}{dt}I_{\mathcal{E}}& = -R I_{\mathcal{E}}+(1_{\mathcal{B}}\otimes A_{\mathcal{V}}) V_{\mathcal{V}} + ( 1_{\mathcal{B}}\otimes A_{\mathcal{P}}) V_{\mathcal{P}},\\
\label{eq:ic2}
\tau \frac{d}{dt}I_{c}&=-I_c+H(V_{\mathcal{P}}),\\
\label{eq:vp2}
C \frac{d}{dt} V_{\mathcal{P}} &= I_c - ( 1_{\mathcal{B}}\otimes A_{\mathcal{P}})^{T} I_{\mathcal{E}} {-G_c V_{\mathcal{P}}},
\end{align}
where $I_c\in \mathbb{R}^{\mathcal{P}}$ is the converter current vector and $\tau \in \mathbb{R}^{\mathcal{P}\times \mathcal{P}}$ is a diagonal matrix that represents the time constant approximation of the converter current response; $C\in \mathbb{R}^{\mathcal{P}\times\mathcal{P}}$ is the matrix of the parallel capacitors of the converters and the cable model, and $G_c\in \mathbb{R}^{\mathcal{P}\times\mathcal{P}}$ is a diagonal matrix that contains the shunt conductances of the cable model.
$L\in \mathbb{R}^{\mathcal{E}\mathcal{B}\times \mathcal{E}\mathcal{B}}$ is the matrix of the inductive parameters of the hyper-edges; it contains the sub-matrices of inductive values per branch, described as:
\begin{equation}
L=\left(
\begin{array}{ccc}
L_{1}& &\\
& \ddots &\\
&&L_{\mathcal{B}}
\end{array}
\right)
\end{equation}
Similarly, $R\in \mathbb{R}^{\mathcal{E}\mathcal{B}\times \mathcal{E}\mathcal{B}} $ is the matrix of branch resistances:
\begin{equation}
R=\left(
\begin{array}{ccc}
R_{1}& &\\
& \ddots &\\
&&R_{\mathcal{B}}
\end{array}
\right)
\end{equation}
Both $L$ and $R$ are diagonal and positive definite. Furthermore, $H(V_{\mathcal{P}})$ is a vector function that represents the current at the power terminals as a function of their voltage $V_{\mathcal{P}}$ and the droop control $K_\mathcal{P}$, \textit{i.e.},
\begin{equation}
H(V_{\mathcal{P}}) = diag(V_{\mathcal{P}})^{-1} \cdot \left( S + K_\mathcal{P}\cdot V_{\mathcal{P}}\right),
\end{equation}
\noindent with
\begin{equation}
S = P_{ref} - K_\mathcal{P}V_{ref}
\end{equation}
\noindent where $P_{ref}$ and $V_{ref}$ are the power and voltage references given by the tertiary control. It is assumed that all values are given in per unit.
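A direct Python transcription of $H(V_{\mathcal{P}})$ and $S$ reads as follows; this is a sketch only, and the droop gains and reference values are invented for the example:
\begin{verbatim}
import numpy as np

def H(V_P, P_ref, V_ref, K_P):
    # H(V_P) = diag(V_P)^{-1} (S + K_P V_P); division is componentwise
    S = P_ref - K_P @ V_ref
    return (S + K_P @ V_P) / V_P

K_P = np.diag([20.0, 20.0, 20.0])     # assumed droop gains (pu)
print(H(np.ones(3), np.array([0.5, -0.4, 0.2]), np.ones(3), K_P))
\end{verbatim}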
Let us define the state variables $x = (I^T_{\mathcal{E}},I^T_{c},V^T_{\mathcal{P}})^T$; then, the non-linear dynamic system that represents the MT-HVDC grid takes the following form
\begin{equation}
M \dot{x} = \Phi x + h_0(x)+ E_0
\label{eq:modelo_enx}
\end{equation}
with
\begin{equation*}
M=\left(\begin{array}{ccc}
L_{_{\mathcal{B}\mathcal{E} \times \mathcal{B}\mathcal{E}}}&0_{_{\mathcal{B}\mathcal{E} \times \mathcal{P}}} &0_{_{\mathcal{B}\mathcal{E} \times \mathcal{P}}}\\
0_{_{\mathcal{P} \times \mathcal{B}\mathcal{E}}}&\tau_{_{\mathcal{P} \times \mathcal{P}}} &0_{_{\mathcal{P} \times \mathcal{P}}}\\
0_{_{\mathcal{P} \times \mathcal{B}\mathcal{E}}}& 0_{_{\mathcal{P} \times \mathcal{P}}}& C_{_{\mathcal{P} \times \mathcal{P}}}
\end{array}
\right),
\end{equation*}
\begin{equation*}
\Phi=\left( \begin{array}{ccc}
-R_{_{\mathcal{B}\mathcal{E} \times \mathcal{B}\mathcal{E}}} &0_{_{\mathcal{B}\mathcal{E} \times \mathcal{P}}}&(1_{\mathcal{B}}\otimes A_{\mathcal{P}})_{ _{\mathcal{B}\mathcal{E} \times \mathcal{P}}}\\
0_{_{\mathcal{P} \times \mathcal{B}\mathcal{E}}}& -\mathcal{I}_{_{\mathcal{P} \times \mathcal{P}}}&0_{_{\mathcal{P} \times \mathcal{P}}}\\
-(1_{\mathcal{B}}\otimes A_{\mathcal{P}})^T_{_{\mathcal{P} \times \mathcal{B}\mathcal{E}}}&\mathcal{I}_{_{\mathcal{P} \times \mathcal{P}}}& -G_{c_{\mathcal{P} \times \mathcal{P}}}
\end{array}
\right),
\end{equation*}
\begin{equation*}
h_0(x)=\left(
\begin{array}{c}
0 \\
H(V_{\mathcal{P}})\\
0
\end{array}
\right),\ \ \
E_0=\left(
\begin{array}{c}
(1_{\mathcal{B}}\otimes A_{\mathcal{V}} )V_{\mathcal{V}} \\
0\\
0
\end{array}
\right)
\end{equation*}
where $\mathcal{I}_{\mathcal{P}\times \mathcal{P}}\in \mathbb{R}^{\mathcal{P}\times \mathcal{P}}$ is the identity matrix.
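The block structure of $M$ and $\Phi$ translates directly into code. The following sketch, writing $\kappa=1_{\mathcal{B}}\otimes A_{\mathcal{P}}$ for brevity (the same shorthand used later in Section \ref{sec:stability}), assembles both matrices from the primitive data; it is an illustration, not a validated implementation:
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

def assemble(L, R, tau, C, G_c, kappa):
    # L, R: (BE x BE); tau, C, G_c: (P x P); kappa = kron(1_B, A_P)
    nBE, nP = L.shape[0], C.shape[0]
    I_P = np.eye(nP)
    M = block_diag(L, tau, C)
    Phi = np.block([
        [-R,                  np.zeros((nBE, nP)), kappa              ],
        [np.zeros((nP, nBE)), -I_P,                np.zeros((nP, nP)) ],
        [-kappa.T,            I_P,                 -G_c               ],
    ])
    return M, Phi
\end{verbatim}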
\section{Operating point}\label{sec:op}
Before analyzing the stability of the non-linear system (\ref{eq:modelo_enx}), it is important to establish conditions for the existence of an equilibrium point $x_0$.
\begin{lemma}[operating point]\label{lemma:1}
An MT-HVDC network represented as (\ref{eq:modelo_enx}) admits an equilibrium point $x_0$ with the following representation:
\begin{eqnarray*}
-R\cdot I_{\mathcal{E}} +(1_{\mathcal{B}}\otimes A_{\mathcal{V}})\cdot V_{\mathcal{V}} + ( 1_{\mathcal{B}}\otimes A_{\mathcal{P}} )\cdot V_{\mathcal{P}} = 0 \\
-I_c + H(V_{\mathcal{P}}) = 0 \\
H(V_{\mathcal{P}}) - ( 1_{\mathcal{B}}\otimes A_{\mathcal{P}})^{T}\cdot I_{\mathcal{E}} -G_c\cdot V_{\mathcal{P}}= 0
\end{eqnarray*}
\end{lemma}
\begin{proof}
Since $x_0$ is an equilibrium point, set the derivative of $x$ in (\ref{eq:modelo_enx}) equal to zero and simplify the resulting equations.
\end{proof}
\begin{remark} Notice that the operating point does not depend on the inductances or capacitances of the grid; hence, finding the equilibrium point is the same problem as solving the power flow.
\end{remark}
\begin{lemma}
The equilibrium point can be completely characterized by a given value of $V_\mathcal{P}$.
\end{lemma}
\begin{proof}
First, notice that $I_{c}=H(V_\mathcal{P})$, where $H$ is continuous for all $V_\mathcal{P}\neq 0$. Moreover, $R$ is non-singular if the graph is connected, and hence $I_\mathcal{E}$ can be calculated as
\begin{equation*}
I_\mathcal{E} = R^{-1} \left( ( 1_{\mathcal{B}}\otimes A_\mathcal{P})V_\mathcal{P} + ( 1_{\mathcal{B}}\otimes A_\mathcal{V})V_{\mathcal{V}}\right);
\end{equation*}
therefore, we can obtain the equilibrium point from a point $V_\mathcal{P}$ as $x_0=(I_{\mathcal{E}}(V_\mathcal{P}),I_{c}(V_\mathcal{P}),V_\mathcal{P})^T$.
\end{proof}
As a consequence of this lemma, the equilibrium point can be completely defined by a $V_\mathcal{P}$ that fulfills the following:
\begin{equation*}
\begin{split}
H(V_\mathcal{P})-
( 1_\mathcal{B}^T\otimes A_\mathcal{P}^T )R^{-1}( 1_\mathcal{B}\otimes A_\mathcal{P})V_\mathcal{P} \\ -
(1_\mathcal{B}^T\otimes A_\mathcal{P}^T )R^{-1}( 1_\mathcal{B}\otimes A_\mathcal{V})V_{\mathcal{V}} - G_\mathcal{P} V_\mathcal{P} = 0
\end{split}
\end{equation*}
We can simplify this equation as follows:
\begin{equation}
H(V_{\mathcal{P}}) - \Phi_\mathcal{P} V_{\mathcal{P}} - E_{\mathcal{P}} = 0
\label{eq:equlibrio_enV}
\end{equation}
with
\begin{align*}
\Phi_\mathcal{P} &= ( 1_\mathcal{B}^T\otimes A_\mathcal{P}^T )R^{-1}( 1_\mathcal{B}\otimes A_\mathcal{P} ) + G_{\mathcal{P}} \\
E_{\mathcal{P}} &= (1_\mathcal{B}^T\otimes A_\mathcal{P}^T )R^{-1}(1_\mathcal{B}\otimes A_\mathcal{V} )V_{\mathcal{V}}
\end{align*}
Equation (\ref{eq:equlibrio_enV}) is a non-linear algebraic system which may admit multiple roots. However, only the roots inside a ball $B_0=\left\{V_\mathcal{P}: |v_k-1|\leq \delta \right\}$ have a physical meaning, where $\delta$ bounds the admissible deviation of each nodal voltage $v_k$ from the nominal value. Now, we define conditions for the existence of this root:
\begin{proposition} \label{prop:uno}
Equation (\ref{eq:equlibrio_enV}) admits a root $V_\mathcal{P}$ if there exist a $\delta>0$ and an $\alpha<1$ such that
\begin{equation*}
\alpha = \frac{\left\| \Phi_\mathcal{P}^{-1}\right\|\left\| S\right\|}{(1-\delta)^2}
\end{equation*}
This root is unique inside the ball $B_0=\left\{V_\mathcal{P}: |v_k-1|\leq \delta \right\}$ and can be obtained by the successive approximation method.
\end{proposition}
\begin{proof} Define a map $T:B_0\rightarrow B_0$ as
\begin{equation*}
T(V_{\mathcal{P}}) = \Phi_{\mathcal{P}}^{-1} (H(V_{\mathcal{P}})-E_\mathcal{P})
\end{equation*}
Notice that $\Phi_{\mathcal{P}}$ is positive definite, and therefore this map is well defined for all $V_{\mathcal{P}}\neq 0$. Evidently, (\ref{eq:equlibrio_enV}) is equivalent to $ V_{\mathcal{P}} = T(V_{\mathcal{P}}) $, which defines a fixed point problem. Now consider two points $V_{\mathcal{P}},U_{\mathcal{P}}\in B_0$; then we have that
\begin{equation*}
\left\| T(V_{\mathcal{P}}) - T(U_{\mathcal{P}})\right\| \leq \left\| \Phi_\mathcal{P}^{-1} \right\| \left\| (H(V_{\mathcal{P}})-H(U_{\mathcal{P}}))\right\|
\end{equation*}
where $\left\|\cdot\right\|$ is any submultiplicative norm; now,
by using the mean value theorem we can conclude that
\begin{equation*}
\left\| (H(V_{\mathcal{P}})-H(U_{\mathcal{P}}))\right\| \leq \xi \left\| V_{\mathcal{P}} - U_{\mathcal{P}} \right\|
\end{equation*}
with
\begin{equation}
\xi = \sup_{V_\mathcal{P}\in B_0} \left\| \frac{\partial H}{\partial V_\mathcal{P}}\right\|
\end{equation}
In this case, we have
\begin{equation*}
\xi = \frac{\left\| S\right\|}{(1-\delta)^2}
\end{equation*}
Therefore
\begin{equation*}
\left\| T(V_{\mathcal{P}})-T(U_{\mathcal{P}})\right\| \leq \alpha \left\|V_{\mathcal{P}}-U_{\mathcal{P}} \right\|
\end{equation*}
with
\begin{equation*}
\alpha = \frac{\left\| \Phi_\mathcal{P}^{-1}\right\|\left\| S\right\|}{(1-\delta)^2}
\end{equation*}
By using the Banach fixed point theorem \cite{sholomo} we conclude that if $\alpha<1$, then $T$ is a contraction map and there exists a unique fixed point in $B_0$ which can be easily calculated by the method of successive approximations.
\end{proof}
The successive approximation method is simply a Picard iteration starting from any point $V_{\mathcal{P}}^{(0)}\in B_0$ (in this case, $v_k=1$ pu, as usual in power flow applications):
\begin{equation*}
V_{\mathcal{P}}^{({iter}+1)} = T\left(V_{\mathcal{P}}^{({iter})}\right)
\end{equation*}
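A minimal Python sketch of this iteration, mirroring the map $T$ from the proof of Proposition \ref{prop:uno}, is given below; the matrices and reference values must be supplied by the user:
\begin{verbatim}
import numpy as np

def solve_equilibrium(A_P, A_V, R, G_P, K_P, P_ref, V_ref, V_Vc, B,
                      iters=4):
    # Picard iteration V <- Phi_P^{-1} (H(V) - E_P)
    one_B = np.ones((B, 1))
    kP, kV = np.kron(one_B, A_P), np.kron(one_B, A_V)
    Phi_P = kP.T @ np.linalg.solve(R, kP) + G_P
    E_P = kP.T @ np.linalg.solve(R, kV) @ V_Vc
    S = P_ref - K_P @ V_ref
    V = np.ones(A_P.shape[1])          # flat start at 1 pu
    for _ in range(iters):
        V = np.linalg.solve(Phi_P, (S + K_P @ V) / V - E_P)
    return V
\end{verbatim}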
\begin{remark}
Notice that Proposition \ref{prop:uno} not only guarantees the existence of the equilibrium, but also its uniqueness. In addition, it gives a numerical methodology to find it.
\end{remark}
\section{Stability analysis}\label{sec:stability}
After finding the equilibrium point, our next step is to obtain stability conditions for the MT-HVDC grid by using Lyapunov theory. In order to simplify our analysis, let us define the following:
\begin{definition}[Incremental model] The incremental model for a MT-HVDC grid is given by
\begin{equation}
\dot{z}
=
M^{-1}\left(\Phi z+h(z)\right)
\label{eq:incrementalz}
\end{equation}
\noindent where $z = x - {x}_0$ (with $x_0$ the equilibrium point), $x^T=(I^T_{\mathcal{E}}, I^T_c, V^T_{\mathcal{P}})$, the operating point is $x_0^T=(I_{\mathcal{E} 0}^T, I^T_{c 0}, V^T_{\mathcal{P} 0})$, and $h(z)$ is defined as follows
\begin{equation}
h(z)=\left(\begin{array}{c}
0_{_{\mathcal{B}\mathcal{E} \times 1}}\\
H(z_{V_{\mathcal{P}}})_{_{\mathcal{P} \times 1}}-I_{c0}\\
0_{_{\mathcal{P} \times 1}}
\end{array}
\right),
\label{eq:ecuaciongz}
\end{equation}
\end{definition}
where the sub-indices represent the dimensions of the matrices. The function $H(z_{V_{\mathcal{P}}})$ of the incremental voltage variable $z_{V_{\mathcal{P}}}$ is described in \eqref{eq:deltaG1}.
\begin{equation}
\label{eq:deltaG1}
\ H(z_{V_{\mathcal{P}}})=diag\left(\frac{1}{z_{V_{\mathcal{P}}}+V_{\mathcal{P} 0}}\right)S+K_\mathcal{P}\cdot 1_{\mathcal{P}}
\end{equation}
Let us analyze the stability of this incremental model by using Lyapunov theory.
\begin{theorem}[Lyapunov]\label{theo:Lyap}
Let $E$ be an open subset of $\mathbb{R}^n$ containing $x_0$. Suppose that $f\in C^1(E)$ and that $f(x_0) = 0$. Suppose further that there exists a real-valued function $W\in C^1(E)$ satisfying $W(x_0)=0$ and $W(x)>0$ if $x\neq x_0$. Then: a) if $\dot{W}(x)\leq 0$ for all $x \in E$, then $x_0$ is a stable equilibrium point; b) if $\dot{W}(x)< 0$ for all $x \in E - \left\{x_0\right\}$, then $x_0$ is asymptotically stable; and c) if $\dot{W}(x)>0$ for all $x\in E - \left\{x_0\right\}$, then $x_0$ is unstable.
\end{theorem}
\begin{proof}
See \cite{perko}
\end{proof}
The function $W$ in Theorem \ref{theo:Lyap} is called a Lyapunov function. There is no general method to obtain this type of function. Here, we find a Lyapunov function candidate with Krasovskii's method \cite{khalil}, in which $W(z)=\dot{z}^T Q \dot{z}$.
\begin{lemma}\label{lemaKrasovskii}
Consider the system $\dot{z}=f(z)$ with $f(0)=0$. Assume that $f(z)$ is continuously differentiable with Jacobian $[\partial f/\partial z]$. Then the Lyapunov function $W(z)=\dot{z}^TQ\dot{z}$ can be differentiated to give\\
\[
\dot{W}(z)=\dot{z}^T\left\{\left(\frac{\partial f}{\partial z}\right)^TQ+Q\left(\frac{\partial f}{\partial z}\right)\right\}\dot{z}=\dot{z}^T\Psi\dot{z}.
\]
If $Q$ is a constant symmetric, positive definite matrix and $\dot{W}(z)\leq 0$, then the zero solution $z\equiv 0$ is a unique asymptotically stable equilibrium with Lyapunov function $W(z)$.\\
\end{lemma}
\begin{proof}
See \cite{slotine}
\end{proof}
\subsection{Stability for the MT-HVDC system}
\begin{lemma}
The equilibrium of the MT-HVDC network described by \eqref{eq:ie2}, \eqref{eq:ic2} and \eqref{eq:vp2} is asymptotically stable if there exists a matrix $Q=Q^T\succ 0$ such that the matrix $\Psi$ from Lemma \ref{lemaKrasovskii} satisfies $\Psi\preceq 0$, where $\preceq$ represents the L\"{o}wner partial order (i.e., $\Psi$ is negative semidefinite).
\end{lemma}
\begin{proof}
In order to apply the Lyapunov theorem with Krasovskii's method to study the stability of the system, the operating point is shifted to the origin as above. The model is the same as \eqref{eq:incrementalz}, with $M$, $\Phi$ and $h(z)$ as in \eqref{eq:modelo_enx} and \eqref{eq:ecuaciongz}.
The Jacobian for the system is given by
\begin{equation*}
\partial f/\partial z = M^{-1}(\Phi + \partial h(z)/\partial z)
\end{equation*}
that is
\begin{equation}
\frac{\partial f}{\partial z}=M^{-1}
\left(
\begin{array}{ccc}
-R_{_{\mathcal{B}\mathcal{E} \times \mathcal{B}\mathcal{E}}}& 0_{_{\mathcal{B}\mathcal{E} \times \mathcal{P}}}& \kappa _{_{\mathcal{B}\mathcal{E} \times \mathcal{P}}}\\
0_{_{\mathcal{P} \times \mathcal{B}\mathcal{E}}}&-\mathcal{I}_{_{\mathcal{P} \times \mathcal{P}}}&
\left(\frac{\partial H(z_{V_{\mathcal{P}}})}{\partial z_{V_{\mathcal{P}}}}\right)_{_{\mathcal{P} \times \mathcal{P}}}\\
-\kappa^T_{_{\mathcal{P} \times \mathcal{B}\mathcal{E}}}& \mathcal{I}_{_{\mathcal{P} \times \mathcal{P}}}&
-G_{c_{\mathcal{P} \times \mathcal{P}}}
\end{array}
\right),
\end{equation}
where the Jacobian matrix $\partial H(z_{V_{\mathcal{P}}})/\partial z_{V_{\mathcal{P}}}$ is the diagonal matrix described in \eqref{eq:diagdG}:
\begin{equation}
\label{eq:diagdG}
\frac{\partial H(z_{V_{\mathcal{P}}})}{\partial z_{V_{\mathcal{P}}}}=diag\left(-\frac{1}{(z_{V_{\mathcal{P}}}+V_{\mathcal{P} 0})^2}\right)\cdot diag(S),
\end{equation}
and $\kappa=(1_{\mathcal{B}}\otimes A_{\mathcal{P}})$. The constant, symmetric, positive definite matrix $Q \in \mathbb{R} ^{(\mathcal{B}\mathcal{E}+2\mathcal{P})\times(\mathcal{B}\mathcal{E}+2\mathcal{P})}$ can be calculated by solving the following linear matrix inequality (LMI):
\begin{eqnarray}
\label{eq:LMIQ}
\left\{
\begin{array}{l}
Q=Q^T\succ 0\\
\left(\frac{\partial f}{\partial z}\right)_{j}^TQ+Q\left(\frac{\partial f}{\partial z}\right)_{j}\preceq 0, \forall j \in \{1,2,..r\}.
\end{array}
\right.
\end{eqnarray}
where the subindex $j=1,2,...,r$ runs over a set of boundary points of the ball $B_0=\{z_{v_{\mathcal{P}}}\in \mathbb{R}^{\mathcal{P}}: \|z_{v_{\mathcal{P}}}\|\leq \delta_a\}$ (evidently, $z=0$ is inside this set).
The linear matrix inequality is feasible only if each Jacobian $\left(\partial f/\partial z\right)_j$ is Hurwitz. Moreover, the asymptotic stability of the system is proved if the LMI \eqref{eq:LMIQ} is feasible. Then, the matrix $Q$ can be calculated, and the Lyapunov function is $W(z)=\dot{z}^T Q \dot{z}$. Finally, we invoke \textit{LaSalle's} principle \cite{sastry}: $\dot{W}(z)$ is zero only if $\dot{z}=0$, which implies $z=0$ from \eqref{eq:incrementalz}.
\end{proof}
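The feasibility problem \eqref{eq:LMIQ} can be posed, for instance, with CVXPY, a Python analogue of the CVX package cited in Section \ref{sec:computa}; the following is a sketch under the assumption that the Jacobians evaluated at the boundary points of $B_0$ are supplied as a list, and it is not the implementation used for the results below:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def krasovskii_lmi(jacobians, eps=1e-6):
    # find Q = Q^T > 0 with J^T Q + Q J <= 0 for every sampled Jacobian J
    n = jacobians[0].shape[0]
    Q = cp.Variable((n, n), symmetric=True)
    cons = [Q >> eps * np.eye(n)]
    cons += [J.T @ Q + Q @ J << 0 for J in jacobians]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return Q.value if prob.status == "optimal" else None
\end{verbatim}
If a feasible $Q$ is returned, $\Psi$ can then be evaluated at any point of interest as $\Psi=(\partial f/\partial z)^TQ+Q(\partial f/\partial z)$.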
\color{black}
\section{Computational results}\label{sec:computa}
The MT-HVDC grid shown in Fig. \ref{fig:MTDCexample} is used to corroborate the analysis presented in the sections above. The cable model is represented with three RL sub-branches for the series impedance in order to take the frequency dependence into account. Two offshore wind farms and two inland stations were considered. The reference voltages are 1.0 pu. The cable distances, as well as the generated and consumed power, are described in Fig. \ref{fig:MTDCexample}. The system is represented in per unit with $P_{base}=400$ MW and $V_{base (dc)}=400$ kV. Additionally, the parameters of the network and the MMCs are described in the appendix.
For this case, the results of applying Proposition \ref{prop:uno} are as follows:
\begin{align}
\nonumber
\alpha= 0.0068<1\\
\nonumber
\delta= 0.5>0
\end{align}
\begin{figure}[tb]
\footnotesize
\centering
\input{red1.tex}
\caption{Four nodes MT-HVDC grid with two offshore wind farms.}
\label{fig:MTDCexample}
\end{figure}
The successive approximations method was evaluated on this system by executing only four iterations. Results are shown in Fig. \ref{fig:superconvergence}, where convergence is evident with a logarithmic scale on the y-axis. The voltage of the power nodes is $V_{\mathcal{P}}=(V_{1},V_{2},V_{4})^T= (1.0014,1.00010,0.9998)$ pu and $V_{\mathcal{V}}=1.0000$ pu.\\
\begin{figure}[tb]
\begin{tikzpicture}
\begin{axis}[scale only axis,width=7.3cm, height=3.0cm,ymode=log, xmajorgrids,ymajorgrids, yminorgrids,minor y tick num=5,
xlabel={Iterations}, ylabel ={Error}, ymax = 1, ymin=0, xmin=0.9, xmax = 4.1]
\addplot [thick, blue!50!green, mark = square]
coordinates{(1,0.0018)(2,2.527E-6)(3,3.1955E-9) (4,4.3337E-12)};
\end{axis}
\end{tikzpicture}
\caption{Convergence of the successive approximations method applied to the MT-HVDC grid from the initial point $V_\mathcal{P}=(1.00,1.00,1.00)^T$.}
\label{fig:superconvergence}
\end{figure}
In order to study the stability of the MT-HVDC network, we apply the analysis described in Section \ref{sec:stability}, with the evaluation of $\left(\partial f/\partial z\right)_j$ at $2\mathcal{P}=6$ points of the ball with radius $\delta_a=\delta$. The points are located at the intersections between the axes and the ball, as defined above. First, the eigenvalues of $(\partial f/\partial z)_{j}$ have negative real part. The LMI \eqref{eq:LMIQ} is evaluated with a simple procedure in existing MATLAB software \cite{lmimatlab} or \cite{cvx}. Once the LMI problem is feasible, the matrix $Q$ is obtained and $\Psi$ is evaluated for $z=0$. Figures \ref{fig:Qmatrix} and \ref{fig:Psimatrix} show the values and distribution of these matrices by a color-map representation. It is observed from the color map that the matrices $Q$ and $\Psi$ are symmetric. All the eigenvalues of $Q$ are greater than zero, and all the eigenvalues of $\Psi$ are real and lower than zero.
The eigenvalues of $Q$ and $\Psi$ are $\lambda_{Q}, \lambda_{\Psi} \in \mathbb{R}^{\mathcal{B}\mathcal{E}+2\mathcal{P}}$, respectively. Therefore, for the operating point presented in the results above:
$\lambda_{Q}=$\begin{small}
$10^{-4}\times(5, 7, 10, 29, 29, 60, 85,
92, 313, 313, 1449, 2115, 3443, 3982,$\\
$ 4355, 4870, 6493, 9474, 32646, 40632, 54295)$.
\end{small}\\
The matrix $\Psi$ has the following eigenvalues:
$-\lambda_{\Psi}=$
\begin{small}$(0.6678,
0.8096, 4.6681, 1.5217, 4.2254, 3.7405, 3.2104, 3.2122,$\\
$ 3.2420, 3.2417,3.0355, 2.9865, 2.9131, 2.9169, 2.9255, 2.9563,$\\
$2.9562,2.9481, 2.9437, 2.9386, 2.9410 )$.
\end{small}
\begin{figure}[tb]
\centering
\includegraphics[scale=0.8]{Qmatrix.pdf}
\caption{Color-map representation of the matrix $Q$ obtained from the solution of the LMI problem.}
\label{fig:Qmatrix}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[scale=0.8]{Psimatrix.pdf}
\caption{Color-map representation of the matrix $\Psi$ obtained from the solution of the LMI problem.}
\label{fig:Psimatrix}
\end{figure}
The time response results for the four-node MT-HVDC network are shown in Fig. \ref{fig:3nodesresponse}. The coordination from the black-start condition of the grid uses the closing of the lines at the following instants: initially, lines 1-3 and 1-2 are connected; line 3-2 is connected at 5.3 s; the next line to be connected is line 2-4 at 6.5 s; finally, line 3-4 is connected at 7 s. The power references for the converters are activated at 4.5 s for the converter at node 1, at 5.5 s for the converter at node 2 and, finally, at 7.7 s for the converter at node 3. After 7.7 s, the MT-HVDC grid reaches the desired operating point.
\begin{figure}
\centering
\includegraphics[scale=0.8]{test4nodes2.pdf}
\caption{Time response of the MT-HVDC grid with four nodes.}
\label{fig:3nodesresponse}
\end{figure}
\section{Conclusions}\label{se:conclu}
The contribution of this paper is threefold. First, a generalized model of the MT-HVDC grid has been described by the use of graph theory, with hyper-edges capturing the frequency dependence of the cable model. Second, the convergence of the nonlinear algebraic system that describes the operating point of an MT-HVDC grid has been studied, and conditions for the existence (and uniqueness) of this point have been given. Third, Krasovskii's method has been applied to obtain a Lyapunov function, and conditions for the stability of the system have been listed, using the approximate converter model that is typically employed. Finally, the simulation results present the power flow and transient behaviour of a standard network.
\vspace{-0.5cm}
\section{Appendix}
\subsection{Parameters of the dc cable}
The cable model in the network is based on \cite{Freytes2016b}; the parameters are listed in Table \ref{tab:cableparameters}. The parameters $R_{ji}$ and $L_{ji}$ define the cable resistance and inductance of the $j$-th branch, and $G_{l}$ and $C_{l}$ are the conductance and capacitance of the cable, respectively.
\begin{table}
\caption{Parameters of the dc cable.}
\centering
\begin{tabular}{cccc}
Parameter&value&Parameter&value\\
\hline
\ &\ &\ &\ \\
$R_{11}$&1.1724$\cdot 10^{-1}$ [$\Omega$/km]&$L_{11}$&2.2851$\cdot 10^{-4}$ [H/km]\\
$R_{12}$&8.2072$\cdot 10^{-2}$ [$\Omega$/km]&$L_{12}$&1.5522$\cdot 10^{-3}$ [H/km]\\
$R_{13}$&1.1946$\cdot 10^{-2}$ [$\Omega$/km]&$L_{13}$&3.2942$\cdot 10^{-3}$ [H/km]\\
$G_{l}$&7.6330$\cdot 10^{-11}$ [S/km] &$C_{l}$&0.1983$\cdot 10^{-6}$ [F/km]\\
\hline
\end{tabular}
\label{tab:cableparameters}
\end{table}
\subsection{Parameters for the MMC converter}
The parameters of the converters are based on the CIGRE guide \cite{cigrehvdc}. The rated power of each converter is 400 MVA, the AC base voltage is $220$ kV, and the DC base voltage is $400$ kV. The parameters are listed in Table \ref{tab:mmcpar}. The converter transformer is simulated with $L_f$ and $R_f$ as shown in Fig. \ref{fig:mmc}. The inductance of the transformer is $L_f=0.18$ pu, and the resistance is $R_{f}=0.6/100$ pu.
The controller parameters are calculated with the procedure presented in \cite{MMCPItune}, using the pole placement technique. The parameter $\tau$ of the converter is obtained from the closed-loop step response of the circulating current control. Hence, $\tau\approx 1.2$ ms.
\begin{table}
\caption{Parameters of the MMC.}
\centering
\begin{tabular}{cccccc}
Parameter&value&Parameter&value&Parameter&value\\
\hline
$R_{a}$&0.003 pu &
$L_a$&0.15 pu&
$n$&200 \\
$C_{i}$&5 mF &
$C_{pole}$&60 ms &&\\
\hline
\end{tabular}
\label{tab:mmcpar}
\end{table}
\vspace{-0.3cm}
\bibliographystyle{IEEEtran}
\section{Introduction}
The discovery of quasicrystals in Nature \cite{She} emphasised the need for a better understanding of physical diffraction, especially for systems with pure point spectrum. Over the last two decades, a tremendous amount of work has been done in this direction, and the connection between pure point diffraction and almost periodicity has become clear (see for example \cite{bm,BL,Gou,LMS,LR,LSS,LS,LS2,MoSt,SOL,SOL1,NS11,ST} to name a few).
Given a translation bounded measure $\omega$, its diffraction is defined as the Fourier transform $\widehat{\gamma}$ of the autocorrelation measure $\gamma$ \cite{Hof1} (see the monographs \cite{TAO,TAO2,KLS} for general background on mathematical diffraction theory). We say that $\omega$ is pure point diffractive if the diffraction measure $\widehat{\gamma}$ is a pure point measure, and this is equivalent to the strong almost periodicity of the autocorrelation measure $\gamma$ \cite{bm,ARMA,MoSt}. For measures with Meyer set support, this is also equivalent to the (simpler to check) norm almost periodicity of $\gamma$ \cite{bm,NS11}. This makes strong and norm almost periodicity interesting for us.
While strong almost periodicity seems to be the natural concept to study due to the direct connection with pure point diffraction, norm almost periodicity appeared in a natural way in the study of measures coming from cut and project schemes \cite{NS11}, and of the diffraction of measures with Meyer set support \cite{NS12}. Because of this, a better understanding of norm almost periodicity becomes important. It is known that norm almost periodicity is a stronger concept than strong almost periodicity \cite{bm}, and that for measures with Meyer set support the two concepts are equivalent \cite{bm,NS11}. This suggests that there is a deeper connection between these two concepts, a connection which has not yet been investigated. It is our goal in this paper to look more closely at the relation between these two forms of almost periodicity.
Recall that a translation bounded measure $\mu$ is called strongly almost periodic if, for each compactly supported continuous function $f$, the convolution $\mu*f$ is a Bohr almost periodic function. In Theorem~\ref{thm:char_nap}, we prove that a translation bounded measure $\mu$ is norm almost periodic if and only if the set $\{ \mu *f\ |\ f \mbox{ continuous},\, \| f \|_\infty \leqslant 1,\, {\mbox{supp}}(f) \subseteq U \}$, where $U$ is a fixed but arbitrary precompact open set, is equi-Bohr almost periodic (meaning that, for each $\varepsilon >0$, the set of common $\varepsilon$-almost periods of the entire family is relatively dense).
To achieve this characterisation, we provide in Corollary~\ref{C1} and Proposition~\ref{P1} new formulas for $\| \mu \|_U$. We want to emphasise here that, while in the literature this norm is typically defined using a compact set $K$ with non-empty interior, our choice of working with precompact open sets leads to simpler and more useful formulas (see Corollary~\ref{C1} and Proposition~\ref{P1}), and therefore it is, in our opinion, more useful. Moreover, any two precompact sets $X,Y$ with non-empty interior define equivalent norms $\| \cdot \|_X$ and $\| \cdot \|_Y$, respectively, and therefore the choice of a compact set $K$ with non-empty interior or a precompact open set $U$ is irrelevant for the concept of norm almost periodicity.
The second goal of the paper is to study the norm almost periodicity of absolutely continuous measures. We show that, given an absolutely continuous measure $\mu$ with density function $f \in L^1_{\text{loc}}(G)$, the measure $\mu$ is norm almost periodic if and only if $f$ is a Stepanov almost periodic function. We also prove that if the density function is uniformly continuous and bounded, then norm almost periodicity of $\mu$ is also equivalent to the Bohr almost periodicity of $f$ and to the strong almost periodicity of $\mu$.
\medskip
The paper is structured as follows. In Section~\ref{on norm}, we provide in Corollary~\ref{C1}, Proposition~\ref{P1} and Corollary~\ref{C4} various estimates for the norm of a measure. We also prove that the spaces of translation bounded pure point measures, translation bounded absolutely continuous measures and translation bounded singularly continuous measures, respectively, are Banach spaces with respect to this norm. We complete this section by showing that these spaces are not closed with respect to the product topology.
In Section~\ref{SAP vs NAP} we study the connection between norm and strong almost periodicity we mentioned above. We prove one of the main results of the paper in the following Theorem.
\medskip
\noindent \textbf{Theorem~\ref{thm:char_nap}.}
\textit{Let $\mu \in {\mathcal M}^\infty(G)$, let $U\subseteq G$ be an open precompact set, and let $\mathcal F \subseteq {\mathcal F}_U$ be dense in $({\mathcal F}_U, \| \cdot \|_\infty)$. Then, $\mu$ is norm almost periodic if and only if $\mathcal{G}_{{\mathcal F}}:=\{ \mu *g\ |\ g \in {\mathcal F} \}$ is equi-Bohr almost periodic.}
\textit{In particular, $\mu$ is norm almost periodic if and only if the family $\mathcal{G}:=\{ \mu *g\ |\ g \in {\mathcal F}_U \}$ is equi-Bohr almost periodic.}
\smallskip Here and below, for a precompact open set $U$, the set ${\mathcal F}_U$ is defined as
\[
{\mathcal F}_U:= \{g \in C_{\mathsf{c}}(G)\ |\ |g| \leqslant 1_U \}= \{ g \in C_{\mathsf{c}}(G)\ |\ {\mbox{supp}}(g) \subseteq U,\, \| g\|_\infty \leqslant 1 \} \,.
\]
After that, we provide examples of measures $\mu \in \mathcal{S}\hspace*{-2pt}\mathcal{AP}(G)$ for which $ \mu_{\text{pp}}, \mu_{\text{ac}}$ and/or $\mu_{\text{sc}}$ are not strongly almost periodic; see Section~\ref{sap leb}. This is interesting, since norm almost periodicity carries through the Lebesgue decomposition by Corollary~\ref{coro:2}.
\smallskip
In Section~\ref{nap sp}, we take a closer look at norm almost periodic measures of spectral purity. Of special interest to us are norm almost periodic absolutely continuous measures. Here we prove the second main result in the paper.
\medskip
\noindent \textbf{Theorem~\ref{T1}.}
\textit{An absolutely continuous translation bounded measure $\mu=f\,\theta_G$ is norm almost periodic if and only if its density function $f \in L^1_{\text{loc}}(G)$ is $L^1$-Stepanov almost periodic.}
\textit{The mapping $f \mapsto f\, \theta_G$ is a norm preserving isomorphism between the Banach spaces $(\mathcal{S}, \| \cdot \|_U)$ and $(\mathcal{N}\hspace*{-1pt}\mathcal{AP}_{\text{ac}}(G), \| \cdot \|_U)$, where
\[
\mathcal{S}:= \{ f \in L^1_{\text{loc}}(G) \ |\ f \mbox{ is } L^1 \mbox{-Stepanov almost periodic} \} \,.
\]}
We complete the paper by looking at some consequences of these results for the diffraction of measures with Meyer set support.
\section{Preliminaries}
Throughout the paper, $G$ denotes a second countable, locally compact (Hausdorff) Abelian group. The metric on $G$ can be chosen such that it is translation invariant and all the balls are precompact \cite{STRUB}, and we assume that this holds. The associated Haar measure is denoted by $|\cdot|$ or $\theta_G$.
We use the familiar symbols $C_{\text{c}}(G)$ and $C_{\text{u}}(G)$ for the spaces of compactly supported continuous and bounded uniformly continuous functions, respectively, which map from $G$ to ${\mathbb C}$. For any function $g$ on $G$, the functions $T_tg$ and $g^{\dagger}$ are defined by
\begin{displaymath}
(T_tg)(x):=g(x-t)\quad \text{ and } \quad g^{\dagger}(x):=g(-x).
\end{displaymath}
A \textbf{measure} $\mu$ on $G$ is a linear functional on $C_{\text{c}}(G)$ such that, for every compact subset $K\subseteq G$, there is a constant $a_K>0$ with
\begin{displaymath}
|\mu(g)| \leqslant a_{K}\, \|g\|_{\infty}
\end{displaymath}
for all $g\in C_{\text{c}}(G)$ with ${\mbox{supp}}(g) \subseteq K$. Here, $\|g\|_{\infty}$ denotes the supremum norm of $g$. By the Riesz Representation theorem \cite{Reiter,ReiterSte,RUD2}, this definition is equivalent to the classical measure theory concept of regular Radon measure.
For a measure $\mu$ on $G$, we define $T_t\mu$ and $\mu^{\dagger}$ by
\begin{displaymath}
(T_t\mu)(g):= \mu(T_{-t}g)\quad \text{ and } \quad
\mu^{\dagger}(g):= \mu(g^{\dagger}).
\end{displaymath}
\smallskip
Given a measure $\mu$, there exists a positive measure $\left| \mu \right|$ such that, for all $f \in C_{\mathsf{c}}(G)$ with $f \geqslant 0$, we have \cite{Ped}
\[
\left| \mu \right| (f)= \sup \{ \left| \mu (g) \right| \ |\ g \in C_{\mathsf{c}}(G),\, |g| \leqslant f \} \,.
\]
The measure $\left| \mu \right|$ is called the \textbf{total variation of} $\mu$.
\smallskip
Recall that a measure $\mu$ on $G$ is called \textbf{translation bounded} if $\sup_{t\in G}|\mu|(t+K) < \infty$ holds for every compact subset $K\subseteq G$. The space of all translation bounded measures on $G$ is denoted by $\mathcal{M}^{\infty}(G)$. We will denote by ${\mathcal M}^\infty_{\text{pp}}(G), {\mathcal M}^\infty_{\text{ac}}(G)$ and ${\mathcal M}^\infty_{\text{sc}}(G)$ the spaces of translation bounded pure point, translation bounded absolutely continuous and translation bounded singular continuous measures, respectively.
\medskip
Now, as mentioned in the Introduction, there are different notions of almost periodicity.
\begin{definition} \label{def:1}
A function $f \in C_{\text{u}}(G)$ is called \textbf{strongly almost periodic} if the closure of $\{T_tf\ |\ t\in G\}$ is compact in the Banach space $(C_{\mathsf{u}}(G), \| \cdot \|_\infty)$. The space of strongly almost periodic functions on $G$ is denoted by $\operatorname{SAP}(G)$.
\end{definition}
\begin{remark}
Note that a function $f\in C_{\text{u}}(G)$ is strongly almost periodic if and only if it is Bohr almost periodic, i.e. for each $\varepsilon>0$, the set
\begin{displaymath}
\{t\in G\ |\ \|T_tf-f\|_{\infty} < \varepsilon\}
\end{displaymath}
is relatively dense \cite[Prop. 4.3.2]{TAO2}. \hfill$\Diamond$
\end{remark}
Definition~\ref{def:1} carries over to measures.
\begin{definition}
A measure $\mu$ is called \textbf{strongly almost periodic} if, for all $f \in C_{\text{c}}(G)$, the function $f*\mu$ is a strongly almost periodic function. We will denote by $\mathcal{S}\hspace*{-2pt}\mathcal{AP}(G)$ the space of all strongly almost periodic measures.
\end{definition}
Later, we will compare this notion of almost periodicity with the following stronger version.
\begin{definition}
Let $K\subseteq G$ be a compact subset with non-empty interior. A measure $\mu\in\mathcal{M}^{\infty}(G)$ is called \textbf{norm almost periodic} if, for all $\varepsilon > 0$, the set
\[
P_{\varepsilon}^K(\mu) := \{t \in G \ |\ \|\mu -T_t\mu\|_K < \varepsilon\}
\]
is relatively dense in $G$. The space of norm almost periodic measures will be denoted by $\mathcal{N}\hspace*{-1pt}\mathcal{AP}(G)$. Here, for a translation bounded measure $\nu \in {\mathcal M}^\infty(G)$, its $K$-norm (see \cite{bm,NS11} for more details and properties of this norm) is defined as
\[
\| \nu \|_K:= \sup_{x \in G} \left| \nu \right|(x+K) \,.
\]
\end{definition}
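\begin{remark}
As a simple illustration, take $G={\mathbb R}$, $K=[0,1]$ and the Dirac comb $\delta_{{\mathbb Z}}:=\sum_{n \in {\mathbb Z}} \delta_n$. Then
\begin{displaymath}
\| \delta_{{\mathbb Z}} \|_{[0,1]} = \sup_{x\in{\mathbb R}}\, \#\!\left( {\mathbb Z} \cap (x+[0,1]) \right) = 2 \,,
\end{displaymath}
the supremum being attained exactly at $x \in {\mathbb Z}$. Moreover, $T_t \delta_{{\mathbb Z}} = \delta_{{\mathbb Z}}$ for all $t \in {\mathbb Z}$, so ${\mathbb Z} \subseteq P_{\varepsilon}^K(\delta_{{\mathbb Z}})$ for every $\varepsilon>0$, and $\delta_{{\mathbb Z}}$ is norm almost periodic. \hfill$\Diamond$
\end{remark}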
Last but not least, we need to define the convolution of two measures.
\begin{definition}
Let $\mu$ and $\nu$ be two measures on $G$. We say that $\mu$ and $\nu$ are \textbf{convolvable} whenever their \textbf{convolution}
\[
(\mu*\nu)(f) = \int_{G} \int_{G} f(x+y)\ \mbox{d}\mu(x)\, \mbox{d}\nu(y)
\]
exists for all $f\in C_{\text{c}}(G)$.
\end{definition}
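For example, $\delta_t$ and $\mu$ are convolvable for every $t \in G$ and every measure $\mu$, and a direct computation gives $\delta_t * \mu = T_t \mu$.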
\begin{definition} A sequence $(A_n)_{n\in{\mathbb N}}$ of precompact open subsets of $G$ is called a \textbf{van Hove sequence} if, for each compact set $K \subseteq G$, we have
\[
\lim_{n\to\infty} \frac{|\partial^{K} A_{n}|}{|A_{n}|} = 0 \, ,
\]
where the \textbf{$K$-boundary $\partial^{K} A$} of an open set $A$ is defined as
\[
\partial^{K} A := \bigl( \overline{A+K}\setminus A\bigr) \cup
\bigl((\left(G \backslash A\right) - K)\cap \overline{A}\, \bigr) \,.
\]
\end{definition}
Note that every $\sigma$-compact locally compact Abelian group $G$ admits van Hove sequences \cite{Martin2}.
At the end of this section, let us review the standard notions of convergence for measures which we will use below.
\begin{definition}
Let $(\mu_n)_{n\in{\mathbb N}}$ be a sequence of measures on $G$, and let $\mu\in{\mathcal M}(G)$. Then, the sequence $(\mu_n)_{n\in{\mathbb N}}$ converges to $\mu$
\begin{enumerate}
\item[$\bullet$] in the \textbf{vague topology} if $\lim_{n\to\infty} \mu_n(f)=\mu(f)$ for all $f\in C_{\mathsf{c}}(G)$;
\item[$\bullet$] in the \textbf{norm topology} if $\lim_{n\to\infty} \|\mu_n-\mu\|_K=0$ for some (fixed) non-empty and compact set $K\subseteq G$ which is the closure of its interior;
\item[$\bullet$] in the \textbf{product topology} if $\lim_{n\to\infty} \|(\mu_n-\mu)*g\|_{\infty}=0$ for all $g\in C_{\mathsf{c}}(G)$.
\end{enumerate}
These types of convergence are denoted by $\mu_n\to\mu$, $\mu_n\Rightarrow\mu$, $\mu_n\xrightarrow{\pi}\mu$.
\end{definition}
\section{On the norm of measures}\label{on norm}
In this section, we give various estimates on the norm $\| \mu \|_U$ of a measure. Let us start with the following lemma.
\begin{lemma} \label{lem:1}
Let $U$ be an open precompact set, and let $\mu$ a measure on $G$. Then,
\begin{displaymath}
\left| \mu \right| (U)= \sup \{ | \mu (g)|\ |\ g \in C_{\mathsf{c}}(G) ,\, |g| \leqslant 1_U \}.
\end{displaymath}
\end{lemma}
\begin{proof}
$\geqslant $: First, for any such $g$, we have
\begin{displaymath}
\left| \mu (g) \right| \leqslant \left| \mu \right| (|g|) \leqslant \left| \mu \right| (1_U) = \left| \mu \right| (U) \,.
\end{displaymath}
\medskip
\noindent $\leqslant$: Let $\varepsilon >0$ be arbitrary.
By the inner regularity of $|\mu|$, there exists a compact set $K \subseteq U$ such that
\begin{displaymath}
\left| \mu \right| (U) \leqslant \left| \mu \right| (K) +\frac{\varepsilon}{2} \,.
\end{displaymath}
Next, we can find some $f \in C_{\mathsf{c}}(G)$ such that $1_K \leqslant f \leqslant 1_U$, and hence
\begin{displaymath}
\left| \mu \right| (U) \leqslant \left| \mu \right| (f) +\frac{\varepsilon}{2} \,.
\end{displaymath}
Now, since $f \geqslant 0$, we have
\begin{displaymath}
\left| \mu \right| (f)= \sup\{ | \mu (g)|\ |\ g \in C_{\mathsf{c}}(G),\, |g| \leqslant f \} \,.
\end{displaymath}
Therefore, there exists a function $g \in C_{\mathsf{c}}(G)$ such that $|g| \leqslant f$ and
\begin{displaymath}
\left| \mu \right| (f)\leqslant \left| \mu (g) \right| +\frac{\varepsilon}{2} \,.
\end{displaymath}
Thus, one has
\[
\left| \mu (g) \right| \geqslant \left| \mu \right| (f) -\frac{\varepsilon}{2} \geqslant \left| \mu \right| (U) -\varepsilon \,,
\]
and $|g| \leqslant f \leqslant 1_U$. Since $\varepsilon>0$ was arbitrary, this proves the claim.
\end{proof}
As we will often deal with functions of this type, we will use the following notation:
\[
{\mathcal F}_U:= \{g \in C_{\mathsf{c}}(G)\ |\ |g| \leqslant 1_U \}= \{ g \in C_{\mathsf{c}}(G)\ |\ {\mbox{supp}}(g) \subseteq U,\, \| g\|_\infty \leqslant 1 \} \,.
\]
As a consequence we get the following simple result, which will be important in our study of norm almost periodicity.
\begin{coro}\label{C1}
Let $U \subseteq G$ be an open and precompact subset. Then, for all $ \mu \in {\mathcal M}^\infty(G)$, we have
\begin{displaymath}
\| \mu \|_U = \sup_{t \in G} \sup_{ g \in {\mathcal F}_U } \left| \mu(T_t g) \right| = \sup_{ (t,g) \in G \times {\mathcal F}_U} \left| \mu(T_t g) \right| = \sup_{ g \in {\mathcal F}_U } \sup_{t \in G} \left| \mu(T_t g) \right|.
\end{displaymath}
\end{coro}
\begin{proof}
The first equality follows from Lemma~\ref{lem:1}. The second and third equality follow from standard properties of the supremum.
\end{proof}
We next show that each measure $\mu$ induces an operator $T_\mu$ on the space of continuous functions supported inside $-U$, and that $\| \mu \|_U$ is just the operator norm $\| T_{\mu}\|$. This enables us to give alternate formulas for $\| \mu \|_U$.
For simplicity, we write
\begin{displaymath}
C(G:U):= \{ f \in C_{\mathsf{c}}(G)\ |\ {\mbox{supp}}(f) \subseteq U \} \,.
\end{displaymath}
\begin{prop}\label{P1}
Let $\mu \in {\mathcal M}^\infty(G)$, and let $U$ be an open precompact set. Define the operator $T_{\mu}$ by
\begin{displaymath}
T_\mu: (C(G:-U), \| \cdot \|_\infty) \to (C_{\mathsf{u}}(G), \| \cdot \|_\infty) , \quad \
f \mapsto\mu*f \,.
\end{displaymath}
Then, one has
\begin{displaymath}
\|T_\mu\| = \| \mu \|_U \,.
\end{displaymath}
In particular, this gives
\begin{align*}
\| \mu \|_U
&= \sup \{ \| \mu*f \|_\infty\ |\ f \in C(G:-U),\, \| f\|_\infty =1 \} \\
&= \sup \{ \| \mu*f \|_\infty\ |\ f \in C(G:-U),\, \| f\|_\infty
\leqslant 1 \}\\
&= \sup \{ \frac{\| \mu*f \|_\infty}{ \|f \|_\infty}\ |\ f \in C(G:-U),\, f
\not\equiv 0 \} \\
&= \inf \{ C\ |\ \| \mu*f \|_\infty \leqslant C\, \|f\|_\infty \,
\text{ for all } f\in C(G:-U) \} \,.
\end{align*}
\end{prop}
\begin{proof}
First note that $g \mapsto g^\dagger$ defines an isometric isomorphism between $C(G:-U)$ and $C(G:U)$. It follows immediately from Corollary~\ref{C1} that
\begin{displaymath}
\| \mu \|_U
= \sup_{ g \in {\mathcal F}_U }\sup_{t \in G} \big| \mu(T_t g) \big| =
\sup_{ g \in {\mathcal F}_{-U} } \sup_{t \in G} \big| \mu(T_t g^\dagger)
\big|
=\sup_{ g \in {\mathcal F}_{-U} } \sup_{t \in G} \big|(\mu*g)(t) \big|=
\sup_{ g \in {\mathcal F}_{-U} } \|\mu*g \|_\infty \,.
\end{displaymath}
This yields
\begin{equation}\label{eq2}
\| \mu \|_U= \sup_{ g \in {\mathcal F}_{-U} } \|\mu*g \|_\infty \,.
\end{equation}
Now, since $\mu \in {\mathcal M}^\infty(G)$, we have $\mu*g \in C_{\mathsf{u}}(G)$ for all $g \in C_{\mathsf{c}}(G)$ \cite{ARMA,MoSt}. Therefore, $T_\mu$ is well defined, and it is easy to see that $T_\mu$ is linear.
Next, we have
\begin{displaymath}
{\mathcal F}_{-U} = \{ g \in C_{\mathsf{c}}(G)\ |\ {\mbox{supp}}(g) \subseteq -U,\, \| g\|_\infty \leqslant 1 \} = \{ g \in C(G:-U)\ |\ \| g\|_\infty \leqslant 1 \}\,.
\end{displaymath}
Hence, ${\mathcal F}_{-U}$ is the unit ball in the normed space $( C(G:-U), \| \cdot \|_\infty)$. Therefore, by Eq.~\eqref{eq2}, we get
\begin{displaymath}
\| \mu \|_U=\sup \{ \| T_{\mu}(f) \|_\infty\ |\ f \in C(G:-U),\, \| f\|_\infty \leqslant 1 \}= \| T_\mu\| \,.
\end{displaymath}
Finally, the last claim follows from standard equivalent definitions of the operator norm on normed spaces.
\end{proof}
As an immediate consequence, we obtain the next result.
\begin{coro}\label{C5} Let $U$ be an open precompact set, and let $\mathcal F \subseteq \mathcal F_{-U}$ be dense in $(\mathcal F_{-U}, \| \cdot \|_\infty)$. Then, one has
\begin{displaymath}
\| \mu \|_U= \sup \{ \| \mu*f \|_\infty\ |\ f \in \mathcal F \}
\end{displaymath}
\end{coro}
\begin{proof}
With the notation of Proposition~\ref{P1}, since $\mathcal F$ is dense in $(\mathcal F_{-U}, \| \cdot \|_\infty)$ and $(\mathcal F_{-U}, \| \cdot \|_\infty)$ is the unit ball in $(C(G:-U), \| \cdot \|_\infty)$, we get:
\begin{displaymath}
\| \mu \|_U =\|T_\mu\| = \sup_{ f \in \mathcal F} \| T_\mu (f) \| = \sup \{ \| \mu*f \|_\infty\ |\ f \in \mathcal F \} \,. \qedhere
\end{displaymath}
\end{proof}
We next provide a similar estimate for the norm for compact sets, via approximations from above. Let us start with a preliminary lemma.
\begin{lemma}
Let $\mu$ be a positive measure, and let $B$ be a precompact Borel set. Then, we have
\begin{displaymath}
\mu(\overline B) = \inf\{ \mu(f)\ |\ f\in C_{\mathsf{c}}(G),\, f\geqslant 1_B\}.
\end{displaymath}
\end{lemma}
\begin{proof}
On the one hand, we have
\begin{displaymath}
\mu(\overline B) = \mu(1_{\overline B}) \leqslant \mu(f)
\end{displaymath}
for all $f\in C_{\mathsf{c}}(G)$ with $f\geqslant 1_B$, since $f\in C_{\mathsf{c}}(G)$ and $f\geqslant 1_B$ imply $f\geqslant 1_{\overline B}$. Hence, we obtain $ \mu(\overline B) \leqslant \inf\{ \mu(f)\ |\ f\inC_{\mathsf{c}}(G),\, f\geqslant 1_B\}$.
On the other hand, we have
\begin{align*}
\mu(\overline B)
&= \inf\{\mu(U)\ |\ \overline B\subseteq U,\, U \text{ open}\} =\inf\{ \mu(1_U)\ |\ \overline B\subseteq U,\, U \text{ open}\} \\
&\geqslant \inf\{ \mu(f)\ |\ \overline B\subseteq U,\, U \text{ open},
\, f\in C_{\mathsf{c}}(G),\, 1_U\geqslant f\geqslant 1_B\} \\
&\geqslant \inf\{ \mu(f)\ |\ \, f\in C_{\mathsf{c}}(G),\, f\geqslant 1_B\}\,.
\end{align*}
Therefore, the claim follows.
\end{proof}
Consequently, we get the following estimates.
\begin{coro} \label{coro:a}
Let $\mu$ be a positive measure, and let $K\subseteq G$ be a compact set. Then, we have
\begin{displaymath}
\mu(K) = \inf\{ \mu(f)\ |\ f\in C_{\mathsf{c}}(G),\, f\geqslant 1_K\}.
\end{displaymath}
\end{coro}
The next corollary is an immediate consequence.
\begin{coro}\label{C4}
Let $\mu$ be a measure on $G$, and let $K\subseteq G$ be a compact set. Then, we have
\begin{displaymath}
\|\mu\|_K = \sup_{t\in G}\, \inf_{\substack{f\in C_{\mathsf{c}}(G),\\ f\geqslant 1_K}} \, \left| \mu \right| (T_tf) \,.
\end{displaymath}
In particular, if $\mu$ is positive, then we have
\begin{displaymath}
\|\mu\|_K = \sup_{t\in G}\, \inf_{\substack{f\in C_{\mathsf{c}}(G),\\ f\geqslant 1_K}} \, \mu (T_tf) \,.
\end{displaymath}
\end{coro}
\begin{proof}
This follows from Corollary~\ref{coro:a} because
\begin{displaymath}
\|\mu\|_K = \sup_{t\in G}\left| \mu \right| (t+K) = \sup_{t\in G} \, \inf_{\substack{f\in
C_{\mathsf{c}}(G) ,\\ f\geqslant 1_{t+K}}} \, \left|\mu\right|(f) = \sup_{t\in G} \, \inf_{\substack{f\in C_{\mathsf{c}}(G) ,\\ f\geqslant 1_K}} \, \left|\mu\right|(T_tf) \,.
\qedhere
\end{displaymath}
\end{proof}
\begin{remark} When working with precompact open sets, the formula of Corollary~\ref{C1} involves two suprema, which can be interchanged. In contrast, the supremum and infimum in Corollary~\ref{C4} cannot be interchanged. Because of this, it is much easier to work with open precompact sets when estimating $\| \mu \|_U$ than with compact sets, and this is why we make this choice below. \hfill$\Diamond$
\end{remark}
Let us emphasise that our choice of open precompact sets does not matter when working with the norm topology. The following result is proved in \cite{bm} for compact sets and the same proof works for precompact sets.
\begin{lemma}\label{lem:2}
Let $A,B$ be precompact sets with non-empty interior. Then $\| \cdot \|_{A}$ and $\| \cdot \|_{B}$ are equivalent norms on ${\mathcal M}^\infty(G)$.
\end{lemma}
\begin{proof} It is obvious that $\| \cdot \|_{A}$ defines a semi-norm, and since $A$ has non-empty interior, it is a norm.
Now, since $A$ and $B$ are precompact and have non-empty interior, each set can be covered by finitely many translates of the other. Let $N$ be the number of translates needed for both coverings. Then, it is straightforward to see that
\begin{displaymath}
\frac{1}{N}\, \| \cdot \|_A \leqslant \| \cdot \|_B \leqslant N\, \| \cdot \|_A \,. \qedhere
\end{displaymath}
\end{proof}
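\begin{remark}
For instance, on $G={\mathbb R}$ with $A=[0,1]$ and $B=[0,2]$, the set $B$ is covered by two translates of $A$ and $A \subseteq B$, so one may take $N=2$ and obtains
\begin{displaymath}
\| \mu \|_{[0,1]} \leqslant \| \mu \|_{[0,2]} \leqslant 2\, \| \mu \|_{[0,1]} \qquad \text{ for all } \mu \in {\mathcal M}^\infty({\mathbb R}) \,.
\end{displaymath}
\hfill$\Diamond$
\end{remark}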
\smallskip
We complete the section by looking at the completion of various spaces of translation bounded measures with respect to norm and product topologies. First, let us recall the following result.
\begin{theorem}\label{cm Banach} \cite{CRS3}
Let $K \subseteq G$ be any compact set with non-empty interior. Then, the pair $({\mathcal M}^\infty(G), \| \cdot \|_K)$ is a Banach space.
\end{theorem}
Now, Lemma~\ref{lem:2} and Theorem~\ref{cm Banach} imply the next corollary.
\begin{coro}Let $U \subseteq G$ be any open precompact set. Then, $({\mathcal M}^\infty(G), \| \cdot \|_U)$ is a Banach space.
\end{coro}
\smallskip
We next show that the spaces of translation bounded measures of spectral purity are closed in $({\mathcal M}^\infty(G), \| \, \|_U)$ and hence Banach spaces. Let us start with the following result.
\begin{lemma}\label{lem:3}
For all $\alpha \in \{ \text{pp}, \text{ac}, \text{sc} \}$ and all $\mu \in {\mathcal M}^\infty(G)$, we have
\begin{displaymath}
\| \mu_{\alpha} \|_U \leqslant \| \mu \|_U \leqslant \| \mu_{\text{pp}} \|_U + \| \mu_{\text{ac}} \|_U+ \| \mu_{\text{sc}} \|_U \,.
\end{displaymath}
\end{lemma}
\begin{proof} We follow the idea of \cite[Cor.~8.4]{NS12}. By \cite[Thm.~14.22]{HR},
we have
\begin{displaymath}
\left| \mu \right| =\left| \mu_{\text{pp}} \right|+\left| \mu_{\text{ac}} \right|+\left| \mu_{\text{sc}} \right| \,.
\end{displaymath}
The claim follows immediately from this.
\end{proof}
The following statements are immediate consequences of Lemma~\ref{lem:3}.
\begin{coro}
One has
\begin{displaymath}
{\mathcal M}^\infty(G)={\mathcal M}^\infty_{\text{pp}}(G)\oplus {\mathcal M}^\infty_{\text{ac}}(G) \oplus {\mathcal M}^\infty_{\text{sc}}(G) \,.
\end{displaymath}
\end{coro}
\begin{coro}\label{coro:1}
Let $\mu, \mu_n \in {\mathcal M}^{\infty}(G)$, for all $n\in{\mathbb N}$. Then, $\lim_{n\to\infty} \| \mu-\mu_n \|_U = 0$ if and only if
\begin{displaymath}
\lim_{n\to\infty} \| \left( \mu-\mu_n \right)_{\alpha} \|_U =0\quad \ \text{ for all } \alpha \in \{ \text{pp}, \text{ac} , \text{sc} \} \,.
\end{displaymath}
\end{coro}
\begin{coro}\label{coro:2}
Let $\mu \in \mathcal{N}\hspace*{-1pt}\mathcal{AP}(G)$. Then $ \mu_{\text{pp}}, \mu_{\text{ac}},\mu_{\text{sc}} \in \mathcal{N}\hspace*{-1pt}\mathcal{AP}(G)$.
\end{coro}
We can now prove the following result.
\begin{prop}\label{prop:1}
The spaces ${\mathcal M}_{\text{pp}}^\infty(G),\, {\mathcal M}_{\text{ac}}^\infty(G)$ and ${\mathcal M}_{\text{sc}}^\infty(G)$ are closed in $({\mathcal M}^\infty(G), \| \cdot \|_U)$. In particular, $({\mathcal M}_{\text{pp}}^\infty(G), \| \cdot \|_U), ({\mathcal M}_{\text{ac}}^\infty(G), \| \cdot \|_U)$ and $({\mathcal M}_{\text{sc}}^\infty(G), \| \cdot \|_U)$ are Banach spaces.
\end{prop}
\begin{proof}
Let $\alpha \in \{ \text{pp}, \text{ac}, \text{sc} \}$, and let $(\mu_n)_{n\in{\mathbb N}}$ be a sequence in ${\mathcal M}^\infty_{\alpha}(G)$. Assuming that $\mu_n \Rightarrow \mu$ in $({\mathcal M}^\infty(G), \| \cdot \|_U)$, we need to show that $\mu \in {\mathcal M}^\infty_{\alpha}(G)$.
Now, if $\beta \in \{ \text{pp}, \text{ac}, \text{sc} \}$ and $\beta \neq \alpha$, we have $\left( \mu_n \right)_\beta =0$. Therefore, by Corollary~\ref{coro:1}, we get $ \| \mu_{\beta} \|_U=0$ and hence $\mu_{\beta} =0$. As $\mu _\beta=0$ for all $\beta \neq \alpha$, $\beta \in \{ \text{pp}, \text{ac}, \text{sc} \}$, we get $\mu \in {\mathcal M}^\infty_{\alpha}(G)$, as claimed.
\end{proof}
We complete this section by showing that the spaces of pure point, absolutely continuous and singular continuous measures, respectively, are not closed in the product topology. To do so, we first provide a simple lemma which simplifies some of our computations below.
\begin{lemma}\label{lem:4}
Let $\mu_n, \mu \in {\mathcal M}^\infty(G)$, $n\in{\mathbb N}$, be such that $\mu_n \xrightarrow{\pi} \mu$. If $\nu$ has compact support, we have
\begin{displaymath}
\mu_n*\nu \xrightarrow{\pi} \mu*\nu \,.
\end{displaymath}
\end{lemma}
\begin{proof} Let $g \in C_{\mathsf{c}}(G)$. Since $\nu$ has compact support, we have $f:= g*\nu \in C_{\mathsf{c}}(G)$. As $\mu_n \to \mu$ in the product topology and $f \in C_{\mathsf{c}}(G)$, we get
\begin{displaymath}
\lim_{n\to\infty} \| \mu_n *f - \mu*f \|_\infty =0 \,.
\end{displaymath}
Therefore, we have
\begin{displaymath}
\lim_{n\to\infty} \| (\mu_n*\nu) *g - (\mu*\nu)*g \|_\infty =0 \,.
\end{displaymath}
As $g \in C_{\mathsf{c}}(G)$ is arbitrary, the claim follows.
\end{proof}
\smallskip
Now, we look at some examples.
\begin{example}\label{ex1}
Let $\mu_n =\frac{1}{n}\sum_{k=1}^n \delta_{\frac{k}{n}}$, for all $n\in{\mathbb N}$. Then, $\mu_n \xrightarrow{\pi} \ensuremath{\lambda\!\!\!\lambda}_{[0,1]}$.
\end{example}
\begin{proof}
Let $f \in C_{\mathsf{c}}(G)$. Each such $f$ is uniformly continuous. Fix $\varepsilon>0$. Then, there is $N\in{\mathbb N}$ such that
\begin{align*}
\left| (f*\ensuremath{\lambda\!\!\!\lambda}|_{[0,1]})(x) - (f*\mu_n)(x)\right|
&=\left|\int_{0}^1 f(x-y)\ \mbox{d} y - \frac{1}{n} \sum_{k=1}^n f\big(x-
\frac{k}{n}\big) \right| \\
&=\left|\sum_{k=1}^n \int_{\frac{k-1}{n}}^{\frac{k}{n}} f(x-y)\ \mbox{d}
y - \sum_{k=1}^n \int_{\frac{k-1}{n}}^{\frac{k}{n}} f\big(x-
\frac{k}{n}\big)\ \mbox{d} y \right| \\
&\leqslant \sum_{k=1}^n \int_{\frac{k-1}{n}}^{\frac{k}{n}} \left|f(x-y) -
f\big(x- \frac{k}{n}\big) \right| \mbox{d} y \\
&< \sum_{k=1}^n \int_{\frac{k-1}{n}}^{\frac{k}{n}} \varepsilon\ \mbox{d} y \, =\, \varepsilon
\end{align*}
holds independently of $x$, for all $n\geqslant N$, due to the uniform continuity of $f$. Therefore, for all $n >N$, we have
\begin{displaymath}
\|f*\ensuremath{\lambda\!\!\!\lambda}|_{[0,1]}- f*\mu_n\|_{\infty} \leqslant \varepsilon \,. \qedhere
\end{displaymath}
\end{proof}
\smallskip
\begin{example}\label{ex4}
Let $\nu$ be any singular continuous measure of compact support. Then, with $\mu_n$ as in Example \ref{ex1}, the measures $\mu_n*\nu$ are singular continuous, and the sequence $(\mu_n*\nu)_{n\in{\mathbb N}}$ converges in the product topology to the absolutely continuous measure $\nu*\ensuremath{\lambda\!\!\!\lambda}_{[0,1]}$.
\end{example}
\noindent\textit{Proof.}
It is easy to see that $\mu_n*\nu$ is a finite sum of singular continuous measures, which is again singular continuous.
Next, $\nu*\ensuremath{\lambda\!\!\!\lambda}|_{[0,1]}$ is an absolutely continuous measure because
\begin{align*}
(\nu*\ensuremath{\lambda\!\!\!\lambda}|_{[0,1]})(\phi)
&= \int_{{\mathbb R}} \int_{{\mathbb R}} \phi(x+y)\, 1_{[0,1]}(x)\ \mbox{d}\ensuremath{\lambda\!\!\!\lambda}(x)\, \mbox{d}\nu(y) \\
&= \int_{{\mathbb R}} \int_{{\mathbb R}} \phi(x)\, 1_{[0,1]}(x-y)\ \mbox{d}\ensuremath{\lambda\!\!\!\lambda}(x)\, \mbox{d}\nu(y) \\
&= \int_{{\mathbb R}} \int_{{\mathbb R}}\, 1_{[0,1]}(x-y)\ \mbox{d}\nu(y)\ \phi(x)\ \mbox{d}\ensuremath{\lambda\!\!\!\lambda}(x) \\
&= \int_{{\mathbb R}} h(x)\, \phi(x)\ \mbox{d}\ensuremath{\lambda\!\!\!\lambda}(x),
\end{align*}
where $h(x):= (1_{[0,1]}*\nu)(x)$.
Finally, by Lemma~\ref{lem:4}, the sequence $(\mu_n*\nu)_{n\in{\mathbb N}}$ converges in the product topology to $\nu*\ensuremath{\lambda\!\!\!\lambda}_{[0,1]}$. \hfill$\Diamond$
\begin{example} Consider the following measures on ${\mathbb R}^2$:
\begin{displaymath}
\mu_n =\frac{1}{n}\sum_{k=1}^n \delta_{(\frac{k}{n},0)},\quad n\in{\mathbb N} \,.
\end{displaymath}
Then, exactly as in Example~\ref{ex1}, it can be shown that $(\mu_n)_{n\in{\mathbb N}}$ converges in the product topology to the singular continuous measure
\begin{displaymath}
\lambda(f)= \int_0^1 f(x,0)\ \mbox{d} x
\end{displaymath}
for all $f \in C_{\mathsf{c}}({\mathbb R}^2)$. \hfill$\Diamond$
\end{example}
\begin{example} \label{ex2}
Let $(f_\alpha)_{\alpha}$ be an approximate identity for $(C_{\mathsf{u}}(G), \| \cdot \|_\infty)$. Then, the net $(\mu_{\alpha})_{\alpha}$, with $\mu_\alpha = f_\alpha\, \theta_G$, converges in the product topology to $\delta_0$.
\end{example}
\noindent\textit{Proof.}
Since $\mu_{\alpha}*f=f_{\alpha}*f$ and $\delta_0*f=f$ for all $f\in C_{\mathsf{c}}(G)$, the claim follows from \cite[Thm. 1.2.19(b)]{Gra}. \hfill$\Diamond$
\begin{example}\label{ex5}
Let $\nu$ be any singular continuous measure of compact support. Then, with $f_\alpha$ as in Example \ref{ex2}, the measures $\mu_\alpha*\nu$ are absolutely continuous, and $(\mu_{\alpha}*\nu)_{\alpha}$ converges in the product topology to the singular continuous measure $\nu$.
\end{example}
\noindent\textit{Proof.}
It is obvious that $\mu_\alpha*\nu$ are absolutely continuous, and by Lemma~\ref{lem:4}, we have
\begin{displaymath}
\lim_\alpha \mu_\alpha*\nu =\delta_0* \nu =\nu
\end{displaymath}
in the product topology. \hfill$\Diamond$
\medskip
Next, we provide a slight generalisation of Example~\ref{ex2}.
\begin{lemma}\label{lemm gen approximate identity}
Let $G$ be metrisable, and let $(\mu_n)_{n\in{\mathbb N}}$ be a sequence of probability measures such that, for each $\varepsilon >0$, there exists some $N\in{\mathbb N}$ such that, for all $n > N$, we have ${\mbox{supp}}(\mu_n) \subseteq B_{\varepsilon}(0)$. Then, we have
\begin{displaymath}
\lim_{n\to\infty} \mu_n = \delta_0
\end{displaymath}
in the product topology.
\end{lemma}
\begin{proof}
First note that every metrisable topological group admits a translation-invariant metric, and that every second countable locally compact Abelian group is metrisable; we fix such a metric $d$ on $G$.
Let $f \in C_{\mathsf{c}}(G)$ be arbitrary, and fix $\varepsilon >0$. Since $f$ is uniformly continuous, there exists some $\delta >0$ such that, for all $x,y \in G$ with $d(x,y) < \delta$, we have $\left|f(x)-f(y) \right| < \varepsilon$.
By the condition on the support, there exists some $N\in{\mathbb N}$ such that, for all $n >N$, we have ${\mbox{supp}}(\mu_n) \subseteq B_{\delta}(0)$.
Then, since each $\mu_n$ is a probability measure, for all $n >N$ and all $x \in G$, we have
\begin{align*}
\left| (\mu_n*f)(x) - (\delta_0*f)(x) \right|
&= \left| \int_G f(x-y)\ \mbox{d} \mu_n(y) -f(x) \right| \\
&= \left| \int_G f(x-y)\ \mbox{d} \mu_n(y) - \int_G f(x)\ \mbox{d} \mu_n(y) \right| \\
&\leqslant \int_G \left| f(x-y) - f(x) \right|\ \mbox{d} \mu_n(y) \\
&= \int_{B_\delta(0)} \left| f(x-y) - f(x) \right| \mbox{d} \mu_n(y)
\end{align*}
where we used ${\mbox{supp}}(\mu_n) \subseteq B_{\delta}(0)$ in the last step.
Now, since $y \in B_\delta(0)$, we get $d(y,0) < \delta$ and hence, since the metric is translation invariant, $d(x-y,x) <\delta$. By the uniform continuity of $f$, this gives $\left| f(x-y) - f(x) \right| < \varepsilon$ and thus
\begin{displaymath}
\left| (\mu_n*f)(x) - (\delta_0*f)(x) \right| < \varepsilon \
\end{displaymath}
for all $x\in G$ and $n>N$, which implies the claim.
\end{proof}
\begin{remark}
\begin{itemize}
\item[(i)] In Lemma~\ref{lemm gen approximate identity}, the condition that each $\mu_n$ is a probability measure can be weakened to the following: $\mu_n(G)=1$ for all $n\in{\mathbb N}$, and there exists some $C >0$ such that $\left| \mu_n \right|(G) \leqslant C$ for all $n\in{\mathbb N}$.
\item[(ii)] Let $G$ be an arbitrary LCAG, which is not necessarily metrisable, and let $(\mu_\alpha )_\alpha$ be a net of probability measures such that, for each open set $0 \in V \subseteq G$, there exists some $\beta$ such that, for all $\alpha > \beta$, we have ${\mbox{supp}}(\mu_\alpha) \subseteq V $. Then, exactly as in the proof of Lemma~\ref{lemm gen approximate identity}, it can be shown that $(\mu_\alpha)_{\alpha}$ converges in the product topology to $\delta_0$.
\end{itemize}\hfill$\Diamond$
\end{remark}
Now, we provide two examples of singular continuous measures which converge to pure point measures.
\begin{example}\label{ex6}
For all $n\in{\mathbb N}$, let $\mu_n$ be the normalised line integral over the circle $x^2+y^2=\frac{1}{n^2}$ in ${\mathbb R}^2$, that is
\begin{displaymath}
\mu_n(f)=\frac{1}{2 \pi} \int_{0}^{2 \pi} f\Big( \frac{\cos(t)}{n}, \frac{\sin(t)}{n} \Big)\ \mbox{d} t \,.
\end{displaymath}
Then, every $\mu_n$ is a singular continuous measure, and by Lemma~\ref{lemm gen approximate identity}, the sequence $(\mu_n)_{n\in{\mathbb N}}$ converges in the product topology to $\delta_0$. \hfill$\Diamond$
\end{example}
\begin{example}\label{ex7}
Let $\mu$ be any singular continuous probability measure on ${\mathbb R}$ supported inside $[0,1]$.
Define $\mu_n$, for all $n\in{\mathbb N}$, by
\begin{displaymath}
\mu_n(f)=\int_{{\mathbb R}} f\big(\tfrac{x}{n}\big)\ \mbox{d} \mu(x) \,.
\end{displaymath}
Then, each $\mu_n$ is a singular continuous probability measure supported inside $[0, \frac{1}{n}]$, and therefore, by Lemma~\ref{lemm gen approximate identity}, $(\mu_n)_{n\in{\mathbb N}}$ converges in the product topology to $\delta_0$.\hfill$\Diamond$
\end{example}
\section{Strong versus norm almost periodicity}\label{SAP vs NAP}
The purpose of this section is to show that norm almost periodicity is a uniform version of strong almost periodicity.
Recall that, for a measure $\mu$, we can define
\begin{displaymath}
P_{\varepsilon}^U(\mu):= \{ t \in G\ |\ \| T_t \mu -\mu \|_U \leqslant \varepsilon \} \,.
\end{displaymath}
Similarly, for $f \in C_{\mathsf{u}}(G)$, we can define
\begin{displaymath}
P_{\varepsilon}(f):= \{ t \in G\ |\ \| T_t f -f \|_\infty \leqslant \varepsilon \} \,.
\end{displaymath}
\begin{remark}
\begin{itemize}
\item [(i)] Sometimes the set of $\varepsilon$-almost periods is defined with strict inequality. It is easy to see that the notion of almost periodicity is independent of the choice of $\leqslant$ or $<$.
\item [(ii)] Usually, the norm $\| \cdot \|_K$ and norm almost periodicity are defined using compact sets $K$. Working with open precompact sets makes our computations below much simpler.
\end{itemize}\hfill$\Diamond$
\end{remark}
Let us start with the following lemma.
\begin{lemma}\label{C2} Let $U \subseteq G$ be an open precompact set, and let ${\mathcal F} \subseteq {\mathcal F}_U$ be any set which is dense in $({\mathcal F}_U, \| \cdot \|_U)$. Then, we have
\begin{displaymath}
P_{\varepsilon}^U(\mu)= \{ t \in G\ |\ \| T_t (\mu*g^{\dagger}) -\mu*g^{\dagger} \|_{\infty} \leqslant \varepsilon\ \text{ for all }g \in {\mathcal F} \} = \bigcap_{g\, \in\, {\mathcal F}} P_{\varepsilon}(\mu*g^{\dagger}) \,.
\end{displaymath}
In particular
\begin{displaymath}
P_{\varepsilon}^U(\mu)= \{ t \in G\ |\ \| T_t (\mu*g^{\dagger}) -\mu*g^{\dagger} \|_{\infty} \leqslant \varepsilon\ \text{ for all }g \in {\mathcal F}_U \} = \bigcap_{g\, \in\, {\mathcal F}_U} P_{\varepsilon}(\mu*g^{\dagger}) \,.
\end{displaymath}
\end{lemma}
\begin{proof}
Let
\begin{align*}
B_{\varepsilon}&:= \{ t \in G\ |\ \| T_t (\mu*g^{\dagger}) -\mu*g^{\dagger} \|_{\infty} \leqslant \varepsilon\ \text{ for all }g \in {\mathcal F} \} \,, \\
C_{\varepsilon}&: = \bigcap_{g\, \in\, {\mathcal F}} P_{\varepsilon}(\mu*g^{\dagger}) \,.
\end{align*}
The equality $P^U_\varepsilon(\mu)=B_{\varepsilon}$ is an immediate consequence of Corollary~\ref{C5} because
\begin{align*}
\|T_t\mu-\mu\|_{U}
&= \sup_{(x,g)\in G\times {\mathcal F}} \left| (T_t\mu-\mu)(T_xg)\right| = \sup_{(x,g)\in G\times {\mathcal F}} \left| \mu(T_{x-t}g) - \mu(T_xg)\right| \\
&= \sup_{(x,g)\in G\times {\mathcal F}} \left| \big(T_t(\mu*g^{\dagger})\big)(x) - (\mu*g^{\dagger})(x)\right| \\
&= \sup_{g\in{\mathcal F}} \sup_{x \in G} \left| \big(T_t(\mu*g^{\dagger})\big)(x) - (\mu*g^{\dagger})(x)\right| = \sup_{g\in{\mathcal F}} \| T_t(\mu*g^{\dagger}) - \mu*g^{\dagger} \|_{\infty} \,.
\end{align*}
Therefore, we have
\begin{align*}
t\in P_{\varepsilon}^{U}(\mu)
&\iff \|T_t\mu-\mu\|_U \leqslant \varepsilon \\
&\iff \sup_{g\in{\mathcal F}} \| T_t(\mu*g^{\dagger}) - \mu*g^{\dagger} \|_{\infty} \leqslant \varepsilon \\
&\iff \|T_t(\mu*g^{\dagger}) - \mu*g^{\dagger} \|_{\infty}\leqslant \varepsilon \ \ \text{ for all }g\in{\mathcal F} \\
&\iff t \in B_{\varepsilon}.
\end{align*}
The equality $B_{\varepsilon}=C_{\varepsilon}$ follows immediately from the definition of $P_\varepsilon(\mu*g^\dagger)$.
\end{proof}
\begin{remark}
If $U$ is symmetric, i.e. $-U=U$, we can replace $g^{\dagger}$ by $g$.
Since we are interested in norm almost periodicity, which by Lemma~\ref{lem:2} does not depend on the choice of $U$, one could assume without loss of generality that $U=-U$. However, since this would simplify the computations below only marginally, and future applications may require the general case, we do not assume that $U$ is symmetric.\hfill$\Diamond$
\end{remark}
\medskip
Next, we show that to check that a measure is strongly almost periodic, it suffices to use ${\mathcal F}_U$ as the set of test functions.
\begin{prop} \label{prop:char_sap}
Let $\mu \in {\mathcal M}^\infty(G)$. Then, $\mu \in \mathcal{S}\hspace*{-2pt}\mathcal{AP}(G)$ if and only if $\mu*g \in \operatorname{SAP}(G)$ for all $g \in {\mathcal F}_U$.
\end{prop}
\begin{proof}
By definition, the given property is necessary for $\mu$ to be a strongly almost periodic measure. It is also sufficient: if $f \in C_{\mathsf{c}}(G)$, it is easy to show that there exist elements $c_1,\ldots, c_m \in {\mathbb C}$, $t_1,\ldots, t_m \in G$ and $g_1,\ldots, g_m \in {\mathcal F}_U$ such that
\begin{equation}\label{EQ1}
f= \sum_{j=1}^m c_j T_{t_j} g_j \,.
\end{equation}
Indeed, since ${\mbox{supp}}(f)$ is compact, we can find a finite open cover ${\mbox{supp}}(f) \subseteq \bigcup_{j=1}^m (t_j+U)$. By a standard partition of unity argument, we can find $h_1,\ldots, h_m \in C_{\mathsf{c}}(G)$ with ${\mbox{supp}}(h_j) \subseteq t_j+U$ such that $\sum_{j=1}^m h_j(x)=1$ for all $x \in {\mbox{supp}}(f)$. Then, setting $c_j = \| fh_j \|_\infty$ and $g_j =\frac{1}{c_j}\, T_{-t_j}(f h_j)$ for all $j$ with $c_j \neq 0$ (and discarding the indices with $c_j = 0$) gives Eq.~\eqref{EQ1}.
The claim is now obvious.
\end{proof}
We now introduce the concept of equi-Bohr almost periodicity.
\begin{definition}
Let $\mathcal{G} \subseteq \operatorname{SAP}(G)$ be any family of functions. We say that $\mathcal{G}$ is \textbf{equi-Bohr almost periodic} if, for each $\varepsilon >0$, the set
\begin{displaymath}
P_\varepsilon(\mathcal{G})= \{ t \in G\ |\ \|T_tf-f \|_\infty \leqslant \varepsilon\ \text{ for all } f \in \mathcal{G} \}
\end{displaymath}
is relatively dense.
\end{definition}
\begin{remark} It is easy to see that
$
P_\varepsilon(\mathcal{G})=\bigcap_{f\, \in\, \mathcal{G}} P_\varepsilon(f).
$\hfill$\Diamond$
\end{remark}
\smallskip
We can now prove the main result in this section.
\begin{theorem} \label{thm:char_nap}
Let $\mu \in {\mathcal M}^\infty(G)$, let $U\subseteq G$ be an open precompact set, and let $\mathcal F \subseteq {\mathcal F}_U$ be dense in $({\mathcal F}_U, \| \cdot \|_\infty)$. Then, $\mu$ is norm almost periodic if and only if $\mathcal{G}_{{\mathcal F}}:=\{ \mu *g\ |\ g \in {\mathcal F} \}$ is equi-Bohr almost periodic.
In particular, $\mu$ is norm almost periodic if and only if the family $\mathcal{G}:=\{ \mu *g\ |\ g \in {\mathcal F}_U \}$ is equi-Bohr almost periodic.
\end{theorem}
\begin{proof}
This is an immediate consequence of Lemma~\ref{C2}. Indeed, we have
\begin{displaymath}
P_\varepsilon(\mathcal{G}_{{\mathcal F}})=\bigcap_{g \in {\mathcal F}} P_\varepsilon(\mu*g) = P_{\varepsilon}^{-U}(\mu) \,. \qedhere
\end{displaymath}
\end{proof}
\smallskip
\begin{remark}
By combining Proposition~\ref{prop:char_sap} and Theorem~\ref{thm:char_nap}, the following are true for a measure $\mu\in\mathcal{M}^{\infty}(G)$.
\begin{enumerate}
\item[(i)] The measure $\mu$ is strongly almost periodic iff, for all $\varepsilon>0$, the set $P_{\varepsilon}(\mu*g)$ is relatively dense in $G$ for all $g\in {\mathcal F}_U$.
\item[(ii)] The measure $\mu$ is norm almost periodic iff, for all $\varepsilon>0$, the set $\bigcap_{g\,\in\,{\mathcal F}_U} P_{\varepsilon}(\mu*g)$ is relatively dense in $G$.
\end{enumerate}\hfill$\Diamond$
\end{remark}
We next use the results from this section to give simpler proofs for \cite[Prop.~6.2]{NS12} and \cite[Prop.~5.6]{LSS}.
\begin{prop} \cite[Prop.~6.2]{NS12}\label{p2} Let $\mu$ be a norm almost periodic measure and $\nu$ a finite measure. Then, $\mu*\nu$ is norm almost periodic.
\end{prop}
\begin{proof}
Let $\varepsilon>0$. Since $\mu$ is norm almost periodic, there is a relatively dense set $S$ such that $\| f*\mu-T_t (f*\mu) \|_\infty<\frac{\varepsilon}{| \nu | (G)+1}$ for all $t\in S$ and $f \in {\mathcal F}_U$.
Hence, for all $f \in {\mathcal F}_U$ we have
\begin{align*}
\| f*(\mu*\nu)-T_t f*(\mu *\nu)\|_\infty
& =\| (f*\mu-T_t (f*\mu)) *\nu\|_\infty \\
&=\sup_{x \in G}\, \left| \int_G (f*\mu-T_t( f*\mu))(x-y)\ \mbox{d} \nu(y)
\right| \\
&\leqslant \sup_{x \in G}\, \int_G \big|(f*\mu-T_t( f*\mu))(x-y) \big|\ \mbox{d}
|\nu|(y) \\
&\leqslant \| f*\mu-T_t (f*\mu) \|_\infty \, | \nu | (G) < \varepsilon\,.
\end{align*}
Consequently, $\mu*\nu$ is norm almost periodic.
\end{proof}
Next, we give an alternate proof for the following result.
\begin{prop}\cite{LSS}
Let $\mu,\nu\in \mathcal{M}^{\infty}(G)$, and let $\mathcal{A}=(A_n)_{n\in{\mathbb N}}$ be a van Hove sequence such that $\mu\circledast_{\mathcal{A}}\nu$ exists\footnote{Recall that $\mu\circledast_{\mathcal{A}}\nu$ is defined as the vague limit of $\{ \frac{1}{|A_n|}\mu|_{A_n}*\nu|_{A_n}\}$, if the limit exists. Given a van Hove sequence, the limit always exists along a subsequence \cite{LSS}.}. If $\mu$ is norm almost periodic, then $\mu\circledast_{\mathcal{A}}\nu$ is norm almost periodic.
\end{prop}
\begin{proof}
By \cite[Lem. 1.1]{Martin2}, there is a constant $c>0$ such that, for all $g \in {\mathcal F}_{-U}$, we have
\begin{align*}
\|\left(\mu\circledast_{\mathcal{A}}\nu \right.&- \left. T_t(\mu\circledast_{\mathcal{A}}\nu)\right)*g\|_\infty \\
&\leqslant \sup_{x \in G} \limsup_{n\to\infty} \frac{1}{|A_n|} \int_{A_n} \left| \big(\mu*g
- T_t (\mu*g)\big) (x-s)\right|\ \mbox{d} |\nu|(s) \\
&\leqslant \| \mu* g - T_t (\mu*g) \|_\infty\, \sup\Big\{\frac{ | \nu
|(A_n)}{|A_n|} \ \Big|\ n\in{\mathbb N} \Big\} \\
&\leqslant c\, \|\mu-T_t\mu\|_{U} \,.
\end{align*}
Now, this and Corollary~\ref{C1} imply
\begin{align*}
\| \mu\circledast_{\mathcal{A}}\nu - T_t(\mu\circledast_{\mathcal{A}}\nu)\|_U
&= \sup_{g\in{\mathcal F}_U} \sup_{s\in G} \left| \big(\mu\circledast_{\mathcal{A}}\nu
-T_t (\mu\circledast_{\mathcal{A}}\nu)\big)(T_sg) \right| \\
&= \sup_{g\in{\mathcal F}_U} \sup_{s\in G} \left| \big(\big(\mu\circledast_{\mathcal{A}}
\nu -T_t (\mu\circledast_{\mathcal{A}}\nu)\big)*g^{\dagger}\big)(s) \right|
\\
&\leqslant c\, \|\mu-T_t\mu\|_{U} \,,
\end{align*}
which finishes the proof, since norm almost periodicity of $\mu$ provides a relatively dense set of $t$ with $\|\mu-T_t\mu\|_{U} \leqslant \frac{\varepsilon}{c}$.
\end{proof}
\smallskip
We next look at the completeness of $\mathcal{N}\hspace*{-1pt}\mathcal{AP}(G)$.
\begin{prop}
$\mathcal{N}\hspace*{-1pt}\mathcal{AP}(G)$ is closed in $({\mathcal M}^\infty(G), \| \cdot \|_U)$. In particular, $(\mathcal{N}\hspace*{-1pt}\mathcal{AP}(G), \| \cdot \|_U)$ is complete.
\end{prop}
\begin{proof}
Let $(\mu_n)_{n\in{\mathbb N}}$ be a sequence in $\mathcal{N}\hspace*{-1pt}\mathcal{AP}(G)$, and let $\mu \in {\mathcal M}^\infty(G)$ be such that $\mu_n \to \mu$ in $\| \cdot \|_U$. Let $\varepsilon>0$. Then, there exists some $n\in{\mathbb N}$ such that $\| \mu -\mu_n \|_U <\frac{\varepsilon}{3}$.
Since $\mu_n \in \mathcal{N}\hspace*{-1pt}\mathcal{AP}(G)$, the set $P:= \{ t \in G \ |\ \| T_t \mu_n - \mu_n \|_U < \frac{\varepsilon}{3} \}$ is relatively dense. Moreover, for all $t \in P$ we have
\begin{align*}
\| T_t \mu - \mu \|_U
&\leqslant \| T_t \mu - T_t \mu_n \|_U +\| T_t \mu_n - \mu_n \|_U +\|\mu_n - \mu \|_U \\
&<\frac{\varepsilon}{3}+\frac{\varepsilon}{3}+\frac{\varepsilon}{3} \, = \, \varepsilon \,. \qedhere
\end{align*}
\end{proof}
\smallskip
However, note that $\mathcal{N}\hspace*{-1pt}\mathcal{AP}(G)$ is not a Banach space because it is not closed under addition (for instance, one can check that $\delta_{{\mathbb Z}}$ and $\delta_{\sqrt{2}\,{\mathbb Z}}$ are norm almost periodic measures on ${\mathbb R}$ whose sum is not).
Since the intersection of two closed subsets of a topological space is also closed, we get the following consequence.
\begin{coro}\label{C3} For each $\alpha \in \{ pp, ac, sc \}$ the set
\begin{displaymath}
\mathcal{N}\hspace*{-1pt}\mathcal{AP}_{\alpha}(G):= \mathcal{N}\hspace*{-1pt}\mathcal{AP}(G) \cap {\mathcal M}_{\alpha}^\infty(G)
\end{displaymath}
is closed in $({\mathcal M}^\infty(G), \| \cdot \|_U)$.
\end{coro}
\section{Strong almost periodicity and Lebesgue decomposition}\label{sap leb}
Recall that, by Corollary~\ref{coro:2}, given a measure $\mu \in \mathcal{N}\hspace*{-1pt}\mathcal{AP}(G)$ we have $\mu_{\text{pp}}, \mu_{\text{ac}}, \mu_{\text{sc}} \in \mathcal{N}\hspace*{-1pt}\mathcal{AP}(G)$. In this section, we show that the same does not hold for strongly almost periodic measures. First, we will prove the following lemma, which will be our main tool for constructing examples.
\begin{lemma} Let $\mu_n, \mu$ be measures on ${\mathbb R}$ supported inside $[0,1]$, for all $n\in{\mathbb N}$, such that $\mu_n \xrightarrow{\pi} \mu$. Define
\begin{displaymath}
\omega:= \mu+\sum_{j=1}^\infty \left(\delta_{2^j+(2^{j+1}{\mathbb Z})}\right) * \mu_j \,.
\end{displaymath}
Then, $\omega \in \mathcal{S}\hspace*{-2pt}\mathcal{AP}({\mathbb R})$.
\end{lemma}
\begin{proof}
Let $f \in C_{\mathsf{c}}({\mathbb R})$, and let $\varepsilon >0$. Let $M \in {\mathbb N}$ be such that ${\mbox{supp}}(f) \subseteq [-M,M]$.
Since the sequence $(\mu_j)_{j\in{\mathbb N}}$ converges in the product topology to $\mu$, there exists some $N\in{\mathbb N}$ such that, for all $j >N$, we have
\begin{equation}\label{EQ44}
\| \mu_j*f- \mu*f \|_\infty < \frac{\varepsilon}{4M+2} \,.
\end{equation}
In particular, for all $ j,m >N$, we have
\begin{equation}\label{EQ55}
\| \mu_j*f- \mu_m*f \|_\infty < \frac{\varepsilon}{2M+1} \,.
\end{equation}
Next, we show that every element of $2^{N+1}{\mathbb Z}$ is an $\varepsilon$-almost period for $\omega*f$.
Define
\begin{displaymath}
\omega_N:= \sum_{j=1}^N \delta_{2^j+(2^{j+1}{\mathbb Z})} * \mu_j \,.
\end{displaymath}
Then, $\omega_N$ is $2^{N+1}{\mathbb Z}$ periodic. Therefore, to show that $2^{N+1}{\mathbb Z}$ are $\varepsilon$-almost periods for $\omega*f$, it suffices to show that $2^{N+1}{\mathbb Z}$ are $\varepsilon$-almost periods for $(\omega-\omega_N)*f$. Now,
\begin{displaymath}
\omega- \omega_N= \mu+\sum_{j=N+1}^\infty \delta_{2^j+(2^{j+1}{\mathbb Z})} * \mu_j \,.
\end{displaymath}
Next, if we define
\begin{displaymath}
\nu_n :=
\begin{cases}
\mu_{N+1+v_2(n)}, & \text{if } n\in{\mathbb Z}\setminus \{0\}, \\
\mu, & \text{if } n=0,
\end{cases}
\end{displaymath}
where $v_2(n)$ is the $2$-adic valuation of $n$ (so that, for $n \neq 0$, the point $2^{N+1} n$ is an odd multiple of $2^{\,N+1+v_2(n)}$ and thus carries the measure $\mu_{N+1+v_2(n)}$), we can write
\begin{displaymath}
\omega- \omega_N = \sum_{n \in {\mathbb Z}} \delta_{2^{N+1}\cdot n} * \nu_n \,.
\end{displaymath}
Then, for all $k \in {\mathbb Z}$, we have
\begin{align*}
(\omega- \omega_N)*f - T_{2^{N+1}\cdot k} \left( (\omega- \omega_N)*f \right)
& =\sum_{n \in {\mathbb Z}} \delta_{2^{N+1} \cdot n} * \nu_n *f- T_{2^{N+1}\cdot k}
\left( \sum_{n \in {\mathbb Z}} \delta_{2^{N+1} \cdot n}* \nu_n *f\right) \\
&=\sum_{n \in {\mathbb Z}} \delta_{2^{N+1} \cdot n} *\left(\nu_n *f-
\nu_{n-k}*f \right) .
\end{align*}
Now, by Eqs.~\eqref{EQ44} and \eqref{EQ55}, we have
\begin{displaymath}
\| \nu_n *f- \nu_{n-k}*f \|_\infty < \frac{\varepsilon}{2M+1} \,.
\end{displaymath}
Using the fact that ${\mbox{supp}}(f) \subseteq [-M,M]$, we immediately get
\begin{displaymath}
\| (\omega- \omega_N)*f - T_{2^{N+1}\cdot k} \left( (\omega- \omega_N)*f \right) \|_\infty <\varepsilon \,.
\end{displaymath}
This completes the proof.
\end{proof}
\smallskip
Now, Examples~\ref{ex1},~\ref{ex2},~\ref{ex5},~\ref{ex6} and~\ref{ex7} yield the following examples of strongly almost periodic measures for which the components of the Lebesgue decomposition are not strongly almost periodic.
\begin{example}\label{ex3} Let
\begin{displaymath}
\mu:= \ensuremath{\lambda\!\!\!\lambda}_{[0,1]}+\sum_{j=1}^\infty \left(\delta_{2^j+(2^{j+1}{\mathbb Z})} * \bigl( \frac{1}{j} \sum_{k=0}^{j-1} \delta_{\frac{k}{j}} \bigr)\right) .
\end{displaymath}
Then, $\mu \in \mathcal{S}\hspace*{-2pt}\mathcal{AP}({\mathbb R})$ by the lemma above, since, exactly as in Example~\ref{ex1}, $\frac{1}{j} \sum_{k=0}^{j-1} \delta_{\frac{k}{j}} \xrightarrow{\pi} \ensuremath{\lambda\!\!\!\lambda}_{[0,1]}$. However, $\mu_{\text{ac}}=\ensuremath{\lambda\!\!\!\lambda}_{[0,1]}$ has compact support, so it is not strongly almost periodic, and hence neither is $\mu_{\text{pp}} = \mu - \mu_{\text{ac}}$. \hfill$\Diamond$
\end{example}
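\smallskip
\noindent\textit{Numerical check.} The almost periods of the measure $\mu$ of Example~\ref{ex3} can also be tested directly. The sketch below (Python with NumPy; the hat function $f$, the window, the sample size and the truncation $j \leqslant J$ are ad-hoc choices) compares $\mu*f$ with its translate by $t = 2^{N+1}$ at random points; the printed discrepancy is small and shrinks as $N$ grows, in line with the proof of the preceding lemma:
\begin{verbatim}
import numpy as np

def f(u):                                  # hat function supported in [-1,1]
    return np.maximum(1.0 - np.abs(u), 0.0)

u = np.linspace(0.0, 1.0, 2001)            # grid for the Lebesgue part

def mu_conv_f(x, J=14):
    # (mu*f)(x), truncated at j <= J; in the window used below the
    # progressions with j > J are too far away to contribute anyway
    val = np.mean(f(x - u))                # Lebesgue part, |[0,1]| = 1
    for j in range(1, J + 1):
        step = 2 ** (j + 1)
        n0 = round((x - 2 ** j) / step)
        for n in (n0 - 1, n0, n0 + 1):     # only nearby translates matter
            c = 2 ** j + step * n
            val += np.mean(f(x - c - np.arange(j) / j))   # mu_j part
    return val

rng = np.random.default_rng(0)
for N in (3, 6, 9):
    t = 2 ** (N + 1)                       # candidate almost period
    xs = rng.uniform(-200.0, 200.0, 300)
    print(N, max(abs(mu_conv_f(x) - mu_conv_f(x - t)) for x in xs))
\end{verbatim}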
\smallskip
\begin{example}\label{ex9}
Let $\nu$ be a singular continuous measure supported inside $[0,1]$. Define
\begin{align*}
\mu(f):&= \int_{{\mathbb R}} f\big(\tfrac{x}{2}\big)\ \mbox{d} (\nu*\ensuremath{\lambda\!\!\!\lambda}_{[0,1]})(x)\,, \\
\mu_n(f):&= \int_{{\mathbb R}} f\big(\tfrac{x}{2}\big)\ \mbox{d} \big(\nu*(\tfrac{1}{n}\sum_{k=1}^n \delta_{\frac{k}{n}})\big)(x)\,.
\end{align*}
Then, $(\mu_n)_{n\in{\mathbb N}}$ is a sequence of singular continuous measures supported inside $[0,1]$ which, by Example~\ref{ex4} and a simple rescaling, converges in the product topology to the absolutely continuous measure $\mu$. As ${\mbox{supp}}(\mu)\subseteq [0,1]$, it follows that
\begin{displaymath}
\omega:= \mu+\sum_{j=1}^\infty \delta_{2^j+(2^{j+1}{\mathbb Z})} * \mu_j \in \mathcal{S}\hspace*{-2pt}\mathcal{AP}({\mathbb R}) \,.
\end{displaymath}\hfill$\Diamond$
\end{example}
\begin{example}\label{ex8}
For all $n\in{\mathbb N}$, let
\begin{displaymath}
f_n(x):= \max \{ n-n^2\,|x| , 0 \}\quad \text{ and } \quad \mu_n:=f_n\,\ensuremath{\lambda\!\!\!\lambda}\,.
\end{displaymath}
Then, it is trivial to see that $(f_n)_{n\in{\mathbb N}}$ is an approximate identity for the convolution on ${\mathbb R}$. Therefore, by Example~\ref{ex2} and the lemma above (whose proof applies verbatim to measures supported inside $[-1,1]$), we have
\begin{displaymath}
\mu= \delta_0+ \sum_{j=1}^\infty \delta_{2^j+(2^{j+1}{\mathbb Z})} * \mu_j \in \mathcal{S}\hspace*{-2pt}\mathcal{AP}({\mathbb R}) \,.
\end{displaymath}
This is a measure with non-trivial pure point and absolutely continuous components, and trivial singular continuous component. Since $\mu_{\text{pp}}=\delta_0$ has compact support, neither $\mu_{\text{pp}}$ nor $\mu_{\text{ac}}$ is strongly almost periodic.\hfill$\Diamond$
\end{example}
\begin{example}
Let $\nu$ be a singular continuous measure supported inside $[0,1]$, and let $f_n$ be as in Example \ref{ex8}. Define
\begin{align*}
\mu(f):&= \int_{{\mathbb R}} f\big(\tfrac{x}{2}\big)\ \mbox{d} \nu (x)\,, \\
\mu_n(f):&= \int_{{\mathbb R}} f\big(\tfrac{x}{2}\big)\ \mbox{d} (\nu*f_n)(x)\,.
\end{align*}
Then, $(\mu_n)_{n\in{\mathbb N}}$ is a sequence of absolutely continuous measures supported inside $[-1,1]$ which, by Example~\ref{ex5} and a simple rescaling, converges in the product topology to the singular continuous measure $\mu$. As ${\mbox{supp}}(\mu)\subseteq [0,\frac{1}{2}]\subseteq [-1,1]$, it follows that
\begin{displaymath}
\omega:= \mu+\sum_{j=1}^\infty \delta_{2^j+(2^{j+1}{\mathbb Z})} * \mu_j \in \mathcal{S}\hspace*{-2pt}\mathcal{AP}({\mathbb R}) \,.
\end{displaymath}
\hfill$\Diamond$
\end{example}
\begin{example}
Let $\mu$ be any singular continuous probability measure on ${\mathbb R}$ supported inside $[0,1]$.
Define $\mu_n$ by
\begin{displaymath}
\mu_n(f)=\int_{{\mathbb R}} f\big(\tfrac{x}{n}\big)\ \mbox{d} \mu(x) \,,
\end{displaymath}
for all $n\in{\mathbb N}$. Then, each $\mu_n$ is a singular continuous probability measure supported inside $[0, \frac{1}{n}]$, and, by Example~\ref{ex7} and the lemma above,
\begin{displaymath}
\omega:= \delta_0 +\sum_{j=1}^\infty \delta_{2^j+(2^{j+1}{\mathbb Z})} * \mu_j \in \mathcal{S}\hspace*{-2pt}\mathcal{AP}({\mathbb R}) \,.
\end{displaymath}
\hfill$\Diamond$
\end{example}
\begin{example} Let $\mu$ be the measure from Example~\ref{ex3} and $\omega$ be the measure from Example~\ref{ex9}. Define
\begin{displaymath}
\nu:= \mu+\omega \,.
\end{displaymath}
Then $\nu \in \mathcal{S}\hspace*{-2pt}\mathcal{AP}({\mathbb R})$, but it follows from Examples~\ref{ex3} and~\ref{ex9} that neither $\nu_{\text{pp}}$ nor $\nu_{\text{sc}}$ is strongly almost periodic. Moreover, $\nu_{\text{ac}}$ has compact support, and hence it is not strongly almost periodic either.\hfill$\Diamond$
\end{example}
\section{Norm almost periodic measures of spectral purity}\label{nap sp}
Here, we briefly look at each of the sets $\mathcal{N}\hspace*{-1pt}\mathcal{AP}_{\text{pp}}(G)$, $\mathcal{N}\hspace*{-1pt}\mathcal{AP}_{\text{ac}}(G)$ and $\mathcal{N}\hspace*{-1pt}\mathcal{AP}_{\text{sc}}(G)$.
\subsection{On absolutely continuous norm almost periodic measures}
First, we give a characterisation of norm almost periodicity for absolutely continuous measures in terms of $L^1$-Stepanov almost periodicity.
Let us first recall that a function $f \in L^1_{\text{loc}}({\mathbb R})$ is called \textbf{Stepanov almost periodic} if, for each $\varepsilon >0$, the set
\begin{displaymath}
\Big\{ t\in {\mathbb R}\ \big|\ \sup_{x \in {\mathbb R}} \int_{x}^{x+1} \left| f(s)- f(s-t) \right|\ \mbox{d} s \leqslant \varepsilon \Big\}
\end{displaymath}
is relatively dense. It is well known that working over intervals of arbitrary length does not change the class of Stepanov almost periodic functions \cite{Che}.
Let us first extend this definition to arbitrary locally compact Abelian groups.
\begin{definition}
Let $G$ be a LCAG, and let $U \subseteq G$ be any non-empty precompact open set. A function $f \in L^1_{\text{loc}}(G)$ is called \textbf{$L^1$-Stepanov almost periodic} (with respect to $U$) if, for each $\varepsilon >0$, the set
\begin{displaymath}
\Big\{ t\in G\ \big|\ \sup_{x \in G} \frac{1}{|U|} \int_{x+U} \left| f(s)- f(s-t) \right|\ \mbox{d} s \leqslant \varepsilon \Big\}
\end{displaymath}
is relatively dense.
\end{definition}
\begin{remark}
Each non-empty precompact open set $U$ defines a norm $\| \cdot \|_U$ on the space $BL^1_{\text{loc}}(G):= L^1_{\text{loc}}(G) \cap {\mathcal M}^\infty(G)$ via
\begin{displaymath}
\| f\|_U :=\sup_{x \in G} \frac{1}{|U|}\int_{x+U} \left| f (s) \right|\ \mbox{d} s \,.
\end{displaymath}
An immediate computation shows that $BL^1_{\text{loc}}(G)= \{ f \in L^1_{\text{loc}}(G) \ |\ \| f \|_U <\infty \}$. Moreover, any $L^1$-Stepanov almost periodic function belongs to $BL_{\text{loc}}^1(G)$, see \cite{Spi}.
It is easy to see that different precompact open sets define equivalent norms, and that a function $f \in BL^1_{\text{loc}}(G)$ is $L^1$-Stepanov almost periodic if and only if, for each $\varepsilon >0$, the set
\begin{displaymath}
\{ t \in G\ |\ \| f- T_tf \|_U \leqslant \varepsilon \}
\end{displaymath}
is relatively dense.
Also, we will see below that the norm $\| f\|_U$ defined here coincides, up to the constant factor $|U|$, with the measure norm $\| f\, \theta_G \|_U$.
For more details on Stepanov almost periodic functions on LCAG see \cite{Spi}.\hfill$\Diamond$
\end{remark}
\begin{lemma}\label{L3} \cite[Sec. 13.16.3]{Die}
Let $f \in L^1_{\text{\text{loc}}}(G)$ be arbitrary. Then, one has
\begin{displaymath}
\left| f\, \theta_G \right| = \left| f \right|\, \theta_G \,.
\end{displaymath}
In particular, we obtain
\begin{displaymath}
\| f\, \theta_G \|_U = \sup_{x \in G} \int_{x+U} \left| f (s) \right|\ \mbox{d} s \,.
\end{displaymath}
\end{lemma}
The following is an immediate consequence of Proposition~\ref{prop:1}.
\begin{lemma}
The mapping $f \mapsto f\, \theta_G$ is a linear bijection between $(BL^1_{\text{loc}}(G), \| \cdot \|_U)$ and $({\mathcal M}^\infty_{\text{ac}}(G), \| \cdot \|_U)$. Moreover, one has
\[
\|f\, \theta_G\|_U = |U|\, \|f\|_U \,.
\]
In particular, $(BL^1_{\text{loc}}(G), \| \cdot \|_U)$ is a Banach space.
\end{lemma}
As an immediate consequence, we get the following result.
\begin{theorem}\label{T1}
An absolutely continuous translation bounded measure $\mu=f\,\theta_G$ is norm almost periodic if and only if its density function $f \in L^1_{\text{loc}}(G)$ is $L^1$-Stepanov almost periodic.
The mapping $f \mapsto f \theta_G$ is an isomorphism between the Banach spaces $(\mathcal{S}, \| \cdot \|_U)$ and $(\mathcal{N}\hspace*{-1pt}\mathcal{AP}_{\text{ac}}(G), \| \cdot \|_U)$, where
\begin{displaymath}
\mathcal{S}:= \{ f \in L^1_{\text{loc}}(G) \ |\ f \mbox{ is } L^1 \mbox{-Stepanov almost periodic} \} \,.
\end{displaymath}
\end{theorem}
\begin{proof} First, let us note that $\mathcal{S}$ is a vector space by \cite{Spi}.
It is easy to see that the above mapping is linear and onto, and therefore $\mathcal{N}\hspace*{-1pt}\mathcal{AP}_{\text{ac}}(G)$ is a vector space, which is complete by Corollary~\ref{C3}. The rest of the claims are now obvious.
\end{proof}
Finally, for measures with uniformly continuous and bounded Radon--Nikodym density, we get the following simple characterisation.
\begin{prop} Let $f \in C_{\mathsf{u}}(G)$, and let $\mu:= f\, \theta_G$. Then, the following statements are equivalent:
\begin{itemize}
\item [(i)] $\mu$ is norm almost periodic,
\item [(ii)] $\mu$ is strongly almost periodic,
\item [(iii)] $f$ is Bohr almost periodic,
\item [(iv)] $f$ is $L^1$-Stepanov almost periodic.
\end{itemize}
\end{prop}
\begin{proof} (i)$\iff$(iv): This is Theorem~\ref{T1}.
\smallskip
\noindent (ii)$\iff$(iii): This follows from \cite[Prop.~4.10.5 (i)]{MoSt}.
\smallskip
\noindent (i)$\iff$(iii): This follows from \cite[Prop.~5.4.6]{NS11}.
\end{proof}
\subsection{Pure point norm almost periodic measures}
The pure point norm almost periodic measures are well understood due to the following characterisation.
\begin{theorem}\label{t1}\cite{NS11,NS12}
\begin{enumerate}
\item[(i)]Let $\mu$ be a pure point norm almost periodic measure. Then, there exists a CPS $(G, H, {\mathcal L})$ and a continuous function $h \in C^{}_{0}(H)$ such that
\begin{displaymath}
\mu = \sum_{(x,x^\star)\, \in\, {\mathcal L}} h(x^\star)\, \delta_x =: \omega_h \,.
\end{displaymath}
\item[(ii)] Let $(G, {\mathbb R}^d, {\mathcal L})$ be a CPS and $h \in \mathcal{S}({\mathbb R}^d)$. Then, $\omega_h$ is a norm almost periodic measure.
\end{enumerate}
\end{theorem}
This allows us to construct many examples of such measures.
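For concreteness, the following sketch (Python with NumPy; the golden-ratio cut and project scheme and the Gaussian weight are arbitrary illustrative choices, not tied to any particular model) lists a few atoms of $\omega_h$ for the CPS $({\mathbb R},{\mathbb R},{\mathcal L})$ with ${\mathcal L}=\{(m+n\tau, m+n\tau^\star)\ |\ m,n\in{\mathbb Z}\}$, $\tau=(1+\sqrt 5)/2$, $\tau^\star=(1-\sqrt 5)/2$; since $h(y)={\rm e}^{-y^2}$ is a Schwartz function, $\omega_h$ is norm almost periodic by Theorem~\ref{t1}(ii):
\begin{verbatim}
import numpy as np

tau, taus = (1 + 5 ** 0.5) / 2, (1 - 5 ** 0.5) / 2
h = lambda y: np.exp(-y ** 2)            # Schwartz weight on internal space

atoms = []
for m in range(-30, 31):                 # truncation; atoms with larger
    for n in range(-30, 31):             # |n| carry negligible weight here
        x, xstar = m + n * tau, m + n * taus
        if abs(x) <= 10.0:               # physical-space window
            atoms.append((x, h(xstar)))  # atom of omega_h: weight h(x*)
for x, w in sorted(atoms, key=lambda a: abs(a[0]))[:10]:
    print(f"{x:+.6f}   {w:.3e}")
\end{verbatim}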
\subsection{Singular continuous norm almost periodic measures}
Unfortunately, we don't have a good understanding of norm almost periodic singular continuous measures.
It is easy to construct examples of such measures. Indeed, pick any pure point norm almost periodic measure $\omega$, which can be constructed by the method of Theorem~\ref{t1}. Let $\nu$ be any finite singular continuous measure. Then $\omega*\nu$ is a singular continuous measure which is norm almost periodic by Proposition~\ref{p2}.
If $\omega$ is positive and has dense support, which can easily be assured, and $\nu$ is positive, then $\omega*\nu$ has dense support.
On the other hand, picking $\omega=\delta_{\mathbb Z}$ and $\nu$ a singular continuous measure whose support is a Cantor set, the convolution $\omega*\nu$ does not have dense support.
\bigskip
Recall that if the sets of norm almost periods of $\mu$ are locally finite for $\varepsilon$ small enough, then they are model sets in the same CPS. While this seems to be the case for many norm almost periodic singular continuous measures, it is not always true. Indeed, $\delta_{\mathbb Z} \otimes \ensuremath{\lambda\!\!\!\lambda}$ is norm almost periodic and singular continuous, but its sets of almost periods contain ${\mathbb Z} \times {\mathbb R}$.
\section{Diffraction of measures with Meyer set support}
In this section, we look at the consequences of the previous sections for the diffraction of measures with Meyer set support. For an overview of cut and project schemes and Meyer sets, and their properties, we recommend the monographs \cite{TAO,TAO2} as well as \cite{LR,Meyer,MOO,CR,CRS,NS1,NS5,NS11,NS12}.
Let us start by recalling the following result.
\begin{theorem} \cite{NS12} Let $\mu$ be any Fourier transformable measure supported inside a Meyer set. Then, each of $(\widehat{\mu})_{\text{pp}}, (\widehat{\mu})_{\text{ac}}, (\widehat{\mu})_{\text{sc}}$ is a norm almost periodic measure.
\end{theorem}
As a consequence, we can state the next corollary.
\begin{coro} Let $\mu$ be any Fourier transformable measure supported inside a Meyer set.
\begin{enumerate}
\item[(i)] There exists some CPS $(\widehat{G}, H, {\mathcal L})$ and some $h \in C^{}_{0}(H)$ such that
\begin{displaymath}
(\widehat{\mu})_{\text{pp}} =\omega_h \,.
\end{displaymath}
\item[(ii)] There exists an $L^1$-Stepanov almost periodic function $f$ such that
\begin{displaymath}
(\widehat{\mu})_{\text{ac}}=f\, \theta_{\widehat{G}} \,.
\end{displaymath}
\end{enumerate}
\end{coro}
\begin{remark} It follows from \cite{NS12} that there exists a CPS $(\widehat{G}, H, {\mathcal L})$ and some function $h \in C^{}_{0}(H)$ such that
\begin{displaymath}
f = \omega_h *f_1 \quad \text{ and }\quad
(\widehat{\mu})_{\text{sc}} = \omega_h* \nu
\end{displaymath}
where $f$ is the Radon--Nikodym density of the absolutely continuous part $(\widehat{\mu})_{\text{ac}}$, $f_1 \in L^1(\widehat{G})$ and $\nu$ is a finite singular continuous measure.\hfill$\Diamond$
\end{remark}
\begin{remark} Each of the compatible random substitutions in one dimension covered in \cite{BSS} gives rise to a Meyer set with mixed pure point and absolutely continuous diffraction spectrum.
It follows from the general theory that there exists some CPS $(\widehat{{\mathbb R}}, H, {\mathcal L})$, some $h \in C^{}_{0}(H)$ and an $L^1$-Stepanov almost periodic function $f$ such that
\begin{displaymath}
\widehat{\gamma} =\underbrace{\omega_h}_{\mbox{pp}} + \underbrace{f\, \ensuremath{\lambda\!\!\!\lambda}}_{\mbox{ac}} \,.
\end{displaymath}
Explicit formulas for both parts are provided in \cite{BSS}.\hfill$\Diamond$
\end{remark}
\subsection*{Acknowledgments} The work was supported by NSERC with grant 03762-2014 and by DFG, and the authors are grateful for the support.
\section{Introduction}
\label{sec:i} \setcounter{equation}{0} \setcounter{footnote}{0}
Warped extra dimensions occupy an important place in particle physics today.
The original motivation of Randall and Sundrum was to address the hierarchy problem by lowering the cut-off scale in the Higgs sector \cite{RS}.
After several improvements, of which the most important are placing the gauge and fermion fields in the bulk \cite{P} and extending the Standard Model (SM) gauge symmetry so as to include the custodial symmetry \cite{ADMS}, the Randall-Sundrum (RS) set-up remains a valid framework for studying electroweak symmetry breaking.
Yet the most exciting aspect of RS is that, according to the AdS/CFT correspondence~\cite{adscft}, it provides a rough description of a purely 4D system where fundamental fields interact with a large N strongly-coupled, approximately conformal sector (CFT) \cite{APR}.
In particular, RS with a Higgs boson localized on the IR brane is a dual realization of the old idea of a composite Higgs boson arising as a bound state of some new strong interactions.
Moreover, a 4D composite pseudo-Goldstone boson \cite{GK} can be modeled in RS by the so-called gauge-higgs scenario \cite{M}, where the Higgs boson is identified with the fifth component of a 5D gauge boson \cite{CNP}.
Confidence in this 4D/5D correspondence is strengthened by the success of the related AdS/QCD approach in modeling the low-energy meson sector of QCD \cite{EKSS,DP}.
The warped 5th dimension of RS is an interval terminated by the UV brane and the IR brane.
From the holographic point of view, the fields living on the UV brane are interpreted as the fundamental sector probing the CFT, while the IR brane describes the low-energy dynamics of the CFT that spontaneously breaks the conformal symmetry and, often, other global symmetries. The {\em hard-wall} IR brane is an idealization that translates to breaking the symmetries by a vev of an operator of infinite scaling dimension.
It is interesting to investigate a more general set-up where the IR breaking is modeled by a smooth evolution of the background geometry and/or bulk fields vevs.
Such framework is referred to as {\em the soft wall}.
Soft walls have a relatively short and not so intense history.
The apparent reason is that the soft-wall backgrounds are inevitably more complicated than RS with a metric being a slice of AdS.
The studies so far have been restricted to AdS/QCD, with the motivation of constructing a more faithful dual description of the QCD dynamics.
The interest was spawned by the observation in ref. \cite{KKSS} that a carefully designed soft-wall background leads to a linear Kaluza-Klein (KK) spectrum: $m_n^2 \sim n$, rather than $m_n^2 \sim n^2$ as encountered in the hard-wall RS.
The linear spectrum of excited vector mesons in QCD is both expected by theoretical arguments and observed experimentally.
Refs. \cite{CR,GKN} discussed more extensively how features of the soft-wall background map onto the properties of QCD.
A dynamical system that leads to the background with linear KK trajectories has been proposed in ref. \cite{BG}.
In this paper we investigate the soft wall in the context of electroweak symmetry breaking.
The most obvious possibility would be to use a bulk scalar to break the electroweak symmetry.
A working model can be constructed along the same lines as in AdS/QCD with $SU(2)_L \times SU(2)_R$ gauge group broken to $SU(2)_V$ by a bulk Higgs field in the bi-fundamental representation.
Then the ``pions'' that result from this symmetry breaking pattern play the role of the electroweak would-be-Goldstones eaten by the W and Z bosons.
The difference with respect to AdS/QCD is that the $SU(2)_L \times U(1)_Y$ subgroup of the bulk gauge group should remain unbroken on the UV brane and that an additional $U(1)$ gauge factor has to be included in the bulk to correctly accommodate the hypercharge.
Such a 5D set-up would be dual to a composite Higgs arising from a technicolor-like strongly coupled dynamics.
In this paper we jump immediately to a higher level where the Higgs is realized as a pseudo-Goldstone boson, that is to the 5D gauge-higgs scenario.
This has the attractive feature that the Higgs is protected by approximate global symmetries and therefore it can naturally be light.
The generalization of the gauge-higgs models to the soft-wall case is not so straightforward,
as there is no IR brane to break the bulk gauge symmetries.
Instead, we have to introduce a charged bulk scalar field with a potential that forces it to obtain a vev.
The consequence is that the SM higgs boson lives not only in the fifth component of the broken gauge bosons,
but also in the Goldstone bosons hosted by the bulk scalar.
That implies that the radiatively generated Higgs mass can be UV sensitive because the scalar mass term is not protected by the symmetries at the UV brane.
We will show however that the UV sensitivity can be avoided if the bulk condensate is localized in IR and decays fast enough in the vicinity of the UV brane.
The hope is that the soft-wall version of RS provides a more adequate description of the dual composite Higgs models.
It also offers more possibilities for the KK spectrum and the couplings which is important in the context of LHC search strategies.
One important thing we show in this paper is that the constraints on the KK scale from electroweak precision tests turn out to be somewhat milder.
Even small differences in allowed KK masses are extremely relevant from the phenomenological point of view, since they may greatly increase the prospects for a discovery at the LHC.
We construct an explicit soft-wall example where the typical constraint on the lightest KK mode mass is reduced down to 2 TeV, rather than 3 TeV typically found in hard-wall models.
Another, more exotic example of ours, with a continuous KK spectrum separated from the SM particles by a mass gap, is even less constrained and admits the continuum below 1 TeV.
In Section~2 we discuss general features of the soft wall scenario.
We systematize various possibilities for the KK spectrum.
We also derive important properties of the solution to the equations of motion.
This section is a bit loaded with math but the results are very relevant for phenomenological applications.
Next, we move to the soft-wall version of the gauge-higgs scenario.
In Section~3 we discuss a toy model based on the U(1) gauge symmetry in the bulk.
This simple set-up allows us to understand the physics of the gauge-higgs and identify all degrees of freedom that arise in the soft wall set-up.
KK scalars and pseudoscalars always appear on a soft wall (while they are optional in the hard-wall version), and we devote some time to discussing their equations of motion.
Section~4 is the heart of this paper.
We consider a soft-wall version of the 5D gauge-higgs model based on SO(5) gauge symmetry, which is an example of a fully realistic and calculable model of the electroweak sector.
We discuss the spectrum of the gauge bosons and evaluate the gauge contribution to the radiative Higgs potential.
We comment how the softness of the loop corrections in the usual hard-wall scenario can be maintained in the soft-wall version.
Then we derive the low-energy action and general expressions for the electroweak precision parameters.
In Section~5 we examine the electroweak sector in two particular soft-wall backgrounds.
One has a discrete resonance spectrum, which however shows a different spacing between KK modes, as compared to the hard wall models.
The other has a continuous spectrum above a mass gap, which could never be obtained with a hard wall.
We analyze constraints from electroweak precision data in both scenarios and point out that they are less severe than in typical RS scenarios. We conclude in Section~6. Finally, an appendix contains the derivation of the effective action that we use to calculate the oblique parameters.
\section{Soft-Wall Background}
\label{sec:swb} \setcounter{equation}{0} \setcounter{footnote}{0}
We consider a 5D gauge theory propagating in a warped background with the line element
\begin{equation}
\label{e.wb}
ds^2 = a^2(z) (dx_\mu^2 - dz^2).
\end{equation}
The metric in \eref{wb} may refer to the true background metric that solves the 5D Einstein equation,
or it may be an effective background that incorporates a vev of some dilaton field multiplying the gauge action: $a_{eff}(z) = e^{\Phi(z)} a_{\rm true}(z)$ .
In this paper we are not concerned with the dynamics that could produce a particular warp factor.
In other words, we study 5D gauge theories in a fixed, non-dynamical background.
This way we sweep under the carpet such important questions as radion stabilization, backreaction of condensates on the geometry, etc.
A proper discussion of these issues could easily obscure our main point which is low energy phenomenology.
Therefore, we adopt a pragmatic approach and concentrate on the effect of general backgrounds on the observables in the electroweak sector. See refs. \cite{CR,GKN,BG} for discussion of possible dynamical origins of the soft-wall background.
The conformal coordinate $z$ runs from $z_0$ (the UV brane) to infinity.
We fix $a(z_0) = 1$. The strong-coupling cut-off scale on the UV brane is then set by $\sim 4\pi/z_0$.
Even though there is no IR brane, the KK spectrum develops a mass gap if the warp factor decays exponentially or faster in IR.
If this is the case, the proper length of the extra dimension, $L = \int_{z_0}^{\infty} a(z)\, dz$, is also finite.
Note that the finite proper length implies that in the coordinates $ds^2 = a^2(y) dx^2 - dy^2$ the 5th coordinate spans a finite interval $y \in [0,L]$.
The difference with the usual RS scenario would be the vanishing of the warp factor on the ``IR brane''.
The warp factor is not specified in most of the following discussion, and we only make a couple of technical assumptions.
One is that $a(z)$ is monotonically decreasing ($a' < 0$) for all $z$; the other is that it decays sufficiently fast at large $z$, so as to generate a mass gap.
We also tacitly assume that the metric approaches AdS close to the UV brane, but our results apply to more general cases as well.
A gauge field propagating in the background of \eref{wb} has a quadratic action of the form
\begin{equation}
\label{e.5da}
S_5 = \int d^4 x \, dz \, a(z) \left ( -{1 \over 4} F_{MN}^2 + {1 \over 2} M^2(z) A_M^2 \right ) .
\end{equation}
The mass term should be understood as resulting from a condensation of a charged scalar field and, in general, it can have a non-trivial dependence on the $z$ coordinate.
This implies the condition $M^2(z) \geq 0$ for all $z$.
The equation of motion that follows from the 5D action is
\begin{equation}
\label{e.geom}
\left (a^{-1} \partial_z (a \partial_z ) - M^2 + p^2 \right ) f = 0.
\end{equation}
We are interested in the normalizable solution of this equation,
in the sense $\int_{z_0}^\infty a f^2 < \infty$, and we denote such a solution by $K_M(z,p)$ (when a normalizable solution does not exist, as in AdS, we define $K_M$ as the solution that exponentially decays for large Euclidean momenta).
In the soft-wall set-up, normalizability plays the same role as IR boundary conditions in RS, selecting one of the two independent solutions of \eref{geom}.
Then the UV boundary condition leads to a quantization condition for $p^2$, which fixes the KK spectrum.
In the remainder of this section, we discuss general properties of the solutions to \eref{geom}.
The formal results that we obtain here will later prove valuable to study physical observables in realistic models.
To proceed, it is convenient to borrow some methods and terminology from quantum mechanics.
The equation of motion can be recast into a form resembling the Schr\"odinger equation by defining the ``wave function'' $\Psi$ as $f = a^{-1/2} \Psi$.
Note that the normalization condition translates to square-integrability: $\int \Psi^2 < \infty$.
The wave function satisfies
\begin{equation}
\left ( - \partial_z^2 + V_M(z) \right ) \Psi = p^2 \Psi \, ,
\qquad
V_M(z) = M^2 + {a''\over 2 a} - {(a')^2 \over 4 a^2}.
\end{equation}
From the shape of the potential one can quickly infer that the existence of the mass gap relies on the corresponding Schr\"odinger potential $V_M$ being confining.
The necessary condition reads $V_M \geq {\rm const} > 0$ for $z \to \infty$.
Moreover, in order to have a minimum of the potential we also need $V_M$ to grow toward UV.
This last condition is always fulfilled by metrics that are asymptotically AdS in UV, in which case $V_M \sim 1/z^2$ at small $z$.
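For instance, for the pure AdS metric $a(z) = z_0/z$ with $M^2=0$ one finds $V_0(z) = 3/(4z^2)$ for all $z$: the potential grows toward the UV as required, but is not confining in the IR, and correspondingly there is no mass gap in that case (the RS2 scenario of the classification below).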
Furthermore, it is profitable to introduce the so-called superpotential $W(z)$ \cite{SQM}, that is related to the Schr\"odinger potential by the Riccati equation
\begin{equation}
\label{e.sde}
W^2 - W' = V_M .
\end{equation}
One can prove that the asymptotic condition $V_M \geq {\rm const} > 0$ is equivalent to $W \geq {\rm const} > 0$.
This is obvious when $W^2$ dominates over $W'$.
On the other hand, it is not possible to keep $W' > W^2$ asymptotically: sooner or later $W^2$ will catch up, bringing us back to the previous case.
Embedding the SM electroweak sector in a 5D gauge theory will require that some of the bulk gauge symmetries remain unbroken, which translates to $M^2=0$ in the equation of motion for the corresponding generator.
Therefore, we are interested in the backgrounds that have a mass gap in the limit $M^2 \to 0$,
that is $V_0$ must be confining (which implies that $V_M$ is confining too, as long as $M^2$ is positive).
For $M^2 = 0$ the superpotential is $W_0 = - {a' \over 2 a} > 0$.
This shows that the KK spectrum in the unbroken phase has a mass gap if the warp factor decays in IR at least as fast as $e^{- z^\alpha}$ with $\alpha \geq 1$ \cite{GKN}.
Depending on the power $\alpha$, three general situations can arise:
\begin{enumerate}
\item {\bf Unparticles}, $W_0(z)|_{z \to \infty} \to 0$.
The spectrum consists of a continuum of non-normalizable modes and there is no mass gap \cite{G}.
The familiar example is that with the AdS metric $a(z) = z_0/z$ corresponding to $W_0 = \frac{1}{2z}$, which is nothing but the RS2 set-up \cite{RS2}.
This direction is not explored in this paper.
\item {\bf Hidden Valley}, $W_0(z)|_{z \to \infty} \to \rho > 0$.
The spectrum again has a continuum of non-normalizable modes separated from the (optional) massless mode by a mass gap $\rho$ \cite{SZ}.
An example presented recently in ref. \cite{CMT} has $a(z) = e^{- 2\rho z}/z$ corresponding to $W_0(z) = \frac{1}{2z} + \rho$.
\item {\bf Resonances}, $W_0(z)|_{z \to \infty} \to \infty$.
This is the most familiar scenario.
The spectrum consists of a discrete tower of vector resonances separated by a mass gap from (optional) zero modes.
An important example based on the proposal of ref. \cite{KKSS} has the metric $a(z) = e^{- \rho^2 z^2}/z$ corresponding to $W_0(z) = \frac{1}{2z} + \rho^2 z$.
\end{enumerate}
One can also envisage hybrid scenarios where unbroken gauge bosons are of the unparticle or the hidden-valley type,
while broken gauge bosons have a discrete spectrum due to $M^2(z)$ asymptotically growing in IR.
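As a cross-check of the resonance scenario, the KK spectrum is easy to obtain numerically. The sketch below (Python with NumPy; $\rho = 1$, the grid and the cut-offs are illustrative choices) diagonalizes the discretized Schr\"odinger operator for the metric of ref. \cite{KKSS}, for which $V_0(z) = \frac{3}{4z^2} + \rho^4 z^2$. In the limit $z_0 \to 0$ this is a harmonic oscillator with a centrifugal term, with the exact spectrum $m_n^2 = 4\rho^2 (n+1)$, and the finite-difference eigenvalues (Dirichlet conditions at both ends of a large interval) indeed land close to these linear trajectories:
\begin{verbatim}
import numpy as np

rho, z0, zmax, N = 1.0, 1e-2, 10.0, 2000
z = np.linspace(z0, zmax, N)
h = z[1] - z[0]
# V_0 = W_0^2 - W_0' with W_0 = 1/(2z) + rho^2 z
V0 = 3.0 / (4.0 * z ** 2) + rho ** 4 * z ** 2
H = (np.diag(2.0 / h ** 2 + V0)
     - np.diag(np.ones(N - 1) / h ** 2, 1)
     - np.diag(np.ones(N - 1) / h ** 2, -1))
m2 = np.linalg.eigvalsh(H)[:6]
print(m2)            # close to 4 rho^2 (n+1) = 4, 8, 12, ...
print(np.diff(m2))   # nearly constant spacing: m_n^2 ~ n
\end{verbatim}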
\vspace{.5cm}
We first study the normalizable solution to the equation of motion \erefn{geom} in the special case $p^2 = 0$:
\begin{equation}
\label{e.ee}
\left [ a^{-1}\partial_z (a \partial_z) - M^2(z) \right ] \eta = 0 .
\end{equation}
We will see in the next section that $\eta(z) \equiv K_M(z,0)$ is directly related to the gauge-higgs profile in the 5th dimension.
Obviously, for $M^2 = 0$, the normalizable solution is just $\eta = 1$.
In the following we concentrate on the case $M^2 > 0$.
To get more insight into the shape of $\eta$ we split the superpotential as
$W = W_0 + U_M$ where $W_0 = - a'/2a$, and $U_M$ is related to $M^2$ by the non-linear equation
\begin{equation}
\label{e.mse}
U_M^2 - U_M' - {a'\over a} U_M = M^2.
\end{equation}
We also fix $U_M(z_0) > 0$.
Eq. \erefn{ee} can now be written as
\begin{equation}
a^{-1} \left ( \partial_z - U_M \right )a \left ( \partial_z + U_M \right ) \eta(z) = 0.
\end{equation}
The normalizable solution is $\eta(z) = e^{- \int^z U_M}$.
Normalizability follows from the fact that $a \eta^2 = e^{- 2\int^z W}$ (recall that $\lim_{z\to \infty} W > 0$ is the mass gap condition).
We will prove now that $\eta(z)$ is monotonically decreasing all the way from the UV brane down to IR.
The derivative is
$\partial_z \eta = - U_M (z) e^{ - \int^z U_M(z') }$.
Since $U_M(z_0) > 0$, the gauge-higgs profile decreases in the vicinity of the UV brane.
This trend could be reversed if $\eta$ had an extremum, which would imply that $U_M$ vanishes somewhere,
$U_M(z_*) = 0$.
Note however that $U_M$ must always decrease in the vicinity of $z_*$ since,
from the equation \erefn{mse}, $- U_M'(z_*) = M^2(z_*) > 0$.
Thus, $U_M$ could have at most one zero, if $U_M$ started positive at $z_0$ and became negative asymptotically.
That is however incompatible with the asymptotic $\lim_{z \to \infty} W (z) > 0$.
Indeed, suppose $W > 0$ which implies $W_0 > |U_M|$.
From \eref{mse},
\begin{equation}
0 < M^2 = |U_M|^2 - 2 W_0 |U_M| - U_M' < - W_0 |U_M| - U_M'.
\end{equation}
This requires that $U_M'$ be negative ($U_M$ is decreasing) and that $|U_M'| > W_0 |U_M|$.
Thus $\partial_z \log |U_M| > W_0$: the logarithmic derivative of $|U_M|$ has to exceed $W_0$, so $|U_M|$ grows at least as fast as $e^{\int^z W_0} = a^{-1/2}$.
But then it is impossible to keep $|U_M| < W_0$ all the way down to the IR.
We conclude that, as long as there is a mass gap, $U_M$ can never become negative and that {\em $\eta (z)$ is always decreasing.}
This is an important result that will later turn out to be equivalent to positivity of the S parameter.
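A simple solvable illustration: for the pure exponential warp factor $a(z) = e^{-2\rho z}$ and a constant mass term $M^2(z) = m^2$, eq. \erefn{mse} admits the constant solution $U_M = \sqrt{\rho^2+m^2} - \rho > 0$, so that $\eta(z) = e^{-(\sqrt{\rho^2+m^2}-\rho)(z-z_0)}$ is indeed monotonically decreasing, the faster the larger the symmetry breaking mass term.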
We move to discussing the solutions of the equations of motion for general $p^2$.
We will obtain the power expansion of the normalizable solution for small $p^2$ and for large Euclidean $p^2$.
The former can be achieved by separating $K_M(z,p) = \eta(z) \bar K_M(z,p)$,
where $\bar K_M$ satisfies the equation
\begin{equation}
\label{e.bkme}
\left (a_M^{-1} \partial_z (a_M \partial_z) + p^2 \right ) \bar K_M(z,p) = 0 ,
\end{equation}
with the effective warp factor defined as
\begin{equation}
a_M(z) = {\eta^2(z) \over \eta^2(z_0)} a(z) = e^{- 2 \int_{z_0}^z W(z')}.
\end{equation}
Since $W(z) > 0$ and asymptotically $W(z) > {\rm const}$, the effective metric $a_M(z)$ is monotonically decreasing and at least exponentially decaying in the IR, much as the original warp factor $a(z)$.
We can integrate \eref{bkme} perturbatively in $p^2$, which leads to the following expansion of $K_M$:
\begin{equation}
\label{e.kmsp}
K_M(z,p) = \eta(z) \left [ 1
+ p^2 \int_{z_0}^z a_M^{-1} \int_{z'}^\infty a_M
+ p^4 \int_{z_0}^z a_M^{-1} \int_{z'}^\infty a_M \int_{z_0}^{z''} a_M^{-1} \int_{z'''}^\infty a_M
+ {\mathcal O}(p^6) \right].
\end{equation}
In general, this expansion is valid for momenta below the mass gap.
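As a simple check of \eref{kmsp}, consider the pure exponential background $a_M(z) = e^{-2\rho z}$, for which $W = \rho$. Then $\int_{z'}^\infty a_M = e^{-2\rho z'}/2\rho$, so the ${\mathcal O}(p^2)$ term equals $p^2(z-z_0)/2\rho$. On the other hand, \eref{bkme} is solved exactly in this background by $\bar K_M \propto e^{(\rho - \sqrt{\rho^2-p^2})\,z}$, which is normalizable for $p^2$ below the gap and whose expansion in $p^2/\rho^2$ reproduces $1 + p^2(z-z_0)/2\rho + \dots$ after normalizing $\bar K_M(z_0,p)=1$; one can verify that the ${\mathcal O}(p^4)$ terms match as well.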
In order to obtain an expansion for large Euclidean momenta we rewrite
\begin{equation}
K_M(z,i p) = a^{-1/2} e^{- p z} \phi(z) ,
\end{equation}
where $\phi$ satisfies the equation
\begin{equation}
\left [e^{2 p z} \partial_z ( e^{-2 p z} \partial_z) - V_M (z) \right ] \phi = 0.
\end{equation}
Again, we integrate perturbatively, this time expanding in powers of $V_M$,
which yields
\begin{equation}
\label{e.kmlp}
K_M(z,i p) = a^{-1/2}(z) e^{- p z} \left [
1 - \int_{z_0}^z e^{2 p z'} \int_{z'}^\infty e^{- 2 p z''} V_M(z'') + {\mathcal O}(V_M^2) \right ] .
\end{equation}
In general, this expansion is valid for $p z_0 \gg 1$, that is for momenta above the UV brane scale.
When the warp factor is approximately AdS near the UV brane, the potential contains a $1/z^2$ term.
Then the integrals in \eref{kmlp} lead to $\log z$ enhanced terms for $p z_0 < 1$ which undermines the perturbative expansion.
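The exponential background again provides a quick check: for $a(z) = e^{-2\rho z}$ and $M^2 = 0$ the potential is constant, $V_0 = W_0^2 - W_0' = \rho^2$, and \eref{kmlp} yields $K_0(z,ip) \propto e^{(\rho-p)z}\left[1 - \rho^2(z-z_0)/2p + \dots\right]$, in agreement with the large-$p$ expansion of the exact decaying solution $K_0(z,ip) \propto e^{(\rho - \sqrt{\rho^2+p^2})\,z}$.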
This ends the math section; we now hold all the threads needed to tackle the physics questions.
\section{Toy Model}
\label{sec:tm} \setcounter{equation}{0} \setcounter{footnote}{0}
We start with a simple toy model based on the $U(1)$ gauge group.
The gauge symmetry is broken both by UV boundary conditions and by a vev of a charged bulk scalar field.
The spectrum includes a massless Goldstone boson - the gauge-higgs - that is a mixture of the 5th component of the gauge field and the phase of the bulk scalar.
The toy model is interesting from the pedagogical point of view
even though the gauge group is not realistic
and there is no dynamics associated with the vev of the gauge-higgs.
The point is that the lagrangian is simple enough to identify easily all the degrees of freedom.
In particular, the condition for the existence of the gauge-higgs and its equations of motion can be simply derived.
The 5D lagrangian is
\begin{align}
& {\mathcal L} = \sqrt{g}\left\{ -
\frac{1}{4} X_{MN} X_{MN}
+ \frac{1}{2}\left|D_M\Phi\right|^2 - V(|\Phi|)
\right\} ,
\nonumber \\
& X_{MN} = \partial_M X_N - \partial_N X_M ,
\nonumber \\
& D_M \Phi = \partial_M \Phi - i g_5 X_M \Phi .
\end{align}
We parametrize the scalar as
\begin{equation}
\Phi(x,z) = \left [\Lambda(z) + \phi(x,z) \right ] e^{i g_5 G(x,z)}.
\end{equation}
where $\Lambda(z)$ is a $z$-dependent vev that is a solution to the scalar equations of motion.
The lagrangian becomes
\begin{align}
\label{e.tml}
{\mathcal L} = & - \frac{a}{4} X_{\mu\nu} X_{\mu\nu} + {a \over 2} \left (\partial_z X_\mu - \partial_\mu X_z \right )^2
\nonumber \\
\mbox{} & + \frac{a^3}{2}(\partial_\mu \phi)^2 + \frac{a^3}{2} g_5^2 (\Lambda + \phi)^2 \left (\partial_\mu G - X_\mu \right )^2
\nonumber \\
\mbox{} & - \frac{a^3}{2}(\partial_z \Lambda + \partial_z \phi)^2 - \frac{a^3}{2} g_5^2 (\Lambda + \phi)^2 \left (\partial_z G - X_z \right )^2
\nonumber \\
\mbox{} & - a^5 V(\Lambda + \phi).
\end{align}
One can see here that a vev of $X_z$ has no physical significance. Indeed, such a vev must be accompanied by a vev of $G$, with $\langle X_z \rangle = \partial_z \langle G \rangle$.
In the presence of these vevs we can shift,
\begin{equation}
X_z \to \langle X_z \rangle + X_z = \partial_z \langle G \rangle + X_z \, ,
\qquad
G \to \langle G \rangle + G ,
\end{equation}
so that they disappear from the lagrangian. In fact, they are a pure-gauge configuration.
The linear terms in $\phi$ vanish due to the equations of motion for $\Lambda(z)$.
The quadratic terms are
\begin{align}
\label{e.tmlq}
{\mathcal L} = & - \frac{a}{4} X_{\mu\nu} X_{\mu\nu} + {a \over 2} \left (\partial_z X_\mu - \partial_\mu X_z \right )^2
\nonumber \\
& \mbox{} + \frac{a^3}{2}(\partial_\mu \phi)^2 + \frac{a}{2} M^2(z) \left (X_\mu - \partial_\mu G \right )^2
\nonumber \\
& \mbox{} - \frac{a^3}{2}(\partial_z \phi)^2 - \frac{a}{2} M^2(z) \left (\partial_z G - X_z \right )^2
\nonumber \\
& \mbox{} - {1 \over 2}a^5 V''(\Lambda) \phi^2 ,
\end{align}
where $M^2(z) = g_5^2 a^2 \Lambda^2$.
From the above we can see that the 4D effective theory contains the following degrees of freedom:
\begin{itemize}
\item U(1) gauge fields $X_{\mu,n}$ living in $X_\mu$.
\item Scalars and pseudoscalars living in $G$ and $X_z$ that mix with one another:
\begin{itemize}
\item physical pseudo-scalars $P_n$,
\item Goldstones $G_n$ eaten by the massive gauge fields,
\item depending on the boundary condition for $X_\mu$, a physical massless scalar $h$, referred to as the gauge-higgs.
\end{itemize}
\item Scalar fields $\phi_n$ living in $\phi$.
\end{itemize}
Let us discuss them in turn.
\subsection{Gauge Boson}
The 5D gauge field can be expanded into the KK modes as,
\begin{align}
& X_\mu(x,z) = X_{\mu,n}(x) f_{X,n}(z) ,
\nonumber \\
& a^{-1}\partial_z (a \partial_z f_{X,n}) - M^2(z) f_{X,n} + m_n^2 f_{X,n} = 0 ,
\nonumber \\
& a f_{X,n} \partial_z f_{X,n}|_{z = z_0} = 0 ,
\nonumber \\
& \int_{z_0}^\infty a f_{X,n} f_{X,m} = \delta_{nm} .
\end{align}
The UV boundary conditions for $f_{X,n}$ can be either Neumann or Dirichlet.
Here we choose the Dirichlet one, $f_{X,n}|_{z = z_0} = 0$, because it will allow the gauge-higgs to exist.
The equation of motion for the gauge field profile $f_{X,n}(z)$ has two independent solutions.
In section 2 we defined $K_M(z,p)$ as the normalizable solution, $\int_{z_0}^\infty a K_M^2 < \infty$.
Using this notation, the gauge profile can be written as
\begin{equation}
f_{X,n} = \alpha_{X,n} K_M(z,m_n),
\end{equation}
and the KK masses are found by solving for the UV boundary condition
\begin{equation}
K_M(z_0,m_n) = 0 .
\end{equation}
\subsection{Gauge-Higgs}
\label{s.gh}
The 5D fields $G$ and $X_z$ may contain a massless scalar mode - the gauge-higgs - that is embedded as
\begin{equation}
X_z(x,z) \to h(x) \partial_z \eta (z) ,
\qquad
G(x,z) \to h(x) \eta (z).
\end{equation}
This particular embedding ensures that $h$ does not pick up a mass term from the
$(\partial_z G - X_z)^2$ in the lagrangian \erefn{tml}.
Furthermore, the mixing term between the gauge-higgs and the vector field reads
\begin{equation}
{\mathcal L} = X_\mu \partial_\mu h \left ( \partial_z (a \partial_z \eta) - a M^2 \eta - (a \partial_z \eta)|^{\infty}_{z_0} \right ) .
\end{equation}
The UV boundary term vanishes because the gauge field vanishes on the UV brane, while the IR boundary term vanishes for normalizable solutions.
The gauge-higgs does not mix with the tower of the gauge fields if its profile satisfies the equation
\begin{equation}
\label{e.gheom}
a^{-1}\partial_z (a \partial_z \eta) - M^2(z) \eta = 0.
\end{equation}
This is the same as the gauge equation of motion with $m_n = 0$,
and the solutions were discussed in Section 2 below \eref{ee}.
Furthermore, the gauge-higgs profile satisfies the normalization condition
\begin{equation}
1 = \int_{z_0}^{\infty} a \left ( (\partial_z \eta)^2 + M^2(z) \eta^2 \right ) ,
\end{equation}
which, upon integration by part and using the equation of motion can be written as
\begin{equation}
a(\infty)\eta(\infty) \partial_z \eta (\infty) - \eta(z_0) \partial_z \eta (z_0) = 1 .
\end{equation}
The first term must vanish for a normalizable solution.
Consequently, the normalized profile can be written as
\begin{equation}
\eta(z) = {1 \over \sqrt{U_M(z_0)}} e^{ - \int_{z_0}^z U_M(z')} ,
\end{equation}
where $U_M$ is the mass superpotential introduced in \eref{mse}.
Recall that we proved that $U_M(z) > 0$ in a theory with a mass gap and $M^2 > 0$,
which implies that the gauge-higgs profile is monotonically decreasing.
Although $\eta(z)$ completely characterizes the gauge-higgs profile, it has no immediate physical meaning.
Instead, the localization of the gauge-higgs in the gauge field component is determined by $(\partial_z \eta)^2$,
whereas the localization in the scalar component is governed by $M^2(z)\eta^2$.
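To make this concrete, here is a minimal numerical sketch (ours, not part of the original analysis) that builds $\eta(z)$ from a given mass superpotential and checks the normalization condition and the monotonic decrease. As an assumed illustrative input we borrow the linear-wall data of section \ref{sec:na}: $a = (z_0/z)\,e^{-\rho^2(z^2-z_0^2)}$ and $U_M = c\, z$ with $c = \sqrt{\mu^4+\rho^4}-\rho^2$, so that $M^2 = \mu^4 z^2$; the parameter values are ours:
\begin{verbatim}
# Minimal sketch (ours): build eta(z) from a given U_M and check
# 1 = \int a [ (eta')^2 + M^2 eta^2 ].  Assumed input: linear-wall
# background, a = (z0/z) exp(-rho^2(z^2-z0^2)), U_M = c z.
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

z0, rho, mu = 1e-3, 1.0, 2.4          # illustrative values (1/TeV, TeV, TeV)
c = np.sqrt(mu**4 + rho**4) - rho**2
z = np.linspace(z0, 8.0 / rho, 200001)
a = (z0 / z) * np.exp(-rho**2 * (z**2 - z0**2))
UM = c * z
eta = np.exp(-cumulative_trapezoid(UM, z, initial=0.0)) / np.sqrt(c * z0)
deta = -UM * eta                      # eta' = -U_M eta by construction
norm = trapezoid(a * (deta**2 + mu**4 * z**2 * eta**2), z)
print("normalization integral:", norm)                  # ~ 1
print("monotonically decreasing:", bool(np.all(np.diff(eta) <= 0)))
\end{verbatim}
The integral reproduces the boundary term $-\eta(z_0)\eta'(z_0) = U_M(z_0)\eta^2(z_0) = 1$ (with the warp factor normalized to $a(z_0)=1$).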
\subsection{Other scalars and pseudoscalars}
We continue the discussion of the scalar spectrum with the scalar $\phi$, which corresponds to oscillations around the bulk condensate $\Lambda(z)$. $\phi $ does not mix with $G$ or $X_z$ so that it has the simple KK expansion
$\phi(x,z) = \phi_{n}(x) f_{\phi,n}(z)$.
The profile must solve the equation of motion
\begin{equation}
\left [ a^{-3}\partial_z (a^3 \partial_z) + p^2 - a^2 V''(\Lambda) \right ] f = 0 .
\end{equation}
Moreover, diagonalization of the KK action requires vanishing of the boundary term
\begin{equation}
a^3 f_{\phi,n} \partial_z f_{\phi,n}|_{z_0}^\infty = 0 ,
\end{equation}
which leaves us with two options: Dirichlet or Neumann boundary conditions on the UV brane (we could obtain mixed boundary conditions if we added UV boundary mass or kinetic terms). The normalization condition reads $\int_{z_0}^{\infty} a^3 f_{\phi,n}^2 = 1$.
The scalar equation of motion is different from that for the gauge field, but similar methods apply.
We pick the normalizable solution: $\int^\infty a^3 f^2 < \infty$, and denote it as $\bar K(z,p)$.
The spectrum is found by imposing the UV boundary condition, e.g. $\partial_z \bar K(z_0,m_n) = 0$.
Then we can write the profile as $f_{\phi,n} = \bar \alpha_n \bar K(z,m_n)$.
We can also rewrite the scalar equation of motion as a Schr\"odinger-type equation by defining
$\bar f = a^{-3/2} \bar \Psi$.
This leads to the equation $( - \partial_z^2 + \bar V) \bar \Psi = p^2 \bar\Psi$
with
$\bar V(z) = a^2 V''(\Lambda) + {3 a''\over 2 a} - {3 (a')^2 \over 4 a^2}$.
The last two terms are, up to the factor of 3, analogous to the ones in the Schr\"odinger version of the gauge equation of motion.
As long as the ``mass term'' $a^2 V''$ is positive or vanishing in IR, the sufficient condition for the scalar spectrum to develop a mass gap is the same as for the gauge fields: the warp factor should decay as $e^{-\rho z}$ or faster in IR.
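As an illustration of this Schr\"odinger reformulation, the following sketch (ours) diagonalizes $-\partial_z^2 + \bar V$ on a grid. We assume the linear-wall warp factor of section \ref{sec:na} and, purely for illustration, set $V''(\Lambda)=0$; using $a'/a = -1/z - 2\rho^2 z$ in the expression above then gives $\bar V = 9/(4z^2) + 3\rho^4 z^2$, and the discrete, gapped spectrum is manifest in the output:
\begin{verbatim}
# Sketch (ours): finite-difference spectrum of (-d^2/dz^2 + Vbar) Psi = p^2 Psi
# for the scalar tower, with V''(Lambda) = 0 (illustration only) and the
# linear-wall warp factor, for which Vbar = 9/(4 z^2) + 3 rho^4 z^2.
import numpy as np
from scipy.linalg import eigh_tridiagonal

z0, rho = 1e-3, 1.0                            # illustrative values
z = np.linspace(z0, 10.0 / rho, 20000)[1:-1]   # Dirichlet at both ends
h = z[1] - z[0]
Vbar = 9.0 / (4.0 * z**2) + 3.0 * rho**4 * z**2
m2, _ = eigh_tridiagonal(2.0 / h**2 + Vbar, -np.ones(len(z) - 1) / h**2,
                         select="i", select_range=(0, 4))
print("lowest scalar masses / rho:", np.sqrt(m2) / rho)   # gapped, discrete
\end{verbatim}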
For the pseudoscalars living in $X_z$ and $G$, the KK expansion and the equations of motion are more involved.
We start with the general KK expansion:
\begin{eqnarray} &
X_z(x,z) = h(x) \partial_z \eta (z) + G_{n}(x) \overline f_{X,n}(z) + P_n(x) \tilde f_{X,n} ,
& \nonumber \\ &
G(x,z) = h(x) \eta (z) + G_{n}(x) \overline f_{G,n}(z) + P_n(x) \tilde f_{G,n} .
\end{eqnarray}
The gauge-higgs profile $\eta$ was discussed before.
The Goldstones $G_n$ should marry the corresponding gauge fields $X_{\mu,n}$,
such that the quadratic lagrangian depends on the combination $m_n X_{\mu,n} - \partial_\mu G_n$.
To achieve this, the Goldstone profiles must be synchronized with those of the gauge fields
\begin{equation}
\overline f_{X,n} = m_n^{-1} \partial_z f_{X,n} \, ,
\qquad
\overline f_{G,n} = m_n^{-1} f_{X,n}.
\end{equation}
Much as the gauge-higgs, the Goldstones cancel out in the $(\partial_z G - X_z)^2$ term in \eref{tmlq}, so that they do not pick up any mass terms.
The rule for the pseudo-scalar profiles is instead that they should {\em not} mix with the gauge fields.
The mixing terms following from \eref{tmlq} are
\begin{equation}
- X_{\mu,m} \partial_\mu P_n \left ( a \partial_z f_{X,m} \tilde f_{X,n} + a M^2 f_{X,m} \tilde f_{G,n} \right ) .
\end{equation}
So, we need
\begin{equation}
\tilde f_{G,n} = {\partial_z (a \tilde f_{X,n}) \over a M^2} \, ,
\qquad
a f_{X,m} \tilde f_{X,n}| = 0.
\end{equation}
Given the above, the kinetic terms and the mass terms are
\begin{equation}
{1 \over 2} \partial_\mu P_n \partial_\mu P_m \tilde f_{X,m} a \left [
\tilde f_{X,n} - \partial_z \left ( {\partial_z (a \tilde f_{X,n}) \over a M^2 } \right )
+ {\partial_z(a \tilde f_{X,n}) \over M^2} |
\right ] ,
\end{equation}
\begin{equation}
- {1 \over 2} P_n P_m a M^2 \left [
\tilde f_{X,n} - \partial_z \left ( {\partial_z (a \tilde f_{X,n}) \over a M^2 } \right )
\right ]
\left [
\tilde f_{X,m} - \partial_z \left ( { \partial_z (a \tilde f_{X,m}) \over a M^2 } \right )
\right ] .
\end{equation}
To diagonalize the kinetic terms we need the orthogonality relation
\begin{equation}
\int_{z_0}^{\infty} a \tilde f_{X,m} \left [ \tilde f_{X,n} - \partial_z \left ( {\partial_z (a \tilde f_{X,n}) \over a M^2}\right ) \right ] = \delta_{nm}
\end{equation}
and the boundary conditions
\begin{equation}
M^{-2} \tilde f_{X,n} \partial_z (a \tilde f_{X,n})| = 0,
\end{equation}
which leave two options for the UV boundary conditions.
The mass terms are diagonalized and the orthogonality relations are fulfilled if the profile $\tilde f_{X,n}$ is a solution of the equation
\begin{equation}
\left [ M^2(z) \partial_z { 1 \over a M^2(z)} \partial_z a + p^2 - M^2(z) \right ] \tilde f = 0.
\end{equation}
We apply the same methods once again.
We pick the normalizable solution: $\int^\infty a \tilde f^2 < \infty$, and denote it as $\tilde K_M(z,p)$.
The spectrum is found by imposing the UV boundary condition, e.g. $\tilde K_M(z_0,m_n) = 0$.
Then we can write the profile as $\tilde f_{X,n} = \tilde \alpha_n \tilde K_M(z,m_n)$.
We can also rewrite the pseudoscalar equation by defining
$\tilde f = a^{-1/2} M\tilde \psi$
which leads to the Schr\"odinger equation with $\tilde V = M^2 + a^{1/2} M \partial_z^2 (a^{-1/2} M^{-1})$.
For a non-pathological behavior of $M^2$ in IR, the exponential decay of the warp factor in IR ensures the presence of a mass gap in the pseudoscalar spectrum.
\section{Soft-Wall Model of Electroweak Breaking}
\label{sec:eb} \setcounter{equation}{0} \setcounter{footnote}{0}
In this section we discuss a model that accommodates the SM electroweak gauge bosons and the Higgs sector.
Many features of the toy model in Section 3 carry over to the realistic setting.
In particular, the profile of the gauge-higgs is unchanged.
The main complication is that the vev of the gauge-higgs affects the mass spectrum and the KK decomposition.
The simplest model that includes the electroweak group and custodial symmetry is based on $SO(5)$ gauge symmetry \cite{ACP}.
We consider here the $SO(5) \times U(1)_X$ gauge theory, where $SO(5)$ is broken down to $SO(4)$ by a vev of a real bulk scalar transforming as $\bf 5_0$.
The UV boundary conditions break $SO(5) \times U(1)_X$ down to $SU(2)_L \times U(1)_Y$, where the electroweak group is a subgroup of $SO(4) \times U(1)_X$ left unbroken by the bulk scalar vev.
The lagrangian is given by
\begin{align}
& {\mathcal L} = \sqrt{g}\left\{
-\frac{1}{4} {\mathrm T \mathrm r} A_{MN} A_{MN}
+ \frac{1}{2} D_M\Phi^T D_M\Phi - V(\Phi)
\right\},
\nonumber \\
& A_{MN} = \partial_M A_N - \partial_N A_M - i g_5 [A_M,A_N] , \qquad A_M = A_m^\alpha T^\alpha,
\nonumber \\
& D_M \Phi = \partial_M \Phi - i g_5 A_M \Phi .
\end{align}
The $SO(5) \times U(1)_X$ generators
are normalized as $\mathrm T \mathrm r T^\alpha T^\beta = \delta^{\alpha\beta}$.
We split these generators into four classes:
$T_L^a$ and $T_R^a$, $a = 1 \dots 3$ that generate the $SO(4) \equiv SU(2)_L \times SU(2)_R$
subgroup of $SO(5)$,
$T_C^{\hat a}$, ${\hat a} = 1 \dots 4$ that generate the $SO(5)/SO(4)$ coset,
and $T_X \equiv I$ for $U(1)_X$.
The $SO(5) \times U(1)_X$ gauge field can be analogously split into $L_M^a$, $R_M^a$, $C_M^{\hat a}$, $X_M$.
The bulk scalar is parametrized as
\begin{equation}
\Phi(x,z) = \left [\Lambda(z) + \phi(x,z) \right ] e^{i g_5 G_{\hat a}(x,z) T_C^{\hat a}} \left ( \ba{c} \vec 0 \\ 1 \ea \right )
= \left [\Lambda + \phi \right ] \left ( \ba{c}
{G_a \over G} \sin (g_5 G/\sqrt{2}) \\
{G_4 \over G} \sin (g_5 G/\sqrt{2}) \\
\cos (g_5 G/\sqrt{2})
\ea \right ) ,
\end{equation}
where $G^2 = G_{\hat a} G_{\hat a}$. The vev $\Lambda(z)$ breaks $SO(5)$ to $SO(4)$ and gives the mass $M^2 = a^2 \Lambda^2 g_5^2/2$ to the coset gauge bosons $C_M^{\hat a}$.
For the time being, we do not specify the UV boundary conditions for the bulk scalar: they can be Dirichlet, or Neumann, or mixed.
Since the gauge-higgs lives partly in the bulk scalar, one may want to impose the Dirichlet boundary condition $\Phi(z_0) = 0$.
This would protect us from UV brane localized $SO(5)$ violating mass terms for $\Phi$ that would imply mass terms for our gauge-higgs.\footnote{Thanks to Csaba Csaki for pointing this out.}
For our purpose, however, it is sufficient if the scalar vev is peaked in IR while it is Planck suppressed on the UV brane. This will ensure that the gauge-higgs component in $\Phi(z_0)$ is small enough so that the hierarchy problem is not reintroduced.
We fix the vev of the fifth component of the gauge field to be along the $SO(5)$ generator $T_C^4$.
Much like in the toy model, there is a physical scalar mode embedded in $C_z^4$ and $G_4$:
\begin{equation}
C_z^4(x,z) \to h(x) \partial_z \eta (z) ,
\qquad
G_4(x,z) \to h(x) \eta (z) ,
\end{equation}
where $\eta(z)$ is the gauge-higgs profile discussed at length in Section \ref{s.gh}.
We identify this scalar mode with the SM higgs boson.
We assume that it acquires a vev $\langle h(x) \rangle = \tilde v$.
In principle, this vev is not a free parameter but it is fixed by the parameters of the 5D model, as it is obtained by minimizing the radiative Coleman-Weinberg potential for $h$.
Typically, the largest contribution to the potential comes from the loops of the top quark and its KK modes, and the vev depends mostly on the parameters in the top sector.
In this paper we do not study the fermionic sector nor the one-loop dynamics, postponing the complete discussion to future publications.
\subsection{Vector Spectrum}
Unlike in the toy model, the gauge-higgs vev affects the spectrum of the theory,
in particular, it gives masses to some of the zero mode gauge bosons.
The quadratic lagrangian for the gauge fields in the gauge-higgs background reads
\begin{align}
\label{e.ql}
{\mathcal L} = &
-\frac{a}{4} {\mathrm T \mathrm r} A_{\mu\nu} A_{\mu\nu} - {a \over 2} \mathrm T \mathrm r D_z A_\mu D_z A_\mu
\nonumber \\
& \mbox{} + {a^3 \over 2} M^2 (C_\mu^{4})^2
+ {a^3 \over 2} M^2 \left ( C_\mu^a \cos_z + {1 \over \sqrt{2}} (L_\mu^a - R_\mu^a) \sin_z \right )^2 .
\end{align}
where
\begin{eqnarray}
D_z L_\mu^a &= & \partial_z L_{\mu}^a - {g_5 \tilde v \over 2} C_{\mu}^a \partial_{z} \eta ,
\nonumber \\
D_z R_\mu^a &= & \partial_z R_{\mu}^a + {g_5 \tilde v \over 2} C_{\mu}^a \partial_{z} \eta ,
\nonumber \\
D_z C_\mu^a &= & \partial_z C_{\mu}^a + {g_5 \tilde v\over 2} (L_{\mu}^a - R_{\mu}^a) \partial_{z} \eta ,
\nonumber \\
D_z C_\mu^4 & = & \partial_z C_{\mu}^4 , \qquad D_z X_\mu = \partial_z X_{\mu},
\nonumber \\
\sin_z & \equiv & \sin (g_5 \tilde v \eta(z)/\sqrt{2}) , \qquad
\cos_z \equiv \cos (g_5 \tilde v \eta(z)/\sqrt{2}) .
\end{eqnarray}
One can see that $\tilde v$ mixes $L_\mu^a$, $R_\mu^a$, $C_\mu^a$ with each other.
Thus, the mass eigenstates in the presence of the vev will be embedded in all these fields (and also in $X_\mu$, which mixes with the others via UV boundary conditions).
Therefore, we write the KK expansion as
\begin{eqnarray}
L_\mu^a(x,z) &=& A_{\mu,n}(x) f_{L,n}^a(z) ,
\nonumber \\
R_\mu^a(x,z) &=& A_{\mu,n}(x) f_{R,n}^a(z) ,
\nonumber \\
C_\mu^{\hat a}(x,z) &=& A_{\mu,n}(x) f_{C,n}^{\hat a}(z) ,
\nonumber \\
X_\mu(x,z) &=& A_{\mu,n}(x) f_{X,n}(z) .
\end{eqnarray}
The profiles satisfy the UV boundary conditions, that reduce $SO(5) \times U(1)_X$ down to $SU(2)_L \times U(1)_Y$:
\begin{equation}
\partial_z f_L^a(z_0) = 0 , \qquad
f_R^i(z_0) = 0 , \qquad f_C^{\hat a}(z_0) = 0 ,
\end{equation}
\begin{equation}
c_x f_R^3(z_0) - s_x f_X(z_0) = 0 , \qquad
s_x \partial_z f_R^3(z_0) + c_x \partial_z f_X(z_0) = 0 ,
\end{equation}
where $s_x = g_X/\sqrt{g_X^2 + g_5^2}$, $c_x = g_5/\sqrt{g_X^2 + g_5^2}$.
We can identify $s_x^2 = g_Y^2/g_L^2$.
The kinetic terms fix the normalization condition:
\begin{equation}
1 = \int_{z_0}^{\infty} dz a(z) \sum_{A=L,R,C,X} (f_A(z))^2 .
\end{equation}
The equations of motion are complicated because the various SO(5) gauge bosons are mixed by the vev of the gauge-higgs.
The usual trick employed in the gauge-higgs models is to simplify the equations of motions by an appropriate rotation in the group space.
In the present case, the rotation is given by
\begin{eqnarray}
\label{e.wr}
f_{L}^a &=& {1 + \cos_z \over 2} \hat f_{L}^a + {1 - \cos_z \over 2} \hat f_{R}^a + {\sin_z \over \sqrt 2} \hat f_C^a ,
\nonumber \\
f_{R}^a &=& {1 - \cos_z \over 2} \hat f_{L}^a + {1 + \cos_z \over 2} \hat f_{R}^a - {\sin_z \over \sqrt 2} \hat f_C^a ,
\nonumber \\
f_{C}^a &=& - {\sin_z \over \sqrt 2} \hat f_L^a + {\sin_z \over \sqrt 2} \hat f_R^a + \cos_z \hat f_C^a ,
\nonumber \\
f_{C}^4 &=& \hat f_C^4 , \qquad f_{X} = \hat f_X .
\end{eqnarray}
This transformation {\em locally} removes the gauge-higgs vev from the lagrangian (so that it affects the spectrum only {\em non-locally}, via the boundary conditions).
As a consequence, the hatted profiles satisfy equations of motion that do not depend on $\tilde v$,
\begin{eqnarray}
\label{e.hem}
& \left (a^{-1} \partial_z (a \partial_z) + m_n^2 \right ) \hat f_{L,R,X} = 0 ,
& \nonumber \\ &
\left (a^{-1} \partial_z (a \partial_z) + m_n^2 - M^2 \right ) \hat f_{C} = 0 .
\end{eqnarray}
Note that $\eta(z)$ vanishes in the IR, thus $f = \hat f$ asymptotically for $z \to \infty$.
Therefore, we should pick the solution to the equations of motion \erefn{hem} that decays in IR.
We write the hatted profiles $\hat f_{L,R,X} = \alpha_{L,R,X} K_0(z,m_n)$, $\hat f_{C} = \alpha_{C} K_M(z,m_n)$.
From that, the true profiles $f$ can be obtained through \eref{wr}.
The constants $\alpha$ are determined up to normalization by the UV boundary conditions.
It is clear that they depend on the gauge-higgs vev only via $\sin (g_5 \tilde v \eta(z_0)/\sqrt{2})$.
Writing it as $\sin (\tilde v/f_h)$ defines the global symmetry breaking scale:
\begin{equation}
\label{e.f}
f_h = {\sqrt{2} \over g_5 \eta(z_0)}
= {\sqrt{2 U_M(z_0)} \over g_5} .
\end{equation}
When $\sin (\tilde v/f_h) \ll 1$, the gauge-higgs becomes SM-like.
The other extreme limit is \mbox{$\sin(\tilde v/f_h) = 1$}, in which the electroweak sector is Higgsless for all practical purposes (even though a light scalar particle exists, it does not couple linearly to W and Z bosons, so it cannot unitarize the longitudinal gauge boson scattering).
In between these two extremes is the pseudo-Goldstone Higgs, where the Higgs boson partly (up to $E^2/f_h^2$ terms) unitarizes the longitudinal scattering.
The UV boundary conditions relate the constants $\alpha$ and yield the quantization condition on the mass $m_n$.
There are several classes of solutions to the UV boundary conditions, that define the towers of KK modes.
Below we write down only those that contain a massless mode in the limit of $\tilde v \to 0$.
The lowest solutions of the quantization condition are identified with the SM electroweak gauge bosons.
\begin{itemize}
\item {\bf Photon tower}. \\
This is a tower of neutral gauge bosons where the hatted profiles are given by
\begin{eqnarray}
\hat f_L^3 &=& {s_x \over \sqrt {1 + s_x^2} } \alpha_{\gamma} K_0(z,m) ,
\nonumber \\
\hat f_R^3 &=& {s_x \over \sqrt {1 + s_x^2} } \alpha_{\gamma} K_0(z,m) ,
\nonumber \\
f_X &=& {c_x \over \sqrt {1 + s_x^2} } \alpha_{\gamma} K_0(z,m) ,
\end{eqnarray}
while the remaining profiles all vanish.
$\alpha_\gamma$ is given by the normalization condition
$(\alpha_{\gamma})^{-2} = \int_{z_0}^\infty a(z) K_0(z,m)^2$.
The spectrum of the photon and its KK modes is given by the quantization condition
\begin{equation}
K_0'(z_0,m) = 0 .
\end{equation}
$m = 0$ is always a solution to the above, and the corresponding mode is identified with the SM photon.
In that case $K_0(z,0) = 1$ and $\alpha_\gamma = L^{-1/2}$.
\item {\bf $W$ tower}. \\
Another solution is a tower of charged gauge bosons.
The profiles are given by
\begin{eqnarray}
\hat f_L^i &=& \alpha_W {1 + \cos(\tilde v/f_h) \over 2} K_0(z,m) ,
\nonumber \\
\hat f_R^i &=& \alpha_W {1 - \cos(\tilde v/f_h) \over 2} K_0(z,m) ,
\nonumber \\
\hat f_C^i &=& \alpha_W {\sin(\tilde v/f_h) \over \sqrt{2}} {K_0(z_0,m) \over K_M(z_0,m)} K_M(z,m) .
\end{eqnarray}
The quantization condition depends on $\tilde v$,
\begin{equation}
\label{e.wqc}
{K_0'(z_0,m) \over K_0(z_0,m)} =
{\sin^2(\tilde v/f_h) \over 2} \left ( - {K_M'(z_0,m) \over K_M(z_0,m)} + {K_0'(z_0,m) \over K_0(z_0,m)} \right ) .
\end{equation}
In the limit $\tilde v \to 0$, there is a massless mode.
In the presence of electroweak breaking the lowest solution becomes $m_W \approx {g_L f_h \over 2} \sin (\tilde v/ f_h)$ and the corresponding mode is identified with the SM W boson.
\item {\bf $Z$ tower}. \\
This is a tower of neutral gauge bosons where, unlike for the photon tower, the masses depend on the gauge-higgs vev.
The profiles are given by
\begin{eqnarray}
\hat f_L^3 &=& \alpha_Z {c_x^2 + (1+ s_x^2)\cos(\tilde v/f_h) \over 2 (1 + s_x^2)^{1/2}} K_0(z,m) ,
\nonumber \\
\hat f_R^3 &=& \alpha_Z {c_x^2 - (1+ s_x^2)\cos(\tilde v/f_h) \over 2 (1 + s_x^2)^{1/2} } K_0(z,m) ,
\nonumber \\
\hat f_C^3 &=& \alpha_Z
{\sin(\tilde v/f_h) \over \sqrt{2}} (1 + s_x^2)^{1/2} {K_0(z_0,m) \over K_M(z_0,m)} K_M(z,m) ,
\nonumber \\
f_X &=& - \alpha_Z {s_x c_x \over (1 + s_x^2)^{1/2} } K_0(z,m) .
\end{eqnarray}
The quantization condition reads
\begin{equation}
{K_0'(z_0,m) \over K_0(z_0,m)}
= {(1 + s_x^2) \sin^2(\tilde v/f_h) \over 2} \left ( - {K_M'(z_0,m) \over K_M(z_0,m)} + {K_0'(z_0,m) \over K_0(z_0,m)} \right ) .
\end{equation}
The lowest lying solution is identified with the SM Z boson.
\end{itemize}
There are other classes of solutions to the UV boundary conditions that do not lead to zero modes, and where the mass spectrum is insensitive to the gauge-higgs vev.
Furthermore, similarly as in the toy model there are scalar, pseudo-scalar and unphysical Goldstone particles.
These are however not important for the following and we omit this discussion.
\subsection{Gauge Contributions to the Higgs Mass}
We turn to discussing the one-loop corrections to the Higgs mass generated by the electroweak gauge bosons and its KK modes.
Our gauge-Higgs is a (pseudo-) Goldstone boson of SO(5) symmetry broken spontaneously down to SO(4) in the bulk, and explicitly down to SU(2) on the UV brane.
As a consequence of that, the Higgs potential vanishes at the tree-level as long as there is no SO(5) breaking operator involving the scalar $\Phi$ localized on the UV brane.
A radiative Higgs potential is generated at one-loop level because the tree-level KK mode masses depend on the vev $\tilde v$.
The simplest way to calculate it is to use the spectral function $\rho(p^2) = \det (-p^2 + m_n^2(\tilde v))$, which is
a function of 4D momenta with zeros encoding the whole KK spectrum in the presence of electroweak breaking \cite{AA}.
With a spectral function at hand, we can compute the Higgs potential from the Coleman-Weinberg formula,
\begin{equation}
\label{e.cws0}
V(\tilde v) = {N \over (4 \pi)^{2}} \int_0^\infty dp \, p^{3} \log \left ( \rho[-p^2] \right ) ,
\end{equation}
where $N = + 3$ for gauge bosons.
From the quantization conditions obtained in the previous section we know that the spectral functions for
W and Z towers have the form $\rho_{W,Z} = 1 + F_{W,Z}(p^2) \sin^2(\tilde v/f_h)$.
It is convenient to define the form factor
\begin{equation}
\label{e.ff}
\Pi_M(p^2) = \partial_z \log \left (K_M(z,p) \right)|_{z = z_0} .
\end{equation}
From \eref{wqc} we identify
\begin{equation}
\label{e.gff}
F_W(p^2) = {\Pi_M(p^2) - \Pi_0(p^2) \over 2 \Pi_0(p^2)}
\end{equation}
and $F_Z(p^2) = (1 + s_x^2) F_W(p^2)$.
$F_W$ determines the one-loop gauge contribution to the Higgs mass parameter:
\begin{equation}
m_H^2 \equiv {\partial^2 V \over \partial \tilde v^2}|_{\tilde v= 0} =
{3(3+ s_x^2) \over f_h^2 (4 \pi)^{2}} \int_0^\infty dp \, p^{3} F_W(-p^2) .
\end{equation}
Quadratic divergences are avoided if the form factor $F_W$ decays faster than $1/p^2$ for large Euclidean momenta.
At this point our efforts from section 2 are beginning to pay off.
At small Euclidean momenta we can use \eref{kmsp} to find that $F_W(-p^2) \approx g_5^2 f_h^2/(4 L p^2)$.
This can be identified with the SM W and Z boson contribution to the Higgs mass.
The presence of the tower of KK modes changes the shape of $F_W(-p^2)$ for momenta above the mass gap.
Whether the tower cuts off the quadratic divergences depends on the asymptotic form of $F_W(-p^2)$ for $-p^2 \to \infty$.
Using \eref{kmlp} we find that the leading asymptotic behavior of the form factor is given by
\begin{equation}
F_W(-p^2) \approx { e^{2 p z_0} \int_{z_0}^\infty e^{- 2 p z} M^2 (z) \over 2 p + a'/a + \dots} .
\end{equation}
The integral is exponentially suppressed in the IR, so that we can expand the mass term around the UV brane as
$M^2(z) = M^2(z_0) + (z-z_0)\partial_z M^2(z_0) + \dots$ and integrate term by term.
If $M^2(z_0) \neq 0$, the leading asymptotic behavior of the form factor is
\begin{equation}
F_W(-p^2) \approx {M^2 (z_0) \over 4 p^2} + {\mathcal O}(1/p^3) ,
\qquad
p z_0 \gg 1 .
\end{equation}
Thus the one-loop Higgs mass is in general quadratically divergent, $\delta m_H^2
\sim M^2 (z_0) \Lambda^2 /16\pi^2 f_h^2 \sim M^2(z_0)/ (z_0 f_h)^2$, unlike in the standard hard-wall gauge-higgs scenario.
We can avoid quadratic divergences if $M^2(z_0) = 0$,
which is automatic when the bulk scalar that breaks SO(5) satisfies Dirichlet boundary conditions on the UV brane.
More precisely, the mass is finite when we impose the condition that $M^2$ vanishes faster than $(z - z_0)^2$ in the vicinity of the UV brane,
while for $M^2(z) \sim (z- z_0)^2$ the Higgs mass is logarithmically sensitive to the cutoff.
In practice, we do not need to insist on the finiteness of the Higgs mass:
it is enough if $M^2(z_0)$ is sufficiently suppressed, $M^2(z_0) \sim {\mathcal O}(z_0^2 \, {\rm TeV}^4)$.
We note in passing that the UV behaviour of $M^2(z)$ is related to the dimension of the operator breaking the $SO(5)$ symmetry in the holographic dual. The latter should be larger than $2$ to avoid quadratic divergences.
\subsection{Oblique Corrections}
Integrating out the heavy KK modes leaves the SM lagrangian plus higher-dimension operators.
We will assume here that all the light SM fermions are localized on the UV brane.
In such a case the corrections to the SM lagrangian are universal (except the operators involving the third generation fermions), which means that there exists a field redefinition such that the higher dimension operators involve only the electroweak gauge boson and the Higgs field, whereas vertex correction and four-fermion operators are absent.
Phenomenologically, the most important are the corrections to the quadratic terms involving the $SU(2)_L \times U(1)_Y$ gauge bosons
(there are also corrections to the triple and quartic gauge boson vertices, but these are less constrained by experiment).
Restricting to the lagrangian terms with at most four derivatives, these corrections can be described by seven ``oblique'' parameters \cite{BPRS}.
We define them as follows:
\begin{align}
\label{e.p4}
{\mathcal L}_{eff} = &
- {1 \over 4} (L_{\mu \nu}^a)^2 - {1 \over 4} (B_{\mu \nu})^2
+ {g_L^2 v^2 \over 8} L_\mu^i L_\mu^i
\nonumber \\
& \mbox{} + {v^2 \over 8} \left ( g_L L_\mu^3 - g_Y B_\mu \right )
\left ( 1 - \alpha_T {g_Y^2 v^2 \over 2} + \alpha_U \partial^2 + \alpha_V \partial^4\right ) \left ( g_L L_\mu^3 - g_Y B_\mu \right )
\nonumber \\
& \mbox{} - {1 \over 4} \alpha_W (\partial_\rho L_{\mu\nu}^a)^2 - {1 \over 4} \alpha_Y (\partial_\rho B_{\mu\nu})^2
- {g_L g_Y v^2 \over 8} L_{\mu\nu}^3 \left( \alpha_S + \alpha_X \partial^2 \right ) B_{\mu\nu}
+ {\mathcal O} (\partial^6) .
\end{align}
As explained in \cite{BPRS}, the parameters $\alpha_{T,S,W,Y}$ are most relevant for phenomenologists, since they are the lowest order in their class (they also correspond to dimension six operators in the effective SM lagrangian).
Furthermore, in our set-up $\alpha_T = 0$.
This a consequence of the original SO(5)/SO(4) coset structure, which implies that the quadratic terms in the effective lagrangian respect the SU(2) custodial symmetry rotating the triplet $L_\mu^a$.
This leaves us with three oblique parameters which we will find by matching with the low-energy effective action obtained by integrating out the KK modes.
The oblique parameters of ref. \cite{BPRS} are related to the dimensionful coefficients $\alpha$ by
$\hat {S} = m_W^2 \alpha_S$,
$\hat T = {g_Y^2 v^2 \over 2} \alpha_T$,
$W = m_W^2 \alpha_W$,
$Y = m_W^2 \alpha_Y$.
The Peskin-Takeuchi S parameter is $S = 4 \pi v^2 \alpha_S$.
The derivation of the low-energy effective action using the ``holographic approach'' \cite{BPR,ACP,PW} is deferred to Appendix \aref{hd}.
It turns out that the quadratic part of the low-energy effective action can be expressed in terms of the form factor defined in \eref{ff},
\begin{align}
\label{e.gel}
{\mathcal L}_{eff} = & - {1 \over 2} \Pi_0(p^2) \left [
Z_{L}^{-1} L_\mu^a L_\mu^a + Z_{B}^{-1} B_\mu B_\mu \right ]
\nonumber \\
& \mbox{} + Z_{L}^{-1} {\sin^2(\tilde v/f_h) \over 4} \left (\Pi_0(p^2) -\Pi_M(p^2) \right )
\left [
L_\mu^i L_\mu^i + (L_\mu^3 - {g_Y \over g_L} B_\mu )^2 \right ] ,
\end{align}
where $Z_{L,B}$ are arbitrary normalization factors.
In the limit of no EW breaking, $\tilde v = 0$, the gauge bosons should be massless, which is true when $\Pi_0(0) = 0$.
This is a consequence of the 5D equations of motion, namely that the gauge invariance is left unbroken for $M^2 = 0$.
Note also that, from \eref{f}, the global symmetry breaking scale $f_h$ can be expressed by the form factors as
\begin{equation}
f_h^2 = - {2 \Pi_M(0) \over g_5^2} = - {2 \Pi_M(0) \over Z_L g_L^2} .
\end{equation}
Expanding the form factors in powers of $p^2$ we match the above lagrangian with \eref{p4}.
Canonical normalization is achieved when the normalization factors are chosen as
\begin{eqnarray}
\label{e.zl}
Z_L &=& \Pi_0'(0) + {\sin^2(\tilde v/f_h) \over 2} \left (\Pi_M'(0) -\Pi_0'(0) \right) ,
\nonumber \\
Z_B &=& \Pi_0'(0) {\Pi_0'(0) + {\sin^2(\tilde v/f_h) \over 2} \left (\Pi_M'(0) -\Pi_0'(0) \right) \over
\Pi_0'(0) + {\sin^2(\tilde v/f_h) \over 2} (1 - g_Y^2/g_L^2) \left (\Pi_M'(0) -\Pi_0'(0) \right)} .
\end{eqnarray}
Then the W mass is given by $m_W^2 = - Z_L^{-1} \Pi_M(0) \sin^2(\tilde v/f_h)/2$ which allows us to identify the electroweak scale as
\begin{equation}
v^2 = -{2 \Pi_M(0) \over Z_L g_L^2} \sin^2(\tilde v/f_h) = f_h^2 \sin^2 (\tilde v/f_h) .
\end{equation}
The W and Z masses are positive if $\Pi_M(0) < 0$ which will turn out to be a general consequence of the 5D equations of motion.
We can see that $v \approx \tilde v$ for $\sin (\tilde v/f_h)\ll 1$, while in the technicolor limit, $\sin(\tilde v/f_h) = 1$, we obtain $v = f_h$.
As we mentioned, $v/f_h$ is determined by the radiatively generated Higgs potential, but we do not discuss this issue in this paper.
Further expanding the form factors we read off the oblique parameters defined in \eref{p4}:
\begin{equation}
\alpha_S = {\Pi_M'(0) - \Pi_0'(0) \over \Pi_M(0)} ,
\qquad
\alpha_W = {\Pi_0''(0) \over 2 Z_L} ,
\qquad
\alpha_Y = {\Pi_0''(0) \over 2 Z_B} .
\end{equation}
If the solutions to the 5D equations of motion can be found then we can write down the full form factor $\Pi_M(p^2)$ using \eref{ff} and trivially compute the oblique parameters.
In those (more typical) instances when the explicit solution is not known we can still learn a great deal by using the results from section 2.
In particular, the small momentum expansion \eref{kmsp} implies the following ``chiral expansion'' of the form factors:
\begin{align}
& \Pi_0(p^2) = p^2 L \left ( 1
+ p^2 {\int_{z_0}^\infty a \int_{z_0}^{z} a^{-1} \int_{z'}^\infty a \over \int_{z_0}^\infty a }
\right ) + {\mathcal O}(p^6) ,
\\
&
\Pi_M(p^2) =
{\eta'(z_0) \over \eta(z_0)} + p^2 \int_{z_0}^\infty a(z) {\eta^2(z) \over \eta^2(z_0)} + {\mathcal O}(p^4) .
\end{align}
$\Pi_0(0) = 0$, as promised.
The first derivative is set by the invariant length of the 5th dimension, $\Pi_0'(0) = \int_{z_0}^\infty a(z) = L$, while the second derivative is also positive and is given by a more complicated functional of the warp factor.
It follows that the oblique parameters $W$ and $Y$ are always positive in the 5D set-up.
In the limit $\sin(\tilde v/f_h) \to 0$ they are equal and given by
\begin{equation}
\alpha_W = \alpha_Y = {\int_{z_0}^\infty a \int_{z_0}^{z} a^{-1} \int_{z'}^\infty a \over \int_{z_0}^\infty a} .
\end{equation}
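The nested integrals are easy to evaluate on a grid with cumulative trapezoids; a sketch (ours), using the linear soft-wall warp factor of section \ref{sec:na} as an assumed illustrative input:
\begin{verbatim}
# Sketch (ours): evaluate  alpha_W = alpha_Y =
#   int a(z) int_{z0}^z a^-1 int_{z'}^inf a  /  int a
# on a grid, for a = (z0/z) exp(-rho^2(z^2-z0^2)).
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

z0, rho = 1e-3, 1.0                       # 1/TeV and TeV, illustrative
z = np.linspace(z0, 8.0 / rho, 400001)
a = (z0 / z) * np.exp(-rho**2 * (z**2 - z0**2))
L = trapezoid(a, z)                       # invariant length
tail = L - cumulative_trapezoid(a, z, initial=0.0)      # int_z^inf a
inner = cumulative_trapezoid(tail / a, z, initial=0.0)  # int_{z0}^z a^-1 tail
alpha_W = trapezoid(a * inner, z) / L
print("L =", L, "  alpha_W = alpha_Y =", alpha_W, "(1/TeV^2)")
\end{verbatim}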
$\Pi_M(p^2)$ depends on the SO(5) breaking bulk dynamics encoded in the gauge-higgs profile $\eta(z)$.
We have proved earlier that $\eta(z)$ is monotonically decreasing everywhere in the bulk, as there is a mass gap and $M^2 > 0$ everywhere.
In particular, $\Pi_M(0) < 0$, which is what it takes to ensure $m_W^2 >0$.
Furthermore, the perturbative expression for $\Pi_M(p^2)$ leads to the ``dispersive'' representation of the S parameter
\begin{equation}
\label{e.asd}
\alpha_S = \int_{z_0}^\infty a(z) \left ( \eta^2(z_0) - \eta^2(z) \right ) .
\end{equation}
The decrease of $\eta(z)$ implies that {\em the S parameter in the 5D set-up is always positive}.
The no-go theorem for negative S was proved in ref. \cite{BPR} for Higgsless models with a RS background, and then in ref. \cite{ACGR} in the context of 5D models with the $SU(2)_L \times SU(2)_R \times U(1)_X$ bulk gauge symmetry, for the Higgs field realized as an IR brane or a bulk scalar, and for an arbitrary warped metric.
Our analysis extends these results to the 5D soft wall models of gauge-higgs unification, where the Higgs is a pseudo-Goldstone boson living partly in a bulk scalar and partly in a fifth component of a gauge field.
Eq.\erefn{asd} and the positivity of S hold for arbitrary $v/f_h$, in particular in the Higgsless limit $v = f_h$.
Therefore, the soft wall does not allow us to evade the no-go theorem for negative S.
\section{Examples}
\label{sec:na} \setcounter{equation}{0} \setcounter{footnote}{0}
\subsection{Hard Wall}
As a reference point, we review the results for the standard hard wall RS set-up, where the 5D coordinate is cut off at finite $z = z_L$ by the IR brane, and it is also the IR brane rather than a bulk scalar that breaks $SO(5) \to SO(4)$.
The hard wall can be viewed as a special limit of the soft wall: the warp factor is discontinuous and vanishes for $z\geq z_L$,
while the symmetry breaking mass $M^2$ is zero everywhere except for $z = z_L$ where it is infinite.
Thus, our soft wall analysis applies to the hard wall as well after some obvious adjustments.
Namely, the solutions $K_M(z)$ and $K_0(z)$ should be chosen such as to satisfy the appropriate IR boundary (rather than normalizability) conditions.
Breaking of the global symmetry by the IR brane amounts to setting $K_M(z_L,p)=0$.
For $K_0$ we impose the mixed boundary conditions $a(z_L) K_0'(z_L,p) = p^2 r K_0(z_L,p)$,
where $r$ is an IR brane kinetic term (common to SO(4) and $U(1)_X$, for simplicity).
These boundary conditions imply that $K$'s can be expanded in powers of $p^2$ as
\begin{eqnarray}
K_0(z,p) &=& C_0 \left (
1 - p^2 \int_{z}^{z_L} a^{-1} \int_{z'}^{z_L} a - p^2 r \int_{z}^{z_L} a^{-1} + {\mathcal O}(p^4)
\right) ,
\nonumber \\
K_M(z,p) &=& C_0 \left (
\int_{z}^{z_L} a^{-1} + p^2 \int_{z}^{z_L} a^{-1} \int_{z_0}^{z'} a \int_{z''}^{z_L} a^{-1}
+ {\mathcal O}(p^4) \right ) .
\end{eqnarray}
Everything else follows from that expansion, according to formulas presented in the previous section.
The form factors are
\begin{eqnarray}
\Pi_0(p^2) &=& p^2(L+r) + {\mathcal O}(p^4) ,
\nonumber \\
\Pi_M(p^2) &=& - {1 \over \int_{z_0}^{z_L} a^{-1}}
+ p^2 {\int_{z_0}^{z_L} a^{-1} \int_{z_0}^{z} a \int_{z'}^{z_L} a^{-1} \over \left (\int_{z_0}^{z_L} a^{-1} \right )^2}
+ {\mathcal O}(p^4) .
\end{eqnarray}
The expression for the global symmetry breaking scale follows:
\begin{equation}
f_h^2 = {2 \over g_*^2 z_0 \int_{z_0}^{z_L} a^{-1}} ,
\end{equation}
where $g_*^2 = g_5^2/z_0$ is the dimensionless bulk gauge coupling.
In the absence of UV brane kinetic terms the weak coupling is related to $g_*$ via
\begin{equation}
\label{e.wgc}
g_L^2 = g_*^2 {z_0 \over L + r - {\epsilon^2 \over 2} \left( L + r - {\int_{z_0}^{z_L} a^{-1} \int_{z_0}^{z} a \int_{z'}^{z_L} a^{-1} \over \left (\int_{z_0}^{z_L} a^{-1} \right )^2} \right ) } ,
\end{equation}
where $\epsilon = \sin (\tilde v/f_h)$.
The normalized Higgs profile is given by
\begin{equation}
\eta(z) = {\int_{z}^{z_L} a^{-1} \over \sqrt{\int_{z_0}^{z_L} a^{-1}}} .
\end{equation}
The S parameter becomes
\begin{equation}
S = 4 \pi v^2 \left [
{(L + r)(\int_{z_0}^{z_L} a^{-1})^2
- \int_{z_0}^{z_L} a^{-1} \int_{z_0}^{z} a \int_{z'}^{z_L} a^{-1} \over \int_{z_0}^{z_L} a^{-1} }
\right] .
\end{equation}
For AdS the warp factor is $a = z_0/z$. It follows that $f_h^2 \approx 4/(g_{*}^2 z_L^{2})$.
We obtain the S parameter
\begin{equation}
\label{e.hws}
S \approx {3 \pi \over 2} v^2 z_L^2 \left ( 1 + {4 r \over 3 z_L} \right )
\approx \epsilon^2 {6 \pi \over g_*^2} \left ( 1 + {4 r \over 3 z_L} \right ) ,
\end{equation}
in agreement with ref. \cite{ACP}.
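As a cross-check, the general expression above can be evaluated numerically for the AdS warp factor and compared with \eref{hws}; a short sketch (ours, with illustrative values of $z_0$, $z_L$):
\begin{verbatim}
# Sketch (ours): check eq. (hws) against the general hard-wall expression
# S = 4 pi v^2 [ (L+r) I - T/I ],  I = int a^-1,
# T = int a^-1(z) int_{z0}^z a(z') int_{z'}^{zL} a^-1(z''),  AdS a = z0/z.
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

z0, zL, r, v = 1e-3, 1.0, 0.0, 0.246       # illustrative, 1/TeV and TeV
z = np.linspace(z0, zL, 400001)
a = z0 / z
L, I = trapezoid(a, z), trapezoid(1 / a, z)
tail = I - cumulative_trapezoid(1 / a, z, initial=0.0)   # int_z^{zL} a^-1
inner = cumulative_trapezoid(a * tail, z, initial=0.0)   # int_{z0}^z a tail
T = trapezoid(inner / a, z)
print("S =", 4 * np.pi * v**2 * ((L + r) * I - T / I),
      " vs (3 pi/2) v^2 zL^2 =", 1.5 * np.pi * v**2 * zL**2)
\end{verbatim}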
The first equality shows that, for small IR brane term, the S parameter is of order $v^2/\, M_{\rm KK}^2$ where $\, M_{\rm KK}$ is the mass of the first KK mode: $\, M_{\rm KK} \approx 2.4/z_L$.
Imposing the constraint $S < 0.2$ leads to the bound on the KK mass $\, M_{\rm KK} \stackrel{>}{{}_\sim} 3 \, {\rm TeV}$.
Adding a positive IR brane kinetic term makes the bound even stronger.
The second equality shows that for a fixed $\epsilon = v/f_h$ the size of the S parameter is determined by the strength of the bulk gauge coupling.
The latter is related to the weak gauge coupling as in \eref{wgc} but we can control this relation by adjusting $z_0$ or adding a UV gauge kinetic term.
Perturbativity of the 5D theory constrains $g_*$ such that $N_{CFT} \equiv 16 \pi^2/g_{*}^2 \gg 1$,
which then implies that $\epsilon$ has to be small enough.
For example, for $N_{CFT} = 10$ the bound is $\epsilon < 0.4$.
This leads to some tension with naturalness, since the fine-tuning involved in preparing a correct electroweak breaking vacuum is of order $\epsilon^2$.
The W and Y parameters are given by (for $r = 0$)
\begin{equation}
W \approx Y \approx {m_W^2 z_L^2 \over 4 \log (z_L/z_0)} .
\end{equation}
As long as $z_L/z_0$ is large, which is the case when the set-up addresses the Planck-TeV hierarchy, the log in the denominator is large and $W$ and $Y$ are suppressed with respect to $m_W^2/\, M_{\rm KK}^2$.
Thus, the constraints from $W$ and $Y$ are much weaker than those from $S$.
For example, for $z_L/z_0 \sim 10^{16}$, imposing $W < 10^{-3}$ yields $\, M_{\rm KK} > 500 \, {\rm GeV}$.
In the following we investigate if the constraint from the S parameter may be improved in the soft-wall backgrounds.
\subsection{Linear Soft Wall}
Our first example of electroweak breaking on the soft wall has the metric that yields a linear trajectory for KK resonances.
The warp factor is \cite{EKSS}
\begin{equation}
a(z) = {z_0 \over z} e^{-\rho^2(z^2-z_0^2)}.
\end{equation}
We assume $1/z_0 \gg \rho$ which implies that the warp factor is approximately the AdS one in the UV region, while in IR the conformal symmetry is broken smoothly for $z \stackrel{>}{{}_\sim} 1/\rho$.
The invariant length of the 5th dimension can be approximated as
\begin{equation}
L = z_0 \left ( \log(1/z_0 \rho) - \gamma/2 \right) + {\mathcal O} (z_0^2).
\end{equation}
The parameter $\rho$ plays a similar role as the IR brane in the hard-wall:
it makes the invariant length finite and generates a mass gap of order $\rho$.
We choose the symmetry breaking mass term as
\begin{equation}
M^2(z) = \mu^4 z^2.
\end{equation}
The mass term does not vanish on the UV brane, but $M^2(z_0)$ is suppressed by $z_0^2$ which ensures that the one-loop Higgs mass is not sensitive to the UV scale $1/z_0$.
The hierarchy problem is avoided when the mass parameter $\mu$ is not much larger than $\, {\rm TeV}$.
The background and the mass term corresponds to the superpotential
\begin{equation}
W = {1 \over 2 z} + \sqrt{\mu^4 + \rho^4} \, z ,
\end{equation}
which can be split as
\begin{equation}
W_0 = {1 \over 2 z} + \rho^2 z ,
\qquad
U_M = \left ( \sqrt{\mu^4 + \rho^4} - \rho^2 \right )z .
\end{equation}
Both $W$ and $W_0$ become infinite in IR which shows that the KK spectrum is discrete and has a mass gap.
$U_M$ fixes the gauge-higgs profile to be
\begin{equation}
\eta(z) = \left ( \sqrt{\mu^4 + \rho^4} - \rho^2 \right )^{-1/2} z_0^{-1/2}
\exp\left ( - {1 \over 2} \left ( \sqrt{\mu^4 + \rho^4} - \rho^2 \right ) (z^2 - z_0^2) \right ),
\end{equation}
which is the right half of a gaussian.
In this simple background we can solve the equation of motion explicitly.
The normalizable solution is
\begin{equation}
K_M(z,p) = e^{- {1 \over 2} \left ( \sqrt{\mu^4 + \rho^4} - \rho^2 \right )z^2 }
U \left (- {p^2 \over 4 \sqrt{\mu^4 + \rho^4}},0,\sqrt{\mu^4 + \rho^4} z^2 \right ),
\end{equation}
where $U$ is the confluent hypergeometric function of the second kind.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.4\textwidth]{lin_phprof.eps}
\includegraphics[width=0.4\textwidth]{higgsprof.eps}
\caption{Left: 5D profiles of the zero mode and the first two KK modes of the photon in the linear background for $1/z_0 = 10^{19} \, {\rm GeV}$, $\rho = 1 \, {\rm TeV}$. Right: Embedding of the Higgs boson into the gauge field (solid red) and the bulk scalar (dashed blue) for $\mu = 2.4 \, {\rm TeV}$.
}
\label{f.hp}
\end{center}
\end{figure}
The photon profile (which does not see the mass term) is proportional to $K_0(z,p) = U(- p^2/4\rho^2,0,\rho^2 z^2)$, and its KK spectrum is given by the solutions of $0 = K_0'(z_0,m_n) \to U(1 - m_n^2/4\rho^2,1,\rho^2 z_0^2) = 0$.
To lowest order in $z_0$, the spectrum is given by the linear Regge trajectory: $m_n \approx 2\rho n^{1/2}$.
The first KK modes of the W and Z bosons have approximately the same mass, while other vector, scalar and pseudoscalar KK modes are heavier (the splittings depend on the parameter $\mu$).
Thus, the mass parameter $\rho$ sets the KK scale $\, M_{\rm KK} \sim 2\rho$.
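For illustration, the quantization condition can be solved numerically; the sketch below (ours) scans for sign changes of the hypergeometric $U$ and refines them with a root finder, then compares with $2\rho\sqrt{n}$. We assume a mild hierarchy $\rho z_0 = 10^{-2}$ to keep the special-function evaluation well conditioned; with a Planckian $z_0$ the roots move only slightly:
\begin{verbatim}
# Sketch (ours): photon KK masses in the linear background from
# U(1 - m^2/(4 rho^2), 1, rho^2 z0^2) = 0, via a sign scan plus brentq.
import numpy as np
from scipy.special import hyperu
from scipy.optimize import brentq

rho, z0 = 1.0, 1e-2        # assumed mild hierarchy, for conditioning
f = lambda m: hyperu(1.0 - m**2 / (4 * rho**2), 1.0, (rho * z0) ** 2)
grid = np.linspace(0.5 * rho, 6.5 * rho, 3000)
vals = np.array([f(m) for m in grid])
roots = [brentq(f, grid[i], grid[i + 1]) for i in range(len(grid) - 1)
         if np.isfinite(vals[i]) and np.isfinite(vals[i + 1])
         and vals[i] * vals[i + 1] < 0]
for n, m in enumerate(roots, start=1):
    print(f"m_{n} = {m:.3f} rho   vs  2 sqrt(n) = {2 * np.sqrt(n):.3f}")
\end{verbatim}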
Knowing the explicit solution we can find out what the profiles of the KK modes look like,
and we plotted some examples in fig. \ref{f.hp}.
Even though the excited KK profiles explode in IR, overlap integrals of the form $\int a(z) [f_n(z)]^i$ are finite thanks to the exponential in the warp factor.
At this point some comments on the perturbativity of our set-up are in order.
In warped theories, the strong-coupling scale is position dependent.
One way to quantify it is to introduce the effective coupling
$g_{eff}^2(z,p) = z_0 g_*^2 p^2 i P(z,z,-p^2)$, where $P$ is the propagator in the 4D momentum/5D position space.
A physical process involving exchange of KK modes between sources peaked at $z$ is governed by $g_{eff}^2(z,p)$.
For $p^2 \to 0$ we have $P(z,z,p^2) \to 1/(p^2 L)$, and the effective coupling approaches the zero mode coupling, independently of $z$.
While on the UV brane $g_{eff}^2(z_0,p)$ remains perturbative up to very high scales above the mass gap, in the IR $g_{eff}^2(z,p)$ grows as a power of momentum.
The position dependent strong coupling scale $\Lambda_S(z)$ can be defined as the momentum scale where the effective coupling becomes non-perturbative: $g_{eff}(z,\Lambda_S(z)) \sim 4 \pi$.
The effective coupling for the electroweak gauge bosons in the linear background is plotted in fig. \ref{f.sc}.
We can see that on the UV brane the effective coupling grows only logarithmically with momentum while in the IR it grows much faster and quickly hits the non-perturbative values.
Nevertheless, for $z = 1/\rho$ the theory includes several KK modes before the strong coupling sets in.
For $z \gg 1/\rho$, however, the effective coupling becomes non-perturbative below the scale of the first KK mode.
Thus, sources localized for $z \gg 1/\rho$ (one could, for example, try to localize 3rd generation fermions in the far IR) are unavoidably strongly coupled in the 5D description.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.3\textwidth]{lin_propuv.eps}
\includegraphics[width=0.3\textwidth]{lin_proprho.eps}
\includegraphics[width=0.3\textwidth]{lin_propz.eps}
\caption{The effective $SU(2)_L$ coupling (solid red) in the linear background ($\rho = 1 \, {\rm TeV}$, $1/z_0 = 10^{19} \, {\rm GeV}$, $g_* = 3.9$)
as a function of momentum for $z = z_0$ and $z = 1/\rho$ and as a function of $z$ for $p = 2 \rho$.
The lower dashed line is the SM weak coupling, while the upper dashed line in the second and the third plot marks the strong coupling $g_{eff} = 4 \pi$.}
\label{f.sc}
\end{center}
\end{figure}
We move to the electroweak constraints.
The form factor is given by
\begin{equation}
\Pi_M(p^2) = - \left ( \sqrt{\mu^4 + \rho^4} - \rho^2 \right ) z_0
+ p^2 {z_0 \over 2}
{U \left (1 - {p^2 \over 4 \sqrt{\mu^4 + \rho^4}},1 ,\sqrt{\mu^4 + \rho^4} z_0^2 \right )
\over
U \left (- {p^2 \over 4 \sqrt{\mu^4 + \rho^4}},0,\sqrt{\mu^4 + \rho^4} z_0^2 \right )} .
\end{equation}
Since $U(0,b,x) = 1$, $\Pi_M(0) = - ( \sqrt{\mu^4 + \rho^4} - \rho^2) z_0$.
Thus, the global symmetry breaking scale is given by
\begin{equation}
\label{e.fhr}
f_h^2 = {2 \over g_*^2} \left ( \sqrt{\mu^4 + \rho^4} - \rho^2 \right ) .
\end{equation}
The first derivative of the form factor can be approximated as
\begin{equation}
\Pi_M'(0) = z_0 \left ( - {1 \over 4} \log\left( z_0^4 (\rho^4 + \mu^4) \right) - \gamma/2 \right) + {\mathcal O} (z_0^2) .
\end{equation}
Thus, we find the S parameter
\begin{equation}
\label{e.sws}
S \approx {\pi v^2 \log \left (1 + {\mu^4 \over \rho^4}\right) \over \sqrt{\mu^4 + \rho^4} - \rho^2}
= \epsilon^2 {2 \pi \over g_*^2} \log \left (1 + {\mu^4 \over \rho^4}\right) .
\end{equation}
As opposed to the hard-wall case, the S parameter depends on the combination of $\rho$ and $\mu$, rather than being directly related to the KK scale.
Thus, playing with $\mu/\rho$ we are able to relax the 3 TeV bound on the KK scale.
Comparing \eref{sws} with the hard-wall formula \erefn{hws} we can see that the numerical factor $6$ is replaced on the soft-wall by $ 2 \log(1 + \mu^4/\rho^4)$.
Thus, for a fixed $\epsilon$ and $g_*$, the S parameter can be reduced if $\mu$ is not larger than $\sim 2\rho$.
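As a consistency check of \eref{sws}, one can evaluate the dispersive representation \erefn{asd} directly in this background; a sketch (ours, with illustrative parameter values):
\begin{verbatim}
# Sketch (ours): compare the closed form (sws) with a direct numerical
# evaluation of the dispersive integral (asd) in the linear background.
import numpy as np
from scipy.integrate import quad

z0, rho, mu, v = 1e-3, 1.0, 2.4, 0.246    # illustrative, TeV units
c = np.sqrt(mu**4 + rho**4) - rho**2
a = lambda z: (z0 / z) * np.exp(-rho**2 * (z**2 - z0**2))
eta2 = lambda z: np.exp(-c * (z**2 - z0**2)) / (c * z0)   # eta(z)^2
alpha_S, _ = quad(lambda z: a(z) * (eta2(z0) - eta2(z)), z0, np.inf,
                  limit=200)
print("S dispersive  =", 4 * np.pi * v**2 * alpha_S)
print("S closed form =", np.pi * v**2 * np.log(1 + mu**4 / rho**4) / c)
\end{verbatim}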
For the W and Y parameters we find
\begin{equation}
W \approx Y \approx {\pi^2 \over 48} {m_W^2 \over \rho^2 \log (1/z_0 \rho)} .
\end{equation}
As in the hard-wall case, as long as the UV/IR hierarchy is very large, $W$ and $Y$ are suppressed by the log of the hierarchy.
The resulting constraint on the KK scale turns out to be even weaker than in the hard-wall AdS.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.4\textwidth]{lin_eps3.eps}
\includegraphics[width=0.4\textwidth]{w_eps3.eps}
\caption{S and W parameter in the linear soft wall for $\epsilon = 0.3$.}
\label{f.sr}
\end{center}
\end{figure}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.4\textwidth]{lin_eps5.eps}
\includegraphics[width=0.4\textwidth]{lin_eps10.eps}
\caption{S parameter in the linear soft wall for $\epsilon = 0.5,1$.}
\label{f.sr2}
\end{center}
\end{figure}
We conclude this analysis with some numerical studies of the electroweak constraints.
We employ the following procedure:
\begin{enumerate}
\item We fix the UV scale to be of the order of the Planck scale, $1/z_0 = 10^{19}\, {\rm GeV}$.
We also pick three discrete values $v/f_h \equiv \epsilon = 0.3, 0.5, 1$.
We scan over $\rho \in (0.5,1.5) \, {\rm TeV}$.
\item
We assume no UV brane kinetic terms, thus the SM weak coupling is given by $g_L^2 \approx g_*^2 z_0/L$.
The bulk coupling $g_*$ can be obtained by inverting \eref{fhr}.
This way, $g_L^2$ becomes a function of $z_0,\rho,\epsilon,\mu$.
When the first three parameters are fixed, $\mu$ is determined by matching to the measured weak coupling evaluated at the TeV scale:
$g_L^2(\, {\rm TeV}) \approx 0.41$.
For our input parameters, we find $\mu$ of order a ${\rm TeV}$ and $g_* \sim 4$ (a numerical sketch of this matching is given below the list).
\item We plot the S parameter as a function of $\rho$ and find the bounds on the KK scale.
The results are presented in figs. \ref{f.sr} and \ref{f.sr2}.
For $\epsilon = 0.3$, imposing the constraint $S < 0.2$ implies the rather mild bound $\, M_{\rm KK} \stackrel{>}{{}_\sim} 1.2 \, {\rm TeV}$.
As suggested by \eref{sws}, the constraints become more stringent when we decrease $f_h$.
For $\epsilon = 0.5$ we find $\, M_{\rm KK} \stackrel{>}{{}_\sim} 2.2 \, {\rm TeV}$, while for $\epsilon = 1$ (the technicolor limit) the bound becomes $\, M_{\rm KK} \stackrel{>}{{}_\sim} 2.6 \, {\rm TeV}$.
The W parameter is at most of order $10^{-4}$, safely below the bound $W \stackrel{<}{{}_\sim} 10^{-3}$.
\end{enumerate}
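The matching of step 2 amounts to a few lines; a sketch (ours) for $\epsilon = 0.3$, $\rho = 1 \, {\rm TeV}$, which yields $g_* \approx 3.9$ and $\mu \approx 2.4 \, {\rm TeV}$, consistent with the values quoted in fig. \ref{f.hp}:
\begin{verbatim}
# Sketch (ours) of the matching in step 2, for the linear wall:
# g_L^2 = g_*^2 z0/L, f_h^2 = (2/g_*^2)(sqrt(mu^4+rho^4)-rho^2), f_h = v/eps.
import numpy as np

gL2, v = 0.41, 246.0                   # weak coupling at a TeV, vev in GeV
z0inv, rho, eps = 1e19, 1000.0, 0.3    # UV scale and rho in GeV
gamma = 0.5772156649
L_over_z0 = np.log(z0inv / rho) - gamma / 2.0
fh = v / eps
gstar2 = gL2 * L_over_z0               # g_*^2 = g_L^2 L/z0
c = gstar2 * fh**2 / 2.0               # c = sqrt(mu^4+rho^4) - rho^2
mu = ((c + rho**2) ** 2 - rho**4) ** 0.25
print(f"g_* = {np.sqrt(gstar2):.2f},  mu = {mu / 1000:.2f} TeV")
\end{verbatim}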
\subsection{Continuum soft wall}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.4\textwidth]{hw_higgsprof.eps}
\caption{Embedding of the Higgs boson into the gauge field (solid red) and the bulk scalar (dashed blue) in the continuum background for $\epsilon = 0.3$, $1/z_0 = 10^{19} \, {\rm GeV}$, $\rho = 1 \, {\rm TeV}$.
}
\label{f.hp2}
\end{center}
\end{figure}
Our second example has a continuous KK spectrum with a mass gap, which is completely different from anything encountered in the hard wall models.
The metric is \cite{CMT}
\begin{equation}
a(z) = {z_0 \over z} e^{-\rho(z-z_0)},
\qquad
W_0 = {1 \over 2 z} + {\rho \over 2}.
\end{equation}
The invariant length of the 5th dimension is
$L = z_0 \left ( \log(1/z_0 \rho) - \gamma \right) + {\mathcal O} (z_0^2)$.
As before, the warp factor is approximately AdS in UV with conformal invariance broken at $z \stackrel{>}{{}_\sim} 1/\rho$.
This time, however, the decay of the warp factor in the IR is not fast enough to ensure a discrete spectrum.
Thus, there will be a continuum of KK modes starting at $\rho/2$ (in addition to discrete resonances that feel the $M^2$ term in their Schr\"odinger potential).
In the continuum case one needs more effort to cook up a tractable example with a sensible symmetry breaking mass term.
We want the mass to decay in UV, and at the same time we want the potential $V_M$ to be simple enough so that we can find the gauge-higgs profile $\eta(z)$.
The simplest example would be to take $U_M = \mu^2 z$, but then $M^2$ contains a linear term in $z$, which leads to linear sensitivity of the Higgs mass to the UV scale $1/z_0$.
Therefore we pick a somewhat more complicated example:
\begin{equation}
U_M = \mu^2 z + \mu^2 \rho z^2
\qquad \to \qquad M^2(z) = \mu^2 \rho^2 z^2 + \mu^4 z^2 (1 + \rho z)^2 .
\end{equation}
The second term in $U_M$ has been engineered such that $M^2 \sim z^2$ in UV.
The gauge-higgs profile is now
\begin{equation}
\eta(z) = {1 \over \sqrt{\mu^2 z_0 (1 +\rho z_0 )}}
\exp\left ( - {1 \over 2} \mu^2(z^2 - z_0^2) - {1 \over 3} \mu^2 \rho(z^3 - z_0^3) \right )
\end{equation}
and the global symmetry breaking scale is fixed by $\mu$,
\begin{equation}
\label{e.fhc}
f_h^2 = {2 \over g_*^2} \mu^2 \left (1 + \rho z_0 \right ) .
\end{equation}
This time our task is a bit harder, as we are not able to solve the equations of motion in this background.
At this point we should appreciate the formulas that express the oblique parameters as integrals of the warp factor.
We follow the same procedure as in the linear case, assuming no UV kinetic terms and fixing $\mu$ to match the weak gauge coupling.
Lacking analytical results, we obtain the S parameter by evaluating \eref{asd} numerically.
The results in fig. \ref{f.hws} show that the possibility of a continuum of KK modes is surprisingly weakly constrained by the electroweak precision data.
Imposing $S < 0.2$ requires $\rho \stackrel{>}{{}_\sim} 0.6 \, {\rm TeV}$ ($1.4 \, {\rm TeV}$) for $\epsilon = 0.3$ ($0.5$).
Given that the continuum starts at $\rho/2$, in both cases new physics below $1 \, {\rm TeV}$ is perfectly compatible with the experimentally observed smallness of the S parameter.
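A sketch (ours) of this numerical evaluation, with $\mu$ fixed by the same weak-coupling matching as in the linear case (our assumption) and illustrative inputs:
\begin{verbatim}
# Sketch (ours): S in the continuum background from eq. (asd), with mu
# fixed by the weak-coupling matching as in the linear case.
import numpy as np
from scipy.integrate import quad

gL2, v = 0.41, 0.246                       # TeV units
z0, rho, eps = 1e-19, 1.0, 0.3
gamma = 0.5772156649
fh = v / eps
gstar2 = gL2 * (np.log(1.0 / (z0 * rho)) - gamma)
mu2 = gstar2 * fh**2 / (2.0 * (1.0 + rho * z0))          # from eq. (fhc)
a = lambda z: (z0 / z) * np.exp(-rho * (z - z0))
eta2 = lambda z: np.exp(-mu2 * (z**2 - z0**2)
                        - (2.0 / 3.0) * mu2 * rho * (z**3 - z0**3)) \
                 / (mu2 * z0 * (1.0 + rho * z0))
alpha_S, _ = quad(lambda z: a(z) * (eta2(z0) - eta2(z)), z0, 60.0 / rho,
                  limit=400)
print("S =", 4 * np.pi * v**2 * alpha_S)
\end{verbatim}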
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.4\textwidth]{hw_eps3.eps}
\includegraphics[width=0.4\textwidth]{hw_eps5.eps}
\caption{S parameter in the continuum soft wall for $\epsilon = 0.3,0.5$.}
\label{f.hws}
\end{center}
\end{figure}
\section{Summary and Outlook}
\label{sec:d} \setcounter{equation}{0} \setcounter{footnote}{0}
We extended the formulation of 5D gauge-Higgs unification to the soft wall framework where the IR brane is replaced by a smoothly decaying warp factor.
5D gauge symmetry is broken by UV boundary conditions and by a vev of a bulk scalar field.
The Higgs boson lives partly in the 5th component of the gauge field, and partly along the Goldstone direction of the bulk scalar.
The soft-wall version can maintain the attractive feature of the standard gauge-Higgs scenario that the loop induced Higgs potential does not suffer from the large hierarchy problem.
More precisely, the Higgs potential is insensitive to the scale of the UV brane if the bulk scalar condensate is localized in IR and vanishes as $z^2$ or faster in UV.
We argue that our construction is more than a formal exercise.
Soft wall is a box of new possibilities in KK phenomenology, allowing for new kinds of spectra and couplings of the KK resonances.
One can even construct phenomenologically viable examples where the KK spectrum is continuous above a mass gap, with potentially striking hidden-valley phenomenology.
Most interestingly, bounds from electroweak precision test that create some tension in the standard hard-wall scenario can be relaxed.
We presented one explicit example where the bound from the S parameter on the lightest KK gauge boson mass is 2 TeV, rather than 3 TeV for the hard-wall.
Somewhat surprisingly, the electroweak constraints on the hidden-valley-type spectra turn out to be even weaker than the constraints on discrete resonances, allowing for a continuum starting below 1 TeV.
Softer is often safer.
Focused on the low energy phenomenology of the gauge sector, we left a couple of loose ends on the model-building side.
Firstly, bulk fermions were not included.
Because of that, we did not touch the flavor issues that usually are realized by different wave function localization of the SM fermions.
Moreover, we could not compute the fermion contribution to the radiative Higgs potential.
Since gauge fields yield a positive contribution to the Higgs mass squared, we simply assumed that the fermion contribution is negative and of appropriate magnitude to arrive at an electroweak breaking vacuum with $v/f_h < 1$.
It would be interesting to have an explicit realization of the fermion sector to see if the soft-wall scenario allows us to reduce the fine-tuning of electroweak breaking.
Secondly, we did not obtain our soft wall backgrounds as solutions of the equations of motion.
That would allow us to address the issues of back-reaction of the scalar condensate, radion stabilization and so on.
We restrained ourselves from solving all these problems here, so as to leave some for future publications.
\section*{Acknowledgements}
We would like to thank Csaba Csaki and Tony Gherghetta for important discussions.
A.F. is ever so grateful to Francesco Riva for the Euro'08 tickets.
A.F. is partially supported by the European Community Contract MRTN-CT-2004-503369 for the years 2004--2008.
M.P.V. is supported in part by MEC project FPA2006-05294 and Junta de Andaluc\'{\i}a projects FQM 101, FQM 00437 and FQM 03048.
\section{Introduction}
The existence or nonexistence of a gap between the energies of the ground state
and the low lying excited states is the most important criterion for the
criticality of a quantum spin system. Haldane's conjecture \cite{Hald83,Hald83a}
states that one-dimensional (1D) quantum spin systems have no gap for half
integer spin $s$, but do have a gap for integer spin $s$. The
conjecture only holds for an appropriate choice of the couplings of the spins at
nearest neighbor sites.\cite{Muet94}
The Lieb, Schultz, Mattis (LSM) construction\cite{LSM61,AL86} allows rigorous
statements on the degeneracy of the ground state. Starting from the unitary
operator
\begin{equation}
{\bf U} \equiv\exp \left(-i\frac{2\pi}{N} \sum_{l=1}^{N} l S^{3}_{l}\right),
\label{eq:i1}
\end{equation}
it is straightforward to prove that the application of the operator ${\bf
U}^k,\; k=1,2,... $ on the ground state $|0\rangle$ generates {\it new} states
\begin{equation}
|k\rangle \equiv {\bf U}^k |0\rangle \quad k=1,2,\ldots;\; k \mbox{ finite},
\label{eq:i2}
\end{equation}
with an energy expectation value
\begin{equation}
\langle k|{\bf H}|k \rangle - \langle 0|{\bf H}|0\rangle = O(N^{-1})
\label{eq:i3}
\end{equation}
approaching the ground state energy $E_0 \equiv \langle 0|{\bf H}|0\rangle$ in the
thermodynamical limit $N \to \infty$.
Of course the crucial question is whether the {\it new} states $|k\rangle $ are
different from the ground state $|0\rangle$ or not. This question can be
answered by an analysis of the quantum numbers of the states $|k\rangle,\;
k=1,2,\ldots$. For example in the case of the Spin-1/2 Hamiltonian with nearest
neighbor couplings:
\begin{equation}
{\bf H}(h_{3}) \equiv 2\sum_{l=1}^{N} {\bf S}_{l} \cdot {\bf S}_{l+1}
-2h_{3} {\bf S}_3(0),
\label{eq:i4}
\end{equation}
and
\begin{equation}
{\bf S}_a(q) \equiv \sum_{l=1}^{N} e^{iql} S_l^{a},\quad a=1,2,3,
\label{eq:i5}
\end{equation}
the ground state has momentum $p_{s}=0,\pi$ and total spin component $S_{T}^{3}=S=N
M(h_{3})$, where $M$ is the magnetization. The new states $|k\rangle$ turn out
to be eigenstates of the translation operator ${\bf T}$:
\begin{equation}
{\bf T} |k\rangle = {\bf T} {\bf U}^k |0\rangle = e^{i p_k} |k\rangle,
\label{eq:i6}
\end{equation}
where \cite{OYA97}
\begin{equation}
p_k = p_{s} + k q_3(M),
\label{eq:i7}
\end{equation}
and $q_3(M)\!\equiv\!\pi (1-2M)$. For $M=0$ the ground state $|0\rangle$ and
the new state $|1\rangle$ differ in their momenta by $\pi$ and are therefore
orthogonal to each other. \\
For $M=1/4$ one finds a fourfold degeneracy of the ground state with momenta
$p_k=p_{s} + k \pi/2,\; k=0,1,2,3 $.
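The origin of the momentum shift in (\ref{eq:i7}) is easily sketched (modulo
sign conventions for ${\bf T}$): shifting the site index in (\ref{eq:i1}) by
one unit gives
\begin{displaymath}
{\bf T}\,{\bf U}\,{\bf T}^{-1}={\bf U}\,
\exp\left(i\frac{2\pi}{N}S^{3}_{T}\right)\exp\left(-i2\pi S^{3}_{1}\right),
\end{displaymath}
and on a state with $S^{3}_{T}=NM$ and spin 1/2 per site the two phase factors
combine to ${\rm e}^{i2\pi M}\cdot(-1)={\rm e}^{-i\pi(1-2M)}$, so that each
application of ${\bf U}$ shifts the momentum by $q_{3}(M)$ (mod $2\pi$).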
It should be noted that the LSM construction makes it possible to identify the zero
frequency excitations (soft modes) in the model with Hamiltonian (\ref{eq:i4}).
Some of these soft modes induce characteristic signatures, e.g. zeroes in the
dispersion curve and singularities in the transverse and longitudinal structure
factors at the soft mode momenta $q=q_1(M)\equiv\pi,\; q=q_3(M)$, which can be
easily recognized even on rather small systems.\cite{KMS95} Following conformal
field theory the corresponding critical $\eta$-exponents can be determined from
the finite-size behavior of the dispersion curve at the soft mode
momenta.\cite{FGM+96} It is known that the $\eta$-exponents of the soft modes --
i.e. the $M$ dependence of $\eta_3(M),\eta_1(M)$ -- change,\cite{GFA+97} if
we add further couplings to the Hamiltonian (\ref{eq:i4}). In some cases (see
the discussion below) the soft mode might disappear completely and a gap opens
between the states, which were gapless before switching on the perturbation.
The following cases (A.-E.) have been studied so far.
\subsection{A transverse staggered field}\label{sec:S1q}
A gap was found \cite{FKM98a,FKM98b,OA97} in a transverse staggered field of
strength $h_{1} {\bf S}_1(\pi)$,
\begin{equation}
\label{eq:Hh3h1}
{\bf H}(h_{3},h_{1}) \equiv {\bf H}(h_{3})+2h_{1} {\bf S}_1(\pi),
\end{equation}
between the states which differ in their momenta by $\pi$. Indeed the operator
${\bf S}_1(\pi)$ is invariant only under translations ${\bf T}^2$ and the
eigenstates of the Hamiltonian are only eigenstates of ${\bf T}^2$.
In the free field case ($h_{3}=0$) the ${\bf T}^2$ quantum numbers of the ground
state $|0\rangle$ and of the LSM state $|1\rangle={\bf U} |0\rangle$ are the
same and the twofold degeneracy of the ground state is lifted by the explicit
breaking of translation invariance.
The fourfold degeneracy with momenta $p\!=\!0,\pi,\pm\!\pi/2$ which occurs at
$M=1/4$ and $h_{1}=0$ is lifted in the following manner. The states with $p=0$
and $p=\pi$ are even with respect to ${\bf T}^{2}$. The same holds for the
ground state $|0\rangle$, which is a linear combination of $p=0$ and $p=\pi$
components. A gap opens between the ground state and the second state that is even under ${\bf T}^{2}$.
The gap evolves with the strength $h_{1}$ of the perturbation as
$h_{1}^{\epsilon}$. The exponent $\epsilon=\epsilon_{1}(h_{3})$ is
given by the exponent $\eta_1(M)$
\begin{equation}
\epsilon_1(h_{3}) = 2[4-\eta_1(M(h_{3}))]^{-1},
\label{eq:i9}
\end{equation}
associated with the divergence of the transverse structure factor at $q=\pi$.
The LSM construction with the operator (\ref{eq:i1}) leads to a second state
$|1\rangle = {\bf U} |0 \rangle$ which is degenerate with the ground state
$|0\rangle$ and which is odd under ${\bf T}^2$. This state can be constructed as
a linear combination of momentum eigenstates with $p=\pm\pi/2$.
\subsection{A longitudinal periodic field}\label{sec:bS3q}
A longitudinal periodic field ${\bf \bar S}_3(q)$ of strength $2h_{q}$
\begin{equation}
\label{eq:100}
{\bf H}(h_{3},h_{q}) \equiv {\bf H}(h_{3})+2h_{q} {\bf \bar S}_3(q),
\end{equation}
with
\begin{equation}
{\bf \bar S}_3(q)\equiv[{\bf S}_3(q) + {\bf S}_3(-q)]/2,
\label{eq:i10}
\end{equation}
induces a plateau in the magnetization curve $M=M(h_{3})$ at
$M =(1-q/\pi)/2$, i.e. $q$ has to meet the soft mode momentum $q=q_3(M)$. The
difference of the upper and lower critical field:
\begin{equation}
\Delta({h_{q}},h_{3})\equiv
h^{u}_{3} - h^{l}_{3} \sim h_{q}^{\epsilon_3(h_{3})},
\end{equation}
evolves with an exponent, which is again related via (\ref{eq:i9}) to the
corresponding $\eta_3$-exponent and which can be extracted from the finite-size
behavior of the longitudinal structure factor \cite{FGM+96} at $q=q_3(M)$.
\subsection{A next-to-nearest-neighbor coupling}\label{sec:Oalpha}
A next-to-nearest-neighbor coupling
\begin{equation}
{\bf H}_{2} \equiv 2\sum_{l=1}^{N} {\bf S}_{l} \cdot{\bf S}_{l+2},
\label{eq:i12}
\end{equation}
added to Hamiltonian~(\ref{eq:i4}):
\begin{equation}
{\bf H}(h_{3},\alpha) \equiv {\bf H}(h_{3})+
\alpha {\bf H}_{2},
\label{eq:i13}
\end{equation}
does not change the position of the soft modes $q_1=\pi$ and $q_3(M)$ but
changes the associated $\eta_1(M,\alpha),\;\eta_3(M,\alpha)$
exponents.\cite{GFA+97} A singlet triplet gap opens in the free field case
$(h_{3}=0)$ for $\alpha\!>\!\alpha_c\!=\!0.241\ldots$.\cite{ON92} Note, however,
that (\ref{eq:i12}) is translation invariant and therefore the ground state
degeneracy with momenta $p=0,\pi$ -- predicted by the LSM construction -- still
holds, i.e. the singlet ground state is still twofold degenerate in the singlet
sector.
\subsection{A staggered dimer field}\label{sec:OalphaDpi}
A plateau in the magnetization curve at $M=1/4$ has been found in the
Hamiltonian (\ref{eq:i4}) with an additional next-to-nearest-neighbor coupling
and a staggered dimer field: \cite{TNK98,Tots98}
\begin{equation}
{\bf H}(h_{3},\alpha,\delta) \equiv {\bf H}(h_{3})+
\alpha {\bf H}_{2} + \delta {\bf D}(\pi).
\label{eq:d1}
\end{equation}
The dimer operator is defined as:
\begin{equation}
{\bf D}(q) \equiv 2\sum_{l=1}^{N} e^{iql} {\bf S}_{l} \cdot {\bf S}_{l+1}.
\label{eq:i14}
\end{equation}
Such a Hamiltonian only commutes with ${\bf T}^2$ and therefore reduces the
degeneracy of the ground state. At $M=0$ the twofold degeneracy of the ground
state is lifted and a gap opens between the energies of the ground state and the
excited states.\cite{FKM98a}
At $M=1/4$ a gap opens between the ground state $|0\rangle$ and one further
state, which is even under ${\bf T}^2$. The LSM construction yields a second
state $|1\rangle={\bf U}|0\rangle$ degenerate with the ground state $|0\rangle$,
which is odd under ${\bf T}^2$.
\subsection{A periodic dimer field}\label{sec:bDq}
A plateau in the magnetization curve at $M=1/6$ has been found \cite{Hida94}
for a Hamiltonian of the type (\ref{eq:i4}) with a dimer field ${\bf \bar D}(q)$
of period $q=2\pi/3$:
\begin{equation}
{\bf \bar D}(q) \equiv [{\bf D}(q)+{\bf D}(-q)]/2.
\label{eq:i14b}
\end{equation}
The Hamiltonian used in Ref.\onlinecite{Hida94} can be reformulated as a single
spin-1/2 chain with ferromagnetic nearest neighbor coupling, strongly perturbed
by an antiferromagnetic next-to-nearest neighbor coupling with a
period of $q=2\pi/3$.
Note that the periodicity $q=2\pi/3$ coincides with the soft mode momentum
$q_{3}(M=1/6)$. Such a coincidence occurs in both examples~\ref{sec:bS3q}
and~\ref{sec:bDq} and we conclude that the special type of the periodic
perturbation~(\ref{eq:i10}) and~(\ref{eq:i14b}) is not relevant for the
formation of a magnetization plateau.
The situation in example~\ref{sec:OalphaDpi} is different. Here the periodicity
($q=\pi$) of the staggered dimer field coincides with the {\it second soft mode}
$2q_{3}(M=1/4)=\pi$ [(\ref{eq:i7}) for $k=2$], predicted by the LSM
construction. Note, however, that the magnetization plateau at $M=1/4$ is only
visible if the parameters $\alpha$ and $\delta$ in~(\ref{eq:d1}) are
appropriately chosen. In particular the magnetization plateau at $M=1/4$ seems
to be absent if the next-to-nearest-neighbor coupling $\alpha$ is switched off.
Therefore, we conclude that the coincidence of the periodicity $q$ in the
perturbation operator with the momentum of one LSM-soft mode is a necessary --
but not sufficient -- condition for the formation of a plateau.
It is the purpose of this paper to investigate in more detail the mechanism for
the formation of gaps and magnetization plateaus by means of periodic
dimer-perturbations~(\ref{eq:i14b}) of strength $2\delta_{q}$. The
$\delta_{q}$-evolution of the energy eigenvalues and transition matrix elements
of the perturbation operator is given by a closed set of differential equations,
which we have discussed in Refs.\onlinecite{FKM98a,FKM98b}. The initial values for
these evolution equations are given by the energy eigenvalues and transition
amplitudes for the unperturbed case ($\delta_{q}=0$).
The outline of the paper is as follows. In Sec.~\ref{sec:LSM-construction} we
complete the discussion of the quantum numbers of the LSM state $|k\rangle$ by
investigating their ${\bf S}_{T}^{2}$ content, where ${\bf S}_{T}$ is the total
spin operator. We are in particular interested in the question, whether or not
the LSM state $|1\rangle$ at $M=0$ with momentum $p_{1}=p_{0}+\pi$ contains a
triplet ($S=1$) or higher spin component [${\bf S}_{T}^{2}=S(S+1)$].
Section~\ref{sec:sm-dimer} is devoted to an analysis of the LSM soft modes in
the dimer-dimer structure factor. This analysis is used to fix the above
mentioned initial conditions for the evolution equations. In
Sec.~\ref{sec:gaps-plateau} we then present numerical results on the formation
of gaps and plateaus by means of the periodic dimer perturbations.
The occurrence of magnetization plateaus in spin ladders is discussed in
Sec.~\ref{sec:spin-ladders}.
\section{The Lieb, Schultz, Mattis (LSM) construction and the quantum numbers
of the degenerate ground states}\label{sec:LSM-construction}
It has been pointed out in the introduction that the quantum number
analysis of the states $|k\rangle={\bf U}^k|0\rangle$ in the LSM
construction is crucial to decide whether these states are new, i.e.
orthogonal to the ground state $|0\rangle$, or not. The transformation
behavior (\ref{eq:i6}) under translations ${\bf T}$ yields the momenta
$p_k$ (\ref{eq:i7}) of these states.
The operator ${\bf U}$ (\ref{eq:i1}) obviously commutes with the total spin in
3-direction $S^3_T$. Therefore all the states $|k\rangle = {\bf U}^k |0\rangle$
have the same total spin $S^3_T$ in 3-direction. The Hamiltonian of
type~(\ref{eq:i13}) with isotropic couplings commutes with ${\bf S}^2_T$. One
might ask for the ${\bf S}_T^{2}$ content of the states $|k\rangle$. To answer
this question we compute the expectation value
\begin{eqnarray}
\langle k| {\bf S}^2_T |k \rangle - \langle 0|{\bf S}^2_T |0 \rangle &=&
\langle 0| {\bf U}^{\dagger k}{\bf S}^2_T {\bf U}^k |0\rangle -
\langle0|{\bf S}^2_T |0 \rangle \nonumber\\ && \hspace*{-2cm}
= 2 N [{S}_1(q=k2\pi/N,M)-{S}_1(0,M)],
\label{eq:lsm1}
\end{eqnarray}
using the considerations developed by LSM to show the vanishing of the energy
difference (\ref{eq:i3}). The right-hand side of (\ref{eq:lsm1}) is determined
by the transverse structure factor, exposed to an external field $h_{3}$ with
magnetization $M(h_{3})$:
\begin{equation}
{S}_1(q,M) \equiv \sum_{l=1}^{N} e^{iql}
\langle S,p_{s}|S^{1}_{1}S^{1}_{1+l}|S,p_{s} \rangle ,
\label{eq:lsm2}
\end{equation}
which has been studied on finite systems in Ref.~\onlinecite{KMS94} for the
nearest neighbor model (\ref{eq:i4}) and in Ref.~\onlinecite{SGMK96} for the
model with next-to-nearest-neighbor couplings $\alpha$.
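For completeness we sketch the elementary identity behind (\ref{eq:lsm1})
(with our phase conventions): since
${\bf U}^{\dagger k}\,S^{\pm}_{l}\,{\bf U}^{k}={\rm e}^{\pm i2\pi kl/N}\,S^{\pm}_{l}$,
the longitudinal part of ${\bf S}^{2}_{T}$ is left invariant, while the
transverse part picks up phase factors,
\begin{displaymath}
{\bf U}^{\dagger k}\,\frac{S^{+}_{T}S^{-}_{T}+S^{-}_{T}S^{+}_{T}}{2}\,{\bf U}^{k}
=\frac{1}{2}\sum_{l,m}{\rm e}^{\,i2\pi k(l-m)/N}
\left(S^{+}_{l}S^{-}_{m}+S^{-}_{m}S^{+}_{l}\right),
\end{displaymath}
whose ground state expectation value reduces, by translation invariance, to
$2N\,{S}_1(2\pi k/N,M)$.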
From the investigations in Refs.~\onlinecite{KMS94} and \onlinecite{SGMK96} we conclude for the thermodynamical limit of the
difference appearing on the right-hand side of (\ref{eq:lsm1}):
\begin{equation}
2N[{S}_1(k2\pi/N,M)\!-\! {S}_1(0,M)]
\stackrel{N\to\infty}{\longrightarrow}
A(M)\left(\frac{2\pi}{N}\right)^{\beta_{k}}.
\label{eq:lsm3}
\end{equation}
The exponent $\beta_{k}=\beta_{k}(M,\alpha)$ turns out to be zero for $M=0$ and
$\alpha=0$ [cf. Fig.~5(d) in Ref.~\onlinecite{KMS94}]. This means that the
right-hand side of (\ref{eq:lsm1}) is non-vanishing. For $M=0,\alpha=0$ the
ground state is a singlet state [${\bf S}^2_T = S(S+1)=0$] and (\ref{eq:lsm1})
tells us that the soft mode state $|k=1\rangle = {\bf U}|0\rangle$ with momentum
$\pi$ contains triplet $[{\bf S}^2_T=S(S+1)=2]$ and higher spin components, i.e.
the LSM construction together with (\ref{eq:lsm3}) and
$\beta_{k}(M=0,\alpha=0)=0$ forbids a singlet triplet gap.
The exponent $\beta_{k}(M,\alpha)$ is larger than zero for $M>0$ [cf. inset of
Fig.~5(b) in Ref.~\onlinecite{KMS95}]. In this case the right-hand side of
(\ref{eq:lsm1}) vanishes and the soft mode states $|k\rangle = {\bf U}^k
|0\rangle$ have the same total spin ${\bf S}^2_T = S(S+1)$ as the ground state.
Upon switching on the next-to-nearest-neighbor coupling $\alpha$, the free field
exponent must become larger than zero,
\begin{equation}
\beta_{1}(0,\alpha) > 0 \quad \mbox{for} \quad \alpha > \alpha_c=0.241\ldots,
\label{eq:lsm6}
\end{equation}
since the dimer phase $\alpha > \alpha_c$ is characterized by a singlet triplet
gap.\cite{ON92} In other words, for $\alpha > \alpha_c$ the degenerate LSM state
$|1\rangle = {\bf U}|0\rangle$ with momentum $p_{s}+\pi$ must be a pure singlet
state as well. The exponent $\beta_{1}(0,1/2)$ can easily be calculated at the
Majumdar-Ghosh \cite{MG69,MG69b} point $\alpha=1/2$. Here we find
$\beta_{1}(0,1/2)=1$.
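This value can be recovered by elementary means (a sketch, assuming the
translation-invariant combination of the two dimer coverings and neglecting
their exponentially small overlap): the transverse correlations then vanish
beyond one lattice spacing, with $\langle S^{1}_{1}S^{1}_{1}\rangle=1/4$ and
$\langle S^{1}_{1}S^{1}_{2}\rangle=-1/8$, so that
\begin{displaymath}
{S}_1(q,0)=\frac{1}{4}\,(1-\cos q),
\end{displaymath}
and
\begin{displaymath}
2N\left[{S}_1(2\pi/N,0)-{S}_1(0,0)\right]
=\frac{N}{2}\left(1-\cos\frac{2\pi}{N}\right)
\stackrel{N\to\infty}{\longrightarrow}\frac{\pi}{2}\,\frac{2\pi}{N},
\end{displaymath}
i.e. $A(0)=\pi/2$ and $\beta_{1}(0,1/2)=1$ in (\ref{eq:lsm3}).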
In Fig.~\ref{fig:plot2a} we have plotted the energy difference $\omega_{\pi}
= E_1 - E_0$ of the two singlet states $|0\rangle$ and
$|1\rangle$
\begin{figure}[ht]
\centerline{\epsfig{file=fig_w_M0_a.ps,width=6.0cm,angle=-90}}
\caption{The energy differences~(\ref{eq:lsm7}) for system sizes
$N=8,10,\ldots,16$ and $S=0$.}
\label{fig:plot2a}
\end{figure}
\begin{equation}
\omega_{\pi}(S=0,\alpha) = E(p_{s}+\pi,S=0,\alpha) - E(p_{s},S=0,\alpha),
\label{eq:lsm7}
\end{equation}
versus the coupling $\alpha$. This is an oscillating function for $\alpha > 0.5$
with zeroes at:
\begin{equation}
\alpha = \alpha_1(N) < \alpha_2(N) < ... < \alpha_Z(N),
\label{eq:lsm8}
\end{equation}
where numerical data suggest that the total number of zeroes is given by
\begin{equation}
Z = \frac{1}{4}
\begin{cases}
N &: N=8,12,16,\ldots \\
N-2 &: N=10,14,18,\ldots .
\end{cases}
\label{eq:lsm9}
\end{equation}
In the thermodynamical limit we have a dense distribution of zeroes. The height
of the maxima and minima in between converges to zero -- a signal for the
degeneracy of the two states $|0\rangle$ and $|1\rangle = {\bf U} |0\rangle$ in
the thermodynamical limit as predicted by the LSM construction.
\begin{figure}[ht]
\centerline{\epsfig{file=fig_w_M1d4_a.ps,width=6.0cm,angle=-90}}
\caption{The energy differences~(\ref{eq:lsm10}) for system sizes
$N=8,12,16$ and momentum $p=\pi,\pi/2$ and $S=N/4$. Same line types denote the
same system sizes.}
\label{fig:plot2b}
\end{figure}
As was pointed out in the introduction, a fourfold degeneracy of the states
$|k\rangle = {\bf U}^k|0\rangle,\; (k=0,1,2,3) $ is predicted at $M=1/4$. All
these states have the same total spin squared ($S=N/4$) and momenta $p=0,\pm
\pi/2,\pi$. On finite systems one again observes oscillations in the energy
differences
\begin{equation}
\omega_p(S,\alpha) \equiv E(p,S=N/4,\alpha) - E(0,S=N/4,\alpha),
\label{eq:lsm10}
\end{equation}
for $p=\pi/2,\pi$, if we switch on the next-to-nearest-neighbor coupling
$\alpha$, as is shown in Fig.~\ref{fig:plot2b}. For $\alpha$ values large enough
these oscillations die out, and the momentum $p_{s}$ of the $S=N/4$ ground state
is found to be
\begin{equation}
p_{s}(1/4)=\frac{\pi}{2}
\begin{cases}
2 &: N=8,24,40,\ldots \\
1 &: N=12,28,44,\ldots \\
0 &: N=16,32,48,\ldots \\
1 &: N=20,36,52,\ldots.
\end{cases}
\label{eq:lsm11}
\end{equation}
\section{Soft modes in the dimer-dimer structure factor}\label{sec:sm-dimer}
According to the LSM construction for the translation invariant models with
nearest and next-to-nearest-neighbor couplings (\ref{eq:i13}), ${\bf
H}(h_{3},\alpha)$, we expect the dispersion curve
\begin{equation}
\omega_{q}(S,\alpha) \equiv E(p_{s}+q,S,\alpha) - E(p_{s},S,\alpha),
\label{eq:31}
\end{equation}
to develop zeroes
\begin{equation}
\omega^{(k)}(h_{3},\alpha)\equiv \omega_{q}(S,\alpha),
\label{eq:32}
\end{equation}
at the soft mode momenta $q=q^{(k)}(M) \equiv k \pi (1-2M)$. If in the
thermodynamical limit the scaled energy differences:
\begin{equation}
\hat\Omega^{(k)}(M,\alpha) \equiv \lim_{N\to\infty} N\omega^{(k)}(h_{3},\alpha).
\label{eq:32b}
\end{equation}
and
\begin{equation}
v(M,\alpha) = \lim_{N \to \infty} N
[E(p_{s}\!+\!2\pi/N,S,\alpha) - E(p_{s},S,\alpha)],
\label{eq:34}
\end{equation}
are finite and non-vanishing, the ratios
\begin{equation}
\eta^{(k)} (M,\alpha) \equiv \
\frac{\hat\Omega^{(k)} (M,\alpha)}{\pi v(M,\alpha)},
\label{eq:33}
\end{equation}
yield the exponents $\eta^{(k)}(M,\alpha)$, which govern the critical behavior
of the dimer-dimer structure factor:
\begin{equation}
S_{DD} (q,M) = \frac{1}{N}
\langle p_{s},S|{\bf D}_c(q) {\bf D}_c^\dag (q) |p_{s},S \rangle.
\label{eq:35}
\end{equation}
Here
\begin{displaymath}
{\bf D}_c(q) \equiv {\bf D}(q) - \delta_{q0} {\bf D}(0)
\end{displaymath}
is the connected part of the dimer operator~(\ref{eq:i14}).
For $M=0$ a zero should occur at $q=\pi$ and for $M=1/4$ we should find two
zeroes at $q=\pi/2$ and $q=\pi$. This is indeed the case as can be seen from
Fig.~\ref{fig:disp} where we have plotted the dispersion curves for
$M=1/4$, $\alpha=0,1/4$.
\begin{figure}[ht]
\centerline{\epsfig{file=fig_disp.ps,width=6.0cm,angle=-90}}
\caption{The dispersion curve of the Hamiltonian~(\ref{eq:i13}) for finite
system ($N=24,28$) and magnetization $M=1/4$.}
\label{fig:disp}
\end{figure}
Note that the dimer operator commutes with the total spin ${\bf S}^2_T$, and the
dispersion curve~(\ref{eq:31}) therefore describes the lowest lying excitations
contributing to $S_{DD}(q,M,\alpha)$.
The $q$-dependence of the dimer-dimer structure factor~(\ref{eq:35}) at $M=1/4$
and $\alpha=0,1/4$ is shown in Fig.~\ref{fig:SDD_q}. A pronounced peak is found
at the first soft mode $q=\pi(1-2M)=\pi/2$. The finite-size behavior of the
structure factor at the first soft mode
\begin{equation}
S_{DD} (q_{3}(M),M,\alpha) \sim N^{1-\eta_{3}(M,\alpha)}
\label{eq:37}
\end{equation}
is shown for $\alpha=0,1/4$ and $M=1/4$ in the inset of Fig.~\ref{fig:SDD_q}.
It is well described by an exponent
\begin{equation}\label{eq:eta_a_M1d4}
\eta^{(1)}(1/4,\alpha) = \eta_{3}(1/4,\alpha) =
\begin{cases}
1.53 &: \alpha=0\\
0.72 &: \alpha=1/4.
\end{cases}
\end{equation}
identical with the exponent $\eta_{3}(M,\alpha)$ in the longitudinal structure
factor. The latter has been calculated exactly in the model with nearest
neighbor couplings ($\alpha=0$) by means of Bethe ansatz solutions for the
energy eigenvalues, which enter in the differences~(\ref{eq:31})
and~(\ref{eq:34}). The resulting curve $\eta_{3}(M,\alpha= 0)$ is shown in
Fig.~2 of Ref.\onlinecite{FGM+96}. The $\alpha$-dependence of
$\eta_{3}(1/4,\alpha)$ has been calculated on small systems $(N\leq 28)$ and can
be seen in Fig.~4(b) of Ref.\onlinecite{GFA+97}.
According to its definition~(\ref{eq:35}), the dimer-dimer structure factor can
be written as:
\begin{equation}
S_{DD} (q,M,\alpha) \equiv \frac{1}{N}
\sum_{n} \left| \langle n | {\bf D}_{c}(q) | 0 \rangle \right|^{2}
\label{eq:35b}
\end{equation}
in terms of transition amplitudes from the ground state $|0\rangle =|
p_{s},S\rangle$ to excited states $|n\rangle$ with momenta $p_{n}=p_{s}+q$ and
total spin $S$. The peak in Fig.~\ref{fig:SDD_q} tells us that the transition
matrix elements at the first soft mode $q=\pi(1-2M)$
\begin{equation}
\label{eq:13}
\langle n|{\bf D}(\pi(1-2M))|0\rangle
\stackrel{N\to\infty}{\longrightarrow}
N^{\kappa_{3}},
\end{equation}
with $\kappa_{3}=1-\eta_{3}(M(h_{3}),\alpha)/2$, diverge with the
system size.
\begin{figure}[ht]
\centerline{\epsfig{file=fig_SDD_q.ps,width=6cm,angle=-90}}
\caption{The dimer-dimer structure factor~(\ref{eq:35b}) at $M=1/4$ for
$N=12,16,\ldots,28$. The insets show the $N$-dependence of the structure
factor $S_{DD}$ at $q=\pi/2$, versus $N^{1-\eta_{3}}$, with exponent
$\eta_{3}$ given by Eq.~(\ref{eq:eta_a_M1d4}). For $\alpha=1/4$ the structure
factor diverges and for $\alpha=0$ it is finite. The extrapolated value
($N\to\infty$) is marked with a solid symbol ($\blacksquare$). }
\label{fig:SDD_q}
\end{figure}
The fact that there is no peak at the second soft mode indicates that the
corresponding transition matrix elements $\langle n | \bar {\bf D}(q=2\pi(1-2M))
| 0 \rangle$ are small -- at least for next-to-nearest-neighbor couplings
$\alpha\leq 1/4$. The magnitude of these transition matrix elements is crucial
for the formation of gaps and magnetization plateaus with a periodic
perturbation $\bar {\bf D}(q)$.
\section{Periodic dimer perturbations and the formation of gaps and plateaus}
\label{sec:gaps-plateau}
In this section we will study the impact of a periodic dimer perturbation
${\bf \bar D}(q)$~(\ref{eq:i14b}):
\begin{equation}
{\bf H}(h_{3},\alpha,\delta_{q}) \equiv
{\bf H}(h_{3},\alpha)+ 2\delta_{q} {\bf \bar D}(q),
\label{eq:Hh3deltaq}
\end{equation}
on the soft modes. We will restrict ourselves to the model with nearest neighbor
and next-to-nearest-neighbor couplings and follow the procedure of
Refs.~\onlinecite{FKM98a,FKM98b}. There it was shown that the
$\delta_{q}$-evolution of the energy eigenvalues and transition matrix elements
$\langle n | \bar{\bf D}(q)|0\rangle$ is described by a closed set of
differential equations which possess finite size scaling solutions in the limit
$N \to \infty$, $\delta_{q} \to 0$, $x=\delta_{q}^{\epsilon} N $ fixed.
This statement holds if the periodicity $q$ of the dimer perturbation coincides
with a soft mode momentum:
\begin{equation}
q = q^{(k)}(M) = k \pi (1-2M).
\label{eq:42}
\end{equation}
Then a gap in the energy difference (\ref{eq:31}) between the ground state and
lowest state which can be reached with the perturbation $\bar{\bf D}(q)$ at
$q^{(k)}(M)$ is predicted:
\begin{equation}
\omega^{(k)} (h_{3},\alpha,\delta_{q}) =
\delta_{q}^{\epsilon^{(k)}} \Omega^{(k)} (M,\alpha,x).
\label{eq:43}
\end{equation}
The gap opens with an exponent $\epsilon^{(k)}=\epsilon^{(k)}(h_{3},\alpha)$,
related to the corresponding $\eta$-exponent (\ref{eq:33}) via
\begin{equation}
\epsilon^{(k)}(h_{3},\alpha) = 2[4-\eta^{(k)}(M(h_{3}),\alpha)]^{-1}.
\label{eq:44}
\end{equation}
A test of the prediction (\ref{eq:43}) is given in Fig.~\ref{fig:plot4a}, where
we plotted the gap ratio
\begin{equation}
\frac{\omega^{(1)}(h_{3},\alpha,\delta_{q})}{\omega^{(1)}(h_{3},\alpha,0)}
= 1 + e^{(1)}(h_{3},\alpha,x),
\label{eq:45}
\end{equation}
for $M=1/4$, i.e. $q=\pi/2$ and $\alpha=0,1/4$ versus the scaling variable
$x^{2/\epsilon}$ with the exponents
\begin{eqnarray}\epsilon = \epsilon^{(1)} (1/4,\alpha) =
\begin{cases}
0.81(1)&: \alpha=0 \\
0.64(3)&: \alpha=1/4
\end{cases}
\label{eq:46}
\end{eqnarray}
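As a simple consistency check, inserting the exponents (\ref{eq:eta_a_M1d4})
into relation (\ref{eq:44}) gives
\begin{displaymath}
\epsilon^{(1)}(1/4,0)=\frac{2}{4-1.53}\simeq 0.81,\qquad
\epsilon^{(1)}(1/4,1/4)=\frac{2}{4-0.72}\simeq 0.61,
\end{displaymath}
in agreement with the fitted values (\ref{eq:46}) within the quoted errors.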
\begin{figure}[htb]
\centerline{\epsfig{file=fig_skal_one.ps,width=6.0cm,angle=-90}}
\caption{The gap ratio~(\ref{eq:45}) versus the scaling variable
$(N\delta^{\epsilon})^{2/\epsilon-1}$, with $\epsilon$ given by
Eq.~(\ref{eq:46}). The solid lines represent linear fits to the small
$x$-behavior.}
\label{fig:plot4a}
\end{figure}
Note that the gap ratio (\ref{eq:45}) is linear in $x^{2/\epsilon}$ for small
values of $x$, as predicted by the evolution equations for the scaling
functions.\cite{FKM98a,FKM98b}
Let us next study the influence of the periodic perturbation~(\ref{eq:i14b}) on
the magnetization curve $M=M(h_{3})$. A plateau in the magnetization curve with
an upper and lower critical field $h_{3}^{u}$ and $h^{l}_{3}$ will emerge if
\begin{eqnarray}
\Delta({\delta_{q}},h_{3})
\equiv h_{3}^{u}-h_{3}^{l} &=& \lim_{N \to \infty} \left[
E(p_{s\!+\!1},S+1,\alpha,\delta_{q})\right. \nonumber \\ && \hspace{-1.5cm}
\left. \hspace{-1cm} + E(p_{s\!-\!1},S-1,\alpha,\delta_{q}) - 2
E(p_{s},S,\alpha,\delta_{q})
\right],
\label{eq:47}
\end{eqnarray}
does not vanish in the thermodynamical limit, i.e. if the ground state energies
$E(p_{s'},S',\alpha,\delta_{q})$ for $S'=S-1,S,S+1$ evolve in a different manner
under the perturbation $\delta_{q}$. This happens exactly if the periodicity $q$ of
the dimer perturbation coincides with a soft mode momentum (\ref{eq:42}). Then
the ground state energy at the 'critical' magnetization $M=S_{T}^{3}/N$ is
lowered more strongly than at the neighboring magnetizations $M=(S_{T}^{3}\pm1)/N$.
In Fig.~\ref{fig:fig_plateau_a0.25} we show the $\delta_{q}$-evolution of the
magnetization curves for $q=\pi/2$ and $\alpha=1/4$. The emergence of the
predicted plateau at $M=1/4$ is clearly visible.
\begin{figure}[ht]
\vspace{5cm}
\centerline{\hspace{4cm}\epsfig{file=fig_plateau_a0.25_deltaq.ps,width=4cm}}
\caption{The magnetization curve of ${\bf
H}(h_{3},\alpha,\delta_{q})$ for $\alpha=0.25$ and different
$\delta_{q}$-values, determined from finite system sizes $N=8,12,16,20$. }
\label{fig:fig_plateau_a0.25}
\end{figure}
The scaling behavior $\delta_{q}^{\epsilon}$ of the difference (\ref{eq:47}), with
$\epsilon=\epsilon^{(1)}(h_{3},\alpha)$, is governed by the critical exponent
$\eta^{(1)}(h_{3},\alpha)$ of the unperturbed model at the first soft mode
$q_{3}(M)$, evaluated at $h_{3}=h_{3}(M=1/4)$, as can be seen in
Fig.~\ref{fig:fig_hu-hl_m1d4}.
We also looked for the $\delta$-evolution of the magnetization curves for a
dimer perturbation with periodicity $q=\pi$. There is a plateau for $M=0$ --
corresponding to the gap above the ground state discussed in
Ref.~\onlinecite{GFA+97}. However, no plateau is visible
at $M=1/4$ for small perturbations $\delta {\bf D}(\pi)$. This corresponds to
the observation that the second soft mode at $M=1/4$, $q=\pi$, does not produce
any signature in the dimer-dimer structure factor.
These statements only hold for small perturbations $\delta {\bf D}(\pi)$. It is
indeed known from Refs.~\onlinecite{TNK98,Tots98} that a plateau at $M=1/4$ can
be enforced with a large perturbation of order $1$.
\begin{figure}[htb]
\centerline{\epsfig{file=fig_hu-hl_deltaq_m1d4.ps,width=6cm,angle=-90}}
\caption{The evolution of the difference~(\ref{eq:47}) between the upper and
lower critical field at the plateau $M=1/4$. The dotted and dashed lines show a
fit, proportional to $\delta_{q}^{\epsilon}$ with $\epsilon=\epsilon^{(1)}$
given by Eq.~(\ref{eq:46}), to small values of the perturbation strength
$\delta_{q}$.}
\label{fig:fig_hu-hl_m1d4}
\end{figure}
\begin{figure}[htb]
\vspace{5cm}
\centerline{\hspace{4cm}\epsfig{file=fig_plateau_2.ps,width=4cm}}
\caption{The magnetization curve of Hamiltonian~(\ref{eq:Hh3deltaq}) with a
perturbation $\bar {\bf D}(2\pi/3) + \bar {\bf D}(\pi/3)$ for $\alpha=0$
and different $\delta_{q}$-values, determined from finite system sizes
$N=6,12,18$.}
\label{fig:fig_plateau_2}
\end{figure}
We have computed magnetization curves for the Hamiltonian~(\ref{eq:Hh3deltaq})
with perturbations $\bar {\bf D}(q)$ of period $q=2\pi/3$ and $q=\pi/3$. We
found clear evidence for the expected magnetization plateaus at $M=1/6$ and
$M=1/3$. The magnetization curve for a Hamiltonian with a superposition of both
perturbations
\begin{equation}
\label{eq:11}
\bar{\bf D}(2\pi/3) + \bar{\bf D}(\pi/3),
\end{equation}
is shown in Fig.~\ref{fig:fig_plateau_2}. Here, we find two plateaus in the
magnetization curve at $M=1/6$ and $M=1/3$.
\section{Magnetization plateaus in spin ladders}\label{sec:spin-ladders}
All the considerations we made so far for the 1D
Hamiltonian~(\ref{eq:Hh3deltaq}) with nearest and next-to-nearest-neighbor
couplings can be extended to the case
\begin{equation}
\label{eq:H_h3_al}
{\bf H}(h_{3},\alpha_{l}) \equiv {\bf H}(h_{3}) + \alpha_{l} {\bf H}_{l},
\end{equation}
where we substitute the next-to-nearest-neighbor coupling by couplings over $l$
lattice spacings
\begin{equation}
\label{eq:Hl}
{\bf H}_{l} \equiv 2 \sum_{n=1}^{N} {\bf S}_{n} \cdot {\bf S}_{n+l}.
\end{equation}
For $l$ finite the position $q_{3}(M)$ of the first soft mode --
generated by the LSM construction -- will not change, but the corresponding
$\eta$-exponent might. According to our experience with the case
$l=2$, we expect that a slightly frustrating coupling enhances the singularity
in the dimer dimer structure factor at $q=q_{3}(M)$. Hamiltonians of the
type~(\ref{eq:H_h3_al}) are interesting, since they can be viewed as a spin
ladder system with $l$ legs, as is shown in Fig.~\ref{fig:l_ladder}.
\begin{figure}[ht]
\centerline{\epsfig{file=l_ladder.ps,width=7.5cm}}
\caption{A spin ladder system described by Hamiltonian~(\ref{eq:H_h3_al})
with $l$ legs, and additional diagonal couplings (dashed lines). The coupling
strength between the legs is one unit.}
\label{fig:l_ladder}
\end{figure}
\clearpage
They differ, however, from usual spin ladder systems with $l$ legs, owing to the
diagonal (dashed) couplings [($l\!\leftrightarrow\! l+1),(2l \!\leftrightarrow\!
2l+1),\ldots$], which are induced by the helical boundary conditions.
Indeed, these additional couplings change the physical properties. Spin ladder
systems with helical boundary conditions are gapless -- irrespective of the
number of legs. This statement holds if the couplings $\alpha_{l}$ along the
legs are chosen properly, e.g. the two leg system with $l=2$ is gapless for
$\alpha\le\alpha_{c}=0.241\ldots.$ Conventional ladder systems are known to be
gapless for an odd number of legs, but to be gapped for an even number of legs.
This fundamental difference becomes clear when we add a
special dimer field to~(\ref{eq:H_h3_al})
\begin{equation}
\label{eq:dimer_l}
{\bf D}^{(l)} \equiv \sum_{n=1}^{N}J_{n}^{(l)} {\bf S}_{n} \cdot {\bf S}_{n+1},
\end{equation}
which only affects the diagonal couplings [($l\leftrightarrow
l+1),(2l\leftrightarrow 2l+1),\ldots$], in Fig.~\ref{fig:l_ladder}:
\begin{eqnarray}
\label{eq:2}
J_{n}^{(l)} &=&
\begin{cases}
0 &: n=1,\ldots,l-1 \\ \delta &: n=l,
\end{cases} \\
\label{eq:14}
J_{n+l}^{(l)} &=& J_{n}^{(l)}.
\end{eqnarray}
The periodicity~(\ref{eq:14}) of the couplings $J_{n}^{(l)}$ is given by a
Fourier series
\begin{equation}
\label{eq:1}
J_{n}^{(l)} = \sum_{j=0}^{[l/2]} \cos(2\pi nj/l) \delta(q=2\pi j/l),
\end{equation}
where $[l/2]$ denotes the integer part of $l/2$. The Fourier coefficients
$\delta(q)$ follow from~(\ref{eq:2}), e.g. for $l=2$ we find:
\begin{equation}
\label{eq:3}
\delta(q=0) = \delta(q=\pi) = \delta/2.
\end{equation}
The dimer perturbation:
\begin{equation}
\label{eq:4}
\delta{\bf D}^{(2)} = \frac{\delta}{2}[{\bf D}(0)+{\bf D}(\pi)]
\end{equation}
generates a plateau at $M=(1-q/\pi)/2=0$. Therefore, the gap -- typical for the
two leg ladder -- appears immediately if we switch on the dimer
field~(\ref{eq:dimer_l}), which breaks the translation invariance of the 1D
system.
Let us next consider the three leg ladder ($l=3$). The Fourier coefficients turn
out to be
\begin{equation}
\label{eq:5}
\delta(q=0)=\frac{c_{3}}{1+c_{3}}\delta,\quad
\delta(q=2\pi/3)=\frac{1}{1+c_{3}}\delta,
\end{equation}
with $c_{3}=\cos(\pi/3)$. The $q=2\pi/3$ component in the dimer
field~(\ref{eq:dimer_l}):
\begin{equation}
\label{eq:6}
\delta{\bf D}^{(3)} = \delta(q=0){\bf D}(0)
+\delta(q=2\pi/3){\bf D}(2\pi/3),
\end{equation}
generates a plateau at $M=1/6$. This is exactly the plateau found in
Ref.~\onlinecite{CHP97}.
Note that the Fourier decomposition of $\delta {\bf D}^{(l)}$ contains in
general a $q=\pi$ component for even $l$ and no $q=\pi$ component for odd
$l$. We therefore find a gap at $M=0$ for ladders with an even number of legs but
no gap for ladders with an odd number of legs.
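The coefficients $\delta(q)$ for arbitrary $l$ can be obtained by inverting
the cosine series~(\ref{eq:1}) numerically. The following minimal Python sketch
(the function name and the least-squares inversion are our own illustrative
choices, not part of the original analysis) reproduces (\ref{eq:3}) and
(\ref{eq:5}) and lists the plateau positions $M=(1-q/\pi)/2$ associated with
the $q>0$ components:
\begin{verbatim}
import numpy as np

def dimer_fourier_coefficients(l, delta=1.0):
    # Invert the cosine series J_n = sum_j cos(2*pi*n*j/l) * delta_j,
    # with J_n = delta for n = 0 (mod l) and J_n = 0 otherwise.
    js = np.arange(l // 2 + 1)
    n = np.arange(1, l + 1)
    A = np.cos(2.0 * np.pi * np.outer(n, js) / l)
    b = np.where(n % l == 0, delta, 0.0)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return [(2.0 * np.pi * j / l, cj) for j, cj in zip(js, c)]

for l in (2, 3, 4, 5):
    for q, c in dimer_fourier_coefficients(l):
        if q > 0.0 and abs(c) > 1e-12:
            print(l, q, c, "-> plateau at M =", (1.0 - q / np.pi) / 2.0)
\end{verbatim}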
In summary, the Fourier decomposition of the dimer
field~(\ref{eq:dimer_l})
\begin{equation}
\label{eq:7}
\delta {\bf D}^{(l)} = \sum_{j=0}^{[l/2]} \delta(q=2\pi j/l)
{\bf D}(q=2\pi j/l),
\end{equation}
tells us, where to expect plateaus in the magnetization curve of a spin ladder
system with $l$ legs. The position of plateaus can be seen in
Table~\ref{tab:legs}.
The number of plateaus increases with the number of legs and one is tempted to
suggest that in the two dimensional limit $l\to\infty$, the magnetization curve
is again a continuous function. It should be noted, however, that the
LSM-construction with the operator~(\ref{eq:i1}) breaks down in the combined
limit $N\to\infty,\;k=\sqrt{N}$ in the sense that~(\ref{eq:i3}) does not hold.
The magnetic properties of the two dimensional Heisenberg model with helical
boundary conditions at $T=0$ have been studied in Ref.~\onlinecite{YM97}.
Concerning the isotropic model with nearest neighbor coupling, there is no
indication for a plateau in the magnetization curve.
Finally, let us mention that there is a second way to map ladder systems with $l$
legs onto one-dimensional systems with far-reaching couplings:
\begin{equation}
\label{eq:8}
{\bf H}(h_{3},\tau_{l}) \equiv {\bf H}(h_{3}) + \tau_{l} {\bf H}_{N/l}.
\end{equation}
The couplings for the $l$ leg system are shown in Fig.~\ref{fig:tau_ladder},
\begin{figure}[ht]
\centerline{\epsfig{file=tau_ladder.ps,width=7cm}}
\caption{Spin ladder system with $l$ legs described by
Hamiltonian ${\bf H}(h_{3},\tau_{l})$ [Eq.~(\ref{eq:8})]. }
\label{fig:tau_ladder}
\end{figure}
and for the two leg system in Fig.~\ref{fig:tau_2_ladder},
\begin{figure}[ht]
\centerline{\epsfig{file=tau_2_ladder.ps,width=7cm}}
\caption{Spin ladder system with two legs described by
Hamiltonian ${\bf H}(h_{3},\tau_{2})$.}
\label{fig:tau_2_ladder}
\end{figure}
The latter differs from the conventional two leg system (with periodic
boundary conditions) only by a twist at the boundary, which should not change
the physical properties in the thermodynamical limit. Therefore, we expect a gap
in this system. The appearance of a gap in the systems with an even number $l$
of legs originates from the second term in the Hamiltonian~(\ref{eq:8}). If we
repeat the calculation of the expectation values~(\ref{eq:i3}) for the
Hamiltonian~(\ref{eq:8}) we find:
\begin{eqnarray}
\label{eq:9}
\langle k | {\bf H} | k \rangle - \langle 0 | {\bf H} | 0 \rangle &=&
O(N^{-1}) \nonumber\\ && \hspace{-3cm} +
2\tau_{l}\sum_{n=1}^{N} \left(
\langle 0 | {\bf U}^{\dagger k}\, {\bf S}_{n}\cdot{\bf S}_{n+N/l}\, {\bf U}^{k}
- {\bf S}_{n}\cdot{\bf S}_{n+N/l} | 0 \rangle
\right) = \nonumber \\ && \hspace{-3cm}
\sum_{n=1}^{N} f_{l}^{k}\,\langle 0 | {\bf S}_{n}^{+}{\bf S}^{-}_{n+N/l} +
{\bf S}_{n}^{-}{\bf S}^{+}_{n+N/l} | 0 \rangle+O(N^{-1}).
\end{eqnarray}
The coefficient $f_{l}^{k}=[\cos(2\pi k/l)-1]\tau_{l}$ does not vanish unless
\begin{equation}
\label{eq:10}
k=l,2l,\ldots.
\end{equation}
This means in particular that the LSM-operators ${\bf U}^{k}$, $k=1,\ldots,l-1$
do not create states which are degenerate with the ground state in the
thermodynamical limit.
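For instance, for the two-leg system ($l=2$, $k=1$) one has
$f_{2}^{1}=[\cos\pi -1]\,\tau_{2}=-2\tau_{2}\neq 0$, so that the energy
difference~(\ref{eq:9}) does not vanish in general and already
$|1\rangle={\bf U}|0\rangle$ fails to be degenerate with the ground state.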
Those situations where the ground state degeneracy is lifted completely are of
special interest. They occur if the momenta of the states $|l\rangle={\bf
U}^{l}|0\rangle$ [$p_{l}=l\pi(1-2M)+p_{s}$] and of the ground state
$|0\rangle$ ($p_{s}$) differ by a multiple of $2\pi$, i.e. for
\begin{equation}
\label{eq:12}
\frac{l}{2}(1-2M)\in \Bbb{Z}.
\end{equation}
The condition~(\ref{eq:12}) is satisfied exactly for the $l$ and $M$ values
listed in Table~\ref{tab:legs}, i.e. for those values where we expect a plateau
in the magnetization curve.
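For instance, $l=2$, $M=0$ gives $\frac{l}{2}(1-2M)=1$; $l=3$, $M=1/6$ gives
$\frac{3}{2}\cdot\frac{2}{3}=1$; and $l=4$, $M=1/4$ gives
$2\cdot\frac{1}{2}=1$.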
\section{Conclusion and Perspectives}\label{sec:conclusion}
In this paper we tried to elucidate the mechanism which generates gaps and
plateaus in spin-$1/2$ antiferromagnetic Heisenberg models with nearest and
next-to-nearest-neighbor couplings of strength $\alpha$. A priori these models have
no gap in the presence of a homogeneous field $h_{3} > h_{3}^{c}(\alpha)$ above
the critical field $ h_{3}^{c}(\alpha)$, which is needed to surmount the singlet
triplet gap for $\alpha > \alpha_c = 0.241\ldots$. The LSM construction
predicts the existence of soft modes (zero energy excitations) at wave vectors
$q^{(k)}(M)=k\pi(1-2M),\; k=1,2,3,\ldots$.
It is shown in Sec.~\ref{sec:LSM-construction} that the total spin squared ${\bf
S}^2_T=S(S+1)$ of the lowest excited states at the soft mode momenta is the
same as that of the ground state for $S=M N$. The soft modes are therefore
expected to generate signatures in the dimer-dimer structure factor, since the
dimer operator does not change the total spin squared.
Indeed, for $M=1/4$ a pronounced peak is seen at the first soft mode
$q=q^{(1)}(1/4)=\pi/2$ if $\alpha \le 1/4$, indicating a large transition matrix
element $\langle 1|\bar {\bf D}(\pi/2)|0\rangle $ between the ground state -- with
momentum $p_{s}$ -- and the first excited state with momentum
$p_{s}+\pi/2$.
The second soft mode $q=q^{(2)}(1/4)=\pi$, however, does not produce any visible
structure in the dimer-dimer structure factor (for $\alpha \le 0.25$). Here the
relevant transition matrix elements $\langle 1|{\bf D}(\pi)|0\rangle $ between
the ground state $|0\rangle$ and the first excited state with momentum
$p_{s}+\pi$ are small. The situation is different for large
next-to-nearest-neighbor couplings $\alpha$.
The magnitude of the transition matrix elements $\langle 1|\bar {\bf
D}(q)|0\rangle$ is crucial for the efficiency of the mechanism to generate a
plateau in the magnetization curve at a rational value of $M$ by means of a
periodic perturbation $\delta_{q}{\bf \bar D}(q)$. According to the criterion
of Oshikawa, Yamanaka and Affleck\cite{OYA97} a plateau at $M$ is possible if
$q$ meets one of the soft modes $q=q^{(k)}(M),\; k=1,2,3,\ldots$.
Our numerical analysis shows that the width of the plateau -- i.e. the
difference of the upper and lower critical field (\ref{eq:47}) -- depends on the
magnitude of the transition matrix element in the unperturbed model
$(\delta_{q}=0)$. Indeed these matrix elements enter as initial conditions in
the differential equations [(2.4),(2.5)] and [(2.2),(2.3)] in
Refs.~\onlinecite{FKM98a,FKM98b}, which describe the evolution of gaps and
plateaus under the influence of a periodic perturbation $\delta_{q}{\bf \bar
D}(q)$. We have also demonstrated that a superposition of two periodic
perturbations [$\bar {\bf D}(2\pi/3) +\bar {\bf D}(\pi/3) $] generates two
plateaus in the magnetization curve exactly at those magnetization values
($M=1/6,1/3$) where the period ($q=2\pi/3,\pi/3$) in the perturbation coincides
with the first soft mode momentum $q=q_{3}(M)$.
Ladder systems with $l$ legs [cf. Fig.~\ref{fig:l_ladder}] can be interpreted as
one-dimensional systems with additional couplings over $l$ lattice spacings and
a dimerized perturbation~(\ref{eq:dimer_l}) and~(\ref{eq:2}). The latter breaks
translation invariance of the 1D system and the Fourier analysis~(\ref{eq:7}) of
the dimerized perturbation~(\ref{eq:dimer_l}) reveals the occurrence of
magnetization plateaus [cf. Tab.~\ref{tab:legs}] in spin ladder systems with $l$
legs.
In this paper we concentrated our investigations on different spin-1/2 systems
which all have in common that they reduce, in the unperturbed case, to critical
Heisenberg chains. It should be pointed out that there exist exact solutions of
other (multichain spin-1/2 antiferromagnetic Heisenberg-like) models,\cite{PZ93}
for which the existence of magnetization plateaus is still unclear. We remark that
the application of the method we presented here is not limited to the cases we
discussed. However, besides looking for the existence of soft modes, it turns out
to be necessary to discuss the strength of the transition matrix elements. Here we
cannot offer a general prescription, and therefore the dynamics of each
model of interest has to be treated separately.
\acknowledgments
We would like to thank A. Honecker and A. Kl\"umper for discussions.
\section{Introduction}
\begin{figure} \centering
\includegraphics[width=9cm]{h2447f1.ps}
\caption{Apparent morphology of NGC 6741 in the medium-excitation line $\lambda$5007 $\AA\/$ of [O III] (upper panel), and in the
low-excitation line $\lambda$6584 $\rm \AA\/$ of [N II] (lower panel). North is up and East to the left. HST-WFPC2 frames; programme
GO 8773, P.I. Arsen Hajian.}
\end{figure}
The sentence by the late Professor Lawrence H. Aller (1994): ``A nebula is a three-dimensional structure for which we obtain a
two-dimensional projection'' fully synthesizes the many, so far unsolved, observational limitations and interpretation problems connected to the
Planetary Nebula (PN) research, leading to: (a) rough spatio-kinematical reconstruction, (b) unrealistic assumptions for the gas
parameters (in particular, electron
temperature ($T_{\rm e}$) and electron density ($N_{\rm e}$) constant all across the nebula), and (c) proliferation of kinematical,
physical and evolutional models, frequently based on the mere nebular morphology (i. e. appearance).
\begin{figure} \centering
\includegraphics[width=9cm]{h2447f2.ps}
\caption{Apparent I([N II])/I([O III]) distribution over NGC 6741 (same WFPC2 images as Fig. 1). The sharp, inhomogeneous layer at
low-excitation ([N II]) framing the main nebula is punched, along and close to the apparent major axis, by a series of radial [O III] rays
(jets?) penetrating into the faint, roundish halo.}
\end{figure}
At last, the tomographic and 3-D analyses developed at the Astronomical Observatory of Padua (Sabbadin et al. 2004 and references therein)
overcame the stumbling block of nebular de-projection, rebuilding the ionic spatial structure, and allowing us a direct comparison
of each real, true PN with the current theoretical evolutionary models (Sch\"onberner et al. 1997, Steffen et al. 1998, Marigo et al. 2001), the
detailed hydro-dynamical simulations (Icke et al. 1992, Frank 1994, Mellema 1997, Perinotto et al. 2004a), and the updated photo-ionization codes
(Ferland et al. 1998, Ercolano et al. 2003).
Though the observational starting point is common - i. e. long-slit spectra -, the ``philosophy'' of tomography is just opposite of the
conventional method. In fact, the latter compacts the spectrum along the slit (in practice, it restricts
the nebula to a point), and gives mean, integrated results (line flux, expansion velocity, $T_{\rm e}$,
$N_{\rm e}$, ionization etc.). Vice versa, tomography is based on a pixel-to-pixel analysis of both flux and velocity, and furnishes
the bi-dimensional structure (in different ions) of the radial slice of nebula intercepted by the spectrograph slit.
Later on, a 3-D rendering procedure combines all
tomographic slices and provides the true spatial distribution of the kinematics, physical
conditions ($T_{\rm e}$ and $N_{\rm e}$), and ionic and chemical abundances at unprecedented accuracy.
Tomography needs spectra at high ``relative'' spatial (SS) and spectral (RR) resolutions (SS=r/$\Delta$r, r=apparent radius,
$\Delta$r=seeing; RR=$V_{\rm exp}$/$\Delta$V, $\Delta$V=instrument spectral resolution). It is based on the simple consideration that the position,
depth and density of each elementary volume within an extended,
regularly expanding nebula can be, in principle, derived from the radial velocity, FWHM and flux, respectively, of the corresponding
emission.
So far we have studied NGC 40 (Sabbadin et al. 2000a), NGC 1501 (Sabbadin et al. 2000b, Ragazzoni et al. 2001), NGC 6565 (Turatto
et al. 2002), NGC 6818 (Benetti et al. 2003) and NGC 7009 (Sabbadin et al. 2004). Here we present the results for NGC 6741.
NGC 6741 (PN G033.8-02.6, Acker et al. 1992) is a compact (main body$\simeq$7''x5'', halo diameter$\simeq$15''; Curtis 1918,
Schwarz et al. 1992),
high-surface brightness, high-excitation (class 8, Hyung \& Aller 1997) PN with a large degree of stratification of the
radiation. The powering star is very hot (log T$_*$$\ge$5.22, Pottasch 1981, Pottasch \& Preite-Martinez 1983, Heap et al. 1989, Tylenda et
al. 1989, Kaler \& Jacoby 1989) and faint (m$_{\rm V}$$\simeq$19.5, Pottasch 1981, Tylenda et al. 1989, Kaler \& Jacoby 1989, Hyung \& Aller
1997).
The [O III] and [N II] apparent morphology of NGC 6741 (sometimes called ``Phantom Streak Nebula'') is shown in Figs.
1 and 2 (HST-WFPC2 frames retrieved from NASA public archives).
Note the vertical (equatorial?) inhomogeneous strip of absorbing knots in the [O III] image of Fig. 1, and in Fig. 2 the roundish halo
and the series of weak, radial [O III] rays punching the [N II] skin along and close
to the apparent major axis. A multi-color HST reproduction of the nebula is given by
Hajian \& Terzian at http://ad.usno.navy.mil/pne/gallery.html.
We immediately point out that the optical appearance of NGC 6741, the co-existence of
ionic species with a large range of ionization potential, IP (from [O I] IP=0 eV to [Ne V] IP=97.1 eV; Aller et al. 1985, Hyung \& Aller 1997),
and the characteristics of the central star (hot and faint) are suggestive of a recombining PN, i. e. the star has exhausted
the hydrogen-shell nuclear burning and is rapidly fading in luminosity (and temperature); the UV flux being unable to fully ionize the
gas, the outer nebular regions recombine, producing the faint halo (Tylenda 1986, Phillips 2000). Under many respects NGC 6741
is very much like NGC 6565 (Turatto et al. 2002).
To deepen the kinematics, physical conditions, ionic and spatial structure, distance and evolutionary status
of NGC 6741, we have secured long-slit ESO NTT+EMMI echellograms (spectral range
$\lambda\lambda$3900-7900 $\rm\AA\/$, spectral resolution $\lambda$/$\Delta$$\lambda$=R=60\,000)
at nine position angles (PA). The spectra were reduced and analysed using our
reconstruction 3-D technique (Turatto et al. 2002, Benetti et al. 2003).
The results are presented in this paper,
whose plan is as follows: Sect. 2 illustrates the
observational procedure and the reduction method, Sect. 3 defines the kinematics of the ionized gas, Sect. 4 quantifies the
interstellar and circum-nebular absorption, Sect. 5 provides the nebular distance, size and age, in Sect. 6 we discuss the parameters of the
central star (temperature, luminosity, mass and evolutionary phase)
and in Sect. 7 the nebular parameters ($T_{\rm e}$, $N_{\rm e}$, ionic mass and structure, chemical abundances, photo-ionization model),
Sect. 8 re-builds the 3-D spatio--kinematical structure, Sect. 9 contains the general discussion, and Sect. 10 draws some conclusions.
\section{Observations and reductions}
NGC 6741 was observed with ESO NTT + EMMI (echelle mode; grating $\#$14, grism $\#3$) at nine equally spaced PA, under photometric
sky conditions and seeing ranging between 0.50\arcsec\,and 0.70\arcsec. The spectrograph slit (1.0\arcsec \,wide
and 30\arcsec\,long) was
centered on the nebular image, the exciting star being invisible in the slit-viewer.
The echellograms (exposure time 600s) cover the spectral range $\lambda$$\lambda$3900--7900 $\rm\AA\/$ with resolution
R$\simeq$60\,000, and provide
the kinematical structure of the main ionic species within the nebular slices covered by the slit.
Bias, zero-order flat field and distortion corrections, and wavelength and flux calibrations were performed according
to the straightforward procedure fully described by Turatto et al. (2002).
\begin{figure*}
\centering \includegraphics[width=17cm]{h2447f3.ps}
\caption{Detailed spectral image of twelve ionic species in NGC 6741 at PA=15$\degr$ (close to the apparent minor axis),
arranged in order of increasing IP (top-left to bottom-right).
The original fluxes are multiplied by the factor given in parenthesis, to make each emission
comparable with $\lambda$5007 $\rm\AA\/$ of [O III]. The blue-shifted gas is to the left. The top-left frame also shows the [O I]
night-sky emission at $\lambda$6300.304 $\rm\AA\/$. In the top-right frame the [O II] line at $\lambda$7319 $\rm\AA\/$, partially in
blend with $\lambda$7320 $\rm\AA\/$, has been suppressed (for details, see the text and Fig. 5).}
\end{figure*}
\begin{figure*}
\centering \includegraphics[width=17cm]{h2447f4.ps}
\caption{Same as Fig. 3, but for PA=75$\degr$ (close to the apparent major axis of NGC 6741).}
\end{figure*}
We stress the fundamental difference between our frames, covering 80 echelle orders, and the observing procedure usually adopted for
extended objects, inserting an interference filter to isolate a single order. The same is valid for the reduction and investigation
methods: so far long-slit echellograms have been used to obtain, either the kinematics in a few ions (in general, [O III], H I and
[N II]), or the ``average'' nebular properties (physical conditions, ionic and chemical abundances) integrated over the whole slit
length. On the contrary, we proceed to a detailed, pixel-to-pixel, flux and velocity determination for a number of nebular lines, thus inferring
the spatial distribution of the kinematics, diagnostics, ionic and total abundances at the same time.
The richness of physical information contained in the echellograms is illustrated in Figs. 3 and 4, presenting the spectral structure
of twelve ionic species (from [O I], IP=0 eV to [Ar V], IP=59.8 eV) at PA=15$\degr$ and 75$\degr$ (close to the apparent minor and major axes
of NGC 6741, respectively).
These figures show the spectral characteristics common to all PA, in particular: (a) large stratification of
both the radiation and kinematics (compare, e. g., the ionic sequences [O I]-[O II]-[O III] and [Ar III]-[Ar IV]-[Ar V]), and (b)
blurred appearance of recombination
lines (of H I, He I and He II), due to a combination of thermal motions, fine-structure and expansion velocity gradient. At the same time,
they highlight the kinematical and physical differences at the two PA:
\begin{description}
\item[-] un-tilted, quite uniform emissions close to the apparent minor axis (Fig. 3), though the blue-shifted gas is systematically fainter
than the red-shifted gas,
\item[-] tilted, inhomogeneous lines close to the apparent major axis (Fig. 4), suggestive of a dense equatorial torus and two extended and
faint polar lobes.
\end{description}
In the original NTT+EMMI frames, $\lambda$7320.121 $\rm\AA\/$ of [O II] (top-right panel in Figs. 3 and 4; hereafter $\lambda$7320
$\rm\AA\/$) is partially in blend with $\lambda$7319.044 $\rm\AA\/$, the bluest component of the [O II] red quartet (hereafter $\lambda$7319
$\rm\AA\/$). Since the intensity ratio I($\lambda$7320 $\rm\AA\/$)/I($\lambda$7319 $\rm\AA\/$)=constant=3.071 in
the $N_{\rm e}$ range here considered (De Robertis et al. 1985, Keenan et al. 1999, Sharpee et al.
2004), the $\lambda$7319 $\rm\AA\/$ suppression was obtained in the simple fashion illustrated in Fig. 5.
\begin{figure}
\centering \includegraphics[width=9cm]{h2447f5.ps}
\caption{De-blending procedure for the [O II] $\lambda\lambda$7319-7320 $\rm\AA\/$ doublet (using IRAF packages). Top panel: Image A= part
of the echellogram (at PA=75$\degr$) centered on $\lambda$7319 $\rm\AA\/$. Middle panel: Image B= part of the echellogram (at PA=75$\degr$)
centered on $\lambda$7320 $\rm\AA\/$. Bottom panel: Image C = Image A - (Image B/3.071), where 3.071 corresponds to I(7320)/I(7319).}
\end{figure}
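In practice the suppression amounts to a scaled subtraction of two sub-frames of
the echellogram; a minimal Python sketch (array and function names are ours,
purely illustrative):
\begin{verbatim}
def suppress_7319(image_A, image_B, ratio=3.071):
    # image_A: sub-frame centered on [O II] 7319 A (Fig. 5, top)
    # image_B: sub-frame centered on [O II] 7320 A (Fig. 5, middle)
    # ratio  : I(7320)/I(7319), density-independent in our N_e range
    return image_A - image_B / ratio   # Image C (Fig. 5, bottom)
\end{verbatim}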
\begin{figure}
\centering \includegraphics[width=9cm]{h2447f6.ps}
\caption{High-contrast [O III] and [N II] line profiles (in logarithmic scale) of NGC 6741 close to the apparent minor and major
axes (PA=15$\degr$ and 75$\degr$, respectively) showing the faint halo and red-shifted tail. Same orientation as Figs. 3 and 4.}
\end{figure}
We note in passing that the ``relative'' spectral and spatial resolutions of our echellograms are similar to each other:
RR=$V_{\rm exp}$/$\Delta$V$\simeq$3 to 5 $\simeq$ SS=r/$\Delta$r. This means that the spatial information of NGC 6741 is as accurate as
the kinematical information,
and both are close to the lower limit for the tomographic analysis (Ragazzoni et al. 2001). Ergo, the spectral images in Figs. 3 and 4 are
affected by non-negligible blurring agents.
Seeing and guiding are the broadening components
along the spatial (i. e. vertical) axis; in our case the full-width at half maximum, W(spat)$_{\rm seeing+guiding}$,
turns out to be 0.60$\arcsec$ for PA=15$\degr$ (Fig. 3), and 0.65$\arcsec$ for PA=75$\degr$ (Fig. 4). The true, ``intrinsic'' profile of the
nebula has:
\begin{equation}
{\rm W(spat)_{\rm intrinsic}^2= W(spat)_{\rm obs}^2 - W(spat)_{seeing+guiding}^2}.
\end{equation}
Concerning the velocity (i. e. horizontal) axis, we must take into account:
\begin{description}
\item{(1)} instrumental resolution, corresponding to a gaussian profile with W(vel)$_{\rm EMMI}$=5.09 km s$^{-1}$ (measured in the [O I]
night-sky line at $\lambda$6300 $\rm\AA\/$);
\item{(2)} thermal motions, generating a gaussian distribution with W(vel)$_{\rm thermal}$$\simeq$$21.6\times10^{-2}\times T_{\rm e}^{0.5}\times {\rm m}^{-0.5}$ km s$^{-1}$, where m is the atomic weight of the element (Clegg et al. 1999 and references therein); we use the
radial $T_{\rm e}$ profile given in Sect. 7;
\item{(3)} turbulence, i.e. random, small-scale motions; this is a very uncertain parameter for PNe, whose quantification is deferred to
a dedicated paper (in preparation), based on very-high resolution (R$\simeq$115\,000) spectra secured with Telescopio Nazionale Galileo (TNG)
+ SARG on a representative sample of targets. In the present case of NGC 6741, at the moment we can only infer that W(vel)$_{\rm turb}$ is below
10.0 km s$^{-1}$, as indicated by the
sharpness of the spectral images of forbidden lines along the velocity (i. e. x) axis (see Figs. 3 and 4; larger turbulences should
produce blurred spectral images of forbidden lines, similar to the H$\alpha$ one). According to the ``a posteriori'' analysis presented at the
end of Sect. 3, in the following we will assume W(vel)$_{\rm turb}$=
3.5 km s$^{-1}$,
in partial agreement with the general results by Neiner et al. (2000), and Gesicki et al. (2003) (but see the caveat in Sect. 3);
\item{(4)} fine-structure (only for recombination lines); following Clegg et al. (1999),
W(vel)$_{\rm fine-s.}\simeq$7.5 km s$^{-1}$ for H$\alpha$; moreover, we adopt W(vel)$_{\rm fine-s.}$=5.0 km s$^{-1}$ for
$\lambda$4686 $\rm\AA\/$ (after suppression of the 20 km s$^{-1}$ blue-shifted tail with the method illustrated in Fig. 5) and
$\lambda$6560 $\rm\AA\/$ of
He II, and $\lambda$5876 $\rm\AA\/$ of He I.
\end{description}
Note that thermal motions and/or fine-structure represent the main broadening factors for recombination lines, whereas
instrumental resolution, thermal motions and turbulence are (more or less) equivalent for forbidden lines.
The final broadening along the velocity axis is:
\begin{equation}
{\rm W(vel)_{blur}^2= \sum_{i=1}^4 W(vel)_{i}^2},
\end{equation}
and the true, intrinsic full-width at half maximum:
\begin{equation}
{\rm W(vel)_{intrinsic}^2= W(vel)_{obs}^2 - W(vel)_{blur}^2}.
\end{equation}
W(vel)$_{\rm intrinsic}$ (in km s$^{-1}$) corresponds to the expansion velocity range of the emitting layer; it can be transformed into
arcsec by means of the general expansion law of the ionized gas: in NGC 6741 being $V_{\rm exp}$(km s$^{-1}$)=13($\pm$1)$\times$R$\arcsec$
(see Sect. 3), we have W(vel)$_{\rm intrinsic}$ (arcsec)= [W(vel)$_{\rm intrinsic}$ (km s$^{-1}$)]/13.
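To make the bookkeeping of Eqs. (2) and (3) explicit, here is a minimal Python
sketch (function names and the sample numbers are ours, purely illustrative; in
practice $T_{\rm e}$, the atomic weight m and the observed FWHM vary from pixel
to pixel):
\begin{verbatim}
import numpy as np

def w_thermal(Te, m):
    # thermal FWHM (km/s); Te in K, m = atomic weight
    return 21.6e-2 * np.sqrt(Te / m)

def w_intrinsic(w_obs, Te, m, w_instr=5.09, w_turb=3.5, w_fine=0.0):
    # Eqs. (2)-(3): quadratic subtraction of the blurring agents;
    # w_fine = 7.5 for H-alpha, 5.0 for the quoted He I / He II lines
    w_blur2 = w_instr**2 + w_thermal(Te, m)**2 + w_turb**2 + w_fine**2
    return np.sqrt(w_obs**2 - w_blur2)

# e.g. an [O III] line (m = 16) with an assumed observed FWHM of
# 12 km/s at Te = 12000 K (illustrative numbers only):
w = w_intrinsic(12.0, 12000.0, 16.0)
print(w, "km/s ->", w / 13.0, "arcsec")   # V_exp = 13 x R(arcsec)
\end{verbatim}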
Let us consider, for example, the spectral images at PA=15$\degr$ (Fig. 3). The intrinsic FWHM in both the zvpc and cspl (as defined at the
opening of
Sect. 3) turns out to be 0.52$\arcsec$ to 0.72$\arcsec$ (forbidden lines), 0.77$\arcsec$ to 0.95$\arcsec$ (He I), 1.00$\arcsec$ to 1.15$\arcsec$
(He II), and 1.35$\arcsec$ to
1.45$\arcsec$ (H I), thus confirming that:
(a) each emitting region is extremely sharp,
(b) the whole spectral images must be carefully deconvolved: to this end we use the Richardson-Lucy algorithm (Richardson 1972, Lucy 1974),
and a point-spread
function given by a bi-dimensional gaussian profile characterized by W(spat)$_{\rm seeing+guiding}$ and W(vel)$_{\rm blur}$.
So far we have considered the bright main shell of NGC 6741. At lower-flux cuts the faint halo appears (Fig. 6), whose spectral signature supports the
recombination hypothesis (note, in particular, the broad, un-tilted halo-emission at PA=15$\degr$, and the broad, tilted
halo-emission mimicking the kinematics of the main nebula at PA=75$\degr$). In fact, according to the current
evolutionary models (e. g. Sch\"onberner et al. 1997, Steffen et al. 1998, Marigo et al. 2001, Perinotto et al. 2004a), a PN halo can be:
\begin{description}
\item[-] an {\bf AGB halo}, i. e. the low-density envelope of an optically thin PN, directly ionized by the UV flux of the bright central
star. It represents the AGB mass-loss in the pre-superwind phase, and expands at the original ejection velocity ($V_{\rm exp}
\simeq$10-15 km s$^{-1}$), independent of the kinematical field of the main nebula;
\item[-] a {\bf recombination halo}, corresponding to the outer, almost neutral parts of the main nebula, no longer reached
by the UV flux of a hot central star which has exhausted the H-shell nuclear burning and passed the turnover point in stellar evolution,
rapidly fading in luminosity and temperature. The recombining layers essentially retain the general kinematical properties of
the main thick nebula (as observed in the halo of NGC 6741).
\end{description}
Fig. 6 also shows the faint red-shifted tail present in the strongest emissions of NGC 6741 at all PA, whose nature (halo?
interaction with the ambient ISM? early-AGB wind? instrumental scattered light?) remains unclear.
This preliminary, qualitative description highlights the complexity of NGC 6741 (a common characteristic of all PNe analysed at
adequate spatial and spectral resolutions). Before starting a thorough investigation of the kinematical and physical
information contained in the spectra, we underline that the general characteristics of NGC 6741 - in particular: (a) nebular
compactness, and (b) star weakness (the stellar continuum, generally used as a position marker, is absent in the echellograms) - do make
this nebula the ideal target to test the reliability limits of our 3-D reconstruction procedure.
\section{The gas spatio-kinematics}
According to Sabbadin et al. (2004, and references therein), the overall spatio-kinematical properties of a regularly expanding
nebula can be derived by combining the kinematics of the gas projected at the apparent position of the star (the ``central star
pixel line'', cspl, of the echellograms, common at all PA) with the spatial profile at the systemic radial velocity (the ``zero
velocity pixel column'', zvpc, at each PA).
Though the stellar continuum is absent in the echellograms of NGC 6741,
we obtain a satisfactory cspl-location and zvpc-alignment (within $\pm$0.1$\arcsec$) of the echelle orders
thanks to: (a) the central, symmetrical position of the star in the broad-band HST-WFPC2 nebular images, (b) the cspl- and zvpc-calibration
given by the continuum spectrum of the central star of NGC 7009 (observed at the same night, and with the same instrumental
configuration).
\begin{centering}
\begin{table*}
\caption{Peak separation in the cspl of NGC~6741}
\begin{tabular}{ccccc}
\hline
\\
Ion &IP range (eV)&&2$V_{\rm exp}$ (km s$^{-1}$)&\\
\cline {3-5}
\\
&& Wilson (1950)& Robinson et al. (1982) & this paper \\
\\
\hline
\\
$[$O I$]$ & 0.0-13.6 & - & - & 55.1 \\
$[$S II$]$ & 10.4-23.3 & - & - & 54.0\\
$[$O II$]$ & 13.6-35.1 & 44.2 & - & 53.7 \\
H I & $\ge$13.6 & - & - & 44.0 \\
$[$N II$]$ & 14.5-29.6 & 42.1 & - &53.5 \\
$[$S III$]$ & 23.4-34.8 & - & - &48.2 \\
He I & 24.6-54.4 & - & - & 48.0 \\
$[$Ar III$]$& 27.6-40.7 & - & - & 47.6 \\
$[$O III$]$ & 35.1-54.9 &41.6 & 42 &46.1 \\
$[$Ar IV$]$ & 40.7-59.8 & - & - & 33.7 \\
$[$Ne III$]$& 41.0-63.4 & 41.0 & - & 48.2 \\
N III & 47.4-77.5 & - & - & 40.0 \\
He II & $\ge$54.4 & - & - & 34.0 \\
$[$Ar V$]$ & 59.8-75.0 & - & - & 25.0 \\
$[$Ne V$]$ & 97.1-126.2 & 0.0: & - &- \\
\\
\hline
\end{tabular}
\end{table*}
\end{centering}
The peak separations in the cspl of NGC 6741, 2$V_{\rm exp}$, are contained in Table 1 (last column), where
ions are put in order of increasing IP.
Typical errors range from 1.0 km s$^{-1}$ for the strongest forbidden emissions (like
$\lambda$4959-5007 $\rm\AA\/$ of [O III] and $\lambda$6548-6584 $\rm\AA\/$ of [N II])
to 2.0 km s$^{-1}$ for the faintest ones (in particular,
$\lambda$6312 $\rm\AA\/$ of [S III], $\lambda$4711-4740 $\rm\AA\/$ of [Ar IV] and $\lambda$6435-7005 $\rm\AA\/$ of [Ar V]).
The uncertainties for recombination lines
are: 2.0 km s$^{-1}$ for $\lambda$4861-6563 $\rm\AA\/$ of H I, $\lambda$4686-6560
$\rm\AA\/$ of He II and $\lambda$4640 $\rm\AA\/$ of N III, and 1.5 km
s$^{-1}$ for $\lambda$5876-6678 $\rm\AA\/$ of He I.
The agreement with the kinematical results given in the literature (also contained in Table 1) is quite poor, probably due to the
compactness of NGC 6741, combined with image rotation on the slit (long-exposure Coud\'e spectra by Wilson 1950) or inadequate spatial
resolution (circular aperture 18$\arcsec$ in diameter by Robinson et al. 1982).
\begin{figure*} \centering
\includegraphics[width=17.9cm]{h2447f7.ps}
\caption{Nebular tomography vs. spectral image connection. In this example we use the [O III] spectral images of NGC 6741 at PA=75$\degr$ and
PA=15$\degr$ (same orientation as Figs. 3 and 4).
Left panel: tomographic map of the nebular slice intercepted by a spectrograph slit aligned with the apparent major axis of an ellipsoidal PN
denser at the equator than at the poles; a and b are the polar and equatorial axes, respectively, of the elliptical nebular slice.
Central panel: spectral image of the nebular slice shown in the left panel; a' and b' are the polar and equatorial axes, respectively, of the
spectral image. Right panel: [O III] tomographic reconstruction of NGC 6741 at PA=15$\degr$. See the text for details.}
\end{figure*}
Concerning the zvpc at the nine PA of NGC 6741, the intensity peak separations, 2r$_{\rm zvpc}$,
in the different ionic species are listed in Table 2.
\begin{centering}
\begin{table*}
\caption{Peak separation in the zvpc at the nine PA of NGC~6741}
\begin{tabular}{ccccccccccccc}
\hline
\\
Ion&&&&2r$_{\rm zvpc}$&(arcsec)&&&\\
\cline{2-10}
\\
&PA=15$\degr$ &35$\degr$ &55$\degr$ &75$\degr$ &95$\degr$ &115$\degr$
&135$\degr$ &155$\degr$&175$\degr$\\
\\
\hline
\\
$[$O I$]$ & 4.3 & 4.6 & 5.7 & 7.3 & 6.8 & 7.1 &6.1 & 5.6 & 4.6 \\
$[$S II$]$ & 4.1 & 4.5 & 5.5 & 7.1 & 6.6 & 6.9 &6.0 & 5.5 & 4.4 \\
$[$O II$]$ & 4.0 & 4.3 & 5.2 & 6.6 & 6.5 & 6.6 &5.8& 5.2 & 4.3\\
H I & 3.5 & 3.5 & 4.1 & 6.0 & 6.0 & 6.0 &5.2 & 4.5 & 3.8\\
$[$N II$]$ & 4.1 & 4.4 & 5.4 & 6.8 & 6.5 & 6.5 &5.9 & 5.2 & 4.4 \\
$[$S III$]$ & 3.6 & 4.1 & 4.6 & 6.0 & 5.7 & 6.0 &5.5 & 4.6 & 3.9 \\
He I & 3.8 & 4.0 & 4.5 & 6.2 & 5.7 & 6.0 & 5.5& 4.6 & 4.0 \\
$[$Ar III$]$ & 3.7 & 3.8 & 4.5 & 5.9 & 5.0 & 6.0 & 5.5& 4.6& 4.1 \\
$[$O III$]$ & 3.6 & 3.7 & 4.4 & 5.3 & 4.7 & 5.5 & 5.0& 4.4 & 3.8 \\
$[$Ar IV$]$ & 2.7 & 3.2 & 3.6 & 3.9 & 3.9: & 5.3: & 4.5: & 3.4 &3.1 \\
$[$Ne III$]$ & 3.6 & 3.5: & 4.0 & 4.5 & 5.0: & 5.6: & 5.0& 4.0 &3.6 \\
N III & 3.0: & 3.1: & 3.6: & 4.0: & 4.2: & 5.1: & 4.4: & 3.4:&3.2: \\
He II & 2.6 & 2.9 & 3.4 & 3.8 & 3.9:& 5.0: & 3.9 & 3.2 &2.9 \\
$[$Ar V$]$ & 2.3 & 2.6 & 2.8 & 3.0 & 3.5: & 4.0: & 3.6 & 3.0 & 2.7\\
\\
\hline
\end{tabular}
\end{table*}
\end{centering}
To assemble the kinematical results of Table 1 and the spatial results of Table 2
we can follow two ways:
\begin{description}
\item[(I) ] ${\bf Search\,for\, the\, PA\, with\, R_{\rm zvpc} \simeq R_{\rm cspl}}$. Let us assume for the main shell of NGC 6741 the most general spatial
structure, i. e. a tri-axial ellipsoid with axes a, b and c.
At PA=75$\degr$ to 115$\degr$ (close to the apparent major axis, Fig. 4) the line-tilt indicates that the observer is not aligned with the major
axis, and the overall emission structure (suggestive of a dense equatorial torus + two faint polar lobes) excludes the oblate
(a=b) ellipsoid hypothesis. Moreover, the absence of line-tilt at PA=15$\degr$ (perpendicular to the apparent major axis, Fig. 3) means that
either the minor axis of the ellipsoid is close to PA=15$\degr$, or the intermediate
axis of the ellipsoid is close to PA=15$\degr$, or the ellipsoid is prolate (b=c). In all cases we conclude that, to a first approximation,
R(minor axis)$\simeq$R(intermediate axis)$\simeq$R$_{\rm zvpc}$(PA=15$\degr$)$\simeq$ R$_{\rm cspl}$, providing (through
Tables 1 and 2):
$V_{\rm exp}$(km s$^{-1}$)=13($\pm$2)$\times$R$\arcsec$.
\item[(II)] ${\bf Spectral\, image\, deformation\, along\, the\, major\, axis}$.
Let us consider a generic, expanding ($V_{\rm exp}\propto$R) tri-axial ellipsoid denser in the equatorial region than at the poles,
seen
at an intermediate direction. A spectrograph slit aligned with the apparent major axis intercepts the radial slice of nebula shown in
Fig. 7 (left panel): an ellipse with polar axis a and equatorial axis b, perpendicular to each other. The corresponding spectral
image is an ellipse too (Fig. 7, central panel), but deformed (with respect to the tomographic map) by the telescope +
spectrograph characteristics. Note that:
- the polar axis (a') and the equatorial axis (b') of the spectral image are no longer perpendicular to each other,
- the original tomographic map (Fig. 7, left panel) can be obtained by means of a simple compression of the spectral
image (Fig. 7, central panel) along the x (i. e. velocity) axis, until a'$\bot$b'.\footnote{This visual effect can be seen by
rotating Fig. 7 around the y axis.}
In practice, and reversing the foregoing procedure, we compress the [O III] spectral image of NGC 6741 at PA=75$\degr$
(Fig. 7, central panel) by a factor 1.60 along the x axis, thus correcting for the spectral deformation introduced by NTT+EMMI
on the nebular slice intercepted by the slit, and obtaining:
- the tomographic map shown in Fig. 7 (left panel),
- the expansion law $V_{\rm exp}$(km s$^{-1}$)=13($\pm$1)$\times$R$\arcsec$.
This is the same law given by method (I). In fact,
since the compression factor along the x axis is the same at all PA, we can repeat the procedure for the [O III] spectral image of NGC 6741
at PA=15$\degr$ (close to the apparent minor axis; Fig. 3), thus recovering the tomographic map shown in the right panel of Fig. 7, i. e. an almost
circular ring with R$_{\rm zvpc}$(PA=15$\degr$)$\simeq$
R$_{\rm cspl}$, QED!\footnote{Quod erat demonstrandum; Latin for: which was to be proved.}
\end{description}
The detailed cspl--zvpc relation for NGC 6741 at PA=15$\degr$, shown in Fig. 8, indicates that:
a) on the whole, the ionized gas follows Wilson's law: the high-excitation
zones expand more slowly than the low-excitation ones, and a positive correlation exists between the expansion velocity and
the size of the monochromatic image (Wilson 1950);
b) the range of both 2r$_{\rm zvpc}$ and 2$V_{\rm exp}$ is quite large (2.3 to 4.3 arcsec
and 25.0 to 55.1 km s$^{-1}$, respectively), suggesting a broad radial density profile and large stratification of the radiation;
c) the general expansion law, $V_{\rm exp}$(km s$^{-1}$)=13$\times$R$\arcsec$, fails in the innermost, highest ionization layers
marked by the [Ar V] emissions: they expand more slowly than expected.
Point c) is quite peculiar among PNe, and deserves some comments. Deceleration is present (and clear) in both
the $\lambda$6435 $\rm\AA\/$ and $\lambda$7005 $\rm\AA\/$ [Ar V] lines at all PA of
NGC 6741 (except along and close to the apparent major axis, where the open-ended structure of the [Ar V] spectral image prevents any
conclusion; see
Fig. 4, lower-right panel), whereas no evidence of deceleration appears in other high ionization species, like He II (due to the blurred
emissions) and N III ($\lambda$4640 $\rm\AA\/$ is too faint).
Though a detailed spatio-kinematical study at even higher ionization stages appears indispensable - for example in the forbidden line
of Ne$^{+4}$ (IP=97.1 eV) at $\lambda$3425 $\rm\AA\/$ (which is outside our spectral range) - a support to the
deceleration hypothesis comes from the classical work by Wilson (1950), who obtained $V_{\rm exp}$[Ne V]$\simeq$0 km s$^{-1}$
(quite uncertain).
We believe NGC 6741 is the first PN showing clear evidence of deceleration in the highest ionization layers (the second candidate
being IC 418: a preliminary analysis of our NTT+EMMI echellograms, taken at six PA, suggests the probable presence of infalling gas).
Conversely, recent results by Gesicki \& Zijlstra (2003, 3 PNe) and Gesicki et al. (2003, 14 PNe) - based on high-resolution emission
profiles integrated
along the slit, and spherical shell hypothesis for the emitting gas - indicate
that acceleration is a quite common property of the PN innermost layers (i. e. ``U''-shaped expansion profile), due to the dynamical contribution
by the shocked, hot wind
from the central star. Gesicki \& Zijlstra (2003)
adopted $\lambda$4686 $\rm\AA\/$ of He II as diagnostic of the high-ionization kinematics, whereas Gesicki et al. (2003) used
$\lambda$5007 $\rm\AA\/$ of [O III] (in a few cases $\lambda$6560 $\rm\AA\/$ of He II).
The following ${\bf caveats}$ are in order:
1) the spherical shell assumption is wide of the mark;
2) $\lambda$5007 $\rm\AA\/$ of [O III] (IP range 35.1 to 54.9 eV) is a poor diagnostic of the highest ionization strata (except for very-low
excitation PNe);
3) the recombination lines of hydrogen and helium do suffer severe blurring effects (thermal motions, fine-structure and expansion velocity
gradient across the nebula combined with the small number of ionization stages of H and He), introducing spurious kinematical results.
In particular, $\lambda$4686 $\rm\AA\/$, He II Paschen $\alpha$, consists of thirteen fine-structure components spread in the
$\lambda$$\lambda$4685.377--4685.918 $\rm\AA\/$ range (corresponding to $\Delta$V=34.6 km s$^{-1}$), with a strong blue-shifted tail (see Figs.
3 and 4).
In the case of $\lambda$6560 $\rm\AA\/$, He II Pickering $\beta$, there are as many as nineteen fine-structure components, covering the
$\lambda\lambda$6559.769--6560.209 $\rm\AA\/$ spectral range ($\Delta$V=20.1 km s$^{-1}$).
To further test this point, let us consider the emission line profiles integrated along the slit of NGC 6741 at PA=15$\degr$.
The results for $\lambda$4686 $\rm\AA\/$ and $\lambda$6560 $\rm\AA\/$ of He II, and the ionic sequence of argon - [Ar III] at $\lambda$7135
$\rm\AA\/$, [Ar IV] at $\lambda$4740 $\rm\AA\/$ and [Ar V] at $\lambda$7005 $\rm\AA\/$ - are shown in Fig. 9; they confirm that:
a) spherical symmetry is a simplistic assumption,
b) the He II recombination lines fail to fit the well-defined stratification of both the
radiation and expansion present in the ionic sequence of argon,
FWHM(He II, IP$>$54.4 eV) being intermediate between FWHM([Ar III], IP range 27.6-40.7 eV) and FWHM([Ar IV], IP range 40.7-59.8 eV),
whereas, according to the detailed kinematical results presented in this section, we would expect
FWHM(He II) $\le$ FWHM([Ar IV]) $\ll$ FWHM([Ar III]).
The same discrepancy occurs when considering other parameters (like the full-width at 10\% maximum flux), different ionic
sequences, further PA of NGC 6741, or other PNe of our sample.
A direct confirmation of the misleading kinematical results - in particular, the high-velocity of the innermost layers - provided by the
combination (spectral profile of recombination lines integrated
along the slit) + (spherical symmetry assumption) comes from a comparative analysis of the emission line profiles contained in
Gesicki \& Zijlstra
(2003) and Gesicki et al. (2003): in all cases (17 PNe) the forbidden lines of the ionic species with the highest IP also show the
sharpest emission profile (as expected for a simple, positive $V_{\rm exp}$ vs. radius relation).
All this questions the general validity of the spatio-kinematical studies based on the spherical symmetry assumption and/or spectral
profiles of recombination lines integrated along the slit, and weakens (or even cancels) the reliability of their results on turbulence,
radial kinematics,
matter distribution and ionization, whose quantification needs detailed studies at high spatial and spectral resolutions.
\begin{figure} \centering
\includegraphics[width=9.5cm, height=9.5cm]{h2447f8.eps}
\caption{The cspl--zvpc relation for NGC 6741 at PA=15$\degr$, superimposed on the adopted expansion law $V_{\rm exp}$ (km s$^{-1}$)
=13$\times$R$\arcsec$, which is valid across the whole nebula, except in the innermost, highest-ionization, decelerated regions marked by [Ar V].}
\end{figure}
\begin{figure} \centering
\includegraphics[width=9.5cm, height=9.5cm]{h2447f9.eps}
\caption{Selected emission line profiles integrated along the slit for NGC 6741 at PA=15$\degr$, showing the spurious kinematical
results provided by the recombination lines of He II.
Symbols: continuous line= [Ar III] at $\lambda$7135
$\rm\AA\/$; dotted line= [Ar IV] at $\lambda$4740 $\rm\AA\/$; short-dashed line= [Ar V] at $\lambda$7005 $\rm\AA\/$; long-dashed line=
He II at $\lambda$4686
$\rm\AA\/$; dotted-dashed line= He II at $\lambda$6560 $\rm\AA\/$. For details, see the text.}
\end{figure}
Let us go on. Fig. 10 (multi-color map only in the ``free'' electronic version of the paper, for lack of funds) shows the complete velocity field at
the nine
observed PA of NGC 6741. We select He II, [O III] and [N II] as markers of
the high, medium and low-excitation regions, respectively.
These position--velocity (P--V) maps are relative to the systemic heliocentric velocity
of the nebula, $V_{\rm rad \odot}$= +39.6($\pm1.0$) km s$^{-1}$, corresponding
to $V_{\rm LSR}$= +56.4($\pm$1.0) km s$^{-1}$, and are scaled (along the x axis) according to $V_{\rm exp}$ (km s$^{-1}$)$\simeq$13$\times$R$\arcsec$,
i.e. they reproduce the tomographic maps of the nebular slices covered by the spectrograph slit.
Fig. 10 highlights the kinematical complexity and large stratification of the radiation within NGC 6741. The main nebula consists
of an almost-prolate ellipsoid (a$\simeq$7.4$\arcsec$, a/b$\simeq$1.8, a/c$\simeq$2.0), whose major axis (projected at PA$\simeq$95$\degr$)
forms an angle of 55($\pm$3)$\degr$ with the line of sight: on the whole, the eastern part of the nebula is approaching the observer, and the
western part receding.
All this throws new light on two specific fields: (I) nature of the curious strip of absorbing knots present in Fig. 1 (upper panel), and
(II) turbulent motions.
{\bf (I) strip of absorbing knots}: the central location,
orientation (vertical), and light curvature (concavity towards East) are suggestive of an inhomogeneous belt of neutral matter (gas +
molecules + dust) embedding the dense ionized gas of the equatorial regions. The amount of circum-nebular neutral gas
can be estimated from the [O III] flux-depletion suffered by the underlying ionized nebula, since:
\begin{equation}
\log \frac{{\rm F}(5007)_{\rm off-knot}}{{\rm F}(5007)_{\rm on-knot}}=k_{5007}\,-\,k_{\rm H_{\beta}} +
c({\rm H}\beta)_{\rm circum-nebular},
\end{equation}
where k$_{\lambda}$ is the extinction coefficient (Seaton 1979), and c(${\rm H}\beta)_{\rm circum-nebular}$ the logarithmic extinction at
H$\beta$
caused by the local absorbing matter.
Intensity scans in the neighbourhood of the deepest knots provide $\frac{{\rm F}(5007)_{\rm off-knot}}{{\rm F}(5007)_{\rm on-knot}}$
up to 1.4 ($\pm$0.1), i. e.
c(${\rm H}\beta)_{\rm circum-nebular}$ up to 0.18 ($\pm$0.03). Assuming a ``normal'' gas-to-dust ratio (Spitzer 1978, Bohlin et al. 1978) and
c(${\rm H}\beta)$=1.48$\times$E(B-V) (Acker 1978), we have:
\begin{equation}
n{\rm (H I})= \frac{4.8\times 10^{21}\times{\rm E(B-V)}_{\rm circum-nebular}}{\Delta{\rm l}},
\end{equation}
where $n{\rm (H\,I)}$ is the H I density (atoms cm$^{-3}$), and $\Delta{\rm l}$ the radial thickness (cm) of the absorbing layer. For D=2000
pc (see Sect. 5) and $\Delta{\rm l}$=[r(halo) - r(main nebula at PA=15$\degr$)]/2$\simeq$2.0$\arcsec$ (from Figs. 1 and 2), we obtain an
indicative value of $n{\rm (H\,I)}\simeq$7$\times$10$^3$ atoms cm$^{-3}$.
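A minimal numerical sketch of Eq. (5), adopting c(${\rm H}\beta$)=1.48$\times$E(B-V) and the distance and layer thickness quoted above (the result is indicative only, as in the text):
\begin{verbatim}
# Sketch of Eq. (5): H I density of the circum-nebular layer.
AU_CM = 1.496e13                       # 1 AU in cm

def n_HI(c_hbeta, delta_l_arcsec, d_pc):
    ebv = c_hbeta / 1.48               # E(B-V), Acker (1978)
    delta_l_cm = delta_l_arcsec * d_pc * AU_CM  # small-angle approx.
    return 4.8e21 * ebv / delta_l_cm   # atoms cm^-3, Eq. (5)

# c(Hbeta)=0.18, Delta l=2.0'', D=2000 pc -> of order 10^4 cm^-3,
# comparable to the indicative 7e3 quoted in the text.
print(n_HI(0.18, 2.0, 2000.0))
\end{verbatim}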
Such a high density of the circum-nebular matter in NGC 6741 provides further support for the recombination hypothesis outlined in
the previous sections.
{\bf (II) turbulent motions}. Since in Fig. 10 the tomographic map close to the apparent minor axis (i.e. at PA=15$\degr$) is almost circular
and quite homogeneous, we can assume {\rm W(spat)$_{\rm intrinsic}$}(zvpc at PA=15$\degr$) $\simeq$
{\rm W(vel)$_{\rm intrinsic}$}(cspl), thus inferring {\rm W(vel)$_{\rm turb}$} (through Eqs. (1) to (3), $V_{\rm exp}$(km s$^{-1}$)
= 13$\times$R$\arcsec$, and the $T_{\rm e}$ radial profile given in Sect. 7.1). We overlook
the recombination lines of hydrogen and helium (dominated by thermal motions and/or fine-structure), and consider the strongest
forbidden lines. The analysis of $\lambda$6300 $\rm\AA\/$ ([O I]), $\lambda$6731 $\rm\AA\/$ ([S II]), $\lambda$6584 $\rm\AA\/$ ([N II]),
$\lambda$7135 $\rm\AA\/$ ([Ar III]), and $\lambda$5007 $\rm\AA\/$ ([O III]) provides {\rm W(vel)$_{\rm turb}$}= 3.5($\pm$2.0) km s$^{-1}$,
with no evident relation to the ionization degree. Thus, in spite of the crude assumptions and wide uncertainties, we conclude
that turbulent motions in NGC 6741 are quite modest.
\begin{figure*} \centering
\includegraphics[width=17.8cm]{h2447f10.eps}
\caption{Combined position--velocity maps in the nine observed PA of NGC 6741 at high (He II), medium ([O III]) and low ([N II]) excitation
(multi-color maps in the electronic version of the paper; blue=He II, green=[O III], and red=[N II]), scaled according to the relation $V_{\rm exp}$
(km s$^{-1}$)=13$\times$R$\arcsec$. The
orientation of these tomographic maps is the same of Figs. 3 and 4.}
\end{figure*}
\section{The absorption (interstellar + circum-nebular)}
In general, the observed line intensities must be corrected for absorption according to:
\begin{equation}
\log \frac{{\rm I}(\lambda)_{\rm corr}}{{\rm I}(\lambda)_{\rm obs}}=k_{\lambda}\times c({\rm H}\beta)_{\rm tot},
\end{equation}
where k$_{\lambda}$ is the extinction coefficient (Seaton 1979), and c(${\rm H}\beta)_{\rm tot}$ the logarithmic extinction at H$\beta$
given by:
\begin{equation}
c({\rm H}\beta)_{\rm tot}=c({\rm H}\beta)_{\rm interstellar} + c({\rm H}\beta)_{\rm circum-nebular}
\end{equation}
with c(${\rm H}\beta)_{\rm circum-nebular}$=0 for an optically thin, fully ionized, density bounded nebula. Decidedly, this is not
the case for NGC 6741,
which is optically thick, ionization bounded, and wrapped up in a dense cocoon of almost-neutral matter.
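In numerical form, the correction of Eqs. (6)-(7) reads as follows (k$_\lambda$ below is a placeholder; actual values must be taken from Seaton 1979):
\begin{verbatim}
# Sketch of Eqs. (6)-(7): absorption correction of a line intensity.
def deredden(i_obs, k_lambda, c_interstellar, c_circum_nebular):
    c_tot = c_interstellar + c_circum_nebular    # Eq. (7)
    return i_obs * 10.0 ** (k_lambda * c_tot)    # Eq. (6)

# k_lambda is a placeholder; c(Hbeta) terms as inferred below.
print(deredden(100.0, 0.35, 0.95, 0.10))
\end{verbatim}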
The extinction estimates reported in the literature (mean values along the nebular slice covered by the spectrograph slit) cluster
around c(${\rm H}\beta)_{\rm tot}$=1.05 (Kaler \& Lutz 1985, Kaler \& Jacoby 1989, Cahn et al. 1992, Hyung \& Aller 1997).
To disentangle the complex absorption over NGC 6741, we apply the
H$\alpha$/H$\beta$ analysis of the whole spectral image, as introduced by Turatto et al. (2002). Fig. 11 shows the F(H$\alpha$)/F(H$\beta$)
isophotal contours superimposed on the H$\beta$ spectral image for three representative PA of NGC 6741:
at PA=15$\degr$ (close to the apparent minor axis; untilted H$\beta$ spectral image) F(H$\alpha$)/F(H$\beta$) increases outwards along the
spatial (i. e. y) axis, peaking beyond the top of the H$\beta$ flux.
At PA=55$\degr$ (intermediate PA; tilted H$\beta$ spectral image) the F(H$\alpha$)/F(H$\beta$) rise along the y axis is overwhelmed by the
broad maximum at S-W of the nebular centre (at the expected position of the absorbing belt visible in Fig. 1, upper
panel). The same occurs at PA=75$\degr$ (close to the apparent major axis; tilted H$\beta$ spectral image), where F(H$\alpha$)/F(H$\beta$)
peaks at W-SW of the centre, at the position of the equatorial absorbing belt.
\begin{figure*} \centering
\includegraphics[width=18cm,height=9cm]{h2447f11.ps}
\caption{F(H$\alpha$)/F(H$\beta$)
isophotal contours superimposed on the H$\beta$ spectral image for three representative PA of NGC 6741. Left panel: PA=15$\degr$
(close to the apparent minor axis); central panel: PA=55$\degr$ (intermediate direction); right panel: PA=75$\degr$ (close to the apparent
major axis). Same orientation as Figs. 3 and 4. The isophotal contours cover the range 5.20 (the outermost) to 7.60, with a constant
step of 0.30. }
\end{figure*}
Summing up the H$\alpha$/H$\beta$ intensity distribution at the nine observed PA of NGC 6741, we infer that:
\begin{description}
\item[a)] c(${\rm H}\beta)_{\rm interstellar}$=0.95 ($\pm$0.05),
\item[b)] c(${\rm H}\beta)_{\rm circum-nebular}$ is 0.10 ($\pm$0.03) in the central region of the nebular image (out of the equatorial absorbing
belt), and rises to 0.20 ($\pm$0.05) within the equatorial absorbing belt, and at the edge of the ionized zone.
\end{description}
Moreover, a modest decrease of F(H$\alpha$)/F(H$\beta$) in the innermost regions (out of the equatorial absorbing belt) is suggestive of
a local increase of $T_{\rm e}$.
Lastly, an indicative value of the total neutral mass embedding the ionized nebula (through Eq. (5) and using simple geometrical considerations)
is M$_{\rm neutral}$$\simeq$0.20 ($\pm$0.05) M$_\odot$.
Though the small angular size prevents a deeper analysis, these results represent yet another sign of the complex structure and peculiar
evolutionary phase of NGC 6741, and will be a valuable support in the determination of the topical parameter, i. e. distance.
\section{The nebular distance, size and age}
The ``statistical'' distance of NGC 6741, provided by two dozen catalogues using different methods and assumptions, is:
\begin{description}
\item[]$<$D$>$(Shklovsky)$\simeq$3300($\pm$1000) pc
\item[]$<$D$>$(ionized mass--radius relation)$\simeq$1500($\pm$700) pc
\item[]$<$D$>$(other methods)$\simeq$2000($\pm$1000) pc,
\end {description}
where $<$D$>$(Shklovsky) represents an upper limit of the true distance, NGC 6741 being an optically thick, ionization bounded PN.
Individual values reported in the literature are: 1300 pc (nebular radial velocity combined with the circular law of galactic rotation;
Acker 1978), 2100 pc (Acker 1978) and 1400 pc (Pottasch 1983) (both based on the large-scale galactic interstellar absorption map by Lucke 1978),
and 1500 pc (color-excess vs distance relation for early-type stars within 1.5$\degr$ of the nebula; Kaler \& Lutz 1985).
We tried to determine the nebular parallax by combining the expansion velocity field with the angular expansion measured in
first- and second-epoch HST-WFPC2 frames. The NASA
public archives contain 21 images of NGC 6741, taken at two epochs separated by 2.97 years (programs GO 7501 and GO 8773; P.
I. Arsen Hajian). The [O III] and [N II] multiple exposures (WFPC2 central planetary camera, PC; pixel size=0.0455 arcsec) were co-added,
corrected for optical distortions, aligned and rotated using
IRAF packages (Reed et al. 1999; Palen et al. 2002).
No apparent image-shift is detected, i. e. image-shift$\le$$\frac{1}{2}$pixel, and $\frac{d\theta}{dt}$$\le$8$\times$10$^{-3}$ arcsec yr$^{-1}$.
For an optically thin, fully ionized PN this implies D$\ge$1400 pc, since:
\begin{equation}
{\rm D(pc)}=\frac{0.211 V_{\rm exp} {\rm (km\ s^{-1})}}{\frac{d\theta}{dt} {\rm (arcsec\ yr^{-1})}}.
\end{equation}
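In numerical form (a direct transcription of Eq. (8); the inputs below are illustrative values consistent with the limits quoted above):
\begin{verbatim}
# Sketch of Eq. (8): expansion-parallax distance.
def expansion_distance_pc(v_exp_kms, dtheta_dt_arcsec_yr):
    return 0.211 * v_exp_kms / dtheta_dt_arcsec_yr

print(expansion_distance_pc(53.0, 8.0e-3))   # ~1400 pc
\end{verbatim}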
But NGC 6741 is optically thick (probably recombining), and Eq. (8) cannot be applied, since in this case we should compare the expansion velocity
of the ionized gas with the angular expansion of the ionization front. Note that, in the limiting case of a nearby nebula in a
very deep recombination phase, we even expect a detectable contraction of the ionization front. Thus, we infer (a mere speculation, at the
moment) that D(NGC 6741)$\ge$1500 pc, and/or the nebula is at the end of the recombination phase (or at the reionization start).
As a last chance for the nebular distance determination, we decided to enrich (and sharpen, if possible) the broad color-excess vs. D
relation given by Kaler \& Lutz (1985) with the recent bibliographic reports.
The scanning of the NGC 6741 neighbourhood with the SIMBAD facilities of CDS (Centre de Donn\'ees astronomiques de Strasbourg) gave fruitful
results; besides two dozen (low-weight) field stars with accurate photometry and spectral type (in most cases the luminosity class is absent;
we assume a luminosity class V), we have identified two important distance markers:
- the open cluster OCL 88 (at an apparent distance of 15.4' from NGC 6741), characterized by E(B-V)=1.0 and D=3000 pc (Sagar \& Griffiths 1998,
and references therein),
- the classical Cepheid V336 Aql (apparent distance from NGC 6741=37.0'), with log P=0.8636 days, $<$m$_{\rm V}$$>$=9.875, E(B-V)=0.61--0.64, and
D=1800--2100 pc (Pel 1976, Feast \& Whitelock 1997, Metzger et al. 1998, Berdnikov et al. 2000).
The improved E(B-V) vs. D relation in the direction of NGC 6741 is shown in Fig. 12: the interstellar absorption starts at D$\simeq$ 200--300 pc,
and gradually increases up to 3000 pc (and beyond). This is in partial disagreement with the literature reports. We recall that NGC 6741 is
within the Aquila-Scutum cloud, a galactic region characterized by a high, quite uniform star density, suggestive of a low obscuration.
According to Forbes (1985), it shows an almost immediate climb to A$_{\rm V}$=2.3 mag within 300 pc; beyond this distance, there is little or no
increase of extinction out to D$\simeq$6000 pc.
\begin{figure} \centering
\includegraphics[width=9.5cm, height=9.5cm]{h2447f12.eps}
\caption{Color-excess vs. distance relation in the direction of NGC 6741. Symbols: small dots= field stars; filled square= open cluster OCL 88;
large dot= classical Cepheid V336 Aql.
}
\end{figure}
To test this point, let us consider the four other PNe projected within the Aquila-Scutum cloud (at an apparent distance from
NGC 6741 lower than 90'). They are CBSS 3, K 3-19, M 1-66 and K 3-20 (we reject a fifth object, Sp 2-151, because of the unreliable line
intensities - in particular F(H$\alpha$)/F(H$\beta$)$\simeq$1.0 - and peculiar spectral-type A for the exciting star reported by Acker et al.
1992, probably due to the partial blending with a relatively bright, m$_{\rm R}$$\simeq$13.0, field star). The four selected PNe in the
Aquila-Scutum cloud do have 1.5$\le$c(${\rm H}\beta)$$\le$2.2 (line intensities by Acker et al. 1992, and Cappellaro et al. 1994, for
$T_{\rm e}$=10$^4$ K and $N_{\rm e}$=2$\times$10$^3$ cm$^{-3}$), i. e. 3.2$\le$A$_{\rm V}$$\le$4.7, and, using the obscuration law by Forbes
(1985), D$>$ 6000 pc for all four nebulae.
Since the last result appears quite improbable (and, in our opinion, questionable), in the following we will adopt the
color-excess vs.
distance relation shown in Fig. 12. It provides D(NGC 6741) = 2000 ($\pm$300) pc.
The linear size of the main nebula is 0.036 pc x 0.020 pc x 0.018 pc (major, intermediate and minor semi-axes, respectively), whereas both
the spherical recombining halo and the [O III] rays punching the [N II] skin close to the apparent major axis extend up to 0.080 pc.
The ``kinematical'' age of NGC 6741,
t$_{\rm kin}$=R/$V_{\rm exp}$, is about 750 years, and the ``true'' age t$_{\rm true}$$\simeq$2R/ [$V_{\rm exp}$(today)+$V_{\rm exp}$(AGB)]$\simeq$
1400 years.
Summing up: NGC 6741 is a very young, compact, high surface brightness (i.e. dense) PN. The combination of: (a) high excitation
(up to [Ne V]) of the
innermost layers, (b) low ionization skin, and (c) almost neutral, large density halo, is indicative of a very hot central
star at low luminosity. A deep analysis of the powering engine is in order.
\section{The central star parameters}
The HST-WFPC2 frames of NGC 6741 taken through the broad-band filter F555W (central wavelength=5407 $\rm\AA\/$, bandwidth=1236 $\rm\AA\/$)
provide m$_{\rm V}$=20.09 ($\pm$0.05), where the unknown star color is the main source of inaccuracy. Previous ground-based estimates reported
in the
literature are: 19.5 (Pottasch 1981), 17.6 (Tylenda et al. 1989), 19.16 (Kaler \& Jacoby 1989) and 19.26 (Hyung \& Aller 1997) for
m$_{\rm V}$, and $>$20.2 (Gathier \& Pottasch 1988) and 18.2 (Tylenda et al. 1989) for m$_{\rm B}$.
The H I and He II Zanstra temperatures are given by the stellar magnitude, combined with both the total H$\beta$ nebular flux,
log F(H$\beta$)$_{\rm obs}$=-11.32 ($\pm0.03$) mW$\times$m$^{-2}$ (Kaler \& Lutz 1985, Kaler \& Jacoby 1989, Acker et al. 1991,
Cahn et al. 1992, this paper), and the flux ratio F($\lambda$4686
${\rm \AA}$)/F(H$\beta$)=0.40($\pm0.03$) (Aller et al. 1985, Kaler \& Jacoby 1989, Hyung \& Aller 1997, this paper).
We obtain
log(T$_{\rm Z}$H I)=5.33($\pm0.07$) and log(T$_{\rm Z}$He II)=5.23($\pm0.07$),
thus confirming the peculiarity already reported by Heap et al. (1989) and Hyung \& Aller (1997), i. e. the Zanstra discrepancy is reversed.
T$_{\rm Z}$H I$>$T$_{\rm Z}$He II is a typical signature of recombining PNe, e. g. NGC 6565 (Turatto et al. 2002), where the
ionization and thermal structure are out of equilibrium, and the recombination processes dominate. These are faster for the
higher-ionization species (Tylenda 1986, Stasinska 1989, Marten \& Szczerba 1997); thus, in the following, we will adopt
T$_*$$\simeq$T$_{\rm Z}$He II$<$T$_{\rm Z}$H I.
The stellar luminosity (using D=2000 pc, and the bolometric corrections by Sch\"onberner 1981) turns out to be log L$_*$/L$_\odot$=2.75($\pm0.15$).
The high temperature and low luminosity of the star, added to the short nebular age, suggest that the stellar mass,
M$_*$, is larger
than the average value ($\simeq$0.60 M$_\odot$) of the PNe nuclei. According to the evolutionary tracks by
Sch\"onberner (1981, 1983), Iben (1984),
Wood \& Faulkner (1986), Bl\"ocker \& Sch\"onberner (1990), Vassiliadis \& Wood (1994) and Bl\"ocker (1995), the 0.66-0.68
M$_\odot$ post-AGB star of NGC 6741 has recently (a few hundred years ago) exhausted the hydrogen--shell nuclear burning, and is fading
along the white dwarf cooling sequence.
Note that the early luminosity decline of the star was very fast (and caused the nebular recombination), but later
it gradually slowed, so that
we cannot exclude that, at present, the gas has reached (or even passed) the contraction-expansion equilibrium condition,
$(1/3)\times {\rm d}(\ln L_*)/{\rm d}t=-2/t_{\rm kin}$ in the classical Str\"omgren model, thanks to the matter dilution due to expansion.
New input will come from the nebular physical conditions, radial ionization structure and photo-ionization model.
\section{The nebular parameters}
\subsection{Physical conditions and ionized mass}
Following Sabbadin et al. (2004, and references therein), the radial profile of the physical conditions is given by the zvpc,
which is independent of the expansion velocity field, since it represents the gas in the plane of the sky, whose motion is tangential.
For $T_{\rm e}$ we use the classical diagnostic line ratios of ions in p$^2$ and p$^4$ configurations ($\lambda$5007 $\rm\AA\/$/$\lambda$4363
$\rm\AA\/$ of [O III] and $\lambda$6584 $\rm\AA\/$/$\lambda$5755 $\rm\AA\/$ of [N II]; Aller 1984, Osterbrock 1989), whereas
$N_{\rm e}$ comes from both diagnostic line ratios of ions in p$^3$ configuration ($\lambda$6717 $\rm\AA\/$/$\lambda$6731 $\rm\AA\/$ of
[S II] and $\lambda$4711 $\rm\AA\/$/$\lambda$4740 $\rm\AA\/$ of [Ar IV]), and the absolute H$\alpha$ flux distribution.
The main limitation is connected to the NGC 6741 compactness, large stratification of the radiation, and weakness of the [N II] auroral line and
the [S II] and [Ar IV] doublets: $T_{\rm e}$[N II], $N_{\rm e}$[S II] and $N_{\rm e}$[Ar IV] can be derived only at (or close to) the
intensity peak of the corresponding emission.
Moreover, the large H$\alpha$ broadening implies a complex deconvolution for instrumental resolution plus thermal motions plus fine-structure
(see Sect. 2), lowering the accuracy of the F(H$\alpha)_{\rm zvpc}$ and $N_{\rm e}$(H$\alpha$) profiles. Thus, according to Benetti et al. (2003),
F(H$\alpha)_{\rm zvpc}$ and $N_{\rm e}$(H$\alpha$)
are also obtained from the radial ionization structure relative to O$^{++}$ (Sect. 7.2) and the fair assumption O/H=constant
across the nebula; at each position:
\begin{equation}
\frac{\rm F(H\alpha)_{zvpc}}{\rm F(\lambda 5007\AA)_{zvpc}}\propto\frac{H}{O}\times f(T_{\rm e})\times {\rm icf(O^{++})},
\end{equation}
with ${\rm icf(O^{++})}$=${\rm \frac{O}{O^{++}}}$=ionization correcting factor.
For the internal, high-excitation regions ${\rm icf(O^{++})}$ comes from the ionization structure of helium (Seaton 1968, Benetti et al. 2003):
\begin{equation}
{\rm icf(O^{++})_{\rm inner}} = 1+ \frac{0.3\times He^{++}}{He^+};
\end{equation}
for the external, low-excitation layers we adopt:
\begin{equation}
{\rm icf(O^{++})_{\rm outer}} = 1 + \frac{O^0}{O^{++}} + \frac{O^+}{O^{++}},
\end{equation}
which includes the ionization effects produced by the resonant charge-exchange reaction O$^+$ + H$^0$$\getsto$O$^0$ + H$^+$;
according to Osterbrock (1989) and Stancil et al. (1999), in the outermost nebular regions H$^0$/H$^+$$\simeq$0.83$\times$(O$^0$/O$^+$).
Moreover (Sabbadin et al. 2004),
\begin{equation}
N_{\rm e}({\rm H}\alpha)\propto T_{\rm e}^{0.47} \times
\left(\frac{\rm F(H\alpha)_{zvpc}}{\epsilon_{\rm l} \times {\rm D}}\right)^{1/2},
\end{equation}
where D is the (known) distance, and $\epsilon_{\rm l}$ the (unknown) ``local filling factor'', i. e. the fraction of the local
volume actually filled by matter with density $N_{\rm e}$.
A comparative analysis gives quite satisfactory results: the $N_{\rm e}$(H$\alpha$) profiles obtained in the two ways
differ by less than 5$\%$ everywhere, except in the faint, innermost regions, where the discrepancy rises to 10$\%$. In the following we
will adopt $N_{\rm e}$(H$\alpha$) given by F($\lambda$5007 $\rm\AA\/$)$_{\rm zvpc}$ and Eqs. (9) to (12).
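A minimal sketch of the ionization correcting factors of Eqs. (10)-(11) (the ionic ratios in the example are purely illustrative):
\begin{verbatim}
# Sketch of Eqs. (10)-(11): icf(O++) in the inner and outer layers.
def icf_o2p_inner(he2p_over_hep):
    return 1.0 + 0.3 * he2p_over_hep           # Eq. (10)

def icf_o2p_outer(o0_over_o2p, op_over_o2p):
    return 1.0 + o0_over_o2p + op_over_o2p     # Eq. (11)

print(icf_o2p_inner(0.5), icf_o2p_outer(0.1, 0.4))  # illustrative
\end{verbatim}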
Fig. 13 shows the resulting radial distribution of the physical conditions in NGC 6741 at PA=15$\degr$ and PA=75$\degr$ (close to the apparent
minor and major axes, respectively).
\begin{figure*} \centering
\includegraphics[width=17cm,height=17cm]{h2447f13.eps}
\caption{Radial distribution of the physical conditions ($T_{\rm e}$ and $N_{\rm e}$) at two selected directions of NGC 6741. Upper panel: the
zvpc at PA=15$\degr$ (close to the apparent minor axis); lower panel: the zvpc at PA=75$\degr$ (close to the apparent major axis).
Left ordinate scale: $T_{\rm e}$[O III] (continuous line) and
$T_{\rm e}$[N II] (empty squares). Right ordinate scale: $N_{\rm e}$(H$\alpha$) for D=2000 pc (dotted line), $N_{\rm e}$[S II] (empty circles), and
$N_{\rm e}$[Ar IV] (filled circles).}
\end{figure*}
The radial trend of $T_{\rm e}$[O III] is common at all PA: $\ge$15\,000 K in the
weak innermost regions, rapidly decreasing outward down to 12\,000 K in the densest layers, and
more or less constant further out (uncertain). $T_{\rm e}$[N II] is systematically lower than $T_{\rm e}$[O III]. All this is in quantitative
agreement with the results by Hyung \& Aller (1997) and Pottasch et al. (2001).
$N_{\rm e}$ presents a broad, asymmetric bell-shape profile with peaks up to 12\,000 ($\pm$1000) cm$^{-3}$ close to the minor axis,
and 6000--8000 ($\pm$800) cm$^{-3}$ close to the major axis.
Note that, in spite of the high density peaks - causing strong collisional de-excitation of the [S II] $\lambda$6717 $\rm\AA\/$ line -
we have $N_{\rm e}$[S II]$\simeq$$N_{\rm e}$(H$\alpha$), i. e. the local filling factor is $\epsilon_{\rm l}$$\simeq$1, since
$N_{\rm e}$[S II] $\times \epsilon_{\rm l}^{0.5}\simeq N_{\rm e}$(H$\alpha)$ (Aller 1984, Pottasch 1984,
Osterbrock 1989).
$N_{\rm e}$[Ar IV] is systematically lower than $N_{\rm e}$[S II] adopting the electron impact excitation rates
by Keenan et al. (1997), whereas $N_{\rm e}$[Ar IV]$\simeq N_{\rm e}$[S II] for earlier collisional rates (for example, Aller 1984).
Concerning the halo, $N_{\rm e}$ peaks at the inner edge ($\simeq$1000 - 1500 cm$^{-3}$), and decreases outwards.
Recent density determinations (mean values integrated over the slit) reported in the literature for NGC 6741 are:
$N_{\rm e}$(different ions)=6300 cm$^{-3}$ by Hyung \& Aller (1997), $N_{\rm e}$[S II]=6000 cm$^{-3}$, $N_{\rm e}$[O II]=8500 cm$^{-3}$,
$N_{\rm e}$[Cl III]=9000 cm$^{-3}$ and $N_{\rm e}$[Ar IV]=5500 cm$^{-3}$ by Pottasch et al. (2001), and $N_{\rm e}$[Cl III]=4470 cm$^{-3}$,
$N_{\rm e}$[Ar IV]=6610 cm$^{-3}$ and $N_{\rm e}$[52$\mu$m/88$\mu$m]=2880 cm$^{-3}$ by Liu et al. (2001).
The ionized mass of NGC 6741, obtainable from the observed $N_{\rm e}$ spatial distribution, the H$\beta$ flux, and
the radio flux (Aller 1984, Pottasch 1984, Osterbrock 1989, and Turatto et al. 2002), turns out to be M$_{\rm ion}$$\simeq$0.06($\pm$0.02)
M$_\odot$, i. e. much lower than the mass of the external, neutral cocoon, M$_{\rm neutral}$$\simeq$0.20 ($\pm$0.05) M$_\odot$ (see Sect. 4),
thus confirming the peculiar evolutionary phase of our nebula.
Note that the relatively high $N_{\rm e}$ of the halo implies a quite recent recombination start, no more
than 200 years ago. In fact, the $N_{\rm e}$ depletion rate for
recombination is:
\begin{equation}
{\rm d}N_{\rm e}/{\rm dt} = -\alpha_{\rm B}\times N_{\rm e}\times N({\rm H}^+),
\end{equation}
with $\alpha_{\rm B}$=effective recombination coefficient.
Assuming $N_{\rm e}\simeq N({\rm H}^+$) and $T_{\rm e}$=12\,000 K, and neglecting the recombination delay
due to expansion, we obtain:
\begin{equation}
N_{\rm e}(0) = N_{\rm e}({\rm t})/[1 - 8.2\times10^{-6}\times{\rm t}\times N_{\rm e}{\rm (t)}],
\end{equation}
where $N_{\rm e}$(0) is the initial electron density, and $N_{\rm e}$(t) the electron density
at time t (in years) elapsed from the recombination start. Adopting $N_{\rm e}$(0)$\simeq$10\,000 cm$^{-3}$ (mean of the density peaks
in Fig. 13),
and $N_{\rm e}$(t)$\simeq$600 cm$^{-3}$ (the lowest value in Fig. 13), we have t$\simeq$190 years. A very similar ``recombination age'' is
inferred from the mean H I density of the circum-nebular matter (Sect. 3).
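For completeness, Eq. (14) follows from integrating Eq. (13) with $N_{\rm e}\simeq N({\rm H}^+)$, which gives $1/N_{\rm e}({\rm t})=1/N_{\rm e}(0)+\alpha_{\rm B}\times{\rm t}$; the coefficient 8.2$\times$10$^{-6}$ cm$^3$ yr$^{-1}$ corresponds to $\alpha_{\rm B}\simeq2.6\times10^{-13}$ cm$^3$ s$^{-1}$. A one-line numerical check of the quoted recombination age:
\begin{verbatim}
# Check of the "recombination age" from Eqs. (13)-(14).
ALPHA_B = 8.2e-6            # cm^3 yr^-1, as adopted in Eq. (14)
ne0, net = 10000.0, 600.0   # cm^-3: peak and lowest values of Fig. 13
print((1.0 / net - 1.0 / ne0) / ALPHA_B)   # ~190 yr, as in the text
\end{verbatim}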
All this, combined with the short nebular age ($\simeq$1400 years), fairly agrees with the theoretical evolutionary
times of a 0.66-0.68 M$_\odot$ post-AGB star. According to Bl\"ocker (1995 and references therein), in the interval t$_{\rm SW}$
(end of the superwind ejection) to t$_{\rm SW}$+1100yr the hydrogen-burning star crosses the H-R diagram horizontally
(at log L$_*$/L$_\odot\simeq$4.00)
becoming hotter and hotter (up to logT$_*\simeq$5.20 K). At t$_{\rm SW}$+1100yr the nuclear fuelling becomes insufficient, and
in a century the luminosity
declines to log L$_*$/L$_\odot\simeq$3.70 (whereas T$_*$ reaches its maximum, logT$_*\simeq$5.30 K). At t$_{\rm SW}$+1200yr the
hydrogen-shell burning ceases altogether,
the stellar evolution drastically accelerates till t$_{\rm SW}$+1400yr (log L$_*$/L$_\odot\simeq$2.80; logT$_*\simeq$5.20 K), and later
slows down (for example, at t$_{\rm SW}$+3000yr we have log L$_*$/L$_\odot\simeq$2.40 and logT$_*\simeq$5.15 K).
Summing up: NGC 6741 is close to (or has even passed) the recombination-reionization transition.
\subsection{Radial ionic and mean chemical abundances, and photo-ionization model}
The zvpc of the different emissions furnishes the detailed radial ionization structure of NGC 6741. Since the blurred H$\alpha$ appearance
lowers the accuracy of the
F(H$\alpha)_{\rm zvpc}$ distribution, we adopt $\lambda$5007 $\rm\AA\/$ as reference emission, thus
inferring the radial ionization structure relative to O$^{++}$ (for details, see Benetti et al. 2003). The $\frac{X^{+i}}{O^{++}}$ profiles
of NGC 6741 at PA=15$\degr$ and PA=75$\degr$ (close to the apparent minor and major axes, respectively), presented in Fig. 14,
confirm that the nebula is optically thick at all directions.
Within this scenario, the [O III] rays punching the [N II] skin (Fig. 2) identify a few radial directions along and close to the true
major axis of the nebula with ``low'' $N_{\rm e}$ peaks ($N_{\rm e}$(peak)$\le$4000 cm$^{-3}$ according to the photo-ionization model
presented in the course of the section): in these directions (barely visible in our spectral images at PA=75$\degr$, 95$\degr$ and 115$\degr$;
see Figs. 4 and 10) recombination processes are less efficient, and the gas can be (partially) ionized by the UV stellar flux at larger
distances.
When combined with the results stored in the previous sections, all this indicates that the overall, complex structure of NGC 6741, fully driven by
the fast evolving central star, can be divided into three distinct zones showing quite different physical characteristics: the internal,
high-to-medium excitation nebula directly ionized by the stellar radiation ends in a sharp transition zone at medium-to-low excitation
marking the present edge of the ionization front. It is surrounded by the gas no longer reached by the fading UV flux (i.e. the halo), dominated
by recombination processes. A similar onion-like radial structure (ionization-transition-recombination) is observed in NGC 6565 (Turatto
et al. 2002).
Though the ionization and thermal structure of NGC 6741 turn out to be out of equilibrium, the large $N_{\rm e}$ of the ionized gas (peaks
up to 12\,000 cm$^{-3}$) ensures a quick nebular response to the variable stellar flux. In particular, the delay in the ``transition zone''
is short, the recombination time, t$_{\rm rec}$=1/($\alpha_{\rm B} \times N_{\rm e}$), being a few years for hydrogen (even shorter
for higher ionization species).
In conclusion: NGC 6741 is nearly in equilibrium. This allows us to neglect (to a first approximation) the time dependence, and estimate
the total chemical abundances of the ionized gas with the classical method valid for ``static'' nebulae.
\begin{figure*} \centering
\includegraphics[width=18cm,height=20cm]{h2447f14.eps}
\caption{The $\frac{X^{+i}}{O^{++}}$ radial ionization structure of NGC 6741 in the
zvpc at PA=15$\degr$ (close to the apparent minor axis; upper panel), and PA=75$\degr$ (close to the apparent major axis; lower panel).
Same orientation as Fig. 13.}
\end{figure*}
We follow a two-step procedure:
\\
{\bf I (broad approach)}: according to the critical analyses by Alexander \& Balick (1997) and Perinotto et al. (1998), the mean ionic
abundances are obtained from the line fluxes integrated over the spatial profile and the expansion velocity field; later on, we correct
for the unobserved ionic stages by means of the ionization correcting factors, derived both empirically (Barker
1983, 1986), and from interpolation of theoretical nebular models (Shields et al. 1981, Aller \& Czyzak 1983, Aller 1984, Osterbrock 1989).
The resulting mean chemical abundances are listed in Table 3 (last but one row); the recent estimates reported in the literature are
also given in the Table.
\begin{centering}
\begin{table*}
\caption{NGC 6741: chemical abundances (relative to log H=12)}
\begin{tabular}{lcccccccc}
\hline
\\
Reference& He& C& N& O &Ne&S&Cl&Ar\\
\\
\hline
\\
Hyung \& Aller 1997 (icf)&11.01&8.86&8.38&8.73&8.12&6.89&5.43&6.49\\
\\
Hyung \& Aller 1997 (model B)&11.04&8.90&8.15&8.65&8.00&6.76&5.34&6.54\\
\\
Pottasch et al. 2001&11.04&8.81&8.45&8.82&8.26&7.04&-&6.69\\
\\
Pottasch et al. 2002&11.04&8.56&8.26&8.65&8.18&6.90&5.26&6.51\\
\\
Perinotto et al. 2004b&11.08&- &8.31&8.70&8.16&6.70&- &6.54\\
\\
This paper (icf) &11.04&-&8.20&8.66&8.08&6.78&- &6.54\\
\\
This paper (CLOUDY) &11.04&-&8.28&8.66&8.08&6.90&- &6.54\\
\\
\hline
\end{tabular}
\end{table*}
\end{centering}
{\bf II (refining)}: we apply the photo-ionization code CLOUDY (Ferland et al. 1998) to a model-nebula characterized by the same
distance, gas distribution, mean chemical composition and exciting star parameters of NGC 6741, and then combine the model-line profiles
(convolved for a seeing+guiding of 0.60$\arcsec$) with the line profiles observed in the true nebula.
Let us focus on the zvpc of the S-SW sector at PA=15$\degr$ (close to the apparent minor axis of NGC 6741). The input parameters of the model-nebula
are given in Table 4, whereas Fig. 15 shows the physical conditions in the model-nebula (to be compared with Fig. 13), and the absolute
radial flux distribution of the main emissions in both the model-nebula and NGC 6741.
At first glance, Fig. 15 provides a satisfactory model-nebula vs. true-nebula agreement for the internal, high-to-medium excitation regions and the
transition zone. Further out, the weak recombining halo appears in NGC 6741, whereas no emission at all is present in the static model-nebula.
The same is observed at the other PA.
On closer inspection, some disturbing drifts arise in Fig. 15. Two minor discrepancies are:
\begin{description}
\item[(a)] uncertain flux distribution of
[Ne III] in NGC 6741 (ascribable to the edge location of $\lambda$3968 $\rm\AA\/$ in the original NTT+EMMI frames),
\item[(b)] comparable model-nebula vs. true-nebula flux-shift for [S II] and [S III], indicating a slight underestimate of
sulphur abundance
in Table 3 (last but one row), and in Table 4 (model-nebula). The same occurs for [N II]. The improved chemical abundances of NGC 6741 are
reported in the last row of Table 3.
\end{description}
The two major problems in Fig. 15 concern:
\begin{description}
\item[(1)] the ionization structure of helium: He I $\lambda$5876 $\rm\AA\/$ and He II $\lambda$4686 $\rm\AA\/$ are weaker and
stronger, respectively, in the model-nebula than in the true-nebula;
\item[(2)] the peak emission and whole flux distribution of the highest excitation ions (He II, [Ar IV] and [Ar V])
tend to be systematically closer to the central star in the true-nebula than in the model-nebula.
\end{description}
Ad hoc manipulation of the model-nebula input data lowers - or even cancels - the foregoing problems; for example, we can act on the ionizing
UV distribution, given the strong T$_*$ dependence of both the ionic structure of helium and the radial profile of the
highest-excitation emissions.
In fact, a decrease of the stellar temperature by only 17$\%$ (i.e.
for T$_*\simeq$150\,000 K, which is within the observational error box given in Sect. 6) simultaneously relieves discrepancies (1) and (2).
They almost disappear with a further, modest increase (by 20$\%$) of the matter density in the innermost nebular layers.
Unfortunately, the large number of involved parameters (density profile, chemical abundances, dust, temperature and luminosity of the star) and
assumptions (distance, black-body distribution, seeing convolution), added to the nebular compactness and peculiar evolutionary phase,
greatly complicate the affair, whose definitive solution needs detailed observations at even higher spatial and spectral
resolutions, and is beyond the aims of the paper (as well as beyond the patience of the authors).
\begin{figure*} \centering
\includegraphics[width=18cm,height=21cm]{h2447f15.eps}
\caption{Model-nebula (CLOUDY) vs. true-nebula (NGC 6741, PA=15$\degr$, S-SW sector): physical conditions and
radial ionization structure.
Top panel: physical conditions in the model-nebula (to be compared with Fig. 13). Second panel: absolute flux distribution of
H I $\lambda$6563 $\rm\AA\/$, [N II] $\lambda$6584 $\rm\AA\/$, and the ionic sequence of helium (He I $\lambda$5876 $\rm\AA\/$ and He II
$\lambda$4686 $\rm\AA\/$) in the model-nebula (thin symbols) and the true-nebula (thick symbols). Third panel:
absolute flux distribution for the ionic sequences of oxygen ([O I] $\lambda$6300 $\rm\AA\/$,
[O II] $\lambda$7320 $\rm\AA\/$, and [O III] $\lambda$5007 $\rm\AA\/$) and sulphur ([S II] $\lambda$6717+6731 $\rm\AA\/$, and [S III]
$\lambda$6312 $\rm\AA\/$); same symbols as in the second panel. Bottom panel: absolute flux distribution for [Ne III] $\lambda$3968 $\rm\AA\/$,
and the ionic sequence of argon ([Ar III] $\lambda$7135 $\rm\AA\/$, [Ar IV] $\lambda$4740 $\rm\AA\/$, and [Ar V] $\lambda$7005 $\rm\AA\/$);
same symbols as in the second and third panels.}
\end{figure*}
\begin{table}
\caption{Input parameters for the model nebula (CLOUDY)}
\begin{tabular}{ll}
\hline
\\
Radial density profile & ionized gas = Fig.~13; cf. Sect. 7.1\\
&neutral gas = cf. Sects. 3 and 4\\
\\
Chemical abundances: & \\
~~ C, Cl, K, Ca, Mg, Si & Hyung \& Aller (1997) model B\\
~~ He, N, O, Ne, S, Ar & Table 3 (last but one row)\\
~~ other elements & PN (CLOUDY default)\\
&\\
Dust & PN (CLOUDY default)\\
&\\
Local filling factor & 1.0 \\
&\\
&blackbody distribution\\
Exciting star & T$_*$=170\,000 K \\
& log L$_*$/L$_\odot$= 2.75\\
&\\
Distance& 2.0 kpc\\
\\
\hline
\end{tabular}
\end{table}
\section{The 3-D morpho-kinematical structure}
\begin{figure*}
\centering
\includegraphics[width=15cm]{h2447f16.eps}
\caption{Stereoscopic structure of NGC~6741 for a rotation around the
North-South axis centered on the exciting star. Opaque reconstruction at high flux-cut for $\lambda$4686 $\rm\AA$
of He II, $\lambda$5007 $\rm\AA$ of [O III], and $\lambda$6584 $\rm\AA$ of [N II], as seen from thirteen directions separated by
15$\degr$. The line of view
is given by ($\theta,\psi$), where $\theta$ is the zenith angle and $\psi$ the
azimuthal angle. Each horizontal pair represents a ``direct''
stereoscopic pair (i. e. parallel eyes), and the whole figure provides twelve 3-D views of the nebula in as
many directions, covering a straight angle (for details, see Ragazzoni et al. 2001). The (0,0) images represent the rebuilt-nebula as
seen from Earth (North is up and East to the left). The complete series of nebular movies is shown in the electronic version of the paper
(on-line data).}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=15cm]{h2447f17.eps}
\caption{Same as Fig. 16, but at low flux-cut.
}
\end{figure*}
According to the reconstruction method introduced by Sabbadin et al. (1985, 1987), and deepened by Sabbadin et al. (2000a, b),
we select $\lambda$4686 $\rm\AA$
of He II, $\lambda$5007 $\rm\AA\/$ of [O III] and $\lambda$6584 $\rm\AA\/$ of [N II]
as markers of the high, medium and low-ionization regions of NGC 6741, respectively. The spectral images are:
\\
(a) de-convolved for seeing, spectral resolution and thermal motions (also fine-structure is taken into account for
the recombination line of He II),
\\
(b) de-projected through the relation $V_{\rm exp}$(km s$^{-1}$)= 13$\times$R$\arcsec$ (see Sect. 3; a minimal numerical sketch of this step follows the list),
\\
(c) assembled by means of the 3-D rendering procedure described by Ragazzoni et al. (2001).
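A minimal numerical sketch of the de-projection step (b), mapping the velocity axis into arcsec through the adopted expansion law:
\begin{verbatim}
# Sketch of step (b): velocity-to-radius de-projection, V_exp = 13 x R''.
import numpy as np

def deproject(v_kms):
    """v_kms: radial velocities relative to the systemic one."""
    return np.asarray(v_kms) / 13.0    # arcsec

print(deproject([-26.0, 0.0, 26.0]))   # [-2.  0.  2.]
\end{verbatim}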
Here we present a limited number of frames, corresponding to a partial rotation around
the North--South axis centered on the exciting star (i.e. almost perpendicular to the major axis).
The complete series of nebular movies is available:
- in the electronic version of the paper (on-line data),
- at {\bf http://web.pd.astro.it/sabbadin}, the WEB site dedicated to the 3-D structure of expanding nebulae,
- at {\bf http://www.edpsciences.org}.
Figs. 16 and 17 show the opaque reconstruction of NGC 6741 in He II, [O III] and [N II] at high and low flux-cuts, respectively, for a
rotation of 180$\degr$ through the first Euler angle. The
(0,0) images correspond to the Earth--nebula direction (North is up and East to the left).
An indication of the $N_{\rm e}$ limit in Figs. 16 and 17 can be obtained from the corresponding [O III] absolute flux-cut: log
E($\lambda$5007 $\rm\AA$)= -15.95 and -16.70 erg s$^{-1}$ cm$^{-3}$ (high and low cuts, respectively).
Since O/H=4.6$\times$10$^{-4}$ (Table 3), ${\rm icf(O^{++})}$$\simeq$1.1,
and $N_{\rm e}\simeq$1.15$\times$N(H$^+$), they correspond to
$N_{\rm e}$(high cut)$\simeq$9000
cm$^{-3}$ for Fig. 16, and $N_{\rm e}$(low cut)$\simeq$4000 cm$^{-3}$ for Fig. 17 (assuming $T_{\rm e}$=12\,000 K and
$\epsilon_{\rm l}$=1).
The projection of NGC 6741
for a rotation around the N--S direction (almost perpendicular to the major axis) is presented in Fig. 18 (multi-color images only in the ``free''
electronic version of the paper, for lack of funds), providing
a representative sample of the nebular appearance when changing the line of view. The left panel, (0,0), corresponds to NGC 6741 as
seen from Earth (North is up and East to the left), to be compared with Fig. 1 and the HST image by
Hajian \& Terzian at {\bf http://ad.usno.navy.mil/pne/gallery.html}.
The overall characteristics of NGC 6741 (a hot and faint star at the centre of a dense, inhomogeneous, equatorial torus merging into a closed,
ellipsoidal structure embedded into a large cocoon of almost neutral matter) closely resemble NGC 6565, a compact, recombining
PN projected pole-on (Turatto et al. 2002), and show remarkable affinities with NGC 7027, the PN prototype (Cox et al. 2002, Bains et al.
2003).
Very likely, all three objects represent ``middle-aged'' snapshots of a massive, fast evolving post-AGB star.
\begin{figure*} \centering
\includegraphics[width=17.9cm]{h2447f18.eps}
\caption{Optical appearance of NGC 6741 (multi-color in the electronic version of the paper; blue=He II, green=[O III], red=[N II]) for a rotation
through the N--S axis centered on
the exciting star. The left panel, (0,0), corresponds to the re-built
nebula as seen from the Earth (North is up and East to the left). Recall that
projection($\theta,\psi$)=projection($\theta\pm180\degr,\psi\pm180\degr$). The complete (multi-color) movie is shown in the electronic version
(on-line data).}
\end{figure*}
\section{Discussion}
In spite of the nebular compactness, we have quite successfully applied the tomographic and 3-D analyses
to NGC 6741, covered at high spatial and spectral resolutions with ESO NTT+EMMI.
NGC 6741 is a young (age$\simeq$ 1400 years) PN at a distance of 2000 pc; it consists of a high-excitation, almost-prolate ellipsoid
(0.039 pc $\times$ 0.022 pc $\times$ 0.019 pc; major, intermediate
and minor semi-axes, respectively) denser at the equator than at the poles
($N_{\rm e}$ peaks up to 12\,000 and 7000 cm$^{-3}$, respectively), surrounded by a thin skin at low-excitation (the transition zone,
corresponding to the present ionization front), and embedded
into an extended, spherical (radius$\simeq$0.080 pc), almost neutral recombining halo containing a large fraction of the nebular mass
(M$_{\rm halo}\simeq$0.20 M$_\odot$ versus M$_{\rm ion}\simeq$0.06 M$_\odot$).
The complex ionization structure of NGC 6741 is fully driven by the fast evolution of the central star, a massive (0.66-0.68 M$_\odot$),
hot (log T$_*\simeq$5.23 K) and faint (log L$_*$/L$_\odot\simeq$2.75) post--AGB star fading towards the white-dwarf region (after exhaustion
of the hydrogen--shell nuclear burning).
The ionized gas of NGC 6741 follows the general expansion law $V_{\rm exp}$(km s$^{-1}$)=13$\times$R$\arcsec$, except the innermost,
highest-excitation layers, showing clear evidence of deceleration. This peculiar kinematical behaviour can be qualitatively ascribed to the
luminosity drop of the central star (Sch\"onberner et al. 1997, Steffen et al. 1998, Marigo et al. 2001, Perinotto et al. 2004a): in this
evolutionary phase
the stellar mass-loss quickly decreases, and the falling hot-bubble's pressure no longer balances the
pressure of the ionized gas, so that the contact discontinuity and the adjacent, high-excitation nebular layers accelerate inwards.
This issue is still poorly understood and deserves more attention, the kinematical properties of the innermost
layers being an excellent diagnostic of the star-nebula interaction. A search for more ``decelerated'' candidates,
a high-resolution spectroscopic survey in ionic species at large IP, and a detailed comparison with the current hydro-dynamical simulations
are highly desirable.
Note that the opposite situation (i. e. acceleration) is expected for the external, recombining layers of NGC 6741, whose de-pressuring gives
rise to Rayleigh-Taylor instability at the ionization edge. Though the knotty and filamentary [N II] nebular appearance (Fig. 1, lower panel)
qualitatively agrees with this scenario, its confirmation needs very-high spectral resolution (R$\ge$100\,000) echellograms detecting the
increase of expansion velocity (and turbulence!) at the ionization edge.
A further interesting point concerns the wide radial matter distribution of NGC 6741 (much wider than the $N_{\rm e}$ profile given in
Fig. 13, due to the presence of the dense, almost-neutral recombining halo). This indicates a modest contribution of wind interaction to the
nebular shaping
(it essentially supports the innermost layers, avoiding gas infall), whereas the lion's share is taken by ionization: the thermal pressure
of the ionized gas accelerates the matter outwards and decelerates inwards, creating the density and velocity distributions observed in NGC
6741 (more general comments are given towards the end of the section).
Though the precise radial profile of the matter remains unknown - a large fraction of NGC 6741 being neutral -, a further indication suggests that the
outer radius
of the main nebula extends well beyond the present ionization edge: the absence of any extended, diffuse, faint and roundish
AGB-halo emission in the Palomar Sky Survey plates, in the deep imaging by Schwarz et al. (1992), and in the HST frames. This is confirmed by our spectra:
no kinematical signature of an AGB-halo (i. e. external, faint, un-tilted and un-split emission) appears.
Since the low-density gas of an AGB-halo (usually $N_{\rm e}$=50 to 200 cm$^{-3}$; Chu et al. 1987; Corradi et al. 2003) is little affected by
recombination
(from Eq. (14), $N_{\rm e}$(0)=50 to 200 cm$^{-3}$ implies $N_{\rm e}$(200yr)=47 to 162 cm$^{-3}$; a numerical sketch follows the list below), we infer that:
\begin{description}
\item[-] NGC 6741 never became optically thin to the UV stellar radiation,
\item[-] the total nebular mass, M$_{\rm tot}$, is larger than 0.40 M$_\odot$.
\end{description}
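The mild density decrease quoted before the list can be reproduced in a minimal sketch (Python), under the assumption - made here purely for illustration, Eq. (14) belonging to an earlier section - that Eq. (14) is the standard recombination law $N_{\rm e}(t)=N_{\rm e}(0)/[1+\alpha_{\rm B}\,N_{\rm e}(0)\,t]$, with a case-B coefficient of order $2\times10^{-13}$ cm$^3$ s$^{-1}$ appropriate to $T_{\rm e}\sim10^4$ K:
\begin{verbatim}
YEAR = 3.156e7          # seconds per year
ALPHA_B = 1.9e-13       # assumed case-B coefficient [cm^3 s^-1], T_e ~ 1e4 K

def ne_recomb(ne0, t_yr, alpha=ALPHA_B):
    # dN_e/dt = -alpha N_e^2  ->  N_e(t) = N_e(0) / (1 + alpha N_e(0) t)
    return ne0 / (1.0 + alpha * ne0 * t_yr * YEAR)

for ne0 in (50.0, 200.0):
    print(ne0, "->", round(ne_recomb(ne0, 200.0)))  # ~47 and ~161 cm^-3
\end{verbatim}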
Let us focus on recombination, which appears as the main characteristic (and peculiarity) of our nebula.
As outlined in the previous sections, NGC 6741 is the second PN - of
the six so far analysed with the 3-D methodology - in a
deep recombination phase (the first being NGC 6565, Turatto et al. 2002). Moreover, according to Benetti et al. (2003), a third nebula,
NGC 6818, is at the very beginning of recombination. Such an apparent excess of recombining PNe is a mere consequence of our target
selection criteria (in particular, high surface brightness and richness of ionic species), favouring massive nebulae powered by fast
evolving central stars.
Thanks to the unprecedented accuracy achieved by tomography and 3-D recovery on the manifold facets of the PN research, we have identified a
number of observational indications supporting the recombination hypothesis for NGC 6741; they are:
\\
(1) nebula much brighter than the star (Sects. 1 and 6),
\\
(2) presence of absorbing knots (Sect. 1),
\\
(3) co-existence of ionic species in a large IP range (Sects. 1, 3 and 7.2),
\\
(4) kinematics of the halo (Sect. 3),
\\
(5) high density of the almost-neutral halo (Sects. 3 and 4),
\\
(6) c(${\rm H}\beta)$ variable over the nebula (Sect. 4),
\\
(7) high temperature and low luminosity of the central star (Sect. 6),
\\
(8) T$_{\rm Z}$H I$>$T$_{\rm Z}$He II (Sect. 6).
A minimal, but effective, recombination standard for the whole PN class - in practice, a summary of points (1) and (7) - is given by:
\begin{equation}
{\rm m(V_*)_{obs} + log F(H\beta)_{obs}} - 1.1\times c({\rm H}\beta)>5.0,
\end{equation}
where F${\rm (H\beta)_{obs}}$ is in erg cm$^{-2}$ s$^{-1}$.
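Applying the criterion of Eq. (15) to catalogued targets is straightforward; a minimal sketch (Python), evaluated on a purely hypothetical target (the numbers below are illustrative, not taken from the Catalogue):
\begin{verbatim}
def recombining_candidate(m_v_star, log_f_hbeta, c_hbeta):
    # Eq. (15): m(V*)_obs + log F(Hbeta)_obs - 1.1 c(Hbeta) > 5.0,
    # with F(Hbeta)_obs in erg cm^-2 s^-1
    return m_v_star + log_f_hbeta - 1.1 * c_hbeta > 5.0

# hypothetical target: a faint star behind a bright, reddened nebula
print(recombining_candidate(18.5, -10.8, 1.5))  # 6.05 > 5.0 -> True
\end{verbatim}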
A quick look at the Strasbourg--ESO Catalogue of Galactic PNe (Acker et al. 1992) allowed us to select some fifty targets satisfying
Eq. (15). Half of them (e. g. NGC 2440, Hb 4, NGC 6565, NGC 6537, NGC 6620, NGC 6741, NGC 6886, NGC 6881, NGC 7027, Hu 1-2, NGC 6302, and
NGC 6563) are compact
(sometimes with butterfly-like extensions), at high surface brightness, and represent young recombining PNe ejected and
excited by a massive (M$_*$$\ge$0.64 M$_\odot$) fast evolving star beyond the turn-around point in the H--R diagram. The remaining candidates
are mean-to-low surface brightness, extended (often bipolar) nebulae (e. g. NGC 6445,
NGC 6439, A 55, NGC 6772, A 53, NGC 6818, NGC 6894, NGC 7048, NGC 2438, NGC 2818, and IC 4406),
corresponding to aged, evolved PNe, optically thick (at least in some directions) to the fading UV radiation of a moderately-massive (0.60 to
0.64 M$_\odot$) post-AGB star in the white dwarf cooling sequence.
Further support for the ``massive'' nature of the central stars of PNe satisfying Eq. (15) comes from the following:
\begin{description}
\item [-] most targets belong to the morphological class B of Greig (1971, 1972),
\item [-] a large fraction consists of Type-I PNe (Peimbert 1978; Peimbert \& Torres--Peimbert 1983; Torres--Peimbert \& Peimbert,
1983; Peimbert et al. 1995),
\item [-] the H$_2$ emission at 2.12 $\mu$m is commonly detected (Kastner et al. 1996; Natta \& Hollenbach 1998; Bohigas 2001). Note that,
though NGC 6741 is not listed among the H$_2$-emitters, a deep search promises fruitful results.
\end{description}
The general rule for recombining PNe is that, the higher the surface brightness of the nebula, the larger the mass of the star.
Summing up: recombination represents a not uncommon evolutionary phase in PNe; it becomes unavoidable for massive nebulae ejected and
excited by a massive, fast-evolving post-AGB star (also see Tylenda 1986, Szczerba 1990, Phillips 2000, Marigo et al. 2001, Turatto et al.
2002, Benetti et al. 2003, and Perinotto et al. 2004a).
Let us close this verbose paper with some general considerations on the kinematics of the whole PN class. The comparative analysis performed in
Sect. 3 - rejecting the kinematical results based on spherical symmetry assumption and/or emission profiles of recombination lines integrated
along the slit - enhances the rarity of real nebulae showing a ``U''-shaped expansion velocity field.
We stress that:
\begin{description}
\item[(a)] Wilson's law ($V_{\rm exp}$ $\propto$ radius) is the general rule for PNe (Wilson 1950, Weedman 1968, Sabbadin et al. 2004 and references
therein);
\item[(b)] at present, only BD+30$\degr$3639 and NGC 40 show clear observational
evidences of a soft acceleration in the innermost layers (Bryce \& Mellema 1999; Sabbadin et al. 2000a). Both objects: (I) are
very-low excitation PNe (I([O III] $\lambda$5007 $\rm\AA\/$)$<$ I(H$\beta$)), (II) exhibit an extremely sharp radial matter profile (shell
thickness $\Delta$R/R$\le$0.25), and (III) are powered by a ``cold'' and bright central star of late-WR spectral type. According to Iben et al.
(1983), Bl\"ocker (1995, 2001), and Herwig et al. (1999), they represent ``born-again'' PNe (i. e. the hydrogen-deficient star suffers a
late thermal pulse during the motion towards the white-dwarf region. Due to the pulse-induced convection, hydrogen
mixes and burns on a convective turn-over time scale);
\item[(c)] three more ``born-again'' candidates - A 30, A 58 and A 78 - are faint, extended PNe presenting a central, complex structure of fast,
hydrogen-deficient knots (Jacoby 1979, Pollacco et al. 1992).
\end{description}
All this puts severe constraints on the
evolutionary parameters responsible for PN shape and shaping (in particular, wind interaction). In fact, according to the detailed
radiation-hydrodynamics simulations (Perinotto et al. 2004a, and references therein), a PN is the result of the synergistic effects of ionization
and fast wind on the gas ejected during the
superwind phase, generating a typical double-shell structure (inner ``rim'' + outer ``shell'') characterized by a ``U'' or better ``V''-shaped
expansion velocity field. The double-shell structure may only be destroyed either by recombination of the ``shell'' when the central star
fades, or by overtaking of the ``shell'' by the ``rim''. In both cases a single-shell configuration emerges, whose expansion velocity field is
simple and always increases steadily inwards.
Since wind interaction is mainly responsible for the ``V''-shaped expansion profile in
double-shell model-PNe, as well as for the inward-increasing velocity field in single-shell model-PNe, we infer that {\bf all current
radiation-hydrodynamics
simulations tend to overestimate the dynamical effects of the fast stellar wind on the nebular gas}
(we suspect that the post-AGB mass-loss rates adopted by the theoretical simulations are systematically too high).
Though the same suspicion arose in the study of NGC 6565
(Turatto et al. 2002), NGC 6818 (Benetti et al. 2003) and NGC 7009 (Sabbadin et al. 2004), a quantitative answer (also including: (a) the
temporal evolution of mass-loss in the superwind phase, Sch\"onberner et al. 2005, and (b) the binarity and the possible role of magnetic
fields, Garcia-Segura \& Lopez 2000, Blackman et al. 2001) will come from the detailed
analysis performed on a representative sample of PNe in both hemispheres, covered with ESO NTT+EMMI and TNG+SARG (work in preparation).
\section{Conclusions}
In our opinion, PN observers should cancel the word ``average'' (and respective synonyms) from their scientific vocabulary: no more average
line fluxes, integrated
spectral profiles, mean electron temperature, overall electron density, and so on. In fact, to bridge the historical gap between
theory and practice (Aller 1994), we need detailed, point-to-point flux and velocity measurements in a wide ionization range, combined with a
straightforward, versatile methodology of analysis. To this end, we have applied tomography and 3-D recovery (Sabbadin et al.
2004 and references therein) to long-slit
echellograms of the compact, bright PN NGC 6741, covered at nine PA with ESO NTT+EMMI.
Our long-lasting journey started with the gas kinematics, penetrated into the galactic and circum-nebular absorptions, and explored the
nebular distance, mass and age. Later on, we dealt with the stellar properties, pushed through the nebular physical conditions, ionization
structure and photo-ionization model, went over the stumbling block of image de-projection, and, at last, flew about the multi-color and opaque
reconstructions of the nebula.
The wealth and accuracy of the results here presented:
\begin{description}
\item[-] confirm the unique capacity of tomography and 3-D recovery in extracting the huge amount of physical
information stored in high-resolution spectroscopy (a tacit invitation extended to researchers worldwide),
\item[-] stress once more that properly
squeezed echellograms do represent a peerless tool for deepening our understanding of the kinematics, physical conditions, ionization
structure and evolution of all classes of expanding nebulae (PNe, nova and supernova remnants, shells around Population I Wolf-Rayet stars,
nebulae ejected by symbiotic stars, bubbles surrounding early spectral-type Main Sequence stars etc.).
\end{description}
\begin{acknowledgements} We wish to thank the support staff of the NTT (in particular, Olivier Hainaut) for the excellent assistance during
the observations.
\end{acknowledgements}
\section{Introduction}
Recently, there has been increasing interest in understanding
correlations in quantum lattice systems prompted by applications
in quantum information theory and computation
\cite{schuch2005,cramer2005,bravyi2006,eisert2006}
and the study of complex networks \cite{hastings2004b}.
The questions that arise in the context of quantum information
and computation are sufficiently close to typical problems
in statistical mechanics that the methods developed in one
framework are often relevant in the other.
The bound on
the group velocity in quantum spin dynamics generated by a
short-range Hamiltonian, which was proved by Lieb and Robinson
more than three decades ago \cite{lieb1972}, is a case in point.
For example, as explained in \cite{bravyi2006}, the Lieb-Robinson bound
provides an upper bound on the speed of information transmission
through channels modeled by quantum lattice systems with short-range
interactions.
The Lieb-Robinson bound plays a crucial role in the derivation
of several recent results. For some of these results it was useful,
indeed necessary, to generalize and sharpen these bounds. Several
such improvements have recently appeared \cite{nachtergaele2005,hastings2005}.
In this paper we provide a new proof of the Lieb-Robinson
bound (Theorem \ref{thm:lr}) and other estimates based on a norm-preserving
property of the dynamics (see Lemma \ref{lem:normp}).
We apply this result to give upper bounds on the rate
at which correlations can be established between two separated
regions in the lattice for a general class of models (Theorem
\ref{thm:mare}). Moreover,
our bounds allow us to prove the existence of
the dynamics (Theorem \ref{thm:existence}), in the sense of a strongly
continuous group of
automorphisms on the algebra of quasi-local observables for a
larger class of interactions than was previously known
\cite{bratteli1997,simon1993,matsui1993}.
\subsection{The Set Up} We will be considering quantum spins systems defined over a set of
vertices $\Lambda$ equipped with a metric $d$. A finite dimensional
Hilbert space $\mathcal{H}_x$ is assigned to each vertex $x \in
\Lambda$. In the most
common cases $\Lambda$ is a graph, and the metric is given by the graph
distance, $d(x,y)$, which may be the length of the shortest
path of edges connecting $x$ and $y$ in the graph.
For any finite subset $X \subset \Lambda$, the Hilbert space associated
with $X$ is the tensor product $\mathcal{H}_X = \bigotimes_{x \in X}\mathcal{H}_x$, and
the set of corresponding observables supported in $X$ is denoted by
$\mathcal{A}_X=\mathcal{B}(\mathcal{H}_X)$, the bounded linear operators over $\mathcal{H}_X$. These local
observables form an algebra, and with the natural embedding of
$\mathcal{A}_{X_1}$ in $\mathcal{A}_{X_2}$ for any $X_1 \subset X_2$, one can define the
$C^*$-algebra of all observables, $\mathcal{A}$, as the norm completion of the
union of all local observable algebras $\mathcal{A}_{X}$ for finite $X \subset
\Lambda$.
An interaction is a map $\Phi$ from the set of subsets
of $\Lambda$ to $\mathcal{A}$ with the property that $\Phi(X) \in \mathcal{A}_X$ and $\Phi(X) = \Phi(X)^*$
for all finite $X \subset \Lambda$. A quantum spin model is then defined to be the Hamiltonian,
expressed in terms of its interaction, given by
\begin{equation} \label{eq:defgenham}
H_{\Phi} := \sum_{ X \subset \Lambda} \Phi(X).
\end{equation}
For notational convenience, we will often drop the dependence of
$H_{\Phi}$ on $\Phi$.
The dynamics, or time evolution, of a quantum spin model is
the one-parameter group of automorphisms, $\{\tau_t\}_{t\in\mathbb{R}}$, defined by
\begin{equation}
\tau_t(A)=e^{itH} A e^{-itH}, \quad A \in \mathcal{A},
\end{equation}
which is always well defined for finite sets $\Lambda$. In the context of
infinite systems, a boundedness condition on the interaction is
required in order for the finite-volume dynamics to converge to
a strongly continuous one-parameter group of automorphisms on
$\mathcal{A}$.
To describe the interactions we wish to consider in this article, we
first put a condition on the set $\Lambda$, which is only relevant in
the event that $\Lambda$ is infinite. We assume that there
exists a non-increasing function $F: [0, \infty) \to (0, \infty)$
for which:
\noindent i) $F$ is uniformly integrable over $\Lambda$, i.e.,
\begin{equation} \label{eq:fint}
\| \, F \, \| \, := \, \sup_{x \in \Lambda} \sum_{y \in \Lambda}
F(d(x,y)) \, < \, \infty,
\end{equation}
\noindent and
\vspace{.3cm}
\noindent ii) $F$ satisfies
\begin{equation} \label{eq:intlat}
C \, := \, \sup_{x,y \in \Lambda} \sum_{z \in \Lambda} \frac{F \left( d(x,z) \right) \, F \left( d(z,y)
\right)}{F \left( d(x,y) \right)} \, < \, \infty.
\end{equation}
Given a set $\Lambda$ equipped with a metric $d$, it is easy to
see that if $F$ satisfies i) and ii) above, then for any
$a \geq 0$ the function
\begin{equation}
F_a(x) := e^{-ax} \, F(x),
\end{equation}
also satisfies i) and ii) with $\| F_a \| \leq \| F \|$
and $C_a \leq C$.
As a concrete example, take $\Lambda = \mathbb{Z}^d$ and $d(x,y) =
|x-y|$. In this case, one may take the function $F(x) = (1+x)^{-d - \epsilon}$ for any $\epsilon >0$. Clearly,
(\ref{eq:fint}) is satisfied, and a short calculation demonstrates
that (\ref{eq:intlat}) holds with
\begin{equation}
C \, \leq \, 2^{d + \epsilon + 1} \, \sum_{n \in \mathbb{Z}^d}
\frac{1}{(1+|n|)^{d+ \epsilon}}.
\end{equation}
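The finiteness of $C$ in this example is also easy to check numerically. The following sketch (Python) truncates the sum in (\ref{eq:intlat}) for $\Lambda = \mathbb{Z}$, $d(x,y)=|x-y|$ and $\epsilon = 1$; by translation invariance the supremum reduces to one over $r = |x-y|$, and the sketch is meant only as an illustration of the definition.
\begin{verbatim}
def F(x, d=1, eps=1.0):
    return (1.0 + x) ** (-(d + eps))

def C_of_r(r, zmax=100000):
    # sum_z F(|z|) F(|z - r|) / F(r) on Z, truncated to |z| <= zmax
    s = sum(F(abs(z)) * F(abs(z - r)) for z in range(-zmax, zmax + 1))
    return s / F(r)

for r in (0, 1, 2, 5, 10, 50, 200):
    print(r, C_of_r(r))  # bounded in r (tends to about 4.6), as ii) requires
\end{verbatim}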
We also observe that, although the {\em purely exponential} function
$G(x) = e^{-ax}$, is integrable for $a>0$, i.e., it satisfies i), it does not
satisfy ii). This is evident from the fact that the cardinality of the
set $\{ z \in \mathbb{Z}^d : |x-z| +|z-y| -|x-y| = 0 \}$ is
proportional to $|x-y|$, and therefore, there exists no constant $C$
uniform in $|x-y|$.
To any set $\Lambda$ for which there exists a function $F$ satisfying
i) and ii) above, we define the set $\mathcal{B}_a(
\Lambda)$ to be those interactions $\Phi$ on $\Lambda$ which satisfy
\begin{equation} \label{eq:defnphia}
\| \Phi \|_a \, := \, \sup_{x,y \in \Lambda} \sum_{X \ni x,y} \frac{ \| \Phi(X) \|}{F_a \left(
d(x,y) \right)} \, < \, \infty.
\end{equation}
\section{Lieb-Robinson Estimates and Existence of the Dynamics} \label{sec:lr}
\subsection{Lieb-Robinson Bounds}
We first present a variant of the Lieb-Robinson result which was
first proven in \cite{nachtergaele2005,hastings2005}.
\begin{thm}[Lieb-Robinson Bound]\label{thm:lr}
Let $a \geq 0$ and take $\Lambda_1 \subset \Lambda$ a
finite subset. Denote by $\tau_t^{\Lambda_1}$ the time evolution corresponding
to a Hamiltonian
\begin{equation}
H := \sum_{X \subset \Lambda_1} \Phi(X)
\end{equation}
defined in terms of an interaction $\Phi \in \mathcal{B}_a(\Lambda)$.
There exists a function $g_a: \mathbb{R} \to [0, \infty)$ with the property
that, given any pair of local observables $A \in
\mathcal{A}_{X}$ and $B \in \A_Y$ with $X, Y \subset
\Lambda_1$, one may estimate
\begin{equation} \label{eq:lrbd1}
\left\| [ \tau_t^{\Lambda_1}(A), B ] \right\| \, \leq \, \frac{2 \, \| A \|
\, \|B \|}{C_a} \, g_a(t) \, \sum_{x \in X} \sum_{y \in
Y} F_a \left( d(x,y) \right),
\end{equation}
for any $t \in \mathbb{R}$. Here the function
\begin{equation} \label{eq:gatt}
g_a(t) \, = \, \left\{ \begin{array}{cc}
\left(e^{2 \, \| \Phi \|_a \, C_a \, |t|} - 1 \right) & \mbox{ if }
d(X,Y)>0, \\ e^{2 \, \| \Phi \|_a \, C_a \, |t|} & \mbox{
otherwise.} \end{array} \right.
\end{equation}
\end{thm}
\begin{proof}
Consider the function $f: \mathbb{R} \to \A$ defined by
\begin{equation}
f(t) := [ \tau_t^{\Lambda_1}(A), B].
\end{equation}
Clearly, $f$ satisfies the following differential equation
\begin{equation} \label{eq:lrde}
f'(t) = i \left[ f(t), \tau_t^{\Lambda_1} \left(H_X \right) \right] + i \left[
\tau_t^{\Lambda_1}(A), \left[ \tau_t^{\Lambda_1}(H_X),B \right] \right],
\end{equation}
where we have used the notation
\begin{equation}
H_Y = \sum_{ \stackrel{Z \subset \Lambda_1:}{Z \cap Y \neq \emptyset}} \Phi(Z),
\end{equation}
for any subset $Y \subset \Lambda_1$. The first term in (\ref{eq:lrde}) above
is norm-preserving, and therefore the inequality
\begin{equation} \label{eq:lrineq1}
\| \, [ \tau_t^{\Lambda_1}(A), B] \, \| \, \leq \, \| [A,B] \| \, + \, 2 \| A \| \, \int_0^{|t|} \,
\| \, [ \tau_s^{\Lambda_1}(H_X), B] \, \| \, ds
\end{equation}
follows immediately from Lemma~\ref{lem:normp} and the automorphism property of
$\tau_t^{\Lambda_1}$. If we further define the quantity
\begin{equation}
C_B(X,t) := \sup_{A \in \A_X} \frac{ \| [ \tau_t^{\Lambda_1}(A), B ] \|}{ \|A \|},
\end{equation}
then (\ref{eq:lrineq1}) implies that
\begin{equation} \label{eq:lrineq2}
C_B(X,t) \leq C_B(X,0) + 2 \sum_{ \stackrel{Z \subset \Lambda_1:}{Z \cap
X \neq \emptyset}} \| \Phi(Z) \| \int_0^{|t|} C_B(Z, s) ds.
\end{equation}
Clearly, one has that
\begin{equation}
C_B(Z,0) \, \leq \, 2 \, \| B \| \, \delta_{Y}(Z),
\end{equation}
where $\delta_{Y}(Z) = 0$ if $Z \cap Y = \emptyset$ and $\delta_{Y}(Z)
= 1$ otherwise. Using this fact, one may iterate (\ref{eq:lrineq2}) and find that
\begin{equation} \label{eq:seriesbd}
C_B(X,t) \, \leq \, 2 \| B \| \, \sum_{n=0}^{ \infty}
\frac{(2|t|)^n}{n!} a_n,
\end{equation}
where
\begin{equation}
a_n \, = \, \sum_{\stackrel{Z_1 \subset \Lambda_1:}{Z_1 \cap
X \neq \emptyset}} \sum_{\stackrel{Z_2 \subset \Lambda_1:}{Z_2 \cap
Z_1 \neq \emptyset}} \cdots \sum_{\stackrel{Z_n \subset \Lambda_1:}{Z_n \cap
Z_{n-1} \neq \emptyset}} \prod_{i=1}^n \| \Phi(Z_i) \| \, \delta_Y(Z_n).
\end{equation}
For an interaction $\Phi \in \mathcal{B}_a(\Lambda)$, one may estimate that
\begin{equation}
a_1 \, \leq \, \sum_{x \in X} \sum_{y \in Y} \sum_{Z \ni x,y} \|
\Phi(Z) \| \, \leq \, \| \Phi \|_a \, \sum_{x \in X} \sum_{y \in
Y} F_a \left( d(x,y) \right).
\end{equation}
In addition,
\begin{eqnarray}
a_2 & \leq & \sum_{x \in X} \sum_{y \in Y} \sum_{z \in \Lambda_1}
\sum_{ \stackrel{Z_1 \subset \Lambda_1:}{Z_1 \ni x,z}}
\| \Phi(Z_1) \| \sum_{ \stackrel{Z_2 \subset \Lambda_1:}{Z_2 \ni z,y}} \| \Phi(Z_2) \| \nonumber \\
& \leq & \| \Phi \|_a^2 \, \sum_{x \in X} \sum_{y \in Y} \sum_{z \in
\Lambda} F_a \left( d(x,z) \right) \, F_a \left( d(z,y) \right)
\nonumber \\ & \leq & \| \Phi \|_a^2 \, C_a \, \sum_{x \in X} \sum_{y \in
Y} F_a \left( d(x,y) \right),
\end{eqnarray}
using (\ref{eq:intlat}). With analogous arguments, one finds that
\begin{equation} \label{eq:aneq}
a_n \, \leq \, \| \Phi \|_a^n \, C_a^{n-1} \, \sum_{x \in X} \sum_{y \in
Y} F_a \left( d(x,y) \right).
\end{equation}
Inserting (\ref{eq:aneq}) into (\ref{eq:seriesbd}) we see that
\begin{equation} \label{eq:lrbdd}
C_B(X,t) \leq \frac{2 \, \| B \| }{C_a} \, \mbox{exp} \left[2 \, \| \Phi
\|_a \, C_a \, |t| \right] \sum_{x \in X} \sum_{y \in
Y} F_a \left( d(x,y) \right),
\end{equation}
from which (\ref{eq:lrbd1}) immediately follows.
In the event that $d(X,Y)>0$, one has that $C_B(X,0) = 0$.
For this reason the $n=0$ term vanishes, i.e. $a_0 = 0$, and therefore
the bound derived in (\ref{eq:lrbdd}) above holds with $e^{2 \| \Phi \|_a C_a |t|}$ replaced by
$e^{2 \| \Phi \|_a C_a |t|}-1$.
\end{proof}
We note that, for fixed local observables $A$ and $B$,
the bounds above are independent of the volume $\Lambda_1 \subset \Lambda$.
In the event that $\Phi \in \mathcal{B}_a(\Lambda)$ for some $a>0$, then the
bound in (\ref{eq:lrbd1}) implies that
\begin{equation} \label{eq:vel}
\left\| [ \tau_t^{\Lambda_1}(A), B ] \right\| \, \leq \, \frac{2 \, \| A \|
\, \|B \|}{C_a} \, \| F \| \, \min(|X|,|Y|) \, e^{- a \,\left[
d(X,Y) - \frac{2 \| \Phi \|_a C_a}{a} |t| \right]},
\end{equation}
which corresponds to a velocity of propagation given by
\begin{equation}
V_{\Phi} := \inf_{a>0} \frac{2 \| \Phi \|_a C_a}{a}.
\end{equation}
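As a concrete illustration (not used elsewhere in the argument), the sketch below (Python) estimates this velocity bound for a nearest-neighbour chain on $\mathbb{Z}$ with uniform coupling $\| \Phi(\{x,x+1\}) \| = J$ and $F(x) = (1+x)^{-2}$. For this interaction a short computation gives $\| \Phi \|_a = \max(2J,\, J/F_a(1)) = 4 J e^{a}$ in (\ref{eq:defnphia}), while $C_a$ is evaluated by truncating the sums in (\ref{eq:intlat}).
\begin{verbatim}
import numpy as np

J, eps = 1.0, 1.0
def F_a(x, a):                       # F_a(x) = e^{-ax} (1 + x)^{-(1+eps)}
    x = np.asarray(x, dtype=float)
    return np.exp(-a * x) * (1.0 + x) ** (-(1.0 + eps))

def C_a(a, rmax=100, zmax=2000):     # truncated supremum over r = |x-y|
    z = np.arange(-zmax, zmax + 1)
    return max(float(np.sum(F_a(np.abs(z), a) * F_a(np.abs(z - r), a)))
               / float(F_a(r, a)) for r in range(rmax + 1))

def norm_phi_a(a):
    # x = y gives 2J / F_a(0) = 2J;  |x - y| = 1 gives J / F_a(1) = 4J e^a
    return max(2.0 * J, J / float(F_a(1, a)))

v, a_opt = min((2.0 * norm_phi_a(a) * C_a(a) / a, a)
               for a in np.linspace(0.1, 3.0, 30))
print("velocity bound:", v, "attained at a =", a_opt)
\end{verbatim}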
We further note that the bounds in (\ref{eq:lrbd1}) and
(\ref{eq:vel}) above only require that one of
the observables have finite support; in particular, if $|X|<
\infty$ and $d(X,Y)>0$, then the bounds are valid irrespective of the support of $B$.
One can also view the Lieb-Robinson bound as a means of localizing the
dynamics. Let $\Lambda$ be finite and take $X \subset \Lambda$.
Denote by $X^c = \Lambda \setminus X$. For any
observable $A \in \A_{\Lambda}$ set
\begin{equation}
\langle A \rangle_{X^c} := \int_{\mathcal{U}(X^c)} U^* A U \, \mu(dU),
\end{equation}
where $\mathcal{U}(X^c)$ denotes the group of unitary operators over
the Hilbert space $\mathcal{H}_{X^c}$ and $\mu$ is the associated
normalized Haar measure.
It is easy to see that for any $A \in \A_{\Lambda}$, the quantity $\langle A
\rangle_{X^c} \in \A_X$ and the difference
\begin{equation} \label{eq:acomm}
\langle A \rangle_{X^c} \, - \, A \, = \,
\int_{\mathcal{U}(X^c)} U^* \left[ A, U \right] \, \mu(dU).
\end{equation}
We can now combine these observations with the Lieb-Robinson bounds we
have proven. Let $A \in \A_X$ be a local observable, and choose
$\epsilon \geq 0$, $a>0$, and an interaction $\Phi \in
\mathcal{B}_a(\Lambda)$. We will denote by
\begin{equation}
B_t( \epsilon) \, = \, B(A, t, \epsilon) \,:= \, \left\{ x \in \Lambda \, : \, d(x, X) \, \leq \,
\frac{2 \| \Phi \|_a C_a}{a} \, |t| \, + \, \epsilon \, \right\},
\end{equation}
the ball centered at $X$ with radius as specified above.
For any $U \in \mathcal{U}(B_t^c(\epsilon))$, we clearly have that
\begin{equation}
d \left( X, {\rm supp}(U) \right) \, \geq \, \frac{2 \| \Phi \|_a
C_a}{a} \,|t| \, + \epsilon,
\end{equation}
and therefore, using (\ref{eq:acomm}) above, we immediately
conclude that
\begin{eqnarray}
\left\| \, \tau_t(A) \, - \, \langle \tau_t(A) \rangle_{B_t^c(\epsilon)} \, \right\| &
\leq & \int_{\mathcal{U}(B_t^c(\epsilon))} \left\| \, \left[ \tau_t(A), U
\right] \, \right\| \, \mu(dU) \nonumber \\
& \leq & \frac{2 \, \| A \| \, |X| }{C_a} \, \| F \| \,
e^{- a \epsilon},
\end{eqnarray}
where for the final estimate we used (\ref{eq:vel}).
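The Haar average used above has a simple closed form: by Schur orthogonality, twirling over $\mathcal{U}(X^c)$ projects onto observables acting trivially on $X^c$, so that $\langle A \rangle_{X^c} = \big( \mathrm{Tr}_{X^c} A / \dim \mathcal{H}_{X^c} \big) \otimes \mathbbm{1}_{X^c}$. The following sketch (Python) checks this identity on two qubits by Monte-Carlo sampling of Haar unitaries; it is an illustration only, not part of the argument.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def haar_unitary(d):
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # phase-fixed QR

A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = A + A.conj().T                 # observable on two qubits, X = qubit 1

avg, N = np.zeros((4, 4), complex), 20000
for _ in range(N):                 # Monte-Carlo twirl over U(X^c)
    U = np.kron(np.eye(2), haar_unitary(2))
    avg += U.conj().T @ A @ U
avg /= N

ptr = A.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # Tr over qubit 2
exact = np.kron(ptr / 2.0, np.eye(2))
print(np.max(np.abs(avg - exact)))  # small, shrinking like 1/sqrt(N)
\end{verbatim}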
\subsection{Existence of the Dynamics}
As is demonstrated in \cite{bratteli1997}, one can use a Lieb-Robinson
bound to establish the existence of the dynamics for interactions
$\Phi \in \mathcal{B}_a( \Lambda)$. In the following we consider the thermodynamic limit
over an increasing, exhausting sequence of finite subsets
$\Lambda_n\subset\Lambda$.
\begin{thm}\label{thm:existence}
Let $a\geq 0$, and $\Phi \in \mathcal{B}_a(
\Lambda)$. The dynamics $\{ \tau_t \}_{t \in \mathbb{R}}$ corresponding
to $\Phi$ exists as a strongly continuous, one-parameter group of
automorphisms on $\A$. In particular,
\begin{equation}
\lim_{n\to \infty} \| \tau_t^{\Lambda_n}(A) - \tau_t(A) \| =
0
\end{equation}
for all $A \in \A$. The convergence is uniform for $t$ in compact sets
and independent of the choice of exhausting sequence $ \{ \Lambda_n \}$.
\end{thm}
\begin{proof}
Let $n>m$. Then, $\Lambda_m \subset \Lambda_n$. It is easy to verify that for
any local observable $A \in \A_Y$,
\begin{equation}
\tau_t^{\Lambda_n}(A) - \tau_t^{\Lambda_m}(A) \, = \, \int_0^t \,
\frac{d}{ds} \left( \, \tau_s^{\Lambda_n} \tau_{t-s}^{\Lambda_m}(A) \, \right) \, ds,
\end{equation}
and therefore
\begin{equation} \label{eq:dynbd}
\left\| \tau_t^{\Lambda_n}(A) - \tau_t^{\Lambda_m}(A) \right\| \, \leq
\, \sum_{x \in \Lambda_n \setminus \Lambda_m} \sum_{X \ni x}
\int_0^{|t|} \left\| \, \left[ \, \Phi(X), \, \tau_s^{\Lambda_m}(A) \, \right] \, \right\| \, ds.
\end{equation}
Applying Theorem~\ref{thm:lr}, we see that the right hand side of
(\ref{eq:dynbd}) is bounded from above by
\begin{equation}
2 \, \| A \| \, \int_0^{|t|} g_a(s) ds \, \sum_{x \in \Lambda_n \setminus \Lambda_m} \sum_{X \ni x}
\| \Phi(X) \| \sum_{z \in X} \sum_{y \in Y} F_a \left( d(z,y) \right).
\end{equation}
Rewriting the sum over $X \ni x$ and $z \in X$ as a sum over
$z \in \Lambda$ and $X \ni x,z$, one finds that
\begin{eqnarray}
\left\| \tau_t^{\Lambda_n}(A) - \tau_t^{\Lambda_m}(A) \right\| & \leq
& 2 \, \| A \| \, \| \Phi \|_a \, C_a \, \int_0^{|t|} g_a(s) ds \, \sum_{x
\in \Lambda_n \setminus \Lambda_m} \sum_{z \in Y} F_a \left( d(x,z)
\right) \nonumber \\
& \leq & 2 \, \| A \| \, \| \Phi \|_a \, C_a \, \int_0^{|t|} g_a(s) ds \,
| Y | \, \sup_{z \in Y} \sum_{x
\in \Lambda_n \setminus \Lambda_m} F_a \left( d(x,z) \right).
\end{eqnarray}
As $m,n\to \infty$, the above sum goes to zero. This proves that the sequence
is Cauchy and hence convergent. The remaining claims follow as in
Theorem 6.2.11 of \cite{bratteli1997}.
\end{proof}
\section{Growth of Spatial Correlations}
The goal of this section is to prove Theorem~\ref{thm:mare} below
which bounds the rate at which correlations can accumulate, under the
influence of the dynamics, starting from a product state.
\subsection{The Main Result}
Let $\Omega$ be a normalized product state, i.e.
$\Omega=\bigotimes_{x\in\Lambda}\Omega_x$, where for each $x$,
$\Omega_x$ is a state (not necessarily pure) for the systems
at site $x$. We will denote by $ \langle \cdot \rangle$ the expectation with respect
to $\Omega$, and prove
\begin{thm}\label{thm:mare} Let $a \geq 0$, $\Phi \in \mathcal{B}_a(
\Lambda)$, and take $\Omega$ to be a normalized product state as
described above. Given $X, Y \subset \Lambda$ with $d(X,Y)>0$ and
local observables $A \in \A_X$ and $B \in \A_Y$, one has that
\begin{equation} \label{eq:deccor}
\left| \, \langle \tau_t \left( A B \right) \rangle \, - \langle \tau_t(A) \rangle
\, \langle \tau_t(B) \rangle \, \right| \, \leq \, 4 \, \| A \| \,
\| B \| \, \| F \| \, \left( \, |X| \, + \, |Y| \, \right) \, G_a(t)
\, e^{-a d(X,Y)},
\end{equation}
where
\begin{equation}
G_a(t) \, = \, \frac{ C_a \, + \, \| F_a \|}{C_a} \, \| \Phi \|_a \,
\int_0^{|t|} g_a(s) \, ds,
\end{equation}
and $g_a$ is the function which arises in the Lieb-Robinson estimate
Theorem~\ref{thm:lr}.
\end{thm}
In the event that $a=0$, the bound above does not decay. However, the
estimate (\ref{eq:upbd}) below, which does decay, is valid. Moreover,
a straightforward application of the techniques used below also provides
estimates on the increase of correlations, due to the dynamics,
for non-product states.
We begin by writing the interaction $\Phi$ as the sum of two terms,
one of which decouples the interactions between observables supported
near $X$ and $Y$.
\subsubsection{Decoupling the Interaction:}
Consider two separated local observables, i.e., $A \in \A_X$ and $B \in
\A_Y$ with $d(X,Y)>0$. Let
\begin{equation} \label{eq:sab}
S_{A,B} \, : = \, \left\{ y \in \Lambda \, : \, d(y,X) \, \leq \, \frac{d(X,Y)}{2} \, \right\},
\end{equation}
denote the set of sites within distance $d(X,Y)/2$ of $X$. For
any $\Phi \in \mathcal{B}_a (\Lambda)$, write
\begin{equation} \label{eq:intdec2}
\Phi \, = \, \Phi \left( 1 - \chi_{A,B} \right) \, + \, \Phi
\chi_{A,B} =: \Phi_1 + \Phi_2,
\end{equation}
where for any $Z \subset \Lambda$
\begin{equation} \label{eq:defchi}
\chi_{A,B}(Z) \, := \, \left\{ \begin{array}{cc} 1 & \mbox{if } Z \cap S_{A,B} \neq
\emptyset \mbox{ and } Z \cap S_{A,B}^c \neq \emptyset, \\ 0 & \mbox{otherwise}. \end{array} \right.
\end{equation}
In this case, one has
\begin{lem} \label{lem:intlr}Let $a \geq 0$, $\Phi \in \mathcal{B}_a(\Lambda)$, and
consider any two separated local observables $A \in \A_X$
and $B \in \A_Y$ with $d(X,Y)>0$. Writing $\Phi = \Phi_1 + \Phi_2$,
as in (\ref{eq:intdec2}), one may show that
\begin{equation} \label{eq:declrbd}
\int_0^{|t|} \left\| \, \left[ \, H_2, \,
\tau_s^{(1)}(O) \, \right] \, \right\| \, ds \, \leq \,
2 \, \| O \| \, G_a(t) \, \sum_{o \in {\rm supp}(O)} \sum_{\stackrel{x \in \Lambda:}{2d(x,o) \geq d(X,Y)}} F_a
\left( d(x,o) \right),
\end{equation}
is valid for observables $O \in \{ A,B \}$. One may take
\begin{equation}
G_a(t) \, = \, \frac{ C_a \, + \, \| F_a \|}{C_a} \, \| \Phi \|_a \,
\int_0^{|t|} g_a(s) \, ds,
\end{equation}
where $g_a$ is the function from Theorem~\ref{thm:lr}.
\end{lem}
\begin{proof}
For $O \in \{ A, B \}$ and $s >0$,
\begin{equation}
\left\| \, \left[ \, H_2, \,
\tau_s^{(1)}(O) \, \right] \, \right\| \, \leq \, \sum_{\stackrel{Z \subset \Lambda:}{Z \cap S_{A,B} \neq
\emptyset, Z \cap S_{A,B}^c \neq \emptyset}} \left\| \, \left[ \, \Phi(Z), \,
\tau_s^{(1)}(O) \, \right] \, \right\|,
\end{equation}
as is clear from the definition of $\chi_{A,B}$; see (\ref{eq:defchi}).
Applying Theorem~\ref{thm:lr} to each term above, we find that
\begin{equation}
\left\| \, \left[ \, \Phi(Z), \, \tau_s^{(1)}(O) \, \right] \,
\right\| \, \leq \, \frac{ 2 \, g_a(s) \, \| O \| \, \| \Phi(Z) \|}{C_a} \, \sum_{z \in Z} \sum_{o
\in \mbox{supp}(O)} F_a \left( d(z,o) \right).
\end{equation}
One may estimate the sums which appear above as follows:
\begin{eqnarray}
\sum_{\stackrel{Z \subset \Lambda:}{Z \cap S_{A,B} \neq
\emptyset, Z \cap S^c_{A,B} \neq \emptyset}} \sum_{z \in Z} & = &
\sum_{\stackrel{Z \subset \Lambda:}{Z \cap S_{A,B} \neq
\emptyset, Z \cap S^c_{A,B} \neq \emptyset}} \left(
\sum_{\stackrel{z \in Z:}{z \in S_{A,B}}} +
\sum_{\stackrel{z \in Z:}{z \in S^c_{A,B}}} \right) \nonumber \\
& \leq & \sum_{z \in S_{A,B}} \sum_{x \in S^c_{A,B}} \sum_{Z \ni z,x}
+ \sum_{z \in S^c_{A,B}} \sum_{x \in S_{A,B}} \sum_{Z \ni z,x},
\end{eqnarray}
and therefore, we have the bound
\begin{equation}
\int_0^{|t|} \left\| \, \left[ \, H_2, \,
\tau_s^{(1)}(O) \, \right] \, \right\| \, ds \, \leq \,
\frac{2 \| O \|}{C_a} \, \left( S_1 + S_2 \right) \, \int_0^{|t|} g_a(s) ds ,
\end{equation}
where
\begin{equation}
S_1 \, = \, \sum_{z \in S_{A,B}} \sum_{x \in S^c_{A,B}} \sum_{Z \ni
z,x} \, \| \Phi(Z) \| \, \sum_{o
\in \mbox{supp}(O)} F_a \left( d(z,o) \right)
\end{equation}
and
\begin{equation}
S_2 \, = \, \sum_{z \in S^c_{A,B}} \sum_{x \in S_{A,B}} \sum_{Z \ni
z,x} \, \| \Phi(Z) \| \, \sum_{o
\in \mbox{supp}(O)} F_a \left( d(z,o) \right).
\end{equation}
In the event that the observable $O=A$, one may bound $S_1$ by
\begin{eqnarray}
S_1 & \leq & \| \Phi \|_a \, \sum_{z \in S_{A,B}} \sum_{x \in
S^c_{A,B}} F_a \left( d(z,x) \right) \, \sum_{y
\in X} F_a \left( d(z,y) \right) \\
& \leq & C_a \, \| \Phi \|_a \, \sum_{x \in
S^c_{A,B}} \sum_{y \in X} F_a \left( d(x,y) \right) \nonumber
\end{eqnarray}
and similarly,
\begin{eqnarray}
S_2 & \leq & \| \Phi \|_a \, \sum_{z \in S^c_{A,B}} \sum_{x \in
S_{A,B}} F_a \left( d(z,x) \right) \, \sum_{y
\in X} F_a \left( d(z,y) \right) \\
& \leq & \| F_a \| \, \| \Phi \|_a \, \sum_{z \in
S^c_{A,B}} \sum_{y \in X} F_a \left( d(z,y) \right) \nonumber
\end{eqnarray}
An analogous bound holds in the case that $O=B$.
We have proven (\ref{eq:declrbd}).
\end{proof}
\subsubsection{Proof of Theorem~\ref{thm:mare}:}
To prove Theorem~\ref{thm:mare}, we will first provide an
estimate which measures the effect on the dynamics
resulting from dropping certain interaction terms.
\begin{lem} \label{lem:difham} Let $ \Phi_0 = \Phi_1 + \Phi_2$ be an interaction on
$\Lambda$ for which each of the dynamics $\{ \tau_t^{(i)} \}_{t \in
\mathbb{R}}$, for $i \in \{0,1,2 \}$, exists as a strongly continuous
group of $*$-automorphisms on $\A$. Let $\{ A_t \}_{ t \in \mathbb{R}}$ be a
differentiable family of quasi-local observables on $\A$.
The estimate
\begin{equation} \label{eq:dropint}
\| \, \tau_t^{(0)}(A_t) \, - \, \tau_t^{(1)}(A_t) \, \| \, \leq
\, \int_0^{|t|} \, \left\| [ H_2 , \tau_s^{(1)}(A_s) ] \right\| \, + \,
\left\| \tau_s^{(0)}( \partial_sA_s) - \tau_s^{(1)}( \partial_s A_s)
\right\|
\, ds,
\end{equation}
holds for all $t \in \mathbb{R}$. Here, for each $i \in \{0,1,2\}$, we denote
by $H_i$ the Hamiltonian corresponding to $\Phi_i$.
\end{lem}
\begin{proof}
Define the function $f: \mathbb{R} \to \A$ by
\begin{equation}
f(t) \, := \, \tau_t^{(0)}(A_t) \, - \, \tau_t^{(1)}(A_t).
\end{equation}
A simple calculation shows that $f$ satisfies the following differential equation:
\begin{equation} \label{eq:fder}
f'(t) \, = \, i \left[ H_0, f(t) \right] \, + \, i
\left[ H_2, \tau_t^{(1)}(A_t) \right] \, + \, \tau_t^{(0)}( \partial_t
A_t) - \tau_t^{(1)}( \partial_t A_t),
\end{equation}
subject to the boundary condition $f(0)=0$. The first term appearing
on the right hand side of (\ref{eq:fder}) above is norm preserving, and therefore, Lemma~\ref{lem:normp}
implies that
\begin{equation}
\| \, f(t) \, \| \, \leq
\, \int_0^{|t|} \, \left\| [ H_2, \tau_s^{(1)}(A_s) ] \right\| \, + \,
\left\| \tau_s^{(0)}( \partial_sA_s) - \tau_s^{(1)}( \partial_s A_s)
\right\| \, ds,
\end{equation}
as claimed.
\end{proof}
We will now prove Theorem~\ref{thm:mare}. Denote by $B_t := B - \langle
\tau_t(B) \rangle $, and observe that proving (\ref{eq:deccor}) is
equivalent to bounding $| \langle \tau_t(AB_t) \rangle |$. Write
$\Phi \, = \, \Phi_1 \, + \, \Phi_2$, as is done in (\ref{eq:intdec2}).
One easily sees that $\Phi_1$ decouples $A$ from $B$, i.e.,
\begin{equation} \label{eq:fac}
\langle \, \tau_t^{(1)}(AB) \, \rangle \, = \, \langle \,
\tau_t^{(1)}(A) \, \rangle \, \langle \, \tau_t^{(1)}(B) \, \rangle.
\end{equation}
Here, again, we have denoted by $\tau_t^{(1)}$ the time evolution corresponding to
$\Phi_1$. It is clear that
\begin{eqnarray} \label{eq:corbd1}
\left| \langle \tau_t(AB_t) \rangle \right| & \leq & \left| \langle
\tau_t^{(1)}(AB_t) \rangle \right| \, + \, \left| \langle
\tau_t(AB_t) \, - \, \tau_t^{(1)}(AB_t) \rangle \right| \\
& \leq & \| A \| \, \left\| \tau_t(B) - \tau_t^{(1)}(B) \right\| \, +
\, \left\| \tau_t(AB_t) - \tau_t^{(1)}(AB_t) \right\|. \nonumber
\end{eqnarray}
Moreover, the second term on the right hand side above can be further estimated by
\begin{equation} \label{eq:corbd2}
\left\| \tau_t(AB_t) - \tau_t^{(1)}(AB_t) \right\| \, \leq \, 2 \| B
\| \, \left\| \tau_t(A) - \tau_t^{(1)}(A) \right\| \, + \, \| A \|
\, \left\| \tau_t(B_t) - \tau_t^{(1)}(B_t) \right\|.
\end{equation}
Applying Lemma~\ref{lem:difham} to the bounds we have found in (\ref{eq:corbd1}) and
(\ref{eq:corbd2}) yields
\begin{equation}
\left| \langle \tau_t(AB_t) \rangle \right| \, \leq \, 2 \| A \| \,
\int_0^{|t|} \left\| \left[ H_2, \tau_s^{(1)}(B) \right] \right\| \,
ds \, + \, 2 \| B \| \,
\int_0^{|t|} \left\| \left[ H_2, \tau_s^{(1)}(A) \right] \right\| \, ds.
\end{equation}
In fact, we are only using (\ref{eq:dropint}) in trivial situations
where the second term, i.e. $\tau_s( \partial_s A_s) - \tau_s^{(1)}(
\partial_s A_s)$, is identically zero. Finally, using Lemma~\ref{lem:intlr}, we
find an upper bound on $| \langle \tau_t(AB_t) \rangle |$ of the form
\begin{equation} \label{eq:upbd}
4 \, \| A \| \, \| B \| \, G_a(t) \left( \sum_{x \in X} \sum_{ \stackrel{y \in
\Lambda:}{2d(x,y) \geq d(X,Y)}} F_a \left( d(x,y) \right) \, + \,
\sum_{y \in Y} \sum_{ \stackrel{x \in \Lambda:}{2d(x,y) \geq d(X,Y)}} F_a \left( d(x,y) \right) \,\right).
\end{equation}
Theorem~\ref{thm:mare} readily follows from (\ref{eq:upbd}) above.
\setcounter{equation}{0}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\begin{appendix}
\section{}
In this appendix, we recall a basic lemma about the growth of the solutions
of first order, inhomogeneous differential equations.
Let $\mathcal{B}$ be a Banach space. For each $t \in \mathbb{R}$,
let $A( t) : \mathcal{B} \to \mathcal{B}$ be a linear
operator, and denote by $X( t)$ the solution of the
differential equation
\begin{equation} \label{eq:fode}
\partial_{t} X( t) \, = \, A( t) \, X( t)
\end{equation}
with boundary condition $X(0) = x_0 \in \mathcal{B}$. We say that the
family of operators $A(t)$ is {\em norm-preserving} if for
every $x_0 \in \mathcal{B}$, the mapping $\gamma_{t} :
\mathcal{B} \to \mathcal{B}$ which associates $x_0 \to X( t)$,
i.e., $\gamma_{t}(x_0) = X( t)$, satisfies
\begin{equation} \label{eq:normp}
\| \, \gamma_{t}(x_0) \, \| \, = \, \| \, x_0 \, \| \quad \mbox{for all } t \in
\mathbb{R}.
\end{equation}
Some obvious examples are the case where $\mathcal{B}$ is a Hilbert space
and $A(t)$ is anti-hermitian for each $t$, or when $\mathcal{B}$
is a $*$-algebra of operators on a Hilbert space with a spectral norm and,
for each $t$, $A(t)$ is a derivation commuting with the $*$-operation.
\begin{lem} \label{lem:normp} Let $A( t)$, for $t \in \mathbb{R}$, be a family of
norm-preserving operators on some Banach space $\mathcal{B}$. For any
function $B : \mathbb{R} \to \mathcal{B}$, the solution of
\begin{equation} \label{eq:inhom}
\partial_{t} Y( t) \, = \, A( t) Y( t) \, + \, B( t),
\end{equation}
with boundary condition $Y(0) = y_0$, satisfies the bound
\begin{equation} \label{eq:yest}
\| \, Y( t) \, - \, \gamma_{t}(y_0) \, \| \, \leq \, \int_0^{ t} \| \, B( t')
\, \| \, d t' .
\end{equation}
\end{lem}
\begin{proof}
For any $t \in \mathbb{R}$, let $X( t)$ be the solution of
\begin{equation} \label{eq:fode1}
\partial_{t} X( t) \, = \, A( t) \, X( t)
\end{equation}
with boundary condition $X(0) = x_0$, and let $\gamma_{t}$ be the
linear mapping which takes $x_0$ to $X( t)$. By variation of constants,
the solution of the inhomogeneous equation (\ref{eq:inhom}) may be
expressed as
\begin{equation} \label{eq:ysol}
Y( t) \, = \, \gamma_{t} \left( \, y_0 \, + \, \int_0^{
t} ( \gamma_s)^{-1} \left( B(s) \right) ds \, \right).
\end{equation}
The estimate (\ref{eq:yest}) follows from (\ref{eq:ysol}) as $A( t)$ is
norm preserving.
\end{proof}
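Lemma \ref{lem:normp} can also be tested numerically. The sketch below (Python) takes $\mathcal{B} = \mathbb{C}^d$ with an anti-Hermitian $A(t)$ (so that the homogeneous flow is norm-preserving), a concrete inhomogeneity $B(t)$, integrates (\ref{eq:inhom}) with a fourth-order Runge-Kutta step, and compares $\| Y(t) - \gamma_t(y_0) \|$ with $\int_0^t \| B(t') \| dt'$; all concrete choices are illustrative.
\begin{verbatim}
import numpy as np

d, T, steps = 4, 2.0, 4000
rng = np.random.default_rng(1)
H0 = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H0 = H0 + H0.conj().T                          # Hermitian

def A(t): return -1j * (1.0 + np.sin(t)) * H0  # anti-Hermitian generator
b0 = rng.normal(size=d) + 1j * rng.normal(size=d)
def B(t): return np.cos(3.0 * t) * b0          # inhomogeneity

def rk4(f, y, t, h):
    k1 = f(t, y); k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2); k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

y0 = rng.normal(size=d) + 1j * rng.normal(size=d)
X, Y, intB, h = y0.copy(), y0.copy(), 0.0, T / steps
for n in range(steps):
    t = n * h
    X = rk4(lambda s, v: A(s) @ v, X, t, h)         # gamma_t(y_0)
    Y = rk4(lambda s, v: A(s) @ v + B(s), Y, t, h)  # solution of (A.3)
    intB += np.linalg.norm(B(t + h/2)) * h          # int_0^t ||B|| dt'
print(np.linalg.norm(Y - X), "<=", intB)            # the bound (A.4)
\end{verbatim}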
\end{appendix}
\subsection*{Acknowledgements}
This article is based on work supported by the U.S. National Science
Foundation under Grant \# DMS-0303316. Y.O. is supported by the Japan
Society for the Promotion of Science.
We would like to acknowledge Frank Verstraete for
posing the questions, the answers to which form the basis
of this short note.
\section{Introduction}\label{introduction}
Being the fundamental resource in a wide range of situations in
quantum information processing, entanglement is considered as a
`standard currency' for quantum information tasks, and it is highly
desirable to know which states of a given system exhibit a high or
maximal amount of entanglement \cite{Horodecki09}. When it comes to
multipartite states this question becomes complicated. There are
different \emph{types} of entanglement \cite{Dur00}, alongside which
there are many different ways to quantify entanglement, each of which
may capture a different desirable quality of a state as a resource.
In this work, the geometric measure of entanglement, a distance-like
entanglement measure \cite{Shimony95,Wei04}, will be investigated to
analyze maximally entangled multipartite states. There are several
incentives to consider this particular measure. Firstly, it has a
broad range of operational interpretations: for example, in local
state discrimination \cite{Hayashi06}, additivity of channel
capacities \cite{Werner02} and recently for the classification of
states as resources for measurement-based quantum computation
(MBQC)\cite{Gross09,Nest07,Mora10}. Another advantage of the
geometric measure is that, while other known entanglement measures are
notoriously difficult to compute from their variational definitions,
the definition of the geometric measure allows for a comparatively
easy calculation. Furthermore, the geometric measure can be linked to
other distance-like entanglement measures, such as the robustness of
entanglement and the relative entropy of entanglement
\cite{Wei04,Hayashi08,Cavalcanti06}. The function also has
applications in signal processing, particularly in the fields of
multi-way data analysis, high order statistics and independent
component analysis (ICA), where it is known under the name \emph{rank
one approximation to high order tensors}
\cite{Lathauwer00,Zhang01,Kofidis02,Wang09,Ni07,Silva08}.
We focus our attention on permutation-symmetric states -- that is,
states that are invariant when swapping any pair of particles. This
class of states has been useful for different quantum information
tasks (for example, in leader election \cite{Dhondt06}). It includes
the Greenberger-Horne-Zeilinger (GHZ) states \cite{Greenberger90}, W
states and Dicke states \cite{Dicke54}, and also occurs in a variety
of situations in many-body physics. There has been lots of activity
recently in implementing these states experimentally
\cite{Prevedel09,Wieczorek09}. Furthermore, the symmetric properties
make them amenable to the analysis of entanglement properties
\cite{Hayashi08,Markham10,Toth09,Hubener09,Bastin09,Mathonet10}.
An important tool in this work will be the Majorana representation
\cite{Majorana32}, a generalization of the Bloch sphere representation
of single qubits, where a permutation-sym\-me\-tric state of $n$
qubits is unambiguously mapped to $n$ points on the surface of the
unit sphere. Recently, the Majorana representation has proved very
useful in analyzing entanglement properties of symmetric states
\cite{Bastin09,Mathonet10,Markham10}. In particular, the geometric
measure of entanglement has a natural interpretation, and the Majorana
representation facilitates exploitation of further symmetries to
characterize entanglement \cite{Markham10}. For example, the
two-qubit symmetric Bell state $| \psi^{+} \rangle = 1 / \sqrt{2}
\left( |01\rangle + |10\rangle \right)$ is represented by an antipodal
pair of points: the north pole $|0\rangle$ and the south pole
$|1\rangle$. Roughly speaking, symmetric states with a high degree of
entanglement are represented by point distributions that are well
spread out over the sphere. We will use this idea along with other
symmetry arguments to look for the most entangled states. Along the
way we will compare this problem to other optimization problems of
point distributions on the sphere.
The paper is organized as follows. In \sect{geometric_measure}, the
definition and properties of the geometric measure of entanglement are
briefly recapitulated, which is followed by an introduction and
discussion of symmetric states in \sect{positive_and_symm}. In
\sect{majorana_representation}, the Majorana representation of
symmetric states is introduced. The problem of finding the maximally
entangled state is phrased in this manner, and is compared to two
other point distribution problems on $S^2$: T\'{o}th's problem and
Thomson's problem. In \sect{analytic}, some theoretical results for
symmetric states are derived with the help of the intuitive idea of
the Majorana representation. The numerically determined maximally
entangled symmetric states of up to 12 qubits are presented in
\sect{maximally_entangled_symmetric_states}. Our results are
discussed in \sect{discussion}, and \sect{conclusion} contains the
conclusion.
\section{The geometric measure of entanglement}
\label{geometric_measure}
The geometric measure of entanglement is a distance-like entanglement
measure for pure multipartite states that assesses the entanglement of
a state in terms of its remoteness from the set of separable states
\cite{Vedral98}. It is defined as the maximal overlap of a given pure
state with all pure product states \cite{Shimony95,Wei03,Barnum01} and
is also defined as the geodesic distance with respect to the
Fubini-Study metric \cite{Brody01}. Here we present it in the inverse
logarithmic form of the maximal overlap, which is more convenient in
relation to other entanglement measures:
\begin{equation}\label{geo_1}
E_{\text{G}}(| \psi \rangle ) = \min_{| \lambda \rangle \in
\mathcal{H}_{\text{SEP}} } \log_2 \left(
\frac{1}{ \vert \langle \lambda | \psi \rangle \vert^2 }
\right) \enspace .
\end{equation}
$E_{\text{G}}$ is non-negative and zero iff $| \psi \rangle$ is a
product state. We denote a product state closest to $| \psi \rangle$
by $| \Lambda_{\psi} \rangle \in \mathcal{H}_{\text{SEP}}$, and it
should be noted that a given $| \psi \rangle$ can have more than one
closest product state. Indeed, we will usually deal with entangled
states that have several closest product states. Due to its
compactness, the set of normalized pure states of a
finite-dimensional system (e.g. $n$ qudits) always contains at least
one state $| \Psi \rangle$ with maximal entanglement, and to each such
state relates at least one closest product state. The task of
determining maximal entanglement can be formulated as a
max-min problem, with the two extrema not necessarily being
unique:
\begin{equation}\label{geo_meas}
\begin{split}
E_{\text{G}}^{\text{max}} & = \max_{| \psi \rangle \in
\mathcal{H}} \min_{| \lambda \rangle \in
\mathcal{H}_{\text{SEP}} } \log_2 \left( \frac{1}{ \vert
\langle \lambda | \psi \rangle \vert^2 } \right) \enspace , \\
& = \max_{| \psi \rangle \in \mathcal{H}} \log_2 \left(
\frac{1}{ \vert \langle \Lambda_{\psi} | \psi \rangle \vert^2 }
\right) \enspace , \\
& = \log_2 \left( \frac{1}{ \vert \langle \Lambda_{\Psi} | \Psi
\rangle \vert^2 } \right) \enspace .
\end{split}
\end{equation}
It is often more convenient to define $G(| \psi \rangle ) = \max_{|
\lambda \rangle } \vert \langle \lambda | \psi \rangle \vert \, ,$
so that we obtain $E_{\text{G}} = \log_2 ( 1/ {G}^2 )$. Because of
the monotonicity of this relationship, the task of finding the
maximally entangled state is equivalent to solving the min-max problem
\begin{equation}\label{minmax}
\min_{| \psi \rangle \in \mathcal{H}}
G(| \psi \rangle ) =
\min_{| \psi \rangle \in \mathcal{H}}
\max_{| \lambda \rangle \in \mathcal{H}_{\text{SEP}} }
\vert \langle \lambda | \psi \rangle \vert \enspace .
\end{equation}
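For small systems the inner maximization can be carried out directly. The sketch below (Python) does so for the three-qubit W state, restricting the scan to symmetric product states $| \sigma \rangle^{\otimes 3}$ (a restriction justified by the result of \cite{Hubener09} recalled in Sect.~\ref{positive_and_symm}); it reproduces $G = 2/3$ and $E_{\text{G}} = \log_2 (9/4) \approx 1.17$.
\begin{verbatim}
import numpy as np

W = np.zeros(8); W[[1, 2, 4]] = 1.0 / np.sqrt(3.0)  # (|001>+|010>+|100>)/sqrt3

def overlap(theta):
    # symmetric product state |sigma>^{x3}; W is positive, so by the
    # positivity lemma below the phase of sigma can be set to zero
    sigma = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
    lam = np.kron(np.kron(sigma, sigma), sigma)
    return abs(np.dot(lam, W))

G = max(overlap(t) for t in np.linspace(0.0, np.pi, 2001))
print(G, np.log2(1.0 / G**2))    # 2/3 and log2(9/4) ~ 1.17
\end{verbatim}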
As mentioned in the Introduction, there are several advantages of this
measure of entanglement. First of all, it has several operational
interpretations. It has implications for channel capacity
\cite{Werner02} and can also be used to give conditions as to when
states are useful resources for MBQC \cite{Gross09,Nest07,Mora10}. If
the entanglement of a set of resource states scales anything below
logarithmically with the number of parties, it cannot be an efficient
resource for deterministic universal MBQC \cite{Nest07}. On the other
hand, somewhat surprisingly, if the entanglement is too large, it is
also not a good resource for MBQC. If the geometric measure of
entanglement of an $n$ qubit system scales larger than $n - \delta$
(where $\delta$ is some constant), then such a computation can be
simulated efficiently computationally \cite{Gross09}. Of course, we
should also note that there are many other quantum information tasks
that are not restricted by such requirements. For example, the
$n$-qubit GHZ state can be considered the most non-local with respect
to all possible two-output, two-setting Bell inequalities
\cite{Werner01}, whereas the geometric measure is only $E_{\text{G}}
(|\text{GHZ}\rangle)=1$, independent of $n$. The $n$-qubit W state, on
the other hand, is the optimal state for leader election
\cite{Dhondt06} with entanglement $E_{\text{G}} (| \text{W} \rangle )
= \log_2 (n/(n-1))^{n-1}$. Indeed, for local state discrimination,
the role of entanglement in blocking the ability to access information
locally is strictly monotonic -- the higher the geometric measure of
entanglement, the harder it is to access information locally
\cite{Hayashi06}.
In addition, the geometric measure $E_{\text{G}}$ has close links to
other distance-like entanglement measures, namely the (global)
robustness of entanglement $R$ \cite{Vidal99} and the relative entropy
of entanglement $E_{\text{R}}$ \cite{Vedral98}. Between these
measures the inequalities $E_{\text{G}} \leq E_{\text{R}} \leq \log_2
(1 + R)$ hold for all states \cite{Wei04,Hayashi08,Cavalcanti06}, and
they turn into equalities for stabilizer states (e.g. GHZ state),
Dicke states (e.g. W state) and permutation-antisymmetric basis states
\cite{Hayashi06,Hayashi08,Markham07}.
An upper bound for the entanglement of pure $n$ qubit states is given
in \cite{Jung08} as
\begin{equation}
E_{\text{G}} ( | \psi \rangle ) \leq n-1 \enspace .
\end{equation}
We can see that this allows for states to be more entangled than is
useful, e.g. for MBQC. Indeed, although no states of more than two
qubits reach this bound \cite{Jung08}, most states of $n$ qubits have
entanglement $E_{\text{G}} > n - 2 \log_2 (n) - 3$ \cite{Gross09}. In
the next section, we will see that symmetric states have generally
lower entanglement.
We can also make a general statement for positive states that will
help us in calculating entanglement for this smaller class of
states. For finite-dimensional systems, a general quantum state can be
written in the form $| \psi \rangle = \sum_i a_i | i \rangle$ with an
orthonormalized basis $\{ | i \rangle \}$ and complex coefficients
$a_i \in \mathbb{C}$. We will call $| \psi \rangle$ a \emph{real
state} if -- for a given basis $\{ | i \rangle \}$ -- all
coefficients are real ($a_i \in \mathbb{R}$), and likewise call $|
\psi \rangle$ a \emph{positive state} if the coefficients are all
positive ($a_i \geq 0$). A \emph{computational basis} is one made up
of tensors of local bases.
\begin{lem}\label{lem_positive}
Every state $| \psi \rangle$ of a finite-dimensional system that is
positive with respect to some computational basis has at least one
positive closest product state $| \Lambda_{\psi} \rangle$.
\end{lem}
\begin{proof}
Picking any computational basis in which the coefficients of $| \psi
\rangle$ are all positive, we denote the basis of subsystem $j$ with
$\{ | i_{j} \rangle \}$, and can write the state as $| \psi \rangle
= \sum_{\vec{i}} a_{\vec{i}} \, | i_1 \rangle \cdots | i_n \rangle$,
with $\vec{i} = ( i_1 , \dots , i_n)$ and $a_{\vec{i}} \geq 0$. A
closest product state of $| \psi \rangle$ can be written as $|
\Lambda_{\psi} \rangle = \bigotimes_{j} | \sigma_{j} \rangle$, where
$| \sigma_{j} \rangle = \sum_{i_j} b^{j}_{i_j} | i_j \rangle$ (with
$b^{j}_{i_j} \in \mathbb{C}$) is the state of subsystem $j$. Now
define a new product state with positive coefficients as $|
\Lambda_{\psi} ' \rangle = \bigotimes_{j} | \sigma_{j} ' \rangle$,
where $| \sigma_{j} ' \rangle = \sum_{i_j} | b^{j}_{i_j} | \, | i_j
\rangle$. Because of $|\langle \psi | \Lambda_{\psi} ' \rangle| =
\sum_{\vec{i}} a_{\vec{i}} \prod_{j} | b^{j}_{i_j} | \geq \left|
\sum_{\vec{i}} a_{\vec{i}} \prod_{j} b^{j}_{i_j} \right| =
|\langle \psi | \Lambda_{\psi} \rangle|$, the positive state $|
\Lambda_{\psi} ' \rangle$ is a closest product state of $| \psi
\rangle$.
\end{proof}
This lemma, which was also shown in \cite{Zhu10}, asserts that
positive states have at least one positive closest product state, but
there can nevertheless exist other non-positive closest product
states. A statement analogous to Lemma \ref{lem_positive} does not
hold for real states, and it is easy to find examples of real states
that have no real closest product state.
From now on we will simply denote entanglement instead of referring to
the geometric measure of entanglement. It must be kept in mind,
however, that the maximally entangled state of a multipartite system
subtly depends on the chosen entanglement measure \cite{Plenio07}.
\section{Permutation symmetric states}\label{positive_and_symm}
In general it is very difficult to find the closest product state of a
given quantum state, due to the large amount of parameters in $|
\Lambda \rangle$. The problem will be considerably simplified,
however, when considering permutation-symmetric states. In
experiments with many qubits, it is often not possible to access
single qubits individually, necessitating a fully symmetrical
treatment of the initial state and the system dynamics \cite{Toth07}.
The ground state of the Lipkin-Meshkov-Glick model was found to be
permutation-invariant, and its entanglement was quantified in term of
the geometric measure and its distance-related cousins \cite{Orus08}.
For these reasons it is worth analyzing various theoretical and
experimental aspects of the entanglement of symmetric states, such as
entanglement witnesses or experimental setups
\cite{Korbicz05,Korbicz06}.
The symmetric basis states of a system of $n$ qubits are given by the
Dicke states, the simultaneous eigenstates of the total angular
momentum $J$ and its $z$-component $J_z$
\cite{Dicke54,Stockton03,Toth07}. They are mathematically expressed
as the sum of all permutations of computational basis states with
$n-k$ qubits being $|0\rangle$ and $k$ being $|1\rangle$.
\begin{equation}\label{dicke_def}
| S_{n,k} \rangle = {\binom{n}{k}}^{- 1/2} \sum_{\text{perm}} \;
\underbrace{ | 0 \rangle | 0 \rangle \cdots | 0 \rangle }_{n-k}
\underbrace{ | 1 \rangle | 1 \rangle \cdots | 1 \rangle }_{k}
\enspace ,
\end{equation}
with $0 \leq k \leq n$, and where we omitted the tensor symbols that
mediate between the $n$ single qubit spaces. The Dicke states
constitute an orthonormalized set of basis vectors for the symmetric
Hilbert space $\mathcal{H}_{\text{s}}$. The notation $| S_{n,k}
\rangle$ will sometimes be abbreviated as $| S_{k} \rangle$ when the
number of qubits is clear.
Recently, there has been a very active investigation into the
conjecture that the closest product state of a symmetric state is
symmetric itself \cite{Wei04,Hayashi08,Hubener09}. A proof of this
seemingly straightforward statement is far from trivial, and after
some special cases were proved \cite{Hayashi09,Wei10}, H\"{u}bener
\textit{et al. } \cite{Hubener09} were able to extend this result to the general
case. They also showed that, for $n \geq 3$ qudits (general quantum
$d$-level systems) the closest product state of a symmetric state is
\emph{necessarily} symmetric. This result greatly reduces the
complexity of finding the closest product state and thus the
entanglement of a symmetric state.
A general pure symmetric state of $n$ qubits is a linear combination
of the $n+1$ symmetric basis states; its defining property is that it
remains invariant under the permutation of any two of its subsystems.
A closest product state of $| S_{n,k} \rangle$ is \cite{Hayashi08}
\begin{equation}\label{dicke_cs}
| \Lambda \rangle = \Big( \sqrt{ \tfra{n-k}{n} } \, |0\rangle +
\sqrt{ \tfra{k}{n} } \, |1\rangle \Big)^{\otimes n} \enspace ,
\end{equation}
i.e. a tensor product of $n$ identical single qubit states. From
this, the amount of entanglement is found to be
\begin{equation}\label{dicke_ent}
E_{\text{G}} ( | S_{n,k} \rangle ) = \log_2 \left(
\frac{ \big( \frac{n}{k} \big)^k \big( \frac{n}{n-k}
\big)^{n-k}} {\binom{n}{k}} \right) \enspace .
\end{equation}
This formula straightforwardly gives the maximally entangled Dicke
state. For even $n$ it is $| S_{n,n/2} \rangle$ and for odd $n$ the
two equivalent states $| S_{n,(n+1)/2} \rangle$ and $| S_{n,(n-1)/2}
\rangle$. In general, however, the maximally entangled symmetric state
of $n$ qubits is a superposition of Dicke states. Nevertheless,
\eq{dicke_ent} can be used as a lower bound to the maximal
entanglement of symmetric states. This bound can be approximated by
the Stirling formula for large $n$ as $E_{\text{G}} \geq \log_2
\sqrt{n \pi/2}$.
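As an aside, \eq{dicke_ent} is easily evaluated numerically. The
following minimal Python sketch (the function name is ours and purely
illustrative) prints the entanglement of the balanced Dicke states
alongside the Stirling estimate $\log_2 \sqrt{n \pi / 2}$:
\begin{verbatim}
from math import comb, log2, pi, sqrt

def dicke_entanglement(n, k):
    # E_G(|S_{n,k}>) from the closed formula above
    if k == 0 or k == n:
        return 0.0  # |S_0> and |S_n> are product states
    return log2((n / k)**k * (n / (n - k))**(n - k) / comb(n, k))

for n in (4, 10, 100, 1000):
    print(n, dicke_entanglement(n, n // 2), log2(sqrt(n * pi / 2)))
\end{verbatim}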
An upper bound to the geometric measure for symmetric $n$ qubit states
can be easily found from the well-known decomposition of the identity
on the symmetric subspace (denoted $\mathbbm{1}_{\text{Symm}}$, see
e.g. \cite{Renner}),
\begin{equation}
\int_{\mathcal{S}(\mathcal{H})}
(|\theta\rangle\langle\theta|)^{\otimes n} \, \mathrm{d}\omega(\theta)
= \frac{1}{n+1}\mathbbm{1}_{\text{Symm}} \enspace ,
\end{equation}
where $\omega$ denotes the uniform probability measure over the unit
sphere $\mathcal{S}(\mathcal{H})$ on Hilbert space $\mathcal{H}$.
Taking the trace of both sides with $|\psi\rangle\langle\psi|$ of a
normalized symmetric state shows that the sphere average of $|\langle
\theta |^{\otimes n} | \psi \rangle |^2$ equals $1/(n+1)$; since the
maximal product overlap $G ( | \psi \rangle )^2$ is at least this
average, $G ( | \psi \rangle )^2 \geq 1 / (n+1)$. Hence, for any
symmetric state of $n$ qubits,
the geometric measure of entanglement is upper bounded by
\begin{equation}
E_{\text{G}} (|\psi\rangle_{\text{s}}) \leq \log_2 (n+1) \enspace .
\end{equation}
An alternative proof that has the benefit of being visually accessible
is presented in \app{normalization_bloch}.
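The key step in the argument above -- that the sphere average of
$|\langle \theta |^{\otimes n} | \psi \rangle |^2$ equals $1/(n+1)$
for any normalized symmetric state -- can also be checked by Monte
Carlo sampling. A minimal sketch, with the Dicke state $| S_{6,2}
\rangle$ as an arbitrary test case and using the overlap $\langle
\theta |^{\otimes n} | S_{n,k} \rangle = \sqrt{\binom{n}{k}} \cos
(\theta/2)^{n-k} \big( \text{e}^{-\text{i} \varphi} \sin (\theta/2)
\big)^{k}$:
\begin{verbatim}
import numpy as np
from math import comb

rng = np.random.default_rng(1)
n, k = 6, 2
vals = []
for _ in range(200000):
    # Haar-random qubit: cos^2(theta/2) uniform in [0,1], phi uniform
    c = np.sqrt(rng.uniform())
    s = np.sqrt(1 - c**2)
    phi = rng.uniform(0, 2 * np.pi)
    amp = np.sqrt(comb(n, k)) * c**(n - k) * (np.exp(-1j * phi) * s)**k
    vals.append(abs(amp)**2)
print(np.mean(vals), 1 / (n + 1))   # both ~ 0.1429
\end{verbatim}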
The maximal symmetric entanglement for $n$ qubits thus scales
logarithmically with $n$, lying between $\mathcal{O} (\log \sqrt{n})$
and $\mathcal{O} (\log n)$. To compare this with the general
non-sym\-me\-tric case,
consider the lower bound of the maximal $n$ qubit entanglement ($n$
even) of $E_{\text{G}} \geq (n/2)$ \footnote{A trivial example of an
$n$ qubit state with $E_{\text{G}} = (n/2)$ are $(n/2)$ bipartite
Bell states, each of which contributes 1 ebit. Another example is
the 2D cluster state of $n$ qubits which has $E_{\text{G}} = (n/2)$
\protect{\cite{Markham07}}.}. Thus the maximal entanglement of
general states scales much faster, namely linearly rather than
logarithmically. As mentioned, most states carry even more
entanglement than this and are thus too entangled to be useful for
MBQC. While the
bounds for symmetric states mean that permutation-symmetric states are
never too entangled to be useful for MBQC, unfortunately their scaling
is also too low to be good universal deterministic resources
\cite{Nest07}. They may nevertheless be candidates for approximate,
stochastic MBQC \cite{Mora10}. Regardless of their use as resources
for MBQC, the comparatively high entanglement of symmetric states
still renders them formidable candidates for specific quantum
computations or as resources for other tasks, such as the leader
election problem \cite{Dhondt06} and LOCC discrimination
\cite{Hayashi06}.
We end this section by mentioning a simplification with respect to
symmetric positive states. States that are symmetric as well as
positive in some computational basis are labelled as \emph{positive
symmetric}. From the previous discussion it is clear that such
states have a closest product state which is positive symmetric
itself, a result first shown in \cite{Hayashi08}. It should be noted
that, while each closest product state of a positive symmetric state
is \emph{necessarily} symmetric for $n \geq 3$ qudits, it \emph{need
not} be positive. We can formulate this as a statement akin to Lemma
\ref{lem_positive}.
\begin{lem}\label{lem_positive_symmetric}
Every symmetric state $| \psi \rangle_{\text{s}}$ of $n$ qudits,
which is positive in some computational basis, has at least one
positive symmetric closest product state $| \Lambda_{\psi}
\rangle_{\text{s}}$.
\end{lem}
\section{Majorana representation of symmetric states}
\label{majorana_representation}
With the discussion of the geometric measure and symmetric states
behind us, we have gathered the prerequisites to introduce a crucial
tool, the Majorana representation. It will help us to understand the
amount of entanglement of symmetric states.
\subsection{Definition}\label{majorana_definition_sect}
In classical physics, the angular momentum $\mathbf{J}$ of a system
can be represented by a point on the surface of the 3D unit sphere
$S^2$, which corresponds to the direction of $\mathbf{J}$. No such
simple representation is possible in quantum mechanics, but Majorana
\cite{Majorana32} pointed out that a pure state of spin-$j$ can be
uniquely represented by $2j$ not necessarily distinct points on $S^2$.
This is a generalization of the spin-$1/2$ (qubit) case, where the 2D
Hilbert space is isomorphic to the unit vectors on the Bloch sphere.
An equivalent representation also exists for per\-mu\-tation-symmetric
states of $n$ spin-$1/2$ particles \cite{Majorana32,Bacry74}. By
means of this `Majorana representation' any symmetric state of $n$
qubits $| \psi \rangle_{\text{s}}$ can be uniquely composed from a sum
over all permutations $P : \{ 1, \dots , n \} \rightarrow \{ 1, \dots,
n \}$ of $n$ indistinguishable single qubit states $\{ | \phi_1
\rangle, \dots , | \phi_n \rangle \}$:
\begin{align}
| \psi \rangle_{\text{s}} = {} & \frac{1}{\sqrt{K}}
\sum_{ \text{perm} } | \phi_{P(1)} \rangle | \phi_{P(2)} \rangle
\cdots | \phi_{P(n)} \rangle \enspace , \label{majorana_definition} \\
& \text{with} \quad | \phi_i \rangle = \cos \tfra{\theta_i}{2} \,
|0\rangle + \text{e}^{\text{i} \varphi_i} \sin
\tfra{\theta_i}{2} |1\rangle \enspace , \nonumber \\
& \text{and} \quad \; K = n! \sum_{\text{perm}} \, \prod_{i = 1}^{n}
\, \langle \phi_{i} | \phi_{ P(i) } \rangle \enspace . \nonumber
\end{align}
The normalization factor $K$ is in general different for different $|
\psi \rangle_\text{s}$. By means of \eq{majorana_definition}, the
multi-qubit state $| \psi \rangle_\text{s}$ can be visualized by $n$
unordered points (each of which has a Bloch vector pointing in its
direction) on the surface of a sphere. We call these points the
\emph{Majorana points} (MP), and the sphere on which they lie the
\emph{Majorana sphere}.
With \eq{majorana_definition}, the form of a symmetric state $| \psi
\rangle_{\text{s}}$ can be explicitly determined if the MPs are
known. If the MPs of a given state $| \psi \rangle_{\text{s}} =
\sum^{n}_{k=0} a_k | S_k \rangle$ are unknown, they can be determined
by solving the following system of $n+1$ equations:
\begin{gather}
a_k = {\binom{n}{k}}^{1/2} \sum_{ \text{perm} }
\text{S}_{P(1)} \cdots \text{S}_{P(k)} \text{C}_{P(k+1)} \cdots \text{C}_{P(n)} \: ,
\label{state_to_mp} \\
\text{with} \quad
\text{C}_{i} = \cos \tfra{\theta_i}{2} \enspace ,
\qquad \text{S}_{i} = \text{e}^{\text{i} \varphi_{i}} \sin
\tfra{\theta_i}{2} \enspace . \nonumber
\end{gather}
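For small $n$, \eq{state_to_mp} can also be evaluated by brute force
in the forward direction, recovering the coefficients $a_k$ from a
given set of MPs; the overall normalization, which the equation leaves
open, is restored at the end. A minimal Python sketch (function name
ours), checked on two MPs at the north pole and one at the south pole,
which must reproduce $| S_{3,1} \rangle$:
\begin{verbatim}
import numpy as np
from itertools import permutations
from math import comb

def coefficients_from_mps(angles):
    # angles: list of the Bloch angles (theta_i, phi_i) of the n MPs
    C = [np.cos(t / 2) for t, p in angles]
    S = [np.exp(1j * p) * np.sin(t / 2) for t, p in angles]
    n = len(angles)
    a = np.zeros(n + 1, dtype=complex)
    for k in range(n + 1):
        a[k] = np.sqrt(comb(n, k)) * sum(
            np.prod([S[i] for i in P[:k]] + [C[i] for i in P[k:]])
            for P in permutations(range(n)))
    return a / np.linalg.norm(a)   # restore the normalization

print(coefficients_from_mps([(0, 0), (0, 0), (np.pi, 0)]))
# ~ [0, 1, 0, 0], i.e. |S_{3,1}>
\end{verbatim}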
The Majorana representation has been rediscovered several times, and
has been put to many different uses across physics. In relation to the
foundations of quantum mechanics, it has been used to find efficient
proofs of the Kochen-Specker theorem \cite{Zimba93,PenroseRindler} and
to study the `quantumness' of pure quantum states in several respects
\cite{Zimba06,Giraud10}, as well as the approach to classicality in
terms of the discriminability of states \cite{Markham03}. It has also
been used to study Berry phases in high spin \cite{Hannay96} and
quantum chaos \cite{Hannay98,Leboeuf91}. Within many-body physics it
has been used for finding solutions to the Lipkin-Meshkov-Glick model
\cite{Ribiero08}, and for studying and identifying phases in spinor
BEC \cite{Barnett06,Barnett07,Barnett08,Makela07}. It has also been
used to look for optimal resources for reference frame alignment
\cite{Kolenderski08} and for phase estimation \cite{Kolenderski09}.
Recently, the Majorana representation has also become a useful tool in
studying the entanglement of permutation-symmetric states. It has been
used to search for and characterize different classes of entanglement
\cite{Bastin09,Mathonet10,Markham10}, which have interesting parallels
in the classification of phases in spinor condensates
\cite{Markham10,Barnett07}. Of particular interest, in this work, is
that it gives a natural visual interpretation of the geometric measure
of entanglement \cite{Markham10}, and we will see how symmetries in
the point distributions can be used to calculate the entanglement and
assist in finding the most entangled states.
A first hint of the connection to entanglement is the fact that the
relative point distribution is invariant under local unitary maps.
Applying an arbitrary single-qubit unitary operation $U$ to each of
the $n$ subsystems yields the LU map
\begin{equation}\label{m_def_1}
| \psi \rangle_{\text{s}} \; \longmapsto \;
| \varphi \rangle_{\text{s}} \equiv U \otimes \cdots \otimes
U \, | \psi \rangle_{\text{s}} \enspace ,
\end{equation}
and from \eq{majorana_definition} it follows that
\begin{align}\label{lusphere}
| \varphi \rangle_{\text{s}} = {} & \frac{1}{\sqrt{K}} \sum_{
\text{perm} } | \vartheta_{P(1)} \rangle | \vartheta_{P(2)}
\rangle \cdots | \vartheta_{P(n)} \rangle \enspace , \\
& \text{with} \quad | \vartheta_i \rangle = U | \phi_i \rangle \;
\forall i \enspace . \nonumber
\end{align}
In other words, the symmetric state $| \psi \rangle_{\text{s}}$ is
mapped to another symmetric state $| \varphi \rangle_{\text{s}}$, and
the MP distribution of $| \varphi \rangle_{\text{s}}$ is obtained by a
joint rotation of the MP distribution of $| \psi \rangle_{\text{s}}$
on the Majorana sphere about a common axis. Therefore $| \psi
\rangle_{\text{s}}$ and $| \varphi \rangle_{\text{s}}$ have different
MPs, but the same \emph{relative} distribution of the MPs, and the
entanglement remains unchanged.
When it comes to the geometric measure of entanglement, we can be even
more precise. For $n \geq 3$ qubits, every closest product state $|
\Lambda \rangle_{\text{s}}$ of a symmetric state $| \psi
\rangle_{\text{s}}$ is symmetric itself \cite{Hubener09}, so that one
can write $| \Lambda \rangle_{\text{s}} = | \sigma \rangle^{\otimes
n}$ with a single qubit state $| \sigma \rangle$, and visualize $|
\Lambda \rangle_{\text{s}}$ by the Bloch vector of $| \sigma
\rangle$. In analogy to the Majorana points, we refer to $| \sigma
\rangle$ as a \emph{closest product point} (CPP).
For the calculation of the geometric measure of entanglement, the
overlap with a symmetric product state $| \lambda \rangle = | \sigma
\rangle^{\otimes n}$ is
\begin{equation}\label{bloch_product}
| \langle \lambda | \psi \rangle_{\text{s}} | =
\frac{n!}{\sqrt{K}} \, \prod_{i=1}^{n} \, | \langle \sigma |
\phi_i \rangle | \enspace .
\end{equation}
The task of determining the CPP of a given symmetric state is thus
equivalent to maximizing the absolute value of a product of scalar
products. From a geometrical point of view, the moduli $| \langle
\sigma | \phi_i \rangle |$ are the cosines of half the angles between
the corresponding points on the Majorana sphere, and thus the
determination of the CPP can be viewed as an optimization problem for
a product of geometrical angles.
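For small systems this optimization is easy to carry out numerically.
A minimal sketch (a plain grid search; the $\sigma$-independent
prefactor $n!/\sqrt{K}$ of \eq{bloch_product} is irrelevant for the
location of the maximum), tested on the MP configuration
$\{ |0\rangle, |0\rangle, |1\rangle \}$:
\begin{verbatim}
import numpy as np

def cpp_grid_search(mps, steps=300):
    # maximize prod_i |<sigma|phi_i>| over the Bloch sphere
    best, arg = -1.0, None
    for t in np.linspace(0, np.pi, steps):
        for p in np.linspace(0, 2 * np.pi, 2 * steps, endpoint=False):
            sigma = np.array([np.cos(t / 2),
                              np.exp(1j * p) * np.sin(t / 2)])
            val = np.prod([abs(np.vdot(sigma, phi)) for phi in mps])
            if val > best:
                best, arg = val, (t, p)
    return arg, best

mps = [np.array([1, 0]), np.array([1, 0]), np.array([0, 1])]
(theta, phi), _ = cpp_grid_search(mps)
print(theta, 2 * np.arccos(np.sqrt(2 / 3)))   # both ~ 1.23
\end{verbatim}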
\subsection{Examples}\label{examples}
We will now demonstrate the Majorana representation for two and three
qubit symmetric states. The case of two qubits is very simple,
because any distribution of two points can be rotated on the Majorana
sphere in a way that both MPs are positive, with $| \phi_1 \rangle = |
0 \rangle$ and $| \phi_2 \rangle = \cos \tfra{\theta}{2} | 0 \rangle +
\sin \tfra{\theta}{2} | 1 \rangle$ for some $\theta \in [0, \pi
]$. One CPP of this MP distribution is easily found to be $| \sigma
\rangle = \cos \tfra{\theta}{4} | 0 \rangle + \sin \tfra{\theta}{4} |
1 \rangle$. \Fig{bell_pic} shows two examples for $\theta = \pi / 2$
and $\theta = \pi$, with the latter representing the Bell state $|
\psi^{+} \rangle = 1 / \sqrt{2} \left( |01\rangle + |10\rangle
\right)$. Due to the azimuthal symmetry of its MPs on the sphere, the
CPPs form a continuous ring $| \sigma \rangle = 1 / \sqrt{2} \left( |
0 \rangle + \text{e}^{\text{i} \varphi} | 1 \rangle \right)$, with $\varphi \in
[0,2 \pi)$ around the equator. The amount of entanglement is
$E_{\text{G}} ( | \psi^{+} \rangle ) = 1$. For two qubits, the
maximally entangled symmetric states are easily found to be those
whose MPs lie diametrically opposite on the sphere.
\begin{figure}
\begin{center}
\begin{overpic}[scale=.5]{figure1a}
\put(-5,0){(a)}
\put(32.5,86){$| \phi_1 \rangle$}
\put(-15,54){$| \phi_2 \rangle$}
\put(3,85){$| \sigma \rangle$}
\put(45.3,54.7){$\theta$}
\end{overpic}
\hspace{5mm}
\begin{overpic}[scale=.5]{figure1b}
\put(-5,0){(b)}
\put(32.5,86){$| \phi_1 \rangle$}
\put(32.5,8){$| \phi_2 \rangle$}
\put(-15,54){$| \sigma_1 \rangle$}
\end{overpic}
\end{center}
\caption{\label{bell_pic} The Majorana representations of two
symmetric states of two qubits. MPs are shown as white circles
and CPPs as dashed lines or crosses. Panel (a) depicts the state
$\sqrt{2/3} \, | 00 \rangle + \sqrt{1/6} \, ( | 01 \rangle + | 10
\rangle )$. Its single CPP lies in the middle of the two MPs at
an angle of $\theta = \pi / 4$. Panel (b) shows the Bell state $|
\psi^{+} \rangle = 1 / \sqrt{2} \left( |01\rangle + |10\rangle
\right)$, whose CPPs form a continuous ring on the equatorial
belt.}
\end{figure}
For three qubit states, the GHZ state and the W state, both of which
are positive and symmetric, are considered extremal among three qubit
states \cite{Tamaryan09}. The tripartite GHZ state $| \text{GHZ}
\rangle = 1 / \sqrt{2} \left( | 000 \rangle + | 111 \rangle \right)$
\cite{Greenberger90} has the MPs
\begin{equation}\label{GHZ-maj}
\begin{split}
| \phi_1 \rangle & = \tfra{1}{\sqrt{2}} \big( | 0 \rangle +
| 1 \rangle \big) \enspace , \\
| \phi_2 \rangle & = \tfra{1}{\sqrt{2}} \big( | 0 \rangle +
\text{e}^{\text{i} 2 \pi / 3} | 1 \rangle \big) \enspace , \\
| \phi_3 \rangle & = \tfra{1}{\sqrt{2}} \big( | 0 \rangle +
\text{e}^{\text{i} 4 \pi / 3} | 1 \rangle \big) \enspace .
\end{split}
\end{equation}
Its two CPPs are easily calculated to be $| \sigma_1 \rangle =
|0\rangle$ and $| \sigma_2 \rangle = |1\rangle$, yielding an
entanglement of $E_{\text{G}} ( | \text{GHZ} \rangle ) =
1$. \Fig{ghz_w_pic}(a) shows the distribution of the MPs and CPPs for
the GHZ state. The three MPs form an equilateral triangle inside the
equatorial belt, and the two CPPs lie at the north and the south pole,
respectively.
\begin{figure}[b]
\begin{center}
\begin{overpic}[scale=.5]{figure2a}
\put(-5,0){(a)}
\put(-15,53){$| \phi_1 \rangle$}
\put(71,31){$| \phi_2 \rangle$}
\put(65,63){$| \phi_3 \rangle$}
\put(32,86){$| \sigma_1 \rangle$}
\put(32,8){$| \sigma_2 \rangle$}
\end{overpic}
\hspace{5mm}
\begin{overpic}[scale=.5]{figure2b}
\put(-5,0){(b)}
\put(31,86){$| \phi_1 \rangle$}
\put(55,86){$| \phi_2 \rangle$}
\put(53,8){$| \phi_3 \rangle$}
\put(-11,71){$| \sigma_1 \rangle$}
\put(44.5,53){$\theta$}
\end{overpic}
\end{center}
\caption{\label{ghz_w_pic} The MPs and CPPs of the three qubit (a)
GHZ state and (b) W state. The GHZ state has two discrete CPPs
whereas for the W state the CPPs form a continuous ring due to the
azimuthal symmetry.}
\end{figure}
The W state $| \text{W} \rangle = | S_{3,1} \rangle = 1/ \sqrt{3} ( |
001 \rangle + | 010 \rangle + | 100 \rangle )$ is a Dicke state, and
its MPs can be immediately accessed from its form as
\begin{equation}\label{W-maj}
\begin{split}
| \phi_1 \rangle & = | \phi_2 \rangle = | 0 \rangle \enspace , \\
| \phi_3 \rangle & = | 1 \rangle \enspace .
\end{split}
\end{equation}
Generally, the definition \eqsimple{dicke_def} of the Dicke states $|
S_{n,k} \rangle$ asserts that $n-k$ MPs lie at the north pole and $k$
at the south pole. \Eq{dicke_cs} yields $| \sigma_1 \rangle = \sqrt{
2/3 } \, |0\rangle + \sqrt{ 1/3 } \, |1\rangle$ as a positive CPP of
the W state, and from the azimuthal symmetry of the MP distribution it
is clear that the set of all CPPs is formed by the ring of vectors $|
\sigma \rangle = \sqrt{2/3} \, | 0 \rangle + \text{e}^{\text{i} \varphi}
\sqrt{1/3} \, | 1 \rangle$, with $\varphi \in [0,2 \pi)$.
\Fig{ghz_w_pic}(b) shows the MPs and CPPs of $| \text{W} \rangle$. The
amount of entanglement is $E_{\text{G}} ( | \text{W} \rangle ) =
\log_2 \left( 9/4 \right) \approx 1.17$, which is higher than that of
the GHZ state. It was recently shown that, in terms of the geometric
measure, the W state is the maximally entangled of all three qubit
states \cite{Chen10}.
\begin{figure}[b]
\begin{center}
\begin{minipage}{86mm}
\begin{center}
\begin{overpic}[scale=.22]{figure3a} \put(-14,0){(a)}
\end{overpic}
\begin{overpic}[scale=.22]{figure3b} \put(-14,0){(b)}
\end{overpic}
\begin{overpic}[scale=.22]{figure3c} \put(-14,0){(c)}
\end{overpic}
\begin{overpic}[scale=.22]{figure3d} \put(-14,0){(d)}
\end{overpic}
\begin{overpic}[scale=.22]{figure3e} \put(-14,0){(e)}
\end{overpic}
\end{center}
\end{minipage}
\begin{minipage}{86mm}
\vspace{4mm}
\begin{center}
\begin{overpic}[scale=.66]{figure3}
\end{overpic}
\end{center}
\end{minipage}
\end{center}
\caption{\label{3_graph} Change of the entanglement and the location
of the CPP when the MP distribution is modified. The position of
the CPP does not change until the two moving MPs have reached a
latitude slightly below the equator. From the distribution (c)
onwards, the CPP rapidly moves southwards and reaches the equator
at the GHZ state (d). After that, the location of the CPP and the
entanglement changes only weakly until the W state (e) is
reached.}
\end{figure}
It is insightful to examine how the CPPs and the entanglement change
when the MP distribution of the underlying state is modified.
Starting out with three MPs lying on the north pole, two of the MPs
are moved southwards on opposite sides (cf. \fig{3_graph}), describing
an isosceles triangle with the remaining MP on the north pole. Using
the abbreviations $\text{c}_{\theta} = \cos (\theta / 2)$ and $\text{s}_{\theta}
= \sin (\theta / 2)$, the MPs have the form
\begin{equation}\label{mp_3_form}
\begin{split}
| \phi_{1} \rangle & = | 0 \rangle \enspace , \\
| \phi_{2,3} \rangle & = \text{c}_{\theta} | 0 \rangle \pm \text{i} \,
\text{s}_{\theta} | 1 \rangle \enspace ,
\end{split}
\end{equation}
with the parametrization $\theta \in [0,\pi]$. The form of the
underlying quantum state follows from \eq{majorana_definition}
as
\begin{equation}\label{3_mp_state1}
\vert \psi \rangle = \frac{3 \, \text{c}_{\theta}^2 \, \vert 000 \rangle +
\text{s}_{\theta}^2 \left( \vert 011 \rangle \! + \! \vert 101 \rangle
\! + \! \vert 110 \rangle \right) }
{\sqrt{9 \, \text{c}_{\theta}^4 + 3 \, \text{s}_{\theta}^4 }} \enspace .
\end{equation}
This state is positive, so Lemma \ref{lem_positive_symmetric} asserts
the existence of at least one positive CPP. With the ansatz $\vert
\sigma \rangle = \text{c}_{\varphi} \vert 0 \rangle + \text{s}_{\varphi} \vert 1
\rangle$ for the CPP, the position of the CPP is found by calculating
the absolute maximum of $\vert \langle \psi \vert \sigma
\rangle^{\otimes 3} \vert$. From this it is found that the parameter
$\varphi (\theta)$ of the CPP depends on the parameter $\theta$ of the
MPs as follows:
\begin{equation}\label{3_mp_state2}
\text{c}_{\varphi}^2 = \text{s}_{\theta}^2 / (6 \text{s}_{\theta}^2 - 3) \enspace .
\end{equation}
The permitted values of the left-hand side lie in $[0,1]$, but the
right-hand side falls outside this range for $\theta < \pi - \arccos
(1/5)$. For these values the CPP is fixed at $\vert \sigma \rangle =
\vert 0 \rangle$. \Fig{3_graph} shows how the CPP parameter $\varphi
( \theta )$ changes with $\theta$. It is seen that from $\theta = \pi
- \arccos (1/5)$ onwards, the CPP abruptly leaves the north pole and
moves towards the south pole along the prime meridian. From Equations
\eqsimple{3_mp_state1} and \eqsimple{3_mp_state2} the amount of
entanglement is easily calculated and is displayed in \fig{3_graph}.
$E_{\text{G}}$ is monotonically increasing \cite{Tamaryan09} and
reaches a saddle point at the GHZ state ($\theta = 2 \pi / 3$).
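This one-parameter family is easily reproduced numerically. A minimal
sketch (function name ours; scanning only the positive meridian, which
suffices here since Lemma \ref{lem_positive_symmetric} guarantees a
positive CPP) recovers $E_{\text{G}} = 1$ at the GHZ point and
$E_{\text{G}} = \log_2 (9/4)$ at $\theta = \pi$:
\begin{verbatim}
import numpy as np

def e_g(theta, samples=20001):
    # E_G of the state (3_mp_state1), scanned over the positive meridian
    c2, s2 = np.cos(theta / 2)**2, np.sin(theta / 2)**2
    norm = np.sqrt(9 * c2**2 + 3 * s2**2)
    ph = np.linspace(0, np.pi, samples)   # polar angle of the CPP ansatz
    cp, sp = np.cos(ph / 2), np.sin(ph / 2)
    g = (3 * c2 * cp**3 + 3 * s2 * cp * sp**2) / norm
    return -2 * np.log2(g.max())

print(e_g(2 * np.pi / 3))   # 1.0, the GHZ state
print(e_g(np.pi))           # log2(9/4) ~ 1.17, the W state
\end{verbatim}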
\subsection{Entanglement and extremal point distributions}
\label{extremal_point}
The main point of interest in this paper is the study of maximally
entangled symmetric states. For this the Majorana representation is
extremely helpful, because it allows the optimization problem of
maximizing the entanglement to be written in a simple form. With the
help of \eq{bloch_product}, the min-max problem \eqsimple{minmax} for
finding the maximally entangled state can be reformulated as
\begin{equation}\label{maj_problem}
\min_{ \{ | \phi_i \rangle \}} \frac{n!}{\sqrt{K}}
\left( \max_{ | \sigma \rangle } \, \prod_{i=1}^{n} \,
| \langle \sigma | \phi_i \rangle | \right) \enspace .
\end{equation}
This `Majorana problem' bears all the properties of an optimization
problem on the surface of a sphere in $\mathbb{R}^3$. These kinds of
problems deal with arrangements of a finite number of points on a
sphere so that an extremal property is fulfilled \cite{Whyte52}. Two
well-known members, T\'{o}th's problem and Thomson's problem, have
been extensively studied in the past.
\textbf{T\'{o}th's problem,} also known as Fejes' problem and Tammes'
problem, asks how $n$ points have to be distributed on the unit sphere
so that the minimum distance of all pairs of points becomes maximal
\cite{Whyte52}. This problem was first raised by the biologist Tammes
in 1930 when trying to explain the observed distribution of pores on
pollen grains \cite{Tammes30}. Recasting the $n$ points as unit
vectors $\mathbf{r}_{i} \in \mathbb{R}^3$, the following cost function
needs to be maximized:
\begin{equation}
f_{\text{T\'{o}th}} ( \mathbf{r}_1 , \mathbf{r}_2 , \dots ,
\mathbf{r}_{n} ) =
\min_{i < j} \, | \mathbf{r}_{i} - \mathbf{r}_{j} | \enspace .
\end{equation}
The point configuration that solves this problem is called a spherical
code or sphere packing \cite{Weisstein}. The latter term refers to the
equivalent problem of placing $n$ identical spheres of maximal
possible radius around a central unit sphere, touching the unit sphere
at the points that solve T\'{o}th's problem.
\textbf{Thomson's problem,} also known as the Coulomb problem, asks
how $n$ point charges can be distributed on the surface of a sphere so
that the potential energy is minimized. The charges interact with
each other only through Coulomb's inverse square law. Devised by
J. J. Thomson in 1904, this problem raises the question about the
stable patterns of up to 100 electrons on a spherical surface
\cite{Thomson04}. Its cost function is given by the Coulomb energy
and needs to be minimized:
\begin{equation}
f_{\text{Thomson}} ( \mathbf{r}_1 , \mathbf{r}_2 , \dots ,
\mathbf{r}_{n} ) = \sum_{i < j} \, | \mathbf{r}_{i} -
\mathbf{r}_{j} |^{-1} \enspace .
\end{equation}
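Both cost functions are straightforward to state in code. A minimal
sketch (function names ours), evaluated on the regular tetrahedron,
which solves both problems for $n = 4$:
\begin{verbatim}
import numpy as np
from itertools import combinations

def toth_cost(points):
    # minimum pairwise distance; Toth's problem maximizes this
    return min(np.linalg.norm(p - q)
               for p, q in combinations(points, 2))

def thomson_cost(points):
    # Coulomb energy; Thomson's problem minimizes this
    return sum(1 / np.linalg.norm(p - q)
               for p, q in combinations(points, 2))

tet = np.array([[0, 0, 1],
                [2 * np.sqrt(2) / 3, 0, -1 / 3],
                [-np.sqrt(2) / 3, np.sqrt(2 / 3), -1 / 3],
                [-np.sqrt(2) / 3, -np.sqrt(2 / 3), -1 / 3]])
print(toth_cost(tet), thomson_cost(tet))   # ~ 1.633, ~ 3.674
\end{verbatim}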
The original motivation for Thomson's problem was to determine the
stable electron distribution of atoms in the plum pudding model.
While this model has been superseded by modern quantum theory, there
is a wide array of novel applications for Thomson's problem or its
generalization to other interaction potentials. Among these are
multi-electron bubbles in liquid $^4$He \cite{Leiderer95}, surface
ordering of liquid metal drops confined in Paul traps \cite{Davis97},
the shell structure of spherical viruses \cite{Marzec93},
`colloidosomes' for encapsulating biochemically active substances
\cite{Dinsmore02}, fullerene patterns of carbon atoms \cite{Kroto85}
and the Abrikosov lattice of vortices in superconducting metal shells
\cite{Dodgson97}.
Exact solutions to T\'{o}th's problem are only known for
$n_{\text{To}} = 2-12,24$ points \cite{Erber91}, and in Thomson's
problem for $n_{\text{Th}} = 2-8,12$ points \cite{Erber91,Whyte52}.
Despite the different definitions of the two problems, they share the
same solutions for $n = 2-6,12$ points \cite{Leech57}. Numerical
solutions are, furthermore, known for a wide range of $n$ in both
problems \cite{Ashby86,Altschuler94,Sloane,Wales}.
The solutions to $n = 2, 3$ are trivial and given by the dipole and
equilateral triangle, respectively. For $n = 4,6,8,12,20$ the
Platonic solids are natural candidates, but they are the actual
solutions only for $n = 4,6,12$ \cite{Berezin85}. For $n = 8,20$ the
solutions are not Platonic solids and are different for the two
problems. We will cover the solutions for $n=4-12$ in more detail
alongside the Majorana problem in
\sect{maximally_entangled_symmetric_states}.
On symmetry grounds, one could expect that the center of mass of the
$n$ points always coincides with the sphere's middle point. This is,
however, not the case, as the solution to T\'{o}th's problem for $n=7$
\cite{Erber91} or the solution to Thomson's problem for $n = 11$ shows
\cite{Erber91, Ashby86}. Furthermore, the solutions need not be
unique. For T\'{o}th's problem, the first instance of this is $n = 5$
\cite{Ogilvy51}, and for Thomson's problem the first instances are
$n=15$ \cite{Erber91} and $n = 16$ \cite{Ashby86}. These aspects show
that it is, in
general, hard to make statements about the form of the `most spread
out' point distributions on the sphere. The Majorana problem
\eqsimple{maj_problem} is considered to be equally tricky,
particularly with the normalization factor $K$ depending on the MPs.
Furthermore, the MPs of the solution need not all be spread out far
from each other, as demonstrated by the three qubit $| \text{W}
\rangle$ state with its two coinciding MPs.
\section{States and symmetries of MP and CPP distributions}
\label{analytic}
In this section, results for the interdependence between the form of
$n$ qubit symmetric states and their Majorana representation will be
derived. More specifically, it will be examined what the distributions
of MPs and CPPs look like for states whose coefficients are real,
positive or vanishing. In some of these cases the MPs or CPPs have
distinct patterns on the sphere, which can be described by symmetries.
In this context, care has to be taken as to the meaning of the word
`symmetric'. Permutation-\emph{symmetric} states were introduced in
\sect{positive_and_symm}, and only these states can be represented by
point distributions on the Majorana sphere. For some of these
symmetric states, their MP distribution exhibits symmetry properties
on the sphere. Examples of this can be found in \fig{ghz_w_pic},
where the GHZ state and W state have \emph{rotational symmetries}
around the Z-axis, as well as \emph{reflective symmetries} along some
planes.
Let $| \psi \rangle_{\text{s}} = \sum_{k = 0}^{n} a_k | S_{k} \rangle$
be a general symmetric state of $n$ qubits. To understand the
relationship between the state's coefficients and the Majorana
representation, consider the effect of symmetric LUs. A symmetric LU
acting on the Hilbert space of an $n$ qubit system is defined as the
$n$-fold tensor product of a single-qubit unitary operation:
$U^{\text{s}} = U \otimes \cdots \otimes U$. This is precisely the LU
map that was shown in \eq{m_def_1} and \eqsimple{lusphere} to map
every symmetric state to another symmetric state.
Considering the Hilbert space of a single qubit, the rotation operator
for $Z$-axis rotations of the qubit is
\begin{equation}\label{z_rotationmatrix}
R_z (\theta) =
\begin{pmatrix}
1 & 0 \\
0 & \text{e}^{\text{i} \theta}
\end{pmatrix} \enspace ,
\end{equation}
and the rotation operator for $Y$-axis rotations of the qubit is
\begin{equation}\label{y_rotationmatrix}
R_y (\theta) =
\begin{pmatrix}
\cos \frac{\theta}{2} & - \sin \frac{\theta}{2} \vspace*{0.8mm} \\
\sin \frac{\theta}{2} & \phantom{-} \cos \frac{\theta}{2}
\end{pmatrix} \enspace .
\end{equation}
$R_z$ changes the relative phase, but not the absolute value of the
qubit's coefficients. Conversely, $R_y$ changes the absolute value,
but not the relative phase of the coefficients. From
\eq{majorana_definition} it is easily seen that $R_z$ and $R_y$ pass
this behavior on to the symmetric LUs $R^{\, \text{s}}_z :=
R_z^{\otimes n}$ and $R^{\, \text{s}}_y := R_y^{\otimes n}$. For
example, the effect of $R^{\, \text{s}}_z$ on $| \psi
\rangle_{\text{s}}$ is
\begin{equation}\label{rot_z}
R^{\, \text{s}}_z (\theta) \, | \psi \rangle_{\text{s}} =
\sum_{k = 0}^{n} a_k \text{e}^{ \text{i} k \theta} \, | S_{k} \rangle \enspace .
\end{equation}
From this it is easy to determine the conditions for the MPs of $|
\psi \rangle_{\text{s}}$ having a rotational symmetry around the
Z-axis, i.e. $R^{\, \text{s}}_z (\theta) \, | \psi \rangle_{\text{s}}
= | \psi \rangle_{\text{s}}$ (up to a global phase) for some $\theta <
2 \pi$. From \eq{rot_z} it is clear that the possible rotational
angles (up to multiples) are restricted to $\theta = 2 \pi / m$, with
$m \in \mathbb{N}$, $1<m \leq n$. The necessary and sufficient
conditions are:
\begin{lem}\label{rot_symm}
The MP distribution of a symmetric $n$ qubit state $| \psi
\rangle_{\text{s}}$ is rotationally symmetric around the $Z$-axis
with rotational angle $\theta = 2 \pi / m$ ($\, 1<m \leq n$) iff
\begin{equation}\label{rot_cond}
\forall \{ k_i , k_j | \, a_{k_{i}} \neq 0 \wedge
a_{k_{j}} \neq 0 \} : ( k_i - k_j ) \bmod m = 0
\end{equation}
\end{lem}
\begin{proof}
\Eq{rot_cond} is equivalent to: $\exists \: l \in \mathbb{Z} :
\forall \{ k | \, a_{k} \neq 0 \} : k \bmod m = l$. From this it
follows that $R^{\, \text{s}}_z (2 \pi / m) \, | \psi
\rangle_{\text{s}} = \sum_{k} a_k \exp(\text{i} 2 \pi k/m) \vert S_k
\rangle = \sum_{k} a_k \exp(\text{i} 2 \pi l/m) \vert S_k \rangle = \text{e}^{\text{i}
\delta} | \psi \rangle_{\text{s}}$, with $\delta = 2 \pi l/m$.
\end{proof}
In other words, a sufficient number of coefficients need to vanish,
and the index spacings between the remaining non-vanishing
coefficients must be multiples of $m$. For example, a symmetric state
of the form $\vert \psi
\rangle_{\text{s}} = a_3 \vert S_3 \rangle + a_7 \vert S_7 \rangle +
a_{15} \vert S_{15} \rangle$ is rotationally symmetric with $\theta =
\pi / 2$, because the spacings between non-vanishing coefficients are
multiples of $4$.
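Equivalently, condition \eq{rot_cond} states that the greatest common
divisor of all index spacings between non-vanishing coefficients
exceeds one. A minimal sketch (function name ours; Dicke states, which
have a single non-vanishing coefficient and a continuous azimuthal
symmetry, are flagged separately):
\begin{verbatim}
from math import gcd
from functools import reduce

def minimal_rotation_order(coeffs):
    # largest m such that all index spacings between non-vanishing
    # Dicke coefficients are multiples of m, cf. Lemma (rot_symm)
    ks = [k for k, a in enumerate(coeffs) if a != 0]
    if len(ks) < 2:
        return 'continuous'          # Dicke state
    m = reduce(gcd, (k - ks[0] for k in ks[1:]))
    return m if m > 1 else None      # None: no Z-axis symmetry

coeffs = [0] * 16
coeffs[3] = coeffs[7] = coeffs[15] = 1   # the example from the text
print(minimal_rotation_order(coeffs))    # 4, i.e. theta = pi / 2
\end{verbatim}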
\begin{lem}\label{maj_max}
Every maximally entangled symmetric state $| \Psi
\rangle_{\text{s}}$ of $n$ qubits has at least two different CPPs.
\end{lem}
\begin{proof}
The cases $n = 2, 3$ are trivial, because their maximally entangled
states (Bell states and W state, respectively) have an infinite
number of CPPs. For $n > 3$, we consider a symmetric state $| \psi
\rangle$ with only one CPP $| \sigma \rangle$ and show that $| \psi
\rangle$ cannot be maximally entangled.
Because of the LU invariance on the Majorana sphere
(cf. \eq{lusphere}), we can take the CPP to be the north pole,
i.e. $| \sigma \rangle = | 0 \rangle$. Denoting a single qubit with
$| \omega \rangle = \text{c}_{\theta} |0\rangle + \text{e}^{\text{i} \varphi}
\text{s}_{\theta} |1\rangle$, the smooth and continuous overlap function
$g( |\omega \rangle ) = | \langle \psi | \omega \rangle^{\otimes
n}|$ then has its absolute maximum at $| \omega \rangle = | 0
\rangle$. For any other local maximum $| \omega ' \rangle$, the
value of $g(| \omega ' \rangle)$ is smaller than $g(|0\rangle)$ and
therefore an infinitesimal change in the MPs of $| \psi \rangle$
cannot lead to a CPP outside a small neighborhood of the north pole.
We will now present an explicit variation of $| \psi \rangle$ that
increases the entanglement. $| \psi \rangle = \sum_{k=0}^n a_k |
S_k \rangle$ has complex coefficients that fulfil $\langle \psi |
\psi \rangle= 1$, as well as $a_0 > 0$ and $a_1 = 0$ \footnote{$a_0$
can be set positive by means of the global phase, and for
symmetric states with $| 0 {\rangle}$ as a CPP it is easy to
verify that $a_1 = 0$ is a necessary condition for the partial
derivatives of $| {\langle} \psi | \omega {\rangle}^{\otimes n}|$
being zero at $| \omega {\rangle} = | 0 {\rangle}$.}. Define the
variation as $| \psi_{\epsilon} \rangle = (a_0 - \epsilon ) | S_0
\rangle + \sum_{k=2}^{n-1} a_k | S_k \rangle + (a_n + \epsilon a_0/
a_n^{*} ) | S_n \rangle$, with $\epsilon \ll 1$. This state fulfils
the requirement $| \psi_{\epsilon} \rangle \stackrel{\epsilon
\rightarrow 0}{\longrightarrow} | \psi \rangle$, and is
normalized: $\langle \psi_{\epsilon} | \psi_{\epsilon} \rangle = 1 +
\mathcal{O} (\epsilon^2)$. We now investigate the values of
$g_{\epsilon} ( |\omega \rangle ) = |\langle \psi_{\epsilon} |
\omega \rangle^{\otimes n}|$ around the north pole. In this area $|
\omega \rangle = (1 - (\theta^2 / 8) + \mathcal{O} (\theta^4))
|0\rangle + \text{e}^{\text{i} \varphi} ( (\theta / 2) - \mathcal{O} (\theta^3))
|1\rangle$, hence $g_{\epsilon} ( |\omega \rangle ) = | (1 -
(\theta^2 / 8))^n (a_0 - \epsilon ) | + \mathcal{O} (\epsilon^2,
\theta^2) = | a_0 - \epsilon | + \mathcal{O} (\epsilon^2, \theta^2) <
|a_0| = g ( |0 \rangle )$ for small, but nonzero $\epsilon$ and
$\theta$. Therefore the absolute maximum of $g_{\epsilon}$ is
smaller than that of $g$, and $| \psi_{\epsilon} \rangle$ is more
entangled than $| \psi \rangle$.
\end{proof}
\subsection{Real symmetric states}
\label{maximally_entangled_real_states}
For symmetric states with real coefficients, the following lemma
asserts a reflection symmetry of the MPs and CPPs with respect to the
$X$-$Z$-plane that cuts the Majorana sphere in half. In mathematical
terms, the MPs and CPPs exhibit a reflection symmetry with respect to
the $X$-$Z$-plane iff for each MP $| \phi_i \rangle = \text{c}_{\theta}
|0\rangle + \text{e}^{\text{i} \varphi} \text{s}_{\theta} |1\rangle$ the complex
conjugate point $| \phi_i \rangle^{*} = \text{c}_{\theta} |0\rangle + \text{e}^{-
\text{i} \varphi} \text{s}_{\theta} |1\rangle$ is also a MP, and the same holds
for CPPs too.
\begin{lem}\label{maj_real}
Let $| \psi \rangle_{\text{s}}$ be a symmetric state of $n$ qubits.
$| \psi \rangle_{\text{s}}$ is real iff all its MPs are reflective
symmetric with respect to the $X$-$Z$-plane of the Majorana sphere.
\end{lem}
\begin{proof}
($\Rightarrow$) Let $| \psi \rangle_{\text{s}}$ be a real state.
Then $| \psi \rangle_{\text{s}} = | \psi \rangle_{\text{s}}^{*}$,
and since Majorana representations are unique, $| \psi
\rangle_{\text{s}}$ has the same MPs as $| \psi
\rangle_{\text{s}}^{*}$. Therefore the complex conjugate $| \phi_i
\rangle^{*}$ of each non-real MP $| \phi_i \rangle$ is also a MP.
($\Leftarrow$) Let the MPs of $| \psi \rangle_{\text{s}}$ be
symmetric with respect to the $X$-$Z$-plane. Then for every nonreal
MP $| \phi_i \rangle$ its complex conjugate $| \phi_i \rangle^{*}$
is also a MP. Because $\left( | \phi_i \rangle | \phi_i \rangle^{*}
+ | \phi_i \rangle^{*} | \phi_i \rangle \right)$ is real, it
becomes clear, from the permutation over all MPs in
\eq{majorana_definition}, that the overall state $| \psi
\rangle_{\text{s}}$ is real, too.
\end{proof}
The reflective symmetry of the MPs naturally leads to the same
symmetry for the CPPs.
\begin{cor}\label{cpp_real}
Let $| \psi \rangle_{\text{s}}$ be a symmetric state of $n$
qubits. If $| \psi \rangle_{\text{s}}$ is real, then all its CPPs
are reflective symmetric with respect to the $X$-$Z$-plane of the
Majorana sphere.
\end{cor}
\begin{proof}
Lemma \ref{maj_real} asserts that for every MP $| \phi_i \rangle$ of
$| \psi \rangle_{\text{s}}$, the complex conjugate $| \phi_i
\rangle^{*}$ is also a MP. By considering the complex conjugate of
the optimization problem \eqsimple{maj_problem}, it becomes clear
that for any CPP $| \sigma \rangle$ the complex conjugate $| \sigma
\rangle^{*}$ is also a CPP.
\end{proof}
\subsection{Positive symmetric states}
\label{maximally_entangled_positive_states}
For symmetric states with positive coefficients, strong results can be
obtained with regard to the number and locations of the CPPs. In
particular, for non-Dicke states it is shown that there are at most
$2n-4$ CPPs and that non-positive CPPs can only exist if the MP
distribution has a rotational symmetry around the Z-axis.
Furthermore, the CPPs can only lie at specified azimuthal angles on
the sphere, namely those that are `projected' from the meridian of
positive Bloch vectors by means of the Z-axis rotational symmetry
(see, e.g., the positive seven qubit state shown in \fig{bloch_7}).
Dicke states constitute a special case due to their continuous
azimuthal symmetry. The two Dicke states $| S_0 \rangle$ and $| S_n
\rangle$ are product states, with all their MPs and CPPs lying on the
north and the south pole, respectively. For any other Dicke state $| S_k
\rangle$ the MPs are shared between the two poles, and the CPPs form a
continuous horizontal ring with inclination $\theta = 2 \arccos
\sqrt{{n-k}/{n}}$.
\begin{lem}\label{cpp_mer}
Let $| \psi \rangle_{\text{s}}$ be a positive symmetric state of $n$
qubits, excluding the Dicke states.
\begin{enumerate}
\item[(a)] If $| \psi \rangle_{\text{s}}$ is rotationally symmetric
around the Z-axis with minimal rotational angle $2 \pi / m$, then
all its CPPs $| \sigma (\theta, \varphi) \rangle = \text{c}_{\theta}
|0\rangle + \text{e}^{\text{i} \varphi} \text{s}_{\theta} |1\rangle$ are restricted
to the $m$ azimuthal angles given by $\varphi = \varphi_{r} = 2
\pi r / m$ with $r \in \mathbb{Z}$. Furthermore, if $| \sigma
(\theta, \varphi_{r} ) \rangle$ is a CPP for some $r$, then it is
also a CPP for all other values of $r$.
\item[(b)] If $| \psi \rangle_{\text{s}}$ is not rotationally
symmetric around the Z-axis, then all its CPPs are positive.
\end{enumerate}
\end{lem}
\begin{proof}
The proof runs similar to the one of Lemma \ref{lem_positive}, where
the existence of at least one positive CPP is established. We use
the notations $| \psi \rangle_{\text{s}} = \sum_{k} a_{k} | S_{k}
\rangle$ with $a_k \geq 0$, and $| \lambda \rangle = | \sigma
\rangle^{\otimes n}$.
Case (a): Consider a non-positive CPP $| \sigma \rangle =
\text{c}_{\theta} |0\rangle + \text{e}^{\text{i} \varphi} \text{s}_{\theta} |1\rangle$
with $\varphi = 2 \pi r / m$ and $r \in \mathbb{R}$, and define $|
\widetilde{\sigma} \rangle = \text{c}_{\theta} |0\rangle + \text{s}_{\theta}
|1\rangle$. Then $|\langle \lambda | \psi \rangle_{\text{s}} | = |
\sum_k \text{e}^{\text{i} k \varphi} a_k \text{c}_{\theta}^{n-k} \text{s}_{\theta}^{k}
\sqrt{\binom{n}{k}} | \leq \sum_k a_k \text{c}_{\theta}^{n-k} \text{s}_{\theta}^{k}
\sqrt{\binom{n}{k}} = |\langle \widetilde{\lambda} | \psi \rangle_{\text{s}}
|$, with $| \widetilde{\lambda} \rangle = | \widetilde{\sigma}
\rangle^{\otimes n}$. If this inequality were strict, then $| \sigma
\rangle$ would not be a CPP, which is a contradiction; so it must be
an equality.
Thus, for any two indices $k_i$ and $k_j$ of non-vanishing
coefficients $a_{k_i}$ and $a_{k_j}$, the following must hold:
$\text{e}^{\text{i} k_{i} \varphi} = \text{e}^{\text{i} k_{j} \varphi}$. This can be
reformulated as $k_{i} r \equiv k_{j} r \pmod{m}$, or equivalently
\begin{equation}\label{pos_mer_cond}
( k_{i} - k_{j} ) \, r \bmod m = 0 \enspace .
\end{equation}
Because $\varphi = 2 \pi / m$ is the minimal rotational angle, $m$
is the largest integer that satisfies \eq{rot_cond}, and thus there
exist $k_i$ and $k_j$ with $a_{k_i}, a_{k_j} \neq 0$ s.t. $k_i - k_j
= m$. From this and from \eq{pos_mer_cond}, it follows that $r \in
\mathbb{Z}$. Therefore $| \sigma (\theta, \varphi_{r} ) \rangle$ is
a CPP if and only if $r$ is an integer.
Case (b): Considering a CPP $| \sigma \rangle = \text{c}_{\theta}
|0\rangle + \text{e}^{\text{i} \rho} \text{s}_{\theta} |1\rangle$ with $\rho = 2 \pi
r$ and $r \in \mathbb{R}$, we need to show that $r \in \mathbb{Z}$.
Defining $| \widetilde{\sigma} \rangle = \text{c}_{\theta} |0\rangle +
\text{s}_{\theta} |1\rangle$, and using the same line of argumentation as
above, the equation $\text{e}^{\text{i} k_{i} \rho} = \text{e}^{\text{i} k_{j} \rho}$ must
hold for any pair of non-vanishing $a_{k_i}$ and $a_{k_j}$. This is
equivalent to
\begin{equation}\label{pos_mer_cond2}
( k_{i} - k_{j} ) \, r \bmod 1 = 0 \enspace ,
\end{equation}
or $( k_{i} - k_{j} ) \, r \in \mathbb{Z}$, in particular $r \in
\mathbb{Q}$. If there exist indices $k_{i}$ and $k_{j}$ of
non-vanishing coefficients s.t. $k_{i} - k_{j} = 1$, then $r \in
\mathbb{Z}$, as desired. Otherwise suppose $r \notin \mathbb{Z}$ and
write $r = x/y$ with coprime $x, y \in \mathbb{N}$ and $y > 1$. Since
$x$ and $y$ share no common divisor, \eq{pos_mer_cond2} forces $y$ to
divide every difference $k_{i} - k_{j}$ of indices with $a_{k_{i}},
a_{k_{j}} \neq 0$. But then Lemma \ref{rot_symm} would make $| \psi
\rangle_{\text{s}}$ rotationally symmetric around the Z-axis with
rotational angle $2 \pi / y$, contradicting the assumption. Hence $r
\in \mathbb{Z}$.
\end{proof}
With this result about the confinement of the CPPs to certain
azimuthal angles, it is possible to derive upper bounds on the number
of CPPs.
\begin{theo}\label{maj_max_pos_zero}
The Majorana representation of every positive symmetric state $|
\psi \rangle_{\text{s}}$ of $n$ qubits, excluding the Dicke states,
belongs to one of the following three mutually exclusive classes.
\begin{enumerate}
\item[(a)] $| \psi \rangle_{\text{s}}$ is rotationally symmetric
around the Z-axis, with only the two poles as possible CPPs.
\item[(b)] $| \psi \rangle_{\text{s}}$ is rotationally symmetric
around the Z-axis, with at least one CPP being non-positive.
\item[(c)] $| \psi \rangle_{\text{s}}$ is not rotationally symmetric
around the Z-axis, and all CPPs are positive.
\end{enumerate}
Regarding the CPPs of states from class (b) and (c), the following
assertions can be made for $n \geq 3$:
\begin{enumerate}
\item[(b)] If both poles are occupied by at least one MP each, then
there are at most $2n-4$ CPPs, else there are at most $n$ CPPs.
\item[(c)] There are at most $\lceil \tfra{n+2}{2} \rceil$ CPPs
\footnote{The ceiling function ${\lceil} x {\rceil}$ is the
smallest integer not less than $x$.}.
\end{enumerate}
\end{theo}
\begin{proof}
Starting with the first part of the theorem, case (c) has already
been shown in Lemma \ref{cpp_mer}, so consider states $| \psi
\rangle_{\text{s}}$ with a rotational symmetry $\varphi = 2 \pi / m$
around the Z-axis. If all CPPs are either $\vert 0 \rangle$ or
$\vert 1 \rangle$, then we have case (a), otherwise there is at
least one CPP $| \sigma \rangle$ which does not lie on a pole. If
this $| \sigma \rangle$ is non-positive, then we have case (b), and
if $| \sigma \rangle$ is positive, then Lemma \ref{cpp_mer} states
the existence of another, non-positive CPP, thus again resulting in
case (b).
The proof of the second part of the theorem is a bit involved and
can be found in \app{max_cpp_number}.
\end{proof}
\section{Numerical solutions for up to twelve qubits}
\label{maximally_entangled_symmetric_states}
In this section, we present the numerically determined solutions of
the Majorana problem for up to 12 qubits. In order to find these, the
results from the previous sections were extremely helpful. This is
because an exhaustive search over the set of all symmetric states
quickly becomes unfeasible, even for low numbers of qubits and because
the min-max problem \eqsimple{maj_problem} is too complex to allow for
straightforward analytic solutions.
Among the results particularly helpful for our search are Lemma
\ref{cpp_mer} and Theorem \ref{maj_max_pos_zero}. For positive states
they strongly restrict the possible locations of CPPs, and thus
greatly simplify the calculation of the entanglement. It then
suffices to determine only the positive CPPs because all other CPPs
automatically follow from the Z-axis symmetry (if any exists). We
will see that this result is especially powerful for the Platonic
solids in the cases $n=4,6$ where the location of the CPPs can be
immediately determined from this argument alone, without the need to
do any calculations.
From the definition \eqsimple{geo_meas} of $E_{\text{G}}$, it is clear
that the exact amount of entanglement of a given state (or its
corresponding MP distribution) is automatically known once the
location of at least one CPP is known. A numerical search over the set
of positive symmetric states often detects the maximally entangled
state to be of a particularly simple form, enabling us to express the
exact positions of its MPs and CPPs analytically. In some cases,
however, no analytical expressions were found for the positions of the
CPPs and/or MPs. In these cases the exact amount of entanglement
remains unknown, although it can be numerically approximated with high
precision.
In this way we can be quite confident of finding the maximally
entangled positive symmetric state. In the general symmetric case we
do not have as many tools, so the search is over a far bigger set of
possible states, and we can be less confident in our results. We
therefore focus our search on sets of states that promise high
entanglement. Such states include those with highly spread out MP
distributions, particularly the solutions of the classical
optimization problems. From these results we can propose candidates
for maximal symmetric entanglement.
For two and three qubits, the maximally entangled symmetric states are
known and were discussed in \sect{majorana_representation}, so we
start with four qubits. A table summarizing the amount of
entanglement of the numerically determined maximally entangled
positive symmetric as well as the entanglement of the candidates for
the general symmetric case can be found in \app{entanglement_table}.
\subsection{Four qubits}\label{majorana_four}
For four points, both T\'{o}th's and Thomson's problem are solved by
the vertices of the regular tetrahedron \cite{Whyte52}. The numerical
search for the maximally entangled symmetric state returns the
Platonic solid too. Recast as MPs, the vertices are
\begin{equation}\label{4_mp}
\begin{split}
| \phi_{1} \rangle & = | 0 \rangle \enspace , \\
| \phi_{2} \rangle & = \tfra{1}{\sqrt{3}} | 0 \rangle +
\sqrt{\tfra{2}{3}} | 1 \rangle \enspace , \\
| \phi_{3} \rangle & = \tfra{1}{\sqrt{3}} | 0 \rangle +
\text{e}^{\text{i} 2 \pi / 3} \sqrt{\tfra{2}{3}} | 1 \rangle \enspace , \\
| \phi_{4} \rangle & = \tfra{1}{\sqrt{3}} | 0 \rangle +
\text{e}^{\text{i} 4 \pi / 3} \sqrt{\tfra{2}{3}} | 1 \rangle \enspace .
\end{split}
\end{equation}
The symmetric state constructed from these MPs by means of
\eq{majorana_definition} shall be referred to as the `tetrahedron
state'. Its form is $| \Psi_{4} \rangle = \sqrt{1/3} \, | S_{0}
\rangle + \sqrt{2/3} \, | S_{3} \rangle$, and its MP distribution is
shown in \fig{bloch_4}. Because the state is positive and has a
rotational symmetry around the Z-axis, Lemma \ref{cpp_mer} restricts
the possible CPP locations to the three half-circles $| \sigma
(\theta, \varphi ) \rangle$ with $\varphi = 0, 2 \pi / 3, 4 \pi / 3$.
\begin{figure}[b]
\begin{center}
\begin{overpic}[scale=.5]{figure4}
\put(30,86){$| \phi_1 \rangle$}
\put(-11,21){$| \phi_2 \rangle$}
\put(66,16){$| \phi_3 \rangle$}
\put(68,48){$| \phi_4 \rangle$}
\end{overpic}
\end{center}
\caption{\label{bloch_4} MPs and CPPs of the `tetrahedron state'.}
\end{figure}
From the symmetry of the Platonic solid it is clear that the MP
distribution of \fig{bloch_4} can be rotated s.t. $| \phi_{2}
\rangle$, $| \phi_{3} \rangle$ or $| \phi_{4} \rangle$ is moved to the
north pole, with the actual distribution (and thus $| \Psi_{4}
\rangle$) remaining unchanged. Each of these rotations, however,
gives rise to new restrictions on the location of the CPPs mediated by
Lemma \ref{cpp_mer}. The intersections of all these restrictions are
the four points where the MPs lie. This yields the result that $|
\Psi_{4} \rangle$ has four CPPs, with their Bloch vectors being the
same as those in \eq{4_mp}. From this the amount of entanglement
follows as $E_{\text{G}} ( | \Psi_{4} \rangle ) = \log_2 3 \approx
1.59$.
\subsection{Five qubits}\label{majorana_five}
For five points, the solution to Thomson's problem is given by three
of the charges lying on the vertices of an equatorial triangle and the
other two lying at the poles \cite{Ashby86,Marx70}. This is also a
solution to T\'{o}th's problem, but it is not unique
\cite{Ogilvy51,Schutte51}. The corresponding quantum state, the
`trigonal bipyramid state', is shown in \fig{bloch_5}(a). This state
has the form $| \psi_{5} \rangle = 1 / \sqrt{2} ( | S_{1} \rangle + |
S_{4} \rangle )$, and a simple calculation yields that it has three
CPPs that coincide with the equatorial MPs, giving an entanglement of
$E_{\text{G}} ( | \psi_{5} \rangle ) = \log_2 ( 16 / 5 ) \approx
1.68$.
\begin{figure}[hb]
\begin{center}
\begin{overpic}[scale=.5]{figure5a}
\put(-7,0){(a)}
\put(31,83){$| \phi_1 \rangle$}
\put(-18,49){$| \phi_2 \rangle$}
\put(74,30){$| \phi_3 \rangle$}
\put(67,63){$| \phi_4 \rangle$}
\put(31,10){$| \phi_5 \rangle$}
\end{overpic}
\hspace{5mm}
\begin{overpic}[scale=.5]{figure5b}
\put(-7,0){(b)}
\put(31,85){$| \phi_1 \rangle$}
\put(20,16){$| \phi_2 \rangle$}
\put(64,20){ $| \phi_3 \rangle$}
\put(74,49){$| \phi_4 \rangle$}
\put(4,48){$| \phi_5 \rangle$}
\put(-10,17){$| \sigma_1 \rangle$}
\end{overpic}
\end{center}
\caption{\label{bloch_5} The distribution (a) shows the `trigonal
bipyramid state', but the conjectured solution of the Majorana
problem is the `square pyramid state', shown in (b).}
\end{figure}
However, a numerical search among all positive symmetric states yields
states with higher entanglement. Our numerics indicate that the
maximally entangled state is the `square pyramid state' shown in
\fig{bloch_5}(b). This state has five CPPs, one on the north pole and
the other four lying in a horizontal plane slightly below the plane
with the MPs. The form of this state is
\begin{equation}\label{5_opt_form}
| \Psi_{5} \rangle = \tfra{1}{\sqrt{1 + A^2}} | S_{0} \rangle +
\tfra{A}{\sqrt{1 + A^2}} | S_{4} \rangle \enspace .
\end{equation}
Its MPs are
\begin{equation}\label{5_opt_maj}
\begin{split}
| \phi_{1} \rangle & = | 0 \rangle \enspace , \\
| \phi_{2,3,4,5} \rangle & = \alpha | 0 \rangle + \text{e}^{\text{i} \kappa}
\sqrt{1 - \alpha^2} | 1 \rangle \enspace ,
\end{split}
\end{equation}
with $\kappa = \tfra{\pi}{4}, \tfra{3 \pi}{4}, \tfra{5 \pi}{4},
\tfra{7 \pi}{4}$, and the CPPs are
\begin{equation}\label{5_opt_cpp}
\begin{split}
| \sigma_{1} \rangle & = | 0 \rangle \enspace , \\
| \sigma_{2,3,4,5} \rangle & = x | 0 \rangle + k \sqrt{1 - x^2}
| 1 \rangle \enspace ,
\end{split}
\end{equation}
with $k = 1, \text{i}, -1, -\text{i}$. The exact values can be determined
analytically by solving quartic equations. The $x \in (0,1)$ of
\eq{5_opt_cpp} is given by the real root of $4 x^4 + 4 x^3 + 4 x^2 - x
- 1 = 0$, and this can be used to calculate $A = ( 1 - x^5 ) / (
\sqrt{ 5 } x (1 - x^2)^2 )$. With the substitution $a = \alpha^2
\in (0,1)$ the value of $\alpha$ is given by the real root of $(5 A^2
- 1) a^4 + 4 a^3 - 6 a^2 + 4 a - 1 = 0$. Approximate values of these
quantities are:
\begin{equation}\label{5_opt_form_approx}
x \approx 0.46657 \enspace , \quad A \approx 1.53154 \enspace ,
\quad \alpha \approx 0.59229 \enspace .
\end{equation}
The entanglement is $E_{\text{G}} ( | \Psi_{5} \rangle ) = \log_2 ( 1
+ A^2 ) \approx 1.74$, which is considerably higher than that of
$E_{\text{G}} ( | \psi_{5} \rangle )$. We remark that the `center of
mass' of the five MPs of $| \Psi_{5} \rangle$ does not coincide with
the sphere's origin, thus ruling out the corresponding spin-$5/2$
state as an anticoherent spin state, as defined in \cite{Zimba06}.
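The chain of algebraic conditions above is compactly verified
numerically; a minimal sketch (the root selection is ours, picking the
unique real roots in the required intervals):
\begin{verbatim}
import numpy as np

# real root of 4x^4 + 4x^3 + 4x^2 - x - 1 = 0 in (0, 1)
x = [r.real for r in np.roots([4, 4, 4, -1, -1])
     if abs(r.imag) < 1e-8 and 0 < r.real < 1][0]
A = (1 - x**5) / (np.sqrt(5) * x * (1 - x**2)**2)
# real root a = alpha^2 in (0, 1) of (5A^2-1)a^4 + 4a^3 - 6a^2 + 4a - 1
a = [r.real for r in np.roots([5 * A**2 - 1, 4, -6, 4, -1])
     if abs(r.imag) < 1e-8 and 0 < r.real < 1][0]
print(x, A, np.sqrt(a))    # ~ 0.46657, ~ 1.53154, ~ 0.59229
print(np.log2(1 + A**2))   # E_G ~ 1.74
\end{verbatim}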
\subsection{Six qubits}\label{majorana_six}
The regular octahedron, a Platonic solid, is the unique solution to
T\'{o}th's and Thomson's problem for six points. The corresponding
`octahedron state' was numerically confirmed to solve the Majorana
problem for six qubits.
\begin{figure}[hb]
\begin{center}
\begin{overpic}[scale=.5]{figure6a}
\put(-7.5,0){(a)}
\put(32,84){$| \phi_1 \rangle$}
\put(32,9){$| \phi_2 \rangle$}
\put(-17,48){$| \phi_3 \rangle$}
\put(57,31){$| \phi_4 \rangle$}
\put(79,56){$| \phi_5 \rangle$}
\put(27,56){$| \phi_6 \rangle$}
\end{overpic}
\hspace{5mm}
\begin{overpic}[scale=.5]{figure6b}
\put(-7.5,0){(b)}
\put(32,89){$| \phi_1 \rangle$}
\put(31,8){$| \phi_2 \rangle$}
\put(11,32){$| \phi_3 \rangle$}
\put(78,33){$| \phi_4 \rangle$}
\put(77,62){$| \phi_5 \rangle$}
\put(7,62){$| \phi_6 \rangle$}
\end{overpic}
\end{center}
\caption{\label{bloch_6_1} Two possible orientations of the
`octahedron state'.}
\end{figure}
The straightforward orientation shown in \fig{bloch_6_1}(a) has the
form $| \Psi_{6} ' \rangle = 1/{\sqrt{2}} ( | S_{1} \rangle - | S_{5}
\rangle )$, and its MPs are
\begin{equation}\label{6_alpha1_maj}
\begin{split}
& | \phi_{1} \rangle = | 0 \rangle \, , \quad
| \phi_{2} \rangle = | 1 \rangle \enspace , \\
& | \phi_{3,4,5,6} \rangle = \tfra{1}{\sqrt{2}} \big( | 0 \rangle
+ k | 1 \rangle \big) \enspace ,
\end{split}
\end{equation}
with $k = 1, \text{i}, -1, -\text{i}$. $| \Psi_{6} ' \rangle$ can be turned into
the positive state $| \Psi_{6} \rangle = 1 / \sqrt{2} ( | S_{1}
\rangle + | S_{5} \rangle )$ by means of an $R^{\, \text{s}}_z (\pi /
4)$ rotation. The CPPs can be obtained from this state in the same
way as for the tetrahedron state. Being a Platonic solid, the MP
distribution of \fig{bloch_6_1}(b) is left invariant under a finite
subgroup of rotation operations on the sphere. From Lemma
\ref{cpp_mer}, the intersection of the permissible locations of the
CPPs is found to be the eight points lying at the center of each face
of the octahedron, forming a cube inside the Majorana sphere:
\begin{equation}\label{6_sigma_2}
\begin{split}
| \sigma_{1,2,3,4} \rangle & = \sqrt{ \tfra{\sqrt{3}+1}{2
\sqrt{3}}} \, | 0 \rangle + k \sqrt{ \tfra{\sqrt{3}-1}{2
\sqrt{3}}} \, | 1 \rangle \enspace , \\
| \sigma_{5,6,7,8} \rangle & = \sqrt{ \tfra{\sqrt{3}-1}{2
\sqrt{3}}} \, | 0 \rangle + k \sqrt{ \tfra{\sqrt{3}+1}{2
\sqrt{3}}} \, | 1 \rangle \enspace ,
\end{split}
\end{equation}
with $k = 1,\text{i},-1,-\text{i}$. In contrast to the tetrahedron state, where
the MPs and CPPs overlap, the CPPs of the octahedron state lie as far
away from the MPs as possible. This is plausible, because in the case
of the octahedron \eq{maj_problem} is zero at the location of any MP,
due to the MPs forming diametrically opposite pairs. The amount of
entanglement is $E_{\text{G}} ( | \Psi_{6} \rangle ) = \log_2 (9/2)
\approx 2.17$.
\subsection{Seven qubits}\label{majorana_seven}
For seven points, the solutions to the two classical problems become
fundamentally different for the first time. T\'{o}th's problem is
solved by two triangles asymmetrically positioned about the equator
and the remaining point at the north pole \cite{Erber91}, or (1-3-3)
in the F{\"o}ppl notation \cite{Whyte52}. Thomson's problem is solved
by the vertices of a pentagonal dipyramid
\cite{Ashby86,Marx70,Erber91}, where five points lie on an equatorial
pentagon and the other two on the poles. The latter is also
numerically found to be the solution to the Majorana problem.
\begin{figure}[ht]
\begin{center}
\begin{overpic}[scale=.5]{figure7}
\put(38,103){$| \phi_1 \rangle$}
\put(34,-8){$| \phi_2 \rangle$}
\put(-16,50){$| \phi_3 \rangle$}
\put(20,33){$| \phi_4 \rangle$}
\put(77,35){$| \phi_5 \rangle$}
\put(70,62){$| \phi_6 \rangle$}
\put(13,62){$| \phi_7 \rangle$}
\put(2,88){$| \sigma_1 \rangle$}
\put(3,6){$| \sigma_2 \rangle$}
\end{overpic}
\end{center}
\caption{\label{bloch_7} MPs and CPPs of the `pentagonal dipyramid
state'. The ten CPPs are equidistantly located on two circles
above and below the equator.}
\end{figure}
The `pentagonal dipyramid state', shown in \fig{bloch_7}, has the form
$| \Psi_{7} \rangle = 1/{\sqrt{2}} ( | S_{1} \rangle + | S_{6} \rangle
)$, and its MPs are
\begin{equation}\label{7_maj}
\begin{split}
& | \phi_{1} \rangle = | 0 \rangle \, , \quad
| \phi_{2} \rangle = | 1 \rangle \enspace , \\
& | \phi_{3,4,5,6,7} \rangle = \tfra{1}{\sqrt{2}} \big( | 0
\rangle + \text{e}^{\text{i} \kappa} | 1 \rangle \big) \enspace ,
\end{split}
\end{equation}
with $\kappa = 0, \tfra{2 \pi}{5}, \tfra{4 \pi}{5}, \tfra{6 \pi}{5},
\tfra{8 \pi}{5}$. The CPPs of this positive state can be determined
analytically by choosing a suitable parametrization. With $x :=
\cos^2 \theta$ the position of $| \sigma_1 \rangle = \text{c}_{\theta} | 0
\rangle + \text{s}_{\theta} | 1 \rangle$ and $| \sigma_2 \rangle =
\text{s}_{\theta} | 0 \rangle + \text{c}_{\theta} | 1 \rangle$ is determined by
the real root of the cubic equation $49 x^3 + 165 x^2 - 205 x + 55 =
0$ in the interval $[0,\tfra{1}{2}]$. The approximate amount of
entanglement is $E_{\text{G}} ( | \Psi_{7} \rangle ) \approx 2.299$.
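The quoted root is straightforward to obtain numerically; the short sketch
below (our illustration, with hypothetical variable names) recovers $x$ and
the corresponding inclination $\theta$ of $| \sigma_1 \rangle$.
\begin{verbatim}
import numpy as np

# real root of 49 x^3 + 165 x^2 - 205 x + 55 = 0 in [0, 1/2], x = cos^2(theta)
roots = np.roots([49, 165, -205, 55])
x = [r.real for r in roots if abs(r.imag) < 1e-9 and 0 <= r.real <= 0.5][0]
print(x, np.degrees(np.arccos(np.sqrt(x))))  # inclination from the north pole
\end{verbatim}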
\subsection{Eight qubits}\label{majorana_eight}
For eight points, T\'{o}th's problem is solved by the cubic antiprism,
a cube with one face rotated by 45$^\circ$ so that the distances
between neighboring vertices are all the same. The solution to
Thomson's problem is obtained by stretching this cubic antiprism along
the Z-axis, thereby introducing two different nearest-neighbor
distances and further lowering symmetry \cite{Marx70,Ashby86,Erber91}.
One would expect that a similar configuration solves the Majorana
problem too, but, surprisingly, this is not the case. The `asymmetric
pentagonal dipyramid' shown in \fig{bloch_8}(b) is numerically found
to have the highest amount of entanglement. An analytic form of this
positive state is not known, but it can be numerically approximated to
a very high precision. The state is $| \Psi_{8} \rangle \approx 0.672
| S_{1} \rangle + 0.741 | S_{6} \rangle$, and its entanglement is
$E_{\text{G}} ( | \Psi_{8} \rangle ) \approx 2.45$. For comparison,
the regular cube yields $E_{\text{G}} ( | \psi_{\text{cube}} \rangle )
= \log_2 (24/5) \approx 2.26$. Interestingly, the positive state $|
\Psi_{8} \rangle$ has a higher amount of entanglement than any state
of the antiprism form, all of which are non-positive. Furthermore,
two of the MPs of $| \Psi_{8} \rangle$ coincide, akin to the W state
of three qubits.
\begin{figure}[ht]
\begin{center}
\begin{overpic}[scale=.5]{figure8a}
\put(-7.5,0){(a)}
\end{overpic}
\hspace{5mm}
\begin{overpic}[scale=.5]{figure8b}
\put(-7.5,0){(b)}
\end{overpic}
\end{center}
\caption{\label{bloch_8} The maximally entangled antiprism state is
shown in (a), while the `asymmetric pentagonal dipyramid state' in
(b) is conjectured to be the maximally entangled state.}
\end{figure}
$| \Psi_{8} \rangle$ is positive, with two MPs lying on the north
pole, one on the south pole and the other five on a circle below the
equator. The exact inclination of this circle as well as the
inclination of the two circles with the CPPs is not known, but can be
approximated numerically.
\subsection{Nine qubits}\label{majorana_nine}
For nine points, the solutions to T\'{o}th's and Thomson's problems
are slightly different manifestations of the same geometric form
(3-3-3), with neighboring triangles positioned asymmetrically.
This is also known as a triaugmented triangular prism. In contrast to
this, the Majorana problem is numerically solved by $| \Psi_{9}
\rangle = 1 / \sqrt{2} ( | S_{2} \rangle + | S_{7} \rangle )$, shown
in \fig{bloch_9}. This is again a positive state with a rotational
Z-axis symmetry and with coinciding MPs.
\begin{figure}
\begin{center}
\begin{overpic}[scale=.5]{figure9}
\end{overpic}
\end{center}
\caption{\label{bloch_9} The `pentagonal dipyramid state' with both
of the poles occupied by two MPs.}
\end{figure}
The CPPs can be found analytically in the same way as for seven
qubits. With the substitution $x := \cos^2 \theta$, the inclination
$\theta$ of the CPPs in the northern hemisphere (or $\pi - \theta$ for
the CPPs in the southern hemisphere) follows from the real root of $81
x^3 + 385 x^2 - 245 x + 35 = 0$ in the interval $[0,0.3]$. The
approximate amount of entanglement is $E_{\text{G}} ( | \Psi_{9}
\rangle ) \approx 2.554$.
\subsection{Ten qubits}\label{majorana_ten}
The solution to T\'{o}th's problem is an arrangement of the form
(2-2-4-2), while Thomson's problem is solved by the gyroelongated
square bipyramid, a deltahedron that arises from a cubic antiprism by
placing square pyramids on each of the two square surfaces.
The ten-qubit case is distinct in two respects. It is the first case
where the numerically determined positive solution is not rotationally
symmetric around any axis. Furthermore, we found non-positive states
with higher entanglement than the conjectured solution for positive
states.
A numerical search returns a state of the form $| \Psi_{10} \rangle =
\alpha | S_{0} \rangle + \beta | S_{4} \rangle + \gamma | S_{9}
\rangle$ as the positive state with the highest entanglement, namely
$E_{\text{G}} ( | \Psi_{10} \rangle ) \approx 2.6798$. From Lemma
\ref{rot_symm} it is clear that this state is not rotationally
symmetric around the Z-axis. The MP distribution is shown in
\fig{bloch_10}(a). The state has only three CPPs, which are all
positive (cf. Theorem \ref{maj_max_pos_zero}), but there are six other
local maxima of $g( \sigma )$ with values close to the CPPs. Their
positions are shown by dashed crosses in \fig{bloch_10}(a). While the
total MP distribution is not rotationally symmetric around the Z-axis,
one would expect from the numerical results that the MPs form two
horizontal planes, one with five MPs and another with four MPs, with
equidistantly spread out MPs. However, this is not the case, as the
locations of the MPs deviate by small, but significant, amounts from
this simple form.
\begin{figure}[hb]
\begin{center}
\begin{overpic}[scale=.35]{figure10a}
\put(-7.5,0){(a)}
\end{overpic}
\hspace{2mm}
\begin{overpic}[scale=.35]{figure10b}
\put(-7.5,0){(b)}
\end{overpic}
\hspace{2mm}
\begin{overpic}[scale=.35]{figure10c}
\put(-7.5,0){(c)}
\end{overpic}
\end{center}
\caption{\label{bloch_10} The numerically determined maximally
entangled positive state is shown in (a). A similarly highly
entangled positive state with a rotational symmetry is shown in
(b). The candidate for the general case is shown in (c).}
\end{figure}
Interestingly, there is a fully rotationally symmetric positive state
that comes very close to $| \Psi_{10} \rangle$ in terms of
entanglement. Its straightforward form is $| \psi_{10} \rangle = 1 /
\sqrt{2} ( | S_{2} \rangle + | S_{8} \rangle )$, as displayed in
\fig{bloch_10}(b). The 12 CPPs of this state are easily found as the
solutions of a quadratic equation. The two positive CPPs are
\begin{equation}\label{10_sigma}
\begin{split}
| \sigma_{1} \rangle & = \tfra{1}{\sqrt{3 - \sqrt{3}}} \, | 0
\rangle + \tfra{1}{\sqrt{3 + \sqrt{3}}} \, | 1 \rangle \enspace , \\
| \sigma_{2} \rangle & = \tfra{1}{\sqrt{3 + \sqrt{3}}} \, | 0
\rangle + \tfra{1}{\sqrt{3 - \sqrt{3}}} \, | 1 \rangle \enspace ,
\end{split}
\end{equation}
and the entanglement is $E_{\text{G}} ( | \psi_{10} \rangle ) = \log_2
(32/5) \approx 2.6781$. This is less than $0.1 \%$ difference from $|
\Psi_{10} \rangle$.
The solution to Thomson's problem, recast as a quantum state of the
form $| \Psi_{10} ' \rangle = \alpha | S_{1} \rangle + \beta | S_{5}
\rangle - \alpha | S_{9} \rangle$, is not positive and has an
entanglement of $E_{\text{G}} \approx 2.7316$. From numerics one can
see that the entanglement of this state can be further increased by
slightly modifying the coefficients, arriving at a state with eight
CPPs and an entanglement of $E_{\text{G}} ( | \Psi_{10} ' \rangle )
\approx 2.7374$. The state is shown in \fig{bloch_10}(c), and we
propose it as a candidate for the maximally entangled symmetric state
of ten qubits.
\subsection{Eleven qubits}\label{majorana_eleven}
The solution to T\'{o}th's problem is a pentagonal antiprism with a
pentagonal pyramid on one of the two pentagonal surfaces, or
(1-5-5). The solution to Thomson's problem is of the form (1-2-4-2-2).
Analogous to the ten-qubit case, the numerically found positive state
of 11 qubits with maximal entanglement is not rotationally
symmetric. The state, shown in \fig{bloch_11}(a), is of the form $|
\Psi_{11} \rangle = \alpha | S_{1} \rangle + \beta | S_{5} \rangle +
\gamma | S_{10} \rangle$, and its entanglement is $E_{\text{G}} ( |
\Psi_{11} \rangle ) \approx 2.77$. The state has only two CPPs, but
there exist seven more local maxima with values close to the CPPs.
\begin{figure}[hb]
\begin{center}
\begin{overpic}[scale=.5]{figure11a}
\put(-7.5,0){(a)}
\end{overpic}
\hspace{5mm}
\begin{overpic}[scale=.5]{figure11b}
\put(-7.5,0){(b)}
\end{overpic}
\end{center}
\caption{\label{bloch_11} The conjectured maximally entangled
positive state of 11 qubits is shown in (a), while the candidate
for the general case is shown in (b).}
\end{figure}
The solution to T\'{o}th's problem, which is of the form $| \psi_{11}
' \rangle = \alpha | S_{0} \rangle + \beta | S_{5} \rangle - \gamma |
S_{10} \rangle$, yields very low entanglement, but by modifying the
coefficients of this non-positive state one can find a state $|
\Psi_{11} ' \rangle$ which is even more entangled than $| \Psi_{11}
\rangle$. As shown in \fig{bloch_11}(b), the state is rotationally
symmetric around the Z-axis and has 11 CPPs. The entanglement is
$E_{\text{G}} ( | \Psi_{11} ' \rangle ) \approx 2.83$, making the
state the potentially maximally entangled state of 11 qubits.
\subsection{Twelve qubits}\label{majorana_twelve}
For 12 points, both of the classical problems are solved by the
icosahedron, a Platonic solid. Because the icosahedron cannot be cast
as a positive state, the numerical search for positive states yields a
different state of the form $| \Psi_{12} ' \rangle = \alpha | S_{1}
\rangle + \beta | S_{6} \rangle + \alpha | S_{11} \rangle$. From
\fig{bloch_12}(a) it can be seen that this state can be thought of as
an icosahedron with one circle of MPs rotated by 36$^\circ$ so that it
is aligned with the MPs of the other circle. There are three circles
of CPPs with five in each circle. One of these planes coincides with
the equator, so that $\vert \sigma \rangle = 1 / \sqrt{2}
(\vert0\rangle + \vert1\rangle)$ is a trivial CPP. Nevertheless, the
exact locations of some of the MPs and CPPs are unknown. The
approximate entanglement is $E_{\text{G}} ( | \Psi_{12} ' \rangle )
\approx 2.99$.
\begin{figure}[ht]
\begin{center}
\begin{overpic}[scale=.5]{figure12a}
\put(-7.5,0){(a)}
\end{overpic}
\hspace{5mm}
\begin{overpic}[scale=.5]{figure12b}
\put(-7.5,0){(b)}
\end{overpic}
\end{center}
\caption{\label{bloch_12} An orientation of the maximally entangled
positive symmetric state of 12 qubits is shown in (a). The
icosahedron state, shown in (b), is considered to be the maximally
entangled of all symmetric 12 qubit states.}
\end{figure}
Due to the high symmetry present in Platonic solids, the `icosahedron
state' is a strong candidate for the maximally entangled symmetric state
of twelve qubits. The state can be cast with real coefficients $|
\Psi_{12} \rangle = \tfra{\sqrt{7}}{5} | S_{1} \rangle -
\tfra{\sqrt{11}}{5} | S_{6} \rangle - \tfra{\sqrt{7}}{5} | S_{11}
\rangle$, and its MPs can be easily derived from the known angles and
distances in the icosahedron.
\begin{equation}\label{12_maj}
\begin{split}
& | \phi_{1} \rangle = | 0 \rangle \, , \quad
| \phi_{12} \rangle = | 1 \rangle \enspace , \\
& | \phi_{2,3,4,5,6} \rangle = \sqrt{ \tfra{3+\sqrt{5}}{5
+\sqrt{5}}} \, | 0 \rangle + \text{e}^{\text{i} \kappa}
\sqrt{\tfra{2}{5+\sqrt{5}}} \, | 1 \rangle \enspace , \\
& | \phi_{7,8,9,10,11} \rangle = \sqrt{ \tfra{2}{5+\sqrt{5}}}
\, | 0 \rangle + \text{e}^{\text{i} (\kappa + \pi / 5 )} \sqrt{
\tfra{3+\sqrt{5}}{5 +\sqrt{5}}} \, | 1 \rangle \enspace ,
\end{split}
\end{equation}
with $\kappa = 0, \tfra{2 \pi}{5}, \tfra{4 \pi}{5}, \tfra{6 \pi}{5},
\tfra{8 \pi}{5}$. From numerics and from the Z-axis rotational
symmetry, it is evident that there are 20 CPPs, one at the center of
each face of the icosahedron. As in the six-qubit case, the
MPs appear as diametrically opposite pairs, forcing the CPPs to be as
remote from the MPs as possible. The CPPs are
\begin{equation}\label{12_sigma}
\begin{split}
| \sigma_{1,\ldots ,5} \rangle & = \text{a}_{+} | 0 \rangle +
\text{e}^{\text{i} (\kappa + \pi / 5 )} \, \text{a}_{-} | 1
\rangle \enspace , \\
| \sigma_{6,\ldots ,10} \rangle & = \text{b}_{+} | 0 \rangle +
\text{e}^{\text{i} (\kappa + \pi / 5 )} \, \text{b}_{-} | 1
\rangle \enspace , \\
| \sigma_{11,\ldots,15} \rangle & = \text{b}_{-} | 0 \rangle +
\text{e}^{\text{i} \kappa} \, \text{b}_{+} | 1 \rangle \enspace , \\
| \sigma_{16,\ldots,20} \rangle & = \text{a}_{-} | 0 \rangle +
\text{e}^{\text{i} \kappa} \, \text{a}_{+} | 1 \rangle \enspace ,
\end{split}
\end{equation}
with $\kappa = 0, \tfra{2 \pi}{5}, \tfra{4 \pi}{5}, \tfra{6
\pi}{5}, \tfra{8 \pi}{5}$ and
\begin{equation}\label{12_sigma_coeff}
\begin{split}
\text{a}_{\pm} & = \sqrt{\frac{1}{2} \pm \frac{1}{2}
\sqrt{\frac{5+2 \sqrt{5}}{15}}} \enspace , \\
\text{b}_{\pm} & = \sqrt{\frac{1}{2} \pm \frac{1}{2}
\sqrt{\frac{5-2 \sqrt{5}}{15}}} \enspace .
\end{split}
\end{equation}
With the knowledge of the exact positions of the MPs and CPPs, the
entanglement of the icosahedron state can be calculated as
$E_{\text{G}} ( | \Psi_{12} \rangle ) = \log_2 (243/28) \approx
3.1175$. \Fig{icosahedronpic} shows a spherical plot of the overlap
function $g( \sigma ) = | \langle \Psi_{12} | \sigma \rangle^{\otimes
12} |$ from the same viewpoint as in \fig{bloch_12}(b). Due to
their diametrically opposite pairs, the MPs coincide with the zeros in
this plot. The CPPs can be identified as the maxima of $g( \sigma )$.
\begin{figure}[ht]
\begin{center}
\begin{overpic}[scale=.20]{figure13}
\end{overpic}
\end{center}
\caption{\label{icosahedronpic} (color online) A spherical plot of
the overlap function $g( \sigma ) = |\langle \Psi_{12} | \sigma
\rangle^{\otimes 12}|$ for the icosahedron state $| \Psi_{12}
\rangle$.}
\end{figure}
\section{Discussion}\label{discussion}
The MP distribution of highly entangled states can be explained with
the overlap function $g(\sigma) = |\langle \psi | \sigma
\rangle^{\otimes n}|$ seen in \fig{icosahedronpic}.
\app{normalization_bloch} states that the integration volume of
$g(\sigma)^2$ over the sphere is the same for all symmetric
states. Therefore a bunching of the MPs in a small area of the sphere
would lead to high values of $g(\sigma)^2$ in that area, thus lowering
the entanglement. This explains the tendency of MPs to spread out as
far as possible, as it is seen for the classical problems. However,
there also exist highly entangled states where two or more MPs
coincide (as seen for $n =3,8,9$). This is intriguing because such
configurations are the least optimal ones for classical
problems. Again, this can be explained with the constant integration
volume of $g(\sigma)^2$. Because of $g(\sigma)^2 \propto \prod_i
|\langle \phi_i | \sigma \rangle |^2$, the zeros of $g(\sigma)^2$ are
the diametrically opposite points (antipodes) of the MPs and therefore
a lower number of \emph{different} MPs leads to a lower number of
zeros in $g(\sigma)^2$. This can lead to the integration volume being
more evenly distributed over the sphere, thus yielding a higher amount
of entanglement.
Excluding the Dicke states with their infinitely many CPPs, one
observes that highly entangled states tend to have a large number of
CPPs. The prime example for this is the case of five qubits, where
the classical solution with only three CPPs is less entangled than the
`square pyramid' state that has five CPPs. In Theorem
\ref{maj_max_pos_zero} it was shown that $2n-4$ is an upper bound on
the number of CPPs of positive symmetric $n$ qubit states. For all of
our numerically determined maximally entangled states -- including the
non-positive ones -- this bound is obeyed, and for most states the
number of CPPs is close to the bound ($n=5,8$) or coincides with it
($n = 4,6,7,12$). This raises the question whether this bound also
holds for general symmetric states. To date, neither proof nor
counterexample is known.
When viewing the $n$ MPs of a symmetric state as the vertices of a
polyhedron, Euler's formula for convex polyhedra yields the upper
bound $2n-4$ on the number of faces. This bound is attained if no pair
of MPs coincides and all polyhedral faces are triangles.
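The counting behind this bound is standard: for a convex polyhedron with
$V = n$ vertices and only triangular faces, each edge borders two faces, so
$2E = 3F$, and Euler's formula gives
\[
V - E + F = 2, \qquad 2E = 3F \;\; \Longrightarrow \;\; F = 2V - 4 = 2n - 4 \, .
\]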
Intriguingly, this bound is the same as the one for CPPs mentioned in
the previous paragraph, and the polyhedral faces of our numerical
solutions come close to the bound ($n=5,8,11$) or coincide with it
($n=4,6,7,10,12$). The faces of the polyhedron associated with the
Majorana representation might therefore hold the key to a proof for
$2n-4$ being the upper bound on the number of CPPs for all symmetric
states (with only the Dicke states excluded).
The case of ten qubits seems to be the first one where the maximally
entangled symmetric state cannot be cast as a positive state. For
$n=10,11,12$, our candidates for maximal entanglement are real states,
so the question remains whether the maximally entangled state can
always be cast with real coefficients. We consider this unlikely,
firstly because of the higher amount of MP freedom in the general case
and secondly because many of the solutions to the classical problems
cannot be cast as real MP distributions for higher $n$. For Thomson's
problem, the first distribution without any symmetry (and thus no
representation as a real state) arises at $n=13$, and for T\'{o}th's
problem at $n=15$.
Upper and lower bounds on the maximal entanglement of symmetric states
have already been discussed in \sect{positive_and_symm}, with a new
proof for the upper bound given in \app{normalization_bloch}.
Stronger lower bounds can be computed from the known solutions to
T\'{o}th's and Thomson's problems by translating their point
distribution into the corresponding symmetric state and determining
its entanglement. The diagram in \fig{e_graph} displays the
entanglement of our numerical solutions, together with all bounds.
\begin{figure}
\begin{center}
\begin{overpic}[scale=1.1]{figure14}
\end{overpic}
\end{center}
\caption{\label{e_graph} (color online) Scaling of maximal symmetric
entanglement with the number of qubits $n$. The upper bound is
represented by a black line, while the most entangled Dicke states
form the lower bound. Their Stirling approximation is displayed
as a grey line. The most entangled positive symmetric states
found are denoted by blue crosses. For $n = 10-12$, the best
candidates cannot be cast with positive coefficients and are
depicted as red stars. The solutions of T\'{o}th's and Thomson's
problems readily provide lower bounds, displayed as dashed green
lines. The solutions of Thomson's problem are generally more
highly entangled than those of T\'{o}th's problem.}
\end{figure}
For $n > 5$ qubits, the maximally entangled state cannot be symmetric
or turned into one by LOCC, because the lower bound on general states
is higher than the upper bound of symmetric states. For $n=3$, the
maximally entangled state (W state) is demonstrably symmetric, but for
$n = 4,5$ the numerical solutions for symmetric states have less
entanglement than the lower bound of general states. This would imply
that $n$ qubit maximally entangled states can be symmetric if, and
only if, $n \leq 3$.
\section{Conclusion}\label{conclusion}
In this paper, we have investigated the maximally entangled state
among symmetric quantum states of $n$ qubits. By visualizing symmetric
states through the Majorana representation and with the help of
analytical and numerical methods, strong candidates for the maximally
entangled symmetric states of up to 12 qubits were found. A comparison
with the extremal distributions of T\'{o}th's and Thomson's problems
shows that, in some cases, the optimal solution to Majorana's problem
coincides with that of the two classical problems, but in other cases
it significantly differs.
Lower and upper bounds show that the maximal entanglement of
permutation-symmetric qubit states scales between $\mathcal{O} (\log
\sqrt{n})$ and $\mathcal{O} (\log n)$ with the number of qubits $n$. With
respect to MBQC, these results indicate that, although
permutation-symmetric states may not be good resources for
deterministic MBQC, they may be good for stochastic approximate MBQC
\cite{Nest07,Mora10}. It also gives bounds on how much information can
be discriminated locally, for which explicit protocols are known in
some cases (in particular for Dicke states)
\cite{Hayashi06,Hayashi08}.
We remark that, due to the close relationship of distance-like
entanglement measures \cite{Hayashi06}, the results for the geometric
measure give bounds to the robustness of entanglement and the relative
entropy of entanglement, which can be shown to be tight in certain
cases of high symmetry \cite{Hayashi08,Markham10}.
We finally note that a similar study has been carried out, although
from a different perspective, in search of the `least classical' state
of a single spin-$j$ system \cite{Giraud10} (which they call the
`queens of quantumness'). There the Majorana representation is used to
display spin-$j$ states, through the well-known isomorphism between a
single spin-$j$ system and the symmetric state of $n=2j$ spin-$1/2$
systems. In this context, the most `classical' state is the spin
coherent state, which corresponds exactly to a symmetric product state
in our case (i.e. $n$ coinciding MPs). The problem of \cite{Giraud10}
is similar to ours in that they look for the state `furthest away'
from the set of spin coherent states. However, different distance
functions are used, so the optimization problem is subtly different
and again yields different solutions. It is in any case interesting to
note that our results also have interpretations in this context and
vice versa.
\begin{acknowledgments}
The authors thank S.~Miyashita, S.~Virmani, A.~Soeda and
K.-H.~Borgwardt for very helpful discussions. This work was
supported by the National Research Foundation \& Ministry of
Education, Singapore, and the project `Quantum Computation: Theory
and Feasibility' in the framework of the CNRS-JST Strategic
French-Japanese Cooperative Program on ICT. MM thanks the `Special
Coordination Funds for Promoting Science and Technology' for
financial support.
\emph{Note added.} During the completion of this manuscript, we
became aware of very similar work that also looks at the maximum
entanglement of permutation-symmetric states using very similar
techniques \cite{Martin10}.
\end{acknowledgments}
\section{Introduction}
A system of interacting particles in a sinusoidal external potential
(the Frenkel-Kontorova (FK) model \cite{frenk}) is widely used to describe
a broad variety of physical phenomena, such as statics and dynamics of
incommensurate phases (see, e.g. \cite{phas}), transport properties in
quasi-one dimensional conductors (see Ref. 3 and references therein),
adatom diffusion on a metal surface \cite{adat}, etc. The peculiar features of
the FK model usually explored are related to the kink-like solitons.
Properties of the kinks have been described in a number of publications \cite
{kink1,kink2,kink3,kink4,kivsh1,kivsh2}. The dynamics of the FK model has
also been extensively studied, but mostly in relation to the dynamics of
incommensurate superstructures rather than to the single kink \cite
{dyn1,dyn2,dyn3,dyn4}. However, it is not yet completely clear for what system
parameters the single-kink effects can still be important.
Incommensurate charge density wave (CDW) conductors are physical systems
for which the single-kink effects can be very important. The vibration and
transport properties of these systems attract great interest due to numerous
peculiarities. The most striking among them are: i) the giant peak of
unknown origin in the low frequency infrared (IR) spectrum of a number of
inorganic CDW conductors such as $K_{0.3}MoO_3$, $(TaSe_4)_2I$ and $TaS_3$
\cite{don,kim}; ii) nonlinear conductivity and noise generation when these
materials conduct a dc current.
The aim of the present study is to investigate the impact of both a single kink
and a kink lattice on the IR-active phonon spectrum and to specify the range of
the model parameters in which its properties can be treated in terms of
nearly independent kinks rather than in terms of a superstructure associated
with the kink lattice. I believe that after comparison of the numerical
results with the experimental data and proper justification of the model it
can serve as a basis for understanding the microscopic nature of the
aforementioned peculiar features of CDWs.
The investigations were performed with two approaches: i) molecular dynamics
(MD) simulation was used for the system to reach an equilibrium state
according to the method proposed in Ref.\cite{aubry}, after which all the
particles were subjected to a small uniform step-like displacement and the
subsequent vibrations were analyzed via Fourier transformation; ii) the
eigenvector problem (EVP) was solved in the harmonic approximation to study
the vibration spectrum of the system. The kinks in this case were taken into
account through expansion of the potential energy around the particle
equilibrium positions obtained from the MD simulation.
\section{Vibration spectrum in the presence of kinks}
Let us consider a chain of particles of mass $m$ and charge $e$ with nearest
neighbor interaction in the sinusoidal external potential $V\left( x\right)
=-\frac{V\cdot a^2}{4\pi ^2}\cdot \cos \left( 2\pi \cdot \frac xa\right) $
where $a$ is the potential period. In case of harmonic interparticle
interaction the motion equation for $n$-th particle is
\begin{equation}
m\cdot \frac{\partial ^2U_n}{\partial t^2}+\gamma \cdot \frac{\partial U_n}{%
\partial t}+K_2\cdot (2U_n-U_{n-1}-U_{n+1})+\frac{V\cdot a}{2\pi }\cdot \sin
\left( 2\pi \cdot \frac{U_n}a\right) =e\cdot E(t), \label{eq1}
\end{equation}
where $\gamma $ is phenomenological damping and $E(t)$ is external electric
field. Assume that the time-dependent position $U_n$ of the particle can be
represented as $U_n(t)=n\cdot a+U_n^0+\delta _n(t)$, where $U_n^0$ is a
quasistatic variable describing a shift of the equilibrium position of the
particle with respect to the corresponding potential minimum, and $\delta _n(t)$
describes a vibration of the particle around the new equilibrium position $%
U_n^0$. Then, assuming $\delta _n(t)=\delta _n(\omega )\cdot \exp (i\omega
\cdot t)$ and $E(t)=E_0\cdot \exp (i\omega \cdot t)$, Eq. (\ref{eq1}) can
be split into two equations
\begin{equation}
K_2\cdot (2U_n^0-U_{n-1}^0-U_{n+1}^0)+\frac{V\cdot a}{2\pi }\cdot \sin
\left( 2\pi \cdot U_n^0\right) =0, \label{eq2}
\end{equation}
\begin{equation}
\delta _n(\omega )\cdot \left[ V\cdot \cos \left( 2\pi \cdot U_n^0\right)
-\omega ^2+i\omega \cdot \gamma \right] +\frac{K_2}m\cdot (2\delta _n(\omega
)-\delta _{n-1}(\omega )-\delta _{n+1}(\omega ))=\frac em\cdot E_0.
\label{eq3}
\end{equation}
Disregarding the trivial case $U_n^0=0$, when the number of particles $N_{part}$
is equal to the number of potential minima $N_{pot}$, Eq. (\ref{eq2})
describes a quasistatic kink-like deformation of the chain (due to the neglect
of the dynamical term we restrict our consideration to standing kinks only).
Eq. (\ref{eq3}) describes the particle vibration around the new equilibrium
position. In the continuum limit Eq. (\ref{eq2}) reduces to the sine-Gordon
equation \cite{SineGord} with the single-kink solution \cite{kinksol}: $%
U_n^0(i)=2a\cdot \pi ^{-1}\cdot \arctan \left\{ \exp \left[ \pm \frac{2\cdot
(n-i)\cdot a}{R_k}\right] \right\} $, where $R_k=2a\sqrt{\frac{K_2}V}$ can be
considered as the kink radius and $i$ is the kink position. Substituting this
solution into Eq. (\ref{eq3}) one can obtain the complex susceptibility
$\chi (\omega )=\frac 1{E_0}\cdot \sum \delta _n(\omega )$, where the peaks
in $\mathop{\rm Im}\left( \chi (\omega )\right) $ correspond to resonances
$\omega _r$ and $\mathop{\rm Re}\left( \delta _n(\omega _r)\right) $
corresponds to a suitably normalized eigenvector of the mode at $\omega _r$.
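The EVP route is easy to reproduce. The sketch below (an illustrative
implementation with assumed parameter values, $m=a=1$, $\gamma =0$, and an
open chain) builds the dynamical matrix of Eq. (\ref{eq3}) on the single-kink
profile and diagonalizes it; the lowest mode is the phason-like mode, which is
only approximately at zero frequency because the continuum profile is used
instead of the exact discrete equilibrium.
\begin{verbatim}
import numpy as np

N, K2, V = 201, 4.0, 1.0
Rk = 2.0 * np.sqrt(K2 / V)                 # kink radius R_k = 2a*sqrt(K2/V)
n = np.arange(N)
u0 = (2.0 / np.pi) * np.arctan(np.exp(2.0 * (n - N // 2) / Rk))

D = np.diag(V * np.cos(2.0 * np.pi * u0) + 2.0 * K2)
D[0, 0] -= K2; D[-1, -1] -= K2             # free ends: one neighbour only
D -= K2 * (np.eye(N, k=1) + np.eye(N, k=-1))

w2, vec = np.linalg.eigh(D)
omega = np.sqrt(np.clip(w2, 0.0, None))
print(omega[:3])                           # phason-like mode near zero
print((vec.sum(axis=0) ** 2)[:3])          # IR weights |sum_n v_n|^2
\end{verbatim}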
Fig.1 shows the $\omega (k)$ plot without (a) and with (b-c) kinks in the
model system. One can see the smearing of the resonances in the presence of a
single kink (Fig.1b), while this smearing is gone in the system with the kink
lattice (Fig.1c). Instead, the phonon band folding due to the kink lattice
and an additional low frequency vibration arise. The latter is a so-called
phason, which is related to translational motion of the kink(s) (domain
wall(s)). In the case of a negligible Peierls-Nabarro potential barrier the
phason frequency tends to zero. The phason is IR active and looks like a steep
increase at $\omega \longrightarrow 0$ in the optical conductivity spectrum
$\sigma \left( \omega \right) =\omega \cdot \mathop{\rm Im}\left( \chi
(\omega )\right) $ (see Fig.2). The high frequency peaks in
Fig.2 correspond to phonons, the strongest one being related to the in-phase
vibration of the particles situated near the bottom of the potential wells.
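The MD route of approach (i) can be sketched just as compactly. Below is an
illustrative integration of Eq. (\ref{eq1}) (assumed parameter values,
$m=a=e=1$, zero field, cyclic boundary conditions, and a simple semi-implicit
Euler scheme): a small uniform step displacement is applied, and the Fourier
transform of the total dipole moment exposes the resonances.
\begin{verbatim}
import numpy as np

N, K2, V, gam = 128, 4.0, 1.0, 0.05
dt, steps = 0.02, 2**15
u = np.full(N, 0.01)              # small uniform step displacement
v = np.zeros(N)
P = np.empty(steps)               # dipole moment ~ e * sum_n u_n
for t in range(steps):
    f = (-gam * v - V / (2*np.pi) * np.sin(2*np.pi*u)
         + K2 * (np.roll(u, 1) + np.roll(u, -1) - 2*u))
    v += f * dt                   # semi-implicit Euler step
    u += v * dt
    P[t] = u.sum()
w = 2*np.pi*np.fft.rfftfreq(steps, dt)
spec = np.abs(np.fft.rfft(P - P.mean()))
print(w[np.argmax(spec)])         # ~ sqrt(V) = 1 for the kink-free chain
\end{verbatim}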
\section{Similarity between kink and the force constant defect}
The smearing of the vibrational states in the presence of a single kink shown
in Fig.1b suggests that the kink is acting like a point defect. Moreover,
the particles involved in the kink formation are almost completely
eliminated from the high frequency phonon-like normal modes while these
particles obviously possess a higher vibration amplitude (local vibration
density) at low frequencies (see Fig.3). This is quantitatively illustrated
in Fig.4 where the eigenvector of the phason mode and that of the IR phonon
mode are shown. Accordingly, one could try to describe the vibration
properties of kinks in terms of localized vibrations around a force constant
defect. That means the original incommensurate ($N_{part}\neq N_{pot}$) FK
model is replaced by a commensurate one (or the ordinary harmonic chain of
particles with harmonic on-site and interparticle potentials) and some
particle sites possess a defect force constant. The strength of the defect
has been determined from the equation \cite{Barker}
\begin{equation}
1+\frac{\Delta V}N\cdot \sum\limits_{k=\frac \pi N}^\pi \frac 1{V+4K_2\cdot
\sin ^2\left( \frac k2\right) }=0, \label{eq4}
\end{equation}
which means the formation of a zero-frequency gap mode in the vicinity of the
defect site. As shown in Fig.4, the eigenvector of the gap mode is very
close to that of the phason, while the eigenvectors of the phonon-like mode
nearly coincide in both cases. The corresponding spectrum of the 1D crystal
with the force constant defect is also shown in Fig.2. Note that the
localization length of the gap mode $S_{gap}$ (the halfwidth of the peak
shown by the dashed line in Fig.4) is equal to $R_k/\sqrt{2}$ in a wide range
of $R_k$ values (see the inset in Fig.4).
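Solving Eq. (\ref{eq4}) for $\Delta V$ is elementary; the following sketch
(our illustration, with assumed values of $N$, $K_2$ and $V$) gives the defect
strength that places the gap mode at zero frequency.
\begin{verbatim}
import numpy as np

N, K2, V = 1024, 4.0, 1.0
k = np.pi * np.arange(1, N + 1) / N        # k = pi/N, 2*pi/N, ..., pi
S = np.mean(1.0 / (V + 4.0 * K2 * np.sin(k / 2.0) ** 2))
dV = -1.0 / S                              # from 1 + (dV/N) * sum = 0
print("defect force constant dV =", dV)    # negative: a softened site
\end{verbatim}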
Thus, one may consider the system with kinks as a defect, or impurity,
crystal, taking for the description of its vibration properties all the
results already known. For instance, it is well understood that $S_{gap}$ is
determined basically by the splitting of the gap mode from the bottom of the
optical band $\omega _0=\sqrt{V}$ and by the optical bandwidth $2\sqrt{K_2}$%
. One may argue therefore that the similarity between the phason and the gap
mode eigenvectors and the phason eigenvector itself do not depend on the
potential anharmonicity provided that its influence on the above mentioned
parameters is small enough. It is thought therefore that the obtained
results will be applicable to a more realistic interparticle potential too.
From the analogy between the kinks and the defects it follows also that the
IR phonon mode intensity will show a linear decrease versus $n_k$ ($n_k=%
\frac{N_k}{N_{part}}$ is the kink concentration and $N_k=|N_{part}-N_{pot}|$
is the number of kinks) for low kink concentrations. This indeed takes
place in a certain range of $R_k$ values.
\section{When the kink effects are important}
Although the $N$-kink solution of Eq. (\ref{eq2}) is also available \cite
{nkinksol}, it is more convenient to approximate it with a sum of
single-kink solutions. Our MD study of the ground state of a system
consisting of 128 particles arranged in $128-N_k$ potential wells with
cyclic boundary conditions showed that even for $N_k\gg 1$ the kink lattice
can be perfectly described as a sum of the single-kink solutions with $%
R_k\simeq 2a\sqrt{\frac{K_2}V}$. Namely, for $N_k=8$ and $K_2=4V$ ($%
R_k^{theor}=4.0a$) the value $R_k^{\exp }\simeq 3.94a$ has been
obtained. Similar results have been obtained also for the case when the
number of potential wells exceeds that of the particles. The dipole moment
spectrum $I\left( \omega \right) =\mathop{\rm Im}\left[ \frac{\sum \delta
_n(\omega )}{E_0}\right] $ has been both calculated
from (\ref{eq3}), substituting $U_n^0=\sum\limits_{i=1}^{N_k}U_n^0(i)$ with $%
R_k=R_k^{\exp }$, and obtained from MD simulation via the
fluctuation-dissipation approach for various values of
$\eta =R_k\cdot n_k$. Both approaches agree
rather well even at very low frequencies although the harmonic approximation
obviously fails at $\omega =0$. Two examples of the particle arrangements
and the corresponding eigenvectors of the IR vibrations are presented in Fig.5.
The eigenvectors for $\eta =0.25$ can be represented as a superposition of
the single-kink eigenvectors, while for $\eta =0.5$ they look quite
different: even those particles which still occupy the potential wells and
are not involved in the kink formation are strongly involved in the
characteristic IR vibration (compare (a) and (b) in Fig.5). It should be
pointed out that there is no noticeable difference between the commensurate
and incommensurate cases (kink lattice period equal and not equal to
an integer number of $a$, respectively) if the kink concentration is not too
high. Otherwise the difference manifests itself in a small shift of the
zero-frequency peak in Fig.2 away from its position at $\omega =0$.
I concentrate here on the question of the intensity of the phonon
peaks in Fig.2 as a function of the parameter $\eta $. The EVP approach (Eq.
(\ref{eq3}) with various values of $\frac{K_2}V$ and $n_k$) was used for
this study. The results are presented in Fig.6. The integrated intensity $%
I_\Sigma =\int I\left( \omega \right) d\omega $ of the phonon peaks reveals
a universal dependence on the parameter $\eta $ \cite{burl}. It was also
found that the eigenvectors of the strongest IR vibration obey some sort of
scaling invariance: the vectors obtained at different $n_k$ but for one and
the same $\eta $ can be transformed into each other by a proper scaling of $a$.
Note that the parameter $\eta $ is the volume fraction (in the 1D case)
occupied by the kinks, and the observed decrease in $I_\Sigma $ at low $\eta $
values can be interpreted as a washing out of the high frequency density of
states by the gap modes associated with kinks. At higher $\eta $, when the
kinks form a real lattice and eventually a sinusoidal superstructure due to
interaction with each other, the decrease $I_\Sigma \propto \eta $ slows
down because the real kink radius cannot exceed one half of the
kink lattice period. Indeed, the linear decrease in $I_\Sigma $ shown in
Fig.6 ends at a cut-off value of $\eta \simeq 0.4$ (for $a=1$), which implies
the above mentioned restriction on the kink radius, $R_k\leq 0.4\cdot
k_s^{-1}$, $(k_s=n_k)$, where $k_s$ is the kink lattice (or superstructure)
wave vector measured in units of $\frac \pi a$. Thus, one can identify a
range of parameters $k_s\cdot \sqrt{\frac{K_2}V}\leq 0.2$ in which the
single-kink effects are important, i.e. the system cannot be explicitly
treated in terms of some sinusoidal superstructure related to the kink
lattice. Since the IR eigenvectors have been argued to be not very sensitive
to anharmonicity, one might expect this criterion to hold for more realistic
potentials too.
\section{CDW collective dynamics}
The low frequency excitation spectrum of the CDW ground state has been
widely investigated both theoretically and experimentally (see, for example,
reviews \cite{rev1,rev2,rev3} and references therein). It is well
established that the incommensurate CDW ground state is characterized by two
specific collective excitations: the IR active phase mode and the Raman active
amplitude mode.\cite{walker} The frequency of the former, $\omega _p,$ in
the incommensurate CDW conductors is of the order of $1cm^{-1}$ while the
amplitude mode frequency $\omega _a$ is about one-two orders of magnitude
higher. These vibrations have been observed experimentally in such model CDW
conductors as $K_{0.3}MoO_3$, $(TaSe_4)_2I$ and $TaS_3$ \cite
{kim,don,trav1,trav2,tom,sherwin,tkim,deg1}.
Besides the phase and the amplitude modes, an additional vibration exhibiting
giant IR activity has been observed in all the above mentioned compounds
\cite{don,kim}. The frequency of this additional feature in $(TaSe_4)_2I$ is
about $38cm^{-1}$, in between the phase ($\sim 1cm^{-1}\;$\cite{twkim}) and
the amplitude ($\sim 90cm^{-1}\;$\cite{tom}) mode frequencies. Several
explanations have been proposed to account for the additional giant IR peak,
but the microscopic origin of this vibration is still not clear (see, for
instance, the discussions in Refs.1 and 15). In the phenomenological model \cite
{deg2} the additional giant IR peak was thought to result from a bound
collective-mode resonance localized around an impurity, but again without
specifying the microscopic origin of the model parameters.
Below it is shown that the giant IR resonance occurs in the incommensurate
CDW system even in the absence of any impurities, provided that the dynamical
charge transfer between adjacent CDW periods is taken into account and the
CDW possesses a kink lattice structure rather than a sinusoidal one.
Since the lattice deformation coupled to the CDW is much smaller than the
crystal lattice constant, it is reasonable to describe the CDW system within
the FK model. The CDW periods in this case are associated with particles of
mass $m$ and charge $e$ which are placed into the sinusoidal external (crystal
lattice) potential $V\left( x\right) =-\frac{V\cdot a^2}{4\pi ^2}\cdot \cos
\left( 2\pi \cdot \frac xa\right) $. The interparticle distance in the
commensurate phase is taken to be equal to $2a$, which means the CDW is
formed by dimerization. In the case of $2N_{part}\neq N_{pot}$ one again
obtains an incommensurate structure (kink lattice) and the time dependent
position $U_n$ of the particle can be represented as $U_n(t)=2\cdot n\cdot
a+U_n^0+\delta _n(t)$.
\subsection{Dynamic charge transfer}
As demonstrated by Itkis and Brill\cite{brill}, spatial
redistribution of the charge condensed in the CDW takes place under the
action of a static electric field. Obviously, the characteristic time for the
charge redistribution or, in other words, for the charge transfer from one CDW
period to another is determined by the amplitude mode frequency ($\sim
90cm^{-1}\;$\cite{tom}). Therefore, at the giant peak frequency
the adiabatic condition is fulfilled.
To take into account the charge transfer contribution to the IR intensity of
any mode let us suppose that the particle charge in our model is determined
as
\begin{equation}
\widetilde{e}_n(t)=e\cdot (1+\beta \cdot (\delta _{n+1}(t)-\delta
_{n-1}(t))), \label{eq5}
\end{equation}
which means that the charge is transferred from a region of local
compression of the CDW to a region of local dilatation. The factor $\beta $
determines the fraction of the particle charge transferred during the vibration.
The dipole moment $P(t)$ is determined as a sum of the part related to the
particle displacements, $P_0(t)$, and the part related to the charge transfer
between adjacent unit cells, $P_{ct}(t)$:
\begin{equation}
\begin{array}{c}
P(t)=P_0(t)+P_{ct}(t)= \\
\mathop{\displaystyle \sum }
\limits_ne\cdot \left[ \delta _n(t)+\beta \cdot \left(
U_n^0-U_{n-1}^0\right) \cdot (\delta _{n+1}(t)-\delta _{n-1}(t)-\delta
_n(t)+\delta _{n-2}(t))\right] .
\end{array}
\label{eq6}
\end{equation}
In the commensurate phase $U_n^0-U_{n-1}^0=a$ for all $n$, so the prefactor is
constant and the sum of $\delta _{n+1}(t)-\delta _{n-1}(t)-\delta _n(t)+\delta
_{n-2}(t)$ over the (cyclic) chain vanishes identically; hence the charge
transfer dipole moment $P_{ct}(t)$ vanishes according to (\ref{eq6}). In the
incommensurate phase the $P_{ct}(t)$ value can be rather high. It will be
shown that the charge transfer effect is essentially determined by the
kink-related disturbance of the periodicity in the particle arrangement.
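The cancellation is easy to see numerically. The sketch below (an
illustration with assumed values of $\beta$ and of the kink-like profile;
cyclic boundary conditions) evaluates Eq. (\ref{eq6}) for a random vibration
pattern $\delta_n$: for a constant $U_n^0-U_{n-1}^0$ the charge-transfer part
sums to zero, while a kink-like profile gives a finite contribution.
\begin{verbatim}
import numpy as np

def P_total(delta, du0, e=1.0, beta=0.1):
    # Eq. (6): displacement part plus charge-transfer part, cyclic chain
    ct = (np.roll(delta, -1) - np.roll(delta, 1)
          - delta + np.roll(delta, 2))
    return e * np.sum(delta + beta * du0 * ct)

rng = np.random.default_rng(0)
N = 128
delta = rng.normal(size=N)
du0_flat = np.ones(N)                          # commensurate: constant
n = np.arange(N)
du0_kink = 1 + 1 / np.cosh((n - N / 2) / 4.0)  # kink-like disturbance

print(P_total(delta, du0_flat) - np.sum(delta))  # ~0: P_ct vanishes
print(P_total(delta, du0_kink) - np.sum(delta))  # finite P_ct
\end{verbatim}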
Using the criterion for the importance of single-kink effects obtained in the
preceding section, one can examine whether the kinks are important for the
description of the charge density wave conductor $(TaSe_4)_2I$. The
superstructure wave
vector in this system is $k_s\simeq 0.085$ \cite{wavevec}, $\sqrt{V/m}$ can
be associated with the giant IR peak frequency $\omega \sim 0.005\,eV$ \cite
{rich} and $\sqrt{K_2}$ can be estimated from the phason dispersion $\sqrt{%
K_2/m}<0.001\,eV$ \cite{phas_disp}. Thus, one obtains $k_s\cdot \sqrt{\frac{%
K_2}V}\ll 1$, which implies that the kink effects can be important in this
compound.
Fig.7a shows a fragment of the MD-simulated arrangement of 128 particles
over 264 potential wells, i.e. the CDW with a superstructure. The 51st
particle (shown by the arrow in the figure) is pinned. The conductivity
spectra $I\left( \omega \right) =\omega \cdot \mathop{\rm Im}\left[ \frac{%
\sum \delta _n(\omega )}{E_0}\right] $ obtained from MD
simulation are shown in Fig.8 for both the pinned and the depinned system. The
features of interest are the phase mode (PM) and the peak of the CT mode
(charge transfer mode), marked by the PM- and CT-arrows in Fig.8, respectively.
The latter peak originates from the vibration with the wave vector
equal to that of the superstructure. The corresponding eigenvectors are
shown in Fig.7b. Taking into account that the CDW internal deformation can
be adiabatically accompanied by charge redistribution, one finds that the
CT peak acquires the giant IR intensity. The conductivity spectra in which
the charge transfer effect has been taken into account according to Eqs. (\ref
{eq5}) and (\ref{eq6}) are shown in Fig.8 by symbols (depinned chain) and a
thin solid line (pinned chain). The phase mode intensity is almost
independent of the charge transfer effect, while the CT mode intensity
increases by several orders of magnitude.
Fig.9 shows that the CT mode intensity $\left( \mathop{\rm Im}\left[ \frac{%
\sum \delta _n(\omega )}{E_0}\right] \right) $ with the charge
transfer contribution decreases with the increase of the parameter $4a\sqrt{%
\frac{K_2}V}$ and possesses a universal dependence regardless of the
superstructure wave vector. Physically this feature can be understood in the
following way. The charge transfer dipole moment consists of a sum of
elementary dipole moments resulting from the charge transfer over the
inter-kink distance. The longer the latter, the higher the elementary
dipole moment, but the smaller the number of these dipole moments. Thus,
the total charge transfer dipole moment does not depend on the kink
concentration (probably unless $4a\sqrt{\frac{K_2}V}<a/n_k$). On the other
hand, this dipole moment strongly depends on the kink-mediated distortion of
the chain. The latter decreases with the increase of the kink radius,
resulting in the observed decrease of the CT mode intensity in Fig.9.
It was experimentally shown that the giant IR peak intensity in $%
(TaSe_4)_2I $ increases with increasing sample temperature \cite
{physb}. This unusual feature can be naturally explained in terms of the
charge transfer effect discussed above. Indeed, as is clear from Fig.9,
the integrated intensity of the CT mode as a function of the system
parameters can be approximated as (see the dashed line in Fig.9)
\begin{equation}
I\simeq C_0\cdot e^2\cdot \left[ \sqrt{\frac{K_2}V}\right] ^{-2.25},
\label{eq7}
\end{equation}
where $C_0$ is a constant. The interparticle force constant obviously
depends on the particle charge, $K_2=e^2K_2^{^{\prime }}$, since the particles
are associated with the charges condensed in the CDW. Then (\ref{eq7}) can be
rewritten as
\begin{equation}
I\simeq C_0\cdot e^{-0.25}\cdot \left[ \sqrt{\frac{K_2^{^{\prime }}}V}%
\right] ^{-2.25}. \label{eq8}
\end{equation}
Thus, despite the decrease of the particle charge (the CDW amplitude), the CT
mode integrated intensity increases upon approaching $T_p\simeq 261K$ from
below! Due to short-range order the particle charge (CDW amplitude) remains
finite even at very high temperatures and accounts for the high CT mode
intensity well above $T_p$.
Note one more peculiarity of the CT peak. It does not depend on the number
of particles in the coherent CDW domain or, in other words, on the
effective mass of the CDW condensate. The latter can explain why the
corresponding frequency has nearly the same value in such different
compounds as $K_{0.3}MoO_3$ and $(TaSe_4)_2I$ \cite{deg1,twkim}.
\section{Summary}
The kink-like solitons in the incommensurate Frenkel-Kontorova model are
investigated with regard to their impact on the vibration spectrum. It is
found that the IR phonon intensity possesses a universal dependence on the
product of the kink radius and the kink concentration, suggesting some sort
of scaling invariance for the corresponding eigenvector. A model
accounting for the giant IR peak in the incommensurate inorganic CDW
conductors is proposed. It is shown that the giant IR peak is related to the
fundamental vibration with the wave vector equal to that of the
superstructure and that the giant IR intensity is caused by the dynamical
charge transfer accompanying the CDW internal motion.
\section{Acknowledgments}
This work was partially supported by Russian Ministry of Science through the
program ''Fundamental Spectroscopy''.
\section{Introduction}
The stability of a group $\Gamma$ (with respect to a class of groups $\mathscr{C}$) means that any almost homomorphism to $\mathscr{C}$ is
close to a homomorphism, see Definition~\ref{stability}. In \cite{1} the notion of defect diminishing was introduced, see Definition~\ref{def_dd1} and
Definition~\ref{def_dd2}. It was shown in \cite{1} that for some classes $\mathscr{C}$ and $\Gamma$-modules $M$ the vanishing of the second cohomology
$H^2(\Gamma,M)$ implies the defect diminishing and that the defect diminishing implies stability. So, the defect diminishing is a kind of
linear stability.
In the present paper, we show that (under weaker assumptions) defect diminishing is equivalent to stability with a linear rate for finitely presented groups.
In particular, this implies that there are stable groups that do not have defect diminishing. Indeed, O. Becker and J. Mosheiff \cite{Oren} showed that the rate of stability
of $\mathbb Z^d$ is polynomial (with respect to symmetric groups with the normalized Hamming distance).
It is worth mentioning that the stability of any abelian group (with respect to symmetric groups with normalized Hamming distance) was proven
by G. Arzhantseva and L. Paunescu in \cite{Goulnara_and_Liviu}.
\section{Stability}
Let $S$ be a finite set of symbols. We denote by $F(S)$ the free group on $S$. Let $R \subseteq F(S)$ be finite and $ \Gamma $ be a finitely presented group $\Gamma=\left\langle S\;|\;R\right\rangle= F(S) / \NC $ where $ \NC $ is the normal subgroup of $F(S)$ generated by $R$. Let $ \mathscr{C} $ be a class of groups, all equipped with a bi-invariant metric. Any map $ \phi: S \rightarrow G $, for a group $ G \in \mathscr{C} $, uniquely determines a homomorphism $ F(S) \rightarrow G $ that we also denote by $ \phi $.
\begin{defn} \cite{1} \label{defdistHomDist} Let $G \in \mathscr{C}$ and let $\phi, \psi: S \rightarrow G$ be maps. The defect of $\phi$ is defined by:
\begin{center}
$\defect (\phi)= \max\limits_{r \in R}d_G(\phi(r),1_G)$
\end{center}
The distance between $\phi$ and $\psi$ is defined by:
\begin{center}
$\dist(\phi,\psi)=\max\limits_{s \in S}d_G(\phi(s),\psi(s))$
\end{center}
The homomorphism distance of $\phi$ is defined by:
\begin{center}
$\HomDist(\phi)= \inf\limits_{\pi \in Hom(\Gamma, G)} \dist(\phi, \pi\restriction_S)$
\end{center}
Let $\langle \mathscr{C}^S \rangle= \bigcup\limits_{G \in \mathscr{C} } G^S$ where $G^S= \lbrace \phi: S \rightarrow G \rbrace$; that is, $\langle \mathscr{C}^S \rangle$ is the set of all possible maps $\phi: S \rightarrow G$ for $G \in \mathscr{C} $.
\end{defn}
\begin{defn} \label{stability} \cite{Thom} A finitely presented group $\Gamma$ is called $ \mathscr{C} $-stable if for all $\epsilon >0 $ there exists $\delta>0$ such that
for all $\phi \in \langle \mathscr{C}^S \rangle$ the inequality
$\defect (\phi) < \delta$ implies $\HomDist(\phi) < \epsilon$. Let us restate it to avoid ambiguity:
$$
\forall \epsilon >0 \,\exists \delta>0 \,\forall \phi \in \langle \mathscr{C}^S \rangle\;\left(
\defect(\phi) < \delta \Rightarrow \HomDist (\phi) < \epsilon \right).
$$
\end{defn}
\begin{rem}The stability of $\Gamma$ does not depend on the particular choice of the presentation of the group $ \Gamma $ (see \cite{Goulnara_and_Liviu}): Tietze transformations preserve stability since the metric is bi-invariant. The stability of a group does depend on the class $ \mathscr{C} $.
\end{rem}
Interesting examples $ \mathscr{C} =\{(G_n,d_n)\;|\;n\in \mathbb N\}$ are:
\begin{enumerate}[(1)]
\item \label{example1} $G_n = U(n)$, the group of unitary $n\times n$ matrices. The metric $d_n$ is induced by the normalized Hilbert-Schmidt norm $ \Vert A \Vert_{HS} = \sqrt{\frac{1}{n} \traza (A^* A) } $ ($d_n(A,B) = \Vert A - B \Vert$).
\item \label{example2} $G_n = U(n)$, the metric $d_n$ is induced by the Schatten $p$-norm $ \Vert A \Vert_{p} = \left( \traza \vert T \vert^p \right) ^{\frac{1}{p}} $, where $\vert T \vert = \sqrt{T^*T}$. Note that if $p=2$ then $ \Vert A \Vert_{2} = \Vert A \Vert_{\Frob}$.
\item \label{example3} $G_n = U(n)$, the metric $d_n$ is induced by the operator norm $ \Vert A \Vert_{op} = \sup\limits_{\Vert v \Vert =1} \Vert Av \Vert $ also known as Schatten $\infty$-norm.
\item \label{example4} $G_n=\Sym(n)$, the symmetric group on $n$ elements. $d_n$ is the normalized Hamming distance:
$d_n(\alpha,\beta)=\frac{1}{n}|\{j\;|\;\alpha(j)\neq\beta(j)\}|$ (a small numerical illustration follows the list).
\end{enumerate}
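For the class (\ref{example4}), the quantities of Definition~\ref{defdistHomDist}
are easy to compute in practice. The sketch below (a toy illustration, not
taken from the literature) takes
$\Gamma=\mathbb Z^2=\left\langle a,b\;|\;aba^{-1}b^{-1}\right\rangle$, starts
from an exactly commuting pair of permutations and perturbs one of them by a
single transposition, producing a map with a small defect that is close to a
homomorphism.
\begin{verbatim}
import numpy as np

def comp(p, q):               # composition (p o q)(x) = p[q[x]]
    return p[q]

def inv(p):
    out = np.empty_like(p); out[p] = np.arange(len(p)); return out

def d(p, q):                  # normalized Hamming distance
    return np.mean(p != q)

n = 10_000
e = np.arange(n)
A = (e + 1) % n               # phi(a): an n-cycle
B = (e + 100) % n             # phi(b): a power of the same cycle
Bp = B.copy(); Bp[[0, 1]] = Bp[[1, 0]]   # perturb by one transposition

def defect(P, Q):             # d( phi(a b a^{-1} b^{-1}), id )
    return d(comp(comp(P, Q), comp(inv(P), inv(Q))), e)

print(defect(A, B))           # 0.0: (A, B) defines a homomorphism
print(defect(A, Bp))          # O(1/n): small defect
print(d(B, Bp))               # 2/n: so HomDist of the perturbed map <= 2/n
\end{verbatim}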
\subsection{Rate of stability}
The rate of stability is, roughly speaking, the dependence between $\epsilon$ and $\delta$ in Definition~\ref{stability}. See \cite{Oren} for details.
To make this precise we define the function
$\D :\mathbb{R^+} \rightarrow \mathbb{R^+} $ as follows:
$$
\D (\delta)= \sup \limits_{\phi \in \langle \mathscr{C}^S \rangle} \lbrace \HomDist (\phi) \mid \defect (\phi)< \delta \rbrace
$$
The function $\D $ is monotone increasing and depends on the presentation of the group $\Gamma$, but we now show that this dependence is at most linear.\\
The following lemma is a reformulation of Definition \ref{stability}. The analogue of the lemma is used as the definition of stability in \cite{Ozawa_Thom}
\begin{lem} $ \displaystyle\lim_{\delta \rightarrow 0^{+}} \D (\delta) =0$ if and only if $\Gamma$ is $\mathscr{C}$-stable.
\end{lem}
Following O. Becker and J. Mosheiff, we define the stability rate $\rateD$ of the group $\Gamma$ as a class of functions (see Definition~\ref{stability_rate}).
\begin{defn} \label{def_precede}
Let $f,g: (0, \delta_0] \rightarrow \mathbb{R^{+}}$ be monotone nondecreasing functions. Write $f \preceq g$ if $f(\delta) \leq C g(C\delta)+C\delta$ for some $C>0$ and all $\delta\in (0,\delta_0]$ for some $\delta_0>0$. We define the equivalence relation $\thicksim$ by saying that $f \thicksim g$ if and only if $f \preceq g$ and $g \preceq f $ (notice that the relation $\preceq$ is reflexive and transitive). Let $\left[ f \right]$ denote the class of $f$ with regard
to this equivalence relation. Clearly, $\preceq$ defines a partial order on equivalence classes: $\left[ f \right] \preceq \left[ g \right]$ if and only if
$ f \preceq g $.
\end{defn}
Note that if $f \preceq \id$ then $f(\delta) \leq M \delta$ for some $M$: indeed, $f(\delta)\leq C\cdot C\delta+C\delta=(C^2+C)\delta$. Here $\id$ is the identity function: $\id(\delta)=\delta$.
\begin{prop} \label{prop_rate} \cite{Oren}
Let $\Gamma=\left\langle S\;|\;R\right\rangle$ be a finitely presented group. If $ \Gamma = \langle S'\;|\;R' \rangle$ is another presentation of $\Gamma$. Then $D_{(S,R)} \sim D_{(S',R')}$.
\end{prop}
\begin{defn} \label{stability_rate}
Let $\Gamma=\left\langle S\;|\;R\right\rangle$ be a finitely presented group. The stability rate $\rateD$ of the group $\Gamma$ is the equivalence class $\rateD = \left[ D_{(S,R)} \right]$.
\end{defn}
Proposition~\ref{prop_rate} implies that the rate of stability $\rateD$ of the finitely presented group $\Gamma$ does not depend on the presentation of $\Gamma$.
By the definition of $\thicksim$ the rate of stability $\rateD$ of a group $\Gamma$ can not be faster then linear.
The following lemma shows that it is not just by definition of $\thicksim$ but rather a natural phenomenon for non-free groups.
\begin{lem} \cite{Oren} \label{lem_lin}
Let $ \Gamma = \langle S\;|\;R \rangle $ be a finitely presented group with $R \neq \emptyset$, $R \neq 1_\Gamma$ and $\mathscr{C}$ is the class of symmetric groups with the normalized Hamming
distance. Then there exists $C>0$ and $\delta_0>0$ such that $C \delta \leq \D (\delta)$ for all $ \delta \in (0,\delta_0]$.
\end{lem}
By O. Becker and J. Mosheiff \cite{Oren} if $\mathscr{C}$ is symmetric group with Hamming distance and $d=2,3,4...$ then
$O(\delta^\frac{1}{b})\preceq D_{\mathbb Z^d} \preceq O(\delta^\frac{1}{c})$ for any $b<2$ and some $c=c_d$, depending on $d$.
\section{Property of defect diminishing}
In this section we give the definition of the property of defect diminishing and a proof of the main theorem.
\begin{defn} An ultrafilter $\mathcal{U}$ on $\mathbb{N}$ is a collection of subsets of $\mathbb{N}$, such that:
\begin{enumerate}[(i)]
\item $A \in \mathcal{U}$ and $ A \subset B $ implies $B \in \mathcal{U} $
\item $A, B \in \mathcal{U} $ implies $ A \cap B \in \mathcal{U}$
\item $A \notin \mathcal{U}$ if, and only if $\mathbb{N} \setminus A \in \mathcal{U}$
\end{enumerate}
\end{defn}
We say that $\mathcal{U}$ is non-principal if $\lbrace n \rbrace \notin \mathcal{U} $ for every $n \in \mathbb{N}$. The existence of non-principal ultrafilters on $\mathbb{N}$ is ensured by the axiom of choice.
We fix a non-principal ultrafilter $\mathcal{U}$ on $\mathbb{N}$. Given a bounded sequence $(x_n )_{n \in \mathbb{N}}$ of real numbers we denote the limit along the ultrafilter by $ \lim\limits_{n \to \mathcal{U}} x_n \in (- \infty, \infty)$. Formally, the limit is the unique $x \in \mathbb{R}$ such that for all $\epsilon > 0$ we have $ \lbrace n \in \mathbb{N}: \mid x_n - x \mid < \epsilon \rbrace \in \mathcal{U}$. For more information on ultrafilters and ultralimits see \cite{Ultra}, appendix B. \\
We will use Landau notation: let $(x_n )_{n \in \mathbb{N}}$ and $(y_n )_{n \in \mathbb{N}}$ be two sequences of positive real numbers; we write $x_n= O_{\mathcal{U}}(y_n)$ if there exists $C>0$ such that $\lbrace n \mid x_n \leq C y_n \rbrace \in \mathcal{U}$. We write $x_n=o_{\mathcal{U}}(y_n)$ if there is a third sequence of positive real numbers $\varepsilon_n$ such that $ \lim\limits_{n \to \mathcal{U}} \varepsilon_n = 0$ and $x_n= \varepsilon_n y_n$.
\begin{defn} \cite{1} A sequence of maps $\phi_n : S \rightarrow G_n$, for $(G_n,d_n) \in \mathscr{C} $ is called an asymptotic homomorphism to $\mathscr{C}$ if
\begin{center}
$ \lim\limits_{n \to \mathcal{U}} \defect (\phi_n)=0$
\end{center}
\end{defn}
\begin{defn}\label{def_dd1}
Let $\phi_n: S \rightarrow G_n$ with $G_n \in \mathscr{C}$ be an asymptotic homomorphism, we say that an asymptotic homomorphism $\phi'_n: S \rightarrow G_n$ diminishes the defect of $ (\phi_n)_{n \in \mathbb{N}}$ if:
\begin{enumerate}[(a)]
\item $\dist(\phi_n,\phi'_n) = O_{\mathcal{U}}(\defect(\phi_n$))
\item $\defect(\phi'_n)= o_{\mathcal{U}}(\defect(\phi_n))$
\end{enumerate}
We say that $ (\phi_n)_{n \in \mathbb{N}}$ has the property of defect diminishing if there is an asymptotic homomorphism $(\phi'_n)_{n \in \mathbb{N}}$ that diminishes the defect of $ (\phi_n)_{n \in \mathbb{N}}$.
\end{defn}
\begin{defn}\label{def_dd2} The group $\Gamma$ has the property of defect diminishing (with respect to $\mathscr{C}$) if every asymptotic homomorphism to $\mathscr{C}$ has the property of defect diminishing.
\end{defn}
\begin{thm} \label{thm:neat}
Let $\Gamma= \langle S \mid R \rangle$ be a finitely presented group and $\mathscr{C}$ a class of groups such that each $(G,d) \in \mathscr{C}$ is a complete metric space.
Then the group $\Gamma$ has the property of defect diminishing if and only if $\D \preceq \id $.
\end{thm}
\begin{cor}
Let $\Gamma$ be a finitely presented group and $\mathscr{C}$ a class of groups such that each $(G,d) \in \mathscr{C} $ is a complete metric space. The group $\Gamma$ has the property of defect diminishing if and only if $D_{\Gamma} \preceq \left[ \id \right]$.
\end{cor}
\begin{cor}
The property of defect diminishing does not depend on the particular choice of the presentation of the group $ \Gamma $.
\end{cor}
\begin{proof}
If $\mathscr{C}$ is a class of groups such that each $(G,d) \in \mathscr{C}$ is a complete metric space, the proof follows from Proposition~\ref{prop_rate} and
Theorem~\ref{thm:neat}. The general case may be proved directly, similarly to Proposition~\ref{prop_rate}.
\end{proof}
For the proof of Theorem~\ref{thm:neat} we need the following proposition.
\begin{prop} \label{proposition}
If $\Gamma=\langle S \mid R \rangle$ has the property of defect diminishing then there exist $M, \varepsilon \in \mathbb{R}^+$ such that for all
$G \in \mathscr{C}$ and $\phi \in G^S$ with $\defect(\phi)<\varepsilon$ there exists $\psi \in G^S$ such that:
\begin{enumerate}
\item $\defect(\psi) < \frac{1}{2} \defect (\phi)$.
\item $\dist(\phi , \psi) < M \defect(\phi)$.
\end{enumerate}
\end{prop}
\begin{proof}
Suppose that the conclusion of the proposition is false. Then for every $n \in \mathbb{N}$ there is $\phi_n \in (G_n)^S $ with $G_n \in \mathscr{C} $ and $\defect(\phi_n)<\frac{1}{n}$, such that every $\psi \in (G_n)^S $ with $\defect (\psi) < \frac{1}{2} \defect (\phi_n)$ satisfies $\dist(\phi_n , \psi) \geq n \defect(\phi_n)$.
So we have an asymptotic homomorphism $(\phi_n)_{n \in \mathbb{N}}$ that does not have the property of defect diminishing. Therefore, $\Gamma$ does not have the property of defect diminishing.
\end{proof}
\begin{proof} [Proof of Theorem \ref{thm:neat}]
Suppose that $\D\preceq \id $, that is, there exist $M>0$ and $\delta_0 > 0$ such that for all $0< \delta <\delta_0$ we have $\D (\delta)< M \delta$.
Let $(\phi_n)_{n \in \mathbb{N}}$ be an asymptotic homomorphism and $\epsilon_n= \defect (\phi_n)$. By the definition of asymptotic homomorphism
$ \displaystyle\lim_{n \rightarrow \mathcal{U}} \epsilon_n =0$. Let $X=\{n\;|\;\epsilon_n<\delta_0\}$.
For $n\in X$ we have that $\HomDist(\phi_n) < M\epsilon_n$ by Definition \ref{defdistHomDist}, so there is a $\pi_n \in \mathrm{Hom}(\Gamma,G_n)$
satisfying $\dist(\phi_n, \pi_n\restriction_S) < M \defect(\phi_n)$. Define $\phi'_n=\pi_n$ for $n\in X$ and $\phi'_n=\phi_n$ for $n\not\in X$.
Then $(\phi'_n)_{n \in \mathbb{N}}$ diminishes the defect of $(\phi_n)_{n \in \mathbb{N}}$, since $X\in\mathcal{U}$. \\
Suppose that the group $\Gamma$ has the property of defect diminishing. We apply Proposition~\ref{proposition}. Let $M, \varepsilon \in \mathbb{R^+}$
be as in Proposition~\ref{proposition}.
Let $\phi \in G^S $ be such that $\defect(\phi)<\varepsilon$. Inductively we may construct a sequence of maps $\phi_j \in G^S$, $\phi_0=\phi$, such that
$\defect (\phi_j) < \frac{1}{2} \defect (\phi_{j-1}) < \frac{\varepsilon}{2^{j}}$ and
$\dist(\phi_j , \phi_{j-1}) < M \defect (\phi_{j-1})<M\defect(\phi)\frac{1}{2^{j-1}}$.
It follows that $(\phi_j)_{j \in \mathbb{N}}$ is a Cauchy sequence. Let $\phi_\infty$ be its limit ($(G^S, \dist)$ is a complete metric space as
$(G,d) \in \mathscr{C} $ is). One can check that $\phi_\infty$ is a homomorphism and $\dist(\phi,\phi_\infty)<2M\defect(\phi)$.
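Indeed, the distance bound follows by telescoping the estimates above:
\[
\dist(\phi,\phi_\infty)\le\sum_{j=1}^{\infty}\dist(\phi_j , \phi_{j-1}) < M \defect(\phi)\sum_{j=1}^{\infty}\frac{1}{2^{j-1}} = 2M\defect(\phi).
\]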
It follows that $\D (\delta)<2M \delta$ for $\delta<\varepsilon$. Therefore, $\D \preceq \id$.
\end{proof}
\section{Renormalisation of the band gap}
From Eqs.~(3)-(5) we find that the quantity $\lambda\Delta$ flows upon renormalisation as
\begin{equation}
\partial_l(\lambda\Delta)=-C_dK^\alpha\frac{\gamma}{a}.
\label{DeltaLambdaFlow}
\end{equation}
This can be absorbed into the redefinition of the chemical potential
or the band gap $\Delta$, which then flows as
\begin{equation}
\partial_l\Delta=-C_dK^\alpha\frac{\gamma}{a\lambda}-C_d\Delta\frac{\gamma}{a^2}.
\label{DeltaFlow}
\end{equation}
The renormalisation of the band gap by the elastic scattering processes
in the conduction band is similar to the renormalisation of the critical temperature in
the $\phi^4$-theory.
As Eqs.~(\ref{DeltaLambdaFlow}) and (\ref{DeltaFlow}) explicitly contain the ultraviolet
momentum cutoff $K$ in the right-hand-side part, the exact value of the renormalised
band gap $\Delta$ depends on the details of the cutoff procedure.
\section{Drude conductivity of Weyl semimetal}
\subsubsection*{Weak disorder, $\gamma\ll\gamma_c$}
Let us first evaluate the Drude contribution to the conductivity of Weyl semimetal for a finite chemical
potential $\mu>0$ (measured from the bottom of the conduction band) and in the limit of weak
disorder, $\gamma_0\ll\gamma_c$. As we have demonstrated, in this case the disorder strength $\gamma(K)K^{-1}$
does not experience renormalisations, and the interference phenomena far from the Fermi surface
can be neglected.
Then the conductivity is determined by the scattering between the momentum states with energies close to the
chemical potential $\mu$.
Because of the small disorder strength, the elastic scattering rate can be
evaluated in the Born approximation;
\begin{equation}
\tau^{-1}=2i\gamma(K)K^\varepsilon\int \langle\hat G^R(\mu,{\bf k})\rangle_{dis}
\frac{d{\bf k}}{(2\pi)^3}=\pi\rho_F\gamma(K)K^{-1},
\label{tauBorn}
\end{equation}
where $\rho_F(\mu)=\frac{\mu^2}{2\pi^2v^3}$ is the density of states in Weyl semimetal,
and
\begin{equation}
\langle\hat G^{R}(\mu,{\bf k})\rangle_{dis}=\frac{\mu{\hat 1}+v\hat{\boldsymbol \sigma}{\bf k}}{(\mu+\frac{i}{2\tau})^2-v^2k^2},
\label{GRaveraged}
\end{equation}
is the disorder-averaged retarded Green's function, a $2\times2$ matrix in the pseudospin space,
with $\hat 1$ being the unity matrix.
The conductivity is given by the Kubo formula
\begin{equation}
\sigma_{xx}=\frac{v^2}{2\pi}
\int{\mathrm Tr\left<\hat\sigma_x \hat G^A(\mu,{\bf r},{\bf r}^\prime) \hat\sigma_x \hat G^R(\mu,{\bf r}^\prime,{\bf r})\right>_{dis}}
d{\bf r}^\prime,
\label{KuboWeyl}
\end{equation}
where $\hat G^R(\mu,{\bf r}^\prime,{\bf r})$ and $\hat G^A(\mu,{\bf r},{\bf r}^\prime)$
are the retarded and advanced Green's functions,
and $v\hat\sigma_x$ is the operator of velocity along the $x$ axis.
In the limit of weak disorder under consideration the conductivity is dominated by
the Drude contribution, shown diagrammatically in Fig.~\ref{Fish}, and the weak-localisation
corrections are negligible.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.3\textwidth]{Fish.png}
\caption{\label{Fish}
Diagram for the Drude conductivity in Weyl semimetal.
}
\end{figure}
Each step of the diffusion ladder (diffuson), which renormalises the velocity vertex $v\hat\sigma_x$
in Fig.~\ref{Fish}, equals
\begin{align}
\hat\Pi(\mu)=&(\pi\rho_F\tau)^{-1}\int \langle\hat G^R(\mu,{\bf p})\rangle_{dis}
\otimes
\langle\hat G^A(\mu,{\bf p})\rangle_{dis}\frac{d{\bf p}}{(2\pi)^3}
\nonumber\\
&=\frac{1}{2}\left(1+\frac{1}{3}\sum_{i=1}^3\hat\sigma_i\otimes\hat\sigma_i\right),
\label{Step}
\end{align}
where $\otimes$ is a product of the operators which act in the advanced and retarded spaces, i.e.
on the upper and the lower lines in Fig.~\ref{Fish},
and the prefactor $(\pi\rho_F\tau)^{-1}$ is the value of the impurity line.
Using Eq.~(\ref{Step}), we sum up the diffusion
ladder in Fig.~\ref{Fish} and obtain the renormalised velocity vertex:
\begin{align}
\tilde v\hat\sigma_x=v\hat\sigma_x+\frac{v}{2}\left(\hat\sigma_x+\frac{1}{3}\sum_{i=1}^3\hat\sigma_i\hat\sigma_x\hat\sigma_i\right)+\ldots
=\frac{3}{2}v\hat\sigma_x.
\end{align}
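As an illustrative cross-check of this ladder summation (a minimal numerical sketch, not part of the original derivation; Python with NumPy is assumed), one can iterate the rung of Eq.~(\ref{Step}) on the bare vertex and sum the resulting geometric series:
\begin{verbatim}
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = (sx, sy, sz)

def rung(v):
    # One diffuson rung acting on a vertex v, cf. Eq. (Step):
    # v -> (v + (1/3) * sum_i sigma_i v sigma_i) / 2
    return 0.5 * (v + sum(s @ v @ s for s in paulis) / 3.0)

vertex, term = np.zeros((2, 2), dtype=complex), sx
for _ in range(60):        # ladder: sx + rung(sx) + rung(rung(sx)) + ...
    vertex = vertex + term
    term = rung(term)

print(np.allclose(vertex, 1.5 * sx))   # True: renormalised vertex is (3/2) sigma_x
\end{verbatim}
Each rung maps $\hat\sigma_x\to\frac{1}{3}\hat\sigma_x$, so the ladder sums to $v\hat\sigma_x\sum_{n\geq0}3^{-n}=\frac{3}{2}v\hat\sigma_x$, in agreement with the result above.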
The conductivity is then given by
\begin{equation}
\sigma=\frac{v^2\rho_F\tau}{2}=\frac{v^2}{2\pi\gamma(K)K^{-1}}.
\label{WeylDrudeSuppl}
\end{equation}
The conductivity~(\ref{WeylDrudeSuppl}) is independent of the chemical potential and momentum $K$,
as the disorder strength $\gamma(K)K^{-1}=\gamma_0 K_0^{-1}$ does not depend on $K$ in the limit of
weak disorder.
The Drude contribution to the conductivity at weak disorder, Eq.~(\ref{WeylDrudeSuppl}),
has been calculated recently in Ref.~\onlinecite{SOminato:WeylDrude} using the kinetic-equation approach and,
diagrammatically, in Ref.~\onlinecite{SRyuBiswas}.
Similar formulas have been obtained also in Refs.~\onlinecite{SBurkov:WeylProp} and \onlinecite{SCompleteRubbish};
however, the renormalisation of the velocity vertex by a diffuson, Fig.~\ref{Fish}, has not been taken into account,
leading to a mistake of order unity in the expression for conductivity.
\subsection*{Renormalised disorder}
For $\gamma_0$ of the order of or larger than $\gamma_c$ the system is subject to strong
renormalisation. The RG procedure removes the higher momenta from the system, resulting in a renormalised
action of the quasiparticles near the Fermi surface.
If $\gamma_0<\gamma_c$, the system flows towards weaker disorder, and the conductivity can be
then evaluated as above, in a controlled weak-disorder perturbation theory,
using the renormalised chemical potential $\mu^\prime=\mu\lambda(K)$, and the disorder strength
$\gamma(K)K^{-1}$, where $vK=\mu^\prime$. As the conductivity is independent of the chemical
potential $\mu^\prime$, it is given by Eq.~(\ref{WeylDrudeSuppl}) with the renormalised disorder
strength $\gamma(K)K^{-1}$.
In the case $\gamma_0>\gamma_c$ the system flows towards strong disorder. Eq.~(\ref{WeylDrudeSuppl})
can still be applied if the RG flow is terminated by a sufficiently large Fermi energy, such that
the effective disorder remains weak, $\gamma(K)\ll v^2$.
\section{Self-consistent Born approximation}
In this Section we analyse transport in a high-dimensional semiconductor
by means of the self-consistent Born approximation (SCBA) and discuss the difference
between the SCBA results and those obtained from the RG analysis
controlled by an $\varepsilon=2\alpha-d$-expansion.
The imaginary part $\Sigma(E)$
of the self-energy of, e.g., the disorder-averaged retarded Green's function can be obtained
for short-range disorder within the
SCBA using the self-consistency equation
\begin{equation}
\Sigma(E)=\gamma_0 K_0^\varepsilon \int\frac{d{\bf p}}{(2\pi)^d}
\frac{\Sigma(E)}{(E-a p^\alpha)^2+\Sigma^2(E)}.
\label{SCE}
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{SCBAdiagr.png}
\caption{\label{Fig:SCBAdiagr}
Diagrams for a disorder-averaged Green's function. Double and single solid lines
correspond to disorder-averaged and bare Green's functions respectively.
a) Dyson equation. b) Self-energy part in the Born approximation.
c) Self-energy part in the self-consistent Born approximation.
}
\end{figure}
Let us first address the behaviour of the self-energy part of the zero-energy ($E=0$)
Green's function.
Because in high dimensions ($\varepsilon=2\alpha-d<0$)
\begin{equation}\int\frac{d{\bf p}}{(2\pi)^d}
\frac{1}{a^2p^{2\alpha}+\Sigma^2(0)}
\leq\int\frac{d{\bf p}}{(2\pi)^d}\frac{1}{a^2p^{2\alpha}}=\frac{C_dK_0^{|\varepsilon|}}{|\varepsilon| a^2},
\end{equation}
Eq.~(\ref{SCE}) has a non-zero solution $\Sigma(0)\neq0$ only for sufficiently strong
disorder $\gamma_0>\gamma_c^{SCBA}$,
with
\begin{equation}
\gamma_c^{SCBA}=\frac{|\varepsilon| a^2}{C_d}\equiv 4\gamma_c.
\end{equation}
Thus, the SCBA analysis suggests that states at the bottom of the band undergo a phase transition
at $\gamma_0=\gamma_c^{SCBA}$; for supercritical disorder the elastic scattering time
$\tau(0)=[2\Sigma(0)]^{-1}$ is finite, while for disorder below critical the system is
effectively ballistic, $\tau(0)=[2\Sigma(0)]^{-1}=\infty$.
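To make the transition explicit, Eq.~(\ref{SCE}) at $E=0$ can be solved numerically. The following is a minimal sketch in toy units chosen here purely for illustration ($d=3$, $\alpha=1$, $a=K_0=1$, for which $C_d=1/(2\pi^2)$ and $\gamma_c^{SCBA}=2\pi^2$); Python with SciPy is assumed:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

d, alpha, a, K0 = 3, 1.0, 1.0, 1.0
Cd = 1.0 / (2.0 * np.pi ** 2)                      # angular factor for d = 3
gamma_c_scba = abs(2 * alpha - d) * a ** 2 / Cd    # = 2 pi^2 in these units

def F(sigma, gamma0):
    # Right-hand side of Eq. (SCE) at E = 0, divided by Sigma(0)
    integrand = lambda p: Cd * p ** (d - 1) / (a ** 2 * p ** (2 * alpha) + sigma ** 2)
    return gamma0 * quad(integrand, 0.0, K0)[0]

for gamma0 in (0.5 * gamma_c_scba, 2.0 * gamma_c_scba):
    # A nonzero Sigma(0) solves F(Sigma, gamma0) = 1; such a root exists
    # only when F(0+, gamma0) > 1, i.e. for gamma0 > gamma_c^SCBA.
    if F(1e-9, gamma0) > 1.0:
        sigma0 = brentq(lambda s: F(s, gamma0) - 1.0, 1e-9, 10.0)
    else:
        sigma0 = 0.0
    print(gamma0 / gamma_c_scba, sigma0)
\end{verbatim}
The sketch returns $\Sigma(0)=0$ for the subcritical value and a finite root for the supercritical one, reproducing the transition described above.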
Although the SCBA correctly predicts the existence of the disorder-driven phase transition,
the critical value of the disorder strength $\gamma_c^{SCBA}=4\gamma_c$, obtained from the SCBA,
is different by a factor of $4$ from the value $\gamma_c$, obtained from the controlled-in-$\varepsilon$
RG analysis.
Indeed, the SCBA self-energy part is given by the sum of ``rainbow'' diagrams, Fig.~\ref{Fig:SelfEnergyDiagr},
and thus does not take into account the other diagrams that contribute to the quasiparticle self-energy
part. For instance in the fourth order in the disorder potential the SCBA takes into account the diagram
in Fig.~\ref{Fig:SelfEnergyDiagr}b and disregards that in Fig.~\ref{Fig:SelfEnergyDiagr}c, which
for small momenta and energy $E$ has the same order of magnitude of the imaginary part
\begin{equation}
\Sigma^{3b}\sim\Sigma^{3c}\sim\frac{\gamma_0^2\rho_{clean}(E)}{|\varepsilon|a^2K_0^{|\varepsilon|}},
\end{equation}
where we have introduced the density of states in a clean semiconductor
\begin{equation}
\rho_{clean}(E)=\frac{C_d E^\frac{d-\alpha}{\alpha}}{\alpha a^\frac{d}{\alpha}}.
\label{RhoClean}
\end{equation}
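For completeness, Eq.~(\ref{RhoClean}) follows directly from the dispersion $E=ap^\alpha$:
\[
\rho_{clean}(E)=C_d\, p^{d-1}\frac{dp}{dE}\bigg|_{p=(E/a)^{1/\alpha}}
=\frac{C_d}{\alpha a}\left(\frac{E}{a}\right)^{\frac{d-\alpha}{\alpha}}
=\frac{C_d E^\frac{d-\alpha}{\alpha}}{\alpha a^\frac{d}{\alpha}}.
\]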
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{FourthOrder.png}
\caption{
\label{Fig:SelfEnergyDiagr}
Diagrams contributing to the self-energy part.
a) ``Rainbow'' diagrams, contributing in the SCBA.
b) SCBA contribution in the fourth order in the random potential.
c) A contribution of the same order as b) neglected in the SCBA.
}
\end{figure}
We emphasise that, although diagram \ref{Fig:SelfEnergyDiagr}c contains crossing
impurity lines, in a higher-dimensional system under consideration
it is not suppressed by the parameter $\left[E\tau(E)\right]^{-1}\ll1$, unlike the case of
a conventional metal\cite{SAGD}. Such suppression in low-dimensional metals and semiconductors
occurs if only momenta close to the Fermi surface are important\cite{SAGD}, while in
diagrams \ref{Fig:SelfEnergyDiagr}b and \ref{Fig:SelfEnergyDiagr}c the momentum
integration is carried out over all momenta in the band.
Thus, the SCBA presents an uncontrolled approximation for studying transport in disordered
systems, except for sufficiently small disorder strengths, when it coincides with the usual
(non-self-consistent) Born approximation, that takes into account only the leading-order
contribution to the self-energy part. Detailed criticism of the applicability of the SCBA
to conduction in graphene is presented in Refs.~\cite{SAleinerEfetov} and \cite{SOstrovskyGornyMirlin}.
Also, SCBA in Weyl semimetal has been recently discussed in Ref.~\cite{SBrouwer:WSMcond}.
\subsection{Conductivity of a semiconductor in the SCBA}
{\it Scattering rate for subcritical disorder.}
For subcritical disorder, $\gamma_0<\gamma_c^{SCBA}$, the scattering rate $\Sigma(E)$
vanishes at the bottom of the band, $E=0$, but has a finite value for any $E>0$.
Assuming $E\gg\Sigma(E)$, the integral in the right-hand-side of Eq.~(\ref{SCE}) can be evaluated as a sum
of the contribution from large momenta $p \sim K_0$ and the contribution near the Green's function
pole $E\approx ap^\alpha$, yielding
\begin{equation}
\Sigma(E)=\frac{\pi \rho_{clean}(E)\gamma_0 K_0^\varepsilon}{1-\gamma_0/\gamma_c^{SCBA}}.
\label{SigmaSmall}
\end{equation}
Thus, for subcritical disorder ($\gamma_0<\gamma_c^{SCBA}$) the SCBA scattering rate $\Sigma(E)$ decreases with energy
$\propto E^\frac{d-\alpha}{\alpha}$, faster than the energy $E$ in the case of higher dimensions under consideration
(where $\Sigma/E\propto E^{|\varepsilon|/\alpha}$, in agreement with the RG analysis of the irrelevance of weak disorder),
and diverges as the disorder strength $\gamma_0$ approaches the critical strength $\gamma_c^{SCBA}$.
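Equation~(\ref{SigmaSmall}) can likewise be checked numerically by iterating Eq.~(\ref{SCE}) at finite $E$. A minimal sketch follows (same illustrative toy units as above; the fixed-point iteration and the chosen values of $E$ and $\gamma_0$ are assumptions of this sketch):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

d, alpha, a, K0 = 3, 1.0, 1.0, 1.0
Cd = 1.0 / (2.0 * np.pi ** 2)
gamma_c_scba = abs(2 * alpha - d) * a ** 2 / Cd

def solve_sigma(E, gamma0, n_iter=200):
    # Fixed-point iteration of Eq. (SCE) at energy E > 0
    sigma = 1e-3
    pole = min((E / a) ** (1.0 / alpha), K0)       # Green's function pole
    for _ in range(n_iter):
        f = lambda p: Cd * p ** (d - 1) * sigma / ((E - a * p ** alpha) ** 2 + sigma ** 2)
        sigma = gamma0 * quad(f, 0.0, K0, points=[pole])[0]
    return sigma

E, gamma0 = 0.3, 0.1 * gamma_c_scba
rho_clean = Cd * E ** ((d - alpha) / alpha) / (alpha * a ** (d / alpha))
sigma_est = np.pi * rho_clean * gamma0 / (1.0 - gamma0 / gamma_c_scba)  # Eq. (SigmaSmall)
print(solve_sigma(E, gamma0), sigma_est)
\end{verbatim}
For $\Sigma(E)\ll E$ the iterated solution approaches the analytic estimate~(\ref{SigmaSmall}), up to corrections of order $\Sigma/E$.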
{\it Scattering rate for supercritical disorder.}
For stronger disorder, $\gamma_0>\gamma_c^{SCBA}$, the scattering rate $\Sigma(E)$ is finite already
at the bottom of the band.
For small $\varepsilon\ll1$ and $E=0$
the integral in the right-hand-side part of Eq.~(\ref{SCE}) can be evaluated
as
\begin{align}
\int\frac{d{\bf p}}{(2\pi)^d}
\frac{1}{a^2p^{2\alpha}+\Sigma^2(0)}
\nonumber \\
=
\int\frac{d{\bf p}}{(2\pi)^d}\frac{1}{a^2p^{2\alpha}}
-\int\frac{d{\bf p}}{(2\pi)^d}\frac{\Sigma^2(0)}{a^2p^{2\alpha}\left[a^2p^{2\alpha}+\Sigma^2(0)\right]}
\nonumber \\
\approx\frac{C_dK_0^{|\varepsilon|}}{|\varepsilon| a^2}
-\frac{C_dK_0^{|\varepsilon|}}{|\varepsilon| a^2}\left[\frac{\Sigma(0)}{aK_0^\alpha}\right]^\frac{|\varepsilon|}{\alpha}.
\label{StrongSigmaCalc}
\end{align}
From Eqs.~(\ref{SCE}) and (\ref{StrongSigmaCalc}) we find that the scattering rate near the bottom of the band
for supercritical disorder is given by
\begin{equation}
\Sigma(0)=aK_0^\alpha\left(1-\frac{\gamma_c^{SCBA}}{\gamma_0}\right)^\frac{\alpha}{|\varepsilon|}.
\label{SigmaLarge}
\end{equation}
{\it Conductivity.}
In the case of a conventional semiconductor there is no renormalisation of the velocity vertex by the diffuson,
and the conductivity for the Fermi energy $E$ in the conduction band is given by
\begin{align}
\sigma(E)=\frac{v^2(E)}{d}\int\frac{d{\bf p}}{(2\pi)^d}G^R({\bf p},E)G^A({\bf p},E)
\nonumber\\
=\frac{\pi v^2\rho_{\text{clean}}(E)}{\Sigma(E)d}.
\label{KuboSCBA}
\end{align}
Using Eqs.~(\ref{KuboSCBA}), (\ref{SigmaSmall}) and (\ref{SigmaLarge}), we obtain the
dependency of the conductivity on the Fermi level $E$ and the deviation
$\delta\gamma=\gamma_0-\gamma_c^{SCBA}$
of the disorder strength from the critical value:
\begin{equation}
\sigma(E,\delta\gamma)
\propto
\left\{
\begin{array}{cc}
|\delta\gamma|\: E^{2-\frac{2}{\alpha}}, & \delta\gamma<0,
\\
E^\frac{d+\alpha-2}{\alpha} \delta\gamma^\frac{\alpha}{|\varepsilon|}, & \delta\gamma>0.
\end{array}
\right.
\end{equation}
Thus, for subcritical disorder ($\delta\gamma<0$) the SCBA predicts the same dependency of conductivity
on energy $E$ and disorder strength $\delta\gamma$
as the rigorous (for small $\varepsilon$) RG calculation [Eq.~(11) in the main
text].
However, for supercritical disorder ($\delta\gamma>0$) the SCBA is
qualitatively incorrect: it predicts a {\it finite} conductivity
for a disordered semiconductor in the orthogonal symmetry class,
in which the conductivity is absent at strong disorder, $\sigma=0$, due to the localisation
of low-energy states\cite{SSyzranov:unconv}.
\subsection{Weyl semimetal and SCBA}
Transport in a Weyl semimetal can be studied by means of the SCBA similarly to the case of a semiconductor considered
above. The respective detailed SCBA analysis for WSM is presented in Ref.~\cite{SOminato:WeylDrude}.
Also, an RG analysis in the large-$N$ (number of valleys) approximation, equivalent to the SCBA, has been carried out
in Ref.~\onlinecite{SFradkin1}.
The self-consistency condition for the self-energy part $\Sigma(0)$ of a WSM is also given by Eq.~(\ref{SCE})
with $\alpha=1$, $a=v$, and $d=3$. It gives the critical disorder strength $\gamma_c^{SCBA}=2\pi^2v^2$, which by a factor
of $2$ exceeds that obtained from the RG analysis with $\varepsilon=-1$ presented in this paper. For a small
finite chemical potential, the SCBA predicts a finite zero-temperature
conductivity on both sides of the
transition, that vanishes at the critical point, which is qualitatively consistent with the result of the RG
analysis presented here.
\section{Conductance of a finite sample of Weyl semimetal}
Recently the conductivity of a finite sample of Weyl semimetal at zero temperature
and chemical potential has been studied numerically in Ref.~\onlinecite{SBrouwer:WSMcond};
it has been concluded that the conductivity for strong disorder, $\gamma_0>\gamma_c$,
is finite and qualitatively consistent with the predictions of this paper,
while the conductivity
for subcritical disorder, $\gamma_0<\gamma_c$,
vanishes.
At first glance, such vanishing weak-disorder conductivity
is inconsistent with our prediction of a finite conductivity
in the same regime. However, as explained in Ref.~\onlinecite{SBrouwer:WSMcond},
the conclusions of our paper apply to
samples of sufficiently large sizes $L$ and finite temperatures or chemical potentials such that
$T, \mu\gg v/L$, while the results of Ref.~\onlinecite{SBrouwer:WSMcond} apply in the opposite
limit, and the qualitative difference between the results may come from the non-commutativity
of these limits.
\begin{figure}[ht]
\centering
\includegraphics[width=0.33\textwidth]{ConductanceSetup.png}
\caption{
\label{Fig:setup}
Setup for observing the conductance of a sample of length $L_\parallel$ and the characteristic
width $L_\bot$, connected to two infinite reservoirs.
}
\end{figure}
Indeed, in a disorder-free sample of finite width, Fig.~\ref{Fig:setup},
the transverse momentum of charge carriers is quantised, resulting in a finite number $N_\bot$ of
modes that contribute to the conductance. For instance, in a 3D sample with a finite cross-section $S_\bot$,
and sufficiently large Fermi momentum $k_F\gg L_\bot^{-1}\sim1/\sqrt{S_\bot}$ the number of modes (per one discrete degree
of freedom, e.g, spin and valley), that provide conduction at small temperatures $T\ll E_F$, is given by
\begin{equation}
N_\bot= k_F^2S_\bot/\pi+{\cal O}(1).
\end{equation}
In an undoped Weyl semimetal, studied in Ref.~\onlinecite{SBrouwer:WSMcond}, $k_F=0$, and no more than one
or several transverse modes $N_\bot\sim1$ may contribute to conduction for {\it weak disorder} and temperatures
$T$ smaller than
the characteristic energy scale $v/L_\bot$ of spatial quantisation.
Therefore, the conductance of such system
is smaller or of the order of several conductance quanta, $G\lesssim1$.
Then the conductivity of such sample of length $L_\parallel$, defined according to
\begin{equation}
\sigma=G\frac{L_\parallel}{S_\bot},
\end{equation}
vanishes, $\sigma\lesssim L_\parallel/L_\bot^2\rightarrow0$ in the limit $L_\parallel\propto L_\bot\rightarrow\infty$.
Thus, as emphasised in Ref.~\cite{SBrouwer:WSMcond}, the numerical zero-conductivity result,
obtained there for small disorder strengths, is likely attributable to considering finite sample size $L$ while setting the chemical
potential and temperature to zero.
However, in the limit $T,\mu \gg v/L$, the conductivity is finite and in the limit of weak disorder is described
by Eq.~\eqref{WeylDrudeSuppl},
thus resolving the apparent inconsistency between our predictions and Ref.~\onlinecite{SBrouwer:WSMcond}.
The above estimate for the conductivity is no longer valid in the case of {\it strong disorder}.
As we have shown recently in Ref.~\cite{SSyzranov:unconv},
the density of states at the Dirac point becomes finite at $\gamma>\gamma_c$ due to strong fluctuations of the
(renormalised) disorder potential $U^*\sim vK^*$, which thus leads to a large effective number
\begin{equation}
N_\bot(\gamma>\gamma_c)\sim (K^*)^2S_\bot
\end{equation}
of conducting modes even in the absence of doping.
Although, due to backscattering, the contribution of each such mode to the
conductance is in general smaller than the conductance quantum, the conductivity may thus
become finite at strong disorder, as observed numerically in Ref.~\cite{SBrouwer:WSMcond}.
\subsubsection*{The Georgi vector limit}
\indent
We begin by sketching the essential points of the Georgi vector symmetry
and vector limit\cite{georgi}. Consider two chiral flavors u(p) and d(own),
with chiral symmetry $SU(2)_L\times SU(2)_R$. The standard way of looking at
this symmetry is that it is realized either in Nambu-Goldstone (or Goldstone
in short) mode, with $SU(2)_L\times SU(2)_R$ broken down spontaneously to
$SU(2)_{L+R}$ or in Wigner-Weyl (or Wigner in short) mode with parity
doubling. Georgi observes, however, that
there is yet another way of realizing the symmetry which requires
both Goldstone mode and Wigner mode to {\it co-exist}. Now the signature
for {\it any} manifestation of the chiral symmetry is the pion decay constant
$f_\pi$
\begin{eqnarray}
\langle 0 |A^i_\mu|\pi^j (q)\rangle=i f_\pi q_\mu \delta^{ij}\label{goldstone}
\end{eqnarray}
where $A_\mu^i$ is the isovector axial current. The Goldstone mode is
characterized by the presence of the triplet of Goldstone bosons,
$\pi^i$ with $i=1, 2, 3$ with a non-zero pion decay constant.
The Wigner mode is realized when the pion decay constant vanishes, associated
with
the absence of zero-mass bosons. In the latter case, the symmetry is realized
in a parity-doubled mode. The Georgi vector symmetry we are interested in
corresponds to
the mode (\ref{goldstone}) co-existing with a triplet of scalars $S^i$
with $f_S=f_\pi$ where
\begin{eqnarray}
\langle 0|V_\mu^i| S^j (q)\rangle= i f_S q_\mu \delta^{ij}
\end{eqnarray}
where $V_\mu^i$ is the isovector-vector current. In this case, the
$SU(2)\times SU(2)$ symmetry is unbroken. At low $T$ and/or low density,
low-lying isovector-scalars are {\it not} visible and hence either the vector
symmetry is broken in Nature with $f_S\neq f_\pi$ or they are ``hidden"
in the sense that they are eaten up by vector particles (\`a la Higgs).
In what follows, we would like to suggest that as temperature and/or density
rises to the critical value corresponding to the chiral phase transition,
the symmetry characterized by
\begin{eqnarray}
f_S=f_\pi\label{equal}
\end{eqnarray}
is restored with the isovector scalars making up the longitudinal components
of the massive $\rho$ mesons, which eventually get ``liberated" at some high
temperature (or density) from the
vectors and become degenerate with the zero-mass
pions at $T\roughly> T_{\chi SR}$ where
$T_{\chi SR}\sim 140$ MeV is the chiral transition temperature.
The symmetry (\ref{equal}) with the scalars ``hidden" in the massive vector
mesons resembles Weinberg's mended symmetry presumed to be realized near the
chiral symmetry restoration\cite{weinbergMS}, so we shall
refer to this as ``mended symmetry." We shall reserve
``Georgi vector limit" as the symmetry limit in which (\ref{equal}) holds
together with $m_\pi=m_S=0$.
The relevant Lagrangian to use for illustrating
what we mean is the hidden gauge
symmetric Lagrangian of Bando {{\it et al}}\ \cite{bando} which is valid below
the chiral transition\footnote{We will ignore the small quark masses and
work in the chiral limit, unless otherwise noted.},
\begin{eqnarray}
{\cal L}=\frac 12 f^2\left\{{\rm Tr}(D^\mu\xi_L D_\mu \xi_L^\dagger) +
(L\rightarrow R)\right\} +\kappa \cdot \frac 14 f^2 {\rm Tr} (\partial^\mu U\partial_\mu
U^\dagger) +\cdots\label{bandoL}
\end{eqnarray}
where
\begin{eqnarray}
U&=&\xi_L \xi_R^\dagger,\nonumber\\
D^\mu \xi_{L,R}&=&\partial^\mu\xi_{L,R}-ig\xi_{L,R}\rho^\mu, \nonumber\\
\rho_\mu&\equiv& \frac 12 \tau^a \rho^a_\mu
\end{eqnarray}
and $g$ stands for the hidden gauge coupling. The ellipsis stands for other
matter fields and higher-derivative terms needed to make the theory more
realistic.
The $\xi$ field can be parametrized as
\begin{eqnarray}
\xi_{L,R}\equiv e^{iS(x)/f_S} e^{\pm i\pi (x)/f_\pi}
\end{eqnarray}
with $S(x)=\frac 12 \tau^a S^a (x)$ and $\pi (x)=\frac 12 \tau^a \pi^a (x)$.
Under the global chiral $SU(2)_L\times SU(2)_R$ transformation,
\begin{eqnarray}
\xi_L\rightarrow L\xi_L G^\dagger, \ \ \ \xi_R\rightarrow R\xi_R G^\dagger
\end{eqnarray}
with $L(R)\in SU(2)_{L(R)}$ and $G\in SU(2)_{local}$ is the hidden
local transformation. The Lagrangian (\ref{bandoL}) is invariant under
$G$. Thus the symmetry of the Lagrangian (\ref{bandoL}) is
$(SU(2)_L\times SU(2)_R)_{global} \times G_{local}$.
Setting $S(x)=0$ corresponds to taking the unitary gauge in which case
we are left with physical fields only ({{\it i.e}}, no ghost fields).
At tree level, we get that
\begin{eqnarray}
f_S=f, \ \ f_\pi=\sqrt{1+\kappa} f
\end{eqnarray}
and the $\rho\pi\pi$ coupling
\begin{eqnarray}
g_{\rho\pi\pi}=\frac{1}{2(1+\kappa)} g.
\end{eqnarray}
Going to the unitary gauge, one gets the KSRF mass relation
\begin{eqnarray}
m_\rho=fg=\frac{1}{\sqrt{1+\kappa}} f_\pi g =
2\sqrt{1+\kappa} f_\pi g_{\rho\pi\pi}.\label{KSRF}
\end{eqnarray}
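Since Eq.~(\ref{KSRF}) is pure algebra, it can be verified symbolically from the tree-level relations above; a minimal sketch (Python with SymPy assumed):
\begin{verbatim}
import sympy as sp

f, g, kappa = sp.symbols('f g kappa', positive=True)
f_pi  = sp.sqrt(1 + kappa) * f        # f_pi = sqrt(1 + kappa) f
g_rpp = g / (2 * (1 + kappa))         # g_{rho pi pi}
m_rho = f * g                         # hidden-gauge mass

print(sp.simplify(m_rho - f_pi * g / sp.sqrt(1 + kappa)))           # 0
print(sp.simplify(m_rho - 2 * sp.sqrt(1 + kappa) * f_pi * g_rpp))   # 0
\end{verbatim}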
Now we know from experiments that at zero $T$ (or low density), the
$\kappa$ takes the value $-\frac 12$ for which the KSRF relation is
accurately satisfied. The symmetry (\ref{equal}) therefore is broken.
The symmetry is recovered
for $\kappa=0$ in which case the second term of (\ref{bandoL})
that mixes L and R vanishes, thus restoring $SU(2)\times SU(2)$. In this
limit, the hidden gauge symmetry swells to $G_L\times G_R$, and $\xi_{L,R}$
transform
\begin{eqnarray}
\xi_L\rightarrow L\xi_L G_L^\dagger, \ \ \xi_R\rightarrow R\xi_R G_R^\dagger.
\end{eqnarray}
If the gauge coupling is not zero, then the $\rho$ mesons are still
massive and we have the mended symmetry (\ref{equal}). However if the
gauge coupling vanishes, then the vector mesons become massless and
their longitudinal components get liberated, giving the scalar massless
multiplets $S(x)$. In this limit, the symmetry is the global $[SU(2)]^4$.
Local symmetry is no longer present.
We shall now argue that in hot and/dense matter approaching the chiral
restoration, the constant $\kappa\rightarrow 0$ and the gauge coupling
$g\rightarrow 0$. For this purpose, we shall extrapolate a bit
the results obtained by Harada and Yamawaki\cite{harada}. These authors
studied the hidden gauge Lagrangian (\ref{bandoL}) to one loop order
and obtained the $\beta$ functions for the hidden gauge coupling $g$
and the constant $\kappa$ (in dimensional regularization)
\begin{eqnarray}
\beta_g (g)&=& \mu \frac{dg}{d\mu}=-\frac{87-a^2}{12} \frac{g^2}{(4\pi)^2},
\\
\beta_\kappa (a)&=& \mu \frac{da}{d\mu}= 3a (a^2-1) \frac{g^2}{(4\pi)^2}
\end{eqnarray}
with $a=\frac{1}{1+\kappa}$. One notes that first of all, there
is a nontrivial ultraviolet fixed point at $a=1$ or $\kappa=0$
and that the coupling constant $g$ scales to zero as $\sim 1/\ln \mu$ in the
ultraviolet limit. This perturbative result may not be realistic
enough to be taken seriously -- and certainly cannot be pushed too high in
energy-momentum scale but for the reason given below, we think it
plausible that the Harada-Yamawaki results hold at least qualitatively as
high $T$ (or density) is reached.
In fact we expect that the gauge coupling should fall off to zero much faster
than logarithmically in order to explain what follows below.
\subsubsection*{Quark-number susceptibility}
\indent
As a first case for the scenario sketched above, we consider the
lattice gauge calculations by Gottlieb {{\it et al}}
\cite{gottliebchi} of the quark-number susceptibility defined by
\begin{eqnarray}
\chi_{\pm}=\left(\partial/\partial \mu_{u} \pm \partial/\partial \mu_d\right) (\rho_u\pm
\rho_d)
\end{eqnarray}
where the $+$ and $-$ signs define the singlet (isospin zero) and triplet
(isospin one) susceptibilities, $\mu_u$ and $\mu_d$ are the chemical
potentials of the up and down quarks and
\begin{eqnarray}
\rho_i={\rm Tr} N_i \exp\left[-\beta (H-\sum_{j=u,d} \mu_j N_j)\right]/V
\equiv \langle\la N_i\rangle\ra/V
\end{eqnarray}
with $N_i$ the quark number operator for flavor $i=u,d$.
One can see that the $\chi_+$ is in the $\omega$-meson channel and
the $\chi_-$ in the $\rho$-meson channel.
For $SU(2)$ symmetry, we expect $\chi_+=\chi_-$ and this is what one observes
in the lattice results. One can classify the lattice results by roughly
three temperature regimes. Focusing on the non-singlet susceptibility,
we see that in the very low temperature regime, the $\chi_-$ is dominated
by the $\rho$ meson and is small. As the temperature moves toward the onset of
the phase transition, the $\chi_-$ increases rapidly to near that of
non-interacting quarks. This regime may be described in terms of constituent
quarks. In RPA approximation of the constituent quark model as used
by Kunihiro\cite{kunihiro}, the susceptibility below the critical
temperature is
\begin{eqnarray}
\chi=\frac{\chi_0}{1+G_v \chi_0}
\end{eqnarray}
where $G_v$ is the coupling of the constituent quark (denoted $Q$) to the
vector meson $\rho$ and $\chi_0$ is the susceptibility for non-interacting
quarks which at $T\approx T_{\chi SR}$ where the dynamical mass $m_Q$ has
dropped to zero has the value
\begin{eqnarray}
\chi_0\approx N_f T^2
\end{eqnarray}
with $N_f$ the number of flavors. In terms of the gauge coupling of
(\ref{bandoL}), we have
\begin{eqnarray}
G_v\approx \frac{g^2}{4m_\rho^2}.
\end{eqnarray}
As noted by Kunihiro in the NJL model, the rapid increase of the
susceptibility\ can be understood by a steep drop in the vector coupling across
the $T_{\chi SR}$. Let us see what we obtain with the hidden gauge
symmetry Lagrangian (\ref{bandoL}). If we assume that the KSRF
relation (\ref{KSRF}) holds at $T$ near $T_{\chi SR}\approx 140$ MeV
(the recent work by Harada {{\it et al}}\ \cite{haradaPRL} supports this
assumption) and that $\chi_0\approx 2T_{\chi SR}^2$ for $N_f=2$, then we find
\begin{eqnarray}
\chi (T_{\chi SR})/\chi_0 (T_{\chi SR})\approx \frac{1}{1+\frac 12
(\frac{T_{\chi SR}}{f_\pi})^2} \approx 0.47\label{HGSchi}
\end{eqnarray}
with $\kappa=0$.
Here we are assuming that $f_\pi$ remains at its zero temperature value,
93 MeV, up to near $T_{\chi SR}$. The ratio (\ref{HGSchi})
is in agreement with the lattice data at $T\roughly< T_{\chi SR}$.
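As a trivial numerical check of Eq.~(\ref{HGSchi}), a Python one-liner with the inputs quoted above gives
\begin{verbatim}
T_c, f_pi = 140.0, 93.0                          # MeV
print(1.0 / (1.0 + 0.5 * (T_c / f_pi) ** 2))     # 0.4688..., i.e. ~0.47
\end{verbatim}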
Let us finally turn to the third regime, namely above
$T_{\chi SR}$.
It has been shown by Prakash and Zahed \cite{prakash} that with increasing
temperature, the susceptibility goes to its perturbative value which can
be calculated with perturbative gluon-exchanges.
The argument is made with the dimensional reduction at asymptotic temperatures,
but it seems to apply even at a temperature slightly above $T_{\chi SR}$.
We will later
make a conjecture why this is reasonable. We shall schematize
the Prakash-Zahed argument using the dimensionally reduced model of Koch
{{\it et al}}\ \cite{KSBJ} which hints at the onset of the Georgi vector limit.
In this model which exploits the ``funny space" obtained by interchanging
$z$ and $t$ (somewhat like ``moduli transform"), the helicity-zero state of
the $\rho$ meson is found to come out degenerate with the pion while the
helicity $\pm$ states are degenerate with each other. In finite temperature,
$z$ replaces $T$, so asymptotically in $T$, the configuration space with the
new $z$ becomes 2-dimensional with $x$ and $y$. The $\rho$ meson has gone
massless and behaves like a (charged) photon with helicities $\pm$ 1
perpendicular to the plane. The helicity-zero state originating from
the longitudinally polarized component of the $\rho$ before it went massless
now behaves as an isotriplet scalar. We identify this with the scalar $S(x)$
described above, a realization of the Georgi vector symmetry.
Let us assume then that the vector mesons have decoupled with $g=0$. Going to
the perturbative picture with quark-gluon exchanges, we take
one-gluon-exchange potential of Koch {{\it et al}},
\begin{eqnarray}
V(r_t)=\frac{\pi}{m^2}\frac 43 \bar{g}^2 T
\sigma_{z,1}\sigma_{z,2}\delta (r_t)
\label{V}
\end{eqnarray}
with $\bar{g}$ the color gauge coupling and $\delta (r_t)$ is the
$\delta$-function in the two-dimensional reduced space. Here $m=\pi T$ is
the chiral mass of quark or antiquark as explained in \cite{KSBJ}.
Possible constant terms that can contribute to eq.(\ref{V}) will be ignored
as in \cite{KSBJ}.
In order to evaluate the expectation value of the $\delta (r_t)$, we note that
the helicity-zero
$\rho$-meson wave function in two dimensions is well approximated by
\begin{eqnarray}
\psi_\rho\approx N e^{-r_t/a}
\end{eqnarray}
with $a\approx \frac 23$ fm and the normalization
\begin{eqnarray}
N^2=\frac{2}{\pi a^2}.
\end{eqnarray}
For the helicity $\pm 1$ $\rho$-mesons, $\sigma_{z,1}\sigma_{z,2}=1$,
so we find that the expectation value of $V$ is
\begin{eqnarray}
\langle V\rangle=\frac 83 \frac{\bar{g}^2 T}{\pi^2 T^2 a^2}.
\end{eqnarray}
Now summing the ladder diagrams to all orders, we get
\begin{eqnarray}
\frac{\chi}{\chi_0}=\left(1+\frac{\langle V \rangle}{2\pi T}\right)^{-1},
\label{ratio}
\end{eqnarray}
where the energy denominator $2\pi T$ corresponds to the mass of a pair
of quarks.
The lattice calculations \cite{gottliebchi} use $6/\bar{g}^2=5.32$
which would give $\alpha_s=0.07$ at scale of $a^{-1}$ where $a$ is the lattice
spacing. (The relevant scale may be more like $2\pi/a$.) Calculations
use 4 time slices, so the renormalized $\bar{g}$ is that appropriate
to $a^{-1/4}$. Very roughly we take this into account by multiplying the
above $\alpha_s$ by $\ln 4^2$; therefore using $\alpha_s\cong 0.19$.
With this $\alpha_s$ and the above wave function, we find
\begin{eqnarray}
\frac{\chi (T_{\chi SR}^+)}{\chi_0 (T_{\chi SR}^+)}\approx 0.68.
\label{pertchi}
\end{eqnarray}
This is just about the ratio obtained above $T_{\chi SR}$
in the lattice calculations. Remarkably the perturbative result
(\ref{pertchi}) above $T_c$ matches smoothly onto
the HGS prediction (\ref{HGSchi}) just below $T_c$. Neglecting logarithmic
dependence of the gauge coupling constant, eq. (\ref{ratio}) can be
written as
\begin{eqnarray}
\frac{\chi}{\chi_0} (T)\approx \frac{1}{1+0.46 (T_c/T)^2}
\end{eqnarray}
which follows closely the lattice gauge results of
Gottlieb {{\it et al}}\ \cite{gottliebchi}. We consider this an
indication for the Georgi vector symmetry in action, with the
induced flavor gauge symmetry in the hadronic sector ceding to the
fundamental color gauge symmetry of QCD in the quark-gluon sector.
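The number quoted in Eq.~(\ref{pertchi}), and the coefficient $0.46$ just above, follow from the stated inputs ($\alpha_s\cong 0.19$, $a\approx 2/3$ fm, $T=T_{\chi SR}\approx 140$ MeV); a minimal arithmetic sketch in Python ($\hbar c=197.3$ MeV\,fm is used here to convert units):
\begin{verbatim}
import numpy as np

alpha_s, hbarc = 0.19, 197.327            # hbarc in MeV fm
T, a = 140.0, (2.0 / 3.0) / hbarc         # T in MeV, a = 2/3 fm in MeV^-1
g2 = 4.0 * np.pi * alpha_s                # bar g^2
x = (8.0 / 3.0) * g2 / (np.pi ** 2 * (T * a) ** 2) / (2.0 * np.pi)  # <V>/(2 pi T)
print(1.0 / (1.0 + x), x)                 # ~0.685 and ~0.459
\end{verbatim}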
We should remark that to the extent that the screening mass obtained in
\cite{KSBJ} $m_\pi=m_S\approx 2\pi T$ is consistent with two non-interacting
quarks and that the corresponding wave functions obtained therein are
the same for the pion $\pi$ and the scalar $S$, we naturally expect the
relation
(\ref{equal}) to hold. A short calculation
of the matrix element of the axial current $A_\mu$
with the $\pi$ wave function in the
``funny space" gives, for large $T$ \footnote{We denote
the constant by $\tilde{f}_\pi$
to distinguish it from the physical pion decay constant $f_\pi$.}
\begin{eqnarray}
\tilde{f}_\pi \sim c\sqrt{\bar{g}} T\label{decay}
\end{eqnarray}
where $c$ is a constant $<< 1$ and $\bar{g}$ is
the color gauge coupling constant.
\subsubsection*{``Cool" kaons in heavy-ion collisions}
\indent
The vanishing of the hidden gauge coupling $g$ can have a dramatic effect
on the kaons produced in relativistic heavy-ion collisions. In particular,
it is predicted that the kaons produced from quark-gluon plasma would have a
component that has
a temperature much lower than that for other hadrons. This
scenario may provide an explanation of the recent preliminary
data\cite{stachel} on the $14.6$ GeV collision (experiment E-814)
\begin{eqnarray}
^{28}{\rm Si} + {\rm Pb}\rightarrow K^+ (K^-) + X
\end{eqnarray}
which showed cool components with effective
temperature of 12 MeV for $K^+$ and 10 MeV for $K^-$, which cannot be
reproduced in the conventional scenarios employed in event generators.
The latter give kaons of effective temperature $\sim 150$ MeV.
There are two points to keep in mind in understanding what is happening
here. Firstly, the Brookhaven AGS experiments determined the freeze-out --
the effective decoupling in the sense of energy exchange of pions and
nucleons -- at $T_{fo}\approx 140$ MeV\cite{BSW}\footnote{The original
determination of $T\roughly> 150$ MeV from the
ratio of isobars to nucleons by Brown, Stachel and Welke \cite{BSW}
was corrected about 10 MeV downward by taking effects such as the finite
width of the isobar into account.}. This is essentially the same as
the chiral transition temperature measured in lattice gauge calculations
\cite{lattice}. This suggests that the freeze-out for less strongly interacting
particles other than
the pion and the nucleon is at a temperature higher than $T_{\chi SR}$ and
that the pion and nucleon freeze out at about $T_{\chi SR}$. This means
that interactions in the interior of the fireball will be at temperature
greater than $T_{\chi SR}$. At this temperature, the vector coupling
$g$ would have gone to zero, so the Georgi vector limit would be
operative were it to be relevant.
The second point is that the fireball must expand slowly.
The slow expansion results because the pressure in the region
for some distance above $T_{\chi SR}$ is very low \cite{kochbrown}, the
energy in the system going into decondensing gluons rather than giving
pressure.
This results in an expansion velocity of $v/c\sim 0.1$.
In the case of 15 GeV/N Si on Pb collisions, the fireball has been measured
\cite{braun} through Hanbury-Brown-Twiss correlations of the pions to increase
from a transverse size of $R_T (Si)=2.5$ fm to $R_T=6.7$ fm, nearly a factor
of 3, before pions freeze out. With an expansion velocity of $v/c\sim 0.1$,
this means an expansion time of $\sim 25 - 30$ fm/c. (The full expansion
time cannot be measured from the pions which occur as a short flash at the
end.)
In a recent paper, V. Koch\cite{koch}
has shown that given a sizable effective attractive
interaction between the $K^+$ and the nucleon at the freeze-out phase,
a cool kaon component can be reproduced in the conditions specified above.
We argue now that such an attractive
interaction can result if the Georgi vector limit is realized.
The description by chiral perturbation theory\cite{knpw,LJMR,LBR,LBMR}
of kaon nuclear interactions and kaon condensation in
dense nuclear matter has shown that three mechanisms figure prominently
in kaon-nuclear processes at low energy: (1) the $\omega$ meson exchange
giving rise to repulsion for $K^+ N$ interactions and attraction
for $K^- N$; (2) the ``sigma-term" attraction for both $K^\pm N$:
(3) the repulsive ``virtual pair term." In effective chiral
Lagrangians, the first takes the form, $\sim \pm
\frac{1}{f^2} K^\dagger \partial_\mu K \bar{N} \gamma^\mu N$ for $K^\pm$,
the second $\sim \frac{\Sigma_{KN}}{f^2} K^\dagger K \bar{N} N$
and the third term $\sim (\partial_\mu K)^2 \bar{N}N$\footnote{The $\Lambda (1405)$
driven by the vector-exchange term plays an important role in $K^-p$ scattering
but an irrelevant role in kaon condensation and no role at all for
$K^+ N$ processes.}. Roughly the vector-exchange gives the repulsive
potential\footnote{Note that by $G$-parity, this potential turns attractive
for $K^- N$.}
\begin{eqnarray}
V_{K^+ N}\cong \frac 13 V_{NN}\cong 90\ {\rm MeV}\,\frac{\rho}{\rho_0}
\label{repulsion}
\end{eqnarray}
where $\rho_0$ is nuclear matter density. This term is proportional to
the hidden gauge coupling $g^2$. One can estimate the scalar attraction
by the ``sigma term"
\begin{eqnarray}
S_{K^+ N}\approx -\frac{\Sigma_{KN} \langle \bar{N}N\rangle}
{2 m_K f^2}\cong -45\ {\rm MeV}\,\frac{\rho_s}{\rho_0}
\label{attraction}
\end{eqnarray}
where $\rho_s$ is the scalar density and $\Sigma_{KN}$ is the $KN$ sigma
term. Being $G$-parity invariant, this remains attractive for $K^- N$
interactions. The virtual pair term
(proportional to $\omega^2$ where $\omega$ is
the kaon frequency) -- which is
related to Pauli blocking -- removes, at zero temperature, about 60 \%
of the attraction (\ref{attraction}). At low temperature, the net effect is
therefore highly repulsive for $K^+ N$ interactions.
It is easy to see what happens as $T\rightarrow T_{\chi SR}$. First of all,
part of the virtual pair repulsion gets ``boiled" off as discussed in
\cite{BKR}. What is more
important, if the Georgi vector limit is relevant, then
the vector mesons decouple with $g\rightarrow 0$, killing off the repulsion
(\ref{repulsion}). As a consequence, the residual attraction from the
scalar exchange remains. The calculation of Koch\cite{koch} supports this
scenario. Given that the vector coupling is absent, one can see that
both $K^+$ and $K^-$ will have a similar cool component.
\subsubsection*{Instanton-molecule model for chiral restoration}
\indent
As a final case, we mention a microscopic model that seems to
realize the Georgi vector symmetry at high temperature.
In a model where the chiral phase transition is described as a change
in the instanton liquid from a randomly distributed phase at low temperature
to an instanton-anti-instanton molecular phase above $T_{\chi SR}$, it has been
observed\cite{schafer} that the molecules get polarized in the time direction
and the interactions in the pion and the longitudinal vector channel
become identical. This leads to the degeneracy of the triplets of
$\pi$ and $\rho_\parallel$ which may be identified with the scalar $S$.
The interaction in the longitudinal vector-meson
channel becomes equally strong as attraction in
the scalar-pseudoscalar channel, while
transversely polarized vector mesons have no interaction.
If one assumes that the polarized molecules are the dominant agent for
interactions above $T_{\chi SR}$, then one can see that all coupling
constants in an NJL-type effective Lagrangian so generated could be
specified in terms of a {\it single} coupling constant, implying the swelling
of the symmetry in a manner closely paralleling the Georgi vector symmetry.
In this case, the restored
symmetry is $U(2)\times U(2)$ since the axial $U(1)_A$
is also supposed to be restored. Perturbative QCD effects are not expected
to modify this symmetry structure but it is not clear that no other
non-perturbative effects can enter to upset this. Nonetheless this is
a microscopic picture consistent with the Georgi vector symmetry.
\subsubsection*{Discussions}
\indent
We have seen that the behavior of the quark number
susceptibility with temperature, as calculated in lattice gauge calculations,
shows that the hadronic vector coupling disappears as $T$ moves upwards
through $T_{\chi SR}$, and that the perturbative color gluon exchange
describes the susceptibility well above $T_{\chi SR}$, as argued by
Prakash and Zahed\cite{prakash}. Somewhat surprising is the fact that
the perturbative description, which gives a $1/T^2$ behavior in the difference
$\chi (T)-\chi (0)$ between the susceptibility and that for free quarks,
sets in just above $T_{\chi SR}$; {{\it i.e}}, for this purpose, asymptotia is
$T \roughly> T_{\chi SR}$. Note that the screening mass of the
$\rho$-meson goes to $2\pi T$, its asymptotic value, as soon as $T$ reaches
$T_{\chi SR}$. Why is asymptotia reached so soon?
We suggest that the relevant parameter for reaching asymptotia for this
particular quantity is $T/m_{dyn}$; {{\it i.e}}, the ratio of temperature to
dynamically generated mass. The temperature $T_{\chi SR}$ need not be
very large as long as $m_{dyn}\rightarrow 0$ at $T_{\chi SR}$ in order to reach
asymptotia with this parameter.
While other viable explanations may be found in the future, as far as we know,
the soft kaons found in the E-814 experiment can be explained only if the
vector-meson coupling is essentially absent at the relevant temperature, and
this is a strong support for the Georgi picture in which the hidden gauge
coupling constant ``melts" at the temperature $T_{\chi SR}$.
Now the gauge symmetry in the hidden local symmetry
scheme is an ``induced" gauge symmetry lodged in hadronic variables.
In this sector, the fundamental color gauge symmetry is not
visible. It is the induced flavor one that is seen. What we observe is
then that as $T$ goes towards $T_{\chi SR}$, the induced gauge symmetry
gives way to the fundamental gauge symmetry. What is surprising is that
this changeover seems to take place suddenly, with increasing temperature,
the effective hadronic gauge symmetry applying for $T < T_{\chi SR}$,
and the fundamental color gauge symmetry being realized
perturbatively for $T > T_{\chi SR}$.
Finally we would like to suggest that while the Georgi vector limit
is relevant to the chiral symmetry restoration at high temperature and/or
high density, the
``mended symmetry" with $\kappa=0$ and $g\neq 0$ (with $m_\rho\neq 0$)
may also be relevant
for nuclear physics at temperatures and matter densities below the critical
values. As pointed out by Beane and van Kolck\cite{beane}, it is the
existence of a symmetry of this type (involving a light dilaton)
that allows one to linearize the non-linear chiral
Lagrangian of current algebra to the linear sigma model at some shorter
length scale, {{\it e.g.}}, in nuclear medium. As discussed elsewhere\cite{newbr}
it is this mechanism that allows a separation into two components of the
scalar $\chi$ field that enters into the trace anomaly of QCD and gives
rise to the medium-scaling (so-called ``BR scaling") proposed in
ref.\cite{br91} of effective chiral Lagrangians in nuclear medium.
For more discussion on this issue, see refs.\cite{newbr,mrelaf}.
\subsection*{Acknowledgments}
\indent
We are grateful for discussions with Tetsuo Hatsuda, Volker Koch, Maciej Nowak,
Koichi Yamawaki and Ismail Zahed.
\section*{Abstract}
{\bf
In high energy scattering, the multi-production process is unique in its relevance to the total cross section and in its global properties, such as rapidity and other kinematic distributions. If there is a hard interaction, the jet rate and structure are a good arena for perturbative quantum chromodynamics. However, once any hadron is specified, its properties and structure must make sense, while the global and/or perturbative chromodynamic mechanism can still put important constraints on its production. The relation of the hadron's properties and structure to its production cross section, distributions, etc.\ can be much more complex than to its decay width. On the one hand there are many difficulties and challenges in the calculations, while on the other hand, the production process provides a unique way to study the details of the properties and structure of the hadron, beyond the reach of its decay process. Here I review our works on such topics in recent years, mainly on multi-quark state production in multi-production processes and on the Bethe-Salpeter wave function in exclusive processes. For the former, I emphasize the unitarity of the hadronization process and relevant models, which explains why almost no multi-quark state is observed in multi-production processes. I also address how to calculate hadron molecule production in multi-production processes. The recently observed $T^+_{cc}$ is also discussed, with emphasis on its relevance to the colour and baryon number fluctuation of the preconfinement clusters. For the latter, I emphasize the Dirac structure of the hadron-quark coupling vertex.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
\label{sec:intro}
This symposium is about multiple particle dynamics. Multiple particle systems are mainly produced in high energy scatterings, of which the multi-production process is a dominant part of the total cross section. It is also the reason for the increasing total cross section of hadronic scattering \cite{cw}. This topic, as well as other global properties, jet observables, etc., are all important subjects in studying QCD in high energy scattering. At the same time, the multi-production process is also a copious source of various hadrons. This last fact indicates that the multi-production process is a good arena to study hadron structure.
Compared to other ways of studying hadron structure, such as static properties when available, decay widths and distributions, `projector/probe' scattering (e.g. DIS), etc., this way is more complex. This is exactly why it provides more complex information, even with different physical pictures. On the one hand, there are many difficulties and challenges in the calculations, while on the other hand, the production process provides a unique way to study the details of the properties and structure of the hadron, beyond the reach of its decay process. This study relates the production mechanism with the structure and properties of a specific hadron.
Our study has two aspects: one is the global properties, putting constraints on the production mechanism of hadrons; the other is looking at the concrete details of the structure of certain hadrons.
\section{Unitarity, colour and baryon fluctuation}
Prompt production of multi-quark states and/or bound states of other ingredient hadrons in high energy scattering can set a crucial benchmark for understanding the hadronization mechanism, since they contain more than three constituent quarks. In any hadronization process, the produced color-singlet (anti)quark system (e.g., a preconfinement cluster \cite{AV}) eventually transitions to various hadron states (mesons, baryons and beyond) with a total probability of exactly 1, which reflects the fact that there are no free quarks in the final states of any high energy process (confinement). This process conserves entropy, since it only makes a unitary transition on the density matrix. This consideration is very important, since such an analysis resolves and closes a long-standing paradox, namely that entropy would decrease and introduce unphysical predictions in a combination process/model. We have also pointed out that energy conservation and unitarity of the combination model are enough for this physical requirement.
The introduction of multi-quark states sets a challenge for the hadronization models/the mechanism dealing with the transition from color-singlet (anti)quark system to the hadron system. With these new states introduced, a more detailed investigation of the whole hadron Hilbert space as well as that of the quarks is needed.
As a matter of fact, from experiments, the production of ordinary mesons and baryons is dominant, so the production rate of exotic particles could be small if nonvanishing. However,
the present knowledge is not enough to judge how many kinds of multi-quark states there are and how they `share' the total probability, $\epsilon$. It is not easy to predict the production rate of a specific multi-quark state. What we can say, though, is that if there are many kinds of multi-quark hadrons, each only shares a small part of the small $\epsilon$. So the production rate of each is almost vanishing. This is consistent with the fact that almost no multi-quark state is observed in multi-production processes.
The exotic hadrons are observed to be produced from the decays of heavier hadrons (e.g. bottom hadrons) rather than promptly produced from multi-production processes in experiments. From the theoretical aspects, this fact is understood, not only because of the unitarity constraint but also the modest mass of the preconfinement clusters \cite{Han:2009jw,Jin:2016vjn,Jin:2010yd,Li:2019vrc}, which is independent from the collision energy, and results from the interplay between perturbative and non-perturbative QCD.
So to understand this topic, the preconfinement concept \cite{AV} is also very important, especially
in the case of a large number of quarks produced (e.g. in high energy nuclear collisions). This is consistent with unitarity, confinement, etc. Furthermore, one has to consider the fluctuations of colour and baryon number besides the hadronization models. Such fluctuations can also occur for other quantum numbers, e.g. strangeness.
The relation lies in the fact that all kinds of multi-quark hadrons have one common property, that
the bound (anti)quarks inside can be grouped into several clusters, with each
cluster {\it possibly} in colour-singlet. But the ways of grouping
these (anti)quarks are not unique;
and dynamically, the colour interactions in
the system via exchanging gluons can change the colour state of each
individual cluster, so each method of grouping/reduction seems to have no special physical
reason. This ambiguity has been discussed in other circumstances, under the name of 'colour
recombination/rearrangement' \cite{our1, our2, Jin:2013bra}. This fact shows that
multi-quark hadrons cannot be considered in a unique and uniform way.
This is a phenomenological
duality: even if the production of multi-quark hadrons is considered as
'hadron molecule formation' ('production definition'), the subsequent colour interactions
in the system can eventually turn this
'molecule' into a 'real' multi-quark hadron, at least with some
probability --- and vice versa \cite{Jin:2014nva, Jin:2015mla,Li:2020ggh, lsy2016,r2005lsy}. The baryon number fluctuation means that some cluster can have one or more extra $qqq (\overline{qqq})$. Based on this consideration, the colour and baryon number fluctuation of the preconfinement clusters has non-trivial relevance to the multi-quark hadrons, and can be applied to construct various models for a specific multi-quark state for comparison.
\section{Bethe-Salpeter wave function in production}
An example is to investigate the exclusive production ratio $\frac{\sigma(e^+e^-\to K_S K_L)}{\sigma(e^+e^-\to K^+K^-)}$ in the $e^+e^-$ continuum below the mass of $J/\Psi$, in the spirit of the Straton Model (i.e., a completely relativistic Bethe-Salpeter framework). The coupling of the virtual photon to the kaons proceeds via a triangle quark loop: the photon-quark-quark vertex is exactly that of the Standard Model, while the vertex between the quarks and the corresponding kaon is the Bethe-Salpeter vertex, written in terms of the valence quark fields. Hence the electromagnetic interaction and the non-perturbative QCD interaction are assigned separately. The difference between the two channels lies in the electric charges. The scalar wave function in the Bethe-Salpeter vertex can be considered to regularize and renormalize the otherwise divergent integrations, so the loop integral is finite. The ratio can be calculated straightforwardly \cite{Jin:2019gfa}, by adopting the vertex $\gamma^5 (1+ B_1 \gamma_\mu P^\mu/M) \phi( q^2)$ \cite{Bhatnagar:2005vw}. One gets
$\frac{\sigma(e^+e^-\to K_S K_L)}{\sigma(e^+e^-\to K^+K^-)} \cong (\frac{m_s-m_d}{M})^2$.
This is consistent with former experiments and can be further tested by BESIII measurements. The same method was also applied to the exclusive production process $e^+e^- \to J/\Psi+ \eta_c$ and explains the data well.
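As a rough numerical cross-check, the quoted ratio can be evaluated directly; the quark masses below are assumed illustrative values (current-quark masses; constituent masses would give a larger number).
\begin{verbatim}
# Illustrative evaluation of sigma(KS KL)/sigma(K+ K-), assuming
# m_s ~ 95 MeV, m_d ~ 4.7 MeV and M = m_K ~ 493.7 MeV (all assumed).
m_s, m_d, M = 95.0, 4.7, 493.7
print(((m_s - m_d) / M) ** 2)   # ~0.033
\end{verbatim}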
\section{Conclusion}
To study the structure of hadrons via their production is an important topic. The recently observed $T_{cc}$ \cite{LHCb:2021vvq,LHCb:2021auc} has been investigated taking into account the colour connection and baryon number fluctuation \cite{Li:2019vrc}, and a compact four-quark nature is favoured \cite{Jin:2021cxj}. The interplay between multi-HEAVY-quark production and multi-HEAVY-quark bound states is a quite new and fruitful direction in this field. Now events with three pairs of charm quarks can be carefully investigated at the LHC (see, e.g., \cite{CMS:2021qsn}).
\section*{Acknowledgements}
I thank all Collaborators.
This work is supported in part by National Natural Science Foundation of China (grant Nos. 11635009, 11775130).
\section{Introduction}
Ultraluminous X-ray sources (ULXs) are point-like off-nuclear extragalactic sources with X-ray luminosity higher than $\sim10^{39}$\,erg\,s$^{-1}$ \citep{fabbiano89, feng_soria11}. The apparent X-ray luminosity of ULXs exceeds the Eddington limit of a stellar black hole (BH) with a typical mass of $\sim10M_\odot$ found in Galactic BH X-ray binaries (BHXRBs, \citealt{remillard_mcclintock06}). It has been generally believed that ULXs are powered either by super-Eddington accretion onto stellar-mass black holes, or by intermediate mass black holes (IMBHs) with sub-Eddington accretion rates \citep[e.g.][]{colbert_mushotzky99, feng_soria11}. Observational evidence for stellar-mass BHs has been found in a few ULXs (e.g. M101 ULX-1, \citealt{liu_etal13}), while ESO 243-49 HLX1 \citep{farrell_etal09} and M82 X-1 \citep{feng_kaaret10, pasham_etal14}, both with relatively high peak X-ray luminosity ($L_\mathrm{X}\geq10^{41}$\,erg\,s$^{-1}$), are promising IMBH candidates. However, the detection of pulsations in the X-ray data of four ULXs (M82 X-2: \citealt{bachetti_etal14}; NGC 5907 ULX-1: \citealt{israel_etal17b}; NGC 7793 P13: \citealt{furst_etal16, israel_etal17a}; NGC 300 ULX-1: \citealt{carpano_etal18}) shows clear evidence that the accretors in those systems are neutron stars (NS), indicating that the apparent X-ray luminosities in those ULXs are at least 10 times the Eddington limit for a standard NS of mass $1.4M_\odot$.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{multi_mission_img.pdf}
\vspace*{-5mm}
\caption{\label{fig:srcimg}\textit{Main image}: DSS image of NGC 7090. The plus and cross symbols mark the positions of X1 (measured from \mission{Chandra}) and X2 (measured from \mission{Swift/XRT}), respectively. \textit{Bottom-left inset}: \mission{HST} F814W image of the region around X1. The circle indicates the 90 per cent \mission{Chandra} position uncertainty (0.54\,arcsec) of X1. \textit{Top-left inset}: \mission{HST} F814W image of the region around X2. The position uncertainty of X2 is 1.8\,arcsec (dashed circle, 90 per cent confidence level). \textit{Top-right inset}: combined (observations from 4 June to 2 July 2012) \mission{Swift/XRT} image. X2 is clearly detected by \mission{XRT}. \textit{Bottom-right inset}: mosaic \mission{XMM-Newton} EPIC image. The position of X1 measured from \mission{Chandra} is consistent with the source detected by \mission{XMM-Newton}.}
\end{center}
\end{figure*}
Some ULXs show low level short-term variability with fractional variability $\ll 10$ per cent, while some may be highly variable with fractional variability $\sim10-30\,$ per cent \citep[e.g.][]{heil_etal09, sutton_etal13, middleton_etal15}. ULXs with long-term flux variability by a factor of $\sim40-1000$, though quite rare, have also been found, e.g. NGC 3628 \citep{strickland_etal01}, M101 ULX-X1 \citep{mukai_etal05}, M82 X2 \citep{feng_kaaret07}, NGC 1365 ULX X2 \citep{soria_etal09}, CXOM31 J004253.1+411422 \citep{kaur_etal12} and XMMU J004243.6+412519 \citep{esposito_etal13} in M31 and NGC 5907 ULX-2 \citep{pintore_etal18}. All the four pulsar ULXs discovered so far are also highly variable (even transient) X-ray sources.
The X-ray spectra of many luminous ULXs ($L_\mathrm{X}>3\times10^{39}$\,erg\,s$^{-1}$) can generally be fitted with either a two component model (the ultraluminous state, UL), i.e.\ a multicolour disc blackbody (DBB) plus a Comptonisation component, or a single Comptonisation component \citep{gladstone_etal09, sutton_etal13}. For the less luminous ULXs ($L_\mathrm{X}<3\times10^{39}$\,erg\,s$^{-1}$), the spectra can be well described with a single $p$-free disc model (the broadened disc, BD; \citealt{sutton_etal13}) for which the local disc temperature $T(r)$ is proportional to $r^{-p}$. Some ULXs with luminosity higher than $10^{40}$\,erg\,s$^{-1}$ also show a spectral shape consistent with the BD model \citep{pintore_etal16}. Spectral variability has been revealed in some individual ULXs through detailed X-ray spectral or colour analysis \citep[e.g.][]{kubota_etal01, roberts_etal06, feng_kaaret09, kajava_poutanen09}. Some ULXs, similar to the Galactic X-ray binaries (XRBs), can change their spectral state dramatically \citep{sutton_etal13, marlowe_etal14}, e.g.\ Holmberg IX X-1 showed a two component disc plus power-law spectrum at lower luminosity, while the spectral shape changed to a broadened disc at higher luminosity \citep{walton_etal14, luangtip_etal16}. The spectral properties of the four pulsar ULXs are similar to those of typical ULXs, although pulsar ULXs show a further excess at high energy whose origin may be associated with the accretion column above the NS surface. However, even though less robustly, indications of such an excess are observed also in other non-pulsating ULXs, suggesting that the ULX population can host a larger number of neutron stars than previously expected \citep{walton_etal18a}.
In this letter, we report the X-ray properties of two transient ULXs (Fig.\,\ref{fig:srcimg}, hereafter X1, X2) found in the nearby star-forming galaxy NGC 7090. X1 is classified as an ULX candidate in \citet{lin_etal12} based on \mission{XMM-Newton} observations. X2 was detected in the 2012 \mission{Swift/XRT} observations and included in the first \mission{Swift}-XRT point source catalogue \citep[1SXPS;][]{evans_etal14}. In this work we identify it as an ULX with a peak 0.3--10\,keV X-ray luminosity higher than $3\times10^{39}$\,erg\,s$^{-1}$. We adopted a distance to NGC 7090 of $6.6$\,Mpc \citep{tully_etal92} throughout this work.
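For reference, the flux-to-luminosity conversion at this distance is straightforward; a minimal sketch (the example flux value is hypothetical):
\begin{verbatim}
import math

MPC_CM = 3.0857e24                 # cm per Mpc
d = 6.6 * MPC_CM                   # adopted distance to NGC 7090

def lum(flux):
    """Isotropic luminosity (erg/s) from an observed flux (erg/cm^2/s)."""
    return 4.0 * math.pi * d ** 2 * flux

print(f"{lum(8e-13):.2e}")         # ~4.2e39 erg/s for a 8e-13 flux
\end{verbatim}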
\section{Data analysis}
NGC 7090 was observed by \mission{XMM-Newton}, \mission{Chandra}, \mission{Swift} and \mission{Hubble} in the past decades. The observation log can be found in Table\,\ref{tab:obs_log}. In this section, we describe the details of the data analysis.
\input{obs_log}
\subsection{\mission{XMM-Newton}}
NGC 7090 was observed by \mission{XMM-Newton} on 2004 April 18 (ObsID: 0200230101), 2004 May 13 (ObsID: 0200230201) and 2007 October 5 (ObsID: 0503460101) with exposure times of 28, 19 and 31\,ks, respectively. The first observation was severely affected by high background flaring, and thus was excluded from this work. The \mission{XMM-Newton} Science Analysis System (\textsc{SAS}) software version 16.1 \citep{gabriel_etal04} was used to reduce the \mission{XMM-Newton} data. We ran the \textsc{SAS} tasks \textsc{emchain} and \textsc{epchain} to generate the event lists for the European Photon Imaging Camera (EPIC) MOS \citep{turner_etal01} and pn \citep{struder_etal01} detectors, respectively. Flaring background periods were identified and filtered from the event lists. The effective exposure times of the EPIC pn, M1 and M2 cameras, after filtering the high background periods, were 6024, 10390 and 10290\,s (4270, 12790 and 12980\,s) for the 2004 (2007) observation, respectively. Source detection was performed on all the individual EPIC images as well as the combined EPIC image for each observation using the \textsc{SAS} task \textsc{edetect\_chain}. The parameters \textit{likemin} (minimum detection likelihood) and \textit{mlmin} (minimum likelihood) of 8 and 10 were adopted as suggested by the \textsc{SAS} guide\footnote{\url{https://xmm-tools.cosmos.esa.int/external/sas/current/doc/eboxdetect/node3.html}}. We found that X2 was not detected in the individual EPIC images or in the combined image of the two observations, while X1 was only detected in the October 2007 observation (both in the individual images and the combined image). We thus only extracted X-ray spectra for source X1. A circular region with a radius of 12\,arcsec was used to extract the source spectra. Apart from X2, we note that X1 is also about 19\,arcsec away from the closest source, and it is $\sim5$ times brighter than that source during the 2007 \mission{XMM-Newton} observation. X-ray events with pattern $\le 12$ and $\le 4$ were selected to extract the MOS and pn spectra, respectively. The background spectra were extracted from a source-free circular region with a radius of 100\,arcsec located on the same CCD chip as the source for the MOS, while a circular region centred at the same CCD read-out column as the source position was selected for the pn. The \textsc{arfgen} and \textsc{rmfgen} tasks were used to generate the response files.
\subsection{\mission{Chandra}}
\mission{Chandra} observed NGC 7090 on 2005 December 18 (26ks, ObsID: 7060) and 2006 April 10 (31ks, ObsID: 7250) with the Advanced CCD Imaging Spectrometer (ACIS). \mission{Chandra} data were reduced with \textsc{CIAO} (\citealt{fruscione_etal06}, ver 4.10) software package and calibration files CALDB (ver 4.7.6). We ran \textsc{wavdetect} tool on the \mission{Chandra} observations to generate a source list. X2 was not detected in the two \mission{Chandra} observations, while X1 was only detected in the 2006 observation. The overall 90 per cent absolute astrometry uncertainty of \mission{Chandra} is $\sim0.8$\,arcsec\footnote{\url{http://cxc.cfa.harvard.edu/cal/ASPECT/celmon}}. Following the online data analysis guide\footnote{\url{http://cxc.harvard.edu/ciao/threads/reproject_aspect}}, we corrected the absolute astrometry by cross-matching \mission{Chandra} sources with the GAIA DR2 catalogue \citep{gaiadr2} using a correlation radius of 1\,arcsec. Three sources were selected to perform absolute astrometry correction. The \textsc{CIAO} task \textsc{wcs\_match} and \textsc{wcs\_update} were used to correct and update the aspect ratio. The residual rms scatter in the corrected X-ray positions of the GAIA sources is 0.26\,arcsec, which corresponds to a 90 per cent position error of $\approx0.53$\,arcsec (assuming Rayleigh distribution).
To extract the source spectrum for X1, we selected a circular region with a radius of 2\,arcsec. The background spectrum was extracted from an annulus (concentric with the source) region with an inner and outer radius of 6 and 24\,arcsec, respectively. The regions surrounding the events from the X2 (a circle with radius of 2.7\,arcsec) and the other close by source (a circle with radius of 5.2\,arcsec) were excluded from the annulus background region. The \textsc{CIAO} task \textsc{dmextract} was used to extract the source and background spectra. The response files are generated using the \textsc{mkacisrmf} and \textsc{mkarf} tasks.
\subsection{\mission{Swift/XRT} observations}
NGC 7090 was observed by the X-ray Telescope (\mission{XRT}, \citealt{burrows_etal05}) of the \mission{Neil Gehrels Swift Observatory} (\mission{Swift}) from 2006 to 2014. All the XRT data (21 observations) were analysed with the XRT online data analysis tool\footnote{\url{http://www.swift.ac.uk/user_objects}} \citep{evans_etal09}. We ran source detection using the \textsc{ximage} task \textsc{detect}. Source X1 was not detected in either the individual observations or the combined observation with a signal-to-noise ratio (S/N) higher than 2, while source X2 was detected in the observations performed on 2012 June 4 and July 2 (ObsIDs: 00032287008, 00032287011) with S/N higher than 3.7 \citep{evans_etal14}. To increase the S/N, we extracted the source and background spectra from a combined image of the four observations performed between 2012 June 4 and July 2 (ObsIDs: 00032287008-11, total exposure: 11\,ks, $\mathrm{S/N}>9$, hereafter \mission{Swift}1). X2 was also detected in the combined image of the observations performed from 2012 July 30 to August 20 (ObsIDs: 00032287011-15, total exposure: 8.7\,ks, $\mathrm{S/N}>5$, hereafter \mission{Swift}2). Source and background spectra were also extracted for this combined observation.
\subsection{\mission{HST}}
NGC 7090 was observed six times by \mission{HST} from 1994 to 2016. In this work, the observations with better S/N carried out on 2001 September 24 with the Wide Field and Planetary Camera 2 (WFPC2, filter $F814W$), on 2005 June 23 with the Advanced Camera for Surveys Wide Field Channel (ACS/WFC, filter $F625W$) and on 2012 April 9 with ACS/WFC (filters: $F814W$ and $F606W$) were used\footnote{Note that the 2007 \mission{HST} observation had a very long exposure time. However, no photometric measurements were given on the \mission{Hubble} Source Catalogue website. Thus this observation is not used in this work.}. The \mission{HST} images were retrieved from the \mission{Hubble Legacy Archive}\footnote{\url{http://hla.stsci.edu}} (HLA). The absolute astrometry for the 2012 observations (which have the best spatial resolution and S/N) was corrected by aligning the \mission{HST} images with the source positions found in the GAIA DR2 catalogue. The absolute astrometry accuracy of \mission{HST} after correction is $\sim1\,\mathrm{mas}$ (68 per cent confidence level), consistent with the position accuracy obtained from the HLA.
\section{Results}
The position of X1 was obtained from the 2006 \mission{Chandra} observation using the \textsc{wavdetect} task, which gives $\mathrm{RA}=21^\mathrm{h}36^\mathrm{m}31^\mathrm{s}.81$ and $\mathrm{Dec.}=\ang{-54;33;57.82}$, within the error circle of the position measured from the \mission{XMM-Newton} 2007 observation. Following \citet{evans_etal14}, we improved the position accuracy of \mission{Swift/XRT} by aligning the \mission{XRT} image with the sources detected by \mission{Chandra}. The improved position of X2 given by \mission{XRT} is then: $\mathrm{RA}=21^\mathrm{h}36^\mathrm{m}29^\mathrm{s}.11$ and $\mathrm{Dec.}=\ang{-54;33;48.31}$ (with 90 per cent uncertainty of $1.8\,\mathrm{arcsec}$), which is about 25.3\,arcsec away from X1.
\subsection{\label{xray_var}X-ray variability}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{lumi_lc.pdf}
\vspace*{-0.5cm}
\caption{\label{fig:lc}The long-term unabsorbed $0.3-10$\,keV light curves of X1 (top) and X2 (bottom). The errors on the luminosities are at 90 per cent confidence level, while the upper-limits (in grey) are at the $3\sigma$ confidence level.}
\end{center}
\end{figure}
Fig.~\ref{fig:lc} shows the long-term X-ray variability of the two ULXs. The unabsorbed 0.3--10\,keV X-ray luminosities of X1, estimated by fitting the X-ray spectra with an absorbed power-law model (see Sec.\,\ref{x1_spec} for more details), are $L_\mathrm{X}\sim1\times10^{38}$ and $\sim4\times10^{39}$\,erg\,s$^{-1}$ for the 2006 April \mission{Chandra} and 2007 October \mission{XMM-Newton} observations, respectively. The $3\sigma$ upper limits, estimated using the best-fitting absorbed power-law model of the 2007 \mission{XMM-Newton} observation, are plotted for the other observations (or the \mission{Swift/XRT} combined observations) for which X1 was not detected. The lowest X-ray luminosity was given by the 2005 \mission{Chandra} observation with a $3\sigma$ upper limit of $\sim5\times10^{37}$\,erg\,s$^{-1}$.
Source X2 was significantly detected by \mission{Swift/XRT} in the observations made on 2012 June 4 ($>3\sigma$) and July 2 ($>5\sigma$). It was also seen in the other two observations performed in 2012 June, albeit with less significance ($\sim2.6\sigma$). The average X-ray luminosity estimated by fitting the average X-ray spectrum of those four observations with an absorbed power-law model is $\sim4\times10^{39}$\,erg\,s$^{-1}$ (see Fig.~\ref{fig:lc}). X2 was not detected in any of the other individual observations. But it was clearly seen in the combined image of the \mission{Swift/XRT} data observed between 2012 July 30 and August 20 (\mission{Swift}2) with an estimated X-ray luminosity of $7\times10^{38}$\,erg\,s$^{-1}$ (see Fig.~\ref{fig:lc}). The lowest X-ray luminosity was calculated from the 2006 \mission{Chandra} observation with a $3\sigma$ upper limit of $8\times10^{36}$\,erg\,s$^{-1}$.
From Fig.~\ref{fig:lc}, it is clear that both X1 and X2 showed dramatic long-term X-ray variability. Compared to the 2005 \mission{Chandra} observation, the highest X-ray luminosities of X1 (the 2007 \mission{XMM-Newton} observation) and X2 (the 2012 \mission{Swift/XRT} observations) increased by factors of $>80$ and $>300$, respectively. We also analysed the temporal properties of X1 within the 2007 \mission{XMM-Newton} data. No significant short-term (e.g.\ minutes to hours) variability was found in the 31\,ks exposure time. We did not find any coherent signal in the power spectrum created using the $0.3-10\,\mathrm{keV}$ \mission{XMM-Newton} data. Assuming a sinusoidal modulation, a $3\sigma$ upper limit of $\sim60$ per cent on the pulsed fraction (defined as the semi-amplitude of the sinusoid divided by the source average count rate) was derived using the \mission{XMM-Newton} data for periods in the range of $\sim0.4-150\,\mathrm{s}$.
\input{./table_fitting.tex}
\subsection{X-ray spectral analysis}
X-ray spectral analysis was carried out for the 2006 \mission{Chandra} (background subtracted 0.3--10\,keV photon counts $C_\mathrm{sub}=34$) and 2007 \mission{XMM-Newton} ($C_\mathrm{sub}=350$, 353 and 283 for EPIC M1, M2 and pn, respectively) observations of X1, as well as for the two X-ray spectra of X2 ($C_\mathrm{sub}=110$ and 15 for \mission{Swift}1 and \mission{Swift}2, respectively). \textsc{Xspec} (\citealt{arnaud96}, ver.\ 12.10) is used to fit the X-ray spectra. The Cash statistic (wstat in \textsc{Xspec}) is used due to the relatively low photon counts. Galactic and host galaxy absorption are included in all models (models \textsc{tbabs} and \textsc{ztbabs} in \textsc{Xspec}; abundances are set to \texttt{wilm}, \citealt{wilms_etal00}). The Galactic absorption is fixed at $5.4\times10^{20}$\,$\mathrm{cm}^{-2}$ \citep{kalberla_etal05}. Quoted uncertainties on spectral parameters are the 90 per cent confidence limits unless stated otherwise.
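For concreteness, a minimal PyXspec sketch of this fit setup (the file name and starting values are hypothetical; the actual analysis fitted the MOS and pn spectra simultaneously):
\begin{verbatim}
# Minimal PyXspec sketch (hypothetical file name and start values)
from xspec import AllData, Model, Fit, Xset

Xset.abund = "wilm"                  # Wilms et al. (2000) abundances
AllData("pn_src.pha")                # hypothetical spectrum file
m = Model("tbabs*ztbabs*powerlaw")
m.TBabs.nH = 0.054                   # Galactic column in 1e22 cm^-2
m.TBabs.nH.frozen = True
m.zTBabs.Redshift = 0.0
Fit.statMethod = "cstat"             # gives W-stat with a loaded background
Fit.perform()
\end{verbatim}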
\subsubsection{\label{x1_spec}Source X1}
The EPIC 0.3--10\,keV M1, M2 and pn spectra were fitted simultaneously. A normalization factor is included to account for the calibration differences between the detectors. We fitted the data with two simple models: a \textsc{powerlaw} model (\textsc{cons*tbabs*ztbabs*powerlaw} in \textsc{Xspec}) and a \textsc{diskbb} model (\textsc{cons*tbabs*ztbabs*diskbb}). Both models can fit the data well (see Fig.\,\ref{fig:spec}), with $\Gamma = 1.55\pm0.15$ in the \textsc{powerlaw} model and an inner disc temperature $T_\mathrm{in}=2.1^{+0.3}_{-0.2}$\,keV in the \textsc{diskbb} model. Best-fitting values of the intrinsic absorption are $5.0^{+1.0}_{-1.0}\times10^{21}$\,$\mathrm{cm}^{-2}$ (\textsc{powerlaw}) and $3.0^{+1.0}_{-1.0}\times10^{21}$\,$\mathrm{cm}^{-2}$ (\textsc{diskbb}). The estimated unabsorbed 0.3--10\,keV X-ray luminosity is higher than $3\times10^{39}$\,erg\,s$^{-1}$. We also tried to fit the data with two component models, i.e.\ a \textsc{powerlaw} plus a \textsc{diskbb}, which gave $T_\mathrm{in}=0.22^{+0.18}_{-0.08}$\,keV and $\Gamma=1.44^{+0.26}_{-0.29}$, or a \textsc{blackbody} plus a \textsc{diskbb} ($T_\mathrm{BB}=1.41^{+0.24}_{-0.16}$\,keV, $T_\mathrm{in}=0.40^{+0.14}_{-0.09}$\,keV). These two component models improve the fit only slightly compared with the single \textsc{powerlaw} model ($\Delta{C}=5.4$ and $11.3$ for 2 d.o.f., see Table\,\ref{tab:fit_para}). The best-fitting models as well as the data-to-model ratios for the \textsc{powerlaw} and \textsc{diskbb} models are shown in Fig.\,\ref{fig:spec}. The 2006 \mission{Chandra} data were fitted with the two simple models, with the intrinsic column fixed at the values found from the \mission{XMM-Newton} data. Although with large uncertainties, the data suggest a steeper photon index ($\Gamma=2.67^{+0.69}_{-0.64}$) or a lower disc temperature ($T_\mathrm{in}=0.64^{+0.28}_{-0.17}$\,keV), indicating a change in spectral shape. The best-fitting parameters of the different models can be found in Table\,\ref{tab:fit_para}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{spec_multi_bw_po_dbb.pdf}
\vspace*{-0.5cm}
\caption{\label{fig:spec}Top panel: the X-ray spectra of X1 during the 2007 observation: open circle for EPIC-M1, filled circle for EPIC-M2, square for EPIC-pn. The grey solid and dashed lines show the best-fitting \textsc{powerlaw} and \textsc{diskbb} model, respectively. The lower two panels show the data/model ratio for different models.}
\end{center}
\end{figure}
\subsubsection{\label{x2_spec}Source X2}
The same simple models were fitted to the \mission{Swift/XRT} spectra of X2. Both models can fit the \mission{Swift}1 spectrum well, with best-fitting temperature $T_\mathrm{in}=1.69^{+1.17}_{-0.48}$\,keV or photon index $\Gamma=1.61^{+0.55}_{-0.50}$. Due to the low S/N, we did not fit the X-ray spectrum with more complicated models. The best-fitting results for the \mission{Swift}2 spectrum were consistent with those of the \mission{Swift}1 data, though with large uncertainties (the intrinsic column was fixed at the values found from \mission{Swift}1) and a relatively small change in flux (by a factor of $\sim4$).
\subsection{Optical counterpart}
We found one optical counterpart (see Fig.\,\ref{fig:srcimg}) within the position uncertainty of X1 in the \mission{HST} images. The AB magnitudes of the X1 counterpart (obtained from the \mission{Hubble} Source Catalogue) are: $23.27^{+0.06}_{-0.06}$, $23.32^{+0.01}_{-0.01}$, $23.28^{+0.02}_{-0.02}$ and $23.45^{+0.02}_{-0.02}\,\mathrm{mag}$ for $WFPC2/F814W$ (2001), $ACS/F625W$ (2005), $ACS/F814W$ (2012) and $ACS/F606W$ (2012), respectively. Assuming $N_\mathrm{H}=2.21\times10^{21}A_\mathrm{V}$ \citep{guver_ozel09} and $E(B-V)=A_V/3.1$, the estimated extinctions for $ACS/F814W$, $ACS/F625W$ and $ACS/F606W$ are $A_{F814W}=1.3$, $A_{F625W}=1.9$ and $A_{F606W}=2.1$ \citep{sirianni_etal05} with $N_\mathrm{H, host}=3\times10^{21}\,\mathrm{cm^{-2}}$, respectively. The estimated $V$, $R$ and $I$ band magnitudes, transformed from the $ACS/WFC$ AB magnitudes \citep{sirianni_etal05}, are 21.28, 21.24 and 21.49\,mag, respectively. No significant variability is found for the $F814W$ flux between the two \mission{HST} observations.
Multiple optical counterparts were found within the position uncertainty of X2 in the \mission{HST} images. The magnitudes of the brightest source in the 2012 \mission{HST} observation were 24.86 and 25.93 mag in $F814W$ and $F606W$ band,\footnote{There is no photometric measurement for the 2005 observation on the \mission{Hubble} Source Catalogue.} respectively. Assuming $N_\mathrm{H, host}=3\times10^{21}\,\mathrm{cm^{-2}}$, the upper limit magnitudes for the X2 counterpart are $24.0$ and $23.1$ mag in the $V$ and $I$ bands, respectively.
\section{Discussion}
In this letter, we report the X-ray properties of two highly variable ULXs in the nearby star-forming galaxy NGC 7090. Source X1 has been classified as an ULX candidate in the catalogue compiled by \citet{lin_etal12} using \mission{XMM-Newton} data. Source X2 is a new ULX detected in the 2012 \mission{Swift/XRT} observations. The long-term X-ray light curves show that both sources are highly variable: the flux changed by a factor of $>80$ for X1 and $>300$ for X2. AGNs are known to be highly variable, especially in the X-ray bands. However, variability by a factor of more than $80$ is rare in AGNs \citep[e.g.][]{strotjohann_etal16}. We further explore the possibility that the two sources are background AGNs by considering the $\log{N}-\log{S}$ distribution of extragalactic X-ray sources. The expected number of AGNs with X-ray flux higher than $\sim10^{-13}$\,erg\,cm$^{-2}$\,s$^{-1}$ within the approximately $7.0\times1.5\,\mathrm{arcmin}^2$ area covered by NGC 7090 is smaller than 0.04 \citep{moretti_etal03}, suggesting that X1 and X2 are unlikely to be background AGNs.
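The order of magnitude is easy to reproduce; a minimal sketch (the AGN surface density used is an assumed round number, not the exact value from \citealt{moretti_etal03}):
\begin{verbatim}
# Expected number of background AGNs in the field (assumed density)
area_deg2 = (7.0 * 1.5) / 3600.0   # 7.0 x 1.5 arcmin^2 in deg^2
density = 10.0                     # assumed N(>1e-13 cgs) per deg^2
print(area_deg2 * density)         # ~0.03 expected AGNs
\end{verbatim}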
Most Galactic BHXRBs are transient X-ray sources with dramatic X-ray variability. If X1 and X2 are similar to BHXRBs, i.e.\ stellar-mass BHs with sub-Eddington accretion rates, then the mass of the BH should be around $30M_\odot$, assuming the observed peak luminosities are close to the Eddington luminosity (i.e.\ in the soft state, \citealt{remillard_mcclintock06}). However, the temperatures ($2.07^{+0.30}_{-0.23}$ and $1.69^{+1.17}_{-0.48}$\,keV for X1 and X2, respectively) obtained from the X-ray spectral analysis are inconsistent with the prediction for a disc around a $30M_\odot$ BH. The X-ray data of X1 have slightly better S/N, and can be fitted with a two component model. The \textsc{powerlaw+diskbb} model showed a hard ($\Gamma=1.44^{+0.26}_{-0.29}$) photon index with a weak and cool disc (the ratio of the disc flux to the total flux $f_\mathrm{disc}\sim0.19$, $T_\mathrm{in}=0.22^{+0.18}_{-0.08}$\,keV), which is consistent with the low/hard state of BHXRBs. If this is the case, the peak luminosity of X1 could be even higher, and thus the BH mass should be much larger (e.g.\ an IMBH). We note, however, that the \textsc{powerlaw+diskbb} model does not improve the fit significantly compared to the single component models. X1 also showed a transition in spectral shape, with a much softer or cooler spectrum in 2006. This is reminiscent of the quiescent state ($\Gamma=1.5-2.1$, \citealt{remillard_mcclintock06}) of BHXRBs.
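The $\sim30M_\odot$ figure follows from equating the peak luminosity to the Eddington limit; a minimal sketch:
\begin{verbatim}
# BH mass if the peak luminosity equals the Eddington limit,
# using L_Edd ~ 1.26e38 (M/Msun) erg/s for hydrogen accretion
L_peak = 4e39                # erg/s, observed peak of X1/X2
print(L_peak / 1.26e38)      # ~32 Msun
\end{verbatim}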
Alternatively, super-Eddington accretion onto an NS or a BH with mass less than $10M_\odot$ cannot be ruled out. High X-ray variability, though rare, has been found in some ULXs (e.g.\ \citealt{pintore_etal18} and references therein). All four pulsating ULXs also showed high level flux variability. In a recent paper, \citet{walton_etal18b} showed that the broadband X-ray spectra of the bright ULXs can be fitted with a model consistent with super-Eddington accretion onto NSs, which may suggest that the compact objects in many ULXs are neutron stars. The photon index and the disc temperature of X2, when fitted with the two simple models, are in agreement with the typical values found in ULXs with low S/N data (e.g.\ \citealt{makishima_etal00}) as well as with the transient pulsar M82 X2 in a similar luminosity range \citep{brightman_etal16}. Similar to the other ULXs, the spectra of X1 in the high luminosity state can be described with a hot blackbody component plus a cool multicolour disc component. Though X1 did not show strong variability or pulsations during the 2007 \mission{XMM-Newton} observation, it is known that the short-term variability and pulsations in some pulsar ULXs are transient. To further confirm the nature of the compact objects of these two ULXs, future high S/N X-ray observations are necessary.
We did not find significant variability in the $F814W$ flux of the X1 optical counterpart between the two \mission{HST} observations, which may suggest that the optical emission comes from the companion star. However, future simultaneous optical and X-ray observations are needed to confirm the nature of the companion star as well as the origin of the optical emission.
\section*{Acknowledgements}
ZL thanks the support from the China Scholarship Council. This work is supported by the Strategic Pioneer Program on Space Science, Chinese Academy of Sciences, Grant No. XDA15052100. PTOB acknowledges support from STFC. JPO, PAE and KLP acknowledge support from the UK Space Agency. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. This work is based on observations obtained with \mission{XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA, and on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). The \mission{Chandra} data were obtained from the \mission{Chandra Data Archive}. This research has made use of software provided by the Chandra X-ray Center (CXC) in the application package CIAO.
\bibliographystyle{mnras}
\section{Introduction}
In this article, we explain the f\/irst step towards the application of \emph{algebraic} Bethe ansatz to
the elliptic (or dynamical) quantum group $E_{\tau,\eta}(so_3)$. The elliptic quantum group is
the algebraic structure associated to elliptic solutions of the star-triangle relation
which appears in interaction-round-a-face models in statistical mechanics. As it was shown
by Felder \cite{Fe}, this structure is also related to Knizhnik--Zamolodchikov--Bernard equation of conformal
f\/ield theory on tori. In fact, to each solution of the star-triangle relation (see \cite{Ji})
a dynamical $R$-matrix can be associated. This $R$-matrix, in turn, will def\/ine
an algebra similar to quantum groups appearing in the quantum inverse scattering method (QISM).
Despite all the dif\/ferences,
this new structure preserves a prominent feature of quantum groups: a tensor product of representations
can be def\/ined.
The adjective dynamical refers to the fact that the $R$-matrix appearing in these
structures contains a parameter which in the classical limit will be interpreted as the position coordinate
on the phase space of a classical system and the resulting classical $r$-matrix will depend on it. In
the quantum setting, apart from the appearance of this extra parameter the Yang--Baxter equation (YBE) is
also deformed. At the technical level, the main dif\/ference between usual quantum groups and the one
we are about to describe lies not so much in the elliptic nature of the appearing functions as rather in
the introduction of the extra ``dynamical'' parameter and the corresponding deformation the YBE.
In QISM, the physically interesting quantity is the transfer matrix. The Hamiltonian of the
model and other observables are derived from it. The knowledge of its spectrum is thus essential.
Dif\/ferent kinds of methods under the unifying name of Bethe
ansatz have been developed to calculate the eigenvalues of the transfer matrix \cite{Fa,Ko,Ku}.
The question whether the algebraic Bethe ansatz (ABA) technique can be applied to transfer matrices
appearing in the context of dynamical quantum groups has received an af\/f\/irmative answer from Felder and
Varchenko \cite{Fe2,FeVa}.
They showed how to implement ABA for the elliptic quantum group $E_{\tau,\eta}(sl_2)$, they also showed its
applications to IRF models and Lam\'e equation. Later, for the $E_{\tau,\eta}(sl_n)$ elliptic
quantum group the nested Bethe ansatz method was used \cite{Es,Sa}
and a relation to Ruijsenaars--Schneider~\cite{Sa} and quantum Calogero--Moser Hamiltonians was established \cite{ABB}.
In the f\/irst section we introduce the basic def\/initions of dynamical $R$-matrix, Yang--Baxter equation, representations,
operator algebra and commuting transfer matrices.
We def\/ine ele\-ments~$\Phi_n$ in the operator algebra which have the necessary symmetry properties to be the
creation operators of the corresponding Bethe states. As it turns out, the creation operators are not
simple functions of the Lax matrix entries, unlike in \cite{FeVa}, but they are complicated
polynomials of three generators $A_1(u)$, $B_1(u)$, $B_2(u)$ in the elliptic operator algebra. We give the
recurrence relation which def\/ines the creation operators. Moreover, we give explicit formulas of
the eigenvalues and Bethe equations for $n = 1,2$. This strongly suggests that for higher $n$ these are the correct choice
of creation operators.
Derivation of the eigenvalues and the corresponding Bethe equations for general $n$ (from the
usual cancelation of the unwanted terms) will be published elsewhere.
\section[Representations of $E_{\tau,\eta}(so_3)$ and transfer matrices]{Representations
of $\boldsymbol{E_{\tau,\eta}(so_3)}$ and transfer matrices}
\subsection[Definitions]{Def\/initions}
Let us f\/irst recall the basic def\/initions which will enter our construction. First, we f\/ix two complex numbers
$\tau$, $\eta$ such that ${\rm Im}(\tau) > 0$.
The central object in this paper is the $R$-matrix $R(q,u)$ which depends on two arguments $q,u \in \mathbb{C}$:
the f\/irst one is referred to
as the dynamical parameter, the second one is called the spectral parameter. The elements of the $R$-matrix
are written in terms of Jacobi's theta function:
\[
\vartheta(u)=-\sum_{j\in \mathbb{Z}}\exp \left(\pi i\left(j+\frac{1}{2}\right)^2\tau+2\pi i \left(j+\frac{1}{2}\right)
\left(u+\frac{1}{2}\right)\right).
\]
This function has two essential properties. It is quasiperiodic:
\[
\vartheta(u+1)=-\vartheta(u), \qquad \vartheta(u+\tau)=-e^{-\pi i \tau-2\pi i u}\vartheta(u)
\]
and it verif\/ies the identity:
\begin{gather*}
\vartheta(u+x)\vartheta(u-x)\vartheta(v+y)\vartheta(v-y)=\vartheta(u+y)\vartheta(u-y)\vartheta(v+x)\vartheta(v-x)\\
\qquad{}+\vartheta(u+v)\vartheta(u-v)\vartheta(x+y)\vartheta(x-y).
\end{gather*}
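Both the quasiperiodicity and the three-term identity are easy to check numerically by truncating the series; a minimal sketch (truncation order and sample points are chosen arbitrarily):
\begin{verbatim}
import numpy as np

def theta(u, tau, nmax=40):
    # truncated series for Jacobi's theta function given above
    a = np.arange(-nmax, nmax + 1) + 0.5
    return -np.sum(np.exp(1j*np.pi*a**2*tau + 2j*np.pi*a*(u + 0.5)))

tau = 0.8j
u, v, x, y = 0.31 + 0.07j, 0.12, 0.23, 0.41
print(abs(theta(u + 1, tau) + theta(u, tau)))                  # ~0
print(abs(theta(u + tau, tau)
          + np.exp(-1j*np.pi*tau - 2j*np.pi*u)*theta(u, tau))) # ~0
lhs = theta(u+x,tau)*theta(u-x,tau)*theta(v+y,tau)*theta(v-y,tau)
rhs = (theta(u+y,tau)*theta(u-y,tau)*theta(v+x,tau)*theta(v-x,tau)
       + theta(u+v,tau)*theta(u-v,tau)*theta(x+y,tau)*theta(x-y,tau))
print(abs(lhs - rhs))                                          # ~0
\end{verbatim}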
The entries of the $R$-matrix are written in terms of the following functions
\begin{gather*}
g(u) = \frac{\vartheta(u-\eta) \vartheta(u-2\eta)}{\vartheta(\eta) \vartheta(2 \eta)},\\
\alpha(q_1,q_2,u) = \frac{\vartheta(\eta-u)\vartheta(q_{12}-u)}{\vartheta(\eta)\vartheta(q_{12})},\\
\beta(q_1,q_2,u) = \frac{\vartheta(\eta-u)\vartheta(u)\vartheta(q_{12}-2\eta)}{\vartheta(-2\eta)\vartheta(\eta)\vartheta(q_{12})},\\
\varepsilon(q,u)= \frac{\vartheta(\eta+u)\vartheta(2\eta-u)}{\vartheta(\eta)\vartheta(2\eta)}-\frac{\vartheta(u)\vartheta(\eta-u)}{\vartheta(\eta)\vartheta(2\eta)}
\left( \frac{\vartheta(q+\eta)\vartheta(q-2\eta)}{\vartheta(q-\eta)\vartheta(q)}+\frac{\vartheta(q-\eta)\vartheta(q+2\eta)}{\vartheta(q+\eta)\vartheta(q)}\right),\\
\gamma(q_1,q_2,u)= \frac{\vartheta(u)\vartheta(q_1+q_2-\eta-u)\vartheta(q_1-2\eta)\vartheta(q_2+\eta)}{\vartheta(\eta)\vartheta(q_1+q_2-2\eta)\vartheta(q_1+\eta)\vartheta(q_2)},\\
\delta(q,u) = \frac{\vartheta(u-q)\vartheta(u-q+\eta)}{\vartheta(q)\vartheta(q-\eta)}.
\end{gather*}
The $R$-matrix itself will act on the tensor product $V \otimes V$ where $V$ is a three-dimensional
complex vector space with the standard basis $\{e_1,e_2,e_3\}$. The matrix units $E_{ij}$ are def\/ined
in the usual way: $E_{ij}e_k=\delta_{jk}e_i$. We will also need the following diagonal matrix
later on: $h=E_{11}-E_{33}$.
Now we are ready to write the explicit form of the $R$-matrix. The matrix is obtained via a~gauge transformation
from the solution
of the star-triangle relation which is associated to the vector representation of $B_1$ \cite{Ji}. According to
a remark in \cite{Ji}, that solution can also be derived as a symmetric tensor product (i.e. fusion) of the $A_1$
solution
\begin{gather}
R(q,u)=g(u)E_{11}\otimes E_{11}+g(u)E_{33}\otimes E_{33}+\varepsilon(q,u)E_{22}\otimes E_{22}\nonumber\\
\phantom{R(q,u)=}{}+\alpha(\eta,q,u)E_{12}\otimes E_{21}+\alpha(q,\eta,u)E_{21}\otimes E_{12}
+\alpha(-q,\eta,u)E_{23}\otimes E_{32}\nonumber\\
\phantom{R(q,u)=}{}+ \alpha(\eta,-q,u)E_{32}\otimes E_{23}\nonumber\\
\phantom{R(q,u)=}{}+ \beta(\eta,q,u) E_{11}\otimes E_{22}+\beta(q,\eta,u) E_{22}\otimes E_{11}
+\beta(-q,\eta,u) E_{22}\otimes E_{33}\nonumber\\
\phantom{R(q,u)=}{}+ \beta(\eta,-q,u)E_{33}\otimes E_{22}\nonumber\\
\phantom{R(q,u)=}{}+\gamma(-q,q,u)E_{11}\otimes E_{33}+\gamma(-q,\eta,u)E_{12} \otimes E_{32}
- \gamma(\eta,q,u) E_{21} \otimes E_{23}\nonumber\\
\phantom{R(q,u)=}{}+ \gamma(q,-q,u) E_{33} \otimes E_{11}+ \gamma(q,\eta,u) E_{32} \otimes E_{12}
- \gamma(\eta,-q,u) E_{23} \otimes E_{21}\nonumber\\
\phantom{R(q,u)=}{}+ \delta(q,u) E_{31}\otimes E_{13}+\delta(-q,u) E_{13} \otimes E_{31}.\label{Rmat}
\end{gather}
This $R$-matrix also enjoys the unitarity property:
\[
R_{12}(q,u)R_{21}(q,-u)=g(u)g(-u)\mathbbm{1}
\]
and it is of zero weight:
\[
\left[h \otimes \mathbbm{1}+\mathbbm{1} \otimes h,R_{12}(q,u)\right]=0 \qquad (h \in \mathfrak{h}).
\]
The $R$-matrix also obeys the dynamical quantum Yang--Baxter equation (DYBE) in ${\rm End}(V \otimes V \otimes V)$:
\begin{gather*}
R_{12}(q-2\eta h_3,u_{12}) R_{13}(q,u_1) R_{23}(q-2\eta h_1,u_2)\\
\qquad{}= R_{23}(q,u_2)R_{13}(q-2\eta h_2,u_1)
R_{12}(q,u_{12}),
\end{gather*}
where the ``dynamical shift'' notation has the usual meaning:
\begin{gather}\label{shift}
R_{12}(q-2\eta h_3,u) \cdot v_1\otimes v_2 \otimes v_3 = \left(R_{12}(q-2\eta \lambda,u) v_1\otimes v_2\right)
\otimes v_3,
\end{gather}
whenever $h v_3= \lambda v_3$. Shifts on other spaces are def\/ined in an analogous manner.
Notice that the notion and notation of ``dynamical shift'' can be extended
to dif\/ferent situations as well, even if the appearing (possibly dif\/ferent) vector spaces $V_i$
are not 3-dimensional. For this, one only needs to
verify two conditions: that an action of $h$ is def\/ined on each $V_i$, and that each $V_i$ is a direct sum of
the weight subspaces $V_i[\lambda]$ def\/ined by that action of $h$. It is easy to see then that equation (\ref{shift})
makes sense.
Furthermore, along these lines the notion
of dynamical quantum Yang--Baxter equation and of the corresponding algebraic structures can be extended to the case
where $h$ is replaced by a higher rank Abelian Lie algebra $\mathfrak{h}$. However, in this paper
we only deal with a special rank-one case, so from now on $\mathfrak{h}=\mathbb{C}h$. It will be clear
from the context how to generalize the relevant notions to the higher rank case.
Let us also describe a more intuitive way of looking at this shift. Def\/ine f\/irst
the shift operator acting on functions of the
dynamical parameter:
\[
\exp(2\eta \partial_q) f(q)=f(q+2\eta) \exp(2\eta \partial_q).
\]
Then equation (\ref{shift}) can also be written in the following form:
\[
R_{12}(q-2\eta h_3,u) = \exp(-2\eta h_3 \partial_q) R_{12}(q,u) \exp(2\eta h_3 \partial_q)
\]
In the sequel we will use whichever def\/inition f\/its best at the particular point in our calculation.
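To see that the two forms of the shift agree, let $hv_3=\lambda v_3$; then for any function $F$ of the dynamical parameter
\[
\exp(-2\eta h_3 \partial_q)\,F(q)\,\exp(2\eta h_3 \partial_q)\; v_1\otimes v_2\otimes v_3
= F(q-2\eta\lambda)\; v_1\otimes v_2\otimes v_3 ,
\]
which is exactly (\ref{shift}) with $F(q)=R_{12}(q,u)$.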
\subsection{Representation, operator algebra}
Now we describe the notion of representation of (or module over) $E_{\tau,\eta}(so_3)$. It
is a pair $(\mathcal{L}(q,u),W)$ where $W$ is a diagonalizable $\mathfrak{h}$-module, that is, $W$ is a direct sum of
the weight subspaces
$W=\oplus_{\lambda \in \mathbb{C}}W[\lambda]$ and $\mathcal{L}(q,u)$ is an operator in $\mathrm{End}(V \otimes W)$ obeying:
\begin{gather}
R_{12}(q-2\eta h_3,u_{12}) \mathcal{L}_{13}(q,u_1) \mathcal{L}_{23}(q-2\eta h_1,u_2)\nonumber\\
\qquad {}= \mathcal{L}_{23}(q,u_2)\mathcal{L}_{13}(q-2\eta h_2,u_1)
R_{12}(q,u_{12}).\label{RLL}
\end{gather}
$\mathcal{L}(q,u)$ is also of zero weight
\[
\left[h_V \otimes \mathbbm{1}+\mathbbm{1} \otimes h_W , \mathcal{L}_{V,W}(q,u)\right]=0 \qquad (h \in \mathfrak{h}),
\]
where the subscripts remind the careful reader that in this formula $h$ might act in a dif\/ferent way on spaces
$W$ and $V$.
An example is given forthwith by $W=V$ and $\mathcal{L}(q,u)=R(q,u-z)$ which is called the fundamental
representation with evaluation point $z$.
A tensor product of representations can also be def\/ined which corresponds to the existence of a coproduct-like
structure at the abstract algebraic level. Let $(\mathcal{L}(q,u),X)$ and $(\mathcal{L}'(q,u),Y)$
be two $E_{\tau,\eta}(so_3)$
modules; then $\left(\mathcal{L}_{1X}(q-2\eta h_Y,u)\mathcal{L}'_{1Y}(q,u),\, X\otimes Y\right)$ is a
representation of $E_{\tau,\eta}(so_3)$ on
$X \otimes Y$ endowed, of course, with the tensor product $\mathfrak{h}$-module structure.
The operator $\mathcal{L}$ is reminiscent of the quantum Lax matrix in the FRT formulation
of the quantum inverse scattering
method, although it obeys a dif\/ferent exchange relation,
therefore we will also call it a Lax matrix. This allows us to view
the $\mathcal{L}$ as a matrix with operator-valued entries.
Inspired by that interpretation, for any module over $E_{\tau,\eta}(so_3)$ we def\/ine the corresponding
\textit{operator algebra}.
Let us take an arbitrary representation $\mathcal{L}(q,u) \in \mathrm{End}(V \otimes W)$.
The elements of the operator algebra corresponding to this representation will act on the space $\mathrm{Fun}(W)$ of
meromorphic functions of $q$ with values in $W$. Namely let $L \in \mathrm{End}(V \otimes \mathrm{Fun}(W))$
be the operator def\/ined as
\[
L(u)=\left( \begin{array}{ccc}
A_1(u)& B_1(u)& B_2(u)\\
C_1(u) & A_2(u) & B_3(u)\\
C_2(u) & C_3(u) &A_3(u)
\end{array}\right)=\mathcal{L}(q,u)e^{-2\eta h \partial_q}.
\]
We can view it as a matrix with entries in $\mathrm{End}(\mathrm{Fun}(W))$.
It follows from equation (\ref{RLL}) that $L$
verif\/ies:
\begin{gather}\label{RLLti}
R_{12}(q-2\eta h,u_{12}) L_{1W}(q,u_1) L_{2W}(q,u_2)= L_{2W}(q,u_2)L_{1W}(q,u_1)
\tilde{R}_{12}(q,u_{12})
\end{gather}
with $\tilde{R}_{12}(q,u):= \exp(2\eta(h_1+h_2)\partial_q)R_{12}(q,u)\exp(-2\eta(h_1+h_2)\partial_q)$.
The zero weight condition on $L$ yields the relations:
\begin{gather*}
\left[h,A_i\right]=0 ,\qquad \left[h,B_j\right]=-B_j \quad (j=1,3), \qquad \left[h,B_2\right]=-2B_2,\\
\left[h,C_j\right]=C_j \quad (j=1,3), \qquad \left[h,C_2\right]=2C_2,
\end{gather*}
so $B_i$'s act as lowering and $C_i$'s as raising operators.
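Indeed, writing $L=\sum_{i,j}E_{ij}\otimes L_{ij}$ and noting that $e_i$ has weight $\mu_i$ under $h=E_{11}-E_{33}$, with $\mu=(1,0,-1)$, the zero weight condition is equivalent to
\[
[h,L_{ij}]=-(\mu_i-\mu_j)\,L_{ij},
\]
which reproduces, e.g., $[h,B_1]=[h,L_{12}]=-B_1$ and $[h,C_2]=[h,L_{31}]=2C_2$.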
And f\/inally the following theorem shows how to associate a family of commuting quantities to a representation
of the elliptic quantum group.
\begin{theorem}
Let $W$ be a representation of $E_{\tau,\eta}(so_3)$. Then the transfer matrix def\/ined by $t(u)={\rm Tr}\, L(u) \in
\mathrm{End}(\mathrm{Fun}(W))$
preserves the subspace $\mathrm{Fun}(W)[0]$ of functions with values in the zero weight subspace of $W$.
When restricted to this subspace, the transfer matrices at dif\/ferent values of the spectral parameter commute:
\[
\left[t(u),t(v)\right]=0.
\]
\end{theorem}
\begin{proof}
The proof is analogous to references \cite{FeVa3,ABB}.
\end{proof}
The eigenvalues of the transfer matrix can be found by means of the algebraic Bethe ansatz.
In the next section we develop the f\/irst steps in this direction.
\section{Construction of the Bethe state}
\subsection{The framework of the algebraic Bethe ansatz}
The above theorem tells us how to associate the transfer matrix to an arbitrary representation of the
dynamical quantum group. Our aim is to determine the spectrum of such a transfer matrix in the usual
sense of the Bethe ansatz techniques, i.e.\ to write the Bethe equations f\/ixing the eigenvalues.
In order for the algebraic Bethe ansatz to work, this representation must
be a highest weight representation, that is, possess a highest weight vector $|0\rangle$
(also called pseudovacuum) which is annihilated by the raising operators and is an eigenvector
of the diagonal elements of the quantum Lax matrix
\[
C_i(u)|0\rangle=0, \qquad A_i(u)|0\rangle=a_i(q,u)|0\rangle, \qquad i=1,2,3.
\]
Actually, any vector of the form $|\Omega \rangle = f(q)|0\rangle$ is also a highest weight vector of the representation
in question. This freedom in choosing the highest weight vector will prove essential in the sequel, so we
do not f\/ix the arbitrary function $f(q)$ for the moment. The preceding relations are modif\/ied as follows:
\begin{gather*}
C_i(u)|\Omega\rangle=0, \quad i=1,2,3, \qquad A_1(u)|\Omega\rangle=a_1(q,u)\frac{f(q-2\eta)}{f(q)}|\Omega\rangle,\\
A_2(u)|\Omega\rangle=a_2(q,u)|\Omega\rangle, \qquad A_3(u)|\Omega\rangle=a_3(q,u)\frac{f(q+2\eta)}{f(q)}|\Omega\rangle.
\end{gather*}
The representations obtained by tensorising the fundamental vector representation possess this highest weight
vector, and their transfer matrix is the transfer matrix of an IRF (interaction-round-a-face) model
with Boltzmann weights derived from the dynamical $R$-matrix (\ref{Rmat}). In this case we also have the property
that $a_1(q,u)$ does not depend on $q$; this is what we will assume in the sequel. Other representations are related
to Lam\'e equation or Ruijsenaars--Schneider Hamiltonians (see \cite{FeVa} for the $E_{\tau,\eta}(sl_2)$ case).
This is expected to happen in the $E_{\tau,\eta}(so_3)$ case, too, and we hope to report on progress in representations
and related models soon.
Once the pseudovacuum is f\/ixed, one looks for eigenvectors of the form:
\[
\Phi_n(u_1,\ldots,u_n)|\Omega\rangle
\]
under some simple (symmetry) assumptions on the lowering operator $\Phi_n$.
In the XXZ model, or for $E_{\tau,\eta}(sl_2)$, $\Phi_n$ is a simple product of the only lowering operator
$B(u)$. We will explain later, in analogy with the Izergin--Korepin model,
why $\Phi_n$ is not that simple in the $E_{\tau,\eta}(so_3)$ case.
The main result of this paper is the construction of $\Phi_n$
(the Bethe state) for the $E_{\tau,\eta}(so_3)$ dynamical
quantum group under simple assumptions.
Finally, one calculates the action of the transfer matrix on the Bethe state. This will yield
3 kinds of terms. The f\/irst part (usually called wanted terms in the literature)
will tell us the eigenvalue of the transfer matrix, the second part (called unwanted terms) must be annihilated
by a careful choice of the spectral parameters $u_i$ in $\Phi_n(u_1,\ldots,u_n)$; the vanishing of these unwanted
terms is ensured if the $u_i$ are solutions to the so called Bethe equation. The third part contains terms
ending with a raising operator acting on the pseudovacuum and thus vanishes. We hope to report soon on
the form of the Bethe equations and eigenvalues, too.
We now proceed with the second step and write the recurrence relation def\/ining $\Phi_n$.
We thus assume that a representation with a highest weight vector (pseudovacuum) already exists.
\subsection{The creation operators}
We explicitly write the commutation relations coming from the $RLL$ relations (\ref{RLLti})
which will be used in the construction of the Bethe state
\begin{gather}
B_1(u_1)B_1(u_2)=\omega_{21}\left(B_1(u_2)B_1(u_1)-\frac{1}{y_{21}(q)}B_2(u_2)A_1(u_1)\right)+
\frac{1}{y_{12}(q)}B_2(u_1)A_1(u_2), \label{crB1B1} \\
A_1(u_1)B_1(u_2)=z_{21}(q)B_1(u_2)A_1(u_1)-\frac{\alpha_{21}(\eta,q)}{\beta_{21}(q,\eta)}B_1(u_1)A_1(u_2),
\label{crB1B1+} \\
A_1(u_1)B_2(u_2)=\frac{1}{\gamma_{21}(q,-q)}\left( g_{21}B_2(u_2)A_1(u_2)+\gamma_{21}(\eta,-q)B_1(u_1)B_1(u_2) \nonumber
\right.\\
\left.\phantom{A_1(u_1)B_2(u_2)=}{} -\delta_{21}(-q)B_2(u_1)A_1(u_1)\right), \label{crB1B1++}\\
B_1(u_2)B_2(u_1)=\frac{1}{g_{21}}\left( \beta_{21}(-q,\eta)B_2(u_1)B_1(u_2)+\alpha_{21}(\eta,-q)B_1(u_1)B_2(u_2)\right),
\label{crB1B1+++}
\\
B_2(u_2)B_1(u_1)=\frac{1}{g_{21}}\left( \beta_{21}(\eta,-q)B_1(u_1)B_2(u_2)
+\alpha_{21}(-q,\eta)B_2(u_1)B_1(u_2)\right),\label{crB2B1}
\end{gather}
where
\begin{gather*}
\omega(q,u)=\frac{\varepsilon(q,-u) \gamma(q,-q,-u)+\gamma(q,\eta,-u)\gamma(\eta,-q,-u)}{g(-u)\gamma(q,-q,-u)},\\
y(q,u)=\frac{\gamma(q,-q,u)}{\gamma(q,\eta,u)},\qquad
z(q,u)=\frac{g(u)}{\beta(q,\eta,u)}
\end{gather*}
and as usual
\[
y_{12}(q)=y(q,u_1-u_2) \quad \textrm{etc}.
\]
\begin{remark}Furthermore, the function $\omega(q,u)$ is actually independent of $q$, a property which will prove important later on,
and takes the following simple form:
\begin{gather}\label{omeg}
\omega(u)=\frac{\vartheta(u+\eta)\vartheta(u-2\eta)}{\vartheta(u-\eta)\vartheta(u+2\eta)}.
\end{gather}
\end{remark}
This identity can be proved by looking at transformation properties under $u\rightarrow u+1$, $u\rightarrow u+\tau$
of both sides of \eqref{omeg}.
\begin{remark} Notice also that $\omega(u)\omega(-u)=1$.
\end{remark}
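This follows at once from \eqref{omeg}, since the series for $\vartheta$ shows that it is odd, $\vartheta(-x)=-\vartheta(x)$:
\[
\omega(-u)=\frac{\vartheta(-u+\eta)\vartheta(-u-2\eta)}{\vartheta(-u-\eta)\vartheta(-u+2\eta)}
=\frac{\vartheta(u-\eta)\vartheta(u+2\eta)}{\vartheta(u+\eta)\vartheta(u-2\eta)}=\frac{1}{\omega(u)}.
\]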
Now we turn to the construction of the Bethe state. In the application of algebraic Bethe ansatz
to the $E_{\tau,\eta}(sl_2)$ elliptic quantum group the algebra contains a generator
(usually also denoted by $B(u)$) which acts as a creation operator. It also
enjoys the property $B(u)B(v)=B(v)B(u)$. This allows for the straightforward construction of the creation
operators $\Phi_n$ as
\[
\Phi_n(u_1,\ldots,u_n)=B(u_1)B(u_2)\cdots B(u_n),
\]
since we immediately have the property
\[
\Phi_n(u_1,\ldots,u_n)=\Phi_n(u_1,\ldots,u_{i-1},u_{i+1},u_i,u_{i+2},\ldots,u_n), \qquad i=1,2,\ldots,n-1.
\]
As it turns out, in the $E_{\tau,\eta}(so_3)$ case the creation operators are not
simple functions of the Lax matrix entries but they are complicated
functions of three generators $A_1(u)$, $B_1(u)$, $B_2(u)$ in the elliptic operator algebra.
This situation is analogous to that of the Izergin--Korepin model as described by Tarasov in~\cite{Ta}.
We give the following def\/inition for the creation operator.
\begin{definition}
Let $\Phi_n$ be def\/ined by the recurrence relation, for $n\geq 2$:
\begin{gather*}
\Phi_n(u_1,\ldots,u_n)=B_1(u_1)\Phi_{n-1}(u_2,\ldots, u_n)\\
\phantom{\Phi_n(u_1,\ldots,u_n)=}{} -\sum_{j=2}^n\frac{\prod\limits_{k=2}^{j-1}\omega_{jk}}{y_{1j}(q)}
\prod_{\substack{k=2\\k\neq j}}^n z_{kj}(q+2\eta)\ B_2(u_1) \Phi_{n-2}(u_2,\ldots,\widehat{u_j},\ldots,u_n)A_1(u_j),
\end{gather*}
where $\Phi_0=1$, $\Phi_1(u_1)=B_1(u_1)$, and the hat means that the corresponding argument is omitted.
\end{definition}
It may be useful to give explicitly the f\/irst three creation operators
\begin{gather*}
\Phi_1(u_1)=B_1(u_1),\\
\Phi_2(u_1,u_2)=B_1(u_1)B_1(u_2)-\frac{1}{y_{12}(q)}B_2(u_1)A_1(u_2),\\
\Phi_3(u_1,u_2,u_3)=B_1(u_1)B_1(u_2)B_1(u_3)-\frac{1}{y_{23}(q)}B_1(u_1)B_2(u_2)A_1(u_3)\\
\phantom{\Phi_3(u_1,u_2,u_3)=}{}-\frac{z_{32}(q+2\eta)}{y_{12}(q)}B_2(u_1)B_1(u_3)A_1(u_2)-\frac{\omega_{32}z_{23}(q+2\eta)}{y_{13}(q)}B_2(u_1)
B_1(u_2)A_1(u_3).
\end{gather*}
The Bethe vector is then not completely symmetric under the interchange of two neighboring spectral parameters
but verif\/ies the following property instead:
\begin{gather*}
\Phi_2(u_1,u_2)=\omega_{21}\Phi_2(u_2,u_1),\\
\Phi_3(u_1,u_2,u_3)=\omega_{21}\Phi_3(u_2,u_1,u_3)=\omega_{32}\Phi_3(u_1,u_3,u_2).
\end{gather*}
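As a quick consistency check, the case $n=2$ follows in one line from \eqref{crB1B1}: inserting the exchange relation into $\Phi_2$, the $\frac{1}{y_{12}(q)}B_2(u_1)A_1(u_2)$ terms cancel and
\begin{gather*}
\Phi_2(u_1,u_2)=B_1(u_1)B_1(u_2)-\frac{1}{y_{12}(q)}B_2(u_1)A_1(u_2)\\
\qquad{}=\omega_{21}\left(B_1(u_2)B_1(u_1)-\frac{1}{y_{21}(q)}B_2(u_2)A_1(u_1)\right)=\omega_{21}\Phi_2(u_2,u_1).
\end{gather*}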
For general $n$ we prove the following theorem.
\begin{theorem}
$\Phi_n$ verif\/ies the following symmetry property:
\begin{gather}\label{symm}
\Phi_n(u_1,\ldots,u_n)=\omega_{i+1,i}\Phi_n(u_1,\ldots,u_{i-1},u_{i+1},u_i,u_{i+2},\ldots,u_n),
\qquad i=1,2,\ldots,n-1.\!\!\!
\end{gather}
\end{theorem}
\begin{proof}
The proof is by induction on $n$. The symmetry property is immediately proved for $i\neq 1$.
To verify for $i=1$, we have
to expand $\Phi_n$ by one more induction step:
\begin{gather*}
\Phi_n(u_1,\ldots,u_n)=B_1(u_1)B_1(u_2)\Phi_{n-2}(u_3,\ldots,u_n)\\
\qquad{}-\frac{\prod\limits_{k=3}^n z_{k2}(q+2\eta)}{y_{12}(q)}
B_2(u_1)\Phi_{n-2}(u_3,\ldots,u_n)A_1(u_2) \nonumber\\
\qquad{}-\sum_{j=3}^n \frac{\prod\limits_{k=3}^{j-1}\omega_{jk}}{y_{2j}(q)}
\prod_{\substack{k=3\\k\neq j}}^n z_{kj}(q+2\eta)
B_1(u_1)B_2(u_2)\Phi_{n-3}(u_3,\ldots,\widehat{u_j},\ldots,u_n)A_1(u_j) \nonumber\\
\qquad{}-\sum_{j=3}^n \frac{\omega_{j2}z_{2j}(q)\prod\limits_{k=3}^{j-1}
\omega_{jk}}{y_{1j}(q)}\prod_{\substack{k=3\\k\neq j}}^n
z_{kj}(q+2\eta) B_2(u_1)B_1(u_2)\Phi_{n-3}(u_3,\ldots,\widehat{u_j},\ldots,u_n) A_1(u_j) \nonumber\\
\qquad{}+\sum_{3\leq l \leq j \leq n}\left[ \frac{\omega_{j2}z_{2j}(q+2\eta)z_{lj}(q+2\eta)}{y_{1j}(q)y_{2l}(q+2\eta)}
+\omega_{lj}\frac{\omega_{l2}z_{2l}(q+2\eta)z_{jl}(q+2\eta)}{y_{1l}(q)y_{2j}(q+2\eta)} \right]\nonumber \\
\qquad{}\times \prod_{k=3}^{j-1}
\omega_{jk}\prod_{k=3}^{l-1}\omega_{lk}
\prod_{\substack{k=3\\k \neq j, l}}^n z_{kj}(q+4\eta)z_{kl}(q+4\eta)
\frac{\vartheta(q+\eta)^2}{\vartheta(q-\eta)\vartheta(q+3\eta)} \nonumber\\
\qquad{}\times B_2(u_1)B_2(u_2)
\Phi_{n-4}(u_3,\ldots,\widehat{u_l},\ldots, \widehat{u_j},\ldots,u_n)A_1(u_l)A_1(u_j) \nonumber
\end{gather*}
then substitute into (\ref{symm})
and bring the right-hand side to normal order of the
spectral parameters by using relations \eqref{crB1B1}--\eqref{crB2B1}. We f\/ind then that property (\ref{symm}) is
fulf\/illed provided the following identities hold true:
\begin{gather*}
-\frac{\omega_{12}g_{21}}{y_{23}(q)\beta_{21}(\eta,-q)}+\frac{\alpha_{21}(\eta,-q)}{\beta_{21}(\eta,-q)y_{13}(q)}=
-\frac{\omega_{31}z_{13}(q+2\eta)}{y_{23}(q)}-\frac{\alpha_{31}(\eta,q+2\eta)}{\beta_{31}(q+2\eta,\eta)y_{21}(q)}
\end{gather*}
and
\begin{gather*}
\omega_{12}\left(\frac{\omega_{42}z_{24}(q+2\eta)z_{34}(q+2\eta)}{y_{14}(q)y_{23}(q+2\eta)}+
\omega_{34}\frac{\omega_{32}z_{23}(q+2\eta)z_{43}(q+2\eta)}{y_{13}(q)y_{24}(q+2\eta)} \right) \\
\qquad{}-\left( \frac{\omega_{41}z_{14}(q+2\eta)z_{34}(q+2\eta)}{y_{24}(q)y_{13}(q+2\eta)}+\frac{\omega_{34}\omega_{31}
z_{13}(q+2\eta)z_{43}(q+2\eta)}{y_{23}(q)y_{14}(q+2\eta)}\right) \\
\qquad{}+\frac{\omega_{12}}{y_{12}(q)}\left( \frac{\delta_{42}(-q-2\eta)}{\gamma_{42}(q+2\eta,-q-2\eta)y_{43}(q)}+
\frac{z_{42}(q+2\eta)\alpha_{32}(\eta,q+2\eta)\omega_{24}}{\beta_{32}(q+2\eta,\eta)y_{24}(q+2\eta)} \right) \\
\qquad{}-\frac{1}{y_{21}(q)}\left(\frac{\delta_{41}(-q-2\eta)}{\gamma_{41}(q+2\eta,-q-2\eta)y_{43}(q)}+\frac{z_{41}(q+2\eta)
\alpha_{31}(\eta,q+2\eta)\omega_{14}}{\beta_{31}(q+2\eta,\eta)y_{14}(q+2\eta)}
\right)=0.
\end{gather*}
These identities can be verif\/ied by tedious calculations using once again the quasiperiodicity pro\-per\-ties of
the $\vartheta$-function.
\end{proof}
\begin{remark} $\Phi_n$ contains the more familiar string of $B_1(u_1)\cdots B_1(u_n)$ with coef\/f\/icient $1$.
\end{remark}
It is straightforward to check the following relations using the commutation relations \eqref{RLLti}
\begin{gather*}
t(u)\Phi_1(u_1)= z_{1u}(q)B_1(u_1)A_1(u)-\frac{\alpha_{1u}(\eta,q)}{\beta_{1u}(q,\eta)}B_1(u)A_1(u_1)\\
\qquad{}+\frac{z_{u1}(q)}{\omega_{u1}}B_1(u_1)A_2(u)-\frac{\alpha_{u1}(q,\eta)}{\beta_{u1}(q,\eta)}B_1(u)A_2(u_1)
+\frac{1}{y_{u1}(q)}
B_3(u)A_1(u_1)\\
\qquad{}+\frac{\beta_{u1}(\eta,-q)}{\gamma_{u1}(q,-q)}B_1(u_1)A_3(u)-\frac{\gamma_{u1}(q,\eta)}{\gamma_{u1}(q,-q)}B_3(u)A_2(u_1),
\end{gather*}
for $n=2$
\begin{gather*}
t(u)\Phi_2(u_1,u_2)= z_{1u}(q)z_{2u}(q)\Phi_2(u_1,u_2)A_1(u)+\frac{z_{u1}(q)z_{u2}(q-2\eta)}{\omega_{u1}\omega_{u2}}
\Phi_2(u_1,u_2)A_2(u)\\
{}+\frac{\beta_{u1}(\eta,-q)\beta_{u2}(\eta,-q)}{\gamma_{u1}(q-2\eta,-q+2\eta)\gamma_{u2}(q-2\eta,-q+2\eta)}
\Phi_2(u_1,u_2)A_3(u)\\
{}+\left(-\frac{z_{1u}(q)\alpha_{2u}(\eta,q)}{\beta_{2u}(q,\eta)}+\frac{\alpha_{1u}(\eta,q)\alpha_{21}(\eta,q)\omega_{1u}}
{\beta_{1u}(q,\eta)\beta_{21}(q,\eta)}
-\frac{\gamma_{1u}(\eta,-q)}{y_{12}(q-2\eta)\gamma_{1u}(q,-q)}\right)\Phi_2(u_1,u)A_1(u_2)\\
{}+\left(-\frac{z_{u1}(q)\alpha_{u2}(q-2\eta,\eta)}{\omega_{u1}\beta_{u2}(q-2\eta,\eta)}+
\frac{\alpha_{u1}(q,\eta)\alpha_{12}(q-2\eta,\eta)}{\beta_{u1}(q,\eta)\beta_{12}(q-2\eta,\eta)\omega_{u1}} \right)
\Phi_2(u_1,u)A_2(u_2)\\
{}-\frac{z_{21}(q)\alpha_{1u}(\eta,q)}{\beta_{1u}(q,\eta)}\Phi_2(u,u_2)A_1(u_1)- \frac{\alpha_{u1}(q,\eta)z_{12}(q-2\eta)}
{\beta_{u1}(q,\eta)\omega_{12}}\Phi_2(u,u_2)A_2(u_1)\\
+\left(-\frac{z_{21}(q)\alpha_{1u}(\eta,q)}{y_{u2}(q)\beta_{1u}(q,\eta)}+\frac{\alpha_{1u}(\eta,q)\alpha_{21}(\eta,q)}
{\beta_{1u}(q,\eta)\beta_{1u}(q,\eta)y_{u1}(q)}
-\frac{\gamma_{1u}(\eta,-q)}{\gamma_{1u}(q,-q)y_{12}(q-2\eta)y_{u1}(q)}\right.\\
\left.{}+\frac{\delta_{1u}(-q)}{\gamma_{1u}(q,-q)y_{12}(q-2\eta)}
\right)B_2(u)A_1(u_1)A_1(u_2)\\
{}+\left( \frac{z_{u1}(q)\alpha_{u1}(q,\eta)}{\omega_{u1}y_{u2}(q)\beta_{u1}(\eta,-q)}-\frac{\alpha_{u1}(q,\eta)g_{12}}
{\beta_{u1}(q,\eta)\omega_{12}y_{u2}(q)\beta_{12}(q+2\eta,\eta)}+\frac{\alpha_{u1}(q,\eta)\alpha_{12}(q+2\eta,\eta)}
{\beta_{u1}(q,\eta)y_{u1}(q)\beta_{12}(q+2\eta,\eta)}\right.\!\!\\
\left.{}- \frac{\alpha_{u1}(q,\eta)\alpha_{u1}(\eta,-q)}{y_{12}(q)\beta_{u1}(\eta,-q)
\beta_{u1}(q,\eta)} \right)B_2(u)A_2(u_1)A_1(u_2)\\
{}+\frac{1}{\gamma_{u1}(q,-q)}\left(\frac{\delta_{u1}(q)}{y_{12}(q-2\eta,-q+2\eta)}-\frac{\alpha_{u1}(q,\eta)}{y_{u2}(q-2\eta)}
\right)B_2(u)A_2(u_1)A_2(u_2)\\
{}+\left( \frac{g_{u1}}{\beta_{u1}(\eta,-q)\omega_{u1}y_{u2}(q)}-\frac{\alpha_{21}(\eta,q+2\eta)}{y_{u1}(q)
\beta_{21}(q+2\eta,\eta)}
-\frac{\alpha_{u1}(\eta,-q)}{y_{12}(q)\beta_{u1}(\eta,-q)}\right)B_3(u)B_1(u_1)A_1(u_2)\\
{}+\frac{\alpha_{12}(q,\eta)}{y_{u1}(q,-q)\beta_{12}(q,\eta)}B_3(u)B_1(u_1)A_2(u_2)\\
{}+\frac{z_{21}(q+2\eta)}{y_{u1}(q)}B_3(u)B_1(u_1)A_2(u_1)-\frac{z_{12}(q)}{y_{u1}(q)\omega_{12}}B_3(u)B_1(u_2)A_2(u_1)\\
{}+\textrm{terms ending with}\ C.
\end{gather*}
The cancellation of the unwanted terms is ensured for $n=2$ if $u_1$ and $u_2$ are solutions of the following
Bethe equations:
\begin{gather*}
\frac{a_1(u_i)}{a_2(u_i,q)}=\prod_{\substack{j=1\\j\neq i}}^{2}\frac{\vartheta(u_{ij}-\eta)}{\vartheta(u_{ij}+\eta)}\times
\frac{\vartheta(q-3\eta)^2 }{\vartheta(q-\eta)\vartheta(q-5\eta)}\times \frac{f(q)}{f(q-2\eta)}, \qquad i=1,2.
\end{gather*}
The role of the function $f(q)$ now becomes clear: it has to be chosen so as to eliminate
the $q$-dependence from the Bethe equations.
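Indeed, the $q$-dependence disappears for any solution of the functional equation
\begin{gather*}
\frac{f(q)}{f(q-2\eta)}=\frac{\vartheta(q-\eta)\vartheta(q-5\eta)}{\vartheta(q-3\eta)^2},
\end{gather*}
as then the product of the last two factors on the right-hand side equals one.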
These results suggest that $\Phi_n$ for all $n$ are the correct choice of creation operators of the
corresponding Bethe states. The complete proof of the general case will be given elsewhere.
\section{Conclusion}
In this paper we def\/ined the elliptic quantum group $E_{\tau,\eta}(so_3)$ along the lines described
by Felder in \cite{Fe}. Although dynamical, the $R$-matrix appearing in the exchange relations
has a matrix form similar to that of the Izergin--Korepin model. The Lax operator, the operator algebra and the families
of commuting transfer matrices are def\/ined in complete analogy with the $E_{\tau,\eta}(sl_2)$ case.
Our aim was to apply the algebraic Bethe ansatz method in this setting. We have obtained a~recurrence
relation for the creation operators and have proved that these operators have a~certain symmetry property
under the interchange of two adjacent spectral parameters. Both the form of the recurrence relation and
this symmetry property are an elliptic generalization of Tarasov's results in~\cite{Ta}. Finally,
we have obtained the Bethe equations and the eigenvalues for the 2-magnon state.
The Bethe
equations for the general $n$-magnon state
will be published later~\cite{MaNa2}.
\subsection*{Acknowledgements}
We wish to thank Petr
Kulish for illuminating discussions.
We are also grateful to the organizers of the GEOMIS workshop in Coimbra for letting
us present these results. This work was supported by the project POCI/MAT/58452/2004;
in addition, Z.~Nagy benef\/ited from
the FCT grant SFRH/BPD/25310/2005. The manuscript was f\/inished
in the hospitable environment of the Solvay Institute and ULB, Brussels.
\pdfbookmark[1]{References}{ref}
\section{Introduction}\label{sec1}
The design of static output-feedback controllers is a conceptually simple and yet theoretically very challenging problem. It is also a popular design approach of practical interest due to its straightforward implementation and the fact that, typically, only some (and not all) states of the underlying dynamical system are available for control.
Moreover, it is well-known that the synthesis of dynamic (fixed-order) output-feedback controllers can also be formulated as a static output-feedback problem based on a suitable augmentation of the underlying system \cite{ElOus97}.
However, in contrast to, e.g., the design of static state-feedback or dynamic full-order controllers, the synthesis of static output-feedback controllers is intrinsically a challenging bilinear matrix inequality (BMI) feasibility problem. Such problems are in general non-convex, non-smooth and NP-hard to solve \cite{TokOez95}.
The lack of convexity and the typically rather complex optimization landscape challenge even dedicated algorithms as the one, e.g., of \cite{AguApk18}, which tries to solve the underlying optimization directly without relying on matrix inequalities.
So far there are no techniques available to directly synthesize static controllers by means of convex optimization and heuristic approaches are employed instead, which only yield sufficient conditions for the existence of such controllers. Next to providing only sufficient conditions, another downside of such approaches is that they might get stuck in a local minimum of the underlying optimization problem that can be far away from the global minimum of interest.
Such approaches are nevertheless used and reported to work nicely on various practical examples. Two surveys on static output-feedback design presenting several of such approaches are given in \cite{SyrAbd97, SadPea16}.
Essentially the same difficulties arise for the synthesis of (static) robust output-feedback controllers, which is of tremendous relevance in practice as employed models never match the real system to be controlled; hence, this calls for the design of controllers that are capable to deal with the resulting discrepancies.
\vspace{1ex}
In this paper we present and extend the dual iteration, which is one such heuristic method. It was introduced in \cite{Iwa97, Iwa99} and developed for the design of stabilizing static output-feedback controllers for systems unaffected by uncertainties.
We elaborate in detail on the individual steps of this procedure for the design of static output-feedback $H_\infty$-controllers for linear time-invariant systems. In particular, we demonstrate that those steps are algebraic consequences of a general version of the elimination lemma as given, e.g., in \cite{Hel99}.
As the latter lemma is very powerful and a flexible tool for controller design, which works perfectly well in tandem with the framework of linear fractional representations~\cite{ZhoDoy96, SchWei00, Hof16} (LFRs), it is natural that the dual iteration generalizes to a variety of challenging non-convex synthesis problems beyond the design of static stabilizing controllers as considered in \cite{Iwa97, Iwa99}.
As an illustration, we show that it is possible to seamlessly extend the dual iteration to static generalized $H_2$-, robust $H_\infty$- and robust gain-scheduled $H_\infty$-design in the case that only output measurements are available for control; for the generalized $H_2$-design we employ typical conditions to ensure that the direct feed-through vanishes, while for the robust designs we consider arbitrarily time-varying parametric uncertainties and rely on integral quadratic constraints~\cite{MegRan97} (IQCs) with constant multipliers.
Unfortunately, the elimination lemma does not apply for interesting design problems as such with multiple objectives where it would as well be desirable to have procedure for static and/or robust design.
To this end, we provide a control theoretic interpretation of the individual steps of the dual iteration which does not involve the elimination lemma and instead builds on \cite{HolSch19}. In \cite{HolSch19}, we developed a heuristic approach for robust output-feedback design that was motivated by the well-known separation principle. The latter approach constitutes the consecutive solution of a full-information design problem and another design problem with a structure that resembles robust estimation. We found that the latter is directly linked to the primal step of the dual iteration.
Based on this interpretation, the dual iteration is capable to deal with situations where elimination is not possible. As a demonstration, we consider the multi-objective design of static output-feedback $H_\infty$-controllers, which ensure that the closed-loop poles are located in an a priori specified generalized stability region defined by a linear matrix inequality (LMI).
\vspace{1ex}
\noindent\textit{Outline.}
The remainder of the paper is organized as follows. After a short paragraph on notation, we recall in full detail the dual iteration for static output-feedback $H_\infty$-design in Section \ref{SHI::sec::shi}. A novel control theoretic interpretation of the iteration's ingredients is provided in Section \ref{SHI::sec::interpretation}.
We point out opportunities of the latter interpretation by extending the dual iteration to the static output-feedback design of $H_\infty$-controllers that ensure that the closed-loop poles are contained in an a priori specified stability region in Section \ref{SGS::sec::sgs}.
In Section \ref{RS::sec::rs} we show that the dual iteration is not limited to precisely known systems by considering the practically highly relevant synthesis of robust output-feedback controllers for systems affected by arbitrarily time-varying uncertainties. Moreover, we comment in that section on further extensions of the iteration to deal, e.g., with the challenging synthesis of robust gain-scheduling controllers.
All of the previously mentioned design problems are demonstrated in terms of numerous numerical examples inspired from the literature including a challenging missile autopilot design. Finally, several key auxiliary results are given in the appendix.
\vspace{1ex}
\noindent\textit{Notation.}
For $\circ \in \{<, \leq, =, \geq , >\}$ we use $\C_\circ := \{z\in \C~|~\Re(z)~\circ~0\}$, $\D_\circ := \{z \in \C~|~ |z| \circ 1 \}$, $\R_\circ := \C_\circ \cap \R$, $\C_\circ^\infty := \C_\circ \cup \{\infty \}$ and $\R_\circ^\infty := \R_\circ \cup \{\infty \}$.
Let
$L_2^n\! :=\! \{x \!\in\! L_{2e}^n: \|x\|^2_{L_2} \!:=\! \int_0^\infty \!x(t)^T\! x(t)\,dt \!<\! \infty \}$ where $L_{2e}^n$ is the space of locally square integrable functions $x:[0, \infty) \to \R^n$.
$\rli^{m \times n}$ ($\rhi^{m \times n}$) is the space of real rational $m \times n$ matrices without poles in $\C_=^\infty$ ($\C_\geq^\infty$) and equipped with the maximum norm $\|\cdot\|_\infty$.
If $G(s) = D + C(sI - A)^{-1}B$, we write $G=[A, B, C, D]$,
$G_{\ss} = \smat{A & B \\ C & D}$ and use $G^\ast = [-A^T, C^T, -B^T, D^T]$ as well as $-G^\ast = [-A^T, -C^T, -B^T, -D^T]$.
For matrices $X_1, \dots, X_N$, $X, P$ we further employ the abbreviations
\begin{equation*}
\diag(X_1, \dots, X_N) := \mat{ccc}{X_1 & & 0 \\ & \ddots & \\ 0 & & X_N}
\teq{ and }
\Ls(X, P, G_{\ss}) := \mat{cc}{I & 0 \\ A & B \\ \hline C & D}^T \mat{c|c}{X & 0 \\ \hline 0 & P}\mat{cc}{I & 0 \\ A & B \\ \hline C & D}.
\end{equation*}
Finally, objects that can be inferred by symmetry or are not relevant are indicated by ``$\bullet$''.
\section{Static Output-Feedback $H_\infty$-Design}\label{SHI::sec::shi}
In this section we recall the essential features of the dual iteration for static output-feedback design as proposed in \cite{Iwa97, Iwa99} in full detail. In contrast to \cite{Iwa97, Iwa99} we directly include an $H_\infty$-performance criterion and also briefly consider generalized $H_2$-performance at the end of this section. Most importantly, we give control theoretic interpretations of the individual steps of the iteration. The latter allow for very interesting extensions as exemplified in the next section.
We begin by very briefly recalling the underlying definitions and analysis results.
\subsection{Analysis}
For some real matrices of appropriate dimensions and initial conditions $x(0) \in \R^n$, we consider the system
\begin{equation}
\arraycolsep=3pt
\mat{c}{\dot x(t) \\ e(t)} = \mat{cc}{A & B \\ C & D} \mat{c}{x(t) \\ d(t)}, \quad
\label{SHI::eq::sys}
\end{equation}
for $t\geq 0$; here, $d\in L_2$ is a generalized disturbance and $e$ is the performance output desired to be small w.r.t. its $L_2$-norm.
The energy gain of the system \eqref{SHI::eq::sys}, which coincides with the $H_\infty$-norm of \eqref{SHI::eq::sys}, is defined in a standard fashion as follows.
\begin{definition}
\label{SHI::def::stab}
The system \eqref{SHI::eq::sys} is said to admit an energy gain smaller than $\ga>0$ if $A$ is Hurwitz and there exists an $\eps > 0$ such that
\begin{equation*}
\|e\|_{L_2}^2 \leq (\ga^2 - \eps) \|d\|_{L_2}^2
\teq{for all}d \in L_2
\teq{ and for }x(0) = 0.
\end{equation*}
The energy gain of the system \eqref{SHI::eq::sys} is the infimal $\ga > 0$ such that the above inequality is satisfied.
\end{definition}
\vspace{1ex}
We have the following well-known analysis result, which is often referred to as the bounded real lemma (see, e.g., \cite[Section 2.7.3]{BoyGha94}) and which is a special case of the celebrated KYP lemma \cite{Ran96}.
\begin{lemma}
\label{SHI::lem::stab}
Let $P_\ga := \smat{I & 0 \\ 0 & -\ga^2 I}$ and $G(s) := C(sI - A)^{-1}B + D$ be the transfer matrix corresponding to \eqref{SHI::eq::sys}. Then the system \eqref{SHI::eq::sys} admits an energy gain smaller than $\ga$ if and only if there exists a symmetric matrix $X$ satisfying
\begin{equation}
\label{SHI::lem::lmi_stab}
X \cg 0
\teq{ and }
\Ls\left(\mat{cc}{0 & X \\ X & 0}, P_\ga, \mat{c}{G \\ I}_\ss \right) = \mat{cc}{I & 0 \\ A & B \\ \hline C & D \\ 0 & I}^T \mat{cc|c}{0 & X & 0 \\ X & 0 & 0 \\ \hline 0 &0 & P_\ga}\mat{cc}{I & 0 \\ A & B \\ \hline C & D \\ 0 & I} \cl 0.
\end{equation}
Moreover, $\|G\|_{\infty} = \sup_{\omega \in \R }\|G(i\omega)\|$ equals the infimal $\ga > 0$ such that the LMIs \eqref{SHI::lem::lmi_stab} are feasible.
\end{lemma}
In our opinion, the abbreviation $\Ls(\cdot, \cdot, \cdot)$ in \eqref{SHI::lem::lmi_stab} is particularly well-suited for capturing the essential ingredients of inequalities related to the KYP lemma. Thus we make use of it throughout this paper for brevity. The involved symmetric matrix $X$ is usually referred to as (KYP) certificate or as Lyapunov matrix.
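To illustrate how the analysis LMIs \eqref{SHI::lem::lmi_stab} can be evaluated in practice, the following minimal sketch minimizes $\ga^2$ subject to these LMIs for a small example; it assumes the availability of the Python packages numpy and cvxpy, and the system data and tolerances are our own illustrative choices rather than part of the original development.
\begin{verbatim}
# Minimal sketch of the bounded real lemma: minimize gamma^2 subject to
# X > 0 and Ls(X, P_gamma, [G; I]) < 0, written out block-wise.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # Hurwitz by inspection
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, m = B.shape

X = cp.Variable((n, n), symmetric=True)
g2 = cp.Variable()                          # plays the role of gamma^2
M = cp.bmat([[A.T @ X + X @ A + C.T @ C, X @ B + C.T @ D],
             [B.T @ X + D.T @ C,         D.T @ D - g2 * np.eye(m)]])
eps = 1e-6
prob = cp.Problem(cp.Minimize(g2),
                  [X >> eps * np.eye(n), M << -eps * np.eye(n + m)])
prob.solve()
print("estimated H-infinity norm:", np.sqrt(g2.value))
\end{verbatim}
Since the inequality is affine in $(X, \ga^2)$, this is a single semi-definite program.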
Certainly, controller design is much more interesting than analysis and is discussed next.
\subsection{Synthesis}\label{SHI::sec::synth}
\subsubsection{Problem Description}
For fixed real matrices of appropriate dimensions and initial conditions $x(0) \in \R^n$, we consider now the feedback interconnection
\begin{equation}
\arraycolsep=1pt
\mat{c}{\dot x(t) \\\hline e(t) \\ y(t)}
= \mat{c|cc}{A & B_1 & B_2 \\ \hline C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & 0}
\mat{c}{x(t) \\\hline d(t) \\ u(t)}
\label{SHI::eq::sys_of}
\end{equation}
for $t\geq 0$; here, $u$ is the control input and $y$ is the measured output.
Our main goal in this section is the design of a static output-feedback controller with description
\begin{equation}
u(t) = K y(t)
\label{SHI::eq::con_of2}
\end{equation}
for the system \eqref{SHI::eq::sys_of} such that the corresponding closed-loop energy gain is as small as possible. The latter closed-loop interconnection is described by
\begin{equation}
\arraycolsep=2pt
\mat{c}{\dot x(t) \\ e(t)}
= \mat{cc}{\Ac & \Bc \\
\Cc & \Dc }
\mat{c}{x(t)\\ d(t)}
\label{SHI::eq::cl_of}
\end{equation}
with $t\geq 0$ and standard calligraphic closed-loop matrices given by
\begin{equation*}
\mat{cc}{\Ac & \Bc \\ \Cc & \Dc}
= \mat{cc}{A + B_2KC_2 & B_1 + B_2KD_{21} \\ C_1 + D_{12}KC_2 & D_{11} + D_{12}KD_{21}}
= \mat{cc}{A & B_1 \\ C_1 & D_{11}} + \mat{c}{B_2 \\ D_{12}}K \mat{cc}{C_2 & D_{21}}.
\end{equation*}
A block diagram of the interconnection \eqref{SHI::eq::cl_of} is depicted in Fig.~\ref{SHI::fig::of}. Note that the latter closed-loop system is of the same form as \eqref{SHI::eq::sys} which allows for its analysis based on the bounded real lemma \ref{SHI::lem::stab}.
As usual, trouble arises through the simultaneous search for some certificate $X$ and a controller gain $K$, which is a non-convex BMI problem. As argued in the introduction, such problems are in general very difficult to solve numerically.
A remedy for a multitude of controller synthesis problems is a convexifying parameter transformation that has been proposed in \cite{MasOha98, Sch96b}. Another option is given by the elimination lemma as developed in \cite{Hel99, GahApk94}.
The latter lemma is well-known in the LMI literature, but since we will apply it frequently, we provide the result as Lemma \ref{RS::lem::elimination} together with a constructive proof in the appendix.
In particular, by directly using the elimination lemma on the closed-loop analysis LMIs, we immediately obtain the following well-known synthesis result.
\begin{figure}
\vspace{1ex}
\begin{center}
\includegraphics[]{SHI_OF}
\end{center}
\caption{Block diagram of the interconnection of the system \eqref{SHI::eq::sys_of} and the static controller \eqref{SHI::eq::con_of2}.}
\label{SHI::fig::of}
\end{figure}
\begin{theorem}
\label{SHI::theo::of}
Let $\smat{G_{11} & G_{12} \\ G_{21} & G_{22}}(s) = \smat{D_{11} & D_{12} \\ D_{21} & 0} + \smat{C_1 \\ C_2}(s I - A)^{-1}\smat{B_1 & B_2}$ be the transfer matrix corresponding to the system \eqref{SHI::eq::sys_of}. Further, let $V$ and $U$ be basis matrices of $\ker(C_2, D_{21})$ and $\ker(B_2^T, D_{12}^T)$, respectively. Then there exists a static controller \eqref{SHI::eq::con_of2} for the system \eqref{SHI::eq::sys_of} such that the analysis LMIs \eqref{SHI::lem::lmi_stab} are feasible for the corresponding closed-loop system if and only if there exists a symmetric matrix $X$ satisfying
\begin{equation*}
\arraycolsep=3pt
X \cg 0, \quad
V^T\Ls\left(\mat{cc}{0 & X \\ X & 0}, P_\ga, \mat{c}{G_{11} \\ I
}_{\ss} \right) V \cl 0
\teq{ and }
U^T\Ls\left(\mat{cc}{0 & X^{-1} \\ X^{-1} & 0}, P_\ga^{-1}, \mat{c}{I \\ -G_{11}^\ast}_{\ss} \right) U \cg 0.
\end{equation*}
Moreover, we have
\begin{equation*}
\begin{aligned}
\ga_\opt &:= \inf\left\{ \ga > 0~\middle|
\text{ There exists a static controller \eqref{SHI::eq::con_of2} s.th. the analysis
LMIs \eqref{SHI::lem::lmi_stab} are feasible for \eqref{SHI::eq::cl_of}}\right\} \\
&\phantom{:}= \inf\left\{ \ga > 0~\middle|
\text{ There exists some symmetric $X$ satisfying the above matrix inequalities}\right\}.
\end{aligned}
\end{equation*}
\end{theorem}
By the elimination lemma we are able to remove the controller gain $K$ from the analysis LMIs for the closed-loop system \eqref{SHI::eq::cl_of}. However, the variable $X$ now enters the above inequalities in a non-convex fashion and thus determining $\ga_\opt$ or computing a suitable static controller \eqref{SHI::eq::con_of2} are still very difficult tasks.
Note that this underlying non-convexity is not limited to the employed elimination based approach, but seems to be an intrinsic feature of the static controller synthesis problem.
Thus the latter problem is usually tackled by heuristic approaches and upper bounds on $\ga_\opt$ are computed.
In the sequel, we present the dual iteration which is a heuristic procedure based on iteratively solving convex semi-definite programs. We will argue that this iteration is especially useful if compared to other approaches such as the classical D-K iteration.
Its essential features are discussed next.
\subsubsection{Dual Iteration: Initialization}\label{SHI::sec::dual_init}
In order to initialize the dual iteration we propose a starting point that allows the computation of a lower bound on $\ga_\opt$, which can be a valuable indicator of how conservative the upper bounds on $\ga_\opt$ are that we generate later on.
This lower bound is obtained by the following observation. If there exists a static controller \eqref{SHI::eq::con_of2} for the system \eqref{SHI::eq::sys_of} achieving a closed-loop energy gain of $\ga$, then there also exists a dynamic controller achieving the same closed-loop energy gain. In general a (full-order) dynamic controller is described by
\begin{equation}
\mat{c}{\dot x_c(t) \\ u(t)}
= \mat{cc}{A^c & B^c \\
C^c & D^c} \mat{c}{x_c(t) \\ y(t)}
\label{SHI::eq::con_of3}
\end{equation}
for $t \geq 0$. Indeed, by simply choosing
\begin{equation*}
A^c = -I, \quad
B^c = 0, \quad
C^c = 0
\teq{ and }
D^c = K,
\end{equation*}
we observe that the energy gain of \eqref{SHI::eq::cl_of} is identical to the one of the closed-loop interconnection of the system \eqref{SHI::eq::sys_of} and the dynamic controller \eqref{SHI::eq::con_of3}. Note that the matrix $-I$ can be replaced by any other stable matrix.
It is well-known that the problem of finding a dynamic controller \eqref{SHI::eq::con_of3} for the system \eqref{SHI::eq::sys_of} is a convex optimization problem which has the following solution that is again obtained by applying the elimination lemma \ref{RS::lem::elimination}. A proof can also be found, e.g., in \cite{GahApk94}.
\begin{theorem}
\label{SHI::theo::gs}
Let $G_{ij}$, $U$ and $V$ be as in Theorem \ref{SHI::theo::of}. Then there exists a dynamic full-order controller \eqref{SHI::eq::con_of3} for the system \eqref{SHI::eq::sys_of} such that the analysis LMIs \eqref{SHI::lem::lmi_stab} are feasible for the corresponding closed-loop system if and only if there exist symmetric matrices $X$ and $Y$ satisfying
\begin{subequations}
\label{SHI::theo::eq::lmi_gs}
\begin{equation}
\arraycolsep=3pt
\mat{cc}{X & I \\ I & Y} \cg 0,\quad
V^T\Ls\left(\mat{cc}{0 & X \\ X & 0}, P_\ga, \mat{c}{G_{11} \\ I
}_{\ss} \right) V \cl 0
\teq{ and }
U^T\Ls\left(\mat{cc}{0 & Y \\ Y & 0}, P_\ga^{-1}, \mat{c}{I \\ -G_{11}^\ast
}_{\ss} \right) U \cg 0.
\tlabel{SHI::theo::eq::lmi_gsa}{SHI::theo::eq::lmi_gsb}{SHI::theo::eq::lmi_gsc}
\end{equation}
\end{subequations}
In particular, we have
\begin{equation*}
\ga_{\mathrm{dof}} \leq \ga_\opt
\end{equation*}
for $\ga_\mathrm{dof}$ being the infimal $\ga > 0$ such that the above LMIs are feasible.
\end{theorem}
In a standard fashion and by using the Schur complement on the LMI \eqref{SHI::theo::eq::lmi_gsc}, it is possible to solve the LMIs \eqref{SHI::theo::eq::lmi_gs} while simultaneously minimizing over $\ga$ in order to compute $\ga_\mathrm{dof}$. In particular, as the latter is a lower bound on $\ga_\opt$ it is not possible to find a static output-feedback controller with an energy gain smaller than $\ga_\mathrm{dof}$.
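A possible implementation of this computation is sketched below; the Schur complement mentioned above renders the $\ga^{-2}$-terms of \eqref{SHI::theo::eq::lmi_gsc} affine in $\ga^2$. The sketch assumes numpy, scipy and cvxpy, and the function names and tolerances are our own illustrative choices.
\begin{verbatim}
# Sketch of the dynamic full-order synthesis LMIs and the resulting lower
# bound gamma_dof; a Schur complement handles the gamma^{-2}-terms.
import numpy as np
import cvxpy as cp
from scipy.linalg import null_space

def dof_constraints(A, B1, B2, C1, C2, D11, D12, D21, X, Y, g2, eps=1e-7):
    n, md, pe = A.shape[0], B1.shape[1], C1.shape[0]
    V = null_space(np.hstack([C2, D21]))       # basis matrix of ker(C2, D21)
    U = null_space(np.hstack([B2.T, D12.T]))   # basis matrix of ker(B2^T, D12^T)
    # V^T Ls(X, P_gamma, [G11; I]) V < 0 written out explicitly
    Mx = cp.bmat([[A.T @ X + X @ A + C1.T @ C1, X @ B1 + C1.T @ D11],
                  [B1.T @ X + D11.T @ C1,       D11.T @ D11 - g2 * np.eye(md)]])
    # U^T Ls(Y, P_gamma^{-1}, [I; -G11^*]) U > 0 after a Schur complement
    N0 = cp.bmat([[-A @ Y - Y @ A.T, -Y @ C1.T],
                  [-C1 @ Y,          np.eye(pe)]])
    R = np.vstack([B1, D11])
    My = cp.bmat([[U.T @ N0 @ U, U.T @ R],
                  [R.T @ U,      g2 * np.eye(md)]])
    return [cp.bmat([[X, np.eye(n)], [np.eye(n), Y]]) >> eps * np.eye(2 * n),
            V.T @ Mx @ V << -eps * np.eye(V.shape[1]),
            My >> eps * np.eye(U.shape[1] + md)]

def gamma_dof(*sysdata):
    n = sysdata[0].shape[0]
    X = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((n, n), symmetric=True)
    g2 = cp.Variable()
    cp.Problem(cp.Minimize(g2), dof_constraints(*sysdata, X, Y, g2)).solve()
    return np.sqrt(g2.value)
\end{verbatim}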
\vspace{2ex}
As an intermediate step, note that we can easily design a static full-information controller $u = F\t y = (F_1, F_2)\t y$ for the system \eqref{SHI::eq::sys_of} such that the analysis LMIs \eqref{SHI::lem::lmi_stab} are feasible for the corresponding closed-loop system if the LMIs \eqref{SHI::theo::eq::lmi_gs} are feasible; here, the output measurements $y$ are replaced by the virtual measurements
\begin{equation*}
\arraycolsep=2pt
\t y = \t C_2 x + \t D_{21}d
\teq{ where } \mat{cc}{\t C_2 & \t D_{21} } = I
\end{equation*}
and the latter closed-loop interconnection is explicitly given by
\begin{equation}
\mat{c}{\dot x(t) \\ e(t)}
= \mat{cc}{A + B_2 F_1 & B_1 + B_2F_2 \\
C_1 + D_{12}F_1 & D_{11} + D_{12}F_2}
\mat{c}{x(t) \\ d(t)}
= \left(\mat{cc}{A& B_1 \\ C_1 & D_{11}}
+ \mat{c}{B_2 \\ D_{12}} F
\right)
\mat{c}{x(t) \\ d(t)}.
\label{SHI::eq::clF}
\end{equation}
Indeed, by applying the elimination lemma \ref{RS::lem::elimination} we immediately obtain the following convex synthesis result.
\begin{lemma}
\label{SHI::lem::full_info}
There exists some full-information gain $F$ such that the analysis LMIs \eqref{SHI::lem::lmi_stab} are feasible for the system \eqref{SHI::eq::clF} if and only if there exists a symmetric matrix $Y \cg 0$ satisfying \eqref{SHI::theo::eq::lmi_gsc}.
\end{lemma}
\subsubsection{Dual Iteration}\label{SHI::sec::dual}
We are now in the position to discuss the core of the dual iteration and to provide the first key result. To this end, let us suppose that we have designed a full-information controller $u = F\t y$ by Lemma \ref{SHI::lem::full_info}. Then the following convex LMI conditions are sufficient for static output-feedback design.
\begin{theorem}
\label{SHI::theo::ofF}
Let $G_{ij}$ and $V$ be as in Theorem \ref{SHI::theo::of} and let $G^F$ be the transfer matrix corresponding to \eqref{SHI::eq::clF}. Further, suppose that $A + B_2F_1$ is stable. Then there exists a static controller \eqref{SHI::eq::con_of2} for the system \eqref{SHI::eq::sys_of} such that the analysis LMIs \eqref{SHI::lem::lmi_stab} are feasible for the corresponding closed-loop system if there exists a symmetric matrix $X$ satisfying
\begin{subequations}
\label{SHI::theo::eq::lmi_ofF}
\begin{equation}
\arraycolsep=3pt
V^T\Ls\left(\mat{cc}{0 & X \\ X & 0}, P_\ga, \mat{c}{G_{11} \\ I
}_{\ss} \right) V \cl 0
\teq{ and }
\Ls\left(\mat{cc}{0 & X \\ X & 0}, P_\ga, \mat{c}{G^F \\ I
}_{\ss} \right) \cl 0.
\dlabel{SHI::theo::eq::lmi_ofFa}{SHI::theo::eq::lmi_ofFb}
\end{equation}
\end{subequations}
Moreover, we have
\begin{equation*}
\ga_\mathrm{dof} \leq \ga_\opt \leq \ga_F
\end{equation*}
for $\ga_F$ being the infimal $\ga > 0$ such that the above LMIs are feasible.
\end{theorem}
The proof of Theorem \ref{SHI::theo::ofF} is as follows. The left upper block of \eqref{SHI::theo::eq::lmi_ofFb} reads as the Lyapunov inequality
\begin{equation*}
(A + B_2F_1)^TX + X(A + B_2F_1) + (C_1 + D_{12}F_1)^T(C_1 + D_{12}F_1) \cl 0.
\end{equation*}
Hence stability of $A + B_2F_1$ implies $X \cg 0$. This enables us to apply the elimination lemma \ref{RS::lem::elimination} in order to remove the full-information controller gain $F$ from the LMI \eqref{SHI::theo::eq::lmi_ofFb}, which yields exactly the third of the inequalities in Theorem \ref{SHI::theo::of}. Combined with $X \cg 0$ and \eqref{SHI::theo::eq::lmi_ofFa} this allows us to construct the desired static controller via Theorem \ref{SHI::theo::of}.
Observe that $A + B_2 F_1$ is stable by construction if the gain $F$ is designed based on Lemma \ref{SHI::lem::full_info}. Moreover, note that \eqref{SHI::theo::eq::lmi_ofFb} is exactly the analysis LMI \eqref{SHI::lem::lmi_stab} for the closed-loop system interconnection \eqref{SHI::eq::clF} involving the full-information controller. Intuitively, Theorem \ref{SHI::theo::ofF} links the static output-feedback and the full-information design problem with a common Lyapunov matrix $X$. Further, note that if we view the gain $F$ as a decision variable in \eqref{SHI::theo::eq::lmi_ofF} then we would even have $\ga_\opt = \ga_F$. However, the computation of $\ga_F$ would then be again as troublesome as the determination of $\ga_\opt$ itself.
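For concreteness, a possible implementation of the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} is sketched below for a given gain $F = (F_1, F_2)$; it follows the conventions of the previous sketch and again assumes numpy, scipy and cvxpy.
\begin{verbatim}
# Sketch of the primal step: minimize gamma^2 subject to the two primal
# synthesis LMIs with a common certificate X for a given gain F = (F1, F2).
import numpy as np
import cvxpy as cp
from scipy.linalg import null_space

def primal_step(A, B1, B2, C1, C2, D11, D12, D21, F, eps=1e-7):
    n, md = A.shape[0], B1.shape[1]
    F1, F2 = F[:, :n], F[:, n:]
    V = null_space(np.hstack([C2, D21]))
    X = cp.Variable((n, n), symmetric=True)
    g2 = cp.Variable()
    def ls(Ai, Bi, Ci, Di):   # Ls(X, P_gamma, [G; I]) written out block-wise
        return cp.bmat([[Ai.T @ X + X @ Ai + Ci.T @ Ci, X @ Bi + Ci.T @ Di],
                        [Bi.T @ X + Di.T @ Ci, Di.T @ Di - g2 * np.eye(md)]])
    cons = [V.T @ ls(A, B1, C1, D11) @ V << -eps * np.eye(V.shape[1]),
            ls(A + B2 @ F1, B1 + B2 @ F2,
               C1 + D12 @ F1, D11 + D12 @ F2) << -eps * np.eye(n + md)]
    cp.Problem(cp.Minimize(g2), cons).solve()
    return np.sqrt(g2.value), X.value   # gamma_F and the common certificate
\end{verbatim}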
\begin{remark}
The LMIs \eqref{SHI::theo::eq::lmi_ofF} admit a rather particular structure which can potentially be exploited by dedicated LMI solvers such as \cite{WalHan04} instead of relying on generic solvers for semi-definite programs as, e.g., \cite{GahNem97, Mos17, Stu01}. In particular, note that under the additional assumption that $\smat{B_1 \\ D_{21}}D_{21}^T = \smat{0 \\ I}$, which is among others a standard assumption in Riccati based $H_\infty$-control \cite{DoyGlo89}, a possible choice for the annihilator $V$ is $\smat{I \\ - D_{21}^TC_2}$. Then the LMI \eqref{SHI::theo::eq::lmi_ofFa} even simplifies to a Lyapunov inequality of the form
\begin{equation*}
A^TX+XA + (C_1 - D_{11}D_{21}^TC_2)^T(C_1 - D_{11}D_{21}^TC_2) - \ga^2 C_2^T C_2 \cl 0.
\end{equation*}
Exploring this potential for numerical improvements is beyond the scope of this paper.
\end{remark}
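For completeness, let us verify the choice of the annihilator in the preceding remark: under the assumption $\smat{B_1 \\ D_{21}}D_{21}^T = \smat{0 \\ I}$ we have $D_{21}D_{21}^T = I$ and, hence,
\begin{equation*}
\mat{cc}{C_2 & D_{21}} \mat{c}{I \\ -D_{21}^TC_2} = C_2 - D_{21}D_{21}^T C_2 = 0.
\end{equation*}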
While Theorem \ref{SHI::theo::ofF} is already interesting on its own, the key idea of the dual iteration now is that improved upper bounds on $\ga_\opt$ can be obtained without difficulty. This is achieved by considering the so-called dual design problem corresponding to the synthesis of full-information controllers as well. The latter problem consists of finding some (full-actuation) controller gain $E = (E_1^T, E_2^T)^T$ such that the analysis LMIs \eqref{SHI::lem::lmi_stab} are feasible for the system
\begin{equation}
\mat{c}{\dot x(t) \\ e(t)}
= \mat{cc}{A + E_1 C_2 & B_1 + E_1D_{21} \\
C_1 + E_2C_2 & D_{11} + E_2D_{21} }
\mat{c}{ x(t) \\ d(t)}
= \left(\mat{cc}{A & B_1 \\
C_1 & D_{11} }
+ E \mat{cc}{C_2 & D_{21}}
\right)
\mat{c}{ x(t) \\ d(t)}.
\label{SHI::eq::clE}
\end{equation}
As before a convex solution in terms of LMIs is immediately obtained by the elimination lemma \ref{RS::lem::elimination} and reads as follows.
\begin{lemma}
\label{SHI::lem::full_actu}
There exists some full-actuation gain $E$ such that the analysis LMIs \eqref{SHI::lem::lmi_stab} are feasible for the system \eqref{SHI::eq::clE} if and only if there exists a symmetric matrix $X \cg 0$ satisfying \eqref{SHI::theo::eq::lmi_gsb}.
\end{lemma}
Based on a designed full-actuation gain $E$ we can formulate another set of convex LMI conditions that are sufficient for static output-feedback design. The proof is analogous to the one of Theorem \ref{SHI::theo::ofF} and thus omitted.
\begin{theorem}
\label{SHI::theo::ofE}
Let $G_{ij}$ and $U$ be as in Theorem \ref{SHI::theo::of} and let $G^E$ be the transfer matrix corresponding to \eqref{SHI::eq::clE}. Further, suppose that $A + E_1C_2$ is stable. Then there exists a static controller \eqref{SHI::eq::con_of2} for the system \eqref{SHI::eq::sys_of} such that the analysis LMIs \eqref{SHI::lem::lmi_stab} are feasible for the corresponding closed-loop system if there exists a symmetric matrix $Y$ satisfying
\begin{subequations}
\label{SHI::theo::eq::lmi_ofE}
\begin{equation}
\arraycolsep=3pt
\Ls\left(\mat{cc}{0 & Y \\ Y & 0}, P_\ga^{-1}, \mat{c}{I \\ -(G^E)^\ast
}_{\ss} \right) \cg 0
\teq{ and }
U^T\Ls\left(\mat{cc}{0 & Y \\ Y & 0}, P_\ga^{-1}, \mat{c}{I \\ -G_{11}^\ast
}_{\ss} \right) U \cg 0.
\dlabel{SHI::theo::eq::lmi_ofEa}{SHI::theo::eq::lmi_ofEb}
\end{equation}
\end{subequations}
Moreover, we have
\begin{equation*}
\ga_\mathrm{dof} \leq \ga_\opt \leq \ga_E
\end{equation*}
for $\ga_E$ being the infimal $\ga > 0$ such that the above LMIs are feasible.
\end{theorem}
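Analogously, a sketch of the dual synthesis LMIs \eqref{SHI::theo::eq::lmi_ofE} follows for a given gain $E$; the $\ga^{-2}$-terms are handled by the same Schur complement argument as in the sketch for $\ga_\mathrm{dof}$, and all names are again illustrative.
\begin{verbatim}
# Sketch of the dual step: minimize gamma^2 subject to the two dual
# synthesis LMIs with a common certificate Y for a given gain E = (E1; E2).
import numpy as np
import cvxpy as cp
from scipy.linalg import null_space

def dual_step(A, B1, B2, C1, C2, D11, D12, D21, E, eps=1e-7):
    n, md, pe = A.shape[0], B1.shape[1], C1.shape[0]
    E1, E2 = E[:n, :], E[n:, :]
    U = null_space(np.hstack([B2.T, D12.T]))
    Y = cp.Variable((n, n), symmetric=True)
    g2 = cp.Variable()
    def ls_dual(Ai, Bi, Ci, Di, W):  # W^T Ls(Y, P_gamma^{-1}, [I; -G^*]) W > 0
        N0 = cp.bmat([[-Ai @ Y - Y @ Ai.T, -Y @ Ci.T],
                      [-Ci @ Y,            np.eye(pe)]])
        R = np.vstack([Bi, Di])
        return cp.bmat([[W.T @ N0 @ W, W.T @ R],
                        [R.T @ W,      g2 * np.eye(md)]])
    cons = [ls_dual(A + E1 @ C2, B1 + E1 @ D21, C1 + E2 @ C2,
                    D11 + E2 @ D21, np.eye(n + pe)) >> eps * np.eye(n + pe + md),
            ls_dual(A, B1, C1, D11, U) >> eps * np.eye(U.shape[1] + md)]
    cp.Problem(cp.Minimize(g2), cons).solve()
    return np.sqrt(g2.value), Y.value   # gamma_E and the dual certificate
\end{verbatim}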
In the sequel we refer to the LMIs \eqref{SHI::theo::eq::lmi_ofF} and \eqref{SHI::theo::eq::lmi_ofE} as primal and dual synthesis LMIs, respectively.
Accordingly, we address Theorems \ref{SHI::theo::ofF} and \ref{SHI::theo::ofE} as primal and dual design results, respectively.
Observe that the primal and dual design results are nicely intertwined as follows.
\begin{theorem}
\label{SHI::theo::it_summary}
The following two statements hold.
\begin{itemize}
\item If $A + B_2F_1$ is stable and the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} are feasible for some $\ga$ and some full-information gain $F$, then there exists some full-actuation gain $E$ such that $A+E_1C_2$ is stable and the dual synthesis LMIs \eqref{SHI::theo::eq::lmi_ofE} are feasible for $\ga$. In particular, we have $\ga_E \leq \ga$.
\item If $A + E_1C_2$ is stable and the dual synthesis LMIs \eqref{SHI::theo::eq::lmi_ofE} are feasible for some $\ga$ and some full-actuation gain $E$, then there exists some full-information gain $F$ such that $A + B_2F_1$ is stable and the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} are feasible for $\ga$. In particular, we have $\ga_F \leq \ga$.
\end{itemize}
\end{theorem}
\begin{proof}
We only show the first statement as the second one follows with analogous arguments. If $A + B_2F_1$ is stable and the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} are feasible, we can infer $X \cg 0$ from \eqref{SHI::theo::eq::lmi_ofFb} as in the proof of Theorem \ref{SHI::theo::ofF}. Due to \eqref{SHI::theo::eq::lmi_ofFa} and Lemma \ref{SHI::lem::full_actu}, we can then infer the existence of a full-actuation gain $E$ satisfying
\begin{equation*}
\Ls\left(\mat{cc}{0 & X \\ X & 0}, P_\ga, \mat{c}{G^E \\ I}_\ss \right) \cl 0
\end{equation*}
with exactly the same Lyapunov matrix $X\cg 0$ as in \eqref{SHI::theo::eq::lmi_ofF}. In particular, the left upper block of the above LMI is a standard Lyapunov inequality which allows us to conclude that $A + E_1C_2$ is stable. Moreover, an application of the dualization lemma \ref{RS::lem::dualization} as given in the appendix allows us to infer that \eqref{SHI::theo::eq::lmi_ofEa} is satisfied for $Y := X^{-1} \cg 0$. Finally, by using the elimination lemma \ref{RS::lem::elimination} on the LMI \eqref{SHI::theo::eq::lmi_ofFb} to remove the full-information gain $F$, we infer that \eqref{SHI::theo::eq::lmi_ofEb} is satisfied as well. This concludes the proof.
\end{proof}
The dual iteration for static output-feedback design now essentially amounts to alternately applying the two statements in Theorem~\ref{SHI::theo::it_summary} and is stated as follows.
\begin{Algorithm}
\label{SHI::algo::dual_iteration}
Dual iteration for static output-feedback $H_\infty$-design.
\begin{enumerate}
\item \emph{Initialization:} Compute the lower bound $\ga_\mathrm{dof}$ based on solving the dynamic full-order synthesis LMIs \eqref{SHI::theo::eq::lmi_gs} and set $k = 1$.
Design an initial full-information gain $F$ from Lemma \ref{SHI::lem::full_info}.
\item \emph{Primal step:} Compute $\ga_F$ based on solving the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} and set $\ga^k := \ga_F$.
Design a corresponding close-to-optimal full-actuation gain $E$ from Lemma \ref{SHI::lem::full_actu}.
\item \emph{Dual step:} Compute $\ga_E$ based on solving the dual synthesis LMIs \eqref{SHI::theo::eq::lmi_ofE} and set $\ga^{k+1} := \ga_E$. Design a corresponding close-to-optimal full-information gain $F$ from Lemma \ref{SHI::lem::full_info}.
\item \emph{Termination:} If $k$ is too large or $\ga^k$ does not decrease any more, then stop and construct a close-to-optimal static output-feedback controller \eqref{SHI::eq::con_of2} for the system \eqref{SHI::eq::sys_of} according to Theorem \ref{SHI::theo::ofE}. \\
Otherwise set $k = k+2$ and go to the primal step.
\end{enumerate}
\end{Algorithm}
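The following hypothetical skeleton indicates how the earlier sketches could be tied into Algorithm \ref{SHI::algo::dual_iteration}; the two gain constructions are placeholders for the constructive, elimination-based designs of Lemmas \ref{SHI::lem::full_info} and \ref{SHI::lem::full_actu} and are deliberately not implemented here.
\begin{verbatim}
# Hypothetical skeleton of the dual iteration, reusing gamma_dof,
# primal_step and dual_step from the previous sketches; the construct_*
# helpers are placeholders and must be supplied by the user.
def construct_full_information(sysdata, gamma):
    raise NotImplementedError   # elimination-based design of F
def construct_full_actuation(sysdata, gamma):
    raise NotImplementedError   # elimination-based design of E

def dual_iteration(sysdata, kmax=10, tol=1e-3):
    bounds = [gamma_dof(*sysdata)]           # initialization: lower bound
    F = construct_full_information(sysdata, 1.01 * bounds[0])
    for _ in range(kmax):
        gF, _ = primal_step(*sysdata, F)     # primal step
        E = construct_full_actuation(sysdata, 1.01 * gF)
        gE, _ = dual_step(*sysdata, E)       # dual step
        F = construct_full_information(sysdata, 1.01 * gE)
        bounds += [gF, gE]
        if bounds[-3] - bounds[-1] < tol:    # progress or gap is small
            break
    return bounds        # gamma_dof followed by the upper bounds gamma^k
\end{verbatim}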
\begin{remark}
~
\label{SHI::rema::stuff}
\begin{enumerate}[(a)]
\item Theorem \ref{SHI::theo::it_summary} ensures that Algorithm \ref{SHI::algo::dual_iteration} is recursively feasible, i.e., it will not get stuck due to infeasibility of some LMIs, if the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} are feasible when performing the primal step for the first time. Additionally, the proof of Theorem \ref{SHI::theo::it_summary} demonstrates that we can even warm start the feasibility problems in the primal and dual steps by providing a feasible initial guess for the involved variables. This reduces the computational burden.
Moreover, if we replace ``close-to-optimal'' with ``optimal'' in Algorithm \ref{SHI::algo::dual_iteration}, which is typically not advisable for numerical reasons, then we are guaranteed to have
\begin{equation*}
\ga_\mathrm{dof} \leq \ga_\opt \leq \ga^k \leq \dots \leq \ga^2 \leq \ga^1
\teq{ for all }k\in \N.
\end{equation*}
In general all the above inequalities are strict as Theorems \ref{SHI::theo::ofF} and \ref{SHI::theo::ofE} only provide sufficient conditions. Finally and as for other approaches, there is no information on the size of the gaps and the sequence $(\ga^k)_{k\in \N}$ is not guaranteed to converge to the optimal $\ga_\opt$.
Nevertheless, the number of required iterations to obtain acceptable bounds on the energy gain is rather low as will be demonstrated.
\item As for any heuristic design it can be beneficial to perform an a posteriori closed-loop analysis via Lemma \ref{SHI::lem::stab}. The resulting closed-loop energy gain is guaranteed to be not larger than the corresponding upper bound $\ga^k$.
\item The achieved energy gain can potentially be improved by using the designed static controller as an initial guess, e.g., for a D-K iteration or a non-smooth optimization technique. Such an initialization can be beneficial as the synthesized controller is already robustly stabilizing by design and typically admits a quite acceptable energy gain.
Similarly as in \cite{HolSch19} for robust output-feedback design, we obtained only marginal improvements for several numerical examples by following this strategy.
Conversely, if a static controller $K$ achieving a closed-loop energy gain bounded by $\ga$ is taken for initialization, we can infer feasibility of the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} for the full-information controller $F = (KC_2, KD_{21})$ and we have $\ga_F \leq \ga$ due to \eqref{SHI::theo::eq::lmi_ofFb}. Analogously, we also infer feasibility of the dual synthesis LMIs \eqref{SHI::theo::eq::lmi_ofE} for $E = ((B_2K)^T, (D_{12}K)^T)^T$ and $\ga_E \leq \ga$.
\item Suppose that $K$ is a static controller achieving a closed-loop energy gain of exactly $\t \ga$, i.e., the analysis LMIs \eqref{SHI::lem::lmi_stab} for the closed-loop are feasible for $\ga = (1+\eps)\t \ga$ and infeasible for $\ga = (1-\eps)\t \ga$ for any $\eps > 0$. Then the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} are feasible for $F = (KC_2, KD_{21})$ and we have $\t \ga = \ga_F$ due to \eqref{SHI::theo::eq::lmi_ofFb}. As a consequence, one should not construct an optimal controller $K$ in the dual step and choose $F = (KC_2, KD_{21})$ instead of employing Lemma \ref{SHI::lem::full_info} while iterating; such a strategy is very likely to stop the algorithm from progressing in the primal steps.
\item If one is only interested in stability as in the original publications \cite{Iwa97, Iwa99}, one should replace the analysis LMIs \eqref{SHI::lem::lmi_stab} with
\begin{equation*}
X \cg 0
\teq{ and }
A^TX + XA \cl \ga X
\end{equation*}
and adapt the design results accordingly while still trying to minimize $\ga$. Note that it is not easily possible in this case to replace the term $\ga X$ by $\ga I$ as dualization and elimination are involved. Hence all appearing design problems will no longer be convex LMIs, but generalized eigenvalue problems. Such problems can be efficiently solved as well, e.g., with Matlab and LMIlab \cite{GahNem95}; since they are LMIs for fixed $\ga$, a simple bisection also applies, as sketched after this remark.
\end{enumerate}
\end{remark}
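As referenced in the last item of the preceding remark, the following minimal sketch bisects $\ga$ for the stability-only variant; feasibility is monotone in $\ga$, so bisection is valid, and the function name, search interval and tolerances are our own illustrative choices.
\begin{verbatim}
# Sketch for the stability-only variant: for fixed gamma the inequality
# A^T X + X A < gamma X with X > 0 is an LMI, so gamma can be bisected;
# the trace normalization removes the scaling freedom in X.
import numpy as np
import cvxpy as cp

def gevp_bisection(A, lo=-10.0, hi=10.0, iters=40, eps=1e-7):
    n = A.shape[0]
    for _ in range(iters):
        gamma = 0.5 * (lo + hi)
        X = cp.Variable((n, n), symmetric=True)
        prob = cp.Problem(cp.Minimize(0),
                          [X >> eps * np.eye(n), cp.trace(X) == 1.0,
                           A.T @ X + X @ A - gamma * X << -eps * np.eye(n)])
        prob.solve()
        if prob.status == cp.OPTIMAL:
            hi = gamma      # feasible: try a smaller gamma
        else:
            lo = gamma
    return hi
\end{verbatim}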
\begin{remark}
\label{SHI::rema::init}
The selection of a suitable gain $F$ during the initialization of Algorithm \ref{SHI::algo::dual_iteration} can be crucial as feasibility of the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} is \emph{not} guaranteed by the feasibility of the dynamic full-order synthesis LMIs \eqref{SHI::theo::eq::lmi_gs} and depends on the concrete choice of the gain $F$.
Similarly as in \cite{Iwa99}, we propose to compute the lower bound $\ga_\mathrm{dof}$ and then to reconsider the LMIs \eqref{SHI::theo::eq::lmi_gs} for $\ga = (1+\eps) \ga_\mathrm{dof}$ and some fixed $\eps > 0$ while minimizing $\tr(X + Y)$. Due to \eqref{SHI::theo::eq::lmi_gsa}, this is a common heuristic that aims to push $X$ towards $Y^{-1}$ and which promotes feasibility of the non-convex design LMIs in Theorem \ref{SHI::theo::of}. Constructing a gain $F$ based on Lemma \ref{SHI::lem::full_info} and these modified LMIs promotes feasibility of the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} as well.
\end{remark}
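In code, this heuristic amounts to re-solving the LMIs from the $\ga_\mathrm{dof}$ sketch with $\ga$ fixed and a trace objective; the helper below reuses the hypothetical dof_constraints function from that sketch, and the relative margin is an illustrative choice.
\begin{verbatim}
# Sketch of the initialization heuristic: fix gamma = (1+eps)*gamma_dof
# and minimize tr(X + Y) subject to the same LMIs.
def init_heuristic(sysdata, gdof, rel=1e-2):
    n = sysdata[0].shape[0]
    X = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((n, n), symmetric=True)
    g2 = ((1.0 + rel) * gdof) ** 2   # gamma^2 is now a fixed constant
    cp.Problem(cp.Minimize(cp.trace(X + Y)),
               dof_constraints(*sysdata, X, Y, g2)).solve()
    return X.value, Y.value   # used to construct F via Lemma 2
\end{verbatim}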
In the next sections we show that the dual iteration as described by Algorithm \ref{SHI::algo::dual_iteration} can be extended to a variety of other design problems. For some of those we rely on a control theoretic interpretation of its individual steps as given next.
\subsubsection{A Control Theoretic Interpretation}\label{SHI::sec::interpretation}
So far the entire dual iteration solely relies on algebraic manipulations by heavily exploiting the elimination lemma \ref{RS::lem::elimination}. This makes an application of the iteration relatively simple, but not very insightful and thus difficult to generalize. A control theoretic interpretation of the individual steps can be provided based on our robust output-feedback design approach proposed in \cite{HolSch19}, which was motivated by the well-known separation principle.
The classical separation principle states that one can synthesize a stabilizing dynamic output-feedback controller by combining a state observer with a state-feedback controller, which can be designed completely independently from each other.
Instead, we proposed in \cite{HolSch19} to design a full-information controller and thereafter to solve a particular robust design problem. The latter problem is briefly recalled next.
\vspace{1ex}
Suppose that we have synthesized a full-information controller $\t u = F\t y$ via Lemma \ref{SHI::lem::full_info}. Then we can incorporate this controller into the closed-loop interconnection in Fig.~\ref{SHI::fig::of} with the to-be-designed static controller $K$ and some parameter $\del \in [0, 1]$ as depicted on the left in Fig.~\ref{SHI::fig::of_homotop}. In this new configuration note that the control input $u$ satisfies
\begin{equation*}
u = (1 - \del)\t u + \del \h u,
\end{equation*}
i.e., it is a convex combination of the outputs of the full-information and of the to-be-designed static output-feedback controller.
In particular, for $\del = 0$, we retrieve \eqref{SHI::eq::clF}, the interconnection of the system \eqref{SHI::eq::sys_of} with the full-information controller $u = F \t y$ for the output $\t y = \smat{x \\ d}$, and, for $\del = 1$, we recover the original interconnection as depicted in Fig.~\ref{SHI::fig::of}.
This motivates viewing $\del$ as a homotopy parameter that continuously deforms the former interconnection into the latter.
\begin{figure}
\vspace{1ex}
\begin{center}
\begin{minipage}{0.49\textwidth}
\begin{center}
\includegraphics[]{SHI_OF_homotop0}
\end{center}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\begin{center}
\includegraphics[]{SHI_OF_homotopc}
\end{center}
\end{minipage}
\end{center}
\caption{Left: Incorporation of the full-information controller gain $F$ into the interconnection in Fig.~\ref{SHI::fig::of} with the to-be-designed static controller $K$ and some parameter $\del\in [0, 1]$. Right: Allowing the controller to take additional measurements of $u$.}
\label{SHI::fig::of_homotop}
\end{figure}
As in \cite{HolSch19} we treat the parameter $\del$ as an uncertainty. A robust design of $K$ alone renders the obtainable upper bounds on the closed-loop energy gain rather conservative. To counteract this conservatism, we allow the to-be-designed controller to additionally include measurements of the convex combination $u$, which results in the configuration on the right in Fig.~\ref{SHI::fig::of_homotop}. This is expected to be beneficial since the controller knows its own output $\h u$ and, thus, measuring $u$ essentially amounts to measuring the new uncertain signal as well. Note that restricting $\t K$ to admit the structure $\t K = (K, 0)$ results again in the configuration on the left in Fig.~\ref{SHI::fig::of_homotop}.
Observe that disconnecting the controller $\t K$ leads to the uncertain open-loop system
\begin{equation}
\mat{c}{\dot x(t) \\ \hline e(t) \\ \hdashline \t z(t) \\\hdashline \h y(t)}
= \mat{c|c:c:c}{A^F & B_1^F & B_2 & 0 \\ \hline
C_1^F & D_{11}^F & D_{12} & 0 \\ \hdashline
C_2^F & D_{21}^F & 0 & I \\ \hdashline
C_3^F & D_{31}^F & D_{32}^F & 0}
\mat{c}{x(t) \\ \hline d(t) \\ \t w(t) \\ \h u(t)}
=
\mat{c|c:c:c}{A +B_2F_1 & B_1 +B_2 F_2 & B_2 & 0\\ \hline
C_1+D_{12}F_1 & D_{11}+D_{12}F_2 & D_{12} & 0\\ \hdashline
-F_1 & -F_2 & 0 & I \\ \hdashline
C_2 & D_{21} & 0 & 0 \\
F_1 & F_2 & I & 0}
\mat{c}{x(t) \\\hline d(t) \\ \hdashline \t w(t) \\ \hdashline \h u(t)},
\qquad
\t w(t) = \del \t z(t)
\label{SHI::eq::sys_es}
\end{equation}
for $t\geq 0$ and with $\t z := \h u - \t u$ as well as $\h y := \smat{y \\ u}$. Note that the structure of the system \eqref{SHI::eq::sys_es} is closely related to the one appearing in estimation problems as considered, e.g., in \cite{SunPac05, GerDeo01,SchKoe08, Ger99}. In particular, we will see next that the problem of finding a robust static controller $\t K$ for \eqref{SHI::eq::sys_es} can be turned convex.
To this end, note that reconnecting the controller $\t K$ leads to an uncertain closed-loop system with description
\begin{equation}
\mat{c}{\dot x(t) \\ \hline e(t) \\ \t z(t) } =
\mat{c|cc}{\Ac^F & \Bc_1^F & \Bc_2^F\\ \hline
\Cc_1^F & \Dc_{11}^F & \Dc_{12}^F \\
\Cc_2^F & \Dc_{21}^F & \Dc_{22}^F}
\mat{c}{x(t) \\\hline d(t) \\ \t w(t) }
=\mat{c|cc}{A^F & B_1^F & B_2\\ \hline
C_1^F & D_{11}^F & D_{12} \\
C_2^F +\t K C_3^F & D_{21}^F + \t K D_{31}^F & \t K D_{32}^F}
\mat{c}{x(t) \\\hline d(t) \\ \t w(t) },
\qquad
\t w(t) = \del \t z(t).
\label{SHI::eq::sys_escl}
\end{equation}
We analyze the latter system via static IQCs \cite{MegRan97} and employ the set of constant multipliers
\begin{equation*}
\Pb := \left\{\mat{cc}{0 & H^T \\ H & -H - H^T} ~\middle|~ H + H^T \cg 0 \right\}
\end{equation*}
in order to deal with the (uncertain) homotopy parameter $\del \in [0,1]$; any multiplier $P \in \Pb$ satisfies
\begin{equation}
\mat{c}{I \\ \del I}^T P \mat{c}{I \\ \del I}
=\del (1 - \del) (H + H^T) \cge 0
\teq{ for all }\del \in [0, 1].
\label{SHI::eq::multi_minus_one}
\end{equation}
This leads to the following robust analysis result which can also be viewed as a special case of the findings in \cite{Sch01}.
\begin{lemma}
\label{SHI::lem::stabes}
Let $\Gc_{ij}^F$ be the transfer matrix corresponding to the closed-loop system \eqref{SHI::eq::sys_escl}. Then the system \eqref{SHI::eq::sys_escl} is well-posed, i.e., $\det(I - \del \Dc_{22}^F) \neq 0$ for all $\del \in [0,1]$, and admits an energy gain smaller than $\ga$ for all $\del \in [0, 1]$ if there exist symmetric matrices $X$ and $ P\in \Pb$ satisfying
\begin{equation}
X \cg 0
\teq{ and }
\Ls\left(\mat{cc}{0 & X \\ X & 0}, \mat{c|c}{P_\ga & 0 \\ \hline 0 & P}, \mat{cc}{\Gc_{11}^F & \Gc_{12}^F \\ I & 0 \\ \hline \Gc_{21}^F & \Gc_{22}^F \\ 0 & I}_\ss \right) \cl 0.
\label{SHI::lem::lmi_stabes}
\end{equation}
\end{lemma}
Similarly as discussed in \cite{HolSch19}, the specific structure of the system \eqref{SHI::eq::sys_es} \emph{and} of the multipliers in $\Pb$ allow the design problem corresponding to the right in Fig.~\ref{SHI::fig::of_homotop} to be turned into a convex problem, e.g., by relying on the elimination lemma. This is the first statement of the following result.
\begin{theorem}
\label{SHI::theo::estimation}
Let $G_{ij}^F$ denote the transfer matrices corresponding to \eqref{SHI::eq::sys_es} and let $V_F$ be a basis matrix of $\ker\smat{C_2 & D_{21} & 0 \\ F_1 & F_2 & I}$. Further, suppose that $A^F = A + B_2F_1$ is stable.
Then there exists a controller $\t K$ for the system \eqref{SHI::eq::sys_es} such that the robust analysis LMIs \eqref{SHI::lem::lmi_stabes} are satisfied if and only if there exist symmetric matrices $X$ and $ P\in\Pb$ satisfying
\begin{subequations}
\label{SHI::theo::eq::lmi_es}
\begin{equation}
\arraycolsep=2pt
V_F^T\Ls \left( \mat{cc}{0 & X \\ X & 0}, \mat{c:c}{P_\ga & 0 \\ \hdashline 0 & P},
\mat{cc}{G_{11}^F & G_{12}^F \\ I & 0 \\ \hdashline
G_{21}^F & G_{22}^F \\ 0 & I }_{\ss}\right)V_F \cl 0
\teq{ and }
\Ls\left(\mat{cc}{0 & X \\ X & 0}, P_\ga,
\mat{c}{G_{11}^F \\ I }_\ss \right) \cl 0.
\dlabel{SHI::theo::eq::lmi_esa}{SHI::theo::eq::lmi_esb}
\end{equation}
\end{subequations}
Moreover, the above LMIs are feasible if and only if the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} are feasible. In particular, there exists a controller $K$ for the original system \eqref{SHI::eq::sys_of} such that the analysis LMIs \eqref{SHI::lem::lmi_stab} are feasible for the corresponding closed-loop interconnection.
\end{theorem}
\vspace{1ex}
In \cite{HolSch19} we gave a trajectory-based proof of the second statement of Theorem \ref{SHI::theo::estimation} in the context of robust output-feedback design. Here, we show, solely based on algebraic manipulations and on the LFR framework, that the above theorem actually recovers the primal design result Theorem \ref{SHI::theo::ofF} while having a nice interpretation in terms of Fig.~\ref{SHI::fig::of_homotop}.
\begin{proof}
{\it First statement:} Suppose that there is a controller $\t K$ such that the closed-loop robust analysis LMIs \eqref{SHI::lem::lmi_stabes} are satisfied. Further, note that $U = \smat{I & 0 & 0 \\ 0 & I & 0}^T$ is an annihilator for $(0, 0, I)$ and that $P^{-1} = \smat{\bullet & \bullet \\ \bullet & 0}$ due to the structure of multipliers in $\Pb$. Applying the elimination lemma leads then to the LMI \eqref{SHI::theo::eq::lmi_esa} and to
\begin{multline*}
\arraycolsep=0.5pt
0 \cl (\bullet)^T \mat{cc|c:c}{0 & X & 0 & 0 \\ X & 0 & 0 & 0 \\ \hline 0 & 0 & P_\ga & 0 \\ \hdashline 0 & 0 & 0 & P}^{-1}
\mat{ccc}{I & 0 & 0 \\ -(A^F)^T & -(C_1^F)^T & -(C_2^F)^T \\ \hline
0 & I & 0 \\ -(B_1^F)^T & -(D_{11}^F)^T & -(D_{21}^F)^T \\ \hdashline
0 & 0 & I \\ -B_2^T & -D_{12}^T & 0} U
= (\bullet)^T \mat{cc|c:c}{0 & X^{-1} & 0 & 0 \\ X^{-1} & 0 & 0 & 0 \\ \hline 0 & 0 & P_\ga^{-1} & 0 \\ \hdashline 0 & 0 & 0 & P^{-1}}
\mat{cc}{I & 0 \\ -(A^F)^T & -(C_1^F)^T \\ \hline
0 & I \\ -(B_1^F)^T & -(D_{11}^F)^T \\ \hdashline
0 & 0 \\ -B_2^T & -D_{12}^T}\\
= (\bullet)^T \mat{cc|c}{0 & X^{-1} & 0 \\ X^{-1} & 0 & 0 \\ \hline 0 & 0 & P_\ga^{-1} }
\mat{cc}{I & 0 \\ -(A^F)^T & -(C_1^F)^T \\ \hline
0 & I \\ -(B_1^F)^T & -(D_{11}^F)^T}.
\end{multline*}
An application of the dualization lemma \ref{RS::lem::dualization} yields \eqref{SHI::theo::eq::lmi_esb} and finishes the necessity part of the proof. The converse is obtained by reversing the arguments.
{\it Second statement:} Observe that a valid annihilator $V_F$ is given by the choice $V_F = \smat{I \\ -F} V$ with $V$ being a basis matrix of $\ker(C_2, D_{21})$. Moreover, via elementary computations and by recalling \eqref{SHI::eq::sys_es}, we have
\begin{equation*}
\mat{c:c}{G_{11}^F & G_{12}^F \\ I & 0 \\ \hdashline
G_{21}^F & G_{22}^F \\ 0 & I }_{\ss} \mat{c}{I \\ -F}
= \mat{cc}{I & 0 \\ A & B_1 \\ \hline
C_1 & D_{11} \\ 0 & I \\ \hdashline
-\!F_1 & -F_2 \\ -F_1 & -F_2 }
= \mat{c}{\mat{c}{G_{11} \\ I
}_{\ss} \\\hdashline - \mat{c}{I \\ I}F}.
\end{equation*}
In particular, the LMI \eqref{SHI::theo::eq::lmi_esa} reads as
\begin{equation*}
0 \cg V^T\Ls\left(\mat{cc}{0 & X \\ X & 0}, P_\ga, \mat{c}{G_{11} \\ I
}_{\ss} \right) V
+ V^T F^T \mat{c}{I \\ I}^T P \mat{c}{I \\ I}FV
\end{equation*}
which is actually identical to \eqref{SHI::theo::eq::lmi_ofFb} since $\smat{I \\ I}^TP\smat{I \\ I} = 0$ due to $P \in \Pb$. This shows that feasibility of \eqref{SHI::theo::eq::lmi_es} implies validity of \eqref{SHI::theo::eq::lmi_ofF}. Conversely, if \eqref{SHI::theo::eq::lmi_ofFb} is satisfied, we can pick any $P \in \Pb$ and infer that the above inequality is true which leads to \eqref{SHI::theo::eq::lmi_es}.
\end{proof}
The most important benefit of the above interpretation is that the design problem corresponding to Fig.~\ref{SHI::fig::of_homotop} can also be solved, e.g., via a convexifying parameter transformation instead of elimination and in various other important scenarios. In particular, this allows for an extension of the dual iteration to situations where elimination is not or only partly possible.
To this end, let us show how to solve the design problem corresponding to Fig.~\ref{SHI::fig::of_homotop} without elimination.
\begin{theorem}
\label{SHI::theo::ofF_par}
Suppose that $A^F = A + B_2F_1$ is stable. Then there exists a controller $\t K$ for the system \eqref{SHI::eq::sys_es} such that the robust analysis LMIs \eqref{SHI::lem::lmi_stabes} are feasible if and only if there exist matrices $H$, $N = (N_1, N_2)$ and a symmetric matrix $X$ satisfying
\begin{subequations}
\label{SHI::theo::eq::lmi_ofF_par}
\begin{equation}
H + H^T \cg 0
\teq{ and }
\Ls\left(\mat{cc}{0 & I \\ I & 0},
\mat{c:c}{P_\ga & 0 \\ \hdashline 0 & \Hb},
\mat{ccc}{\Ab & \Bb_1 & \Bb_2 \\ \hline
C_1^F & D_{11}^F & D_{12} \\ 0 & I & 0 \\ \hdashline
\Cb_2 & \Db_{21} & \Db_{22} \\ 0 & 0 & I} \right) \cl 0
\dlabel{SHI::theo::eq::lmi_ofF_para}{SHI::theo::eq::lmi_ofF_parb}
\end{equation}
\end{subequations}
where
\begin{equation*}
\Hb := \mat{cc}{0 & I \\ I & -H - H^T}
\teq{ and }
\mat{ccc}{\Ab & \Bb_1 & \Bb_2 \\\Cb_2 & \Db_{21} & \Db_{22}}
= \mat{ccc}{
XA^F & XB_1^F & XB_2 \\
HC_2^F + NC_3^F & HD_{21}^F + ND_{31}^F & ND_{32}^F}.
\end{equation*}
If the above LMIs are feasible, a static controller $K$ such that the analysis LMIs \eqref{SHI::lem::lmi_stab} are satisfied for the closed-loop system \eqref{SHI::eq::cl_of} is given by
\begin{equation*}
K := (H - N_2)^{-1}N_1.
\end{equation*}
\end{theorem}
\begin{proof}
We only prove the sufficiency part of the first statement and the second statement for brevity.
Note at first that $X\cg 0$ follows from stability of $A^F$ and by considering the left upper block of \eqref{SHI::theo::eq::lmi_ofF_parb}.
Moreover, observe that $H$ is nonsingular by $H + H^T \cg 0$. Then we have
\begin{equation*}
\mat{ccc}{\Cb_2 & \Db_{21} & \Db_{22}}
= H\mat{ccc}{C_2^F + H^{-1}NC_3^F & D_{21}^F + H^{-1}ND_{31}^F & H^{-1}ND_{32}^F}
=: H \mat{ccc}{\t \Cb_2 & \t \Db_{21} & \t \Db_{22}}
\end{equation*}
and we can rewrite \eqref{SHI::theo::eq::lmi_ofF_parb} with $P := \smat{0 & H^T \\ H & -H-H^T} \in \Pb$ as
\begin{equation}
\Ls\left(\mat{cc}{0 & X \\ X & 0},
\mat{c:c}{P_\ga & 0 \\ \hdashline 0 & P},
\mat{ccc}{A^F & B_1^F & B_2 \\ \hline
C_1^F & D_{11}^F & D_{12} \\ 0 & I & 0 \\ \hdashline
\t \Cb_2 & \t \Db_{21} & \t \Db_{22} \\ 0 & 0 & I} \right) \cl 0.
\label{SHI::pro::lmiF}
\end{equation}
In particular, $\t K := H^{-1}N$ is a controller for the system \eqref{SHI::eq::sys_es} as desired. Moreover, from the right lower block of \eqref{SHI::pro::lmiF} and the structure of $P$ we infer
\begin{equation*}
\mat{c}{\t \Db_{22} \\ I }^TP\mat{c}{\t \Db_{22} \\ I } \cl 0
\teq{ and }
\mat{c}{I \\ \del I}^T P \mat{c}{I \\ \del I} \cge 0
\teq{ for all }\del \in [0, 1].
\end{equation*}
This implies $\det(I - \del \t \Db_{22}) \neq 0$ for all $\del \in [0, 1]$ and, in particular, that $I - \t \Db_{22}$ is nonsingular. Note that by the definition of the bold-face matrices and by the structure in \eqref{SHI::eq::sys_es}, we have $\t \Db_{22} = H^{-1}N_2$ and hence
\begin{equation*}
W := (W_1, W_2) := (I - \t \Db_{22})^{-1}(\t \Cb_2, \t \Db_{21}) = -F + (H - N_2)^{-1}N_1 (C_2, D_{21}) = -F + K(C_2, D_{21}).
\end{equation*}
Then we obtain via elementary computations
\begin{equation*}
\mat{ccc}{I & 0 & 0 \\ A^F & B_1^F & B_2 \\ \hline
C_1^F & D_{11}^F & D_{12} \\ 0 & I & 0 \\ \hdashline
\t \Cb_2 & \t \Db_{21} & \t \Db_{22} \\ 0 & 0 & I}
\mat{cc}{I & 0 \\ 0 & I \\
W_1 & W_2 }
= \mat{cc}{I & 0 \\
A + B_2 K C_2 & B_1 + B_2 K D_{21} \\ \hline
C_1 + D_{12}K C_2 & D_{11} + D_{12}K D_{21} \\ 0 & I \\ \hdashline
W_1 & W_2 \\ W_1 & W_2 }.
\end{equation*}
In particular, we can infer from \eqref{SHI::pro::lmiF} that
\begin{equation*}
\Ls\left(\mat{cc}{0 & X \\ X & 0}, P_\ga,
\mat{cc}{A + B_2KC_2 & B_1 + B_2KD_{21} \\ \hline
C_1 + D_{12}KC_2 & D_{11} + D_{12}KD_{21} \\ 0 & I }\right)
\cl - W^T\mat{c}{I \\ I}^T P \mat{c}{I \\ I} W = 0.
\end{equation*}
This yields the last claim.
\end{proof}
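To make the convexifying parameter transformation of Theorem \ref{SHI::theo::ofF_par} concrete, a possible cvxpy sketch is given below; the bold-face blocks are spelled out entry-wise so that the inequality is affine in $(X, H, N, \ga^2)$, the reconstruction of $K$ follows the statement of the theorem, and all names are illustrative.
\begin{verbatim}
# Sketch of the elimination-free primal step: the LMIs of Theorem 8
# written out block-wise, affine in (X, H, N, gamma^2).
import numpy as np
import cvxpy as cp

def primal_step_no_elim(A, B1, B2, C1, C2, D11, D12, D21, F, eps=1e-7):
    n, md, mu, py = A.shape[0], B1.shape[1], B2.shape[1], C2.shape[0]
    F1, F2 = F[:, :n], F[:, n:]
    AF, BF = A + B2 @ F1, B1 + B2 @ F2           # matrices of the system with F
    C1F, D11F = C1 + D12 @ F1, D11 + D12 @ F2
    C2F, D21F = -F1, -F2
    C3F, D31F = np.vstack([C2, F1]), np.vstack([D21, F2])
    D32F = np.vstack([np.zeros((py, mu)), np.eye(mu)])
    X = cp.Variable((n, n), symmetric=True)
    H = cp.Variable((mu, mu))
    N = cp.Variable((mu, py + mu))
    g2 = cp.Variable()
    Ab, Bb1, Bb2 = X @ AF, X @ BF, X @ B2        # bold-face blocks
    Cb2 = H @ C2F + N @ C3F
    Db21, Db22 = H @ D21F + N @ D31F, N @ D32F
    M = cp.bmat([
        [Ab + Ab.T + C1F.T @ C1F, Bb1 + C1F.T @ D11F,
         Bb2 + C1F.T @ D12 + Cb2.T],
        [Bb1.T + D11F.T @ C1F, D11F.T @ D11F - g2 * np.eye(md),
         D11F.T @ D12 + Db21.T],
        [Bb2.T + D12.T @ C1F + Cb2, D12.T @ D11F + Db21,
         D12.T @ D12 + Db22 + Db22.T - H - H.T]])
    cons = [H + H.T >> eps * np.eye(mu), M << -eps * np.eye(n + md + mu)]
    cp.Problem(cp.Minimize(g2), cons).solve()
    N1, N2 = N.value[:, :py], N.value[:, py:]
    K = np.linalg.solve(H.value - N2, N1)        # K = (H - N2)^{-1} N1
    return np.sqrt(g2.value), K
\end{verbatim}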
The dual design result Theorem \ref{SHI::theo::ofE} can be interpreted as the solution to the dual synthesis problem corresponding to Fig.~\ref{SHI::fig::of_homotop}, which is closely related to feedforward design. As for the primal design result Theorem \ref{SHI::theo::ofF}, it can also be viewed as a separation-like result since it involves the consecutive construction of a full-actuation controller and a corresponding feedforward-like controller.
For a given full-actuation gain $E = (E_1^T, E_2^T)^T$ the dual synthesis problem corresponding to Fig.~\ref{SHI::fig::of_homotop} amounts to finding a static controller $\t K$ such that the robust analysis LMIs \eqref{SHI::lem::lmi_stabes} are feasible for the interconnection of the controller $\t K$ and the uncertain open-loop system
\begin{equation}
\mat{c}{\dot x(t) \\ \hline e(t) \\ \t z(t) \\ \h y(t)}
= \mat{c|c:c:c}{A^E & B_1^E & B_2^E & B_3^E \\ \hline
C_1^E & D_{11}^E & D_{12}^E & D_{13}^E \\ \hdashline
C_2 & D_{21} & 0 & D_{23}^E \\ \hdashline
0 & 0 & I & 0}
\mat{c}{x(t) \\ \hline d(t) \\ \t w(t) \\ \h u(t)}
= \mat{c|c:c:cc}{A + E_1C_2 & B_1 + E_1D_{21} & -E_1 & B_2 & E_1 \\ \hline
C_1 + E_2C_2 & D_{11} + E_2D_{21} & -E_2 & D_{12} & E_2\\ \hdashline
C_2 & D_{21} & 0 & 0 & I \\ \hdashline
0 & 0 & I & 0 & 0}
\mat{c}{x(t) \\ \hline d(t) \\ \t w(t) \\ \h u(t)},\qquad
\t w(t) = \del \t z(t).
\label{SHI::eq::clEff}
\end{equation}
Here, $\del \in [0, 1]$ can be viewed, as before, as a homotopy parameter. We obtain the following convex solution which is, as for Theorem \ref{SHI::theo::ofF_par}, obtained without relying on the elimination lemma \ref{RS::lem::elimination}.
\begin{theorem}
\label{SHI::theo::ofE_par}
Suppose that $A^E = A + E_1C_2$ is stable. Then there exists a controller $\t K$ for the system \eqref{SHI::eq::clEff} such that the LMIs \eqref{SHI::lem::lmi_stabes} are feasible for the resulting closed-loop system if and only if there exist matrices $H$, $N = (N_1^T, N_2^T)^T$ and a symmetric matrix $Y$ satisfying
\begin{subequations}
\label{SHI::theo::eq::lmi_ofE_par}
\begin{equation}
H + H^T \cg 0
\teq{ and }
\Ls\left(\mat{cc}{0 & I \\ I & 0},
\mat{c:c}{P_\ga & 0 \\ \hdashline 0 & \Hb},
\mat{ccc}{\Ab & B_1^E & \Bb_2 \\ \hline
\Cb_1 & D_{11}^E & \Db_{12} \\ 0 & I & 0 \\ \hdashline
\Cb_2 & D_{21} & \Db_{22} \\ 0 & 0 & I} \right) \cl 0
\dlabel{SHI::theo::eq::lmi_ofE_para}{SHI::theo::eq::lmi_ofE_parb}
\end{equation}
\end{subequations}
where
\begin{equation*}
\Hb := \mat{cc}{0 & I \\ I & -H - H^T}
\teq{ and }
\mat{cc}{\Ab & \Bb_2 \\ \Cb_1 & \Db_{12} \\ \Cb_2 & \Db_{22}}
= \mat{cc}{A^E Y & B_2^EH^T + B_3^EN \\ C_1^E Y & D_{12}^EH^T + D_{13}^E N \\ C_2 Y & D_{23}^E N}.
\end{equation*}
If the above LMIs are feasible, a static controller $K$ such that the analysis LMIs \eqref{SHI::lem::lmi_stab} are satisfied for the closed-loop system \eqref{SHI::eq::cl_of} is given by
\begin{equation*}
K := N_1(H^T - N_2)^{-1}.
\end{equation*}
\end{theorem}
\begin{proof}
Observe at first that $H$ is nonsingular, since $H + H^T \cg 0$ implies $x^THx = \tfrac{1}{2}\,x^T(H + H^T)x > 0$ for all $x \neq 0$. Right and left multiplication of \eqref{SHI::theo::eq::lmi_ofE_parb} with $\diag(Y^{-1}, I, H^{-T})$ and its transpose then leads to
\begin{equation*}
\Ls\left(\mat{cc}{0 & X \\ X & 0},
\mat{c:c}{P_\ga & 0 \\ \hdashline 0 & P},
\mat{ccc}{A^E & B_1^E & \t \Bb_2\\ \hline
C_1^E & D_{11}^E & \t \Db_{12} \\ 0 & I & 0 \\ \hdashline
C_2 & D_{21} & \t \Db_{22} \\ 0 & 0 & I} \right) \cl 0
\end{equation*}
for $X := Y^{-1}$, $P := \smat{0 & H^{-T} \\ H^{-1} & -H^{-1} - H^{-T} } \in \Pb$ as well as $\t \Bb_2 := B_2^E + B_3^ENH^{-T}$, $\t \Db_{12} := D_{12}^E + D_{13}^E NH^{-T}$ and $\t \Db_{22} := D_{23}^E NH^{-T}$.
The remainder of the proof proceeds analogously to that of Theorem \ref{SHI::theo::ofF_par}.
\end{proof}
Let us finally state another interesting fact. Due to the elimination lemma, feasibility of the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} is equivalent to the existence of a static output-feedback controller $K$ and a \emph{common} certificate $X$ satisfying
\begin{equation}
\Ls\left(\mat{cc}{0 & X \\ X & 0},P_\ga , \mat{c}{G_{11}^F \\ I}_\ss \right) \cl 0
\teq{ and }
\Ls\left(\mat{cc}{0 & X \\ X & 0}, P_\ga , \mat{c}{\Gc \\ I}_\ss \right) \cl 0
\label{SHI::eq::lmiFcond}
\end{equation}
for $\Gc = [\Ac, \Bc, \Cc, \Dc]$ being the transfer matrix corresponding to \eqref{SHI::eq::cl_of}, the closed-loop interconnection of the system \eqref{SHI::eq::sys_of} and the controller $K$. Thus, in each primal step, the dual iteration aims to find a static controller $K$ which is linked to the given full-information controller $F$ through the common certificate $X$. This shows once more that the suggested initialization in Remark~\ref{SHI::rema::init} makes sense for the dual iteration as well.
Due to Theorem \ref{SHI::theo::estimation}, we also know that feasibility of the primal synthesis LMIs \eqref{SHI::theo::eq::lmi_ofF} is equivalent to the existence of a controller $\t K$ such that the robust analysis LMIs \eqref{SHI::lem::lmi_stabes} are satisfied for the closed-loop system \eqref{SHI::eq::sys_escl}. Let us provide some alternative arguments that the existence of such a controller $\t K$ is equivalent to feasibility of the LMIs \eqref{SHI::eq::lmiFcond}:
Let a suitable controller $\t K = (K_1, K_2)$ be given. Then note that the closed-loop system \eqref{SHI::eq::sys_escl} can also be expressed as
\begin{equation*}
\arraycolsep=4pt
\mat{c}{\dot x(t) \\ e(t)}
=\mat{cc}{A + B_2 (I - K_2 \del)^{-1}\big[(1-\del)F_1 + \del K_1 C_2 \big] &
B_1 + B_2(I - K_2 \del)^{-1} \big[(1-\del)F_2 + \del K_1 D_{21} \big] \\
C_1 + D_{12}(I - K_2 \del)^{-1} \big[(1-\del)F_1 + \del K_1 C_2 \big] &
D_{11} + D_{12}(I - K_2\del )^{-1} \big[(1-\del)F_2 + \del K_1 D_{21} \big]}
\mat{c}{x(t) \\ d(t)};
\end{equation*}
in the sequel we abbreviate the above system matrices as $A(\del)$, $B(\del)$, $C(\del)$ and $D(\del)$, respectively. Since the robust analysis LMIs \eqref{SHI::lem::lmi_stabes} are satisfied, we infer, in particular, that
\begin{equation}
\Ls\left(\mat{cc}{0 & X \\ X & 0},P_\ga , \mat{cc}{A(\del) & B(\del)\\ \hline C(\del) & D(\del) \\ 0 & I}\right) \cl 0
\teq{ for all }\del \in [0, 1].
\label{SHI::eq::lmiFcond2}
\end{equation}
This yields \eqref{SHI::eq::lmiFcond} for $K:= (I - K_2)^{-1}K_1$ by considering the special cases $\del = 0$ and $\del = 1$.
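To make the last step explicit, evaluating the system matrices at the endpoints gives
\begin{equation*}
A(0) = A + B_2F_1 = A^F
\teq{ and }
A(1) = A + B_2(I - K_2)^{-1}K_1C_2 = A + B_2KC_2,
\end{equation*}
and analogously for $B(\cdot)$, $C(\cdot)$ and $D(\cdot)$; hence, $\del = 0$ recovers the LMI for $G_{11}^F$ and $\del = 1$ the one for $\Gc$.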
Conversely, suppose that \eqref{SHI::eq::lmiFcond} holds for some static gain $K$. Then we can apply the Schur complement twice to infer
\begin{equation*}
\mat{ccc}{(A^F)^TX\! +\! XA^F & XB_1^F & (\bullet)^T \\
(\bullet)^T & - \ga^2 I & (\bullet)^T \\ C_1^F & D_{11}^F & - I} \cl 0
\text{ ~as well as~ }
\mat{ccc}{\Ac^TX \!+\! X\Ac & X\Bc & (\bullet)^T \\
(\bullet)^T & - \ga^2 I & (\bullet)^T \\ \Cc & \Dc & - I} \cl 0
\text{ ~and thus~ }
\mat{ccc}{A(\del)^TX \!+\! XA(\del) & XB(\del) & (\bullet)^T \\
(\bullet)^T & - \ga^2 I & (\bullet)^T \\ C(\del) & D(\del) & - I} \cl 0
\end{equation*}
for all $\del \in [0, 1]$ and for $\t K = (K, 0)$ by convexity. Applying the Schur complement once more yields again \eqref{SHI::eq::lmiFcond2}.
From the Full-Block S-procedure \cite{Sch97} we infer the existence of a symmetric matrix $\t P$ such that the LMIs \eqref{SHI::lem::lmi_stabes} and $\smat{I \\ \del I}^T \t P \smat{I \\ \del I} \cge 0$ hold for all $\del \in [0, 1]$. As argued in \cite{DetSch01} it is then possible to find some $P \in \Pb$ satisfying \eqref{SHI::lem::lmi_stabes} as well.
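For completeness, one can verify directly that every multiplier of the employed structure certifies the constraint on $\del$: for $P = \smat{0 & H^T \\ H & -H - H^T}$ with $H + H^T \cg 0$ we have
\begin{equation*}
\mat{c}{I \\ \del I}^T P \mat{c}{I \\ \del I}
= \del H + \del H^T - \del^2(H + H^T)
= \del(1 - \del)(H + H^T) \cge 0
\teq{ for all }\del \in [0, 1].
\end{equation*}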
\vspace{2ex}
We conjecture that it might even be possible to improve the dual iteration based on some of the above insights. So far we have tried the following ideas without much success.
\begin{remark}
\begin{enumerate}[(a)]
\item We treat the parameter $\del$ as an uncertainty. One could as well think of allowing the controller $\t K$ to be scheduled based on $\del$. However, for the set of multipliers $\Pb$ it turns out that a gain-scheduled controller design leads to exactly the same synthesis LMIs \eqref{SHI::theo::eq::lmi_es} as appearing for the robust design in Theorem \ref{SHI::theo::estimation}. This means there is no benefit in allowing the controller to measure $\del$ directly.
\item The analysis result Lemma \ref{SHI::lem::lmi_stabes} is in general rather conservative as we are using constant multipliers to describe the properties of the constant parameter $\del \in [0, 1]$. An analysis based on dynamic multipliers can lead to less conservative results. We used this strategy in the context of robust output-feedback design but were unable to achieve results that would justify the increased numerical complexity for several numerical examples.
\item One could as well think of employing parameter-dependent Lyapunov functions instead of using the IQC based Lemma \ref{SHI::lem::lmi_stabes}. However, for such functions we were unable to obtain conditions that are convex in all decision variables.
\end{enumerate}
\end{remark}
\subsection{Examples}\label{SHI::sec::exa}
In order to illustrate the dual iteration as described above, we consider the design of static output-feedback $H_\infty$-controllers for several examples from COMPl\textsubscript{e}ib \cite{Lei04}.
We compute the upper bounds $\ga^1$, $\ga^5$, $\ga^9$ resulting from the dual iteration as described in Algorithm \ref{SHI::algo::dual_iteration} and compare those bounds to the ones obtained by the \texttt{hinfstruct} algorithm from \cite{ApkNol06}. The latter is available in Matlab and we denote the resulting upper bounds as $\ga_{\mathrm{his}}$. Moreover, we consider a D-K iteration scheme that is based on considering the analysis LMIs \eqref{SHI::lem::lmi_stab} for the closed-loop system \eqref{SHI::eq::cl_of}. In particular, it relies on alternately performing the following two steps:
\begin{enumerate}[1.]
\item For a given controller \eqref{SHI::eq::con_of2}, solve the LMIs \eqref{SHI::lem::lmi_stab} for the closed-loop system \eqref{SHI::eq::cl_of} with decision variables $X$ and $\ga$.
\item For a given certificate $X$, solve the LMIs \eqref{SHI::lem::lmi_stab} for the closed-loop system \eqref{SHI::eq::cl_of} with decision variables $K$ and $\ga$.
\end{enumerate}
We denote the resulting upper bounds on $\ga_\opt$ as $\ga_\mathrm{dk}^k$.
We emphasize that this D-K iteration \emph{requires} an initialization with a stabilizing controller, as otherwise the first considered LMI is infeasible. To this end we utilize here the static controller as obtained from computing $\ga^1$.
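To make the two alternating steps concrete, the following minimal sketch implements the first one in Python with cvxpy; this is merely a hypothetical illustration, as the computations reported below were carried out with Matlab/LMIlab. The LMI is the Schur-complement form of \eqref{SHI::lem::lmi_stab} employed earlier, and the open-loop matrices as well as the current gain \texttt{K} are assumed to be given as numpy arrays.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def dk_X_step(A, B1, B2, C1, C2, D11, D12, D21, K, eps=1e-7):
    # Closed-loop matrices for the fixed static gain K.
    Acl = A + B2 @ K @ C2
    Bcl = B1 + B2 @ K @ D21
    Ccl = C1 + D12 @ K @ C2
    Dcl = D11 + D12 @ K @ D21
    n, m = Bcl.shape
    p = Ccl.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    g2 = cp.Variable()  # stands for gamma^2
    # Bounded real lemma in Schur form; affine in X and gamma^2.
    M = cp.bmat([[Acl.T @ X + X @ Acl, X @ Bcl, Ccl.T],
                 [Bcl.T @ X, -g2 * np.eye(m), Dcl.T],
                 [Ccl, Dcl, -np.eye(p)]])
    M = 0.5 * (M + M.T)  # enforce symmetry for the modeling layer
    prob = cp.Problem(cp.Minimize(g2),
                      [M << -eps * np.eye(n + m + p),
                       X >> eps * np.eye(n)])
    prob.solve()
    return X.value, float(np.sqrt(g2.value))
\end{verbatim}
The second step is identical except that $X$ is fixed and \texttt{K} becomes a decision variable; since the closed-loop matrices are affine in $K$, this again yields a semi-definite program.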
Finally, we compute $\ga_{\mathrm{dof}}$, the optimal energy gain achievable by dynamic output-feedback controllers, which yields a lower bound on the achievable gains for static output-feedback design. All computations are carried out with Matlab/LMIlab \cite{GahNem95} on a general purpose desktop computer (Intel Core i7, 4.0\,GHz, 8\,GB of RAM). Note that there are faster solvers for semi-definite programs available, such as Mosek \cite{Mos17} or SeDuMi \cite{Stu01}, but from our experience LMIlab is the most reliable one for LMI based controller design.
The numerical results are depicted in Table~\ref{SHI::tab::results_sof} and do not show dramatic differences between the dual iteration and \texttt{hinfstruct} in terms of computed upper bounds for several examples. The D-K iteration is outperformed by both algorithms even if we allow for many more iterations than used in the dual iteration. As in the original publication \cite{Iwa99}, we also observe that few iterations of the dual iteration are very often sufficient to obtain good upper bounds on $\ga_\opt$. Moreover, the lower bound $\ga_{\mathrm{dof}}$ is very close to $\ga^9$ for several examples, which implies that the upper bound $\ga^9$ is (almost) nonconservative in those situations.
Finally, note that all three algorithms can fail to provide a stabilizing solution, which is due to the difficulty of the underlying non-convex synthesis problem.
Regarding the computation time we observe that our implementation of the dual iteration again outperforms the D-K iteration. Moreover, it is faster than \texttt{hinfstruct} for systems with small McMillan degree $n$, but does not scale well for systems with larger degrees. This is illustrated in Table \ref{SHI::tab::results_time} on a few examples. Here, $T_{\ga_{\mathrm{dof}}}$, $T_{\ga^9}$, $T_{\ga_{\mathrm{dk}}^9}$ and $T_{\ga_{\mathrm{his}}}$ denote the average runtime in seconds over twenty runs required for the computation of $\ga_{\mathrm{dof}}$, $\ga^9$, $\ga_{\mathrm{dk}}^9$ and $\ga_{\mathrm{his}}$, respectively. Note that the initialization of the dual iteration is the most time-consuming part; the actual iteration is relatively fast in comparison.
The bad scaling is of course not surprising as the dual iteration is based on solving LMIs and thus inherits all related computational aspects. In contrast, \texttt{hinfstruct} relies on more specialized nonsmooth optimization techniques that avoid solving LMIs.
Nevertheless, we demonstrate in the next sections that the dual iteration is very useful since it allows us to deal, within a common framework, with various other interesting design scenarios where algorithms such as \texttt{hinfstruct} do not work.
\begin{table}
\vspace{1.5ex}
\caption{Optimal closed-loop $H_\infty$-norms resulting from dynamic output-feedback design as well as upper bounds obtained via the dual iteration for static output-feedback synthesis, a D-K iteration and \texttt{hinfstruct} for several examples from \cite{Lei04}.}
\label{SHI::tab::results_sof}
\begin{center}
\begin{minipage}{0.49\textwidth}
\centering
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{@{}l@{\hskip 1ex}r@{\hskip 2ex}rrr@{\hskip 2ex}rr@{\hskip 2ex}r@{}}
\toprule
Name & $\ga_\mathrm{dof}$ & $\ga^1$ & $\ga^5$ & $\ga^9$ & $\ga_\mathrm{dk}^9$ & $\ga_\mathrm{dk}^{21}$ & $\ga_\mathrm{his}$ \\ \hline
AC3 & 2.97 & 4.53 & 3.67 & 3.47 & 4.03 & 3.82 & 3.64 \\
AC4 & 0.56 & 1.74 & 1.05 & 0.97 & 1.19 & 1.18 & 0.96 \\
AC6 & 3.43 & 4.31 & 4.12 & 4.12 & 4.22 & 4.21 & 4.11 \\
AC8 & 1.62 & 2.03 & 2.01 & 2.01 & 2.03 & 2.03 & 2.01 \\
AC16 & 14.86 & 18.51 & 14.98 & 14.98 & 16.00 & 15.84 & 14.90 \\
AC18 & 5.39 & 14.08 & 10.72 & 10.71 & 11.95 & 11.56 & 10.70 \\
HE2 & 2.42 & 5.11 & 4.26 & 4.25 & 4.94 & 4.91 & 4.25 \\
HE4 & 22.84 & 31.18 & 24.90 & 23.13 & 30.56 & 29.89 & 23.60 \\
REA1 & 0.86 & 1.06 & 0.88 & 0.88 & 1.02 & 1.01 & 0.87 \\
DIS1 & 4.16 & 5.12 & 4.26 & 4.26 & 5.12 & 5.12 & 4.19 \\
\bottomrule
\end{tabular}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\centering
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{@{}l@{\hskip 1ex}r@{\hskip 2ex}rrr@{\hskip 2ex}rr@{\hskip 2ex}r@{}}
\toprule
Name & $\ga_\mathrm{dof}$ & $\ga^1$ & $\ga^5$ & $\ga^9$ & $\ga_\mathrm{dk}^9$ & $\ga_\mathrm{dk}^{21}$ & $\ga_\mathrm{his}$ \\ \hline
WEC2 & 3.60 & 6.15 & 5.03 & 4.34 & 5.94 & 5.94 & 4.25 \\
BDT1 & 0.27 & 0.29 & 0.27 & 0.27 & 0.29 & 0.29 & 0.27 \\
EB2 & 1.77 & 2.08 & 2.02 & 2.02 & 2.03 & 2.03 & 2.02 \\
EB4 & 1.80 & 2.24 & 2.06 & 2.06 & 2.07 & 2.07 & 2.06 \\
TF3 & 0.25 & 4.25 & 0.51 & 0.40 & 4.21 & 4.19 & - \\
NN1 & 13.14 & 143.92 & 14.23 & 14.23 & 14.79 & 14.62 & 13.80 \\
NN2 & 1.76 & 2.36 & 2.22 & 2.22 & 2.24 & 2.23 & 2.22 \\
NN11 & 0.03 & 0.51 & 0.10 & 0.09 & 0.28 & 0.27 & 0.09 \\
NN14 & 9.44 & 30.10 & 17.53 & 17.49 & 19.90 & 18.64 & 17.50 \\
NN17 & 2.64 & - & - & - & - & - & 11.22 \\
\bottomrule
\end{tabular}
\end{minipage}
\end{center}
\end{table}
\begin{table}
\vspace{1.5ex}
\caption{Average runtime within twenty runs in seconds for the computation of $\ga_{\mathrm{dof}}$, $\ga^9$, $\ga_{\mathrm{dk}}^9$ and $\ga_{\mathrm{his}}$, for a few examples from \cite{Lei04}.}
\label{SHI::tab::results_time}
\begin{center}
\centering
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{@{}l@{\hskip 3.5ex}rrrrr@{}}
\toprule
Name & HE2 & AC3 & HE4 & WEC2 & IH \\ \hline
$n$ & 4 & 5 & 8 & 10 & 21\\
$T_{\ga_\mathrm{dof}}$ & 0.02 & 0.04 & 0.35 & 1.54 & 41.91\\
$T_{\ga^9}$ & 0.07 & 0.10 & 0.58 & 2.03 & 57.51\\
$T_{\ga_\mathrm{dk}^9}$ & 0.10 & 0.15 & 0.97 & 3.66 & 100.26 \\
$T_{\ga_\mathrm{his}}$ & 0.10 & 0.16 & 0.35 & 0.23 & 2.65 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Static Output-Feedback Generalized $H_2$-Design}\label{SH2::sec::sh2}
In this subsection we briefly demonstrate that the dual iteration is also applicable to other interesting and challenging performance criteria that differ from $H_\infty$-performance. To this end we consider static generalized $H_2$-design in the case that only output measurements are available. This design is based on the following closed-loop analysis result with an algebraic characterization that is essentially taken from \cite{Rot93, SkeIwa97}.
\begin{lemma}
\label{SH2::lem::stab}
The closed-loop system \eqref{SHI::eq::cl_of} admits an energy-to-peak gain (or generalized $H_2$-norm) smaller than $\ga > 0$, i.e., $\Ac$ is stable and there exists an $\eps > 0$ such that
\begin{equation*}
\|e\|_{L_\infty}^2 := \sup_{t \geq 0}e(t)^Te(t) \leq (\ga^2 - \eps) \|d\|_{L_2}^2
\teq{ for all }d\in L_2
\teq{ and for }x(0)=0,
\end{equation*}
if and only if $\Dc= 0$ and there exists a symmetric matrix $X$ which satisfies
\begin{subequations}
\label{SH2::lem::lmi_stab}
\begin{equation}
\mat{cc}{\Ac^T X + X\Ac & X\Bc \\ \Bc^TX & -I} \cl 0
\teq{and}
\mat{cc}{X & \Cc^T \\ \Cc & \ga^2 I} \cg 0.
\dlabel{SH2::lem::lmi_staba}{SH2::lem::lmi_stabb}
\end{equation}
\end{subequations}
\end{lemma}
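As a hypothetical illustration of the above characterization (again in Python with cvxpy rather than the Matlab/LMIlab setup used for our experiments), the energy-to-peak gain of a stable system, here generically denoted by $(A, B, C)$ with zero direct feedthrough, can be computed by minimizing $\ga^2$ subject to \eqref{SH2::lem::lmi_stab}:
\begin{verbatim}
import cvxpy as cp
import numpy as np

def gen_h2_norm(A, B, C, eps=1e-7):
    # Energy-to-peak gain of a stable system (A, B, C); both LMIs
    # of the lemma are affine in X and gamma^2.
    n, m = B.shape
    p = C.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    g2 = cp.Variable()  # stands for gamma^2
    lmi1 = cp.bmat([[A.T @ X + X @ A, X @ B],
                    [B.T @ X, -np.eye(m)]])
    lmi2 = cp.bmat([[X, C.T],
                    [C, g2 * np.eye(p)]])
    cons = [0.5 * (lmi1 + lmi1.T) << -eps * np.eye(n + m),
            0.5 * (lmi2 + lmi2.T) >> eps * np.eye(n + p)]
    cp.Problem(cp.Minimize(g2), cons).solve()
    return float(np.sqrt(g2.value))
\end{verbatim}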
In order to simplify the exposition and to guarantee that $\Dc = D + D_{12}KD_{21} = 0$ holds, we proceed under the assumption that the system \eqref{SHI::eq::sys_of} satisfies
\begin{equation}
D = 0
\teq{ and }
D_{12} = 0.
\label{SH2::eq::assumptions}
\end{equation}
Under these admittedly restrictive assumptions, it is straightforward to modify the dual iteration as given in Algorithm \ref{SHI::algo::dual_iteration} and the underlying synthesis results for static output-feedback generalized $H_2$-design. For convenience, we state the corresponding primal and dual design results:
\begin{theorem}
\label{SH2::theo::ofF}
Let $V$ be a basis matrix of $\ker(C_2, D_{21})$.
Then there exists a static controller $K$ for the system \eqref{SHI::eq::sys_of} such that the closed-loop analysis LMIs \eqref{SH2::lem::lmi_stab} are satisfied if there exists a symmetric matrix $X$ satisfying
\begin{equation*}
\arraycolsep=3pt
V^T\mat{cc}{A^TX + XA & XB \\ B^TX & -I} V \cl 0,\quad
\mat{cc}{A_F^TX + XA_F & XB_F \\ B_F^TX & -I} \cl 0
\teq{ and }
\mat{cc}{X & C^T \\ C & \ga^2 I}\cg 0.
\end{equation*}
\end{theorem}
\begin{theorem}
\label{SH2::theo::ofE}
Let $U$ be a basis matrix of $\ker(B_2^T)$. Then there exists a controller $K$ for the system \eqref{SHI::eq::sys_of} such that the closed-loop analysis LMIs \eqref{SH2::lem::lmi_stab} are satisfied if there exists a symmetric matrix $Y$ satisfying
\begin{equation*}
\arraycolsep=3pt
\mat{cc}{A_EY + YA_E^T & B_E \\ B_E^T & -I} \cl 0, \qquad
U^T(AY + YA^T + BB^T)U \cl 0
\teq{ and }
\mat{cc}{Y & YC^T \\ CY & \ga^2 I} \cg 0.
\end{equation*}
\end{theorem}
We consider again several examples from COMPl\textsubscript{e}ib \cite{Lei04} and compare the dual iteration in terms of computed upper bounds on the optimal closed-loop energy-to-peak gain to a D-K iteration scheme. The examples are modified to satisfy assumption \eqref{SH2::eq::assumptions}.
The considered D-K iteration is briefly described as follows. As initialization we choose $K = 0$ and $X$ as the certificate obtained through minimizing the trace of $Y + X$ subject to the dynamic full-order generalized $H_2$-synthesis LMIs. Then the algorithm tries to find a stabilizing static gain such that \eqref{SH2::lem::lmi_staba} and $X \cg 0$ are satisfied for the resulting closed-loop system. This is achieved by minimizing $t$ subject to $\smat{(\bullet)^T + X(A + B_2KC_2) & X(B + B_2KD_{21}) \\ (\bullet)^T & -I} \cl t I$ and $X \cg 0$ while alternately fixing $X$ and $K$. After finding such a stabilizing gain, we minimize $\ga$ subject to the analysis LMIs \eqref{SH2::lem::lmi_stab} while alternately fixing $X$ and $K$ in order to improve the closed-loop performance.
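A minimal cvxpy sketch of the half-step with fixed certificate $X$ in this stabilization phase, under the same hypothetical setup as before:
\begin{verbatim}
import cvxpy as cp
import numpy as np

def stabilizing_K_step(A, B, B2, C2, D21, X):
    # Half-step with fixed X: the gain K and the level t are free;
    # t < 0 certifies that the LMI (SH2::lem::lmi_staba) holds.
    nu, ny = B2.shape[1], C2.shape[0]
    n, m = A.shape[0], B.shape[1]
    K = cp.Variable((nu, ny))
    t = cp.Variable()
    Acl = A + B2 @ K @ C2  # affine in K
    Bcl = B + B2 @ K @ D21
    M = cp.bmat([[Acl.T @ X + X @ Acl, X @ Bcl],
                 [Bcl.T @ X, -np.eye(m)]])
    M = 0.5 * (M + M.T)
    cp.Problem(cp.Minimize(t), [M << t * np.eye(n + m)]).solve()
    return K.value, float(t.value)
\end{verbatim}
The half-step with fixed $K$ is obtained analogously with $X$ as the decision variable and the additional constraint $X \cg 0$.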
In the sequel, we denote the upper bounds resulting from solving nine and twenty-one semi-definite programs in the last step as $\ga_{\mathrm{dk}}^{9}$ and $\ga_{\mathrm{dk}}^{21}$. Moreover, we compute the lower bound $\ga_\mathrm{dof}$ and the upper bounds $\ga^1$, $\ga^3$, $\ga^9$ resulting from the dual iteration for generalized $H_2$-design. We emphasize that in total far fewer semi-definite programs are solved for the dual iteration since we do not count those that the D-K iteration requires to find a stabilizing gain.
The numerical results are depicted in Table~\ref{SH2::tab::results_sof}. These examples illustrate that the dual iteration again yields superior upper bounds compared to a D-K iteration scheme while requiring fewer iterations and thus being computationally less demanding. Finally, observe that the gap between $\ga_\mathrm{dof}$ and $\ga^9$ is small for several examples, which implies that the upper bound $\ga^9$ is very close to the optimal gain $\ga_\opt$ as well.
\begin{table}
\vspace{1.5ex}
\caption{Optimal lower and upper bounds on the closed-loop energy-to-peak gain resulting from (full-order) dynamic output-feedback design, the dual iteration for static output-feedback synthesis and a D-K iteration for several examples from \cite{Lei04}.}
\label{SH2::tab::results_sof}
\begin{center}
\begin{minipage}{0.49\textwidth}
\centering
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{@{}l@{\hskip 3.5ex}r@{\hskip 3.5ex}rrr@{\hskip 3.5ex}rr@{}}
\toprule
Name & $\ga_\mathrm{dof}$ & $\ga^1$ & $\ga^3$ & $\ga^9$ & $\ga_\mathrm{dk}^9$ & $\ga_\mathrm{dk}^{21}$\\ \hline
AC3 & 1.21 & 1.51 & 1.40 & 1.39 & 1.77 & 1.65 \\
AC4 & 1.56 & 3.01 & 2.38 & 2.35 & 9.34 & 8.64 \\
AC6 & 1.91 & 2.05 & 1.99 & 1.98 & 2.10 & 2.09 \\
AC11 & 1.56 & 2.13 & 1.93 & 1.84 & 1.98 & 1.98 \\
HE1 & 0.08 & 0.16 & 0.10 & 0.09 & 0.13 & 0.12 \\
HE2 & 1.65 & 2.34 & 2.16 & 2.16 & 3.45 & 3.09 \\
HE5 & 0.82 & 2.86 & 1.69 & 1.20 & 1.51 & 1.47 \\
REA1 & 0.72 & 0.76 & 0.75 & 0.75 & 0.82 & 0.79 \\
REA2 & 0.90 & 0.93 & 0.91 & 0.91 & 0.92 & 0.91 \\
\bottomrule
\end{tabular}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\centering
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{@{}l@{\hskip 3.5ex}r@{\hskip 3.5ex}rrr@{\hskip 3.5ex}rr@{}}
\toprule
Name & $\ga_\mathrm{dof}$ & $\ga^1$ & $\ga^3$ & $\ga^9$ & $\ga_\mathrm{dk}^9$ & $\ga_\mathrm{dk}^{21}$\\ \hline
DIS2 & 0.29 & 0.32 & 0.30 & 0.29 & 0.31 & 0.31 \\
AGS & 4.45 & 4.75 & 4.67 & 4.67 & 4.72 & 4.72 \\
WEC3 & 3.64 & 7.93 & 5.13 & 4.84 & 15.92 & 18.86 \\
MFP & 1.26 & 6.24 & 4.86 & 3.98 & 6.63 & 6.52 \\
EB1 & 1.57 & 2.82 & 1.65 & 1.65 & 1.70 & 1.70 \\
EB3 & 1.17 & 1.21 & 1.17 & 1.17 & 1.17 & 1.17 \\
NN2 & 1.00 & 1.19 & 1.19 & 1.19 & 1.27 & 1.27 \\
NN4 & 1.10 & 1.28 & 1.26 & 1.26 & 1.30 & 1.29 \\
NN14 & 20.90 & 52.76 & 36.03 & 23.19 & 43.65 & 36.73 \\
\bottomrule
\end{tabular}
\end{minipage}
\end{center}
\end{table}
\section{Static Output-Feedback $H_\infty$-Design with Generalized Stability Regions}\label{SGS::sec::sgs}
In this section we consider the design of static output-feedback $H_\infty$-controllers guaranteeing that the closed-loop eigenvalues are located in a prescribed generalized stability region defined through some symmetric matrix $P$. This problem is as challenging as the ones discussed before due to the non-convexity of static controller design. Additionally, it turns out that it is even a multi-objective design problem, which introduces another source of non-convexity. It is also particularly interesting in the context of the dual iteration because it is no longer possible to apply the elimination lemma \ref{RS::lem::elimination}. Thus we rely on the interpretation and findings in Section \ref{SHI::sec::interpretation} in order to provide a suitable variant of the dual iteration.
\subsection{Analysis}
We consider again a system of the form
\begin{equation}
\mat{c}{\dot x(t) \\ e(t)}
= \mat{cc}{A & B \\ C & D}\mat{c}{x(t) \\ d(t)}
\label{SGS::eq::sys}
\end{equation}
for $t \geq 0$, some initial condition $x(0) \in \R^n$, some generalized disturbance $d \in L_2$ and real matrices of appropriate dimensions. The considered generalized stability regions are defined as follows.
\begin{definition}
\label{SGS::def::region}
For a real symmetric matrix $P$, the complex set
$L_P := \left\{s\in \C ~:~ \smat{I \\ sI}^\ast P \smat{I \\ sI} \cl 0 \right\}$
is called LMI region.
\end{definition}
\begin{figure}
\vspace{1ex}
\begin{center}
\begin{minipage}{0.32\textwidth}
\begin{center}
\begin{tikzpicture}
\draw[->] (-2.5, 0) -- (0.5, 0);
\draw[->] (0, -1.5) -- (0, 1.5);
\filldraw[rounded corners = false, fill=blue!30!white, fill opacity = 0.7, draw=none]
(0, -1.25) -- (0, 1.25) arc (90:270:1.25) -- cycle;
\node[draw = none, below] at (-0.5, 0){$r$} ;
\end{tikzpicture}
\end{center}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\begin{center}
\begin{tikzpicture}
\draw[->] (-2.5, 0) -- (0.5, 0);
\draw[->] (0, -1.5) -- (0, 1.5);
\filldraw[fill=blue!30!white, fill opacity = 0.7, draw=none]
(0, 0) -- (-2.5, 1.5) -- (-2.5, -1.5) -- cycle;
\draw[thin] (0, 0.5) arc (90:150:0.5) node[above]{$\phi$};
\end{tikzpicture}
\end{center}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\begin{center}
\begin{tikzpicture}
\draw[->] (-2.5, 0) -- (0.5, 0);
\draw[->] (0, -1.5) -- (0, 1.5);
\filldraw[fill=blue!30!white, fill opacity = 0.7, draw=none]
(0, 0) -- (-1.5, 1.5) arc (135:225:2.1213) -- cycle;
\draw[thin] (0, 0.5) arc (90:135:0.5) node[above]{$\phi$};
\node[draw = none, below] at (-1.1, 0){$r$} ;
\end{tikzpicture}
\end{center}
\end{minipage}
\end{center}
\caption{Three LMI regions $L_P$ defined through the real symmetric matrices $P$ as given in \eqref{SGS::eq::exas}, respectively.}
\label{SGS::fig::exas}
\end{figure}
For $P = \smat{0 & 1 \\ 1 & 0}$ the region $L_P$ equals $\C^-$, the open left half-plane, which allows us to recover all findings from Section~\ref{SHI::sec::shi}. Other interesting regions are, for example, specified through a radius $r$, an angle $\phi$ as well as the matrices
\begin{equation}
\label{SGS::eq::exas}
P = \mat{cc:cc}{-r^2 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\ \hdashline
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0},\quad
P = \mat{cc:cc}{0 & 0 & \sin(\phi) & \cos(\phi) \\
0 & 0 & - \cos(\phi) & \sin(\phi) \\ \hdashline
\sin(\phi) & -\cos(\phi) & 0 & 0 \\
\cos(\phi) & \sin(\phi) & 0 & 0}
\teq{ and }
P = \mat{ccc:ccc}{-r^2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \sin(\phi) & \cos(\phi) & 0\\
0 & 0 & 0 & - \cos(\phi) & \sin(\phi) & 0\\ \hdashline
0 & 0 & 0 & 1 & 0 & 0 \\
0 & \sin(\phi) & -\cos(\phi) & 0 & 0 & 0 \\
0 & \cos(\phi) & \sin(\phi) & 0 & 0 & 0}
\end{equation}
and constitute a half-disc, a sector and a circular sector as depicted in Fig.~\ref{SGS::fig::exas}.
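For instance, for the first matrix $P$ in \eqref{SGS::eq::exas}, partitioned as $P = \smat{Q & S \\ S^T & R}$ into $2 \times 2$ blocks, a direct computation confirms the half-disc interpretation:
\begin{equation*}
\mat{c}{I \\ sI}^\ast P \mat{c}{I \\ sI}
= Q + sS + \bar{s}S^T + |s|^2R
= \mat{cc}{|s|^2 - r^2 & 0 \\ 0 & s + \bar{s}} \cl 0
\quad\iff\quad
|s| < r \teq{ and } \mathrm{Re}(s) < 0.
\end{equation*}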
In this section we restrict our attention to LMI regions defined by symmetric matrices $P = \smat{Q & S \\ S^T & R}$ satisfying
\begin{equation}
\label{SGS::eq::assumptions}
L_P \subset \C^-
\teq{ and }
R \cge 0.
\end{equation}
The first assumption is required in order to guarantee that the closed-loop $H_\infty$-norm of the considered systems is finite and thus well-defined, while the second one ensures that the considered LMI regions are convex. The latter is a standard requirement which ensures that the upcoming LMI criteria are convex in the decision variables. Allowing for non-convex LMI regions would introduce another source of non-convexity in the design that we do not aim to tackle in this paper.
The following analysis criteria are a combination of the bounded real lemma~\ref{SHI::lem::stab} and of a result from \cite{ChiGah96} that is a generalization of the standard characterization of stability involving the Kronecker product as elaborated on, e.g., in Chapter 4.2 of \cite{HorJoh91}.
\begin{lemma}
\label{SGS::lem::stab}
Let $P_\ga := \smat{I & 0 \\ 0 & -\ga^2 I}$. Then $\eig(A) \subset L_P$ and the system \eqref{SGS::eq::sys} admits an energy gain smaller than $\ga$ if and only if there exist symmetric matrices $X_s$, $X_p$ satisfying
\begin{subequations}
\label{SGS::lem::lmi_stab}
\begin{equation}
X_s \cg 0,\quad
\mat{c}{I \\ A \kron I}^T\mat{cc}{X_s \kron Q & X_s \kron S \\ X_s \kron S^T & X_s \kron R}\mat{c}{I \\ A \kron I} \cl 0,
\label{SGS::lem::lmi_staba}
\end{equation}
\begin{equation}
X_p \cg 0
\teq{and}
\Ls\left(\mat{cc}{0 & X_p \\ X_p & 0}, P_\ga, \mat{cc}{A & B \\ \hline C & D \\ 0 & I}\right) \cl 0.
\label{SGS::lem::lmi_stabb}
\end{equation}
\end{subequations}
\end{lemma}
Here, $\eig(A) \subset L_P$ is algebraically characterized by \eqref{SGS::lem::lmi_staba} while a bound on the energy gain is characterized by \eqref{SGS::lem::lmi_stabb}. Clearly, these are two different objectives and, unfortunately, it is in general restrictive to enforce $X_s = X_p$, i.e., to use a common certificate for (generalized) stability and performance.
Nevertheless, considering only common certificates is a simple approach to obtain convex synthesis criteria for several multi-objective design problems which is frequently applied in the literature. In the sequel we will follow this approach for didactic reasons, but will also comment on how to directly employ the dual iteration for a static design based on Lemma \ref{SGS::lem::stab}.
By using a common Lyapunov certificate, the LMI conditions \eqref{SGS::lem::lmi_stab} can also be more compactly expressed as follows.
\begin{lemma}
\label{SGS::lem::stab2}
Let $Q_e := \smat{0 & 0 \\ 0 & Q}$, $S_e := \smat{1 & 0 \\ 0 & S}$, $R_e := \smat{0 & 0 \\ 0 & R}$ and $e_1 = \smat{1 \\ 0_{\bullet \times 1}}$. Then $\eig(A) \subset L_P$ and the system \eqref{SGS::eq::sys} admits an energy gain smaller than $\ga$ if there exists a symmetric matrix $X$ satisfying
\begin{equation}
\label{SGS::lem::lmi_stab2}
X \cg 0
\teq{ and }
\Ls\left(
\mat{cc}{X \kron Q_e & X \kron S_e \\
X \kron S_e^T & X \kron R_e }, P_\ga, \mat{cc}{A \kron I & B \kron I \\ \hline
C \kron e_1^T & D \kron e_1^T \\ 0 & I \kron e_1^T}\right) \cl 0.
\end{equation}
\end{lemma}
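Note that, for a given matrix $A$, membership $\eig(A) \subset L_P$ can of course also be checked directly via Definition \ref{SGS::def::region} without solving any LMIs. A minimal numpy sketch, assuming the blocks $Q$, $S$, $R$ of $P$ are given as arrays:
\begin{verbatim}
import numpy as np

def eig_in_region(A, Q, S, R, tol=1e-9):
    # Checks eig(A) in L_P via the definition: the Hermitian matrix
    # Q + s*S + conj(s)*S^T + |s|^2*R must be negative definite
    # for every eigenvalue s of A.
    for s in np.linalg.eigvals(A):
        V = Q + s * S + np.conj(s) * S.T + np.abs(s) ** 2 * R
        V = 0.5 * (V + V.conj().T)  # numerical symmetrization
        if np.linalg.eigvalsh(V).max() >= -tol:
            return False
    return True
\end{verbatim}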
\subsection{Synthesis}\label{SGS::sec::synth}
\subsubsection{Problem Description}
For fixed real matrices of appropriate dimensions and initial conditions $x(0) \in \R^n$, we now consider the system
\begin{equation}
\arraycolsep=1pt
\mat{c}{\dot x(t) \\ e(t) \\ y(t)}
= \mat{ccc}{A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & 0}
\mat{c}{x(t) \\ d(t) \\ u(t)}
\label{SGS::eq::sys_sof}
\end{equation}
for $t\geq 0$ and for an LMI region $L_P$ satisfying \eqref{SGS::eq::assumptions}.
Our ultimate goal in this section is the synthesis of a static output-feedback controller with description
\begin{equation}
u(t)
= K y(t)
\label{SGS::eq::con_sof}
\end{equation}
for the system \eqref{SGS::eq::sys_sof} such that all closed-loop eigenvalues are located in the LMI region $L_P$ and such that the closed-loop energy gain is as small as possible. Our design is based on the analysis criteria in Lemma \ref{SGS::lem::stab2} and, hence, we are also interested in
\begin{equation*}
\ga_\opt := \inf \{\ga > 0~|~ \text{There is a controller $K$ for \eqref{SGS::eq::sys_sof} s.th. the LMIs \eqref{SGS::lem::lmi_stab2} are feasible for the resulting closed-loop system} \}.
\end{equation*}
Note that in contrast to the previous section $\ga_\opt$ is \emph{not} the optimal energy gain achievable by static controllers \eqref{SGS::eq::con_sof} that ensure closed-loop poles in $L_P$. This is due to the conservatism in Lemma \ref{SGS::lem::stab2}.
The resulting synthesis problem is still non-convex and, thus, our goal cannot be achieved directly; in particular, a computation of $\ga_\opt$ is not easily possible. Next we employ the dual iteration for the design of suitable static controllers and to provide easily computable upper bounds on $\ga_\opt$.
\subsubsection{Dual Iteration: Initialization}
For the initialization of the dual iteration we propose again the synthesis of a dynamic (full-order) controller as this design is convex and leads to a nice lower bound on $\ga_\opt$. The following result is obtained via a convexifying parameter transformation \cite{MasOha98} in a straightforward fashion.
\begin{theorem}
\label{SGS::theo::dof}
There exists a dynamic (full-order) controller for the system \eqref{SGS::eq::sys_sof} such that the analysis LMIs \eqref{SGS::lem::lmi_stab2} are satisfied for the corresponding closed-loop system if and only if there exist matrices $K, L, M, N$ and symmetric matrices $X, Y$ satisfying
\begin{subequations}
\label{SGS::theo::eq::lmi_dof}
\begin{equation}
\Xb \cg 0
\teq{ and }
\Ls\left(
\mat{cc}{\Xb \kron Q_e & I \kron S_e \\
I \kron S_e^T & \Xb^{-1} \kron R_e }, P_\ga, \mat{cc}{\Ab \kron I & \Bb \kron I \\ \hline
\Cb \kron e_1^T & \Db \kron e_1^T \\ 0 & I \kron e_1^T}\right) \cl 0
\dlabel{SGS::theo::lmi_dofa}{SGS::theo::lmi_dofb}
\end{equation}
\end{subequations}
where $\Xb := \smat{Y & I \\ I & X}$ and
\begin{equation*}
\mat{cc}{\Ab & \Bb \\ \Cb & \Db}
:= \mat{cc|c}{AY + B_2 M & A + B_2NC_2 & B_1 + B_2ND_{21} \\
K & XA + LC_2 & XB_1 + L D_{21} \\ \hline
C_1Y + D_{12}M & C_1 + D_{12}NC_2 & D_{11} + D_{12}ND_{21}}
=\mat{cc|c}{AY & A & B_1\\ 0 & XA & XB_1 \\ \hline C_1Y & C_1 & D_{11}}
+ \mat{cc}{0 & B_2 \\ I & 0 \\ \hline 0 & D_{12}}
\mat{cc}{K & L \\ M & N} \mat{cc|c}{I & 0 & 0 \\ 0 & C_2 & D_{21}}.
\end{equation*}
If the above LMIs are feasible, a suitable dynamic controller is obtained by choosing nonsingular $U$, $V$ satisfying $I = XY + UV^T$ and
\begin{equation*}
\mat{cc}{A^c & B^c \\ C^c & D^c} :=
\mat{cc}{U & XB_2 \\ 0 & I}^{-1} \mat{cc}{K - XAY & L \\ M & N} \mat{cc}{V^T & 0 \\ C_2 Y & I}^{-1}.
\end{equation*}
In particular, we have
\begin{equation*}
\ga_\mathrm{dof} \leq \ga_\opt
\end{equation*}
for $\ga_{\mathrm{dof}}$ being the infimal $\ga$ such that the above LMIs are feasible.
\end{theorem}
Note that the LMI \eqref{SGS::theo::lmi_dofb} is not convex in the decision variables. However, since we assumed $R \cge 0$ in \eqref{SGS::eq::assumptions}, this is routinely circumvented by applying the Schur complement. Furthermore, observe that the variables $K$, $L$, $M$, $N$ cannot be eliminated via Lemma \ref{RS::lem::elimination} due to the structure induced by the appearing Kronecker products. The same holds for all of the remaining design results in this section.
As for the previous design problems, once $\ga_{\mathrm{dof}}$ is obtained it is advisable to resolve the LMIs \eqref{SGS::theo::eq::lmi_dof} for fixed $\ga = (1+\eps)\ga_{\mathrm{dof}}$ and some $\eps > 0$ while minimizing the trace of $Y + X$ in order to push $X$ towards $Y^{-1}$ and to promote feasibility of the LMIs that are required to be solved next. As before these LMIs involve a static full-information gain $F = (F_1, F_2)$ which can be constructed as follows.
\begin{lemma}
\label{SGS::lem::full_info}
There exists a full-information gain $F = (F_1, F_2)$ such that the analysis LMIs \eqref{SGS::lem::lmi_stab2} are feasible for the system
\begin{equation*}
\mat{c}{\dot x(t) \\ e(t)}
= \mat{cc}{A^F & B^F_1 \\ C^F_1 & D^F_{11}}\mat{c}{x(t) \\ d(t)}
=\mat{cc}{A + B_2 F_1 & B_1 + B_2F_2 \\
C_1 + D_{12}F_1 & D_{11} + D_{12}F_2}
\mat{c}{ x(t) \\ d(t)}
\end{equation*}
if and only if there exist matrices $M$, $N$ and a symmetric matrix $Y$ satisfying
\begin{equation*}
Y \cg 0
\teq{ and }
\Ls\left(\mat{cc}{Y \kron Q_e & I \kron S_e \\
I \kron S_e^T & Y^{-1} \kron R_e}, P_\ga, \mat{cc}{\Ab \kron I & \Bb \kron I \\ \hline
\Cb \kron e_1^T & \Db \kron e_1^T \\ 0 & I\kron e_1^T}\right) \cl 0
\end{equation*}
where
\begin{equation*}
\arraycolsep=4pt
\mat{cc}{\Ab & \Bb \\ \Cb & \Db}
:= \mat{cc}{AY + B_2 M & B_1 + B_2N \\
C_1Y + D_{12}M & D_{11} + D_{12}N}.
\end{equation*}
If the above LMIs are feasible, a suitable gain is obtained by $F := (MY^{-1}, N)$.
\end{lemma}
\subsubsection{Dual Iteration}
Suppose now that we have designed a full-information gain $F = (F_1, F_2)$ by Lemma \ref{SGS::lem::full_info}. We can then state the primal design result of the dual iteration corresponding to the analysis criteria in Lemma \ref{SGS::lem::stab2}. The result, which is motivated by the interpretation in Section \ref{SHI::sec::interpretation}, yields convex LMI conditions that are sufficient for static output-feedback design. Its proof is almost identical to the one given for Theorem \ref{SHI::theo::ofF_par} and thus omitted for brevity.
\begin{theorem}
\label{SGS::theo::ofF}
Suppose that $A + B_2F_1$ is stable and let $(C_3^F, D_{31}^F, D_{32}^F) = \smat{C_2 & D_{21} & 0 \\ F_1 & F_2 & I}$. Then there exists a static controller \eqref{SGS::eq::con_sof} for the system \eqref{SGS::eq::sys_sof} such that the analysis LMIs \eqref{SGS::lem::lmi_stab2} are satisfied for the corresponding closed-loop system if there exist matrices $H$, $N = (N_1, N_2)$ and a symmetric matrix $X$ satisfying
\begin{equation}
\label{SGS::theo::eq::lmi_ofF}
H + H^T \cg 0
\teq{ and }
\Ls\left(\mat{cc}{X \kron Q_e & I \kron S_e \\ I \kron S_e^T & X^{-1} \kron R_e},
\mat{c:c}{P_\ga & 0 \\ \hdashline 0 & \Hb},
\mat{ccc}{\Ab \kron I & \Bb_1 \kron I & \Bb_2 \kron I\\ \hline
C_1^F \kron e_1^T & D_{11}^F \kron e_1^T & D_{12} \kron e_1^T \\ 0 & I \kron e_1^T& 0 \\ \hdashline
\Cb_2 \kron I & \Db_{21} \kron I & \Db_{22} \kron I \\ 0 & 0 & I} \right) \cl 0
\end{equation}
where
\begin{equation*}
\arraycolsep=4pt
\Hb := \mat{cc}{0 & I \\ I & (-H - H^T) \kron I}
\teq{ and }
\mat{ccc}{\Ab & \Bb_1 & \Bb_2 \\ \Cb_2 & \Db_{21} & \Db_{22}}
:= \mat{ccc}{XA^F & XB_1^F & XB_2 \\ -HF_1 + NC_3^F & -HF_2 + ND_{31}^F & ND_{32}^F}.
\end{equation*}
If the above LMIs are feasible, a suitable static controller \eqref{SGS::eq::con_sof} is obtained by
\begin{equation*}
K := (H - N_2)^{-1}N_1.
\end{equation*}
Moreover, we have
\begin{equation*}
\ga_\mathrm{dof} \leq \ga_\opt \leq \ga_F
\end{equation*}
for $\ga_F$ being the infimal $\ga > 0$ such that the above LMIs are feasible.
\end{theorem}
Note that, in contrast to Theorem \ref{SHI::theo::ofF_par}, we are required here to restrict the structure of $\Hb$ to match the Kronecker structure of the remaining terms in order to reconstruct the static controller. In other words, and by recalling Section \ref{SHI::sec::interpretation}, this means that we capture the homotopy parameter $\del \in [0, 1]$ with static multipliers contained in the set
\begin{equation*}
\left\{\mat{cc}{ 0 & H^T \kron I \\ H \kron I & (-H-H^T)\kron I}~\middle|~H+H^T\cg 0 \right\}
\teq{ instead of more general ones in }
\left\{\mat{cc}{ 0 & H^T \\ H & -H-H^T}~\middle|~H+H^T\cg 0 \right\}.
\end{equation*}
This is related to the use of a common Lyapunov function in multi-objective control and introduces conservatism. We will discuss the consequences after providing the remaining design results involved in the dual iteration.
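Note, however, that the structured multipliers still certify the constraint on $\del$, since the same computation as before yields
\begin{equation*}
\mat{c}{I \\ \del I}^T \mat{cc}{0 & H^T \kron I \\ H \kron I & (-H - H^T) \kron I} \mat{c}{I \\ \del I}
= \del(1 - \del)\,(H + H^T) \kron I \cge 0
\teq{ for all }\del \in [0, 1].
\end{equation*}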
As in the previous section and in order to formulate the dual iteration based on the analysis LMIs \eqref{SGS::lem::lmi_stab2}, we also need the dual versions of Lemma \ref{SGS::lem::full_info} and Theorem \ref{SGS::theo::ofF}. These can be stated as follows.
\begin{lemma}
\label{SGS::lem::full_actu}
There exists a full-actuation gain $E = (E_1^T, E_2^T)^T$ such that the analysis LMIs \eqref{SGS::lem::lmi_stab2} are feasible for the system
\begin{equation*}
\mat{c}{\dot x(t) \\ e(t)}
= \mat{cc}{A^E & B_1^E \\ C_1^E & D_{11}^E}\mat{c}{x(t) \\ d(t)}
= \mat{cc}{A + E_1 C_2 & B_1 + E_1D_{21} \\
C_1 + E_2C_2 & D_{11} + E_2D_{21} }
\mat{c}{x(t) \\ d(t)}
\end{equation*}
if and only if there exist matrices $L$, $N$ and a symmetric matrix $X$ satisfying
\begin{equation*}
X \cg 0
\teq{ and }
\Ls \left(\mat{cc}{X \kron Q_e & I \kron S_e \\
I \kron S_e^T & X^{-1} \kron R_e}, P_\ga, \mat{cc}{\Ab \kron I & \Bb \kron I \\ \hline
\Cb \kron e_1^T & \Db \kron e_1^T \\ 0 & I \kron e_1^T}\right) \cl 0
\end{equation*}
where
\begin{equation*}
\arraycolsep=4pt
\mat{cc}{\Ab & \Bb \\ \Cb & \Db}
:= \mat{cc}{XA + LC_2 & XB_1 + LD_{21} \\
C_1 + NC_2 & D_{11} + ND_{21}}.
\end{equation*}
If the above LMIs are feasible, a suitable gain is obtained by $E := ((X^{-1}L)^T, N^T)^T$.
\end{lemma}
\begin{theorem}
\label{SGS::theo::ofE}
Suppose that $A + E_1C_2$ is stable and let $\smat{B_3^E \\ D_{13}^E} = \smat{B_2 & E_1 \\ D_{12} & E_2}$ as well as $D_{23}^E = (0, I)$. Then there exists a static controller \eqref{SGS::eq::con_sof} for the system \eqref{SGS::eq::sys_sof} such that the analysis LMIs \eqref{SGS::lem::lmi_stab2} are feasible for the corresponding closed-loop system if there exist matrices $H$, $N = (N_1^T, N_2^T)^T$ and a symmetric matrix $Y$ satisfying
\begin{equation}
\label{SGS::theo::eq::lmi_ofE}
H + H^T \cg 0
\teq{ and }
\Ls\left(\mat{cc}{Y \kron Q_e & I \kron S_e \\ I \kron S_e^T & Y^{-1} \kron R_e},
\mat{c:c}{P_\ga & 0 \\ \hdashline 0 & \Hb},
\mat{ccc}{\Ab \kron I & B_1^E \kron I & \Bb_2 \kron I\\ \hline
\Cb_1 \kron e_1^T & D_{11}^E \kron e_1^T & \Db_{12} \kron e_1^T \\ 0 & I \kron e_1^T& 0 \\ \hdashline
\Cb_2 \kron I & D_{21} \kron I & \Db_{22} \kron I \\ 0 & 0 & I} \right) \cl 0
\end{equation}
where
\begin{equation*}
\arraycolsep=4pt
\Hb := \mat{cc}{0 & I \\ I & (-H - H^T) \kron I}
\teq{ and }
\mat{cc}{\Ab & \Bb_2 \\ \Cb_1 & \Db_{12} \\ \Cb_2 & \Db_{22}}
:= \mat{cc}{A^E Y & -E_1H^T + B_3^EN \\ C_1^E Y & -E_2H^T + D_{13}^E N \\ C_2 Y & D_{23}^E N}.
\end{equation*}
If the above LMIs are feasible, a suitable static controller \eqref{SGS::eq::con_sof} is obtained by
\begin{equation*}
K := N_1(H^T - N_2)^{-1}.
\end{equation*}
Moreover, we have
\begin{equation*}
\ga_\mathrm{dof} \leq \ga_\opt \leq \ga_E
\end{equation*}
for $\ga_E$ being the infimal $\ga > 0$ such that the above LMIs are feasible.
\end{theorem}
As in the previous sections it is rather immediate that feasibility of the primal synthesis LMIs \eqref{SGS::theo::eq::lmi_ofF} in Theorem \ref{SGS::theo::ofF} implies feasibility of the ones appearing in Lemma \ref{SGS::lem::full_actu} for exactly the same certificate $X$.
This can be seen either by manipulating the LMIs or simply by observing that $E = \smat{B_2 \\ D_{12}}K$ is a suitable full-actuation controller. Analogously the same statement is true for the dual synthesis LMIs and certificates in Theorem \ref{SGS::theo::ofE} and Lemma \ref{SGS::lem::full_info}.
However, in contrast to the previous sections and due to the structural restrictions on $\Hb$, the Theorems \ref{SGS::theo::ofF} and \ref{SGS::theo::ofE} are no longer as nicely intertwined as in Theorem \ref{SHI::theo::it_summary} for static $H_\infty$-design.
As a consequence, the upper bounds $\gamma^k$ provided by the dual iteration are no longer guaranteed to be monotonically non-increasing. Moreover, the dual iteration can even get stuck as the dual synthesis LMIs \eqref{SGS::theo::eq::lmi_ofE} might be infeasible even if the primal LMIs \eqref{SGS::theo::eq::lmi_ofF} are feasible and vice versa. Nevertheless, we show in the next subsection that this variant of the dual iteration can be successfully applied to a variety of numerical examples.
The dual iteration for static output-feedback $H_\infty$-design with generalized stability regions is explicitly stated as follows.
\begin{Algorithm}
\label{SGS::algo::dual_iteration}
Dual iteration for static output-feedback $H_\infty$-design with generalized stability regions.
\begin{enumerate}
\item \emph{Initialization:} Compute the lower bound $\ga_\mathrm{dof}$ based on solving the dynamic full-order synthesis LMIs \eqref{SGS::theo::eq::lmi_dof} and set $k = 1$.
Design an initial full-information gain $F$ from Lemma \ref{SGS::lem::full_info}.
\item \emph{Primal step:} Compute $\ga_F$ based on solving the primal synthesis LMIs \eqref{SGS::theo::eq::lmi_ofF} and set $\ga^k := \ga_F$.
Design a corresponding close-to-optimal full-actuation gain $E$ from Lemma \ref{SGS::lem::full_actu}.
\item \emph{Dual step:} Compute $\ga_E$ based on solving the dual synthesis LMIs \eqref{SGS::theo::eq::lmi_ofE} and set $\ga^{k+1} := \ga_E$. Design a corresponding close-to-optimal full-information gain $F$ from Lemma \ref{SGS::lem::full_info}.
\item \emph{Termination:} If $k$ is too large or $\min_{j = 1, \dots, k}\ga^j$ does not decrease any more, then stop and construct a close-to-optimal static output-feedback controller \eqref{SGS::eq::con_sof} for the system \eqref{SGS::eq::sys_sof} according to Theorem \ref{SGS::theo::ofE}. \\
Otherwise set $k = k+2$ and go to the primal step.
\end{enumerate}
\end{Algorithm}
\begin{remark}
\label{SGS::rema::original_gains}
In our numerical experiments we generate the gains $E$ and $F$ more concretely as follows. Suppose that the primal synthesis LMIs \eqref{SGS::theo::eq::lmi_ofF} are feasible for the certificate $X$ and $\ga := (1+\eps) \ga_F$ for some $\eps > 0$. Then consider the problem of minimizing $\alpha$ subject to $\|(L, N)\| \leq \alpha$ and the full-actuation design LMIs with free matrices $L, N$ and fixed $X$. This problem is always feasible and with any solution $L, N$ a suitable full-actuation gain is given by $E= ((X^{-1}L)^T, N^T)^T$. This ensures that the gain $E$ is not too large and thus numerically easier to handle in the subsequent steps. The full-information gain $F$ is constructed analogously.
\end{remark}
For the dual iteration, there is actually a lot of freedom in the choice of the full-information and full-actuation gains, which can be crucial. For the iteration in the last section, we designed them in a rather natural fashion which resulted in controllers with quite acceptable performance. For the above variant of the iteration, the choice in Remark \ref{SGS::rema::original_gains} works well on several numerical examples, but the values $\ga^k$ can oscillate a lot. Unfortunately, the best choice of the gains is not obvious at the outset. In the following remark we state some alternative choices resulting in variants of Algorithm \ref{SGS::algo::dual_iteration}, which we compare in the next subsection on some numerical examples.
\begin{remark}
\label{SGS::rema::variants}
Let $K^k$ denote the controller corresponding to $\ga^k$ as obtained by Theorem \ref{SGS::theo::ofF} or \ref{SGS::theo::ofE} for any $k$:
\begin{enumerate}
\item[V1:] In the primal step choose $E$ as $\smat{B_2 \\ D_{12}}K^k$ and in the dual step choose $F$ as $K^{k+1}(C_2, D_{21})$.
\end{enumerate}
With the choice $E = \smat{B_2 \\ D_{12}}K^k$, the dual step is very likely to result in a value $\ga^{k+1}$ which is close to $\ga^k$ (at least not much larger). Note that for the dual iterations in the previous sections this choice would result in $\ga^{k+1}= \ga^{k}$, which means that there is no progress at all. However, since no analogue of Theorem \ref{SHI::theo::it_summary} is available for the situation considered in this section, some progress is possible.
It seems that one can intuitively view the choice between $E = \smat{B_2 \\ D_{12}}K^k$ and $E$ from Remark \ref{SGS::rema::original_gains} as a trade-off between almost monotonicity (i.e., $\ga^{k+1}$ being close to $\ga^k$) and exploration of the set of admissible controllers. This motivates the second variant of Algorithm \ref{SGS::algo::dual_iteration}.
\begin{enumerate}
\item[V2:] In the primal step choose $E$ as $\smat{B_2 \\ D_{12}}K^k$.
\end{enumerate}
Hence, only the primal step is used for exploration, while the dual step is intended to promote monotonicity.
The next two variants aim to switch more systematically between exploration and promoting monotonicity.
\begin{enumerate}
\item[V3:] In the primal step and after computing $\ga_F$: If $\ga_F > \min_{j=1,\dots, k-1}\ga^j$, replace $F$ by $\frac{1}{2}F + \frac{1}{2}K^{k-2}(C_2, D_{21})$, recompute $\ga_F$ based on solving the primal synthesis LMIs \eqref{SGS::theo::eq::lmi_ofF} and set $\ga^k := \ga_F$.
\item[V4:] In the primal step and after computing $\ga_F$: If $\ga_F > \min_{j=1,\dots, k-1}\ga^j$, replace $F$ by $\frac{1}{2}F + \frac{1}{2}K^{j_\ast}(C_2, D_{21})$, recompute $\ga_F$ based on solving the primal synthesis LMIs \eqref{SGS::theo::eq::lmi_ofF} and set $\ga^k := \ga_F$. Here, $j_\ast \in \argmin_{j=1,\dots, k-1} \ga^j$.
\end{enumerate}
\end{remark}
\begin{remark}
\label{SGS::rema::MO}
One can as well employ the dual iteration for a design directly based on the multi-objective analysis result Lemma~\ref{SGS::lem::stab} instead of the more conservative one in Lemma \ref{SGS::lem::stab2} that uses a common Lyapunov function. This is essentially achieved by considering a dual iteration for each of the objectives separately. In the present case this leads to two full-information gains $F^s$, $F^p$, two full-actuation gains $E^s$, $E^p$, two Lyapunov certificates $X_s$, $X_p$ and so on (one for generalized stability and one for $H_\infty$-performance). The key observation is that we can allow those variables and gains to differ for the design of a static multi-objective controller as long as the variables $H$ and $N$ are identical in all objectives. This reasoning is very similar to the S-variable approach for multi-objective control \cite{EbiPea15, OliBer99}. We concretely state the corresponding analogue of Theorem \ref{SGS::theo::ofF} below for clarification.
In this fashion the dual iteration also extends to various interesting multi-objective problems.
\end{remark}
\begin{theorem}
\label{SGS::theo::ofFmo}
Let $F^s = (F_1^s, F_2^s)$ and $F^p = (F_1^p, F_2^p)$ be suitable full-information gains and let
\begin{equation*}
\arraycolsep=1pt
A^{F^s} := A + B_2 F^s_1,\quad
\mat{cc}{C_3^{F^s} & D_{32}^{F^s}} := \mat{cc}{C_2 & 0 \\ F_1^s & I}, \quad
\mat{cc}{A^{F^p} & B^{F^p}_1 \\ C^{F^p}_1 & D^{F^p}_{11}}
:=\mat{cc}{A & B_1 \\ C_1 & D_{11}} + \mat{c}{B_2 \\ D_{12}}F^p,\quad
\mat{ccc}{C_3^{F^p}& D_{31}^{F^p}& D_{32}^{F^p}}:= \mat{ccc}{C_2 & D_{21} & 0 \\ F_1^p & F_2^p & I}.
\end{equation*}
Further, suppose that $A^{F^s}$ and $A^{F^p}$ are stable. Then there exists a static controller \eqref{SGS::eq::con_sof} for the system \eqref{SGS::eq::sys_sof} such that the analysis LMIs \eqref{SGS::lem::lmi_stab} are satisfied for the corresponding closed-loop system if there exist matrices $H$, $N = (N_1, N_2)$ and symmetric matrices $X_s$, $X_p$ satisfying
\begin{equation*}
\arraycolsep=1pt
H + H^T \cg 0, ~~
\Ls\left(\mat{cc}{X_s \kron Q & I \kron S \\ I \kron S^T & X_s^{-1} \kron R},
\Hb^s,
\mat{cc}{\Ab^s \kron I & \Bb_2^s \kron I\\ \hline
\Cb_2^s \kron I & \Db_{22}^s \kron I \\ 0 & I} \right) \cl 0
\text{ ~~and~~ }
\Ls\left(\mat{cc}{0 & I \\ I & 0},
\mat{c:c}{P_\ga & 0 \\ \hdashline 0 & \Hb^p},
\mat{ccc}{\Ab^p & \Bb_1^p & \Bb_2^p \\ \hline
C_1^{F^p} & D_{11}^{F^p} & D_{12} \\ 0 & I& 0 \\ \hdashline
\Cb_2^p & \Db_{21}^p & \Db_{22}^p \\ 0 & 0 & I} \right) \cl 0
\end{equation*}
where $\Hb^s := \smat{0 & I \\ I & (-H-H^T) \kron I}$, $\Hb^p := \smat{0 & I \\ I & -H-H^T}$,
\begin{equation*}
\arraycolsep=2pt
\mat{cc}{\Ab^s & \Bb_2^s \\ \Cb_2^s & \Db_{22}^s}
:= \mat{cc}{X_sA^{F^s} & X_sB_2 \\ -HF_1^s + NC_3^{F^s} & ND_{32}^{F^s}}
\teq{ and }
\mat{ccc}{\Ab^p & \Bb_1^p & \Bb_2^p \\ \Cb_2^p & \Db_{21}^p & \Db_{22}^p}
:= \mat{ccc}{X_pA^{F^p} & X_pB_1^{F^p} & X_pB_2 \\ -HF_1^p + NC_3^{F^p} & -HF_2^p + ND_{31}^{F^p} & ND_{32}^{F^p}}.
\end{equation*}
If the above LMIs are feasible, a suitable static controller \eqref{SGS::eq::con_sof} is obtained by $K := (H - N_2)^{-1}N_1$.
\end{theorem}
\subsection{Examples}
We consider again several examples from COMPl\textsubscript{e}ib \cite{Lei04} and compare the dual iteration as given in Algorithm \ref{SGS::algo::dual_iteration} to its variation as described in Remark \ref{SGS::rema::MO}.
To this end, we compute the lower bound $\ga_\mathrm{dof}$ and the upper bounds $\ga^k$ on $\ga_\opt$ resulting from Algorithm~\ref{SGS::algo::dual_iteration} as well as $\ga_\mathrm{mo}^k$ for $k=1,\dots,9$. Here, $\ga_\mathrm{mo}^k$ denote the values corresponding to $\ga^k$ for the variation in Remark \ref{SGS::rema::MO}. To obtain all those values, we initialize both iterations with the same full-information gain $F = F^s = F^p$ which is designed as described after Theorem \ref{SGS::theo::dof}. Since neither $\ga^k$ nor $\ga_\mathrm{mo}^k$ is guaranteed to be monotone as discussed earlier, we compare both algorithms in terms of the values
\begin{equation*}
\del^k := \min_{j=1,\dots, k}\ga^j
\teq{ and }
\del_\mathrm{mo}^k := \min_{j=1,\dots, k}\ga_\mathrm{mo}^j.
\end{equation*}
The numerical results are depicted in Table~\ref{SGS::tab::results_sof} for an LMI region defined by
\begin{equation}
P = \mat{cc}{Q & S \\ S^T & R}
\teq{ with } Q = \mat{cc}{-r^2 & 0 \\ 0 & 0},\quad
S = \mat{cc}{0 & 0 \\ 0 & 1},\quad
R = \mat{cc}{1 & 0 \\ 0 & 0 }
\teq{ and }r = 2.
\label{SGS::eq::exa_region}
\end{equation}
This means that the considered LMI region is an open half-disc in the complex plane with center zero and radius $r = 2$.
Note at first that the gap between $\ga_\mathrm{dof}$ and $\del^9$ is small for several examples, which implies that the upper bound $\del^9$ is also very close to the optimal value $\ga_\opt$ resulting from a closed-loop analysis with Lemma \ref{SGS::lem::stab}.
Moreover, these examples demonstrate that the additional freedom provided through the use of different Lyapunov certificates as discussed in Remark \ref{SGS::rema::MO} can indeed be beneficial as $\del_\mathrm{mo}^k$ is strictly smaller than $\del^k$ for various examples. The price to pay is, of course, the larger computation time resulting from the additional degrees of freedom.
Note that it is even possible for $\del_\mathrm{mo}^k$ to drop below $\ga_{\mathrm{dof}}$, as for the example \verb|DIS3|, since the two are based on different analysis criteria and the ones in Lemma \ref{SGS::lem::stab} are in general conservative. Finally, note that for the complex open left half-plane, the LMI region defined by $P = \smat{0 & 1 \\ 1 & 0}$, we essentially recover the bounds as obtained in Section \ref{SHI::sec::exa} with slight deviations due to (accumulated) numerical inaccuracies.
\vspace{1ex}
Let us now compare the original dual iteration as described in Algorithm \ref{SGS::algo::dual_iteration} with its variants in Remark \ref{SGS::rema::variants} in more detail on a few examples. Fig.~\ref{SGS::fig::it_ga} illustrates the corresponding computed upper bounds $\ga^k$ for $k = 1, \dots, 29$ for each approach and for the examples AC3 (left), DIS3 (center) and NN2 (right), respectively. Let us first consider the example AC3, which illustrates some of the statements in Remark \ref{SGS::rema::variants}. Indeed, we observe that the bounds corresponding to the original Algorithm \ref{SGS::algo::dual_iteration} oscillate a lot and that the variant V1 does not progress after the second iterate. The latter is also true for all other tested numerical examples. For this example the second variant V2 seems to work best as the corresponding upper bounds decrease nicely without any oscillations. However, both of the examples DIS3 and NN2 indicate that the decrease can also be too slow. The variants V3 and V4 are, interestingly, almost identical and offer a compromise between the original Algorithm \ref{SGS::algo::dual_iteration} and variant V2. For both of them the corresponding upper bounds decrease very fast, while monotonicity is also enforced if oscillations occur. This is visible for the example DIS3 after index $k = 15$ and throughout for AC3.
\begin{table}
\vspace{1.5ex}
\caption{Comparison of the dual iteration as described in Algorithm \ref{SGS::algo::dual_iteration} and its variation in Remark \ref{SGS::rema::MO} in terms of the resulting optimal lower and upper bounds for the LMI region specified by \eqref{SGS::eq::exa_region} and for several examples from \cite{Lei04}.}
\label{SGS::tab::results_sof}
\begin{center}
\begin{minipage}{0.49\textwidth}
\centering
\setlength{\tabcolsep}{3pt}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{@{}l@{\hskip 2.5ex}r@{\hskip 2.5ex}rrr@{\hskip 2.5ex}rrr@{}}
\toprule
Name & $\ga_\mathrm{dof}$ & $\del^1$ & $\del^5$ & $\del^9$ & $\del_\mathrm{mo}^1$ & $\del_\mathrm{mo}^{5}$ & $\del_\mathrm{mo}^9$\\ \hline
AC2 & 0.11 & 0.15 & 0.11 & 0.11 & 0.14 & 0.12 & 0.11\\
AC3 & 4.31 & 6.12 & 4.94 & 4.94 & 5.01 & 4.95 & 4.64\\
AC15 & 15.34 & 18.75 & 15.79 & 15.66 & 18.67 & 15.67 & 15.48 \\
AC16 & 15.31 & 18.84 & 15.39 & 15.34 & 18.84 & 15.14 & 15.05\\
HE2 & 2.99 & 31.89 & 4.87 & 4.65 & 6.24 & 4.24 & 4.25\\
HE3 & 0.80 & 1.34 & 0.89 & 0.88 & 1.34 & 0.95 & 0.88\\
HE4 & 22.84 & 45.65 & 33.41 & 32.45 & 37.59 & 35.32 & 35.32 \\
DIS1 & 4.23 & 5.15 & 4.27 & 4.27 & 5.15 & 4.39 & 4.38\\
\bottomrule
\end{tabular}
\end{minipage}
\hspace{1ex}
\begin{minipage}{0.49\textwidth}
\centering
\setlength{\tabcolsep}{3pt}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{@{}l@{\hskip 2.5ex}r@{\hskip 2.5ex}rrr@{\hskip 2.5ex}rrr@{}}
\toprule
Name & $\ga_\mathrm{dof}$ & $\del^1$ & $\del^5$ & $\del^9$ & $\del_\mathrm{mo}^1$ & $\del_\mathrm{mo}^{5}$ & $\del_\mathrm{mo}^9$\\ \hline
DIS3 & 2.56 & 5.92 & 4.29 & 3.65 & 2.37 & 1.91 & 1.91 \\
DIS4 & 1.59 & 2.13 & 1.91 & 1.84 & 2.13 & 1.67 & 1.67\\
BDT1 & 0.27 & 0.29 & 0.27 & 0.27 & 0.29 & 0.27 & 0.27\\
MFP& 4.78 & 72.04 & 36.81 & 35.64 & 68.11 & 34.51 & 33.33\\
NN2 & 1.87 & 2.39 & 2.23 & 2.22 & 2.39 & 2.23 & 2.22\\
NN4 & 1.51 & 3.39 & 2.65 & 2.65 & 2.92 & 1.81 & 1.81\\
NN8 & 2.73 & 3.78 & 3.13 & 3.08 & 3.78 & 3.11 & 3.08\\
NN16 & 1.12 & 11.42 & 2.47 & 1.48 & 3.34 & 1.09 & 1.04\\
\bottomrule
\end{tabular}
\end{minipage}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\begin{minipage}{0.31\textwidth}
\begin{center}
\includegraphics[width=\textwidth, trim= 55 140 50 27, clip]{AC3_it_ga}
\end{center}
\end{minipage}
\hspace{2ex}
\begin{minipage}{0.31\textwidth}
\begin{center}
\includegraphics[width=\textwidth, trim= 55 140 50 27, clip]{DIS3_it_ga}
\end{center}
\end{minipage}
\hspace{2ex}
\begin{minipage}{0.31\textwidth}
\begin{center}
\includegraphics[width=\textwidth, trim= 49 140 50 27, clip]{NN2_it_ga}
\end{center}
\end{minipage}
\end{center}
\vspace{-3ex}
\caption{Upper bounds $\ga^k$ for $k = 1, \dots, 29$ for the original Algorithm \ref{SGS::algo::dual_iteration} and its four variations in Remark \ref{SGS::rema::variants} for the examples AC3 (left), DIS3 (center) and NN2 (right), respectively.}
\label{SGS::fig::it_ga}
\end{figure}
\section{Robust Output-Feedback $H_\infty$-Design}\label{RS::sec::rs}
This section deals with robust dynamic output-feedback controller synthesis, which is closely related to static output-feedback design as discussed earlier in terms of sources of non-convexity. Due to the importance of the underlying design problem, we nevertheless provide the details for the corresponding dual iteration. Thereby, we focus on a performance criterion in terms of the energy gain as in Section~\ref{SHI::sec::shi}, as this enables the application of the elimination lemma \ref{RS::lem::elimination} throughout. In particular, we demonstrate that both the static and the robust design are dealt with within a common synthesis framework based on linear fractional representations. We briefly demonstrate later on that this framework even encompasses the highly interesting design of robust gain-scheduling controllers.
\subsection{Analysis}\label{RS::sec::ana}
For some real matrices of appropriate dimensions and initial conditions $x(0) \in \R^n$, we consider the feedback interconnection
\begin{equation}
\arraycolsep=3pt
\mat{c}{\dot x(t) \\ \hline z(t) \\ e(t)} = \mat{c|cc}{A & B_1 & B_2 \\ \hline C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22}} \mat{c}{x(t) \\ \hline w(t) \\ d(t)}, \quad
w(t) = \Del(t) z(t),
\label{RS::eq::sys}
\end{equation}
for $t\geq 0$; here, $d\in L_2$ is a generalized disturbance, $e$ is the performance output desired to be small, $w, z$ are interconnection variables and $\Del$ is a time-varying uncertainty contained in
\begin{equation*}
\Delf(\Vb) := \{ \Del: [0, \infty) \to \Vb~|~ \Del \text{ is piecewise continuous} \}
\end{equation*}
for some known compact value set $\Vb \subset \R^{q\times p}$. In particular, we do not make any assumptions on the rate of variation of the uncertainty $\Del$. The description \eqref{RS::eq::sys} is called linear fractional representation (LFR) as closing the loop involving the signals $z$ and $w$ leads to a linear parameter-varying system of the form
\begin{equation*}
\mat{c}{\dot x(t) \\ e(t)} = \left(\mat{cc}{A & B_2 \\ C_2 & D_{22}} + \mat{c}{B_1 \\ D_{21}}\Del(t)(I - D_{11}\Del(t))^{-1}\mat{cc}{C_1 & D_{12}}\right)\mat{c}{x(t) \\ d(t)}
\end{equation*}
where $\Del$ enters in a rational fashion.
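For illustration, the system matrices of this linear parameter-varying system for a frozen value of $\Del$ can be computed directly from the LFR data. The following minimal Python/numpy sketch, assuming all matrices are given as arrays of compatible dimensions, implements exactly the formula above:
\begin{verbatim}
import numpy as np

def close_uncertainty_channel(A, B1, B2, C1, C2, D11, D12, D21, D22, Delta):
    # Closes w = Delta z in the LFR; requires well-posedness, i.e.,
    # I - D11 @ Delta must be nonsingular for the given value Delta.
    p = D11.shape[0]                               # dimension of z
    M = Delta @ np.linalg.solve(np.eye(p) - D11 @ Delta,
                                np.hstack([C1, D12]))
    cl = np.block([[A, B2], [C2, D22]]) + np.vstack([B1, D21]) @ M
    n = A.shape[0]
    return cl[:n, :n], cl[:n, n:], cl[n:, :n], cl[n:, n:]
\end{verbatim}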
Well-posedness, robust stability and the robust energy gain of the system \eqref{RS::eq::sys} are defined in a standard fashion as follows.
\begin{definition}
\label{RS::def::stab}
\begin{itemize}
\item The system \eqref{RS::eq::sys} is said to be well-posed if $\det(I - D_{11}\Del) \neq 0$ for all $\Del \in \Vb$.
\item It is said to be robustly stable (against $\Delf(\Vb)$) if it is well-posed and there exist constants $M, \la > 0$ such that
\begin{equation*}
\|x(t)\| \leq Me^{-\la t} \|x(0)\| \teq{ for all } t \geq 0,
\end{equation*}
for all $\Del\in \Delf(\Vb)$ and all initial conditions $x(0) \in \R^n$ and for $d = 0$.
\item It is said to admit a robust energy gain smaller than $\ga>0$ (against $\Delf(\Vb)$) if it is robustly stable and there exists an $\eps > 0$ such that
\begin{equation*}
\|e\|_{L_2}^2 \leq (\ga^2 - \eps) \|d\|_{L_2}^2
\teq{ for all }d \in L_2,
\teq{ all } \Del \in \Delf(\Vb)
\teq{ and for }x(0) = 0.
\end{equation*}
Its robust energy gain is the infimal $\ga > 0$ such that the above inequality is satisfied.
\end{itemize}
\end{definition}
\vspace{1ex}
As we are dealing with arbitrarily time-varying uncertainties, we work with the following analysis result from \cite{Sch01} that is based on the full block S-procedure. It can also be viewed as a special case of the IQC result in \cite{MegRan97} with a static multiplier.
\begin{lemma}
\label{RS::lem::stab0}
Let $P_\ga := \smat{I & 0 \\ 0 & -\ga^2 I}$ and let
$\smat{G_{11} & G_{12} \\ G_{21} & G_{22}}(s)
:= \smat{D_{11} & D_{12} \\ D_{21} & D_{22}}
+ \smat{C_1 \\ C_2}(sI - A)^{-1}\smat{B_1 & B_2}
$
be the transfer matrix corresponding to \eqref{RS::eq::sys}. Then the system \eqref{RS::eq::sys} admits a robust energy gain smaller than $\ga > 0$ if there exist symmetric matrices $X$ and $P$ which satisfy
\begin{subequations}
\label{RS::lem::lmi_stab0}
\begin{equation}
X \cg 0, \qquad
\Ls\left(\mat{cc}{0 & X \\ X & 0}, \mat{c|c}{P & 0 \\ \hline 0 & P_\ga}, \mat{cc}{G_{11} & G_{12} \\ I & 0 \\ \hline G_{21} & G_{22} \\ 0 & I}_\ss \right) \cl 0
\teq{and}
\mat{c}{I \\ \Del}^T P \mat{c}{I \\ \Del} \cge 0
\teq{ for all }\Del \in \Vb.
\tlabel{RS::lem::lmi_stab0_def}{RS::lem::lmi_stab0_sys}{RS::lem::lmi_stab0_unc}
\end{equation}
\end{subequations}
\end{lemma}
\vspace{1ex}
Here, the matrix $P$ is usually referred to as a multiplier. Note that \eqref{RS::lem::lmi_stab0_unc} consists of infinitely many LMIs and is thus not numerically tractable. In order to circumvent this issue, one restricts the search for the multiplier $P$ to suitable sets of multipliers $\Pb(\Vb)$ for which \eqref{RS::lem::lmi_stab0_unc} is automatically satisfied. To be precise, $\Pb(\Vb)$ should be a set of symmetric nonsingular matrices with an LMI representation such that
\begin{subequations}
\label{RS::eq::multiplier_set}
\begin{equation}
\mat{c}{I \\ \Del}^T P \mat{c}{I \\ \Del} \cge 0
\text{ ~~holds for all~~ }\Del \in \Vb
\text{ ~~and for all~~ }P \in \Pb(\Vb).
\label{RS::eq::multiplier_seta}
\end{equation}
To simplify the exposition, we restrict our attention to multiplier sets that additionally satisfy
\begin{equation}
\mat{c}{0 \\ I}^T P \mat{c}{0 \\ I} \cle 0
\teq{ for all }P \in \Pb(\Vb).
\end{equation}
\end{subequations}
In order to avoid unnecessary conservatism caused by the restriction to such a multiplier set $\Pb(\Vb)$, these sets should always be chosen as large as possible and hence describe $\Vb$ as well as possible in terms of quadratic inequalities. Note that the set $\Pb(\Vb)$ having an LMI representation means that there exist affine matrix-valued functions $\Psi$ and $\Phi$ such that
\begin{equation*}
\Pb(\Vb) = \{\Psi(\nu)~|~ \nu \in \R^\bullet \text{ ~~and~~ } \Phi(\nu) \cg 0 \}.
\end{equation*}
As an example suppose that $\Vb$ is more concretely described as $\Vb = \mathrm{co}\{\Del_1, \dots, \Del_N\}$, the convex hull of some generators $\Del_1, \dots, \Del_N$. Then it is elementary to see that
\begin{equation}
\label{RS::eq::multiplier_set_for_ch}
\Pb(\Vb) := \left\{P=P^T ~\middle| ~\mat{c}{0 \\ I}^TP \mat{c}{0\\ I}\cl 0 \text{ ~~and~~ }\mat{c}{I \\ \Del_i}^TP\mat{c}{I \\ \Del_i}\cg 0 \text{ ~~for all~~ }i = 1,\dots, N \right\}
\end{equation}
is a set of multipliers with LMI representation which indeed satisfies \eqref{RS::eq::multiplier_set}. Moreover, note that any $P \in \Pb(\Vb)$ is nonsingular as a consequence of the minimax theorem of Courant and Fischer as given, e.g., in \cite{HorJoh90}. As another example let us suppose that $\Vb := \{ vI~:~ v \in [a, b] \}$ for some $a < b$. Then it is possible to employ the above set of multipliers for $\Vb$ as well or the commonly used alternative
\begin{equation}
\label{RS::eq::multiplier_set_for_int}
\Pb(\Vb) := \left\{ \mat{cc}{bI & -I \\ -aI & I}^T \mat{cc}{0 & H^T \\ H & 0}\mat{cc}{bI & -I \\ -aI & I}~\middle| ~ H + H^T \cg 0 \right\}
\end{equation}
which is closely related to the set of so-called D-G scalings. Note that for $[a,b] = [0, 1]$ this is exactly the set of multipliers as appearing in Section \ref{SHI::sec::interpretation}.
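For concreteness, the convex-hull multiplier set \eqref{RS::eq::multiplier_set_for_ch} is easily set up with standard SDP tools. The following minimal Python sketch, assuming the package \texttt{cvxpy} is available and emulating strictness of the inequalities with a small margin \texttt{eps}, returns a multiplier variable together with the corresponding LMI constraints:
\begin{verbatim}
import cvxpy as cp
import numpy as np

def ch_multiplier(generators, eps=1e-6):
    # Multiplier variable and LMI constraints for Vb = co{Delta_1,...,Delta_N};
    # strict inequalities are emulated with the small margin eps.
    q, p = generators[0].shape
    P = cp.Variable((p + q, p + q), symmetric=True)
    E2 = np.vstack([np.zeros((p, q)), np.eye(q)])      # columns of (0; I)
    constraints = [E2.T @ P @ E2 << -eps * np.eye(q)]
    for D in generators:
        E1 = np.vstack([np.eye(p), D])                 # columns of (I; Delta_i)
        constraints.append(E1.T @ P @ E1 >> eps * np.eye(p))
    return P, constraints
\end{verbatim}
Fixing a suitable set of multipliers leads to the following robust analysis result.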
\begin{lemma}
\label{RS::lem::stab}
Let $G_{ij}$ be the transfer matrices corresponding to \eqref{RS::eq::sys}. Then the system \eqref{RS::eq::sys} admits a robust energy gain smaller than $\ga > 0$ if there exist symmetric matrices $X$ and $P \in \Pb(\Vb)$ which satisfy
\begin{equation}
\label{RS::lem::lmi_stab}
X \cg 0 \teq{ and }
\Ls\left(\mat{cc}{0 & X \\ X & 0}, \mat{c|c}{P & 0 \\ \hline 0 & P_\ga}, \mat{cc}{G_{11} & G_{12} \\ I & 0 \\ \hline G_{21} & G_{22} \\ 0 & I}_\ss \right) \cl 0.
\end{equation}
\end{lemma}
\vspace{1ex}
In the sequel we will also need the dual multiplier set corresponding to $\Pb(\Vb)$. The latter is defined as
\begin{equation*}
\t \Pb(\Vb) := \{\t P~|~ \t P^{-1} \in \Pb(\Vb) \}
\teq{ if it has an LMI representation.}
\end{equation*}
Note that the set $\{\t P~|~ \t P^{-1} \in \Pb(\Vb) \}$ does not admit an LMI representation for every choice of $\Pb(\Vb)$, but in most practical situations it does. For the previous two examples of common multiplier sets the corresponding dual multiplier sets are explicitly given as
\begin{equation*}
\t \Pb(\Vb) := \left\{\t P=\t P^T ~\middle| ~\mat{c}{I \\ 0}^T\t P \mat{c}{I\\ 0}\cg 0 \text{ ~~and~~ }\mat{c}{-\Del_i^T \\ I}^T\t P\mat{c}{-\Del_i^T \\ I}\cl 0 \text{ ~~for all~~ }i = 1,\dots, N \right\}
\end{equation*}
and
\begin{equation*}
\t \Pb(\Vb) := \left\{ \frac{1}{(b-a)^2}\mat{cc}{I & I \\ aI & bI} \mat{cc}{0 & H \\ H^T & 0}\mat{cc}{I & I \\ aI & bI}^T~\middle| ~ H + H^T \cg 0 \right\},
\end{equation*}
respectively. Further, note that we have by \eqref{RS::eq::multiplier_set} and by the dualization lemma \ref{RS::lem::dualization}
\begin{equation*}
\mat{c}{I \\ 0}^T \t P \mat{c}{I \\ 0} \cge 0
\teq{ and }
\mat{c}{-\Del^T \\ I}^T \t P \mat{c}{ -\Del^T \\ I} \cle 0
\teq{ for all } \Del \in \Vb
\teq{ and all } \t P \in \t \Pb(\Vb).
\end{equation*}
To see this, note that the map $M \mapsto M^{-1}$ is continuous and that the inequality in \eqref{RS::eq::multiplier_seta} is strict, as required by Lemma \ref{RS::lem::dualization}, for $P$ replaced by $P + \smat{\eps I & 0 \\ 0 & 0}$ for any $\eps > 0$.
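As a numerical sanity check of the second pair of primal and dual multiplier sets, one may verify that inverting a primal D-G scaling built from some $H$ yields exactly the stated dual expression with $H$ replaced by $H^{-1}$. A small Python/numpy sketch (the shift of $H$ is chosen so that $H + H^T \cg 0$ holds for this seed):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
a, b, k = 0.0, 1.0, 3
H = rng.standard_normal((k, k)) + 3.0 * np.eye(k)  # shift so H + H^T > 0
I = np.eye(k)
T = np.block([[b * I, -I], [-a * I, I]])
P = T.T @ np.block([[0 * I, H.T], [H, 0 * I]]) @ T          # primal scaling
S = np.block([[I, I], [a * I, b * I]])
Hi = np.linalg.inv(H)
tP = S @ np.block([[0 * I, Hi], [Hi.T, 0 * I]]) @ S.T / (b - a)**2
assert np.allclose(np.linalg.inv(P), tP)                    # tP = P^{-1}
\end{verbatim}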
\subsection{Synthesis}\label{RS::sec::synth}
\subsubsection{Problem Description}
For fixed real matrices of appropriate dimensions and initial conditions $x(0) \in \R^n$, we consider now the feedback interconnection
\begin{equation}
\arraycolsep=1pt
\mat{c}{\dot x(t) \\\hline z(t)\\ e(t) \\ y(t)}
= \mat{c|ccc}{A & B_1 & B_2 & B_3 \\ \hline C_1 & D_{11} & D_{12} & D_{13} \\ C_2 & D_{21} & D_{22} & D_{23} \\
C_3 & D_{31} & D_{32} & 0}
\mat{c}{x(t) \\\hline w(t) \\ d(t) \\ u(t)}, \quad
w(t) = \Del(t) z(t)
\label{RS::eq::sys_of}
\end{equation}
for $t\geq 0$; here, $u$ is the control input, $y$ is the measured output, $\Del\in \Delf(\Vb)$ is some uncertainty and $\Vb \subset \R^{q\times p}$ is some compact value set. Further, suppose that we are given a suitable multiplier set $\Pb(\Vb)$ corresponding to $\Vb$ as well as its dual multiplier set $\t \Pb(\Vb)$.
Our main goal is the design of a robust dynamic output-feedback controller with description
\begin{equation}
\mat{c}{\dot x_c(t) \\ u(t)}
= \mat{cc}{A^c & B^c \\ C^c & D^c } \mat{c}{x_c(t) \\ y(t)}
\label{RS::eq::con_of2}
\end{equation}
for the system \eqref{RS::eq::sys_of} such that the corresponding closed-loop robust energy gain is as small as possible. The latter closed-loop interconnection is described by
\begin{equation}
\arraycolsep=2pt
\mat{c}{\dot x_\mathrm{cl}(t) \\\hline z(t) \\ e(t)}
= \mat{c|cc}{\Ac & \Bc_1 & \Bc_2 \\ \hline
\Cc_1 & \Dc_{11} & \Dc_{12} \\
\Cc_2 & \Dc_{21} & \Dc_{22}}
\mat{c}{x_\mathrm{cl}(t) \\\hline w(t) \\ d(t)}, \quad
w(t) = \Del(t) z(t)
\label{RS::eq::cl_of}
\end{equation}
with $t\geq 0$ as well as $x_\mathrm{cl} = \smat{x\\ x_c}$ and standard calligraphic closed-loop matrices. From the analysis criteria in Lemma \ref{RS::lem::stab} and by applying the elimination lemma \ref{RS::lem::elimination}, we obtain immediately the following synthesis result.
\begin{theorem}
\label{RS::theo::of}
Let $G_{ij}$ be the transfer matrices corresponding to \eqref{RS::eq::sys_of}. Further, let $V$ and $U$ be basis matrices of $\ker(C_3, D_{31}, D_{32})$ and $\ker(B_3^T, D_{13}^T, D_{23}^T)$, respectively. Then there exists a controller \eqref{RS::eq::con_of2} for the system \eqref{RS::eq::sys_of} such that the analysis LMIs \eqref{RS::lem::lmi_stab} are feasible for \eqref{RS::eq::cl_of} if and only if there exist symmetric matrices $X$, $Y$ and $P\in \Pb(\Vb)$ satisfying
\begin{subequations}
\label{RS::theo::eq::lmi_of}
\begin{equation}
\arraycolsep=1pt
\mat{cc}{X & I \\ I & Y} \cg 0, ~~
V^T\Ls\left(\mat{cc}{0 & X \\ X & 0}, \mat{c|c}{P & 0 \\ \hline 0 & P_\ga}, \mat{cc}{G_{11} & G_{12} \\ I & 0\\ \hline
G_{21} & G_{22} \\ 0 & I
}_{\ss} \right) V \cl 0
\text{ ~~and~~ }
U^T\Ls\left(\mat{cc}{0 & Y \\ Y & 0}, \mat{c|c}{P^{-1} & 0 \\ \hline 0 & P_\ga^{-1}}, \mat{cc}{I & 0 \\ -G_{11}^\ast & -G_{21}^\ast \\ \hline
0 & I \\
-G_{12}^\ast & -G_{22}^\ast
}_{\ss} \right) U \cg 0.
\tlabel{RS::theo::eq::lmi_ofa}{RS::theo::eq::lmi_ofb}{RS::theo::eq::lmi_ofc}
\end{equation}
\end{subequations}
In particular, we have
\begin{equation*}
\begin{aligned}
\ga_\opt &:= \inf\left\{ \ga > 0~\middle|
\text{ There is a controller \eqref{RS::eq::con_of2} s.th. the analysis
LMIs \eqref{RS::lem::lmi_stab} are feasible for \eqref{RS::eq::cl_of}}\right\} \\
&\phantom{:}= \inf\left\{ \ga > 0~\middle|
\text{ There exist symmetric $X, Y$ and $P \in \Pb(\Vb)$ satisfying the above matrix inequalities}\right\}.
\end{aligned}
\end{equation*}
\end{theorem}
Note that, similarly as in the previous section, $\ga_\opt$ is \emph{not} the optimal robust energy gain achievable by robust controllers with description \eqref{RS::eq::con_of2}. This is due to the conservatism in the employed analysis result Lemma \ref{RS::lem::stab}.
In contrast to static output-feedback design as considered in Section \ref{SHI::sec::shi}, non-convexity emerges through the multiplier $P$ and its inverse appearing in \eqref{RS::theo::eq::lmi_ofb} and \eqref{RS::theo::eq::lmi_ofc} instead of the Lyapunov certificate $X$ and its inverse. Due to this non-convexity, computing $\ga_\opt$ or a corresponding controller is as before very difficult in general. Subsequently, we modify the dual iteration in order to compute upper bounds on $\ga_\opt$ and, in particular, solve the robust output-feedback $H_\infty$-design problem in a heuristic fashion.
\subsubsection{Dual Iteration: Initialization}
In order to initialize the dual iteration we aim again to compute a meaningful lower bound on $\ga_\opt$.
This time the lower bound is obtained by the following observation. If there exists a robust controller for the system \eqref{RS::eq::sys_of} achieving a robust energy gain of $\ga$ then there also exists a gain-scheduling controller, i.e., a controller that is able to measure the uncertainty $\Del(t)$ online, achieving the same robust energy gain. Such a controller is given by
\begin{equation}
\mat{c}{\dot x_c(t) \\ \hline z_c(t) \\ u(t)}
= \mat{c|cc}{\h A^c & \h B^c_1 & \h B^c_2 \\ \hline
\h C^c_1 & \h D^c_{11} & \h D^c_{12} \\
\h C^c_2 & \h D^c_{21} & \h D^c_{22}} \mat{c}{x_c(t) \\\hline w_c(t) \\ y(t)},\quad
w_c(t) = S(\Del(t)) z_c(t),
\label{RS::eq::con_of3}
\end{equation}
for $t \geq 0$ and with some function $S$. Indeed, we can simply choose
\begin{equation*}
\mat{c|c}{\h A^c & \h B^c_2 \\ \hline
\h C^c_2 & \h D^c_{22}}
= \mat{c|c}{A^c & B^c \\ \hline C^c & D^c}, \quad
\mat{c}{\h B^c_1 \\ \h D^c_{21}} = 0, \quad
\mat{ccc}{\h C^c_1 & \h D^c_{11} & \h D^c_{12}} = 0
\teq{ and }
S(\Del) = 0
\teq{ for all } \Del \in \Vb.
\end{equation*}
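In code, this trivial embedding amounts to padding the robust controller matrices with zero blocks. A minimal Python/numpy sketch, where the dimensions \texttt{p} and \texttt{q} of the scheduling channel $z_c$, $w_c$ can be chosen freely:
\begin{verbatim}
import numpy as np

def embed_robust_as_gs(Ac, Bc, Cc, Dc, p, q):
    # Trivial embedding of a robust controller as a gain-scheduling
    # controller: zero scheduling channel and S(Delta) = 0.
    nc, ny = Bc.shape
    nu = Cc.shape[0]
    hB1 = np.zeros((nc, q)); hD21 = np.zeros((nu, q))
    hC1 = np.zeros((p, nc)); hD11 = np.zeros((p, q)); hD12 = np.zeros((p, ny))
    S = lambda Delta: np.zeros((q, p))
    return (Ac, hB1, Bc, hC1, hD11, hD12, Cc, hD21, Dc), S
\end{verbatim}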
It is well-known that the problem of finding a gain-scheduling controller \eqref{RS::eq::con_of3} for the system \eqref{RS::eq::sys_of} can again be turned into a convex optimization problem; the design of structured gain-scheduling controllers, e.g., with $(\h D^c_{11}, \h D^c_{12}) = 0$ and $\h D^c_{21} = 0$, would yield even superior lower bounds but, unfortunately, seems to be a non-convex problem without additional structural properties of the underlying system \eqref{RS::eq::sys_of}. For unstructured gain-scheduling controller design we have the following result which is essentially taken from \cite{Sch99c, Sch01}.
\begin{theorem}
\label{RS::theo::gs}
Let $G_{ij}$, $U$ and $V$ be as in Theorem \ref{RS::theo::of}. Then there exists a gain-scheduling controller \eqref{RS::eq::con_of3} and a scheduling function $S$ for the system \eqref{RS::eq::sys_of} such that the analysis LMIs \eqref{RS::lem::lmi_stab} are feasible for the resulting closed-loop system, for the value set $\Vb_e = \{\diag(\Del, S(\Del))~|~ \Del \in \Vb\}$ and for a corresponding multiplier set $\Pb_e(\Vb_e)$ if and only if there exist symmetric matrices $X, Y$ and $P\in \Pb(\Vb)$, $\t P\in \t \Pb(\Vb)$ satisfying
\begin{subequations}
\label{RS::theo::eq::lmi_gs}
\begin{equation}
\arraycolsep=1pt
\mat{cc}{X & I \\ I & Y} \cg 0,\quad
V^T\Ls\left(\mat{cc}{0 & X \\ X & 0}, \mat{c|c}{P & 0 \\ \hline 0 & P_\ga}, \mat{cc}{G_{11} & G_{12}\\ I & 0 \\ \hline
G_{21} & G_{22} \\ 0 & I
}_{\ss} \right) V \cl 0
\text{ ~~and~~ }
U^T\Ls\left(\mat{cc}{0 & Y \\ Y & 0}, \mat{c|c}{\t P & 0 \\ \hline 0 & P_\ga^{-1}}, \mat{cc}{I & 0 \\ -G_{11}^\ast & -G_{21}^\ast \\ \hline
0 & I \\
-G_{12}^\ast & -G_{22}^\ast
}_{\ss} \right) U \cg 0.
\tlabel{RS::theo::eq::lmi_gsa}{RS::theo::eq::lmi_gsb}{RS::theo::eq::lmi_gsc}
\end{equation}
\end{subequations}
In particular, we have
\begin{equation*}
\gaoptgs \leq \ga_\opt
\end{equation*}
for $\gaoptgs$ being the infimal $\ga > 0$ such that the above LMIs are feasible.
\end{theorem}
We do not specify the multiplier set $\Pb_e(\Vb_e)$ because it is not relevant for our purposes and as we are only interested in the lower bound $\ga_\mathrm{gs}$. The latter can again be a good indicator for measuring the conservatism of the upper bounds that we compute later on. Moreover, there is no hope to find a robust controller \eqref{RS::eq::con_of2} achieving a closed-loop energy gain smaller than $\ga_\mathrm{gs}$ based on the analysis conditions in Lemma \ref{RS::lem::stab}.
\vspace{2ex}
As in previous sections the dual iteration is initialized by the design of a suitable full-information controller. For robust synthesis, such a controller is of the form
\begin{equation*}
u = F\t y = (F_1, F_2, F_3)\t y
\teq{ with } \t y^T := (x^T, w^T, d^T)^T.
\end{equation*}
Hence, these controllers are even able to measure the uncertain signal $w = \Del z$ in addition to the state $x$ and the disturbance $d$.
Synthesizing such controllers is not difficult. Indeed, an application of the elimination lemma \ref{RS::lem::elimination} leads to the following result.
\begin{lemma}
\label{RS::lem::full_info}
There exists some full-information gain $F$ such that the analysis LMIs \eqref{RS::lem::lmi_stab} are feasible for the system
\begin{equation}
\mat{c}{\dot x(t) \\ \hline z(t) \\ e(t)}
= \mat{c|cc}{A + B_3 F_1 & B_1 + B_3F_2 & B_2 + B_3F_3 \\ \hline
C_1 + D_{13}F_1 & D_{11} + D_{13}F_2 & D_{12} + D_{13}F_3 \\
C_2 + D_{23}F_1 & D_{21} + D_{23}F_2 & D_{22} + D_{23}F_3}
\mat{c}{x(t) \\ \hline w(t) \\ d(t)}
= \left(\mat{c|cc}{A & B_1 & B_2 \\ \hline
C_1 & D_{11} & D_{12} \\
C_2 & D_{21} & D_{22} }
+ \mat{c}{B_3 \\ \hline D_{13} \\ D_{23}}F
\right)
\mat{c}{x(t) \\ \hline w(t) \\ d(t)}
\label{RS::eq::clF}
\end{equation}
if and only if there exist symmetric $\t P \in \t \Pb(\Vb)$ and $Y \cg 0$ satisfying \eqref{RS::theo::eq::lmi_gsc}.
\end{lemma}
\subsubsection{Dual Iteration}
Suppose that we have synthesized a full-information gain $F$ by Lemma \ref{RS::lem::full_info}. Then the primal synthesis LMIs corresponding to the gain $F$ and to the analysis LMIs \eqref{RS::lem::lmi_stab} are obtained in a straightforward fashion.
\begin{theorem}
\label{RS::theo::ofF}
Let $G_{ij}$ and $V$ be as in Theorem \ref{RS::theo::of} and let $G_{ij}^F$ be the transfer matrices corresponding to \eqref{RS::eq::clF}. Then there exists a controller \eqref{RS::eq::con_of2} for the system \eqref{RS::eq::sys_of} such that the analysis LMIs \eqref{RS::lem::lmi_stab} are feasible for the corresponding closed-loop system if there exist symmetric matrices $X$, $Y$ and $P\in \Pb(\Vb)$ satisfying
\begin{subequations}
\label{RS::theo::eq::lmi_ofF}
\begin{equation}
\arraycolsep=3pt
\mat{cc}{X & Y \\ Y & Y} \cg 0,\quad
V^T\Ls\left(\mat{cc}{0 & X \\ X & 0}, \mat{c|c}{P & 0 \\ \hline 0 & P_\ga}, \mat{cc}{G_{11} & G_{12} \\ I & 0 \\ \hline
G_{21} & G_{22} \\ 0 & I
}_{\ss} \right) V \cl 0
\teq{ and }
\Ls\left(\mat{cc}{0 & Y \\ Y & 0}, \mat{c|c}{P & 0 \\ \hline 0 & P_\ga}, \mat{cc}{G_{11}^F & G_{12}^F \\ I & 0\\ \hline
G_{21}^F & G_{22}^F \\
0 & I
}_{\ss} \right) \cl 0.
\tlabel{RS::theo::eq::lmi_ofFa}{RS::theo::eq::lmi_ofFb}{RS::theo::eq::lmi_ofFc}
\end{equation}
\end{subequations}
Moreover, we have
\begin{equation*}
\gaoptgs \leq \ga_\opt \leq \ga_F
\end{equation*}
for $\ga_F$ being the infimal $\ga > 0$ such that the above LMIs are feasible.
\end{theorem}
\begin{proof}
Since we have $P \in \Pb(\Vb)$, we can conclude that $P$ has exactly $p$ positive and $q$ negative eigenvalues. This allows us to eliminate the full-information gain $F$ from the LMI \eqref{RS::theo::eq::lmi_ofFc}, which leads to \eqref{RS::theo::eq::lmi_ofc} for $Y$ replaced by $Y^{-1}$. Finally, performing a congruence transformation of \eqref{RS::theo::eq::lmi_ofFa} with $\diag(I, Y^{-1})$ yields \eqref{RS::theo::eq::lmi_ofa} for $Y$ replaced by $Y^{-1}$. Since we have \eqref{RS::theo::eq::lmi_ofb} and $P \in \Pb(\Vb)$ by assumption as well, we can apply Theorem \ref{RS::theo::of} in order to construct the desired controller \eqref{RS::eq::con_of2}.
\end{proof}
The employed dual versions of Lemma \ref{RS::lem::full_info} and Theorem \ref{RS::theo::ofF} are given next.
\begin{lemma}
\label{RS::lem::full_actu}
There exists some full-actuation gain $E$ such that the analysis LMIs \eqref{RS::lem::lmi_stab} are feasible for the system
\begin{equation}
\mat{c}{\dot x(t) \\ \hline z(t) \\ e(t)}
= \mat{c|cc}{A + E_1 C_3 & B_1 + E_1D_{31} & B_2 + E_1 D_{32} \\ \hline
C_1 + E_2C_3 & D_{11} + E_2D_{31} & D_{12} + E_2 D_{32} \\
C_2 + E_3C_3 & D_{21} + E_3 D_{31} & D_{22} + E_3D_{32}}
\mat{c}{x(t) \\ \hline w(t) \\ d(t)}
=\left(\mat{c|cc}{A & B_1 & B_2 \\ \hline
C_1 & D_{11} & D_{12} \\
C_2 & D_{21} & D_{22} }
+ E \mat{c|cc}{C_3 & D_{31} & D_{32}}
\right)
\mat{c}{x(t) \\ \hline w(t) \\ d(t)}
\label{RS::eq::clE}
\end{equation}
if and only if there exist symmetric $P\in \Pb(\Vb)$ and $X \cg 0$ satisfying \eqref{RS::theo::eq::lmi_gsb}.
\end{lemma}
\begin{theorem}
\label{RS::theo::ofE}
Let $G_{ij}$ and $U$ be as in Theorem \ref{RS::theo::of} and let $G_{ij}^E$ be the transfer matrices corresponding to \eqref{RS::eq::clE}. Then there exists a controller \eqref{RS::eq::con_of2} for the system \eqref{RS::eq::sys_of} such that the analysis LMIs \eqref{RS::lem::lmi_stab} are feasible for the corresponding closed-loop system if there exist symmetric matrices $X$, $Y$ and $\t P \in \t \Pb(\Vb)$ satisfying
\begin{subequations}
\label{RS::theo::eq::lmi_ofE}
\begin{equation}
\arraycolsep=1pt
\mat{cc}{X & X \\ X & Y} \cg 0, ~~
\Ls\left(\mat{cc}{0 & X \\ X & 0}, \mat{c|c}{\t P & 0 \\ \hline 0 & P_\ga^{-1}}, \mat{cc}{I & 0 \\ -(G_{11}^E)^\ast & -(G_{21}^E)^\ast \\ \hline
0 & I \\
-(G_{12}^E)^\ast & -(G_{22}^E)^\ast
}_{\ss} \right) \cg 0
\text{ ~~and~~ }
U^T\Ls\left(\mat{cc}{0 & Y \\ Y & 0}, \mat{c|c}{\t P & 0 \\ \hline 0 & P_\ga^{-1}}, \mat{cc}{I & 0 \\ -G_{11}^\ast & -G_{21}^\ast \\ \hline
0 & I \\
-G_{12}^\ast & -G_{22}^\ast
}_{\ss} \right) U \cg 0.
\tlabel{RS::theo::eq::lmi_ofEa}{RS::theo::eq::lmi_ofEb}{RS::theo::eq::lmi_ofEc}
\end{equation}
\end{subequations}
Moreover, we have
\begin{equation*}
\gaoptgs \leq \ga_\opt \leq \ga_E
\end{equation*}
for $\ga_E$ being the infimal $\ga > 0$ such that the above LMIs are feasible.
\end{theorem}
In contrast to the previous section and as in Section \ref{SHI::sec::shi}, Theorems \ref{RS::theo::ofF} and \ref{RS::theo::ofE} are again nicely intertwined as follows.
\begin{theorem}
\label{RS::theo::it_summary}
The following two statements hold.
\begin{itemize}
\item If the primal synthesis LMIs \eqref{RS::theo::eq::lmi_ofF} are feasible for some $\ga$ and some full-information gain $F$, then there exists some full-actuation gain $E$ such that the dual synthesis LMIs \eqref{RS::theo::eq::lmi_ofE} are feasible for $\ga$. In particular, we have $\ga_E \leq \ga$.
\item If the dual synthesis LMIs \eqref{RS::theo::eq::lmi_ofE} are feasible for some $\ga$ and some full-actuation gain $E$, then there exists some full-information gain $F$ such that the primal synthesis LMIs \eqref{RS::theo::eq::lmi_ofF} are feasible for $\ga$. In particular, we have $\ga_F \leq \ga$.
\end{itemize}
\end{theorem}
The proofs are again direct consequences of the elimination lemma \ref{RS::lem::elimination} and are thus omitted for brevity.
The dual iteration for robust output-feedback $H_\infty$-design now essentially amounts to alternately applying the two statements in Theorem~\ref{RS::theo::it_summary} and is explicitly stated as follows.
\begin{Algorithm}
\label{RS::algo::dual_iteration}
Dual iteration for robust output-feedback $H_\infty$-design.
\begin{enumerate}
\item \emph{Initialization:} Compute the lower bound $\gaoptgs$ based on solving the gain-scheduling synthesis LMIs \eqref{RS::theo::eq::lmi_gs} and set $k = 1$.
Design an initial full-information gain $F$ from Lemma \ref{RS::lem::full_info}.
\item \emph{Primal step:} Compute $\ga_F$ based on solving the primal synthesis LMIs \eqref{RS::theo::eq::lmi_ofF} and set $\ga^k := \ga_F$.
Design a corresponding close-to-optimal full-actuation gain $E$ from Lemma \ref{RS::lem::full_actu}.
\item \emph{Dual step:} Compute $\ga_E$ based on solving the dual synthesis LMIs \eqref{RS::theo::eq::lmi_ofE} and set $\ga^{k+1} := \ga_E$. Design a corresponding close-to-optimal full-information gain $F$ from Lemma \ref{RS::lem::full_info}.
\item \emph{Termination:} If $k$ is too large or $\ga^k$ does not decrease any more, then stop and construct a close-to-optimal robust output-feedback controller \eqref{RS::eq::con_of2} for the system \eqref{RS::eq::sys_of} according to Theorem \ref{RS::theo::ofE}. \\
Otherwise set $k = k+2$ and go to the primal step.
\end{enumerate}
\end{Algorithm}
Essentially the same statements as in Remark \ref{SHI::rema::stuff} remain valid.
In particular, the sequence $\ga^k$ is monotonically non-increasing and we have
\begin{equation*}
\ga_{\mathrm{gs}} \leq \ga_\opt \leq \ga^k
\teq{ for all }k.
\end{equation*}
Moreover, once we have found a full-information gain $F$ such that the primal synthesis LMIs \eqref{RS::theo::eq::lmi_ofF} are feasible, the algorithm will not get stuck due to infeasibility of the involved LMIs.
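For orientation, the overall structure of Algorithm \ref{RS::algo::dual_iteration} can be summarized in a few lines of Python-style pseudocode. The helpers \texttt{design\_full\_information\_gain}, \texttt{primal\_step} and \texttt{dual\_step} are hypothetical wrappers around an SDP solver implementing Lemmas \ref{RS::lem::full_info} and \ref{RS::lem::full_actu} as well as Theorems \ref{RS::theo::ofF} and \ref{RS::theo::ofE}; this is a sketch of the control flow only, not a full implementation:
\begin{verbatim}
def dual_iteration(sys, Pb, k_max=29, tol=1e-3):
    # Hypothetical SDP wrappers, see the text above.
    F = design_full_information_gain(sys, Pb)   # initialization
    gamma = []                                  # upper bounds gamma^k
    while len(gamma) < k_max:
        g_F, E = primal_step(sys, Pb, F)        # Theorem ofF + Lemma full_actu
        g_E, F = dual_step(sys, Pb, E)          # Theorem ofE + Lemma full_info
        gamma += [g_F, g_E]
        if len(gamma) > 2 and gamma[-3] - gamma[-1] < tol:
            break                               # no sufficient decrease
    return gamma                                # monotonically non-increasing
\end{verbatim}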
\begin{remark}
\label{RS::rema::extensions}
\begin{enumerate}[(a)]
\item Algorithm \ref{RS::algo::dual_iteration} can be modified in a straightforward fashion to cope with the even more challenging design of static robust output-feedback controllers. This is essentially achieved by replacing \eqref{RS::theo::eq::lmi_ofFa} and \eqref{RS::theo::eq::lmi_ofEa} with $X = Y \cg 0$ during the iteration. For the initialization we then recommend to additionally employ the considerations in Remark \ref{SHI::rema::init}.
\item It is not difficult to extend Algorithm \ref{RS::algo::dual_iteration}, e.g., to the more general and very interesting design of robust gain-scheduling controllers \cite{VeeSch12, Hel95}. For this design the uncertainty $\Del(t)$ in the description \eqref{RS::eq::sys_of} is replaced by $\diag(\Del_u(t), \Del_s(t))$ with $\Del_u(t)$ being unknown, while $\Del_s(t)$ is measurable online and taken into account by the to-be-designed controller. As for robust design, this synthesis problem is known to be convex \emph{only} if the control channel is unaffected by uncertainties \cite{VeeSch12}.
An interesting special case of the general robust gain-scheduling design is sometimes referred to as inexact scheduling \cite{SatPea18}. As for standard gain-scheduling it is assumed that a parameter dependent system \eqref{RS::eq::sys_of} is given, but that the to-be-designed controller only receives noisy online measurements of the parameter instead of exact ones.
We emphasize that such modifications are all straightforward due to the flexibility of the design framework based on linear fractional representations such as \eqref{RS::eq::sys_of} and on the employed multiplier separation techniques (Lemmas \ref{RS::lem::stab0} and \ref{RS::lem::stab}).
\end{enumerate}
\end{remark}
\subsubsection{Dual Iteration: An Alternative Initialization}\label{RS::sec::alternative_init}
It can happen that the LMIs appearing in the primal step of the dual iteration (Algorithm \ref{RS::algo::dual_iteration}) are infeasible for the initially designed full-information controller.
In order to promote the feasibility of these LMIs we propose here an alternative initialization that makes use of the following result.
\begin{lemma}
\label{RS::lem::initialization}
Suppose that the gain-scheduling synthesis LMIs \eqref{RS::theo::eq::lmi_gs} are feasible, that some full-actuation gain $E$ is designed from Lemma \ref{RS::lem::full_actu} and let $G_{ij}$, $G_{ij}^E$ as well as $U$ be as in Theorem \ref{RS::theo::ofE}. Then there exist some $\alpha > 0$, symmetric $X, Y$ and $P, \t P \in \t \Pb(\Vb)$ satisfying \eqref{RS::theo::eq::lmi_ofEa}, \eqref{RS::theo::eq::lmi_ofEc} as well as
\begin{subequations}
\label{RS::lem::eq::lmi_fa}
\begin{equation}
\arraycolsep=2pt
\Ls\left(\mat{cc}{0 & X \\ X & 0}, \mat{c|c}{P & 0 \\ \hline 0 & P_\ga^{-1}}, \mat{cc}{I & 0 \\ -(G_{11}^E)^\ast & -(G_{21}^E)^\ast \\ \hline
0 & I \\
-(G_{12}^E)^\ast & -(G_{22}^E)^\ast
}_{\ss} \right) \cg 0
\teq{ and }
\mat{cc}{\alpha I & P - \t P \\ P - \t P & I} \cg 0.
\dlabel{RS::lem::eq::lmi_faa}{RS::lem::eq::lmi_fab}
\end{equation}
\end{subequations}
\end{lemma}
Note that, by the Schur complement, \eqref{RS::lem::eq::lmi_fab} is equivalent to
\begin{equation*}
\|P - \t P\|^2 < \alpha,
\end{equation*}
where $\|\cdot\|$ denotes the spectral norm.
Thus by minimizing $\alpha > 0$ subject to the above LMIs we push the two multipliers $P$ and $\t P$ as close together as possible. Due to the continuity of the map $M \mapsto M^{-1}$, this means that the inverses $P^{-1}$ and $\t P^{-1}$ are close to each other as well. We can then design a corresponding full-information gain $F$ based on Lemma \ref{RS::lem::full_info} for which the LMIs \eqref{RS::theo::eq::lmi_ofF} are very likely to be feasible for the single multiplier $P^{-1} \approx \t P^{-1}$.
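A sketch of the corresponding optimization problem in Python with \texttt{cvxpy} reads as follows; \texttt{P} and \texttt{tP} are assumed to be symmetric matrix variables of equal size that are additionally subject to the synthesis LMIs from Lemma \ref{RS::lem::initialization} (collected in the hypothetical list \texttt{synthesis\_lmis}):
\begin{verbatim}
import cvxpy as cp
import numpy as np

def closeness_lmi(P, tP):
    # Schur complement reformulation of ||P - tP||^2 < alpha; minimizing
    # alpha subject to this LMI and the synthesis LMIs pushes P towards tP.
    m = P.shape[0]
    alpha = cp.Variable(nonneg=True)
    lmi = cp.bmat([[alpha * np.eye(m), P - tP],
                   [P - tP, np.eye(m)]]) >> 0
    return alpha, lmi

# Usage sketch:
# alpha, lmi = closeness_lmi(P, tP)
# cp.Problem(cp.Minimize(alpha), synthesis_lmis + [lmi]).solve()
\end{verbatim}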
\begin{remark}
\begin{enumerate}[(a)]
\item In the case that the above procedure does not lead to a full-information gain $F$ for which the LMIs \eqref{RS::theo::eq::lmi_ofF} are feasible, one can, e.g., iteratively double $\ga$ and try again until a suitable controller is found. This practical approach typically works well in various situations.
\item It would be nicer to directly employ additional constraints for the gain-scheduling LMIs \eqref{RS::theo::eq::lmi_gs} which promote $P \approx \t P^{-1}$ and thus feasibility of the primal synthesis LMIs \eqref{RS::theo::eq::lmi_ofF}, similarly as was possible for static design in Remark \ref{SHI::rema::init}. However, as far as we are aware, this is only possible for specific multipliers and corresponding value sets.
\end{enumerate}
\end{remark}
\subsection{Examples}
\subsubsection{Modified Examples from Compleib}
We now compare the dual iteration for robust output-feedback design as described in Algorithm \ref{RS::algo::dual_iteration} with two variants of a D-K iteration in terms of computed optimal bounds on the robust energy gain.
\begin{enumerate}
\item[V1:] The first variant is based on considering only the analysis LMIs \eqref{RS::lem::lmi_stab} for the closed-loop system \eqref{RS::eq::cl_of} and relies on alternately performing the following two steps:
\begin{enumerate}[1.]
\item For a given controller \eqref{RS::eq::con_of2}, solve the LMIs \eqref{RS::lem::lmi_stab} for the closed-loop system \eqref{RS::eq::cl_of} with decision variables $X$ and $P$.
\item For given $X$ and $P$, solve the LMIs \eqref{RS::lem::lmi_stab} for the closed-loop system \eqref{RS::eq::cl_of} with decision variables $A^c, B^c, C^c$ and $D^c$.
\end{enumerate}
We denote the resulting upper bounds on $\ga_\opt$ as $\ga_\mathrm{dk1}^k$.
\item[V2:] The second variant additionally makes use of the non-convex design result in Theorem \ref{RS::theo::of}, which resulted from the closed-loop analysis LMIs \eqref{RS::lem::lmi_stab} by elimination. It relies on alternately performing the following two steps:
\begin{enumerate}[1.]
\item For a given controller \eqref{RS::eq::con_of2}, solve the LMIs \eqref{RS::lem::lmi_stab} for the closed-loop system \eqref{RS::eq::cl_of} with decision variables $X$ and $P$.
\item For a given $P$, solve the synthesis LMIs \eqref{RS::theo::eq::lmi_of} in Theorem \ref{RS::theo::of} with decision variables $X$ and $Y$.
\end{enumerate}
We denote the resulting upper bounds on $\ga_\opt$ as $\ga_\mathrm{dk2}^k$.
\end{enumerate}
Note that all of the above steps are convex in the decision variables since those (pairs of) variables that would destroy convexity are fixed alternately.
Moreover, it is possible to simultaneously minimize over $\ga$ while searching for a feasible solution.
From the mere descriptions of the variants one can already tell that the first variant is worse (in terms of the provided upper bounds) than the second one, which is in turn outperformed by the dual iteration.
The reason for this is that in the second variant, and in contrast to the first, the appearing Lyapunov matrices are treated as free decision variables in both of the steps. However, the multiplier $P$ is still fixed in every second step. This is in contrast to the dual iteration, where the Lyapunov matrices \emph{and} the multiplier are free decision variables in the two main steps. Essentially, the dual iteration focuses on the most important decision variables.
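For comparison with the dual iteration sketch given earlier, the second D-K variant can be summarized as follows; \texttt{closed\_loop\_analysis} and \texttt{synthesis\_fixed\_P} are again hypothetical SDP wrappers for the closed-loop analysis LMIs \eqref{RS::lem::lmi_stab} and for Theorem \ref{RS::theo::of} with a frozen multiplier:
\begin{verbatim}
def dk_iteration_v2(sys, Pb, K, k_max=21):
    # Hypothetical SDP wrappers, see the text above.
    bounds = []
    for _ in range(k_max):
        g, X, P = closed_loop_analysis(sys, Pb, K)  # step 1: controller fixed
        g, K = synthesis_fixed_P(sys, P)            # step 2: multiplier fixed
        bounds.append(g)
    return bounds, K
\end{verbatim}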
We stress that both D-K iterations as described above \emph{require} an initialization with a robustly stabilizing controller, as otherwise the first considered LMI is infeasible.
It is possible to modify the D-K schemes to find a robustly stabilizing controller starting from a nominal $H_\infty$ controller as, e.g., described in Chapter 8 of \cite{SchWei00}, but from our numerical experience, this is a rather cumbersome and frustrating task.
In stark contrast, finding a robustly stabilizing controller based on the dual iteration is much less problematic.
In order to only compare the iterative behavior of the named algorithms, we initialize both variants of the D-K iteration with the robust controller as obtained from Theorem \ref{RS::theo::ofF} when completing the primal step of the dual iteration for the first time.
\vspace{1ex}
We compare the three algorithms again by means of several examples from COMPl\textsubscript{e}ib \cite{Lei04}, which, unfortunately, does not comprise robust control examples. Thus we modified the included examples in order to fit the description \eqref{RS::eq::sys_of} as follows. For each example we let
\begin{equation*}
\mat{ccc}{A & B_2 & B_3 \\ C_2 & D_{22} & D_{23} \\ C_3 & D_{32} & 0}
\teq{ be identical to the system matrices (1.1) in \cite{Lei04}}
\end{equation*}
and choose the remaining matrices as
\begin{equation*}
\arraycolsep=2pt
D_{11} = 0, \quad
D_{12} = 0, \quad
D_{21} = 0, \quad
D_{31} = 0,\quad
B_1 = \mat{ccc}{1 & 0 & 1 \\ 0 & 1 & 0 \\ \hdashline & 0_{\bullet \times 3} &}, \quad
C_1 = \mat{cc:c}{1 & 0 & \\ 0 & 1 & 0_{3 \times \bullet} \\ 1 & -1} \text{ ~and~ }
D_{13} = \mat{c:c}{0 & \\ 0 & 0_{3 \times \bullet} \\ 1 &}.
\end{equation*}
Further, we suppose that the underlying systems are affected by uncertainties $\Del \in \Delf(\Vb)$ with the value set $\Vb$ being the convex hull of the (almost randomly chosen) generators
\begin{equation*}
\arraycolsep=3pt
\Del_1 :=\mat{cc:c}{-1 & 1 \\ 1 & -1 \\ \hdashline && 0},\quad
\Del_2 := \mat{cc:c}{0 & 1 \\ 0 & 0 \\ \hdashline && 1},\quad
\Del_3 := \mat{cc:c}{1 & 0 \\ 1 & 1 \\ \hdashline && 1} \teq{ and }
\Del_4 := \mat{cc:c}{0 & -1 \\ 0 & 0\\ \hdashline && -1}.
\end{equation*}
This allows us to perform a robust controller design based on the analysis criteria in Lemma \ref{RS::lem::stab}, the multiplier set \eqref{RS::eq::multiplier_set_for_ch} and its dual set as defined in Section \ref{RS::sec::ana}.
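The four generators are reproduced below as a Python/numpy snippet, from which, e.g., the convex-hull multiplier set \eqref{RS::eq::multiplier_set_for_ch} can be assembled directly (for instance with the \texttt{ch\_multiplier} sketch from Section \ref{RS::sec::ana}):
\begin{verbatim}
import numpy as np

Z = np.zeros((2, 1))
D1 = np.block([[np.array([[-1.,  1.], [1., -1.]]), Z], [Z.T, [[ 0.]]]])
D2 = np.block([[np.array([[ 0.,  1.], [0.,  0.]]), Z], [Z.T, [[ 1.]]]])
D3 = np.block([[np.array([[ 1.,  0.], [1.,  1.]]), Z], [Z.T, [[ 1.]]]])
D4 = np.block([[np.array([[ 0., -1.], [0.,  0.]]), Z], [Z.T, [[-1.]]]])
generators = [D1, D2, D3, D4]   # value set Vb = co{D1, D2, D3, D4}
\end{verbatim}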
\vspace{1ex}
Table \ref{RS::tab::results} illustrates the computed optimal values and gains, which were all obtained by using Matlab/LMIlab \cite{GahNem95}. It also depicts the average runtimes required for computing the upper bounds $\ga^9$, $\ga_\mathrm{dk1}^9$ and $\ga_\mathrm{dk2}^9$.
These examples confirm our earlier reasoning in that the dual iteration yields better upper bounds than both D-K iteration schemes within fewer iterations. Thus it is numerically much less demanding and less time consuming. We also see that the second variant of the D-K iteration provides better upper bounds than the first one while generally being slower. We emphasize once more that both variants are initialized with the robustly stabilizing controller corresponding to $\ga^1$ since finding such a controller based on a D-K iteration is troublesome.
Finally, note that the gap between $\ga_\mathrm{gs}$ and $\ga^9$ is small for several examples which implies that the upper bound $\ga^9$ is almost non-conservative for those.
\begin{table}
\newcommand{\gr}[1]{\textcolor{gray}{#1}}
\vspace{1.5ex}
\caption{Optimal gain bounds resulting from the dual iteration described in Algorithm \ref{RS::algo::dual_iteration} and from two variants of a D-K iteration for some modified examples from \cite{Lei04}. The average runtimes in seconds within twenty runs, $T_{\ga^9}$, $T_{\ga_\mathrm{dk1}^9}$ and $T_{\ga_\mathrm{dk2}^9}$, for computing the upper bounds $\ga^9$, $\ga_\mathrm{dk1}^9$ and $\ga_\mathrm{dk2}^9$, respectively, are given as well.}
\label{RS::tab::results}
\begin{center}
\centering
\setlength{\tabcolsep}{3pt}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{@{}l@{\hskip 3.5ex}r@{\hskip 3.5ex}rrr@{\hskip 3.5ex}r@{\hskip 3.5ex}rr@{\hskip 3.5ex}r@{\hskip 3.5ex}rr@{\hskip 3.5ex}r@{}}
\toprule
& \multicolumn{5}{@{}c@{\hskip 3.5ex}}{Algorithm \ref{RS::algo::dual_iteration}} & \multicolumn{3}{@{}c@{\hskip 3.5ex}}{D-K V1} & \multicolumn{3}{@{}c}{D-K V2} \\ \cmidrule(r{3.5ex}){2-6}\cmidrule(r{3.5ex}){7-9}\cmidrule(r){10-12}
Name & $\ga_\mathrm{gs}$ &$\ga^1$ & $\ga^5$& $\ga^9$ & $T_{\ga^9}$ & $\ga_\mathrm{dk1}^9$ & $\ga_\mathrm{dk1}^{21}$ & $T_{\ga_\mathrm{dk1}^9}$ & $\ga_\mathrm{dk2}^9$ & $\ga_\mathrm{dk2}^{21}$ & $T_{\ga_\mathrm{dk2}^9}$\\ \hline
AC3 & 7.25 & 8.18 & 7.98 & 7.98 & \gr{0.71} & 8.14 & 8.14 & \gr{1.06} & 8.06 & 8.04 & \gr{2.36} \\
AC6 & 6.63 & 7.32 & 6.95 & 6.95 & \gr{1.48} & 7.15 & 7.15 & \gr{4.93} & 7.04 & 7.01 & \gr{7.80} \\
HE2 & 12.55 & 19.18 & 15.18 & 15.18 & \gr{0.61} & 18.62 & 18.49 & \gr{0.43} & 16.37 & 16.00 & \gr{0.67} \\
HE5 & 33.76 & 60.51 & 58.24 & 58.24 & \gr{2.05} & 59.73 & 59.61 & \gr{2.31} & 58.65 & 58.55 & \gr{5.76} \\
REA1 & 0.86 & 0.93 & 0.88 & 0.88 & \gr{0.42} & 0.93 & 0.92 & \gr{0.60} & 0.89 & 0.89 & \gr{1.41} \\
DIS2 & 1.65 & 1.82 & 1.67 & 1.67 & \gr{0.26} & 1.74 & 1.74 & \gr{0.23} & 1.69 & 1.67 & \gr{0.40} \\
DIS3 & 2.11 & 2.56 & 2.15 & 2.15 & \gr{0.76} & 2.43 & 2.42 & \gr{2.26} & 2.29 & 2.24 & \gr{3.02} \\
WEC1 & 4.62 & 5.29 & 4.67 & 4.67 & \gr{7.00} &5.21 & 5.19 & \gr{21.98} & 4.73 & 4.73 & \gr{41.61} \\
WEC2 & 3.82 & 4.19 & 3.85 & 3.84 & \gr{5.96} & 4.16 & 4.11 & \gr{19.92} & 3.87 & 3.87 & \gr{31.07} \\
MFP & 6.31 & 7.51 & 7.27 & 7.27 & \gr{0.42} & 7.43 & 7.41 & \gr{0.38} & 7.32 & 7.32 & \gr{0.77} \\
NN4 & 5.30 & 6.52 & 5.33 & 5.33 & \gr{0.65} & 6.10 & 6.02 & \gr{0.54} & 5.46 & 5.42 & \gr{0.93} \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\vspace{1ex}
Let us now consider the same examples as above, but this time we assume that the underlying systems are affected by uncertainties $\Del \in \Delf(\Vb)$ with the value set being
\begin{equation*}
\Vb := \{\diag(\del_1 I_2, \del_2)~:~ \del_1, \del_2 \in [-1, 1]\}.
\end{equation*}
This allows us to employ a multiplier set similar to the one given in \eqref{RS::eq::multiplier_set_for_int} which is closely related to the set of D-G scalings. We compare the resulting dual iteration to the \texttt{musyn} command available in Matlab R2020a, which is based on a D-K iteration using dynamic D-G scalings. Note that with the settings
\begin{center}
\verb|musynOptions('MixedMU','on', 'FitOrder', [0, 0]);|
\end{center}
the algorithm uses D scalings of order zero, but keeps using dynamic G scalings instead of constant ones. Unfortunately, these dynamic scalings cannot be used for robust design for systems affected by arbitrarily time-varying uncertainties; one can merely guarantee closed-loop stability if the uncertainties are assumed to be slowly varying. Hence \texttt{musyn} considers a much smaller class of uncertainties than captured by our findings, which are based on a more dedicated analysis result. Thus the following comparison of provided upper bounds is not really fair and in favor of \texttt{musyn}.
By default, \texttt{musyn} performs 10 iterations and its output $\ga_{\mathrm{ms}}$ has the following meaning:
\begin{center}
The closed-loop system has an energy gain smaller than $\ga_{\mathrm{ms}}$ for all constant uncertainties $\Del$ in the scaled value set $\frac{1}{\ga_{\mathrm{ms}}}\Vb$.
\end{center}
Thus, for the purpose of comparing \texttt{musyn} to the dual iteration, for each individual example we start by computing $\ga_{\mathrm{ms}}$ and afterwards perform a dual iteration for the system affected by uncertainties in the scaled set $\Delf(\frac{1}{\ga_{\mathrm{ms}}}\Vb)$. The results are depicted in Table \ref{RS::tab::results_vs_musyn}, where we also added the corresponding lower and upper bounds resulting from the dual iteration for the unscaled value set.
These examples demonstrate that the dual iteration yields superior or at least comparable results to \texttt{musyn} in terms of computed bounds on the optimal energy gain. This is even though \texttt{musyn} uses dynamic multipliers which are usually known to lead to better upper bounds.
\begin{table}
\vspace{1.5ex}
\caption{Optimal gain bounds resulting from \texttt{musyn} as well as the dual iteration described in Algorithm \ref{RS::algo::dual_iteration} for a scaled value set and a non-scaled value set for some modified examples from \cite{Lei04}.}
\label{RS::tab::results_vs_musyn}
\begin{center}
\centering
\setlength{\tabcolsep}{3pt}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{@{}l@{\hskip 3.5ex}r@{\hskip 3.5ex}rrr@{\hskip 3.5ex}r@{\hskip 3.5ex}r@{\hskip 3.5ex}rrr@{}}
\toprule
&\multicolumn{4}{@{}c@{\hskip 3.5ex}}{Algorithm \ref{RS::algo::dual_iteration} for $\Vb$} & \multicolumn{1}{@{}c@{\hskip 3.5ex}}{\texttt{musyn}} & \multicolumn{4}{@{}c}{Algorithm \ref{RS::algo::dual_iteration} for $\frac{1}{\ga_{\mathrm{ms}}}\Vb$} \\ \cmidrule(r{3.75ex}){2-5}\cmidrule(r{3.5ex}){6-6}\cmidrule(r){7-10}
Name & $\ga_\mathrm{gs}$ &$\ga^1$ & $\ga^5$& $\ga^9$ & $\ga_\mathrm{ms}$ & $\ga_\mathrm{gs}$ & $\ga^1$ & $\ga^5$ & $\ga^9$ \\ \hline
AC2 & 0.47 & 0.72 & 0.51 & 0.51 & 1.32 & 0.36 & 0.65 & 0.43 & 0.43 \\
AC6 & 7.21 & 8.86 & 8.64 & 8.64 & 4.77 & 4.68 & 5.77 & 4.72 & 4.72 \\
HE2 & 15.21 & 92.65 & 75.84 & 75.24 & 4.11 & 3.69 & 4.72 & 3.98 & 3.98 \\
HE5 & 60.91 & 79.46 & 78.71 & 78.70 & 7.21 & 3.88 & 4.26 & 4.21 & 4.21 \\
REA1 & 0.86 & 0.94 & 0.89 & 0.89 & 0.94 & 0.87 & 0.95 & 0.89 & 0.89 \\
DIS2 & 1.73 & 1.90 & 1.78 & 1.78 & 1.51 & 1.44 & 1.55 & 1.48 & 1.48 \\
DIS3 & 2.15 & 2.36 & 2.23 & 2.23 & 1.67 & 1.52 & 1.89 & 1.56 & 1.56 \\
WEC1 & 4.07 & 4.44 & 4.11 & 4.11 & 3.75 & 3.73 & 4.04 & 3.74 & 3.74 \\
WEC2 & 3.83 & 4.27 & 3.84 & 3.84 & 3.70 & 3.67 & 4.16 & 3.68 & 3.68 \\
MFP & 13.88 & 15.76 & 14.63 & 14.63 & 6.11 & 6.09 & 7.21 & 6.41 & 6.41 \\
NN4 & 5.48 & 6.58 & 5.58 & 5.58 & 2.62 & 2.56 & 3.38 & 2.60 & 2.60 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsubsection{Flight Control Design}
Let us consider a missile control problem. Similarly as, e.g., in \cite{BalPac92, Hel95, SchNji97, PrePos08} this leads after some simplifications to a nonlinear state space model of the form
\begin{equation}
\label{RS::exa::mis}
\begin{aligned}
\dot \alpha(t) &= K_\alpha M(t) \left[\left(a_n |\alpha(t)|^2 + b_n |\alpha(t)| + c_n\left(2-\frac{M(t)}{3}\right) \right)\alpha(t) + d_n \delta(t) \right] + q(t) \\
\dot q(t) &= K_q M(t)^2 \left[\left(a_m |\alpha(t)|^2 + b_m |\alpha(t)| + c_m \left(-7 + \frac{8M(t)}{3} \right) \right)\alpha(t) + d_m \delta(t) \right] \\
n(t) &= K_n M(t)^2 \left[\left(a_n |\alpha(t)|^2 + b_n |\alpha(t)| + c_n\left(2+\frac{M(t)}{3}\right) \right)\alpha(t) + d_n \delta(t) \right],
\end{aligned}
\end{equation}
where $M(t)$ is the Mach number assumed to take values in $[2, 4]$ and with signals
\begin{center}
\begin{tabular}{ll@{\hskip 4ex}ll}
$\alpha$ & angle of attack (in rad) &
$q$ & pitch rate (in rad/s) \\
$\delta$ & tail fin deflection (in rad) &
$n$ & normal acceleration of the missile (in $g = 32.2$ $\mathrm{ft}/ \mathrm{s}^2$). \\
\end{tabular}
\end{center}
Note that \eqref{RS::exa::mis} is a reasonable approximation for $\alpha(t)$ between $-20$ and $20$ degrees, i.e., $|\alpha(t)| \in [0, \pi/9]$. The constants are given by
\begin{center}
\begin{tabular}{llll}
$a_n = 0.000103 \cdot (180/\pi)^3$&
$b_n = -0.00945 \cdot (180/\pi)^2$&
$c_n = -0.1696 \cdot (180/\pi)$&
$d_n = -0.034 \cdot(180/\pi)$ \\
$a_m = 0.000215 \cdot (180/\pi)^3$ &
$b_m = -0.0195 \cdot (180/\pi)^2$&
$c_m = 0.051 \cdot (180/\pi)$&
$d_m = -0.206 \cdot (180/\pi)$ \\
$K_\alpha = 0.7 P_0 S / mv_s$&
$K_q = 0.7 P_0 S d / I_y \teq{ and }$&
$K_n = 0.7 P_0 S / mg$.&
\end{tabular}
\end{center}
The terms in the latter three constants are
\begin{center}
\begin{tabular}{ll@{\hskip6ex}ll}
$P_0 = 973.3$ $\mathrm{lbf}/\mathrm{ft}^2$ & static pressure at 20,000 $\mathrm{ft}$ &
$S = 0.44$ $\mathrm{ft}^2$ & reference area \\
$m = 13.98$ $\mathrm{slugs}$ & mass of the missile &
$v_s = 1036.4$ $\mathrm{ft}/ \mathrm{s}$ & speed of sound at 20,000 $\mathrm{ft}$ \\
$d = 0.75$ $\mathrm{ft}$ & diameter &
$I_y = 182.5$ $\mathrm{slug} \cdot \mathrm{ft}^2$ & pitch moment of inertia \\
\end{tabular}
\end{center}
The goal is to find a controller such that the commanded acceleration maneuvers $n_c$ are tracked and such that the physical limitations of the fin actuator are not exceeded. Precisely, the objectives are:
\begin{itemize}
\item rise-time less than $0.35$ $\mathrm{s}$,
steady state error less than $1\%$ and
overshoot less than $10\%$.
\item tail fin deflection less than $25$ $\mathrm{deg}$ and
tail fin deflection rate less than $25$ $\mathrm{deg}/ \mathrm{s}$ per commanded $g$.
\end{itemize}
We assume at first that $\alpha$, $n_c - n$ and $M$ are available for control and, similarly as in \cite{BalPac92, Hel95, SchNji97, PrePos08}, design a gain-scheduling controller. The latter controller will depend in a nonlinear fashion on the parameters $\alpha$ and $M$ that appear in \eqref{RS::exa::mis}.
To this end we can rewrite \eqref{RS::exa::mis} as
\begin{equation*}
\mat{c}{\dot \alpha(t) \\ \dot q(t) \\ \hline z(t) \\ \hdashline n(t) \\ \alpha(t)}
={\small \mat{cc|ccccccc:c}{
0 & 1 & 0 & 0 & 0 & K_\alpha & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & K_q & 0 \\ \hline
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
2c_n & 0 & a_n & b_n & -\tfrac{c_n}{3} & 0 & 0 & 0 & 0 & d_n \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
-7c_m & a_m & b_m & \tfrac{8c_m}{3} & 0 & 0 & 0 & 0 & 0 & d_m \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ \hdashline
0 & 0 & 0 & 0 & 0 & 0 & K_n & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0}}
\mat{c}{\alpha(t) \\ q(t) \\ \hline w(t) \\ \del(t)},\qquad
w(t) = \underbrace{\mat{cc}{|\alpha(t)|~ I_2 & \\ & M(t) I_5}}_{=:\Delta(t)} z(t)
\end{equation*}
which includes the measurable signal $\alpha$ as an output. In particular, the above system is the feedback interconnection of an LTI plant $P$ and a time-varying operator $\Delta$. Following \cite{SchNji97}, we aim to design a controller that ensures that the closed-loop specifications are satisfied by considering the weighted synthesis interconnection as depicted in Fig. \ref{RS::fig::missile_exa}. Here, the fin is driven by $G_\mathrm{act}$, a second-order actuator
\begin{equation*}
\mat{c}{\dot x_\mathrm{act}(t) \\ \hline \del(t) \\ \dot \del(t)}
= \mat{c|c}{A_\mathrm{act} & B_\mathrm{act} \\ \hline
C_\mathrm{act} & 0 \\ C_\mathrm{act}A_\mathrm{act} & C_\mathrm{act}B_\mathrm{act}} \mat{c}{x_\mathrm{act}(t) \\ u(t)}
\teq{ where }
C_\mathrm{act}(sI - A_\mathrm{act})^{-1}B_\mathrm{act}
= \frac{(150)^2}{s^2 + 2\cdot 150\cdot 0.7s + (150)^2}.
\end{equation*}
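A state-space realization of this actuator is readily obtained numerically; a small Python/scipy sketch:
\begin{verbatim}
import numpy as np
from scipy import signal

# Second-order actuator with natural frequency 150 rad/s and damping 0.7.
wn, zeta = 150.0, 0.7
A_act, B_act, C_act, _ = signal.tf2ss([wn**2], [1.0, 2*zeta*wn, wn**2])
# The rate output dot(delta) is realized as C_act A_act x + C_act B_act u.
\end{verbatim}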
The exogenous disturbances $d_1$ and $d_2$ are used to model measurement noise.
The ideal model and weighting filters are given by
\begin{equation*}
G_\mathrm{id}(s) = \frac{144(1 -0.05s)}{s^2 + 19.2s+144},\quad
W_e(s) = \frac{0.5s + 17.321}{s + 0.0577},\quad
W_\delta(s) = \frac{1}{19},\quad
W_{\dot \delta}(s) = \frac{1}{25} \teq{ and }
W_{d_1} = W_{d_2} = 0.001.
\end{equation*}
Disconnecting the controller $\Delta \star K$ and, e.g., using the Matlab command \texttt{sysic} yields a system with description \eqref{RS::eq::sys_of} with $d := \col(n_c, W_{d_1}d_1, W_{d_2}d_2)$, $e := \col(W_e(n_\mathrm{id} - n), n, W_\delta \delta, W_{\dot \delta} \delta)$, $y := \col(n_c - n, \alpha)$ and
\begin{equation*}
\Vb := \{\diag(\del_1 I_2, \del_2 I_5)~:~ \del_1 \in [0, \pi/9] \text{ ~and~ } \del_2 \in [2, 4] \}.
\end{equation*}
We can hence again use a multiplier set similar to the one in \eqref{RS::eq::multiplier_set_for_int}, which is closely related to the set of D-G scalings, and employ our analysis and design results. For the synthesis of a gain-scheduling controller we make use of Theorem \ref{RS::theo::gs}; note that for D-G scalings it is possible to use the scheduling function $S(\Del) = \Del = \mathrm{id}(\Del)$.
Applying Theorem \ref{RS::theo::gs} yields an upper bound on the optimal closed-loop energy gain of $\ga_{\mathrm{gs}} = 2.23$ and Fig. \ref{RS::fig::BodeGS} illustrates Bode plots of the corresponding closed-loop system with the resulting gain-scheduling controller for frozen values of $\Delta$. Finally, simulations of the nonlinear closed-loop systems are given in Fig. \ref{RS::fig::MISGS}. Here, we consider trajectories for several (almost randomly chosen) Mach numbers
\begin{equation}
M_k(t) = \mathrm{sat}(4 - \tfrac{(t + 1.25(k-1))}{5})
\teq{ with } \mathrm{sat}(t) = \begin{cases}
4 & t \geq 4 \\
t & t \in [2, 4] \\
2 & t \leq 2
\end{cases}
\teq{ for all }t \geq 0
\label{RS::eq::exa::Machnumbers}
\end{equation}
and we let both disturbances $d_1$ and $d_2$ be zero.
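For reproducibility, the saturation and the Mach number profiles \eqref{RS::eq::exa::Machnumbers} translate into the following small Python functions:
\begin{verbatim}
import numpy as np

def sat(t):
    # Saturation to the admissible Mach range [2, 4].
    return np.clip(t, 2.0, 4.0)

def M(k, t):
    # Mach number profile M_k(t) from the equation above.
    return sat(4.0 - (t + 1.25 * (k - 1)) / 5.0)
\end{verbatim}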
We observe that the specifications are met for most of those Mach numbers apart from the constraint on the tail fin deflection rate, which is not well-captured by $H_\infty$ criteria. Of course, the performance of the designed controller can be improved by readjusting the weights, but this is not our intention at this point.
\vspace{1ex}
Instead, let us now assume that the Mach number $M$ cannot be measured online and that only $\alpha$ and $n_c - n$ are available for control. Hence, we now aim to design a controller that is robust against variations in $M$, but benefits from the fact that we can take measurements of the parameter $\alpha$ that enters \eqref{RS::exa::mis} in a nonlinear fashion.
This boils down to the synthesis of a robust gain-scheduling controller, which is more general and more challenging than the design of robust controllers as considered in this section.
However, as emphasized in Remark \ref{RS::rema::extensions} it is fortunately not difficult to extend the dual iteration in order to cope with such a design as well.
Indeed, the resulting iteration yields after five iterations an upper bound of $\ga^5 = 3.30$ on the optimal closed-loop robust energy gain which is actually not that far away from the bound achieved by the gain-scheduling design.
Fig. \ref{RS::fig::BodeRGS} illustrates the resulting closed-loop frequency responses for several frozen values of $\Del$ and Fig. \ref{RS::fig::MISRGS} depicts simulations of the nonlinear closed-loop system for several Mach numbers as in \eqref{RS::eq::exa::Machnumbers}.
In particular, we observe that the tracking behavior degrades which is not surprising as the controller takes fewer measurements into account.
Finally, note that we can of course also view the whole $\Del$, i.e., $|\alpha|$ and $M$, as an uncertainty and design a robust controller based on the dual iteration as discussed in this section. For this specific example this even leads after five iterations to an upper bound of $\ga^5 = 3.32$ and a closed-loop behavior that is almost identical to the one corresponding to the previous design. Note that this is in general not the case, as a robust controller utilizes less information than a robust gain-scheduling controller.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{mmcs}
\end{center}
\caption{Interconnection structure for gain-scheduled synthesis.}
\label{RS::fig::missile_exa}
\end{figure}
\begin{figure}
\begin{center}
\begin{minipage}{0.48\textwidth}
\begin{center}
\includegraphics[width=\textwidth, trim = 05 95 30 20, clip]{Bode1GS}
\end{center}
\end{minipage}
\hspace{2ex}
\begin{minipage}{0.48\textwidth}
\begin{center}
\includegraphics[width=\textwidth, trim = 05 95 30 20, clip]{Bode2GS}
\end{center}
\end{minipage}
\end{center}
\vspace{-2ex}
\caption{Bode plots of the unweighted closed-loop interconnection with a gain-scheduling controller resulting from Theorem \ref{RS::theo::gs} and for frozen values of $\Del$.}
\label{RS::fig::BodeGS}
\end{figure}
\begin{figure}
\begin{center}
\begin{minipage}{0.48\textwidth}
\begin{center}
\includegraphics[width=\textwidth, trim = 05 95 30 20, clip]{Bode1RGS}
\end{center}
\end{minipage}
\hspace{2ex}
\begin{minipage}{0.48\textwidth}
\begin{center}
\includegraphics[width=\textwidth, trim = 05 95 30 20, clip]{Bode2RGS}
\end{center}
\end{minipage}
\end{center}
\vspace{-2ex}
\caption{Bode plots of the unweighted closed-loop interconnection with a robust gain-scheduling controller resulting from the dual iteration and for frozen values of $\Del$.}
\label{RS::fig::BodeRGS}
\end{figure}
\begin{figure}
\begin{center}
\begin{minipage}{0.48\textwidth}
\begin{center}
\includegraphics[width=\textwidth, trim = 15 185 30 5, clip]{MIS1_GS}
\vspace{1ex}
\includegraphics[width=\textwidth, trim = 15 170 30 5, clip]{MIS3_GS}
\end{center}
\end{minipage}
\hspace{2ex}
\begin{minipage}{0.48\textwidth}
\begin{center}
\includegraphics[width=\textwidth, trim = 15 185 30 5, clip]{MIS2_GS}
\vspace{1ex}
\includegraphics[width=\textwidth, trim = 15 170 30 5, clip]{MIS4_GS}
\end{center}
\end{minipage}
\end{center}
\vspace{-2ex}
\caption{Closed-loop trajectories for the gain-scheduling controller and for several Mach numbers $M_1, \dots, M_7$ as in \eqref{RS::eq::exa::Machnumbers}.}
\label{RS::fig::MISGS}
\end{figure}
\begin{figure}
\begin{center}
\begin{minipage}{0.48\textwidth}
\begin{center}
\includegraphics[width=\textwidth, trim = 15 185 30 5, clip]{MIS1_RGS}
\vspace{1ex}
\includegraphics[width=\textwidth, trim = 15 170 30 5, clip]{MIS3_RGS}
\end{center}
\end{minipage}
\hspace{2ex}
\begin{minipage}{0.48\textwidth}
\begin{center}
\includegraphics[width=\textwidth, trim = 15 185 30 5, clip]{MIS2_RGS}
\vspace{1ex}
\includegraphics[width=\textwidth, trim = 15 170 30 5, clip]{MIS4_RGS}
\end{center}
\end{minipage}
\end{center}
\vspace{-2ex}
\caption{Closed-loop trajectories for the robust gain-scheduling controller and for several Mach numbers $M_1, \dots, M_7$ as in \eqref{RS::eq::exa::Machnumbers}.}
\label{RS::fig::MISRGS}
\end{figure}
\section{Conclusions}
We demonstrate that the dual iteration, together with the linear fractional representation framework, is a powerful and flexible tool to tackle various challenging and interesting non-convex controller synthesis problems, especially if compared to other heuristic approaches such as the classical D-K iteration. The iteration, as introduced in \cite{Iwa97} for the design of stabilizing static output-feedback controllers, heavily relies on the elimination lemma. We extend those ideas to the synthesis of static $H_\infty$, static $H_2$ and robust $H_\infty$ output-feedback controllers in a common fashion. As the icing on the cake, we demonstrate in terms of a missile autopilot design example that a seamless extension to robust gain-scheduling output-feedback $H_\infty$ design is possible as well.
Since the underlying elimination lemma is not applicable for numerous non-convex design problems, such as multi-objective controller design, we also provide a novel interpretation of the individual steps of the dual iteration. The latter interpretation can help to extend the dual iteration for such situations as well. This is exemplified by considering the synthesis of static output-feedback $H_\infty$ controllers, which guarantee that the closed-loop poles are contained in an a priori specified LMI region.
Future research could be devoted to extensions of the dual iteration to robust output-feedback design based on more elaborate analysis results, e.g., results relying on parameter-dependent Lyapunov functions or on integral quadratic constraints with dynamic multipliers. It would also be very interesting and fruitful to extend the iteration to static or robust output-feedback design for hybrid and switched systems.
\section*{Acknowledgments}
This project has been funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC 2075 -- 390740016, which is gratefully acknowledged by the authors.
\section{Introduction}
\label{sec:intro}
Humans have the ability to recognize 3D objects in a wide variety of positions and orientations (poses)~\cite{shepard1971mental}, even if objects are occluded.
We also seem to prefer certain \emph{canonical views}~\cite{cutzu1994canonical}, with evidence indicating that an object in a new pose is \emph{mentally rotated} to a canonical pose~\cite{tarr1989mental} to aid recognition.
Inspired by this, we aim to build scene understanding methods that reason about objects in different poses by learning to map them to a canonical pose without explicit supervision.
Given a 3D object shape, the goal of \textbf{instance-level canonicalization} is to find an \emph{equivariant frame} of reference that is consistent relative to the geometry of the shape under different 3D poses.
This problem can be solved if we have shape correspondences and a way to find a distinctive equivariant frame (\emph{e.g.},~PCA).
However, it becomes significantly harder if we want to operate on different 3D poses of different object instances
that lack correspondences.
This \textbf{category-level canonicalization} problem has received much less attention despite tremendous interest in category-level 3D object understanding~\cite{wu20153d, choy20163d, park2019deepsdf, mescheder2019occupancy, groueix2018papier, deprelle2019learning, mildenhall2020nerf}.
Most methods rely on data augmentation~\cite{liu2020fg}, or manually annotated datasets~\cite{chang2015shapenet, wu20153d} containing instances that are consistently positioned and oriented within each category~\cite{wang2019normalized, tatarchenko2019single, sridhar2019multiview}.
This has prevented broader application of these methods to un-canonicalized data sources, such as online model collections~\cite{UnityAss22:online}.
The problem is further exacerbated by the difficulty of canonicalizing partial shape observations (\emph{e.g.},~from depth maps~\cite{reizenstein2021common}), or symmetric objects that require an understanding of inter-instance part relationships.
Recent work addresses these limitations using weakly-supervised~\cite{sajnani2021draco, gu2020weaklysupervised} or self-supervised learning~\cite{novotny2019c3dpo, sun2020canonical, spezialetti2020learning, rotpredictor}, but cannot handle partial 3D shapes, or is limited to canonicalizing only orientation.
We introduce \textbf{ConDor\xspace}, a method for self-supervised category-level \underline{\textbf{C}}an\textbf{\underline{on}}icalization of the 3\underline{\textbf{D}} p\underline{\textbf{o}}se of pa\underline{\textbf{r}}tial shapes.
It consists of a neural network that is trained on an un-canonicalized collection of 3D point clouds with inconsistent 3D poses.
During inference, our method takes a full or partial 3D point cloud of an object at an arbitrary pose, and outputs a canonical rotation frame and translation vector.
To enable operation on instances from different categories, we build upon Tensor Field Networks (TFNs)~\cite{thomas2018tensor}, a 3D point cloud architecture that is equivariant to 3D rotation and point permutation, and invariant to translation.
To handle partial shapes, we use
a two-branch (Siamese) network with training data that simulates partiality through shape slicing or camera projection.
We introduce several losses to help our method learn to canonicalize 3D pose via self-supervision.
A surprising feature of our method is the (optional) ability to learn consistent part co-segmentation~\cite{chen2019bae} across instances without any supervision (see \cref{fig:teaser}).
Since interest in canonicalization is recent, \textbf{standardized metrics} for evaluating such methods have not yet emerged.
We therefore propose four new metrics that are designed to evaluate the consistency of instance- and category-level canonicalization, as well as consistency with manually pre-canonicalized datasets.
We extensively evaluate the performance of our method using these metrics by comparing with baselines and other methods~\cite{sun2020canonical, spezialetti2020learning}.
Quantitative and qualitative results on common shape categories show that we outperform existing methods and produce consistent pose canonicalizations for both full and partial 3D point clouds.
We also demonstrate previously difficult \textbf{applications} enabled by our method such as operation on partial point clouds from depth maps, keypoint annotation transfer, and expanding the size of existing datasets.
To sum up, our contributions include:
\begin{packed_itemize}
\item A self-supervised method to canonicalize the 3D pose of full point clouds from a variety of object categories.
\item A method that can also handle \textbf{partial} 3D point clouds.
\item New metrics to evaluate canonicalization methods, extensive experiments, and new applications.
\end{packed_itemize}
\section{Background}
\label{sec:background}
\parahead{3D Pose Canonicalization\label{sec:pose_canon}}
The 3D pose of an object refers to its 3D position and orientation in space specified by an intrinsic object-centric reference frame (defined by an origin and orthonormal rotation).
Having a consistent intrinsic frame across different shapes is critical in many problems~\cite{esteves2018learning, chen2019clusternet, zhang2019rotation, poulenard2021functional}.
We denote such a consistent intrinsic frame as a \textbf{canonical frame}.
This frame transforms together with the object, \emph{i.e.},~it is equivariant.
The object pose is constant relative to the canonical frame -- we call this our \textbf{canonical pose}.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{content/images/canon/canon.jpg}
\vspace{-0.4in}
\caption{A canonical frame visualized for (\emph{\textbf{top}})~the same instance in different 3D poses, and (\emph{\textbf{bottom}}) different instances in different 3D poses. Partial shapes with amodal frame shown in last column.
}
\vspace{-0.19in}
\label{fig:pose_canon}
\end{figure}
In \textbf{instance-level 3D pose canonicalization}, our goal is to find a consistent canonical frame across different poses of the same object instance (\cref{fig:pose_canon}, top).
In \textbf{category-level 3D pose canonicalization}, we want a canonical frame that is consistent with respect to the geometry and local shape across different object instances (\cref{fig:pose_canon}, bottom).
Any equivariant frame that is consistent across shapes
is a valid canonical frame -- this allows us to compare canonicalization with manually-labeled ground truth (see \cref{sec:canonicalization_metrics}).
In addition to full shapes, we also consider partial shape canonicalization for which we define an \emph{amodal} canonical frame as shown in \cref{fig:pose_canon}.
\parahead{Tensor Field Networks\label{sec:tfn}}
Our method estimates a canonical frame for 3D shapes represented as point clouds.
For this task, we use Tensor Field Networks \cite{thomas2018tensor} (TFNs), a 3D architecture that is equivariant to point permutation and rotation, and invariant to translation.
Given a point cloud $X \in \mathbb{R}^{3 \times K}$ and an integer $\ell \in \mathbb{N}$ (the \emph{type}), a TFN can produce global (type $\ell$) feature vectors of dimension $2\ell + 1$ stacked in a matrix $F^{\ell} \in \mathbb{R}^{(2\ell + 1) \times C}$, where $C$ is a user-defined number of channels.
$F_{:,j}^{\ell}(X)$ satisfies the equivariance property
$F_{:,j}^{\ell}(RX) = D^{\ell}(R)F_{:,j}^{\ell}(X)$,
where $D^{\ell}: \mathrm{SO}(3) \rightarrow \mathrm{SO}(2\ell + 1)$ is the so-called Wigner matrix (of type $\ell$) \cite{chirikjian2001engineering, knapp2001representation, lang2020wigner}.
Please see \cite{thomas2018tensor, anderson2019cormorant, weiler20183d, poulenard2021functional} for details.
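For intuition, the type-$1$ case is easy to check numerically, since $D^1(R)$ is (up to a fixed change of coordinates) the rotation matrix $R$ itself. The following minimal NumPy sketch uses a toy linear pooling $F^1(X) = Xw$ as a stand-in for a learned type-$1$ TFN feature; it is illustrative only and is not our trained network:
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

K = 64
X = np.random.randn(3, K)        # point cloud, columns are points
w = np.random.randn(K)           # fixed pooling weights

def f1(X):
    # Toy type-1 feature: a weighted sum of the points is
    # rotation equivariant, f1(R X) = R f1(X).
    return X @ w                 # shape (3,)

R = Rotation.random().as_matrix()    # random element of SO(3)
assert np.allclose(f1(R @ X), R @ f1(X))
\end{verbatim}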
\section{Method}
\label{sec:method}
Given a point cloud $X \in \mathbb{R}^{3 \times K}$ denoting a full or partial shape from a set of non-aligned shapes, our goal is to
estimate its rotation $\mathcal{R}(X)$ (canonical frame) sending $X$ to a canonical pose. For a partial shape $Y \subset X$ we also learn a translation $\mathcal{T}(Y)$ aligning $Y$ with $X$ in the canonical frame.
We achieve this by training a neural network on 3D shapes in a self-supervised manner (see \cref{fig:pipeline}).
\setlength{\belowdisplayskip}{1pt} \setlength{\belowdisplayshortskip}{1pt}
\setlength{\abovedisplayskip}{5pt} \setlength{\abovedisplayshortskip}{5pt}
\subsection{Learning to Canonicalize Rotation}
\label{sec:rotation}
We first discuss the case of canonicalizing 3D rotation for full shapes.
Given a point cloud $X$, our approach estimates a rotation-invariant point cloud $X^c$, and an equivariant rotation $E$ that rotates $X^c$ to $X$.
Note that for full shapes, translation can be canonicalized using mean centering~\cite{novotny2019c3dpo}, but this does not hold for partial shapes.
\parahead{Rotation Invariant Point Cloud/Embedding}
To estimate a rotation-invariant point cloud, we build on top of a permutation- and rotation-equivariant, translation-invariant neural network architecture: Tensor Field Networks (TFNs)~\cite{thomas2018tensor} with equivariant non-linearities~\cite{poulenard2021functional}.
Given $X$, we use a TFN~\cite{poulenard2021functional} to produce global \textbf{equivariant features}
$F^{\ell}$, with columns $F^{\ell}_{:,j}$ as described in \cref{sec:tfn}.
\begin{figure*}[ht!]
\centering
\includegraphics[width=\textwidth]{content/images/arch/ConDor_j_v4.png}
\vspace{-0.3in}
\caption{\textbf{ConDor}. (\emph{\textbf{left}})~Our method learns to canonicalize rotation by estimating an equivariant pose $E(X)$ and an invariant point cloud $X^c$ of an input shape $X$.
A self-supervision loss ensures that the input and transformed canonical shapes match.
(\emph{\textbf{right}})~To handle translation in partial shapes, we train a two-branch (Siamese) architecture, one taking the full shape and the other taking an occluded (\emph{e.g.},~via slicing) version of the full shape as input.
Various losses ensure that the feature embeddings of the full and partial shapes match.
We predict the amodal barycenter of the full shape $\mathcal{T}(\mathcal{O}(X))$ from the partial shape to canonicalize for position.
}
\label{fig:pipeline}
\vspace{-0.4cm}
\end{figure*}
The central observation of \cite{poulenard2021functional} is that the features $F$ have the same rotation equivariance property as coefficients of spherical functions in the spherical harmonics basis, and can therefore be treated as such.
We exploit this property by embedding the shape using the spherical harmonics basis and using the global TFN features $F$ as coefficients of this embedding.
Since the input to the spherical harmonics embedding and the coefficients rotate together with the input shape, they can be used to define a rotation and translation \textbf{invariant embedding} of the shape.
Formally, let $Y^{\ell}(x) \in \mathbb{R}^{2\ell+1}$ be the vector of degree $\ell$ spherical harmonics which are homogeneous polynomials defined over $\mathbb{R}^3$.
We define a rotation invariant embedding of the shape as the dot products
\begin{equation}
H^{\ell}_{ij} := \langle F^{\ell}_{:,j}, Y^{\ell}(X_i) \rangle,
\label{eq:inv_emb}
\end{equation}
where $i$ indexes a single point in the point cloud, and $j$ is the channel index as in \cref{sec:tfn}. Both sides of the dot product are rotated by the same Wigner matrix when the input point cloud $X$ is rotated, making $H$ invariant to rotations of $X$. The input point cloud is mean-centered to achieve invariance to translation.
Note that we can use any functional basis of the form $x \mapsto (\varphi^r(\Vert x \Vert)Y^{\ell}(x))_{r\ell}$, where the $(\varphi^r)_r$ are real-valued functions, to define $H$.
We use the rotation invariant embedding corresponding to $\ell = 1$ (degree 1) to produce a 3D \textbf{invariant shape} through a linear layer on top of $H$.
Note that degree 1 spherical harmonics are the $x,y,z$ coordinates of the input point cloud since $Y^1(x) = x$.
As we show in \cref{sec:segmentation}, other choices for $\ell$ enable us to learn consistent co-segmentation without supervision.
The 3D rotation invariant shape is given by:
\begin{equation}
X^c_{i} := \sum_j W_{:,j}H^1_{ij} = W (F^{1})^\top X_i.
\label{eq:eq_3d_inv_embedding}
\end{equation}
We obtain our canonical frame as described in \cref{sec:tfn} as $\mathcal{R}(X) = W (F^{1})^{\top}$ where $W$ is the learnable weights matrix of the linear layer.
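For $\ell = 1$ the whole construction fits in a few lines. In the sketch below, random matrices stand in for the learned type-$1$ features $F^1$ and the weights $W$; the point is that the invariance of $X^c$ holds for \emph{any} such equivariant $F^1$, not only the trained one:
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

K, C = 64, 8
X = np.random.randn(3, K)        # input point cloud
A = np.random.randn(K, C)        # stand-in for the TFN weights
W = np.random.randn(3, C)        # learnable linear layer

def F1(X):
    # Toy type-1 equivariant features: F1(R X) = R F1(X).
    return X @ A                 # shape (3, C)

def canonical(X):
    # Frame R(X) = W F1(X)^T and canonical shape X^c = R(X) X.
    Rx = W @ F1(X).T             # (3, 3), equivariant frame
    return Rx @ X                # rotation-invariant shape

R = Rotation.random().as_matrix()
assert np.allclose(canonical(R @ X), canonical(X))
\end{verbatim}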
\parahead{Rotation Equivariant Embedding}
Next, we seek to find an equivariant rotation that transforms $X^c$ to $X$.
In addition to the equivariant features $F$, our TFN also outputs a 3D equivariant frame $E$, which we optimize to be a rotation matrix.
$E$ satisfies the equivariance relation $E(R.X) = RE(X)$ so that the point cloud $E(X)X^c$ is rotation equivariant. Note that we could have chosen $E(X) = \mathcal{R}(X)^{\top}$ but we instead choose to learn $E(X)$ independently as this approach generalizes to the case of non-linear embeddings (\emph{e.g.},~with values other than $\ell = 1$ in \cref{eq:eq_3d_inv_embedding}) which we use for unsupervised segmentation in \cref{sec:segmentation}.
Using $E$, we can transform our 3D invariant embedding $X^c$ back to the input equivariant embedding and compare it to the input point cloud.
To handle situations with high occlusion and symmetric objects, we estimate $P$ equivariant rotations and choose the frame that minimizes the $L^2$ norm between corresponding points in the input and the predicted invariant shape.
\subsection{Learning to Canonicalize Translation}
\label{sec:translation}
Next, we discuss canonicalizing 3D translation for \textbf{partial point clouds}, \emph{e.g.},~acquired from depth sensors or LiDAR.
As noted, translation canonicalization for full shapes is achieved using mean centering~\cite{novotny2019c3dpo}.
Thus, our approach in \cref{sec:rotation} is sufficient for 3D pose canonicalization of full shapes. However, partial shapes can have different centroids depending on how the shape was occluded.
To address this issue, we extend our approach to additionally find a \textbf{rotation-equivariant translation} $\mathcal{T} \in \mathbb{R}^3$ that estimates, from the mean-centered partial point cloud, the offset between the barycenters of the full and partial shapes; this translation aligns the partial shape with the full shape in the input frame.
In practice, we operationalize the above idea in a two-branch Siamese architecture as shown in \cref{fig:pipeline}.
We slice the input point cloud to introduce synthetic occlusion.
We train the network with losses that enforce semantic consistency between the full and the partial point cloud. Furthermore, our network predicts an amodal translation vector that captures the barycenter of the full shape from the partial input shape.
\subsection{Unsupervised Co-segmentation}
\label{sec:segmentation}
A surprising finding is that our method can be used for unsupervised part co-segmentation~\cite{chen2019bae} of full and partial shapes with little modification.
This result is enabled by finding the rotation invariant embedding $H$ in \cref{eq:inv_emb} corresponding to all $ \ell \geqslant 0$ to produce a \textbf{non-linear invariant embedding}.
To obtain a consistent rotation invariant part segmentation, we segment the input shape into $N$ parts by learning an MLP on top of the rotation invariant embedding.
The part label of each point in the input point cloud is given by
$S_i := \mathrm{softmax}[\mathrm{MLP}(H)_i]$.
Results visualized in the paper include these segmentations as colored labels.
Please see the supplementary material for more details.
\section{Self-Supervised Learning}
\subsection{Loss Functions}
\label{sec:losses}
A key contribution of our work is to demonstrate that 3D pose canonicalization can be achieved through self-supervised learning as opposed to supervised learning from labeled datasets~\cite{shapenet2015,wu20153d}.
We now list the loss functions that enable this.
Additionally, we describe losses that prevent degenerate results, handle symmetric shapes, and enable unsupervised segmentation.
We begin with full shapes.
\parahead{Canonical Shape Loss}
Our primary self-supervision signal comes from the canonical shape loss, which minimizes the $L^2$ distance between the rotation-invariant point cloud $X^c$, transformed by the equivariant rotation $E$, and the input point cloud $X$.
It is worth noting that $X^c$ and $X$ are in correspondence because our method is permutation equivariant and we extract point-wise embeddings.
For each point $i$ in a point cloud of size $K$, we define the canonical shape loss to be
\begin{align}
\mathcal{L}_{canon} = \frac{1}{K}\sum_i \Vert EX^c_{i} - X_{i} \Vert_2.
\label{eq:l2_loss}
\end{align}
We empirically observe that our estimation of $E$ can be flipped 180$^\circ$ or $X^c$ can become a degenerate shape when the object class has symmetry or heavy occlusions.
To mitigate this issue, we estimate $P$ equivariant rotations $E_p$ and choose the one that minimizes the above loss.
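In code, the multi-frame version of this loss is simply a minimum over per-frame errors; a minimal NumPy sketch, where the tensor shapes are our own convention:
\begin{verbatim}
import numpy as np

def canonical_shape_loss(E, Xc, X):
    # E:  (P, 3, 3) candidate equivariant rotations
    # Xc: (3, K)    predicted invariant shape
    # X:  (3, K)    input point cloud, in correspondence with Xc
    per_frame = np.linalg.norm(E @ Xc - X, axis=1).mean(axis=-1)
    best = per_frame.argmin()    # frame minimizing the error
    return per_frame[best], best
\end{verbatim}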
\parahead{Orthonormality Loss}
The equivariant rotation $E$ estimated by our method must be a valid rotation in $\mathrm{SO}(3)$, but this cannot be guaranteed by the TFN.
We therefore add a loss to constrain $E$ to be orthonormal by minimizing its difference to its closest orthonormal matrix.
We achieve this using the SVD decomposition $E = U\Sigma V^{\top}$ and enforcing unit singular values with the loss
\begin{align}
\begin{split}
\mathcal{L}_{ortho} = \Vert UV^{\top} - E\Vert_2.
\label{eq:orth_loss}
\end{split}
\end{align}
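The projection onto the closest orthonormal matrix is the classical Procrustes step; as a sketch:
\begin{verbatim}
import numpy as np

def ortho_loss(E):
    # U V^T is the orthonormal matrix closest to E = U S V^T,
    # so the loss measures how far E is from O(3).
    U, _, Vt = np.linalg.svd(E)
    return np.linalg.norm(U @ Vt - E)
\end{verbatim}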
\parahead{Separation Loss}
When estimating $P$ equivariant rotations $E_p$, our method could learn a degenerate solution where all $E_p$ are similar.
To avoid this problem, we introduce a separation loss that encourages the network to estimate different equivariant rotations as
\begin{align}
\mathcal{L}_{sep} = -\frac{1}{9P}\sum_{i \neq j} \left|\left| E_i - E_j\right|\right|_2.
\end{align}
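A direct transcription of this loss, assuming the $P$ frames are stacked in a single tensor:
\begin{verbatim}
import numpy as np

def separation_loss(E):
    # E: (P, 3, 3); encourage the P candidate frames to differ.
    P = E.shape[0]
    diff = (E[:, None] - E[None, :]).reshape(P, P, 9)
    d = np.linalg.norm(diff, axis=-1)  # diagonal terms are zero,
    return -d.sum() / (9 * P)          # so this sums over i != j
\end{verbatim}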
\parahead{Restriction Loss}
We next turn our attention to partial shapes.
Similar to full shapes, we compute the canonical shape, orthonormality and separation losses.
We assume that a partial shape is a result of a cropping operator $\mathcal{O}$ that acts on a full point cloud $X$ to select points corresponding to a partial version $\mathcal{O}(X) \subseteq X$.
In practice, our cropping operator is slicing or image projection (see \cref{sec:training}).
During training, we train two branches of our method, one with the full shape and the other with a partial shape generated using a random sampling of $\mathcal{O}$.
We then enforce that the invariant embedding for partial shapes is a restriction of the invariant embedding of the full shape $X$ using the loss
\vspace{-0.2in}
\begin{align}
\mathcal{L}_{rest} = \frac{1}{|\mathcal{S}|}\sum_{ i \in \mathcal{S}} \Vert \widehat{\mathcal{O}[X^c]}_i - \left(\widehat{\mathcal{O}[X]}^{c}\right)_i\Vert_2^2,
\label{eq:frame_restriction_loss}
\end{align}
where $\mathcal{S}$ is the set of valid indices of points in both $X$ and $\mathcal{O}(X)$, and the $\widehat{\textrm{hat}}$ indicates mean-centered point clouds.
During inference, we do not require the full shape and can operate only with partial shapes.
Empirically, we observe that our method generalizes to different cropping operations between training and inference (see \cref{sec:ablation}).
\parahead{Amodal Translation Loss}
Finally, to align the mean-centered partial shape with the full shape, we estimate the barycenter $\overline{\mathcal{O}[X]}$ of the occluded shape within the centered full shape, from the partial shape alone, using a rotation-equivariant translation vector $\mathcal{T}(\widehat{\mathcal{O}[X]})$, by minimizing
\begin{align}
\mathcal{L}_{amod} = \Vert \mathcal{T}(\widehat{\mathcal{O}[X]}) - \overline{\mathcal{O}(X)} \Vert^2_2.
\end{align}
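Both partial-shape losses are straightforward once the correspondence indices $\mathcal{S}$ between $X$ and $\mathcal{O}(X)$ are known (they are, since we generate the crops ourselves). A sketch, with variable names that are our own:
\begin{verbatim}
import numpy as np

def restriction_loss(Xc_full, Xc_part, idx):
    # Xc_full: (3, K) canonicalized full shape; Xc_part: (3, K')
    # canonicalized partial shape; idx: indices of the crop in X.
    crop = Xc_full[:, idx]                       # O[X^c]
    crop = crop - crop.mean(axis=1, keepdims=True)
    part = Xc_part - Xc_part.mean(axis=1, keepdims=True)
    return ((crop - part) ** 2).sum(axis=0).mean()

def amodal_loss(T_pred, X, idx):
    # T_pred: (3,) predicted amodal translation, compared to the
    # true barycenter of the crop in the centered full shape.
    return ((T_pred - X[:, idx].mean(axis=1)) ** 2).sum()
\end{verbatim}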
\parahead{Unsupervised Part Segmentation Losses}
A surprising finding in our method is that we can segment objects into parts consistently across instances without any supervision (see \cref{fig:teaser}).
This is enabled by interpreting the higher-degree invariant embeddings $H^{\ell}$ as features for unsupervised segmentation.
Our losses are based on the localization and equilibrium losses of \cite{sun2020canonical}.
We refer the reader to \cite{sun2020canonical} and the supplementary document for details on these losses.
Note that \cite{sun2020canonical} need to perform segmentation to enable rotation canonicalization, while it is optional for us.
\subsection{Network Architecture \& Training}
\label{sec:training}
Our method is trained on a collection of un-canonicalized shapes $\mathcal{X}$, and partial shapes randomly generated using a suitable operator $\mathcal{O}$.
We report two kinds of partiality: slicing and image projection (\emph{i.e.},~depth maps).
We borrow our TFN architecture from \cite{poulenard2021functional} and use the ReLU non-linearity in all layers.
We use 1024 and 512 points for full and partial point clouds, respectively.
Our method predicts 5 canonical frames for every category.
Our models are trained for 45,000 iterations for each category with the Adam~\cite{kingma2014adam} optimizer with an initial learning rate of $6 \times 10^{-4}$.
We set a step learning rate scheduler that decays our learning rate by a factor of $10^{-1}$ every 15,000 steps.
Our models are trained on Linux with Nvidia Titan V GPUs -- more details in the supplementary document.
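For reference, the optimization schedule corresponds to the following sketch, written in PyTorch purely for illustration (the model and batch are placeholders; we make no claim about the framework of the released code):
\begin{verbatim}
import torch

model = torch.nn.Linear(3, 3)       # placeholder for the backbone
opt = torch.optim.Adam(model.parameters(), lr=6e-4)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=15000,
                                        gamma=0.1)
for step in range(45000):
    X = torch.randn(16, 3)          # placeholder batch of size 16
    loss = model(X).pow(2).mean()   # placeholder for our losses
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step()
\end{verbatim}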
\section{Experiments}
\label{sec:expt}
We present quantitative and qualitative results to compare our method with baselines and existing methods, justify design choices, and demonstrate applications.
\input{content/text/tables/full_shapes}
\parahead{Datasets}
For full shapes, we use un-canonicalized shapes from ShapeNet (Core)~\cite{chang2015shapenet} and ModelNet40~\cite{wu20153d}.
For ShapeNet, our data split~\cite{deprelle2019learning, sun2020canonical} has 31,747 train shapes, and 7,943 validation shapes where each shape is a 3D point cloud with 1024 points sampled using farthest point sampling.
The shapes are from 13 classes: airplane, bench, cabinet, car, chair, monitor, lamp, speaker, firearm, couch, table, cellphone, and watercraft.
For ModelNet40~\cite{wu20153d}, we use 40 categories with 12,311 shapes (2,468 test).
For partial shapes, we either randomly slice shapes from the above datasets, or we use the more challenging ShapeNetCOCO dataset~\cite{sridhar2019multiview} that contains
objects viewed from multiple camera angles and mimics occlusions from depth sensors.
While all these datasets are already pre-canonicalized, we use this information \textbf{only for evaluation} -- our method is trained on randomly transformed un-canonicalized shapes $X \in \mathcal{X}$ from these datasets.
\input{content/text/canonicalization}
\input{content/text/tables/partial_shapes}
\subsection{Comparisons}
\label{sec:evaluation_full}
We report comparisons on canonicalizing both full and partial shapes.
Only the rotation metrics from \cref{sec:canonicalization_metrics} are relevant for full shapes since we assume input shapes are mean-centered without translation differences~\cite{novotny2019c3dpo}.
We report the TE metric for partial shape canonicalization.
Outside of these metrics, we also report indirect evaluations of canonicalization~\cite{sun2020canonical, spezialetti2020learning} on classification.
\parahead{Canonicalization Metrics}
We compare our method with baselines and other methods using our new canonicalization metrics (\cref{sec:canonicalization_metrics}).
For this experiment, we follow previous work~\cite{deprelle2019learning} and choose 13 categories from the ShapeNet, training one model per category as well as a joint model for all categories.
We choose PCA as a baseline -- for each shape we compute the top-3 principal components and use this as an equivariant frame for alignment across instances.
We compare with two methods for rotation canonicalization: Canonical Capsules (CaCa)~\cite{sun2020canonical} and Compass~\cite{spezialetti2020learning}.
Results for full shape canonicalization are shown in \cref{table:canonicalization_metrics_full}.
We evaluate two versions of our method on full shapes, one trained with only full shapes (F) and one trained on both full and partial shapes (F+P).
For the IC metric, both our models outperform the other methods, including the baselines, in almost all categories; PCA underperforms on IC due to the sign ambiguity of its frame.
For the CC metric, where canonicalized shapes of different geometry are compared with each other, our method outperforms the other canonicalization methods, but surprisingly PCA is very close: PCA minimizes the CC metric by aligning shapes along their principal directions, yet this does not yield the correct canonical frame, as shown in \cref{fig:money_shot} (see the supplement for an in-depth discussion).
Qualitative results in \cref{fig:money_shot} show that we perform significantly better than other methods.
Finally, our method outperforms other methods on the GC metric indicating that it could be used to extend the size of existing datasets (see \cref{sec:applications}).
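To make the PCA baseline and its frame ambiguity concrete, a minimal sketch is given below; each principal direction is defined only up to sign, which is exactly the ambiguity that hurts PCA on the IC metric:
\begin{verbatim}
import numpy as np

def pca_frame(X):
    # X: (3, K). Rows of Vt are the principal directions of the
    # mean-centered shape; each row is defined only up to sign,
    # so the resulting frame lies in O(3), not necessarily SO(3).
    Xc = X - X.mean(axis=1, keepdims=True)
    _, _, Vt = np.linalg.svd(Xc @ Xc.T)
    return Vt

def pca_canonicalize(X):
    return pca_frame(X) @ (X - X.mean(axis=1, keepdims=True))
\end{verbatim}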
Next, we discuss results of partial shape canonicalization shown in \cref{table:canonicalization_metrics_partial}.
Since no other method exists for partial shape canonicalization, we modified the training setting of Compass to include slicing augmentation (using $\mathcal{O}$) so that it operates similarly to our F+P method (Compass*).
The training data and occlusion function are identical for all methods.
Different from full shapes, we observe that our method significantly outperforms other methods on all three metrics indicating that our method's design is suited for handling partiality.
We also compute the Translation Error (TE) metric averaged over all our single-category models as \textbf{0.0291}, while it is \textbf{0.0326} for our multi-category model.
For comparison, all our shapes lie within a unit-diagonal cuboid~\cite{shapenet2015}.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{content/images/qualitative/qualitative.jpg}
\vspace{-0.3in}
\caption{(\emph{\textbf{left}})~Qualitative comparison with other methods on 6 \emph{randomly chosen} full shapes.
(\emph{\textbf{center}})~More qualitative results from our method on challenging full/partial car shapes and a variety of full/partial lamp shapes (missing parts only shown for visualization).
The last row (red border) shows failure cases caused due to incorrect canonical translation for partial shapes, or symmetric shapes.
(\emph{\textbf{right}})~Rows 1--2: Application of our method in transferring sparse keypoints from one shape to another.
Row 3: Canonicalization of two depth maps from the ShapeNetCoco~\cite{sridhar2019multiview} dataset showing consistency in canonicalized shapes.
All results rendered using Mitsuba 2~\cite{NimierDavidVicini2019Mitsuba2}.
}
\label{fig:money_shot}
\vspace{-0.1in}
\end{figure*}
\parahead{3D Shape Classification}
We measure 3D shape classification accuracy as an indirect metric of canonicalization following~\cite{spezialetti2020learning}.
We train models with un-canonicalized shapes from all 13 categories.
We augment the PCA baseline, CaCa, Compass, and our full-shape model with PointNet~\cite{qi2017pointnet}, which performs classification on the canonicalized outputs.
We observe that our method (\textbf{74.6\%}) outperforms other methods on classification accuracy: PCA
(64.9\%), CaCa (72.5\%), and Compass (72.2\%).
Please see the supplementary document for comparison on registration.
\subsection{Ablations}
\label{sec:ablation}
We justify the following key design choices: the effect
of increasing amounts of occlusion/partiality, loss
functions (\cref{sec:losses}), and the benefit of multiple frames.
\parahead{Degree of Occlusion/Partiality}
We examine the ability of our model to handle varying amounts of occlusion/partiality for the car category.
Our occlusion function $\mathcal{O}$ occludes shapes to only keep a fraction of the original shape between 25\% and 75\% (\emph{i.e.},~25\% is more occluded than 75\%).
The average over all metrics indicates that our method performs optimally when trained at 50\% occlusion (25\%: 0.0594, 50\%: \textbf{0.0580}, 75\%: 0.0886).
\parahead{Loss Functions}
We evaluate our F+P model on both full
and partial shapes trained
with all losses, without the separation loss $\mathcal{L}_{sep}$, and without the restriction loss $\mathcal{L}_{rest}$. We observe that using both $\mathcal{L}_{sep}$ and $\mathcal{L}_{rest}$ performs best, with the lowest average error of $\textbf{0.0696}$ across all canonicalization metrics over three categories (airplanes, tables, chairs).
\parahead{Multi-Frame Prediction}
We ablate on the number of canonical frames ($1, 3, 5$) predicted by our method to measure its effectiveness on symmetric categories.
We evaluate on two symmetric categories, table, and lamp, and observe (\cref{table:multi_frame_ablation}) that 3 and 5 frames perform better in most cases.
\input{content/text/tables/ablation_multi_frame}
\subsection{Applications}
\label{sec:applications}
ConDor\xspace enables applications that were previously difficult, particularly for category-level object understanding.
First, since our method operates on partial shapes, we can canonicalize objects in \textbf{depth images}.
To validate this, we use depth maps from the ShapeNetCOCO dataset~\cite{sridhar2019multiview} and canonicalize partial point clouds from the depth maps.
\cref{fig:money_shot} (right, row 3) shows an example of depth map canonicalization (see supplementary).
Second, since our method outperforms other methods, we believe it can be used to expand existing canonical datasets with un-canonicalized shapes from the internet -- we show examples of expanding the ShapeNet in the supplementary document.
Finally, we show that ConDor\xspace can be used to transfer sparse keypoint annotations between shape instances.
We utilize the unsupervised part segmentation learned using our method to solve this task (see supplementary).
\cref{fig:money_shot} (right, rows 1--2) shows results of transferring keypoint annotations from one shape to another.
\section{Conclusion}
We introduced ConDor\xspace, a self-supervised method to canonicalize the 3D pose of full and partial 3D shapes.
Our method uses TFNs and self-supervision losses to learn to canonicalize pose from an un-canonicalized shape collection.
Additionally, we can learn to consistently co-segment object parts without supervision. We reported detailed experiments using four new metrics, and new applications.
\parahead{Limitations \& Future Work}
Despite the high quality of our results, we encounter failures (see \cref{fig:money_shot}), primarily with symmetric objects or objects with fine details (\emph{e.g.},~lamps), where the canonical frame is incorrect. We also observed that PCA often performs very well, and sometimes outperforms learned methods on full shapes (we do significantly better on partial shapes). Our method occasionally generates canonicalized shapes flipped along the axis of symmetry due to the prediction of an $\mathrm{O}(3)$ frame.
Our work can be extended to canonicalize purely from partial shapes and perform scale canonicalization.
\parahead{Acknowledgments}
This work was supported by AFOSR grant FA9550-21-1-0214, a Google Research Scholar Award, a Vannevar Bush Faculty Fellowship, ARL grant W911NF2120104, and gifts from the Adobe and Autodesk corporations. We thank the reviewers for their valuable comments.
\vfil
\subsection{Canonicalization Metrics}
\label{sec:canonicalization_metrics}
Most work on canonicalization evaluates performance indirectly on downstream tasks such as segmentation or registration~\cite{sun2020canonical, spezialetti2020learning}.
This makes it hard to disentangle canonicalization performance from task performance.
We contribute four new metrics that measure different aspects of 3D pose canonicalization while disentangling performance from downstream tasks.
The first three of these metrics evaluate rotation assuming mean-centering, while the last metric measures translation errors for partial shapes.
\parahead{Instance-Level Consistency (IC)}
The IC metric is designed to evaluate how well a method performs for canonicalizing the 3D rotation of the \emph{same shape instance}.
For each shape in the dataset, we obtain another copy of it by applying a rotation from $\mathbf{R}$, a user-defined set of random rotations (we use 120 rotations).
We then compute the 2-way Chamfer Distance ($\mathrm{CD}$) between the canonicalized versions of the shapes (denoted with superscript $^c$); we use $\mathrm{CD}$ to handle classes with symmetries, such as tables.
We expect this to be as small as possible for better canonicalization.
The average IC metric is given as:
\begin{align}
\mathrm{IC} := \frac{1}{|\mathcal{X}||\mathbf{R}|} \sum_{X_i \in \mathcal{X}} \sum_{R_j \in \mathbf{R}}\mathrm{CD}[ (R_j.X_i)^c, X_i^c ].\nonumber
\end{align}
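In a minimal NumPy sketch (using one common form of the symmetric Chamfer distance; the rotation set $\mathbf{R}$ is sampled at random):
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

def chamfer(X, Y):
    # X: (3, K), Y: (3, K'); 2-way (symmetric) Chamfer distance.
    d = np.linalg.norm(X[:, :, None] - Y[:, None, :], axis=0)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def instance_consistency(canonicalize, X, n_rot=120):
    Rs = Rotation.random(n_rot).as_matrix()
    Xc = canonicalize(X)
    return np.mean([chamfer(canonicalize(R @ X), Xc) for R in Rs])
\end{verbatim}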
\parahead{Category-Level Consistency (CC)}
The CC metric is designed to evaluate the quality of 3D rotation canonicalization between \emph{different shape instances}.
For each shape $X$ in the dataset, we pick $N$ other shapes to form a set of comparison shapes $\mathcal{N}$.
We then follow a similar approach as $\mathrm{IC}$ and compute the 2-way Chamfer Distance between each shape and its $N$ possible comparison shapes.
Intuitively, we expect this metric to be low if canonicalization is consistent across different instances.
Ideally, we want to evaluate this metric for all possible comparison shapes, but to reduce computation time, we pick $N = 120$ random comparison shapes.
The average CC metric is given as:
\begin{align}
\mathrm{CC} := \frac{1}{|\mathcal{X}| N} \sum_{X_i \in \mathcal{X}} \sum_{X_j \in \mathcal{N}} \mathrm{CD}[ X_i^c , X_j^c ].\nonumber
\end{align}
\parahead{Ground Truth Consistency (GC)}
The GC metric is designed to compare estimated canonicalization with manual ground truth pre-canonicalization in datasets like ShapeNet and ModelNet40. For perfect canonicalization, the predicted canonical shape should be a constant rotation away from ground truth shape. Given the predicted canonicalizing frames $\mathcal{R}(X_j), \mathcal{R}(X_k)$ for aligned shapes $X_j, X_k \in \mathcal{X}$, we induce the same canonicalization on any other shape $X_i \in \mathcal{X}$ and compute the 2-way $\mathrm{CD}$ between them.
\begin{align}
\mathrm{GC} :=
\frac{1}{|\mathcal{X}|^3}\sum_{X_i,X_j,X_k \in \mathcal{X}} \hspace{-3mm} \mathrm{CD}[\mathcal{R}(X_j).X_i, \mathcal{R}(X_k).X_i].\nonumber
\end{align}
We note that manual canonicalization, which is based on human semantic understanding of shapes, does not necessarily match with this paper's notion of canonicalization which is founded on geometric similarity.
Nonetheless, this metric provides a way to compare with human annotations.
\parahead{Translation Error (TE)}
To measure error in translation for partial shapes, we compute the average $L^2$ norm between the estimated amodal translation and ground truth amodal translation -- this has the same form as $\mathcal{L}_{amod}$ in \cref{sec:losses}.
Note that we have the ground truth amodal translation for our datasets since partial shapes are generated from the full shapes using an occlusion function $\mathcal{O}$.
\section{Keypoint Transfer}
More specifically, we transfer a keypoint annotation by computing the directional vector in the source point cloud and using the corresponding capsule in the same direction in the target point cloud to find its nearest neighbor.
\section{Canonicalization Lemma}
\begin{Lemma}
A map $t: \mathcal{X} \rightarrow G$ satisfying the equivariance property
$t(g.x) = t(x)g^{-1}$ for all $x \in \mathcal{X}$ and $g \in G$ induces a canonicalizing map $s: \mathcal{X} / \sim \ \rightarrow \mathcal{X}$ defined by $s(\overline{x}) := t(x).x$, where $\overline{x} \in \mathcal{X} / G := \mathcal{X} / \sim$ is the equivalence class (or orbit) of $x$. A proof is given below.
\label{lemma:can_transform}
\end{Lemma}
\begin{proof}
By construction $t(x).x \in \overline{x}$; we simply have to verify that $t(x).x$ is invariant under the action of $G$. We have $t(g.x).(g.x) = t(x)g^{-1}.(g.x) = t(x).x$.
\end{proof}
\section{Proof of Rotation-Invariance Property of our Embedding}
Given rotation-equivariant embeddings $F^{\ell}$ and $Y^{\ell}$, the tensors $H^{\ell}(X)$ are rotation invariant:
\[
\begin{aligned}
H^{\ell}_{ijk}(R.X)
&
=
\langle F^{\ell}_{i,:,j}(R.X), Y^{\ell}_{:,j,k}(R.X)\rangle
\\
&
=
\langle D^{\ell}(R)F^{\ell}_{i,:,j}(X), D^{\ell}(R) Y^{\ell}_{:,j,k}(X)\rangle
\\
&
=
\langle F^{\ell}_{i,:,j}(X), Y^{\ell}_{:,j,k}(X)\rangle
=
H^{\ell}_{ijk}(X)
\end{aligned}
\]
\section{Commutative Property of Canonicalization with the Cropping Operator}
Canonicalization commutes with the cropping operator $\mathcal{O}$. For a (full) point cloud $X$ and a predicted canonicalizing frame $\mathcal{R}(X)$, we prove the commutative property here; for simplicity, we assume $X$ is mean-centered.
\[
\begin{aligned}
&\widehat{\mathcal{O}[X]}^c + \mathcal{R}(X) \overline{\mathcal{O}[X]}
= \mathcal{R}(X)(\widehat{\mathcal{O}[X]} + \overline{\mathcal{O}[X]})
\\
&=
\mathcal{R}(X)(\mathcal{O}[X]) =\mathcal{O}[\mathcal{R}(X)X]
=\mathcal{O}[X^c]
\end{aligned}
\]
The above commutative property enables us to predict a rotation-equivariant translation $\mathcal{T}(\widehat{\mathcal{O}(X)})$, from the mean-centered partial shape $\widehat{\mathcal{O}(X)}$ alone, that aligns the partial shape with its corresponding points in the full shape.
\[
\begin{aligned}
&\widehat{\mathcal{O}[X]}^c + \mathcal{R}(\widehat{\mathcal{O}[X]}) \overline{\mathcal{O}[X]} \simeq \widehat{\mathcal{O}[X]}^c + \mathcal{R}(\widehat{\mathcal{O}[X]}) \mathcal{T}(\widehat{\mathcal{O}(X)})
\\
&=\mathcal{O}[X^{c}]
\end{aligned}
\]
\section{Network details}
\subsection{Architecture}
We reuse the classification architecture described in Section 3.1 of \cite{poulenard2021functional} as our backbone. The architecture comprises three equivariant convolution layers followed by a global max-pooling layer; the remaining layers of \cite{poulenard2021functional} specialize for classification, so we drop them and specialize the network for our tasks instead. The global max-pooling layer of \cite{poulenard2021functional} proceeds by first interpreting each point-wise signal as coefficients of spherical functions in the SH basis and performing a discrete inverse spherical harmonics transform to convert them into functions over a discrete sampling of the sphere. For any direction, the resulting signal is then spatially pooled over the shape, resulting in a single function over the sphere sampling (specifically, a single map from the sphere sampling to $\mathbb{R}^C$, where $C = 256$ since we have 256 channels). We then apply point-wise MLPs (with ReLU activations) on this sphere map and convert it back to TFN-like features via a forward spherical harmonics transform (SHT) \cite{poulenard2021functional}.
\parahead{Spherical Harmonic Coefficients} In order to predict the coefficients $F(X)$ of the invariant embedding $H(X)$, we apply a $[128, 64]$-MLP whose last layer is linear and convert to types $\ell \in \llbracket 0, 3\rrbracket$ via SHT.
\parahead{Rotation-Invariant Point Cloud} We obtain our 3D invariant point cloud $X^c$ by applying a linear layer to $H(X)$.
\parahead{Rotation-Equivariant Frame} To predict $E$, we apply a $[64, 3]$-MLP whose last layer has a linear activation. We then extract type $1$ features with SHT, giving us a collection of 3 equivariant 3D vectors.
\parahead{Segmentation} To predict the segmentation we apply a point-wise $[256, 128, 10]$-MLP whose last layer is soft-max to get the segmentation masks $S$ described in \cref{sec:segmentation}.
\subsection{Training Details}
\parahead{Cropping operator $\mathcal{O}$} We introduce synthetic occlusion in our training setting by slicing full shapes using the cropping operator $\mathcal{O}$. To perform a crop, we uniformly sample a direction $v$ on the unit sphere and remove the top $K/2$ points of the shape with the highest value of $x^{\top}v$ for $x \in X$. Additionally, we train our model on the ShapeNetCOCO dataset~\cite{sridhar2019multiview,sajnani2021draco}, which has pre-determined occlusion due to camera motion, as seen in~\cref{fig:nocs}. To preprocess this data for training, we aggregate the parts in the canonical NOCS space of every sequence to obtain the full shape and perform a nearest-neighbor search in the NOCS space to find correspondences between the full and partial shape.
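As a sketch, the slicing operator amounts to a few lines (returning the kept indices, which the restriction loss needs):
\begin{verbatim}
import numpy as np

def slice_crop(X):
    # X: (3, K). Sample v uniformly on the sphere and remove the
    # K/2 points with the highest projection x^T v.
    v = np.random.randn(3)
    v /= np.linalg.norm(v)
    idx = np.argsort(v @ X)[: X.shape[1] // 2]  # kept indices
    return X[:, idx], idx
\end{verbatim}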
\parahead{Hyper-parameters} During training, we use a batch size of $16$ in every step for all our models. We set an $L^1$ kernel regularizer at every layer of the network with weight $0.1$. We weigh the loss functions by their effect on reducing the Canonical Shape loss $\mathcal{L}_{canon}$. The loss functions are weighed as: $\mathcal{L}_{canon} \;(2)$, $\mathcal{L}_{rest}\; (1)$, $\mathcal{L}_{ortho}\; (1)$, $\mathcal{L}_{sep}\; (0.8)$, and $\mathcal{L}_{amod}\; (1)$.
\section{Unsupervised Co-segmentation}
\label{sec:segmentation_supp}
\subsection{Predicting parts} We predict the part segments $S \in \mathbb{R}^{K \times C}$, where $C$ is the number of parts. We use the rotation-invariant embedding $H^{\ell}(X)$ with all types $0 \leq \ell \leq 3$ to predict the segmentation $S$. Similar to \cite{sun2020canonical}, we define the following notation for the normalized parts $A(X)$ and the part centroids $\theta(X)$:
\begin{equation}
\begin{split}
& S(X) := \mathrm{Softmax}[\mathrm{MLP}(H(X))] \\
& \; A_{ij}(X) := \frac{S_{ij}(X)}{\sum_i S_{ij}(X)} \\
& \theta_{j}(X) := \sum_i A_{ij}(X)X_{i,:}
\end{split}
\end{equation}
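These quantities translate directly to code; a sketch in NumPy, where \texttt{mlp} is a placeholder for the learned point-wise MLP and the per-point invariant features are assumed to be flattened into a matrix:
\begin{verbatim}
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def parts_and_centroids(H_feat, mlp, X):
    # H_feat: (K, D) invariant per-point features; X: (K, 3).
    S = softmax(mlp(H_feat))                # (K, C) soft parts
    A = S / S.sum(axis=0, keepdims=True)    # normalized parts
    theta = A.T @ X                         # (C, 3) centroids
    return S, A, theta
\end{verbatim}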
\subsection{Loss functions}
We use part segmentation to enforce semantic consistency between full and partial shapes. We borrow the localization loss ($\mathcal{L}_{localization}$) and equilibrium loss ($\mathcal{L}_{equilibrium}$) from \cite{sun2020canonical} for the full shape to evenly spread part segmentation across the shape. Additionally, we employ the following losses.
\parahead{Part Distribution loss} We compute the two-way Chamfer distance ($\mathrm{CD}$) between the part centroids and the input shape. In practice, this helps to distribute parts more evenly across the shape.
\begin{equation}
\mathcal{L}_{dist} = \mathrm{CD}\left( X, \theta(X) \right)
\label{eq:capsule_chamfer_loss}
\end{equation}
\parahead{Part Restriction loss} The parts discovered by the network for the partial shape should be congruent with the parts discovered for the full shape. We penalize inconsistent part predictions at corresponding points by minimizing the negative cosine similarity ($\mathbf{CS}$) between our capsule predictions.
\begin{equation}
\mathcal{L}_{rest(part)} = - \frac{2}{K}\sum_{i \in \mathcal{S}} \mathbf{CS}(S(\mathcal{O}(X))_{i,:}, \mathcal{O}(S(X))_{i,:})
\end{equation}
\parahead{Part Directional loss} To keep the part centers of the visible parts of a shape from deviating from the part centers of the full shape, we use a soft loss to ensure that the directional vectors between part centers are consistent between the full and partial shape. $\mathrm{dir}(\theta(X))$ computes the vector directions between all $^{C}C_2$ centroid pairs for $C$ part centroids.
\begin{equation}
\mathcal{L}_{direc} = - \frac{1}{^{C}C_2} \sum_{i \in \mathcal{S}} \mathbf{CS}(\mathrm{dir}(\theta(\mathcal{O}(X_i))), \mathrm{dir}(\mathcal{O}(\theta(X_i))))
\end{equation}
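The $\mathrm{dir}(\cdot)$ operator can be sketched as follows, returning unit direction vectors for all $^{C}C_2$ centroid pairs (names are illustrative):
\begin{verbatim}
import numpy as np
from itertools import combinations

def centroid_directions(theta):
    # theta: (C, 3) part centroids.
    pairs = combinations(range(len(theta)), 2)
    dirs = np.stack([theta[j] - theta[i] for i, j in pairs])
    return dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
\end{verbatim}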
\section{Experiments}
\section{Registration}
\label{sec:registration}
\input{content/text/supp/tables/exp_registration}
We note in \cref{table:registration_shapenet} that our method does not perform well in this task. Since we predict a frame $E \in O(3)$, which can have reflection symmetries, we observe symmetries such as left-right reflection for planes. Symmetries cause high RMSE error because points are matched with their image under symmetry, which is often very distant. However, when using the Chamfer distance metric, which is symmetry agnostic, our registration error decreases by an order of magnitude, achieving competitive results on this benchmark. We also note that Ours(F+P) noticeably decreases $\mathrm{RMSE}$ compared to Ours(F), as during training the frame consistency is enforced between the full shape and a randomly rotated partial shape by the $\mathcal{L}_{rest}$ loss.
\section{Discussion}
\section{Discussion on Canonicalization Metrics}
We complement the discussion of our canonicalization metrics with a few remarks. Our 3 metrics Instance-Level (IC), Category-Level (CC) and Ground Truth (GC) Consistency measure three aspects of canonicalization. The instance-level metric is a measure of the "variance" of the canonical pose under rotation of the input. By definition the canonical pose must be invariant to the input pose. The GC metric provides a way of measuring canonicalization consistency across the entire class of objects by measuring how our canonicalization deviates from a ground truth canonicalization up to a constant rotation. In the absence of a ground truth alignment, we propose the CC metric which compares canonicalization of different shapes within the same class using Chamfer distance (as we don't assume pointwise correspondences between different shapes). The CC metric relies on the assumption that aligned shapes of the same category are similar to each other.
We observe in table (1) of our article that some methods have high IC but low GC and vice versa (e.g. CaCa \cite{sun2020canonical} (cabinet), Ours (F + P) speaker). This occurs as we canonicalize based on geometric similarity instead of semantic aspects of the object. The IC and CC metrics measure geometric properties of the canonicalization while GC measures semantic properties of the canonicalization according to manually aligned shapes.
We build our metrics using the Chamfer distance as it does not assume pointwise correspondences between shapes; this allows measuring the canonicalization quality of symmetric shapes where there may not be a single correct canonical orientation. However, we observe a performance gap with our method when using distances based on pointwise correspondences such as $L^2$ or root mean square (RMSE) errors, as seen in \cref{sec:registration} of this appendix. We believe our Chamfer distance based metrics are representative of the quality of canonicalization and are consistent with our visual evaluation.
\section{Discussion on PCA}
\parahead{PCA Over-Performance on the CC Metric}
We note that the competitiveness of PCA is limited to certain experiments for \textbf{full shapes} and \textbf{multi-category} experiments only. The CC metric compares canonicalized shapes of the same category with possibly different geometry -- note that PCA even outperforms ground truth canonicalization for this metric. Thus a method which is optimal for GC metric cannot outperform PCA in CC.
\parahead{PCA Under-Performance on the IC Metric}
The most likely reason why PCA underperforms on the IC metric is because of frame ambiguity.
The PCA principal directions are defined up to symmetries of the covariance matrix eigenspaces -- the shape does not necessarily share these symmetries.
For instance, when eigenvalues are distinct, eigenvectors are defined up to sign, causing random flips over principal directions: \emph{e.g.},~an airplane can be flipped on its back.
When two or more eigenvalues are identical, eigenvectors are defined up to rotation, \emph{e.g.},~in chairs, the major component can be from the left leg to the top right corner or bottom right leg to top left corner.
Thus, PCA canonicalization of rotated copies of a given shape may not be equal due to symmetries of the shape, resulting in higher Chamfer/IC error.
\section{Ablations}
We now provide detailed ablations to justify the following key design choices: the effect of increasing amounts of occlusion/partiality, and loss
functions.
\parahead{Degree of Occlusion/Partiality} We examine the ability of our model to handle varying amounts of occlusion/partiality for the car category in Table~\ref{table:deg_partiality}.
Our occlusion function, $\mathcal{O}$, removes between 25\% and 75\% of the original shape (\emph{i.e.},~75\% is more occluded than 25\%).
We observe that our method performs optimally over all metrics when trained at 50\% occlusion.
\vspace{1cm}
\input{content/text/tables/ablation_degree_partiality}
\input{content/text/supp/tables/loss_functions}
\parahead{Loss Functions}
We evaluate our F+P model on both full
and partial shapes trained
with all losses, without the separation loss $\mathcal{L}_{sep}$, and without the restriction loss $\mathcal{L}_{rest}$. From Table~\ref{table_supp:loss_ablation}, we observe that using the restriction loss $\mathcal{L}_{rest}$ helps in the canonicalization of both full and partial shapes in the categories \textit{plane}, \textit{table}, and \textit{chair}. However, the separation loss $\mathcal{L}_{sep}$ helps in \textit{plane} and \textit{table} but not in \textit{chair}. Since both losses help in most of the categories, we utilize them for training our final model.
\parahead{Effect of introducing occlusion on full shapes}
We evaluate the canonicalization of full shapes using our network trained on full and partial shapes. We observe that on average both our models \textbf{Ours(F)} and \textbf{Ours(F+P)} perform the same on the canonicalization metrics for full shapes. For a few categories, such as \textit{lamp, car, chair, watercraft}, introducing partial shapes in the training improves performance on the canonicalization metrics. In contrast, introducing occlusion during training degrades the performance for the \textit{bench} category.
\section{Applications}
\subsection{Co-Canonicalization}
Commonly used datasets in 3D vision, such as ShapeNet~\cite{chang2015shapenet}, are manually pre-canonicalized, making expansion of such datasets expensive. Since our method performs better than others on canonicalization, we believe that it can be used to extend these datasets by canonicalizing corpora of \textit{in-the-wild} shapes into a common pose. Figure~\ref{fig:single_cocan} shows the results of our model, trained on the ShapeNet (core) dataset~\cite{chang2015shapenet}, being used to canonicalize shapes from the (uncanonicalized) ModelNet40 dataset~\cite{wu20153d}. These shapes can now be merged into ShapeNet by applying a single category-wide rotation to match the obtained canonical frame with the existing frame used by ShapeNet, instead of the per-instance rotation that would otherwise be required. Furthermore, these results qualitatively demonstrate the ability of our method to generalize to datasets not seen during training.
\input{content/text/supp/figures/cocan}
\subsection{Depth Map Canonicalization}
Since our method operates on partial shapes, we can canonicalize objects in \textbf{depth images}. We use the depth maps from the ShapeNetCOCO dataset, which have pre-determined occlusion due to camera motion, and canonicalize partial point clouds.
Specifically, we first take depth maps and use them to generate ground-truth point clouds. We then train and test our model on them. Figure~\ref{fig:nocs} presents examples demonstrating that our model is capable of canonicalizing depth maps.
\input{content/text/supp/figures/nocs}
\subsection{Annotation Transfer}
Since a category-level canonical frame is consistent with respect to the geometry and local shape of different object instances of a category, annotations can be transferred across instances that share the same canonical frame. Particularly, we demonstrate the transfer of sparse key-point annotations in Figure~\ref{fig:single_annotation}. We randomly assign labels to a few points of one point cloud in each category, which serves as the source. We then use a remarkably simple transfer function to transfer these labels to points in each target point cloud, making use of the predicted segmentation. To every labeled point in the source point cloud, we obtain a directional vector originating from the centroid of the segment it belongs to. Starting from the corresponding centroid in the target point cloud, we move along this directional vector and then pick the nearest point. While this scheme works well in our case, more nuanced transfer functions may be required depending on the application.
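A sketch of this transfer function for a single labeled point (names are illustrative) is:
\begin{verbatim}
import numpy as np

def transfer_label(src_pt, src_centroid, tgt_centroid, tgt_pts):
    # Replay the offset from the source segment centroid at the
    # corresponding target centroid, then snap to nearest point.
    probe = tgt_centroid + (src_pt - src_centroid)
    d2 = ((tgt_pts - probe) ** 2).sum(axis=1)
    return tgt_pts[np.argmin(d2)]
\end{verbatim}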
\input{content/text/supp/figures/annotation}
\section{Qualitative Results}
We now present more qualitative results in Figures~\ref{fig:qualitative_results_full_shapes} and \ref{fig:qualitative_results_partial} to demonstrate the effectiveness of our method.
\onecolumn
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{content/text/supp/figures/edited/fullparking.jpg}
\caption{Parking lot for full shape canonicalization for multi-category(\textbf{\emph{top}}), plane (\textbf{\emph{middle}}) and chair (\textbf{\emph{bottom}}). }
\label{fig:qualitative_results_full_shapes}
\end{figure}
\begin{figure}[hbt]
\centering
\subfloat{\includegraphics[width=0.85\columnwidth]{content/images/parkinglot/partial_multi.jpg}}
\vfill
\subfloat{\includegraphics[width=0.9\columnwidth]{content/images/parkinglot/partial_plane.jpg}}
\vfill
\subfloat{\includegraphics[width=0.9\columnwidth]{content/images/parkinglot/partial_chair.jpg}}
\caption{Parking lot for partial shape canonicalization for multi-category(\textbf{\emph{top}}), plane (\textbf{\emph{middle}}) and chair (\textbf{\emph{bottom}}). Note: missing parts only shown for visualization. }
\label{fig:qualitative_results_partial}
\end{figure}
\twocolumn
\section{Introduction}
The bright G0 IV star $\eta$ Bo\"otis (8 Boo, HR 5235, HD 121370) is
an interesting target to study given its place on the H-R diagram,
and its implications upon stellar modeling. Solar-like oscillations
were detected for $\eta$ Boo by
\citet{1995AJ....109.1313K,2003AJ....126.1483K}. $\eta$ Boo was
recently observed with the Microvariability and Oscillations of
STars ({\it MOST}) satellite \citep{2005ApJ...635..547G}, a 15-cm
aperture satellite observatory orbited in 2003 June by the Canadian
Space Agency \citep{2003PASP..115.1023W}. Given that the convective
envelope of $\eta$ Boo is expected to be very thin, containing less
than 1\% of the total mass of the star, observations by {\it MOST}
were motivated by the possibility of detecting $g$-modes, along with
the $p$-modes indicative of turbulent convection. In detecting eight
consecutive low-frequency radial $p$-modes for this G0IV star, the
{\it MOST} team was able to estimate many stellar parameters,
including temperature, age, and mass. Additionally, new models
examined by \citet{2006ApJ...636.1078S} are able to match
simultaneously the space- and ground-based pulsation data for $\eta$
Boo through the inclusion of turbulence in the stellar models.
Using interferometry to obtain a {\it direct}, absolute measurement
of this object's linear size and effective temperature, in
conjunction with the mass estimate from the models fit to the
{\it MOST} data, we may also infer its surface gravity,
$\log g$. Values of $\log g$ are frequently utilized in stellar
modeling and spectroscopy, and a direct characterization of $\log g$
for this slightly evolved object is of significant utility.
We show that the combination of the high-precision photometry of {\it MOST} with the high spatial resolution observations of the Palomar Testbed Interferometer ({\it PTI}) makes for a potent approach to uniquely interpreting a star's astrophysical parameters.
\section{Interferometric Observations and Data Reduction}\label{sec_PTIData}
{\it PTI} is a
three-element long-baseline interferometer, which combines two of
its three 40-cm apertures to make measurements of fringe visibility,
$V^2$, for astronomical sources. These measurements are made at
either $H-$ (1.6 $\mu$m) or $K-$band (2.2 $\mu$m) with PTI; for this
particular investigation, the K band was employed, being spectrally
dispersed into 5 `narrow' bands across K, centered at 2.009, 2.106,
2.203, 2.299 and 2.396 $\mu$m.
For all of these observations,
{\it PTI}'s 109.8-m N-S baseline was utilized;
details on {\it PTI} can be found in
\citet{1999ApJ...510..505C}.
\input{tab1.tex}
$\eta$ Boo was observed along with the unresolved calibration
sources HD117176 (70 Vir), HD120136 ($\tau$ Boo), HD121107,
HD121560, on 18 nights between 2000 and 2005. In addition to $\eta$
Boo, a resolved check star, HD121860, was observed as a means to
monitor system performance. None of the calibrators exceeded
{\it PTI}'s point-like calibrator angular size criterion of
$\theta_{EST}<1.0$ mas \citep{2005PASP..117.1263V} for absolute
measurement calibration. A previous interferometric measure of
$\eta$ Boo's size was made by \citet{2005A&A...436..253T}, but the
resolved calibration sources used in that study (necessitated by sensitivity
limitations) make the resulting $\eta$ Boo diameter estimate
subject to potential systematic errors. Two of the four calibrators,
70 Vir and $\tau$ Boo, are associated with radial velocity planets
\citep{1996ApJ...464L.147M,1997ApJ...474L.115B}, but no evidence has
been found for $V^2$ variations indicative of face-on binary stars
that could supplant the RV planet interpretation for these objects.
The relevant observing parameters are found in Table \ref{table0},
along with parameters to be derived in \S \ref{sec_SED_fitting}.
The calibration of the $\eta$ Boo $V^2$ data is performed by
estimating the interferometer system visibility ($V^2_{\textrm{\tiny
SYS}}$) using the calibration source with a model angular diameter
and then normalizing the raw $\eta$ Boo visibility by
$V^2_{\textrm{\tiny SYS}}$ to estimate the $V^2$ measured by an
ideal interferometer at that epoch
\citep{1991AJ....101.2207M,1998SPIE.3350..872B,2005PASP..117.1263V}.
Uncertainties in the system visibility and the calibrated target
visibility are inferred from internal scatter among the data in an
observation using standard error-propagation calculations
\citep{1999PASP..111..111C}. Calibrating our point-like calibration
objects against each other produced no evidence of systematics, with
all objects delivering reduced $V^2 = 1$. The observation dates,
calibrated visibilities for each wavelength bin, $(u,v)$
coordinates, and observation hour angle are presented in Table
\ref{table1} for $\eta$ Boo and Table \ref{table1b} for HD121860.
Plots of the absolute visibility data for $\eta$ Boo are found in
Figure \ref{fig_etaBooAbsV2}. {\it PTI}'s limiting night-to-night
measurement error is $\sigma_{V^2_{\textrm{\tiny SYS}}}\approx 1.5$--$1.8$\%, the source of which is most likely a combination of
effects: uncharacterized atmospheric seeing (in particular,
scintillation), detector noise, and other instrumental effects. This
measurement error limit is an empirically established floor from the
previous study of \citet{bod99}.
\begin{figure*}
\plotone{fig1.eps} \caption{\label{fig_etaBooAbsV2} Absolute
visibility data for $\eta$ Boo, as discussed in \S
\ref{sec_PTIData}. The line fit to the data is the visibility
function corresponding to a $2.1894 \pm 0.0038$ mas limb-darkened
angular disk diameter. See \S 3.2 and 4 for a discussion of our
final uncertainty estimate for the $\eta$ Boo angular size.}
\end{figure*}
\input{tab2.tex}
\input{tab3.tex}
\section{Effective Temperature and Radius Determinations}
\subsection{Bolometric Flux Estimates}\label{sec_SED_fitting}
For each of the target and calibrator stars observed in this
investigation, a spectral energy distribution (SED) fit was
performed. This fit was accomplished using photometry available in
the literature as the input values, with template spectra
appropriate for the spectral types indicated for the stars in
question. The template spectra, from \citet{pic98}, were adjusted to
account for overall flux level and wavelength-dependent reddening,
resulting in an estimate of angular size. Reddening corrections were
based upon the empirical reddening determination described by
\citet{1989ApJ...345..245C}, which differs little from van de
Hulst's theoretical reddening curve number 15 \citep{joh68,dyc96}.
Both narrowband and wideband photometry in the 0.3 $\mu$m to 30
$\mu$m range were used as available, including Johnson $UBV$ (see, for
example,
\citet{1963AJ.....68..483E,1972ApJ...175..787E,1971A&A....12..442M}),
Str\"omgren $ubvy\beta$ \citep{1976HelR....1.....P}, Geneva
\citep{1976A&AS...26..275R}, 2MASS $JHK_s$
\citep{2003yCat.2246.....C}, and Vilnius $UPXYZS$
\citep{1972VilOB..34....3Z}; zero-magnitude flux density
calibrations were based upon the values given in \citet{cox00}.
Starting with a reference spectral type and luminosity class as
cited by SIMBAD, template spectra were fit to the photometric data.
Templates in adjacent locations in spectral type and luminosity
class were also tested for best fit, with the fit with best $\chi^2$
being selected in the end for use in this study. For example, a
star indicated by SIMBAD to be a G0IV would have its photometry fit
to the 9 templates of spectral type F9, G0, and G1, and for
luminosity classes III, IV, and V. Metallicities for these fits
were assumed to be roughly solar, which is consistent with the values
found for these objects in the references listed in
\citet{1997A&AS..124..299C} and \citet{2001A&A...373..159C}.
From the best SED fit, estimates were obtained for each star for
their reddening ($A_V$) and bolometric flux ($F_{BOL}$); since
effective temperature was fixed for each of the \citet{pic98}
library spectra, an estimate of angular size ($\theta_{EST}$) was
also obtained. The results of the fitting are given in Table
\ref{table0}. As an example, the SED fitting plot for $\eta$ Boo is
given in Figure \ref{fig_HD121370}.
\begin{figure*}
\plotone{fig2.eps}
\caption{\label{fig_HD121370} Spectral energy distribution fitting
for $\eta$ Boo, as discussed in \S \ref{sec_SED_fitting}.
Vertical bars on the data points represent the photometric errors, and
horizontal bars correspond to the bandpass of the photometric filter
being used.}
\end{figure*}
For our calibration sources, {\it a priori} estimates of their sizes
are necessary to account for residual resolution that may be
afforded by an interferometer's long baselines. With
an expected limb darkened size of $\theta_{EST} \leq 1.00$ mas from
the SED fit, calibrators have predicted $V^2$'s of $>86$\% for a
110-m baseline used at $\lambda=2.2 \mu$m (with $V^2>96$\% expected
for our smallest calibrator, HD 121560). We consider this size
effectively identical to its uniform disk size, since for most of
our potential calibration sources, their effective temperatures are
in excess of $\sim 5000$K. The difference between the uniform
disk and limb darkened sizes is at the few percent level
\citep{dav00,cla03b}, which is far less than our size estimate error
or, in particular, its impact upon the system visibility estimate
derived from observations of our calibrators. A $\leq 5\%$
uncertainty in angular size will contribute at most a
$1.3\%$ uncertainty to the system visibility
$V^2_{\textrm{\tiny SYS}}$ for {\it PTI}. Since the
night-to-night limiting measurement error of {\it PTI} is
$\sigma_{V^2_{\textrm{\tiny SYS}}}\approx 1.5$--$1.8$\% \citep{bod99},
any measures of $V^2$ using our calibrators will be free from any
potential bias in angular size measurement at the $\sigma_\theta
/ \theta \approx 7$\% level for our largest calibrator, HD117176,
and at better levels of insensitivity for our smaller
calibrators.
\subsection{Angular Sizes of $\eta$ Boo and HD 121860}\label{sec_angularSizes}
We may fit our observed visibility data to a uniform disk approximation to get
an initial estimate of the angular sizes of $\eta$ Boo and HD 121860.
Fitting $V^2 = [2 J_1(x) / x]^2$, where $x = \pi B \theta_{UD} \lambda^{-1}$, we get
angular sizes $\theta_{UD}$ of $2.1528 \pm 0.0037$ mas and $2.035 \pm 0.009$ mas for
$\eta$ Boo and HD 121860, respectively, with reduced $\chi^2$ values of 0.90 and 0.48.
Given $\eta$ Boo's low rotational velocity of 13.5 km~s$^{-1}$ \citep{2006A&A...446..267R},
rotational oblateness did not need to be considered in the fit \citep{2001ApJ...559.1155V}.
For limb darkened fits, we utilized the visibility function for a limb-darkened stellar
disk as parameterized with a linear limb-darkening coefficient, $\mu_\lambda$
\citep{1974MNRAS.167..475B}:
\begin{equation}
V^2 = \left( {1-\mu_\lambda \over 2} + {\mu_\lambda \over 3}\right)^{-2}
\times
\left[
(1-\mu_\lambda) {J_1 (x) \over x}+\mu_\lambda {j_1 (x) \over x}
\right]^2
\end{equation}
where $x = \pi B \theta_{LD} \lambda^{-1}$. For these fits we used
the 2.2 $\mu$m coefficients of $\mu_\lambda=0.22$ and 0.38 for
$\eta$ Boo and HD 121860, respectively \citep{1995A&AS..114..247C}.
Examination of the linear limb darkening coefficients from
\citet{1995A&AS..114..247C} indicates that the value of $\mu_\lambda
= 0.22 \pm 0.02$ is sufficient to account for the 5 narrowband
channels of the {\it PTI} data (eg., $\mu_\lambda$ varies by less
than $\Delta \mu_\lambda = 0.04$ between 2.0 $\mu$m and 2.4 $\mu$m).
Fitting our data, we get limb darkened angular sizes of
$\theta_{LD}=2.1894 \pm 0.0038$ mas and $2.100 \pm 0.009$ mas for
$\eta$ Boo and HD 121860, respectively, with no appreciable change
in the reduced $\chi^2$ values as compared to the uniform disk fits.
A previous limb-darkened angular size of $2.25 \pm 0.25$ mas for $\eta$ Boo was measured by \citet{2003AJ....126.2502M}
and is consistent with our measurement.
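As a cross-check of the fits above, the uniform disk and limb-darkened visibility curves can be evaluated directly; the following short Python sketch (not part of our reduction pipeline; units and names are ours) uses SciPy's Bessel functions and the \citet{1974MNRAS.167..475B} parameterization:
\begin{verbatim}
import numpy as np
from scipy.special import j1, spherical_jn

MAS = np.pi / (180.0 * 3600.0 * 1000.0)  # mas -> radians

def v2_uniform_disk(theta_mas, B_m, lam_um):
    x = np.pi * B_m * theta_mas * MAS / (lam_um * 1e-6)
    return (2.0 * j1(x) / x) ** 2

def v2_limb_darkened(theta_mas, B_m, lam_um, mu):
    x = np.pi * B_m * theta_mas * MAS / (lam_um * 1e-6)
    norm = (1.0 - mu) / 2.0 + mu / 3.0
    v = ((1.0 - mu) * j1(x) / x
         + mu * spherical_jn(1, x) / x) / norm
    return v ** 2

# e.g. eta Boo on the 109.8-m baseline at 2.2 microns:
# v2_limb_darkened(2.1894, 109.8, 2.2, 0.22)
\end{verbatim}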
These errors are sufficiently small that additional sources of error
need to be considered. First, knowledge of PTI's operational
wavelength has a limit of $\sigma_\lambda \approx 0.01$ $\mu$m; and
Second, the limb darkening coefficient $\mu_\lambda$ is estimated to
be accurate to only $\sigma_\mu \approx 0.02$. Incorporating these
effects into our LD fit for $\eta$ Boo, we find an additional
uncertainty contribution of 0.0040 mas, resulting in $\theta_{LD} =
2.1894 \pm 0.0055$ mas, with no appreciable increase in error for HD
121860, where the measurement error dominates the uncertainty due to
the smaller number of measurements. We shall return to the estimate
of uncertainty on $\eta$ Boo's angular size in \S
\ref{sec_possibleBinary}, where the limits on our knowledge of
$\eta$ Boo's possible binarity in fact ultimately limits the lower
bound on our knowledge of the star's angular size.
The absolute value of $\theta_{LD}$ is in agreement with that of the
VLTI measurement in \citet{2005A&A...436..253T}, who quote
$\theta_{LD}=2.200\pm0.027\pm0.016$ mas (``statistical'' and
``systematic'' errors are cited separately). This previous
measurement is based upon limited data (only 3 $V^2$ data points)
and is anchored to the angular size estimates of
\citet{1999AJ....117.1864C}. Tracing back through the calibration
history outlined in their paper, this suggests that the SED
angular size estimates from \citet{1999AJ....117.1864C} for the resolved
calibrators $\alpha$ Crt and $\mu$ Vir used to calibrate the
system were accurate at the stated uncertainties (although this cannot be
confirmed, since no values were quoted in the manuscript), and that
Th{\'e}venin et al.'s measurement process preserved that SED
accuracy.
binarity was considered by \citet{2005A&A...436..253T}, possibly due
to the limits of their relatively small sample. Our result, since it
is anchored to unresolved calibration sources, avoids the danger of
being susceptible to any potential significant systematic error from
SED modeling.
\subsection{$T_{EFF}$ and $R$}
The effective temperature can be
directly derived from the bolometric flux and the limb-darkening angular size:
\begin{equation}
T_{EFF}=2341 \times \left[ {F_{BOL} \over \theta^2_{LD}}
\right]^{1/4}
\end{equation}
where $F_{BOL}$ is in $10^{-8}$ erg cm$^{-2}$ s$^{-1}$ and $\theta_{LD}$ is in mas
\citep{1999AJ....117..521V}.
Stellar radius is given by $R=0.1076 \theta_{LD} d$; where $R$ is in
$R_\odot$, $d$ is in parsecs, and $\theta_{LD}$ is used as a proxy for
the Rosseland mean diameter.
Luminosity can be derived directly from the radius and effective temperature,
$L=4 \pi R^2 \sigma T_{EFF}^4$
and is wholly independent of {\it PTI}'s measure of $\theta$, depending only on $d$ and
our estimate of $F_{BOL}$ (and, by extension, $A_V$).
These derived values are summarized in Table \ref{table_starSummary}.
Our value of $L=8.89\pm0.16 L_\odot$ is statistically consistent
with the independent \citet{2005ApJ...635..547G} value, indicating
our $F_{BOL}$ value discussed in \S \ref{sec_SED_fitting} is accurate.
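A minimal sketch of these relations, using standard CGS constants and assuming $F_{BOL}$ in $10^{-8}$ erg cm$^{-2}$ s$^{-1}$, $\theta_{LD}$ in mas, and $d$ in pc, is:
\begin{verbatim}
import numpy as np

SIGMA_SB = 5.6704e-5   # erg cm^-2 s^-1 K^-4
R_SUN_CM = 6.955e10
L_SUN    = 3.846e33    # erg s^-1

def stellar_parameters(F_bol, theta_ld_mas, d_pc):
    T_eff = 2341.0 * (F_bol / theta_ld_mas**2) ** 0.25
    R = 0.1076 * theta_ld_mas * d_pc          # solar radii
    L = (4.0 * np.pi * (R * R_SUN_CM) ** 2
         * SIGMA_SB * T_eff ** 4) / L_SUN     # solar units
    return T_eff, R, L
\end{verbatim}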
The measured effective temperature for HD 121860 is $T_{EFF}=3627 \pm 46$ K,
which differs only slightly from the expected $T_{EFF}=3750 \pm 22$ K
\citep{1999AJ....117..521V} for an M2III spectral type derived from the
fitting described in \S \ref{sec_SED_fitting}.
Its radius of $ 147 \pm 92 $ $R_\odot$
exceeds the expected value of $\sim 60 R_\odot$, but the
error on the radius is sufficiently large (due to the large error on the parallax)
that it is not inconsistent with the smaller expected
radius.
The agreement of HD121860's $T_{EFF}$ with the expected value indicated
to us that our confidence in {\it PTI}'s data and its error
estimates is reasonable.
\section{Non-Detection of Binarity of $\eta$ Boo}\label{sec_possibleBinary}
$\eta$ Boo has historically been listed as a possible spectroscopic
binary \citep{1957ApJ...125..696B,1982Ap&SS..87..377V}, with an
orbital confirmation being cited by \citet{1976ApJS...30..273A}.
However, speckle interferometry observations
\citep{1978PASP...90..288M,1980A&AS...42..185B,1984PASP...96..105H,1992AJ....104.1961M}
showed no evidence for detection of a possible companion to $\eta$
Boo, despite an expected angular separation of $0.170"$
\citep{1981A&AS...44...47H}, and particularly given a quoted
luminosity ratio of $\approx 0.75$ \citep{1982Ap&SS..87..377V}.
However, the original discovery paper for $\eta$ Boo's binary nature
\citep{1957ApJ...125..696B} suggests a late K- or M-type dwarf
companion to the G0IV subgiant primary, which would indicate a
brightness difference of at least $\Delta K \approx 4.5$, a
substantially greater value than indicated by
\citet{1982Ap&SS..87..377V}, and one consistent with the speckle
non-detections. \citet{1976ApJS...30..273A} also cite an earlier
astrometric detection \citep{dan39}, although this seems unlikely
given both the expected separation and the brightness ratio.
The spectroscopy of \citet{1997ApJ...475..322B} indicates that
$\eta$ Boo's barycentric velocity is influenced by binarity,
which \citet{1995ApJ...443L..29C} have suggested is indeed due to an M dwarf.
Nevertheless, given the higher resolution of {\it PTI}, a possible
close-separation secondary companion may affect our measures of
$\eta$ Boo's $V^2$ and thereby complicate our interpretation. As such,
it was prudent for us to examine our data for evidence of $\Delta
V^2$ excursions indicative of binarity. As seen in Figure
\ref{fig_etaBooAbsV2} and the data contained in Table \ref{table1},
the $\eta$ Boo $V^2$ data are consistent with a single star
hypothesis, incorporating a wide range of $(u,v)$ coverage and
dates.
To explore to what extent a secondary star could be hidden within our data points,
we examined in detail the residuals found in our single star fit,
as seen in the bottom panel of Figure \ref{fig_etaBooAbsV2}.
We began by creating a synthetic $V^2$ dataset corresponding to a single
star observed by PTI with $\theta_{LD}$ = 2.2 mas, to which we added
varying amounts of measurement noise. We then characterized the
$V^2$ residuals through use of a histogram created through
averaging two dozen runs of this synthesis, stepping through the residual
values in increments of $\Delta \sigma_{V^2} = 0.01$, and comparing that
histogram to that of our actual data. We found that the best fit was
for measurement noise at the $\sigma_{V^2} = 0.018$ level with reduced $\chi^2$ fit
value of $\chi_\nu^2=1.66$, consistent with
our expectation for PTI data discussed in \S \ref{sec_SED_fitting}.
We then repeated this process, including measurement noise at
the $\sigma_{V^2} = 0.018$ level but also adding a main sequence stellar companion
of varying spectral type, using standard values of luminosity
and color from \citet{cox00}, and constraining the orbital parameters
to match the reported period of $P=494^d$. The reduced $\chi^2$ fit
value increased as expected as the mass of the putative companion increased,
but we could not exclude a possible companion of spectral type M7 and lower in a
statistically meaningful way. The net result of any potential $V^2$ bias of
an M7 companion with $\Delta K \approx 5.5$ would be to decrease the actual size of the $\eta$ Boo primary
by 0.014 mas. As such, we are including an additional, negative error in our calculations
for the absolute parameters of $\eta$ Boo to account for our uncertainty
regarding this possible companion.
Given the long time baseline of $V^2$ measures present in our full
dataset, it seems unlikely that a chance geometric alignment would
persist in hiding a secondary companion from appearing as
$\Delta V^2$ excursions.
possible that a secondary companion is present, but at a brightness
ratio that makes it not a factor in determining $\eta$ Boo's size
($L_2 / L_1 < 0.005$), which is consistent with the original
\citet{1957ApJ...125..696B} result.
\input{tab4.tex}
\section{Discussion of the Astrophysical Parameters of $\eta$ Boo}
Placing $\eta$ Boo on a H-R diagram, just as was done in
\citet{2005ApJ...635..547G} (see their Figure 6), we may compare our
values for the star's luminosity and effective temperature to those
best fit to the {\it MOST} data.
We have highlighted this region of
interest in Figure \ref{fig_fig2}. We find our $\{\log L =
0.9490 \pm 0.0076,\log T = 3.7853 \pm 0.0020\}$ coordinates fall
within the locus of points defined by the models of the {\it MOST} team.
The error ellipse
of our $\{\log L,\log T\}$
derived from the PTI data
encompasses the {\it MOST}
$\{X=0.71, Z=0.04\}$ composition model point, which ultimately was favored by
Guenther et al. as the best fit to their {\it MOST} data.
The $\{\log L,\log T\}$ coordinates defined by \citet{dim03} were
sufficiently displaced from all of the possible {\it MOST}
coordinates that Guenther et al. suggested the \citet{dim03}
coordinates were incorrect. Our $\eta$ Boo data and analysis are
clearly consistent with this suggestion by
\citet{2005ApJ...635..547G}.
\begin{figure*}
\plotone{fig3.eps}
\caption{\label{fig_fig2} Comparison of {\it MOST} fits to luminosity and effective temperature
values determined in this study and by \citet{dim03}. The {\it MOST} fit corresponding to a model
composition of $\{X=0.71, Z=0.04\}$ was within
the error ellipse of this study's $\{\log L =
0.9490 \pm 0.0076,\log T = 3.7853 \pm 0.0020\}$ coordinates, and was ultimately
favored in \citet{2005ApJ...635..547G} as the best fit to their data.}
\end{figure*}
\subsection{Surface Gravity of $\eta$ Boo}
With the mass for $\eta$ Boo established from the asteroseismic constraints
of {\it MOST}, and the radius from interferometric observations, we may directly
establish the surface gravity for $\eta$ Boo:
\begin{equation}
g = {G M \over R^2}
\end{equation}
Using the {\it MOST} derived mass of $M=1.71 \pm 0.05 M_\odot$ and
the {\it PTI} derived radius of $R=2.672 \pm 0.024 R_\odot$, we
derive a surface gravity of $\log g = 3.817 \pm 0.015$ [cm s$^{-2}$].
The {\it PTI} $\eta$ Boo $\log g$ result is in significant
disagreement with the ``trigonometric'' gravities found in
\citet{1999ApJ...527..879A}, who found $\log g = 3.47 \pm 0.10$ [cm
s$^{-2}$]. \citet{1990A&AS...85.1015M} quote a value of $\log g=3.8
\pm 0.15$ [cm s$^{-2}$] from stellar evolution theories, although
their angular size, radius and mass values on which those theories
were baselined are divergent from the {\it PTI} and {\it MOST}
results at the 10--20\% level. The study by \citet{1985ApJS...57..389L}
using photometric and spectrophotometric data for $T_{EFF}$, and
Str\"omgren photometry plus intermediate dispersion spectra for
$\log g$, is in reasonable agreement with our values, quoting
$T_{EFF}= 5930 \pm 70$ K and $\log g = 3.71 \pm 0.15$ [cm s$^{-2}$].
Similarly, \citet{2002ApJ...577..377M} find $\log g = 3.71 \pm 0.13$
[cm s$^{-2}$] using ultraviolet-visual spectrophotometry. In all of
these cases, indirect measures of $\log g$ and other astrophysical
parameters are overshadowed by the more direct, empirical methods
afforded by the unique capabilities of {\it PTI} and {\it MOST}.
\section{Conclusions}
While considerably more accurate, our angular diameter determination
is statistically consistent with the earlier measurement from
\citet{2005A&A...436..253T}, an apparent indication that these
results are free from systematic error at their stated level of
uncertainty. Our angular diameter and bolometric flux values have
led to an effective temperature and luminosity for the evolved star
$\eta$ Boo that is in direct agreement with those established by the
{\it MOST} asteroseismology mission. Furthermore, in conjunction
with the {\it MOST} value for stellar mass, a measure of stellar
surface gravity may be made.
The combination of the precise mass estimate from {\it MOST} and the
accurate radius measure of {\it PTI} has allowed us to derive a
precise value of the surface gravity for $\eta$~Boo. The measurement
of the surface gravity of $\eta$~Boo, independent of spectroscopy, is a
significant demonstration of the astrophysical investigative value
of combining high spatial resolution interferometry with high
temporal resolution photometry. Of the determinations of $\log g$
for $\eta$ Boo in the literature, we find that our value has a
claimed precision an order of magnitude greater than previous
measures.
\acknowledgements
We gratefully acknowledge fruitful discussions with Jaymie Matthews
and Theo ten Brummelaar. Science operations with PTI are conducted
through the efforts of the PTI Collaboration
(http://pti.jpl.nasa.gov/ptimembers.html), and we acknowledge the
invaluable contributions of our PTI colleagues. We particularly
thank Kevin Rykoski for his professional operation of PTI. This
research has made use of the SIMBAD database, operated at CDS,
Strasbourg, France. Funding for PTI was provided to the Jet
Propulsion Laboratory under its TOPS (Towards Other Planetary
Systems), ASEPS (Astronomical Studies of Extrasolar Planetary
Systems), and Origins programs and from the JPL Director's
Discretionary Fund. Portions of this work were performed at the Jet
Propulsion Laboratory, California Institute of Technology under
contract with the National Aeronautics and Space Administration.
\section{Introduction}
\label{sec:intro} Multi-object tracking is one of the fundamental problems in many applications. Despite an abundance of research, it is still far from practical use. The overwhelming majority of multi-target tracking algorithms are built on the assumption that the multi-object system model parameters are known a priori, which is generally not the case in practice \cite{Mahler_book}, \cite{BarShalom}. While tracking performance is generally tolerant to mismatches in the dynamic and measurement noise, the same cannot be said about missed detections and false detections. In particular, mismatches in the specification of missed detection and false detection model parameters such as the detection profile and clutter intensity can lead to significant bias or even erroneous estimates \cite{R_CPHD_Mahler}. \newline
\indent Unfortunately, except for a few application areas, exact knowledge of model parameters is not available. This is especially true in visual tracking, in which the missed detection and false detection processes vary with the detection methods. The detection profile and clutter intensity are obtained by trial and error. A major problem is the time-varying nature of the missed detection and false detection processes. Consequently, there is no guarantee that the model parameters chosen from training data will be sufficient for the multi-object filter at subsequent frames.\newline
\indent In radar target tracking applications, stochastic multi-object tracking algorithms based on Kalman filtering or Sequential Monte Carlo (SMC) methods have been widely used \cite{BarShalom}, \cite{Reid77}. This approach has also been used in visual multi-object tracking research \cite{Breitenstein11}, \cite{Tinne11}, \cite{Hoseinnezhad2013}. On the other hand, deterministic approaches such as network flow \cite{DAT08} and continuous energy optimisation \cite{Milan_PAMI} have become popular for the multi-object tracking problem in visual tracking applications. This approach is known to be free from tuning parameters; however, it is useful only when reliable object detection is available. \newline
\indent Unknown observation model parameters (i.e., clutter rate, detection profile) in online multi-object filtering was recently formulated in a joint estimation framework using random finite set (RFS) approach \cite{Mahler2003}, \cite{Mahler2007a}. Recently, Mahler \cite{R_CPHD_Mahler} showed that clever use of
the CPHD filter can accommodate unknown clutter rate and detection profile.
In \cite{Beard2013} it was demonstrated that by bootstrapping the clutter
estimator from \cite{R_CPHD_Mahler} to the Gaussian mixture CPHD filter \cite{Vo2007}, performance very close to the case with known clutter parameters
can be achieved. \cite{R_MeMBer_Vo} extended this to the multi-Bernoulli filter
with an SMC implementation. The multi-Bernoulli filter was used for visual multi-object tracking in \cite{DKim_ICCAIS14}. While solutions for filtering with unknown clutter rate exist, these filters do not provide tracks that identify different objects. In particular, this paper substantially extends the conference version of this work \cite{DKim_ICCAIS14} into a new algorithm that provides track identities with a completely new structure, and evaluates it using challenging pedestrian tracking and cell migration experiments. To the best of our knowledge, this paper is the first attempt at handling unknown false measurement information in online tracking. The main contribution of this paper is the design of a multi-object tracker that produces trajectories and estimates the unknown clutter rate on the fly.
\section{Problem Formulation}
Let ${\mathbb{X}}=\mathbb{R}^{n_{x}}$ denote the space of the
target kinematic state, and $\{0,1\}$ denote the discrete space of labels
for clutter model and actual targets. Then, the augmented state space
is given by
\begin{equation}
\breve{{\mathbb{X}}}={\mathbb{X}}\times
\{0,1\} \label{state_space}
\end{equation}
where $\times $ denotes a Cartesian product. Consequently, the state variable
contains the kinematic state and a target/clutter
indicator. We follow the convention from \cite{R_MeMBer_Vo} that the label $u=0$ will be used as a subscript to denote the clutter generators and the
label $u=1$ for actual targets.\newline
\indent Suppose that there are $T_{k}$ target and clutter objects, and we have
$O_{k}$ observations (i.e., detections). In the RFS framework, the
collections of targets (including clutter objects) and measurements can be
described as finite subsets of the state and observation spaces,
respectively as (\ref{RFS_form})
\begin{equation}
{\breve{X_{k}}} {=\{\breve{x}_{k,i}\}_{i=1}^{T_{k}}\subset \breve{{\mathbb{X}}}}{\small,}~~~~
{Z_{k}} {=\{z_{k,j}\}_{j=1}^{O_{k}}\subset \mathbb{Z}}{\small ,}
\label{RFS_form}
\end{equation}
where $\breve{x}_{k,i}$ represents the state of either an actual target or a clutter target; $z_{k,j}$ is a measurement, and $\mathbb{Z}$ is the space of measurements,
respectively. Considering the dynamics of the state, the RFS model of the
multi-target state at time $k$ consists of surviving targets and new targets
entering the scene. This new set is represented as the union
\begin{equation}
\begin{array}{llll}
\breve{X}_{k}\displaystyle{=\bigcup_{\breve{x}_{k-1}\in \breve{X}_{k-1}}{S_{k|k-1}(\breve{x}_{k-1})}~~\bigcup {\Gamma _{k}}},
\end{array}
\label{State_Set_Propo}
\end{equation}
where ${\small {\Gamma _{k}}}$ is a set of spontaneous birth objects (actual
target or clutter targets) and ${\small {S_{k|k-1}(\cdot )}}$ is the set of
survived object states at time $k$ with survival probability $p_{S}(x)<1$.
\newline
\indent The set of observations given the multi-target state is expressed as
\begin{equation}
\begin{array}{llll}
Z_{k}{=Z_{T,0,k}\bigcup Z_{T,1,k}},
\end{array}
\label{Obs_Set_Propo}
\end{equation}
where $Z_{T,0,k}$ and $Z_{T,1,k}$ are, respectively, sets of clutter and
target-originated observations with unknown detection probability $p_{D}(x)<1
$.\newline
\indent With the RFS multi-target dynamic and measurement model, the
multi-object filtering problem amounts to propagating multi-target posterior
density recursively forward in time via the Bayes recursion. Note that in
classical solutions to this filtering problem such as the PHD \cite{Mahler2003}, CPHD \cite{Mahler2007a}, and
multi-Bernoulli filters \cite{Vo_MeMber}, \cite{Hoseinnezhad2012}, \cite{Hoseinnezhad2013}, the
Poisson clutter intensity is given (rather than modeled through a set of clutter targets) and the detection profile $p_{D}(x)$ is also known a priori \cite{Mahler_book}.
\section{Multi-object tracker with unknown clutter rate}
\indent The aim of this paper is to propose a new online multi-object tracker that is able to accommodate an unknown clutter rate. For this purpose, the Robust Multi-Bernoulli (RMB) filter \cite{R_MeMBer_Vo} is employed to adapt to the unknown clutter rate. Then, the estimated clutter rate is plugged into the Generalized Labeled Multi-Bernoulli (GLMB) tracker \cite{VV13} to boost the performance in real-world scenarios.
\subsection{Robust Multi-Bernoulli Filter}
\indent The multi-Bernoulli filter parametrizes the multi-object posterior
density by a set of pairs, i.e., Bernoulli parameters, $\{(r^{(i)},p^{(i)})\}_{i=1}^{M}$, where $r^{(i)}$ and $p^{(i)}$ represent the
existence probability and the density of the state of the $i$-th of $M$ Bernoulli
components. In the following, the predicted and updated densities are represented
by propagating a set of Bernoulli parameters. The multi-Bernoulli filter
recursion on the augmented state space, called the RMB filter \cite{R_MeMBer_Vo},
is summarized to make the paper self-contained. \newline
\indent If the posterior multi-object density of the multi-Bernoulli form at
time $k-1$ is given as
\begin{equation}
\{(r_{k-1}^{(i)},p_{u,k-1}^{(i)})\}_{i=1}^{M_{k-1}}.
\label{previous_Bernoulli}
\end{equation}
\indent Then, the predicted multi-object density is approximated by the following
multi-Bernoulli
\begin{equation}
\{(r_{k|k-1}^{(i)},p_{u,k|k-1}^{(i)})\}_{i=1}^{M_{k|k-1}}.
\label{MB_predict}
\end{equation}
A set of predicted Bernoulli components is a union of birth components $\{(r_{\Gamma,k}^{(i)},p_{\Gamma,u,k}^{(i)})\}_{i=1}^{M_{\Gamma,k}}$ and
surviving components $\{(r_{P,k|k-1}^{(i)},p_{P,u,k|k-1}^{(i)})\}_{i=1}^{M_{k-1}}$, so that $M_{k|k-1}=M_{\Gamma,k}+M_{k-1}$. The birth Bernoulli components are chosen a priori by
considering the entrance region of the visual scene, e.g., image border. The
surviving components are calculated by
\begin{equation}
\begin{array}{ll}
r_{P,k|k-1}^{(i)}=r_{k-1}^{(i)}\sum_{u=0,1}\langle
p_{u,k-1}^{(i)},p_{S,u,k}\rangle , & \\
p_{P,u,k|k-1}^{(i)}(x)=\frac{\left\langle f_{u,k|k-1}(x|\cdot
)p_{u,k-1}^{(i)},p_{S,u,k}\right\rangle }{\left\langle
p_{u,k-1}^{(i)},p_{S,u,k}\right\rangle },\label{predict_eq} &
\end{array}
\end{equation}
where $x$ is the kinematic state, $p_{S,u,k}$ is the survival probability to time $k$ and $f_{u,k|k-1}(x|\cdot)$ is the state transition density
specified by either for actual target $f_{1,k|k-1}(x|\cdot )$ or for clutter target $f_{0,k|k-1}(x|\cdot )$.\newline
\indent If at time $k$, the predicted multi-target density is
multi-Bernoulli of the form (\ref{MB_predict}), then the updated
multi-Bernoulli density approximation is composed of the legacy components
with the subscript $L$ and the measurement updated components with the
subscript $U$ as follows (\ref{update_Bernoulli})
\begin{equation}
\begin{array}{ll}
\{(r_{L,k}^{(i)},p_{L,u,k}^{(i)})\}_{i=1}^{M_{k|k-1}}\cup
\{(r_{U,k}(z),p_{U,u,k}(\cdot;z))\}_{z\in Z_{k}}.\label{update_Bernoulli}
&
\end{array}
\end{equation}
The legacy and measurements updated components are calculated by a series of
equations (\ref{update_eq}) as follows.
\begin{equation}
\begin{array}{lllll}
r_{L,k}^{(i)}=\sum_{u=0,1}r_{L,u,k}^{(i)}, & & & & \\
r_{L,u,k}^{(i)}=\frac{r_{k|k-1}^{(i)}\left\langle
p_{u,k|k-1}^{(i)},1-p_{D,u,k}\right\rangle }{1-r_{k|k-1}^{(i)}\sum_{u^{\prime }=0,1}\left\langle p_{u^{\prime },k|k-1}^{(i)},p_{D,u^{\prime
},k}\right\rangle }, & & & & \\
p_{L,u,k}^{(i)}(x)=\frac{\big(1-p_{D,u,k}\big)p_{u,k|k-1}^{(i)}(x)}{\sum_{u^{\prime
}=0,1}\left\langle p_{u^{\prime },k|k-1}^{(i)},1-p_{D,u^{\prime
},k}\right\rangle }, & & & & \\
r_{U,k}(z)=\sum_{u=0,1}r_{U,u,k}(z), & & & & \\
r_{U,u,k}(z)=\frac{\sum_{i=1}^{M_{k|k-1}}\frac{r_{k|k-1}^{(i)}\big(1-r_{k|k-1}^{(i)}\big)\left\langle
p_{u,k|k-1}^{(i)},g_{u,k}(z|\cdot )p_{D,u,k}\right\rangle }{\big(1-r_{k|k-1}^{(i)}\sum_{u^{\prime }=0,1}\left\langle p_{u^{\prime
},k|k-1}^{(i)},p_{D,u^{\prime },k}\right\rangle \big)^{2}}}{\sum_{i=1}^{M_{k|k-1}}\frac{r_{k|k-1}^{(i)}\sum_{u^{\prime }=0,1}\left\langle
p_{u^{\prime },k|k-1}^{(i)},g_{u^{\prime },k}(z|\cdot )p_{D,u^{\prime
},k}\right\rangle }{1-r_{k|k-1}^{(i)}\sum_{u^{\prime }=0,1}\left\langle
p_{u^{\prime },k|k-1}^{(i)},p_{D,u^{\prime },k}\right\rangle }}, & & & &
\\
p_{U,u,k}(x;z)=& & & &\\\frac{\sum_{i=1}^{M_{k|k-1}}\frac{r_{k|k-1}^{(i)}}{1-r_{k|k-1}^{(i)}}p_{u,k|k-1}^{(i)}(x)g_{u,k}(z|x)\cdot p_{D,u,k}}{\sum_{u^{\prime
}=0,1}\sum_{i=1}^{M_{k|k-1}}\frac{r_{k|k-1}^{(i)}}{1-r_{k|k-1}^{(i)}}\left\langle p_{u^{\prime },k|k-1}^{(i)},g_{u^{\prime },k}(z|\cdot
)p_{D,u^{\prime },k}\right\rangle }\label{update_eq}
\end{array}
\end{equation}
where $p_{D,u,k}$ is the state-dependent detection probability and $g_{u,k}(z|x)$
is the measurement likelihood function that will be defined in the
following section. Note that the SMC implementation of the summarized equations (6)--(10) can be found in \cite{R_MeMBer_Vo}.
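As an illustrative sketch of one ingredient of (\ref{update_eq}), the existence probability of a legacy (missed-detection) component can be computed as follows, using the normalization $\sum_{u}\langle p_{u},1\rangle=1$ of the augmented density; this is a simplification of the full SMC implementation in \cite{R_MeMBer_Vo}, with illustrative names:
\begin{verbatim}
def legacy_existence(r_pred, pd_bar):
    # pd_bar[u] = <p_u, p_D,u> for u in {0: clutter, 1: target}.
    s = sum(pd_bar)
    return r_pred * (1.0 - s) / (1.0 - r_pred * s)
\end{verbatim}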
\subsection{Boosted Generalized labeled Multi-Bernoulli Filter}
The generalized labeled multi-Bernoulli (GLMB) filter provides a solution of the multi-object Bayes filter with unique labels. In this paper, the GLMB filter is used as a tracker that returns trajectories of multiple objects given the estimated clutter rate from the RMB. As shown in Fig. \ref{Fig:Diagram}, the GLMB and RMB filters are interconnected by sharing tracking parameters and facilitate a feedback mechanism for robust tracking against a time-varying clutter background. Note that a one-step RMB filter is used for the estimation of the clutter rate; thus, this is not a parallel implementation of an independent filter.
\begin{figure}
\begin{center}
\includegraphics[height=.55\linewidth]{./img/diagram}
\end{center}
\caption{Schematic diagram of the proposed tracker}
\label{Fig:Diagram}
\end{figure}
We call the proposed tracker the Boosted GLMB tracker.\newline
\indent In multi-object tracking with labels, formally, the state of an object at time $k$ is defined as $\mathbf{x}_{k}=(x_{k},\ell _{k})\in \mathbb{X\times L}_{k}$, where $\mathbb{L}_{k}$
denotes the label space for objects at time $k$ (including those born prior
to $k$). Note that $\mathbb{L}_{k}$ is given by $\mathbb{B}_{k}\cup \mathbb{L}_{k-1}$, where $\mathbb{B}_{k}$ denotes the label space for objects born at time $k$ (and is disjoint from $\mathbb{L}_{k-1}$); we do not consider clutter generators in designing the GLMB, thus the label $u$ is omitted. Suppose that there are $N_{k}$ objects at time $k$ as in (\ref{RFS_form}), but we only consider actual targets with labels, $\mathbf{x}_{k,1},...,\mathbf{x}_{k,N_{k}}$; in the context of multi-object tracking,
\begin{equation}
\mathbf{X}_{k}=\{\mathbf{x}_{k,1},...,\mathbf{x}_{k,N_{k}}\}\in \mathcal{F}(\mathbb{X\times L}_{k}),
\end{equation}
where $\mathcal{F}(\mathbb{X\times L}_{k})$ denotes the space of finite
subsets of $\mathbb{X\times L}_{k}$. We denote cardinality (number of
elements) of $\mathbf{X}$ by $|\mathbf{X}|$ and the set of labels of $\mathbf{X}$, $\{\ell :(x,\ell )\in \mathbf{X}\}$, by $\mathcal{L}_{\mathbf{X}}$. Note that since the label is unique, no two objects have the same label,
i.e. $\delta_{|\mathbf{X}|}(|\mathcal{L}_{\mathbf{X}}|)=1$. Hence $\Delta(\mathbf{X})\triangleq \delta _{|\mathbf{X}|}(|\mathcal{L}_{\mathbf{X}}|)$ is called the \emph{distinct label indicator}.\newline
\indent In the GLMB the posterior density takes the form of a
generalized labeled multi-Bernoulli
\begin{equation}
\mathbf{\pi}_{k-1}(\mathbf{X})=\Delta (\mathbf{X})\sum_{c\in \mathbb{C}}\omega _{k-1}^{(c)}(\mathcal{L}_{\mathbf{X}})\left[ p_{k-1}^{(c)}\right] ^{\mathbf{X}}.
\label{generalMulti_Bernoulli}
\end{equation}
Given the posterior multi-object density of the form (\ref{generalMulti_Bernoulli}), the predicted multi-object density to time $k$ is given by
\begin{equation}
\mathbf{\pi}_{k|k-1}(\mathbf{X})=
\displaystyle\Delta (\mathbf{X})\sum_{c\in \mathbb{C}}\omega_{k|k-1}^{(c
)}(\mathcal{L}_{\mathbf{X}})\left[ p_{k|k-1}^{(c)}\right] ^{\mathbf{X}} \label{eq:PropCKstrong1}
\end{equation}
where\\
\indent $\omega_{k|k-1}^{(c)}(L)=w_{B,k}(L\cap \mathbb{B}_{k})\omega_{S,k}^{(c)}(L\cap \mathbb{L}_{k-1}),$\\
\indent $p_{k|k-1}^{(c)}(x,\ell )=1_{\mathbb{L}_{k-1}}(\ell )p_{S,k}^{(c)}(x,\ell )+(1-1_{\mathbb{L}_{k-1}}(\ell ))p_{B,k}(x,\ell ),$\\
\indent $p_{S,k}^{(c)}(x,\ell )=\frac{\left\langle p_{S,k-1}(\cdot ,\ell )f_{k|k-1}(x|\cdot ,\ell ),p_{k-1}^{(c)}(\cdot ,\ell )\right\rangle }{\eta_{S,k}^{(c)}(\ell )},$\\
\indent $\eta_{S,k}^{(c)}(\ell )=\int \left\langle p_{S,k-1}(\cdot ,\ell )f_{k|k-1}(x|\cdot ,\ell ),p_{k-1}^{(c)}(\cdot ,\ell )\right\rangle dx,$\\
\indent $\omega_{S,k}^{(c)}(J)=[\eta _{S,k}^{(c)}]^{J}\sum_{I\subseteq \mathbb{L}_{k-1}}1_{I}(J)[q_{S}^{(c)}]^{I-J}\omega _{k-1}^{(c)}(I),$\\
\indent $q_{S}^{(c)}(\ell )=\left\langle 1-p_{S,k-1}(\cdot ,\ell ),p_{k-1}^{(c)}(\cdot ,\ell )\right\rangle ,$\\
where $c$ is the index of a track hypothesis, $L$ is an instance of the label set, and $I$ denotes track labels from the previous time step.
\indent Moreover, the updated multi-object density is given by
\begin{equation}\begin{array}{ll}
\resizebox{.90\hsize}{!}{$
\mathbf{\pi}_{k|k}(\mathbf{X}|Z_{k})=
\Delta(\mathbf{X})\displaystyle\sum_{c
\in \mathbb{C}}\sum\limits_{\theta \in
\Theta_{k}}\omega_{Z_{k}}^{(c,\theta )}(\mathcal{L}_{\mathbf{X}})\left[ p_{k|k}^{(c,\theta )}(\cdot |Z_{k})\right] ^{\mathbf{X}} $}\label{eq:ProbBayes_strong1}
\end{array}
\end{equation}
where $\Theta_{k}$ is the space of mappings $\theta :\mathbb{L}_{k}\rightarrow \{0,1,...,|Z_{k}|\},$ such that $\theta (i)=\theta
(i^{\prime })>0~$implies$~i=i^{\prime }$, and\\
~\indent $\omega_{Z_{k}}^{(c,\theta )}(L) \propto \delta_{\theta
^{-1}(\{0:|Z_k|\})}(L)\omega_{k|k-1}^{(c)}(L)[\eta_{Z_{k}}^{(c,\theta
)}]^{L}, $\\
\indent \indent \indent $p_{k|k}^{(c,\theta )}(x,\ell |Z_k) =\frac{p_{k|k-1}^{(c)}(x,\ell)\psi _{Z_{k}}(x,\ell ;\theta )}{\eta _{Z_{k}}^{(c,\theta )}(\ell )},$\\
\indent \indent \indent $\eta_{Z_{k}}^{(c,\theta)}(\ell )=\left\langle p_{k|k-1}^{(c)}(\cdot ,\ell ),\psi _{Z_{k}}(\cdot ,\ell ;\theta )\right\rangle ,$\\
\indent $\psi _{Z_{k}}(x,\ell ;\theta )=\delta _{0}(\theta (\ell
))(1-p_{D,k}(x,\ell ))\\
\indent \indent \indent \indent \indent +(1-\delta _{0}(\theta (\ell )))\frac{p_{D,k}(x,\ell )g_{k}(z_{\theta (\ell )}|x,\ell )}{\kappa
_{k}(z_{\theta (\ell )})}$\\
where $\kappa_{k}=\hat{\lambda}_c\,\mathcal{U}(\mathcal{Z})$ denotes the clutter density and $\hat{\lambda}_c$ is the estimated clutter rate from the RMB filter. Specifically, the clutter rate is obtained from the EAP estimate of the number of clutter targets as
\begin{equation}
\hat{\lambda}_c=\sum_{i=1}^{M_{k}}r^{(i)}_{0,k}p_{D,0,k}
\end{equation}
where $r^{(i)}_{0,k}$ is the existence probability of a clutter target introduced in the previous section, $p_{D,0,k}$ is the probability of detection for clutter targets, and $\mathcal{U}(\mathcal{Z})$ is a uniform density on the observation region $\mathcal{Z}$.
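In an SMC implementation this estimate reduces to a one-line sum over the clutter-generator components; a sketch with illustrative names:
\begin{verbatim}
import numpy as np

def clutter_intensity(r0, pD0, area):
    # r0: existence probabilities of the clutter components;
    # pD0: their detection probability; area: |Z|.
    lam_hat = float(np.sum(np.asarray(r0) * pD0))
    return lam_hat, lam_hat / area   # rate and kappa(z)
\end{verbatim}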
\begin{table*}
{\footnotesize \ }
\par
\begin{center}
{\footnotesize \
\begin{tabular}{|l|l|ccc|cccc|cc|}
\hline
\textbf{Dataset} & \textbf{Method} & \textbf{Recall} & \textbf{Precision} &
\textbf{FPF} & \textbf{GT} & \textbf{MT} & \textbf{PT} & \textbf{ML} &
\textbf{Frag} & \textbf{IDS} \\ \hline\hline
& {Boosted GLMB} & 90.2 \% & 89.5 \% & 0.03 & 19 & 90 \% & 10 \% & 0.0 \% & 23
& 10 \\
PETS09-S2L1 & {GLMB} \cite{VV13} & 82.6 \% & 81.4 \% & 0.16 & 19 & 82.7 \% & 17.3 \% &
0.0 \% & 23 & 12 \\
& {RMOT} \cite{RMOT} & 80.6 \% & 85.4 \% & 0.25 & 19 & 84.7 \% & 15.3 \% &
0.0 \% & 20 & 11 \\
\hline\hline
& {Boosted GLMB} & 83.4 \% & 85.6 \% & 0.10 & 10 & 80 \% & 20 \% & 0.0 \% & 12
& 16 \\
TUD-Stadtmitte & {GLMB} \cite{VV13} & 80.0 \% & 83.0 \% & 0.16 & 10 & 78.0 \% & 22.0 \% &
0.0 \% & 23 & 12 \\
& {RMOT} \cite{RMOT} & 82.9 \% & 86.6 \% & 0.19 & 10 & 80 \% & 20 \% &
0.0 \% & 10 & 16 \\
\hline\hline
ETH& {Boosted GLMB} & 73.1 \% & 82.6 \% & 0.78 & 124 & 60.4 \% & 34.6 \% & 5.0
\% & 110 & 20 \\
BAHNHOF and & {GLMB} \cite{VV13} & 71.5 \% & 76.3 \% & 0.88 & 124 & 58.7 \% & 27.4 \% &
13.9 \% & 112 & 40 \\
SUNNYDAY & {RMOT} \cite{RMOT} & 71.5 \% & 76.3 \% & 0.98 & 124 & 57.7 \% & 37.4 \% &
4.8 \% & 68 & 40 \\ \hline
\end{tabular}
}
\end{center}
\caption{Comparison with the state-of-the-art trackers}
\label{tab:comparison_state_of_the_art}
\end{table*}
\begin{figure}
\begin{center}
\includegraphics[height=.80\linewidth]{./img/ospa_distance}
\end{center}
\caption{Comparison of OSPA distance (Boosted GLMB)}
\label{Fig:Example1_RMB}
\end{figure}
\section{Experimental results}
In this section, two types of experimental results are given. A nonlinear multi-object tracking example is tested in order to show the performance of the proposed tracker with respect to the standard performance metric, i.e., OSPA distance \cite{OSPA}. In addition, the proposed tracker is also evaluated for visual multi-object tracking datasets \cite{Andriluka08}, \cite{EssCVPR07}, \cite{PETS09}.
\subsection{Object motion model and basic parameters}
The target dynamics are described by a coordinated turn model (\ref{turn_model})
\begin{equation}
f_{1,k|k-1}=\mathcal{N}(x_{k};m_{x,1,k|k-1}(x_{k-1}),P_{x,1,k|k-1}),
\label{turn_model}
\end{equation}
where $m_{x,1,k|k-1}(x_{k-1})=[F(\omega_{k-1})x_{k-1},\omega_{k-1}]^T$, $P_{x,1,k|k-1}=diag([\sigma^2_wGG^T,\sigma^2_{\omega}])$,
\begin{equation}
F(\omega) =
\begin{bmatrix}
1 & \frac{\text{sin}~\omega T}{\omega} & 0 & -\frac{1-\text{cos}~\omega T}{\omega} \\
0 & \text{cos}~\omega T & 0 & -\text{sin}~\omega T \\
0 & \frac{1-\text{cos}~\omega T}{\omega} &1 & \frac{\text{sin}~\omega T}{\omega}\\
0 & \text{sin}~\omega T & 0 & \text{cos}~\omega T
\end{bmatrix},
G=
\begin{bmatrix}
\frac{T^2}{2} &0\\
T &0\\
0 &\frac{T^2}{2}\\
0 &T
\end{bmatrix},
\end{equation}
where $T$ is the sampling time, $\sigma_w$ is the standard deviation of the process noise, and $\sigma_{\omega}$ is the standard deviation of the turn rate noise. These standard deviation values are determined by the maximum allowable object motion with respect to the image frame rate. For clutter targets, the transition density $f_{0,k|k-1}$ is given as a random walk to describe arbitrary motion \cite{R_MeMBer_Vo}.
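A direct transcription of $F(\omega)$ and $G$ (valid for $\omega \neq 0$; function names are illustrative) is:
\begin{verbatim}
import numpy as np

def ct_matrices(omega, T):
    # State ordering: [px, vx, py, vy]; omega != 0.
    s, c = np.sin(omega * T), np.cos(omega * T)
    F = np.array([[1, s / omega,       0, -(1 - c) / omega],
                  [0, c,               0, -s],
                  [0, (1 - c) / omega, 1,  s / omega],
                  [0, s,               0,  c]])
    G = np.array([[T**2 / 2, 0],
                  [T,        0],
                  [0, T**2 / 2],
                  [0, T]])
    return F, G
\end{verbatim}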
\subsection{Numerical example}
The proposed algorithm is tested on the nonlinear multi-target tracking scenario of \cite{VV13}, \cite{R_MeMBer_Vo}. Each actual target is observed through noisy bearing and range information $z_{k}=[\theta_{k}, r_{k}]^{T}$ and its likelihood function is given by
\begin{equation}
\begin{array}{llll}
g_{k}(z_k|x_k) = \mathcal{N}(z_k;m_{z,1,k}(x_{k}),P_{z,1,k}),
\end{array}\label{Likelihood1}
\end{equation}
where $m_{z,1,k}(x_{k})=[\arctan(p_{x,k}/p_{y,k}),\sqrt{p^{2}_{x,k}+p^{2}_{y,k}}]$ and $P_{z,1,k}=\operatorname{diag}([\sigma^{2}_{\theta},\sigma^{2}_{r}])$. For the RMB implementation, we follow the same parameter setting as in \cite{R_MeMBer_Vo}. We compare the GLMB tracker with known clutter rate against the proposed tracker (Boosted GLMB). As can be seen in Fig.~\ref{Fig:Example1_RMB}, the OSPA distances of both trackers are similar. This result verifies that the Boosted GLMB delivers reliable performance even when the clutter rate is unknown.
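As a sketch (our own illustration with hypothetical names, not the code used for the experiments), the bearing-range measurement model and its Gaussian likelihood can be written as:
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def bearing_range(x):
    # m_z(x) = [arctan(p_x / p_y), sqrt(p_x^2 + p_y^2)]
    # for a state x = [p_x, v_x, p_y, v_y, w].
    px, py = x[0], x[2]
    return np.array([np.arctan2(px, py), np.hypot(px, py)])

def likelihood(z, x, sigma_theta, sigma_r):
    # g(z | x) = N(z; m_z(x), diag(sigma_theta^2, sigma_r^2))
    P = np.diag([sigma_theta**2, sigma_r**2])
    return multivariate_normal.pdf(z, mean=bearing_range(x), cov=P)
\end{verbatim}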
\subsection{Pedestrian tracking in vision}
\indent For the evaluation on real-world data, we are interested in tracking multiple pedestrians. To detect pedestrians, we apply the state-of-the-art ACF detector of Doll\'ar et al.\ \cite{Detect}. The detector used in the experiment integrates a set of image channels (normalized gradient magnitude, histogram of oriented gradients, and LUV color channels) to extract various types of features in order to discriminate objects from the background.\\
\indent Assuming that the object state $x_k=[p_{x,k}, \dot{p}_{x,k}, p_{y,k}, \dot{p}_{y,k}]^{T}$ (x- and y-positions and velocities) is observed with additive Gaussian noise, the measurement likelihood function is given by
\begin{equation}
\begin{array}{llll}
g_{k}(z_k|x_k) = \mathcal{N}(z_k;Hx_k,\Sigma),
\end{array}\label{Likelihood2}
\end{equation}
where $\mathcal{N}(z;m,P)$ denotes a normal distribution with mean $m$ and covariance $P$, $z_k$ is the response of the designated detector, $H=[1~~0~~0~~0;~~0~~0~~1~~0]$, i.e., the x- and y-positions are observed by the detector, and $\Sigma$ is the covariance matrix of the observation noise.\\
\indent Sample detection results in Fig.~\ref{Fig:clutter} contain false positive detections from other types of objects with shapes similar to pedestrians. In our experience, the ACF detector is robust to partial occlusions; however, it produces more false positive detections than other single-model based detectors \cite{HOG}. Thus, it is relatively difficult to remove false positive detections by hard thresholding when the detector is used in visual scenes with time-varying imaging conditions or a moving camera. In particular, in visual scenes from autonomous vehicles, the average number of clutter measurements (i.e., the clutter rate) varies with the change in the field of view due to vehicle pose changes.
\begin{figure}[t]
\begin{center}
\includegraphics[height=.27\linewidth]{./img/img_clutter}
\includegraphics[height=.27\linewidth]{./img/img_clutter1}
\end{center}
\caption{Pedestrian detection results with clutter measurements}
\label{Fig:clutter}
\end{figure}
The basic assumption behind existing visual multi-object tracking methods is that the offline-designed detector, e.g., the HOG detector \cite{Detect}, \cite{HOG}, gives reasonably clean detections. Thus, direct data association algorithms such as \cite{DAT08} and \cite{Milan_PAMI} show reasonable performance with a small number of clutter measurements. However, in practice, false positive detections make data association results inaccurate and computationally intensive.\newline
\indent The ``S2.L1'' sequence from the popular PETS'09 dataset \cite{PETS09}, the ``TUD-Stadtmitte'' sequence from the TUD dataset \cite{Andriluka08}, and the ``Bahnhof'' and ``Sunnyday'' sequences from the ETHZ dataset \cite{EssCVPR07} are tested in the experiment, with a maximum of 8--15 targets moving in the scene. The number of targets varies in time due to births and deaths, and the measurement set includes target-originated detections and clutter. In our experiments, unlike previous works, we use the ACF detector with a low threshold for non-maximum suppression so as to have fewer missed detections at the cost of more false alarms with a time-varying rate. This is a more realistic setting, especially for the ETHZ dataset, which is recorded with frequent camera view changes. The Boosted GLMB is compared with the original GLMB (with fixed clutter rate) \cite{VV13} and the state-of-the-art online Bayesian multi-object tracker RMOT \cite{RMOT}. Quantitative results are summarized in Table~\ref{tab:comparison_state_of_the_art} using the well-known performance indexes given in \cite{KuoCVPR11}. The Boosted GLMB shows superior performance compared to the GLMB in all indexes and is comparable to the recent online tracker RMOT. The proposed Boosted GLMB outperforms the other trackers with respect to FPF, since it is able to effectively reject clutter with the estimated clutter rate. On the other hand, inferior performance in fragmentations and ID switches is observed compared with RMOT because of the lack of a relative motion model. \\
\indent In summary, the experiments verify that the Boosted GLMB filter is effective when the clutter rate is not known a priori, as is often the case in real-world applications. We ran the experiments with unoptimized MATLAB code on a laptop with an Intel 2.53\,GHz CPU. The computation time per image frame of size 768$\times$586 is 0.2\,s, which is reasonably suitable for real-time visual tracking applications. Further speed-ups can be achieved by code optimization.
\begin{figure}
\begin{center}
\includegraphics[height=.70\linewidth]{./img/1}
\end{center}
\caption{A snapshot of microscopy image of stem cells}
\label{Fig:cell_snapshot}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=.41\linewidth]{./img/Fig2}
\caption{Reconstructed cell trajectories (left: MHT \cite{Nicolas13}, right: Boosted GLMB)}
\label{Fig:comparision_tracks}
\end{figure}
\subsection{Cell migration analysis in microscopy image}
\begin{table}
\caption{Comparison of averaged OSPA distance }
\begin{center}
{\small \
\begin{tabular}{|c|c|}
\hline
\textbf{Method} & \textbf{Average OSPA}\\
\hline\hline
Boosted GLMB (Ours) & 5 \\ \hline
MHT \cite{Nicolas13} & 8.5 \\ \hline
\end{tabular}
}
\end{center}
\label{tab:averaged OSPA}
\end{table}
The proposed algorithm is also tested on live-cell microscopy image data for cell migration analysis. The proposed GLMB tracking method is applied to a real stem cell migration sequence, as illustrated in Fig.~\ref{Fig:cell_snapshot}. The image sequence is recorded over 3 days, i.e., 4320 min, with one image taken every 16 min. \newline
\indent A performance comparison is conducted with the state-of-the-art Multiple Hypothesis Tracker (MHT) \cite{Nicolas13}. The same motion and measurement models are used as in the first experiments, and the spot detection of \cite{Nicolas13} is applied for a fair comparison. As shown in Fig.~\ref{Fig:comparision_tracks}, the Boosted GLMB provides reliable tracking results compared to the MHT, which was tuned to obtain its best tracking results. The Boosted GLMB tracker produces significantly fewer false tracks and alleviates fragmented tracks because it efficiently manages time-varying clutter information and keeps confident tracks. Quantitatively, as can be seen in Table \ref{tab:averaged OSPA}, the time-averaged OSPA distances \cite{OSPA} of both trackers verify that the Boosted GLMB performs reliably even when the clutter rate is unknown.
\section{Conclusion}
\label{sec:Conclusion}
In this paper, we propose a new multi-object tracking algorithm for unknown clutter rate based on two interconnected random finite set filters. The unknown clutter rate is estimated using a one-step robust Bernoulli filter \cite{R_MeMBer_Vo}. Trajectories of objects are then estimated using the GLMB filter \cite{VV13} with the clutter rate estimated online. The two filters share tracking parameters, so no tuning of clutter parameters is needed. Comparisons on a synthetic nonlinear multi-object tracking scenario and on visual tracking datasets (visual surveillance and biomedical) with state-of-the-art online trackers illustrate that the proposed multi-object tracker performs reliably. An interesting future research direction is the extension of the tracking algorithm to an adaptive survival probability and to the handling of missed detections for further improvement.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
The investigation of SU(N) gauge theories with fermions in higher representations has several different motivations.
Some of them are of a purely theoretical nature, like the question of whether a dynamics and a particle spectrum completely different
from QCD can be observed.
Even more interesting are phenomenological applications of these theories. In possible extensions of the standard model they
are an alternative to simply modified versions of QCD with fermions in the fundamental representation. One example is
technicolour theories, which provide a more natural representation of the electroweak sector by introducing a new strong dynamics.
The Higgs particle emerges in this case as a bound state in the new strongly interacting sector.
There are different requirements from phenomenological models for a technicolour theory that depend on the chosen extension of the standard model.
In this work we focus on the near conformal behaviour, the appearance of a light scalar particle,
and a large mass anomalous dimension.
The near conformal, or walking, behaviour with a small number of fermion flavours is required to
avoid possible tensions with electroweak precision data. This can be achieved with fermions in higher representation of the gauge group.
Several analytical \cite{Sannino:2004qp,Dietrich:2006cm,Braun:2010qs} and numerical lattice studies
\cite{Catterall:2007yx,Hietanen:2009zz,DelDebbio:2010hx,DeGrand:2011qd,Appelquist:2011dp,DeGrand:2010na,Fodor:2015zna,Hasenfratz:2015ssa,Athenodorou:2014eua}
have been dedicated to the investigation of the conformality in different gauge theories.
The conformal behaviour manifests itself in the mass spectrum of the theory. The masses $M$ of all states should scale to zero according to $M\propto m^{1/(1+\gamma_\ast)}$, where $m$ is the residual quark mass
and $\gamma_\ast$ the universal mass anomalous dimension. This behaviour should be observable once $m$ is below a certain threshold. It is quite different from the chiral symmetry breaking scenario,
where at small $m$ a clear separation appears between the pseudo Nambu-Goldstone bosons (pNGb) and the rest of the spectrum: the mass of the pNGb eventually goes to zero in the chiral limit,
whereas the masses of the other particles remain finite. It is in general difficult to discern to which of the two classes a given theory belongs, since lattice simulations are always restricted to a certain range of $m$ and the chiral limit can only be reached by extrapolation. A comparison of different theories might therefore help to resolve the differences between the conformal and the chiral symmetry breaking scenario.
It is important to choose a comparable lattice realisation in such a comparison since lattice artefacts might have a significant influence on the scaling behaviour.
The adjoint representation is particularly interesting among the higher representations of the gauge group.
The minimal walking technicolour (MWT), the \su{2} gauge theory with two Dirac fermions in the adjoint representation, is a candidate for a technicolour extension of the
standard model. Further interesting gauge theories, with fermions in the symmetric and anti-symmetric representations, are related to the adjoint representation by large $N_c$ equivalence.
This leads to constraints on the conformal window of the symmetric representation that can be deduced from the adjoint one \cite{Bergner:2015dya}.
In gauge theories with fermions in the adjoint representation, the number of degrees of freedom in the fermionic and bosonic sector can be equal, as required by supersymmetry.
Therefore supersymmetric Yang-Mills theory (SYM) is also among the strongly interacting gauge theories having fermions in the adjoint representation (adjoint QCD).
Specific states that appear in the adjoint representation, but not in the fundamental one, might have applications in extensions of the standard model.
\section{Continuum action and chiral symmetry breaking in adjoint QCD}
The Lagrangian of adjoint QCD has the following form
\begin{equation}
\mathcal{L}=
\Tr\left[-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
+\sum_{i}^{N_f}\bar{\psi}_{i}(\slashed{D}+m)\psi_{i}\right]\, ,
\end{equation}
where we assume the gauge symmetry to be \su{2}.
$\psi$ is a Dirac-Fermion in the adjoint representation
with the covariant derivative
\begin{equation}
D_\mu \psi =\partial_{\mu}\psi + i g [A_{\mu},\psi]\, .
\end{equation}
The adjoint representation is consistent with the Majorana condition
$\lambda=C\bar{\lambda}^T$ and each Dirac fermion can be decomposed into two Majorana fermions.
$N_f$ counts the Dirac flavours and, consequently, there are theories with half integer $N_f$
corresponding to an odd number of Majorana flavours.
The representation in terms of $2N_f$ Majorana flavours indicates a chiral symmetry breaking pattern by the formation of a chiral condensate that is different from QCD:
\begin{align}
\su{2N_f} \rightarrow \so{2N_f}\, .
\label{chiralbreak}
\end{align}
Consequently there are $2N_f^2+N_f-1$ pNGb in adjoint QCD.
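This counting follows directly from the dimensions of the coset space,
\begin{equation}
\dim \su{2N_f} - \dim \so{2N_f} = \left[(2N_f)^2 - 1\right] - N_f(2N_f - 1) = 2N_f^2 + N_f - 1\, .
\end{equation}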
Notably, a number of different notations exist for the pNGb and other states of these theories.
This stems from the different contexts in which these theories are considered, related to supersymmetry, QCD, or the \su{2}
version of QCD. The symmetry breaking pattern of \eqref{chiralbreak} cannot be directly realised in SYM with $N_f=1/2$, but the theory can be considered as partially quenched
$N_f=1$ adjoint QCD. The corresponding partially quenched chiral perturbation theory has been formulated in \cite{Munster:2014cja} and is applied
in the extrapolation of the chiral limit in SYM. In this case the pNGb is called adjoint pion to emphasise the similarity to
chiral perturbation theory in QCD.
In $N_f=1$ adjoint QCD the unbroken \so{2} is equivalent to the $\uu{1}_V$ corresponding to baryon number conservation in the Dirac fermion formulation. The
\su{2} gauge group allows one to construct baryonic operators from two fermion fields. In \cite{Athenodorou:2014eua} the pNGb, represented by the operator $\psi^T C \gamma_5 \psi$,
therefore carries baryon number $2$ and is called a scalar baryon.
In the investigations of MWT the pNGb is usually called pseudoscalar meson \cite{DelDebbio:2010hx}.
\section{Lattice setup}
In our simulations we have chosen a tree level Symanzik improved gauge action and a Dirac-Wilson operator with stout smeared links in the fermionic part of the action.
More details of our results for SYM have been presented in other contributions of this conference \cite{Bergner:2015cqa,Bergner:2015lba}. For this theory we have data at
three different $\beta$ in the range of $1.6$ to $1.9$, which allows a reasonable estimation of lattice artefacts and continuum extrapolations.
The mass of the adjoint pion in lattice units in the relevant runs varies from $0.6$ down to $0.2$.
Apart from the different number of fermion flavours, the same lattice action was used in the simulations of MWT. The considered range of $\beta$ values is in this case limited by the bulk transition.
We have determined a bulk transition around $\beta=1.4$ with our lattice action. The control of finite volume effects is important in the
investigations of a conformal theory. Therefore we have chosen a rather small $\beta$ of $1.5$ in our first analysis. In a second step we have also
done simulations at $\beta=1.7$ to check for possible lattice artefacts. The pion mass at these runs was between about $0.9$ and $0.2$ in lattice units.
\section{Conformal window and comparison of MWT and SYM}
Our results for the mass spectrum of SYM have been reported in \cite{Bergner:2013nwa,Bergner:2015lba}. We have performed extrapolations to the chiral limit defined by a vanishing adjoint pion mass at each fixed lattice spacing.
At this chiral point, supersymmetry, which is broken by the lattice regularisation, is restored in the continuum limit, and even at a finite lattice spacing
we find no considerable indication for supersymmetry breaking in the supersymmetric Ward identities.
The extrapolation in the scalar sector is shown in Fig.~\ref{gbextrapol} and \ref{f0extrapol}. The masses of the other states are larger than the adjoint pion mass in
the considered parameter range and they extrapolate to a finite value in the chiral limit. As required by supersymmetry there is a degeneracy between bosonic and fermionic masses.
The scalar singlet meson, $\text{a-}f_0$, and the $0^{++}$ glueball have almost the same mass. These operators have the same quantum numbers and seem to have both a reasonable overlap with the
ground state in this channel.
\begin{figure}[h]
\centering
\subfigure[SYM $0^{++}$ extrapolation]{\includegraphics[width=0.48\textwidth]{final_b175_gb_extrapol.pdf}\label{gbextrapol}}
\subfigure[SYM $\text{a-}f_0$ extrapolation]{ \includegraphics[width=0.48\textwidth]{final_b175_f0_extrapol.pdf}\label{f0extrapol}}
\caption{This figure shows a part of the mass spectrum of supersymmetric Yang-Mills theory at $\beta=1.75$ and the extrapolation to the chiral limit defined by a vanishing adjoint pion mass $m_{\pi}$.
For the gluino-glue particle only the result of the chiral extrapolation is shown, not the measured data points.
(a) The fermionic gluino-glue and the bosonic scalar glueball $0^{++}$ are compared. (b) The scalar singlet meson, $\text{a-}f_0$, is shown in comparison with the chiral extrapolation of the gluino-glue particle. }
\end{figure}
The results for MWT are completely different, see Fig.~\ref{mwtspect1} and \ref{mwtspect2}. The chiral extrapolation is in this case determined by the PCAC quark mass.
All of the masses scale to zero in the chiral limit and their ratios are constant. The particle with the lowest mass is the scalar glueball and not the pseudoscalar meson, the pNGb of this theory.
Our simulations at the second $\beta$, corresponding to a smaller lattice spacing, are consistent with this picture.
These observations are clearly indicating a conformal scenario, in contrast to the chiral symmetry breaking scenario of SYM.
As shown in Fig.~\ref{mscaling}, we find a scaling of the states with a mass anomalous dimension around $0.38$, consistent with the value determined in \cite{Debbio:2014wpa} for this theory.
There is a deviation at the two lightest PCAC masses, where the spin-1/2 state and the glueball become heavier than the mesons, but even on the $32^3\times 64$ lattice this is most likely a finite size effect.
In general, finite size effects are severe in MWT, as was also pointed out in \cite{Debbio:2014wpa}, and we plan to clarify the relevance of these effects in our next investigations.
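As an aside, the quoted value of $\gamma_\ast$ can be illustrated by a simple fit of the scaling law $M \propto m^{1/(1+\gamma_\ast)}$ in log-log coordinates; the following Python sketch (our own illustration on synthetic numbers, not the analysis code used for this work) shows the idea:
\begin{verbatim}
import numpy as np

def fit_gamma(m_pcac, masses):
    # Fit log M = const + log(m) / (1 + gamma) and return gamma.
    slope, _ = np.polyfit(np.log(m_pcac), np.log(masses), 1)
    return 1.0 / slope - 1.0

# Synthetic illustration with gamma = 0.38:
m = np.array([0.05, 0.1, 0.2, 0.4])
M = 2.0 * m ** (1.0 / 1.38)
print(fit_gamma(m, M))  # prints approximately 0.38
\end{verbatim}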
\begin{figure}[t]
\centering
\subfigure[MWT masses in lattice units]{\includegraphics[width=0.48\textwidth]{plotmassesb150.pdf}\label{mwtspect1}}
\subfigure[MWT mass ratios]{ \includegraphics[width=0.48\textwidth]{combindesrations.pdf}\label{mwtspect2}}
\caption{The particle masses in $N_f=2$ adjoint QCD (MWT) are shown as a function of the PCAC quark mass ($m_{\text{PCAC}}$). (a) The masses of the vector ($m_\text{V}$) and pseudoscalar ($m_{\text{PS}}$) meson, the spin-1/2 state ($m_{\text{spin-1/2}}$), and the $0^{++}$ glueball are shown together with the string tension $\sigma$ for the two different volumes $24^3\times 64$ and $32^3\times 64$ ($\beta=1.5$). All quantities are represented in lattice units. (b) The mass ratios of these different states and the pseudoscalar meson mass excluding the runs with the smallest $m_\text{PCAC}$ that are probably affected by finite size effects. The data at the smaller lattice spacing ($\beta=1.7$) are included in this figure. }
\end{figure}
\section{Fractionally charged particles and scalar singlet meson operators in MWT}
The adjoint representation of the fermions allows for certain states that have no counterpart in QCD with fundamental fermions. One of them is a spin-1/2 operator
composed of fermion fields and gauge bosons
\begin{equation}
O_{\text{spin-1/2}}=\sum_{\mu,\nu} \sigma_{\mu\nu} \Tr\left[F^{\mu\nu} \lambda \right]\, .
\end{equation}
In SYM this operator is essential since it corresponds to the gluino-glue, the fermionic partner of the bosonic glueball or meson operator. Unbroken supersymmetry implies multiplets of
fermions and bosons with the same mass. The low energy effective theory must therefore contain such kind of fermionic bound states.
In MWT the spin-1/2 state is relevant for phenomenological considerations since it leads to fractionally charged particles when a naive hypercharge assignment is used. Even though the
mass of these particles had not been measured before, they have been considered to disfavour the phenomenological relevance of the theory. This was essentially one of the motivations
to consider the \so{4} gauge theory as an alternative \cite{Hietanen:2013gva}. In \cite{Kouvaris:2007iq}, on the other hand, these particles were considered as an alternative dark matter scenario.
With our current investigations we were able to show that the mass of the spin-1/2 state is well separated from that of the lightest scalar particle in the theory. On the other hand, it is slightly lighter than the
pseudoscalar meson, which means that it could be one of the first experimentally observable ``new physics'' states in this theory.
The light scalar state is one of the most important ingredients of this theory since it would correspond to the observed Higgs particle. In SYM the
scalar singlet meson operator and the glueball provide two independent measurements of the lightest scalar mass (compare Fig.~\ref{gbextrapol} and \ref{f0extrapol}).
For this reason we have measured, besides the glueball, also the scalar singlet meson in MWT.
The correlator of the scalar meson is dominated by the disconnected contribution, as shown in Fig.~\ref{f0disc}, resulting in a large separation between the scalar singlet and the triplet meson.
In this particular example the mass in lattice units of the triplet (connected contribution only) is $0.747(17)$, while that of the singlet (connected + disconnected) is $0.540(53)$. The obtained scalar mass is below
the pseudoscalar meson mass ($0.5873(3)$). We were, however, not able to see a degeneracy with the light scalar glueball, which has a mass of only $0.33(2)$ in lattice units. One reason is, of course, the large separation of this state from the rest of the spectrum, but it might also be a more generic feature of conformal theories.
\begin{figure}[t]
\subfigure[MWT scaling of the masses]{\includegraphics[width=0.48\textwidth]{scalingg5.pdf}\label{mscaling}}
\subfigure[MWT scalar meson contributions]{ \includegraphics[width=0.48\textwidth]{f00disconnected_final.pdf}\label{f0disc}}
\caption{(a) The scaling of the masses compared to the expected scaling with a mass anomalous dimension of $0.38$. A linear behaviour in this plot indicates that the data are consistent with the conformal scaling.
(b) An example for the connected and disconnected contributions to the scalar singlet meson correlator $C(t)$ at $\beta=1.5$, $m_\text{PS}=0.5873(3)$. }
\end{figure}
\section{Comparison to one flavour adjoint QCD and outlook}
MWT and SYM show clear indications of very different behaviour. SYM is expected to be a confining theory, which is in accordance with the lattice results. For MWT we find indications of conformal behaviour, in accordance with other lattice studies.
Between these two examples, with reasonable signals for non-conformal and conformal behaviour, respectively, there are the theories with $N_f=1$ and $N_f=3/2$. The data from the first lattice simulations at $N_f=1$ have been published in \cite{Athenodorou:2014eua}. There is evidence that even this theory is rather on the conformal side, as indicated, for example, by the nearly constant mass ratios. Further features of the theory are a light scalar and a large mass anomalous dimension of $\gamma_\ast=0.92(1)$. In contrast to MWT, the spin-1/2 particle is clearly heavier than the pNGb, and
there is a degenerate signal for the scalar mass in the singlet meson and the scalar glueball measurements.
In further studies we will complete the picture with results for $N_f=3/2$ adjoint QCD.
The determination of the conformal window for the adjoint representation has interesting consequences also for the studies of other theories, in particular with fermions in the symmetric representation.
MWT appears to be one of the most challenging theories from the point of view of numerical simulations due to the large finite volume effects.
\section*{Acknowledgments}
This project is supported by the John von Neumann Institute for Computing
(NIC) with grants of computing time.
We also gratefully acknowledge the Gauss Centre for Supercomputing e.V. for funding this project by providing
computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre.
Further computing time has been
provided by the compute cluster PALMA of the University of M\"unster.
Optimization is at the core of machine learning and many other fields
of applied research, for instance operations research, optimal
control, and deep learning. The latter fields have embraced frameworks
that combine a modeling language with only a few optimization solvers;
interior point solvers in operations research and stochastic gradient
descent (SGD) and variants thereof in deep learning frameworks like
TensorFlow, PyTorch, or Caffe. That is in stark contrast to classical
(i.e., non-deep) machine learning, where new problems are often
accompanied by new optimization algorithms and their
implementation. However, designing and implementing optimization
algorithms is still a time-consuming and error-prone task.
The lack of an optimization framework for classical machine learning
problems can be explained partially by the common belief that any
efficient solver needs to exploit problem-specific structure. Here, we
challenge this common belief.
We introduce GENO (GENeric Optimization), an optimization framework
that allows to state optimization problems in an easy-to-read modeling
language. From the specification an optimizer is automatically
generated by using automatic differentiation on a symbolic level. The
optimizer combines a quasi-Newton solver with an augmented Lagrangian
approach for handling constraints.
Any generic modeling language plus solver approach frees the user from
tedious implementation aspects and allows the user to focus on modeling aspects
of the problem at hand. However, it is required that the solver is
efficient and accurate. Contrary to common belief, we show here that
the solvers generated by GENO are (1) as efficient as well-engineered,
specialized solvers at the same or better accuracy, (2) more efficient
by a decent margin than recent state-of-the-art solvers, and (3)
orders of magnitude more efficient than classical modeling language
plus solver approaches.
\paragraph{Related work.}
Classical machine learning is typically served by toolboxes like
scikit-learn~\cite{scikit-learn}, Weka~\cite{weka2}, and
MLlib~\cite{mllib}. These toolboxes mainly serve as wrappers for a
collection of well-engineered implementations of standard solvers like
LIBSVM~\cite{ChangL01} for support vector machines or
glmnet~\cite{FriedmanHT09} for generalized linear models. A
disadvantage of the toolbox approach is a lack of flexibility. Even a
slightly modified model, for instance one with an added non-negativity
constraint, might not be covered by the framework.
Modeling languages provide more flexibility since they allow to
specify problems from large problem classes. Popular modeling
languages for optimization are CVX~\cite{cvx,cvx2} for MATLAB and its
Python extension CVXPY~\cite{cvxpy2,cvxpy}, and
JuMP~\cite{jump} which is bound to Julia. In the operations research
community AMPL~\cite{fourerGK03} and GAMS~\cite{Gams} have been used
for many years. All these languages take an instance of an
optimization problem and transform it into some standard form of a
linear program (LP), quadratic program (QP), second-order cone program
(SOCP), or semi-definite program (SDP). The transformed problems is
then addressed by solvers for the corresponding standard
form. However, the transformation into standard form can be
inefficient, because the formal representation in standard form can
grow substantially with the problem size. This representational
inefficiency directly translates into computational inefficiency.
The modeling language plus solver paradigm has been made deployable in
the CVXGEN~\cite{Cvxgen}, QPgen~\cite{qpgen}, and OSQP~\cite{osqpgen}
projects. In these projects code is generated for the specified
problem class. However, the problem dimension and sometimes the
underlying sparsity pattern of the data needs to be fixed. Thus,
the size of the generated code still grows with a growing problem
dimension. All these projects are targeted at embedded systems and are
optimized for small or sparse problems. The underlying solvers are
based on Newton-type methods that solve a Newton system of equations
by direct methods. Solving these systems is efficient only for small
problems or problems where the sparsity structure of the Hessian can
be exploited in the Cholesky factorization. Neither condition is
typically met in standard machine learning problems.
Deep learning frameworks like TensorFlow~\cite{tf},
PyTorch~\cite{pytorch}, or Caffe~\cite{caffe} are efficient and fairly
flexible. However, they target only deep learning problems that are
typically unconstrained problems that ask to optimize a separable sum
of loss functions. Algorithmically, deep learning frameworks usually
employ some form of stochastic gradient descent
(SGD)~\cite{robbins1951}, the rationale being that computing the full
gradient is too slow and actually not necessary. A drawback of
SGD-type algorithms is that they need careful parameter tuning of, for
instance, the learning rate or, for accelerated SGD, the
momentum. Parameter tuning is a time-consuming and often
data-dependent task. A careless choice of these parameters can make
the algorithm slow or even cause it to diverge. Also, SGD-type
algorithms cannot handle constraints.
GENO, the framework that we present here, differs from the standard
modeling language plus solver approach by a much tighter coupling of
the language and the solver. GENO does not transform problem instances
but whole problem classes, including constrained problems, into a very
general standard form. Since the standard form is independent of any
specific problem instance it does not grow for larger instances. GENO
does not require the user to tune parameters and the generated code
is highly efficient.
\begin{table}[h!]
\centering
\caption{Comparison of approaches/frameworks for optimization in
machine learning.}
\label{tab:advantages}
\begin{tabular}{p{44mm}ccccc}
\toprule
& handwritten & TensorFlow, & Weka, & \multirow{2}{*}{CVXPY} & \multirow{2}{*}{GENO}\\
& solver & PyTorch & Scikit-learn\\
\midrule
flexible
& \textcolor{red!90!black}{\xmark} & \textcolor{green!70!black}{\cmark} & \textcolor{red!90!black}{\xmark} & \textcolor{green!70!black}{\cmark} & \textcolor{green!70!black}{\cmark} \\
efficient
& \textcolor{green!70!black}{\cmark} & \textcolor{green!70!black}{\cmark} & \textcolor{green!70!black}{\cmark} & \textcolor{red!90!black}{\xmark} & \textcolor{green!70!black}{\cmark} \\
deployable / stand-alone
& \textcolor{green!70!black}{\cmark} & \textcolor{red!90!black}{\xmark} & \textcolor{red!90!black}{\xmark} & \textcolor{red!90!black}{\xmark} & \textcolor{green!70!black}{\cmark} \\
can accommodate constraints
& \textcolor{green!70!black}{\cmark} & \textcolor{red!90!black}{\xmark} & \textcolor{green!70!black}{\cmark} & \textcolor{green!70!black}{\cmark} & \textcolor{green!70!black}{\cmark} \\
parameter free (learning rate, ...) \hspace{-3mm}
& \textcolor{red!90!black}{\xmark}/\textcolor{green!70!black}{\cmark} & \textcolor{red!90!black}{\xmark} & \textcolor{green!70!black}{\cmark} & \textcolor{green!70!black}{\cmark} & \textcolor{green!70!black}{\cmark} \\
allows non-convex problems
& \textcolor{green!70!black}{\cmark} & \textcolor{green!70!black}{\cmark} & \textcolor{green!70!black}{\cmark} & \textcolor{red!90!black}{\xmark} & \textcolor{green!70!black}{\cmark} \\
\bottomrule
\end{tabular}
\end{table}
\section{The GENO Pipeline}
GENO features a modeling language and a solver that are tightly
coupled. The modeling language allows to specify a whole class of
optimization problems in terms of an objective function and
constraints that are given as vectorized linear algebra
expressions. Neither the objective function nor the constraints need
to be differentiable. Non-differentiable problems are transformed into
constrained, differentiable problems. A general purpose solver for
constrained, differentiable problems is then instantiated with the
objective function, the constraint functions and their respective
gradients. The gradients are computed by the matrix calculus
algorithm that has been recently published
in~\cite{LaueMG18}. The tight integration of the modeling language and
the solver is possible only because of this recent progress in
computing derivatives of vectorized linear algebra expressions.
Generating a solver takes only a few milliseconds. Once it has been
generated the solver can be used like any hand-written solver for
every instance of the specified problem class. An online interface
to the GENO framework can be found at
\href{http://www.geno-project.org}{\texttt{http://www.geno-project.org}}.
\subsection{Modeling Language}
\begin{minipage}{0.68\textwidth}
A GENO specification has four blocks (cf.\ the example to the right
that shows an $\ell_1$-norm minimization problem from compressed
sensing where the signal is known to be an element from the unit
simplex.): (1) Declaration of the problem parameters that can be of
type \emph{Matrix}, \emph{Vector}, or \emph{Scalar}, (2) declaration
of the optimization variables that also can be of type
\emph{Matrix}, \emph{Vector}, or \emph{Scalar}, (3) specification of
the objective function in a MATLAB-like syntax, and finally (4)
specification of the constraints, also in a MATLAB-like syntax that
supports the following operators and functions: \texttt{+, -, *, /,
.*, ./, $\wedge$, $.\wedge$, log, exp, sin, cos, tanh, abs, norm1,
norm2, sum, tr, det, inv}. The set of operators and functions can
be expanded when needed.
\end{minipage}
\quad
\begin{minipage}{0.28\textwidth}
\begin{Verbatim}[frame=single]
parameters
Matrix A
Vector b
variables
Vector x
min
norm1(x)
st
A*x == b
sum(x) == 1
x >= 0
\end{Verbatim}
\end{minipage}
Note that in contrast to instance-based modeling languages like CVXPY
no dimensions have to be specified. Also, the specified problems do
not need to be convex. In the non-convex case, only a locally optimal
solution will be computed.
\subsection{Generic Optimizer}
At its core, GENO's generic optimizer is a solver for unconstrained,
smooth optimization problems. This solver is then extended to handle
also non-smooth and constrained problems. In the following we first
describe the smooth, unconstrained solver before we detail how it is
extended to handling non-smooth and constrained optimization problems.
\paragraph{Solver for unconstrained, smooth problems.}
There exist quite a number of algorithms for unconstrained
optimization. Since in our approach we target problems with a few
dozen up to a few million variables, we decided to build on a
first-order method. This still leaves many options. Nesterov's
method~\cite{Nesterov83} has an optimal theoretical running time, that
is, its asymptotic running time matches the lower bounds in $\Omega
(1/\sqrt{\varepsilon})$ in the smooth, convex case and $\Omega(\log(1/
\varepsilon))$ in the strongly convex case with optimal dependence on
the Lipschitz constants $L$ and $\mu$ that have to be known in
advance. Here $L$ and $\mu$ are upper and lower bounds, respectively,
on the eigenvalues of the Hessian. On quadratic problems quasi-Newton
methods share the same optimal convergence
guarantee~\cite{huang70,Nazareth79} without requiring the values for
these parameters. In practice, quasi-Newton methods often outperform
Nesterov's method, although they cannot beat it in the worst case. It
is important to keep in mind that theoretical running time guarantees
do not always translate into good performance in practice. For
instance, even the simple subgradient method has been shown to have a
convergence guarantee in $O(\log(1/ \varepsilon))$ on strongly convex
problems~\cite{Goffin77}, but it is certainly not competitive on real
world problems.
Hence, we settled on a quasi-Newton method and implemented the
well-established \mbox{L-BFGS-B} algorithm~\cite{ByrdLNZ95,ZhuBLN97}
that can also handle box constraints on the variables. It serves as
the solver for unconstrained, smooth problems. The algorithm combines
the standard limited memory quasi-Newton method with a projected
gradient path approach. In each iteration, the gradient path is
projected onto the box constraints and the quadratic function based on
the second-order approximation (L-BFGS) of the Hessian is minimized
along this path. All variables that are at their boundaries are fixed
and only the remaining free variables are optimized using the
second-order approximation. Any solution that is not within the bound
constraints is projected back onto the feasible set by a simple
min/max operation~\cite{MoralesN11}. Only in rare cases, a projected
point does not form a descent direction. In this case, instead of
using the projected point, one picks the best point that is still
feasible along the ray towards the solution of the quadratic
approximation. Then, a line search is performed for satisfying the
strong Wolfe conditions~\cite{Wolfe69,Wolfe71}. This condition is
necessary for ensuring convergence also in the non-convex case. The
line search also obliterates the need for a step length or learning
rate that is usually necessary in SGD, subgradient algorithms, or
Nesterov's method. Here, we use the line search proposed
in~\cite{MoreT94} which we enhanced by a simple backtracking line
search in case the solver enters a region where the function is not
defined.
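For illustration, the following sketch uses SciPy's off-the-shelf L-BFGS-B implementation in place of our own solver to minimize a smooth function under box constraints (illustration only, not GENO's generated code):
\begin{Verbatim}[frame=single]
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Smooth objective with analytic gradient.
    val = (x[0] - 3.0)**2 + 10.0 * (x[1] + 1.0)**2
    grad = np.array([2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)])
    return val, grad

# Box constraints 0 <= x_0 <= 2 and x_1 >= 0.
res = minimize(f, np.zeros(2), jac=True, method='L-BFGS-B',
               bounds=[(0, 2), (0, None)])
print(res.x)  # -> approximately [2., 0.]
\end{Verbatim}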
\paragraph{Solver for unconstrained non-smooth problems.}
Machine learning often entails non-smooth optimization problems, for
instance all problems that employ $\ell_1$-regularization. Proximal
gradient methods are a general technique for addressing such
problems~\cite{Proximal15}. Here, we pursue a different approach. All
non-smooth convex optimization problems that are allowed by our
modeling language can be written as $\min_x \{\max_i f_i(x)\}$ with
smooth functions $f_i(x)$~\cite{Nesterov05}. This class is flexible
enough to accommodate most of the non-smooth objective functions
encountered in machine learning. All problems in this class can be
transformed into constrained, smooth problems of the form
\[
\begin{array}{rl} \displaystyle
\min_{t, x} & t \\
\st& f_i(x) \leq t.
\end{array}
\]
The transformed problems can then be solved by the solver for
constrained, smooth optimization problems that we describe next.
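For instance, the non-smooth problem $\min_x \norm{Ax - b}_\infty = \min_x \max_i \left\{ A_i x - b_i,\, b_i - A_i x \right\}$ consists of smooth (here even affine) functions $f_i$ and thus becomes
\[
\begin{array}{rl} \displaystyle
\min_{t, x} & t \\
\st& A_i x - b_i \leq t \;\text{ and }\; b_i - A_i x \leq t \quad\text{for all } i.
\end{array}
\]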
\paragraph{Solver for smooth constrained problems.}
There are also quite a few options for solving smooth, constrained
problems, among them projected gradient methods, the alternating
direction method of multipliers
(ADMM)~\cite{boydADMM,Gabay1976,Glowinski1975}, and the augmented
Lagrangian approach~\cite{Hestenes69,Powell69}. For GENO, we decided
to follow the augmented Lagrangian approach, because this allows us to
(re-)use our solver for smooth, unconstrained problems directly. Also,
the augmented Lagrangian approach is more generic than ADMM. All
ADMM-type methods need a proximal operator that cannot be derived
automatically from the problem specification and a closed-form
solution is sometimes not easy to compute. Typically, one uses
standard duality theory for deriving the
prox-operator. In~\cite{Proximal15}, prox-operators are tabulated for
several functions.
The augmented Lagrangian method can be used for solving the following
general standard form of an abstract constrained optimization problem
\begin{equation} \label{eq:constrained}
\begin{array}{rl} \displaystyle
\min_{x} & f(x) \\
\st & h(x) = 0 \\
& g(x) \leq 0,
\end{array}
\end{equation}
where $x\in\mathbb{R}^n$, $f\colon \mathbb{R}^n \to \mathbb{R}$,
$h\colon\mathbb{R}^n\to\mathbb{R}^m$, $g\colon\mathbb{R}^n\to\mathbb{R}^p$ are
differentiable functions, and the equality and inequality constraints
are understood component-wise.
The augmented Lagrangian of Problem~\eqref{eq:constrained} is the
following function
\[
L_\rho (x, \lambda, \mu) = f(x) + \frac{\rho}{2}
\norm{h(x)+\frac{\lambda}{\rho}}^2 + \frac{\rho}{2} \norm{\left(g(x)
+ \frac{\mu}{\rho}\right)_+}^2,
\]
where $\lambda\in\mathbb{R}^m$ and $\mu\in\mathbb{R}_{\geq 0}^p$ are Lagrange
multipliers, $\rho >0$ is a constant, $\norm{\cdot}$ denotes the
Euclidean norm, and $(v)_+$ denotes $\max\{v, 0\}$. The Lagrange
multipliers are also referred to as dual variables. In principle, the
augmented Lagrangian is the standard Lagrangian of
Problem~\eqref{eq:constrained} augmented with a quadratic penalty
term. This term provides increased stability during the optimization
process, which can be seen, for example, in the case that
Problem~\eqref{eq:constrained} is a linear program. Note that
whenever Problem~\eqref{eq:constrained} is convex, i.e., $h$ is
affine and $g$ is convex in each component, the
augmented Lagrangian is also a convex function.
The Augmented Lagrangian Algorithm~\ref{algo:1} runs in iterations.
In each iteration it solves an unconstrained smooth optimization
problem. Upon convergence, it will return an approximate solution $x$
to the original problem along with an approximate solution of the
Lagrange multipliers for the dual problem. If
Problem~\eqref{eq:constrained} is convex, then the algorithm returns
the global optimal solution. Otherwise, it returns a local
optimum~\cite{Bertsekas99}. The update of the penalty parameter $\rho$ can be
omitted and the algorithm still converges~\cite{Bertsekas99}. However,
in practice it is beneficial to increase it depending on the progress
in satisfying the constraints~\cite{Birgin14}. If the infinity norm of
the constraint violation decreases by a factor less than $\tau=1/2$ in
one iteration, then $\rho$ is multiplied by a factor of two.
\begin{algorithm}[h!]
\caption{Augmented Lagrangian Algorithm}
\label{algo:1}
\begin{algorithmic}[1]
\STATE {\bfseries input:} instance of Problem~\ref{eq:constrained}
\STATE {\bfseries output:} approximate solution $x\in\mathbb{R}^{n},
\lambda\in\mathbb{R}^{p}, \mu\in\mathbb{R}_{\geq 0}^{m}$
\STATE initialize $x^0 = 0$, $\lambda^0 = 0$, $\mu^0 = 0$, and $\rho=1$
\REPEAT
\STATE $x^{k+1} :=\quad \argmin_{x}\, L_\rho(x, \lambda^k, \mu^k)$ \label{algo:x}
\STATE $\lambda^{k+1} :=\quad \lambda^k + \rho h(x^{k+1})$ \label{algo:lambda}
\STATE $\mu^{k+1} :=\quad \left(\mu^k + \rho g(x^{k+1})\right)_+$ \label{algo:mu}
\STATE update $\rho$ \label{algo:rho}
\UNTIL{convergence}
\RETURN $x^k, \lambda^k, \mu^k$
\end{algorithmic}
\end{algorithm}
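The following Python sketch illustrates Algorithm~\ref{algo:1} for inequality constraints only (a minimal illustration, not GENO's generated code; equality constraints are handled analogously via $\lambda$, and the inner minimization is delegated to SciPy's L-BFGS-B):
\begin{Verbatim}[frame=single]
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, g, x0, p, iters=30, rho=1.0, tau=0.5):
    # Minimize f(x) subject to g(x) <= 0, where g maps R^n -> R^p.
    x, mu, viol = x0.copy(), np.zeros(p), np.inf
    for _ in range(iters):
        L = lambda x: f(x) + rho / 2.0 * np.sum(
            np.maximum(g(x) + mu / rho, 0.0)**2)
        x = minimize(L, x, method='L-BFGS-B').x   # x-update
        mu = np.maximum(mu + rho * g(x), 0.0)     # mu-update
        new_viol = np.max(np.maximum(g(x), 0.0))
        if new_viol > tau * viol:                 # rho-update
            rho *= 2.0
        viol = new_viol
    return x, mu

# Example: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 <= 1.
f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
g = lambda x: np.array([x[0] + x[1] - 1.0])
x, mu = augmented_lagrangian(f, g, np.zeros(2), p=1)
print(x)  # -> approximately [0., 1.]
\end{Verbatim}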
\section{Limitations}
While GENO is very general and efficient, as we will demonstrate in
the experimental Section~\ref{sec:experiments}, it also has some
limitations that we discuss here. For small problems, i.e., problems
with only a few dozen variables, Newton-type methods with a direct
solver for the Newton system can be even faster. GENO also does not
target deep learning applications, where gradients do not need to be
computed fully but can be sampled.
Some problems can pose numerical problems, for instance problems
containing an $\exp$ operator might cause an
overflow/underflow. However, this is a problem that is faced by all
frameworks. It is usually addressed by introducing special operators
like \emph{logsumexp}.
Furthermore, GENO does not perform sanity checks on the provided
input. Any syntactically correct problem specification is accepted by
GENO as a valid input. For example, $\log(\det(xx^\top))$, where $x$
is a vector, is a valid expression. But the determinant of the outer
product will always be zero and hence, taking the logarithm will
fail. It lies within the responsibility of the user to make sure that
expressions are mathematically valid.
\section{Experiments} \label{sec:experiments}
We conducted a number of experiments to show the wide applicability
and efficiency of our approach. For the experiments we have chosen
classical problems that come with established well-engineered solvers
like logistic regression or elastic net regression, but also problems
and algorithms that have been published at NeurIPS and ICML only
within the last few years. The experiments cover smooth unconstrained
problems as well as constrained, and non-smooth problems. To prevent a
bias towards GENO, we always used the original code for the competing
methods and followed the experimental setup in the papers where these
methods have been introduced. We ran the experiments on standard data
sets from the LIBSVM data set repository, and, in some cases, on
synthetic data sets on which competing methods had been evaluated in
the corresponding papers.
Specifically, our experiments cover the following problems and
solvers: $\ell_1$- and $\ell_2$-regularized logistic regression,
support vector machines, elastic net regression, non-negative least
squares, symmetric non-negative matrix factorization, problems from
non-convex optimization, and compressed sensing. Among other
algorithms, we compared against a trust-region Newton method with
conjugate gradient descent for solving the Newton system, sequential
minimal optimization (SMO), dual coordinate descent, proximal methods
including ADMM and variants thereof, interior point methods,
accelerated and variance reduced variants of SGD, and Nesterov's
optimal gradient descent. Please refer to the appendix
for more details on the solvers and GENO models.
Our test machine was equipped with an eight-core Intel Xeon
CPU~E5-2643 and 256GB RAM. As software environment we used Python~3.6,
along with NumPy~1.16, SciPy~1.2, and scikit-learn~0.20. In some cases
the original code of competing methods was written and run in
MATLAB~R2019. The solvers generated by GENO spent between $80\%$ and
$99\%$ of their time on evaluating function values and
gradients. Here, these evaluations essentially reduce to evaluating
linear algebra expressions. Since all libraries are linked against the
Intel MKL, running times of the GENO solvers are essentially the same
in both environments, Python and MATLAB, respectively.
\subsection{Regularization Path for $\ell_1$-regularized Logistic Regression}
\begin{figure*}[t!]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.48\textwidth]{logRegL1_SAGA.png} &
\includegraphics[width=0.48\textwidth]{logRegL1_GENO.png} \\
\includegraphics[width=0.48\textwidth]{logRegL1_CVXPY.png} &
\includegraphics[width=0.48\textwidth]{logRegL1_LIBLINEAR_bias1.png} &
\end{tabular}
\caption{The regularization path of $\ell_1$-regularized logistic
regression for the Iris data set using SAGA, GENO, CVXPY, and
LIBLINEAR.}
\label{fig:logRegL1}
\end{figure*}
Computing the regularization path of the $\ell_1$-regularized logistic
regression problem~\cite{Cox58} is a classical machine learning
problem, and only boring at first glance. The problem is well suited
for demonstrating the importance of both aspects of our approach,
namely flexibility and efficiency. As a standard problem it is covered
in scikit-learn. The scikit-learn implementation features the SAGA
algorithm~\cite{DefazioBL14} for computing the whole regularization
path that is shown in Figure~\ref{fig:logRegL1}. This figure can also
be found on the scikit-learn
website~\footnote{\href{{https://scikit-learn.org/stable/auto_examples/linear_model/plot_logistic_path.html}}{https://scikit-learn.org/stable/auto\_examples/linear\_model/plot\_logistic\_path.html}}. However,
when using GENO, the regularization path looks different, see also
Figure~\ref{fig:logRegL1}. Checking the objective functions values
reveals that the precision of the SAGA algorithm is not enough for
tracking the path faithfully. GENO's result can be reproduced by using
CVXPY except for one outlier at which CVXPY did not compute the
optimal solution. LIBLINEAR~\cite{FanCHWL08,ZhuangJYL18} can also be
used for computing the regularization path, but also fails to follow
the exact path. This can be explained as follows: LIBLINEAR also does
not compute optimal solutions, but more importantly, in contrast to
the original formulation, it penalizes the bias for algorithmic
reasons. Thus, changing the problem slightly can lead to fairly
different results.
CVXPY, like GENO, is flexible and precise enough to accommodate the
original problem formulation and to closely track the regularization
path. But it is not as efficient as GENO. On the problem used in
Figure~\ref{fig:logRegL1} SAGA takes 4.3 seconds, the GENO solver
takes 0.5 seconds, CVXPY takes 13.5 seconds, and LIBLINEAR takes 0.05
seconds but for a slightly different problem and insufficient
accuracy.
\subsection{$\ell_2$-regularized Logistic Regression}
Logistic regression is probably the most popular linear, binary
classification method. It is given by the following unconstrained
optimization problem with a smooth objective function
\[
\begin{array}{rl} \displaystyle
\min_{w} & \frac{\lambda}{2} \norm{w}_2^2 + \frac{1}{m} \sum_i
\log(\exp(-y_i X_i w) + 1),
\end{array}
\]
where $X\in\mathbb{R}^{m\times n}$ is a data matrix, $y\in\{-1, +1\}^m$ is
a label vector, and $\lambda\in\mathbb{R}$ is the regularization parameter.
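For reference, the objective and its gradient, which GENO derives symbolically from the problem specification, can be written out by hand as follows (our own sketch, not GENO's generated code):
\begin{Verbatim}[frame=single]
import numpy as np

def logreg_fg(w, X, y, lam):
    # f(w) = lam/2 ||w||^2 + mean(log(exp(-y * Xw) + 1))
    # and its gradient; y is assumed to be in {-1, +1}.
    z = -y * (X @ w)
    f = lam / 2.0 * (w @ w) + np.mean(np.logaddexp(0.0, z))
    g = lam * w - X.T @ (y / (1.0 + np.exp(-z))) / len(y)
    return f, g
\end{Verbatim}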
Since it is a classical problem there exist many well-engineered
solvers for $\ell_2$-regularized logistic regression. The problem also
serves as a testbed for new algorithms. We compared GENO to the
parallel version of LIBLINEAR and a number of recently developed
algorithms and their implementations, namely
Point-SAGA~\cite{Defazio16}, SDCA~\cite{Shalev-Shwartz13}, and
catalyst SDCA~\cite{LinMH15}. The latter algorithms implement some
form of SGD. Thus their running time heavily depends on the values for
the learning rate (step size) and the momentum parameter in the case
of accelerated SGD. The best parameter setting often depends on the
regularization parameter and the data set. We have used the code
provided by~\cite{Defazio16} and the parameter settings therein.
For our experiments we set the regularization parameter
$\lambda=10^{-4}$ and used real world data sets that are commonly used
in experiments involving logistic regression. GENO converges as
rapidly as LIBLINEAR and outperforms any of the recently published
solvers by a good margin, see Figure~\ref{fig:lrl2}.
\begin{figure*}[h!]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.3\textwidth]{a9a.pdf} &
\includegraphics[width=0.3\textwidth]{{covtype.libsvm.binary.scale}.pdf} &
\includegraphics[width=0.3\textwidth]{mushrooms.pdf} \\
\\
\includegraphics[width=0.3\textwidth]{{rcv1_test.binary}.pdf} &
\includegraphics[width=0.3\textwidth]{real-sim.pdf} &
\includegraphics[width=0.3\textwidth]{{webspam_wc_normalized_unigram.svm}.pdf}
\end{tabular}
\caption{Running times for different solvers on the
$\ell_2$-regularized logistic regression problem.}
\label{fig:lrl2}
\end{figure*}
On substantially smaller data sets we also compared GENO to CVXPY with
both the ECOS~\cite{Ecos1} and the SCS solver~\cite{scs}.
As can be seen from Table~\ref{tab:lrl2}, GENO is orders of magnitude
faster.
\begin{table}[h!]
\centering
\caption{Running times in seconds for different general purpose
solvers on small instances of the $\ell_2$-regularized logistic
regression problem. The approximation error is close to $10^{-6}$
for all solvers.}
\label{tab:lrl2}
\begin{tabular}{l*{8}{r}}
\toprule
\multirow{2}{*}{Solver} & \multicolumn{7}{c}{Data sets} \\
\cmidrule{2-8}
& heart & ionosphere & breast-cancer & australian & diabetes & a1a & a5a \\
\midrule
GENO & 0.005 & 0.013 & 0.004 & 0.014 & 0.006 & 0.023 & 0.062\\
ECOS & 1.999 & 2.775 & 5.080 & 5.380 & 5.881 & 12.606 & 57.467\\
SCS & 2.589 & 3.330 & 6.224 & 6.578 & 6.743 & 16.361 & 87.904\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Support Vector Machines}
Support Vector Machines (SVMs)~\cite{CortesV95} have been studied
intensively and are widely used, especially in combination with
kernels~\cite{SchoelkopfS01}. They remain popular, as is indicated by
the still rising citation count of the widely used and heavily-cited
solver LIBSVM~\cite{ChangL01}. The dual formulation of an SVM is given as the
following quadratic optimization problem
\[
\begin{array}{rl}
\min_{a} & \frac{1}{2}(a\odot y)^\top K (a\odot y) - \norm{a}_1\\
\st & y^\top a = 0\\
& 0 \leq a\leq c,
\end{array}
\]
where $K\in\mathbb{R}^{m\times m}$ is a kernel matrix, $y\in\{-1, +1\}^m$
is a binary label vector, $c\in\mathbb{R}$ is the regularization parameter,
and $\odot$ is the element-wise multiplication. While the SVM problem
with a kernel can also be solved in the primal~\cite{chapelle2007}, it
is traditionally solved in the dual. We use a Gaussian kernel, i.e.,
$K_{ij} =\exp\left(-\gamma\norm{X_i - X_j}_2^2\right)$ and standard
data sets. We set the bandwidth parameter $\gamma=1/2$, which
corresponds roughly to the median of the pairwise data point distances,
and set $c=1$. Table~\ref{tab:svm} shows that the solver generated by
GENO is as efficient as LIBSVM which has been maintained and improved
over the last 15 years. Both solvers outperform general purpose
approaches like CVXPY with OSQP~\cite{osqp}, SCS~\cite{scs},
Gurobi~\cite{gurobi}, or Mosek~\cite{mosek} by a few orders of
magnitude.
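For completeness, the Gaussian kernel matrix used in these experiments can be computed in a few lines (a sketch; all solvers receive the same $K$ and $y$ as input):
\begin{Verbatim}[frame=single]
import numpy as np
from scipy.spatial.distance import pdist, squareform

def gaussian_kernel(X, gamma=0.5):
    # K_ij = exp(-gamma * ||X_i - X_j||_2^2)
    return np.exp(-gamma * squareform(pdist(X, 'sqeuclidean')))
\end{Verbatim}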
\begin{table}[h]
\centering
\caption{Running times in seconds for solving a dual
Gaussian-kernelized SVM. The optimality gap is close to $10^{-4}$
for all solvers and data sets. Missing entries in the table
indicate that the solver did not finish within one hour. }
\label{tab:svm}
\begin{tabular}{l*{8}{r}}
\toprule
\multirow{2}{*}{Solver} & \multicolumn{8}{c}{Datasets} \\
\cmidrule{2-9}
& ionosphere & australian & diabetes & a1a & a5a & a9a & w8a & cod-rna \\
\midrule
GENO & 0.009 & 0.024 & 0.039 & 0.078 & 1.6 & 30.0 & 25.7 & 102.1\\
LIBSVM & 0.005 & 0.010 & 0.009 & 0.088 & 1.0 & 18.0 & 78.6 & 193.1\\
SCS & 0.442 & 1.461 & 3.416 & 11.707 & 517.5 & -& - & -\\
OSQP & 0.115 & 0.425 & 0.644 & 3.384 & 168.2 & - & - & -\\
Gurobi & 0.234 & 0.768 & 0.992 & 4.307 & 184.4 & - & - &- \\
Mosek & 0.378 & 0.957 & 1.213 & 6.254 & 152.7 & - & - & - \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Elastic Net}
Elastic net regression~\cite{ZouH05} has also been studied intensively
and is used mainly for microarray data classification and gene
selection. Given some data $X\in\mathbb{R}^{m\times n}$ and a response
$y\in\mathbb{R}^{m}$, elastic net regression seeks to minimize
\[
\frac{1}{2m}\norm{Xw-y}^2_2 + \alpha\left( \lambda\norm{w}_1 + \frac{1-\lambda}{2}\norm{w}_2^2\right),
\]
where $\alpha$ and $\lambda$ are the corresponding elastic net
regularization parameters. The most popular solver is glmnet, a dual
coordinate descent approach that has been implementated in
Fortran~\cite{FriedmanHT09}. In our experiments, we follow the same
setup as in~\cite{FriedmanHT09}. We generated Gaussian data
$X\in\mathbb{R}^{m\times n}$ with $m$ data points and $n$ features. The
outcome values $y$ were generated by
\[
y = \sum_{j=1}^n X_j \beta_j + k\cdot z,
\]
where $\beta_j = (-1)^j \exp(-j/10)$, $z\sim {\cal{N}} (0, 1)$, and
$k$ is chosen such that the signal-to-noise ratio is 3. We varied the
number of data points $m$ and the number of features $n$. The results
are shown in Table~\ref{tab:elasticnet}. It can be seen that the
solver generated by GENO is as efficient as glment and orders of
magnitude faster than comparable state-of-the-art general purpose
approaches like CVXPY coupled with ECOS, SCS, Gurobi, or Mosek. Note,
that the OSQP solver could not be run on this problem since CVXPY
raised the error that it cannot convert this problem into a QP.
\begin{table}[h]
\centering
\caption{Running times for the elastic net regression problem in
seconds. Missing entries in the table indicate that the solver did
not finish within one hour. The optimality gap is about $10^{-8}$
for all solvers which is the standard setting for glmnet.}
\label{tab:elasticnet}
\begin{tabular}{rr*{6}{r}}
\toprule
\multirow{2}{*}{m} & \multirow{2}{*}{n} & \multicolumn{6}{c}{Solvers} \\
\cmidrule{3-8}
& & GENO & glmnet & ECOS & SCS & Gurobi & Mosek \\
\midrule
1000 & 1000 & 0.11 & 0.10 & 43.27 & 2.33 & 21.14 & 1.77 \\
2000 & 1000 & 0.14 & 0.08 & 202.04 & 9.24 & 58.44 & 3.52 \\
3000 & 1000 & 0.18 & 0.08 & 513.78 & 22.86 & 114.79 & 5.38 \\
4000 & 1000 & 0.21 & 0.09 & - & 38.90 & 185.79 & 7.15 \\
5000 & 1000 & 0.27 & 0.11 & - & 13.88 & 151.08 & 8.69 \\
\midrule
1000 & 5000 & 1.74 & 0.62 & - & 28.69 & - & 13.06 \\
2000 & 5000 & 1.49 & 1.41 & - & 45.79 & - & 27.69 \\
3000 & 5000 & 1.58 & 2.02 & - & 81.83 & - & 50.99 \\
4000 & 5000 & 1.24 & 1.88 & - & 135.94 & - & 67.60 \\
5000 & 5000 & 1.41 & 1.99 & - & 166.60 & - & 71.92 \\
\midrule
5000 & 10000 & 4.11 & 4.75 & - & - & - & - \\
7000 & 10000 & 4.76 & 5.52 & - & - & - & - \\
10000 & 10000 & 4.66 & 3.89 & - & - & - & - \\
50000 & 10000 & 13.97 & 6.34 & - & - & - & - \\
70000 & 10000 & 18.82 & 11.76 & - & - & - & - \\
100000 & 10000 & 23.38 & 23.42 & - & - & - & - \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Non-negative Least Squares}
Least squares is probably the most widely used regression method.
Non-negative least squares is an extension that requires the solution to
be non-negative. It is given as the following optimization problem
\[
\begin{array}{rl}
\min_{x} & \norm{Ax-b}_2^2 \\
\st & x\geq 0,
\end{array}
\]
where $A\in\mathbb{R}^{m\times n}$ is a given design matrix and
$b\in\mathbb{R}^m$ is the response vector. Since non-negative least
squares has been studied intensively, there is a plenitude of solvers
available that implement different optimization methods. An overview
and comparison of the different methods can be found
in~\cite{Slawski13}. Here, we use the accompanying code described
in~\cite{Slawski13} for our comparison. We ran two sets of
experiments, similarly to the comparisons in~\cite{Slawski13}, where
it was shown that the different algorithms behave quite differently on
these problems. For experiment (i), we generated random data
$A\in\mathbb{R}^{2000\times 6000}$, where the entries of $A$ were sampled
uniformly at random from the interval $[0, 1]$, and a sparse vector
$x\in\mathbb{R}^{6000}$ with a sparsity of $0.01$ whose non-zero entries
were also sampled uniformly from $[0, 1]$. The response was
then generated as $b = \sqrt{0.003}\cdot Ax + 0.003 \cdot z$,
where $z\sim {\cal{N}}(0, 1)$. For experiment (ii),
$A\in\mathbb{R}^{6000\times 3000}$ was drawn from a Gaussian distribution
and $x$ had a sparsity of $0.1$. The response was generated as
$b=\sqrt{1/6000}\cdot Ax+0.003\cdot z$, where $z\sim {\cal{N}}(0,
1)$. The differences between the two experiments are the following:
(1) The Gram matrix $A^\top A$ is singular in experiment (i) and
regular in experiment (ii), (2) The design matrix $A$ has isotropic
rows in experiment (ii) which does not hold for experiment (i), and
(3) $x$ is significantly sparser in (i) than in (ii). We compared the
solver generated by GENO with the following approaches: the classical
Lawson-Hanson algorithm~\cite{lawson95}, which employs an active set
strategy, a projected gradient descent algorithm combined with an
Armijo-along-projection-arc line search~\cite[Ch~2.3]{Bertsekas99}, a
primal-dual interior point algorithm that uses a conjugate gradient
descent algorithm~\cite{BoydV04} with a diagonal preconditioner for
solving the Newton system, a subspace Barzilai-Borwein
approach~\cite{kim13}, and Nesterov's accelerated projected gradient
descent~\cite{Nesterov83}. Figure~\ref{fig:nnls} shows the results for
both experiments. Note that the Barzilai-Borwein approach with
standard parameter settings diverged on experiment (i) and stopped
making progress on experiment (ii). While the other approaches
vary in running time depending on the problem, the experiments show
that the solver generated by GENO is always among the fastest compared
to the other approaches.
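As an illustration, the following Python sketch reconstructs a scaled-down version of experiment (i) and solves it with SciPy's implementation of the Lawson-Hanson active-set method; the reduced dimensions are our own choice so that this baseline finishes quickly.
\begin{verbatim}
# Hypothetical, scaled-down reconstruction of experiment (i), solved
# with SciPy's Lawson-Hanson NNLS as a simple active-set baseline.
import numpy as np
from scipy.optimize import nnls

m, n, sparsity = 200, 600, 0.01              # paper uses 2000 x 6000
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(m, n))
x_true = np.zeros(n)
support = rng.choice(n, size=int(sparsity * n), replace=False)
x_true[support] = rng.uniform(0.0, 1.0, size=support.size)
b = np.sqrt(0.003) * A @ x_true + 0.003 * rng.standard_normal(m)

x_hat, res_norm = nnls(A, b)                 # active-set solver
print(res_norm)
\end{verbatim}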
We provide the final running times of the general purpose solvers in
Table~\ref{tab:nnls} since obtaining intermediate solutions is not
possible for these solvers. Table~\ref{tab:nnls} also provides the
function values of the individual solvers. It can be seen that, while
the SCS solver is considerably faster than the ECOS solver, the solution
computed by the SCS solver is not optimal in experiment (i). The ECOS
solver provides a solution with the same accuracy as GENO but at a
running time that is a few orders of magnitude larger.
\begin{figure*}[h]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.4\textwidth]{result_0.pdf} &
\quad\quad &
\includegraphics[width=0.4\textwidth]{result_1.pdf}
\end{tabular}
\caption{Running times for non-negative least squares
regression. The figure on the left shows the running times for the
experiment (i) and the figure on the right the running times for
experiment (ii). The algorithms are projected gradient descent
(pg), Lawson-Hanson (LH), subspace Barzilai-Borwein (BB),
primal-dual interior point method (pd), Nesterov's accelerated
projected gradient descent (Nest), and GENO.}
\label{fig:nnls}
\end{figure*}
\begin{table}[h]
\centering
\caption{Running times and function values for the non-negative
least squares problem.}
\label{tab:nnls}
\begin{tabular}{rrlrrrrr}
\toprule
m & n & & GENO & ECOS & SCS & Gurobi & Mosek \\
\midrule
\multirow{2}{*}{2000} & \multirow{2}{*}{6000} & time & 4.8 & 689.7 & 70.4 & 187.3 & 24.9 \\
&& fval & 0.01306327 & 0.01306327 & 0.07116707 & 0.01306330 & 0.01306343 \\
\midrule
\multirow{2}{*}{6000} & \multirow{2}{*}{3000} & time & 0.3 & 3751.3 & 275.5 & 492.9 & 58.4 \\
&& fval & 0.03999098 & 0.03999098 & 0.04000209 & 0.03999100 & 0.03999114 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Symmetric Non-negative Matrix Factorization}
Non-negative matrix factorization (NMF) and its many variants are
standard methods for recommender systems~\cite{adomavicius2005} and
topic modeling~\cite{blei2003,hofmann1999}. When both factor matrices
are required to be identical, it is known as symmetric NMF. Symmetric
NMF is used for clustering problems~\cite{kuang2015} and is known to be
equivalent to kernel $k$-means clustering~\cite{ding2005}. Given a
target matrix $T\in\mathbb{R}^{n\times n}$, symmetric NMF is given as the
following optimization problem
\[
\min_{U}\: \norm{T-UU^\top}_{\mbox{\scriptsize Fro}}^2 \quad\st\: U \geq 0,
\]
where $U\in\mathbb{R}^{n\times k}$ is a non-negative factor matrix of rank
$k$. Note that the problem cannot be modeled and solved by CVXPY since
it is non-convex. It has been addressed recently in~\cite{ZhuLLL18} by
two new methods. Both methods are symmetric variants of the
alternating non-negative least squares (ANLS)~\cite{kim2008} and the
hierarchical ALS (HALS)~\cite{cichocki2009} algorithms.
We compared GENO to both methods. For the comparison we used the code
and the same experimental setup as in~\cite{ZhuLLL18}. Random
positive-semidefinite target matrices $T=\hat U\hat U^\top$ of
different sizes were computed from random matrices $\hat
U\in\mathbb{R}^{n\times k}$ with absolute-value Gaussian entries. As can be
seen in Figure~\ref{fig:SNMF}, GENO outperforms both methods (SymANLS
and SymHALS) by a large margin.
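For reference, the following Python sketch applies plain projected gradient descent to the symmetric NMF objective above; the step-size rule and iteration budget are illustrative assumptions, and the sketch is neither the SymANLS nor the SymHALS method used in the comparison.
\begin{verbatim}
# Projected-gradient sketch of min ||T - U U^T||_F^2 s.t. U >= 0;
# step size and iteration count are illustrative assumptions only.
import numpy as np

n, k = 200, 10
rng = np.random.default_rng(0)
U_hat = np.abs(rng.standard_normal((n, k)))
T = U_hat @ U_hat.T                       # random PSD target, as above

U = np.abs(rng.standard_normal((n, k)))
step = 1.0 / (4.0 * np.linalg.norm(T, 2)) # crude Lipschitz-style step
for _ in range(500):
    grad = 4.0 * (U @ U.T - T) @ U        # gradient for symmetric T
    U = np.maximum(U - step * grad, 0.0)  # project onto U >= 0
print(np.linalg.norm(T - U @ U.T, 'fro'))
\end{verbatim}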
\begin{figure*}[h]
\centering
\begin{tabular}{lcr}
\includegraphics[width=0.31\textwidth]{{result_snmf.mat}.pdf} &
\includegraphics[width=0.31\textwidth]{{result_snmf2.mat}.pdf}
\includegraphics[width=0.31\textwidth]{{result_snmf3.mat}.pdf}
\end{tabular}
\caption{Convergence speed on the symmetric non-negative matrix
factorization problem for different parameter values. On the left,
the times for $n = 50, k=5$, in the middle for $n=500, k=10$, and
on the right for $n=2000, k=15$.}
\label{fig:SNMF}
\end{figure*}
\subsection{Non-linear Least Squares}
GENO makes use of a quasi-Newton solver which approximates the Hessian
by the weighted sum of the identity matrix and a positive
semidefinite, low-rank matrix. One could assume that this does not
work well when the true Hessian is indefinite, i.e., in the
non-convex case. Hence, we also conducted some experiments on
non-convex problems. We followed the same setup and ran the same
experiments as in~\cite{LiuLWYY18} and compared to state-of-the-art
solvers that were specifically designed to cope with non-convex
problems. Specifically, we considered the non-linear least squares
problem, i.e., we seek to minimize the function $l(x) =
\norm{\sigma(Ax) - b}_2^2$, where $A\in\mathbb{R}^{m\times n}$ is a data
matrix, $b\in\{0, 1\}^m$ is a binary label vector, and $\sigma(s) =
1/(1+\exp(-s))$ is the sigmoid function. Figure~\ref{fig:nonlinearLS}
shows the convergence speed for the data sets \texttt{w1a} and
\texttt{a1a}. The state-of-the-art specialized solvers that were
introduced in~\cite{LiuLWYY18} are \mbox{S-AdaNCG}, which is a
stochastic adaptive negative curvature and gradient algorithm, and
\mbox{AdaNCD-SCSG}, an adaptive negative curvature descent algorithm
that uses SCSG~\cite{lei2017} as a subroutine. The experiments show
that GENO outperforms both algorithms by a large margin. In fact, on
the data set \texttt{a1a}, both algorithms did not converge to the
optimal solution with standard parameter settings. Again, this problem
cannot be modeled and solved by CVXPY.
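A minimal Python sketch of this objective, minimized with SciPy's \texttt{L-BFGS-B} (a quasi-Newton method of the same family as GENO's solver), is given below; the random data dimensions are placeholders rather than the \texttt{w1a} or \texttt{a1a} sets.
\begin{verbatim}
# Sketch of the non-convex sigmoid least-squares objective; random
# placeholder data, minimized with a quasi-Newton method (L-BFGS-B).
import numpy as np
from scipy.optimize import minimize

m, n = 500, 50
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))
b = (rng.uniform(size=m) < 0.5).astype(float)   # labels in {0, 1}

def loss_and_grad(x):
    s = 1.0 / (1.0 + np.exp(-(A @ x)))          # sigmoid(Ax)
    r = s - b
    grad = A.T @ (2.0 * r * s * (1.0 - s))      # chain rule
    return r @ r, grad

res = minimize(loss_and_grad, np.zeros(n), jac=True, method='L-BFGS-B')
print(res.fun)
\end{verbatim}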
\begin{figure*}[h]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.4\textwidth]{{result.mat}.pdf} &
\quad\quad &
\includegraphics[width=0.4\textwidth]{{result2.mat}.pdf}
\end{tabular}
\caption{Running times for the non-linear least squares problem. The
figure on the left shows the running times for the data set
\texttt{w1a} and on the right for the data set \texttt{a1a}.}
\label{fig:nonlinearLS}
\end{figure*}
\subsection{Compressed Sensing}
In compressed sensing, one tries to recover a sparse signal from a
number of measurements~\cite{CandesT06,donoho2006}. See the recent
survey~\cite{Rani18} for an overview on this topic. The problem can be
reduced to finding the solution to an underdetermined system of linear
equations with minimal $\ell_1$-norm. Hence, it can be written as the
following optimization problem
\begin{equation} \label{eq:cs}
\begin{array}{rl}
\min_{x} & \norm{x}_1\\
\st & Ax= b,
\end{array}
\end{equation}
where $A\in\mathbb{R}^{m\times n}$ is a measurement matrix and
$b\in\mathbb{R}^m$ is the vector of $m$ measurements. Note, that this
problem is a constrained problem with a non-differentiable objective
function. It is known that when matrix $A$ has the restricted isometry
property and the true signal $x^*$ is sparse, then
Problem~\eqref{eq:cs} recovers the true signal with high probability,
if the dimensions $m$ and $n$ are chosen
properly~\cite{candes2005}. Considerable progress has been made
in designing algorithms that come with convergence
guarantees~\cite{chin2013,christiano2011}. Very recently,
in~\cite{Ene19} a new and efficient algorithm based on the iterative
reweighted least squares (IRLS) technique has been proposed. Compared
to previous approaches, their algorithm is simple and achieves the
state-of-the-art convergence guarantees for this problem.
We used the same setup and random data set as in~\cite{Ene19} and ran
the same experiment. The measurement matrix $A\in\mathbb{R}^{150\times
200}$ was generated randomly such that all rows are
orthogonal. Then, a sparse signal $x^*$ with only 15 non-zero entries
was chosen and the corresponding measurement vector $b$ was
computed via $b=Ax^*$. We compared to their IRLS algorithm with the
long-steps update scheme. Figure~\ref{fig:cs} shows the convergence
speed towards the optimal function value as well as the
convergence towards feasibility. It can be seen that the solver
generated by GENO outperforms the specialized, state-of-the-art IRLS
solver by a few orders of magnitude.
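For concreteness, the following Python sketch sets up the experiment and solves Problem~\eqref{eq:cs} via the standard split $x = x^+ - x^-$, which turns it into a linear program; this LP reduction is a textbook baseline, not the IRLS method of~\cite{Ene19} nor GENO's solver.
\begin{verbatim}
# Basis pursuit as an LP via the split x = x+ - x-; a standard
# baseline reduction, not the IRLS method compared above.
import numpy as np
from scipy.optimize import linprog

m, n, s = 150, 200, 15
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
A = Q.T                                     # 150 x 200, orthogonal rows
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true

res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_hat - x_true))       # near-exact recovery
\end{verbatim}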
\begin{figure*}[t]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.4\textwidth]{{irls_fvalerr}.pdf} &
\quad\quad &
\includegraphics[width=0.4\textwidth]{{irls_constraint}.pdf}
\end{tabular}
\caption{Running times for the compressed sensing problem. The
figure on the left shows the convergence to the optimal objective
function value and the figure on the right shows the norm of the
constraint violation of the iterate $x^{(k)}$, i.e.,
$\norm{Ax^{(k)}-b}_2$.}
\label{fig:cs}
\end{figure*}
\ignore{
\subsection{Principal Component Analysis and Non-negative PCA}
Principal component analysis is probably the most widely used method for dimension reduction. It reduces to computing the largest eigenvectors of a covariance matrix. The most popular methods are the Lanczos method~\cite{lanczos50} and power iterations. A recently published solver can also deal with non-negativity constraints. Here, we compare GENO with these state-of-the-art approaches for computing the leading eigenvector and for computing the largest non-negative eigenvector. It can be seen that GENO is as efficient as the Lanczos method, and considerably faster than the often used power method. It is also considerably faster than the state-of-the-art specialized solver for non-negative PCA~\cite{LiL18}.
\ignore{
\subsection{Joint Distribution in Graphical Models}
Given the marginals of two graphical models one is interested in the joint distribution. In general, there are infinitely many distributions that satisfy both marginal distributions. Hence, it is reasonable to pick the one with the largest entropy. This problem has been studied very recently in~\cite{}. It reduces to a constrained optimization problem. We have conducted the same experiments as in~\cite{}. GENO is faster than the state-of-the-art solver that has been specifically designed to solve this problem.
\subsection{Hyperbolic Embedding}
It has been observed that embedding data that has a hierarchical structure into hyperbolic space can be done more efficiently, i.e., with a lower distortion, than embedding it into Euclidean space~\cite{}. It has become an active area of research within the last few years~\cite{}. It has been shown that embedding a tree into hyperbolic space can be done greedily. Based on this result, embedding a graph into hyperbolic space can also be done combinatorially without turning it into an optimization problem. However, this is no longer true when considering a distance metric. Suppose one is given a matrix with pairwise distances of data items. One would like to map these items to points in low-dimensional space. In the case of Euclidean space this problem is known as Multidimensional Scaling (MDS), in the case of hyperbolic space it is known as Hyperbolic Multidimensional Scaling (H-MDS)~\cite{}. If the data is noise-free, i.e., it can be embedded into low-dimensional space without any distortion, then it has been shown that this problem can be reduced to Euclidean MDS~\cite{SalaSGR18}. However, in practice this case never happens. Also, assuming that the data can be embedded without any distortion is already a strong assumption in the Euclidean case from a computational complexity point of view. If the data can be embedded into Euclidean space without any distortion then metric MDS can be solved in polynomial time via classic MDS. Otherwise, it is believed that metric MDS is NP-hard.~\footnote{There are many references in the literature that claim MDS has been shown to be NP-hard in the general case but such a proof cannot be found in any of them.}
Hence, we conduct an experiment of embedding data into Euclidean and into hyperbolic space.
\subsection{Shallow Net}
\begin{figure*}[t]
\centering
\includegraphics[width=0.98\textwidth]{figures/shallowNet.png} \\
\caption{Running times for learning a shallow net (example taken from Ali Rahimi's NIPS~2017 test of time award talk).}
\label{fig:shallowNet}
\end{figure*}
}
\subsection{Packing and Covering Linear Programs}
In a different project, we needed to find an arboreal matching of phylogenetic trees. This problem could be reduced to a number of packing and covering linear programs (LPs). A packing LP is the following optimization problem:
\[
\begin{array}{rl}
\min_{x} & c^\top x\\
\st & Ax\leq b\\
& x \geq 0,
\end{array}
\]
where $A\in\mathbb{R}^{m\times n}_{\geq 0}$ is a non-negative matrix, and $b\in\mathbb{R}^{m}_{\geq 0}$ and $c\in\mathbb{R}^{n}_{\geq 0}$ are non-negative vectors. The dual of this problem is called a covering LP. Since these are standard LPs, we used the standard LP solver CPLEX. However, on medium-sized problems, i.e., $m=25{,}000$ and $n=5{,}500$, CPLEX took too long. Even a specialized solver that solves packing LPs on the GPU could not handle this input. However, the solver generated by GENO was able to solve all instances in less than one minute while consuming about 250MB of RAM.
}
\ignore{
\begin{table}[h]
\centering
\caption{Summary of all data sets that were used in the experiments. All data sets were obtained from the LIBSVM data set repository.}
\label{tab:datasets}
\begin{tabular}{llll}
\toprule
Name & Samples ($m$) & Features ($n$) \\
\midrule
heart & 270 & 13 & \\
ionosphere & 351 & 34 \\
breast-cancer & 683 & 10 \\
australian & 690 & 14 \\
diabetes & 768 & 8 \\
a1a & 1605 & 123 \\
a5a & 6414 & 123 \\
a9a & 32561 & 123 \\
mushrooms & 8124 & 112 \\
w8a & 49749 & 300 \\
cod-rna & 59535 & 8 \\
real-sim & 72308 & 20958 \\
webspam & 350000 & 254 \\
covtype & 581012 & 54 \\
rcv1\_test & 677399 & 47236 \\
\bottomrule
\end{tabular}
\end{table}
}
\section{Conclusions}
\label{sec:conclusions}
While other fields of applied research that heavily rely on
optimization, like operations research, optimal control and deep
learning, have adopted optimization frameworks, this is not the case
for classical machine learning. Instead, classical machine learning
methods are still mostly accessed through toolboxes like scikit-learn,
Weka, or MLlib. These toolboxes provide well-engineered solutions for
many of the standard problems, but lack the flexibility to adapt the
underlying models when necessary. We attribute this state of affairs
to a common belief that efficient optimization for classical machine
learning needs to exploit the problem structure. Here, we have
challenged this belief. We have presented GENO, the first general
purpose framework for problems from classical machine learning. GENO
combines an
easy-to-read modeling language with a general purpose
solver. Experiments on a variety of problems from classical machine
learning demonstrate that GENO is as efficient as established
well-engineered solvers and often outperforms recently published
state-of-the-art solvers by a good margin. It is as flexible as
state-of-the-art modeling language and solver frameworks, but
outperforms them by a few orders of magnitude.
\section*{Acknowledgments}
S\"oren Laue has been funded by Deutsche Forschungsgemeinschaft (DFG) under grant LA~2971/1-1.
\bibliographystyle{plain}
\subsection{Likelihood Approach} \label{subsec:methodology}
We interpret the 1D Ly$\alpha$-flux power spectrum using a likelihood built around three categories of parameters that are floated in the minimization procedure. The first category describes the cosmological model in the case of $\Lambda$WDM assuming a flat Universe (first five lines of Tab.~\ref{tab:params}). The second category models the astrophysics within the IGM: the relationship between the gas temperature and its density (last four lines of Tab.~\ref{tab:params}), plus two amplitudes for the correlated absorption of Ly-$\alpha$ and \ion{Si}{iii}, or Ly-$\alpha$ and \ion{Si}{ii}, visible as oscillations in Fig.~\ref{fig:DR9fPS}. The redshift evolution of $T_0(z)$ and $\gamma(z)$, parameters that describe the temperature of the IGM, is modeled with power laws proportional to $[(1+z)/4]^{\rm \eta}$, where the logarithmic slope $\eta$ is unique over the whole redshift range in the case of $\gamma(z)$ but allowed to take different values above and below a $z=3$ break for $T_0(z)$.
Lastly, in order to describe the imperfections of both our measurement of the 1D power spectrum (data) and its modeling by our suite of N-body and SPH simulations, we introduce a third category where we group all nuisance parameters that are fitted simultaneously with the parameters of interest (first two categories). This third category is implemented directly in the likelihood through well-chosen analytical forms where free parameters give flexibility to each specific item. These parameters are described in Sec.~\ref{subsec:nuisance}.
The $\chi^2$ minimization procedure (or, equivalently, likelihood maximization) is done using the MINUIT package~\cite{Minuit}. Our determination of the coverage intervals of unknown parameters is based on the `classical' confidence level method originally defined by \cite{Neyman1937}. We first determine the $\chi^2$ minimum $\chi^2_0$, leaving all $n$ parameters (from the above three categories) free. To set a confidence level (CL) on any individual parameter $\theta_i$, we then scan the variable $\theta_i$: for each fixed value of $\theta_i$, we again minimize $\chi^2$ with the remaining $n-1$ free parameters. The $\chi^2$ difference, $\Delta \chi^2(\theta_i)$, between the new minimum and $\chi^2_0$, allows us to compute the CL on the variable, assuming that the experimental errors are Gaussian,
\begin{equation}
{\rm CL}(\theta_i) = 1-\int_{\Delta \chi^2(\theta_i)}^{\infty} f_{\chi^2}(t;N_{\rm{dof}}) \: dt,
\label{Eq:CL}
\end{equation}
with
\begin{equation}
f_{\chi^2}(t;N_{\rm{dof}})=\frac{e^{-t/2}t^{N_{\rm{dof}}/2 - 1}}{\sqrt{2^{N_{\rm{dof}}}} \Gamma(N_{\rm{dof}}/2)}
\label{Eq:chi2}
\end{equation}
where $\Gamma$ is the Gamma function and the number of degrees of freedom $N_{\rm{dof}}$
is equal to 1.
This profiling method can be easily extended to two variables. In this case, the minimizations are
performed for $n-2$ free parameters and the confidence level ${\rm CL}(\theta_i,\theta_j)$ is
derived from Eq.~\ref{Eq:CL} with $N_{\rm{dof}}=2$. Given the large number of parameters in the fit, this frequentist approach presents the advantage of being much faster than a Bayesian inference, while giving results in remarkably good agreement with the latter~\cite{Yeche2006,PlanckCollaboration2014Freq}.
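In practice, Eq.~\ref{Eq:CL} amounts to evaluating the $\chi^2$ cumulative distribution function at the profiled $\Delta \chi^2$, as the following small Python sketch illustrates.
\begin{verbatim}
# Eq. (CL) in code: CL = 1 - P(chi2_{N_dof} > delta_chi2), i.e. the
# chi-square CDF evaluated at the profiled chi-square difference.
from scipy.stats import chi2

def confidence_level(delta_chi2, n_dof=1):
    return chi2.cdf(delta_chi2, df=n_dof)

print(confidence_level(1.0))        # ~0.683: 1-sigma for one parameter
print(confidence_level(4.0))        # ~0.954: 2-sigma for one parameter
print(confidence_level(2.30, 2))    # ~0.684: 1-sigma contour in 2D
\end{verbatim}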
In this work, we also at times combine our likelihood with the $\chi^2$ derived from Planck data. We use
the central values and covariance matrices available in the official
Planck repositories\footnote{\tt http://wiki.cosmos.esa.int/planckpla2015/index.php/Main\_Page}
for the cosmological parameters ($\sigma_8$, $n_s$, $\Omega_m$, $H_0$, $n_{\rm run}$), where $n_{\rm run}$ is the running of the scalar index of the fluctuation amplitudes, defined as $n_{\rm run} = 2\, d n_s / d \ln k$, which appears in the first-order expansion of the initial scalar power spectrum $\mathcal{P}_S$:
\begin{equation}
\label{eq:PS}
\mathcal{P}_S = \left( \frac{k}{k_\star} \right)^{n_s - 1 + \frac{1}{2} n_{\rm run} \ln \frac{k}{k_\star}}\, ,
\end{equation}
where $k_\star = 0.05 \: \rm{Mpc}^{-1}$ is the pivot scale of the CMB. For each parameter, we assume a Gaussian CMB likelihood with asymmetric $1\sigma$ errors that we estimate on either side of the central value from the $1\sigma$ lower and upper limits, thus accounting for asymmetric contours. We validated this strategy in \cite{Palanque2015a}, where we showed that it gave similar results to a Markov-Chain Monte-Carlo approach based on the full likelihood.
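The shape of Eq.~\ref{eq:PS} is straightforward to evaluate; the following small sketch (amplitude omitted) shows how a negative running suppresses power on the small scales probed by the Ly-$\alpha$ forest relative to the CMB pivot.
\begin{verbatim}
# Sketch of Eq. (eq:PS): scalar spectrum shape with running,
# evaluated relative to the CMB pivot scale (amplitude omitted).
import numpy as np

def primordial_shape(k, n_s=0.96, n_run=0.0, k_star=0.05):
    """(k/k*)^{n_s - 1 + 0.5 * n_run * ln(k/k*)}, k in 1/Mpc."""
    log_ratio = np.log(k / k_star)
    return np.exp((n_s - 1.0 + 0.5 * n_run * log_ratio) * log_ratio)

# A negative running suppresses small scales relative to the pivot:
print(primordial_shape(0.7, n_s=0.97, n_run=-0.01))
\end{verbatim}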
\subsection{Description of Nuisance Parameters} \label{subsec:nuisance}
The aforementioned third category of parameters is a set of 21 nuisance parameters that account for residual uncertainties or corrections related to noise in the data, spectrograph resolution, modeling of the IGM temperature-density relation, imperfections in our splicing technique, additional feedback processes to take into account in our simulations, and the redshift of reionization. Since all but the redshift of reionization are identical to those in \cite{Palanque2015a, Palanque2015b}, we briefly describe them in the following subsection, while we dedicate Sec.~\ref{subsec:z_reio} to our treatment of the redshift of reionization.
\subsubsection{Data, Technical and Simulation Nuisance Parameters}
We model a possible redshift-dependent correction to the spectrograph resolution with the multiplicative factor
\begin{equation}
\label{eq:Nuis1}
\mathcal{C}_{\rm{reso}} = e^{ - \left( \alpha_{\rm{reso}} + \beta_{\rm{reso}} (z-3) \right) \times k^2}
\end{equation} where $\alpha_{\rm{reso}}$ and $\beta_{\rm{reso}}$ are allowed to vary around a null value with a Gaussian constraint of $\sigma = \left( 5~\mathrm{km~s^{-1}} \right)^2$. We also quantify the uncertainty in each of the 12 redshift bins of the data by multiplicative factors $\alpha^{\rm{noise}}_{\langle z \rangle}$ where $\langle z \rangle = 2.2, 2.4, ..., 4.4$. This totals 14 free parameters accounting for data uncertainty and spectrograph resolution in our likelihood.
The hydrodynamics simulations we describe in Sec.~\ref{sec:simulations} are used to compute the 1D Ly-$\alpha$ flux power spectrum from a neutral Hydrogen field. A number of astrophysical feedback processes are poorly quantified today and require the addition of systematic uncertainties. We model the impact of feedbacks from Active Galactic Nuclei (AGN) and Supernov\ae (SN) on the Ly-$\alpha$ transmitted flux by implementing the multiplicative factors
\begin{equation}
\label{eq:Nuis2}
\begin{split}
& \mathcal{C}^{\rm{feedback}}_{\rm{AGN}} (k) = \left( \alpha_{\rm{AGN}}(z) + \beta_{\rm{AGN}}(z) \times k \right) \times \alpha^{\rm{feedback}}_{\rm{AGN}} \\
& \mathcal{C}^{\rm{feedback}}_{\rm{SN}} (k) = \left( \alpha_{\rm{SN}}(z) + \beta_{\rm{SN}}(z) \times k \right) \times \alpha^{\rm{feedback}}_{\rm{SN}}
\end{split}
\end{equation} where the $\alpha_{\rm{AGN, SN}}$ and $\beta_{\rm{AGN, SN}}$ coefficients are derived from \cite{Feedbacks}. An additional parameter is implemented to account for fluctuations in the intensity of the ionizing background, commonly referred to as UV fluctuations. Similar to \cite{UVbackground}, we implement an additive correction proportional to the transmitted flux power spectrum at pivot point $k_{p} = 0.009 s \: \rm{km}^{-1}$, $\mathcal{C}_{\rm{UV}}$, which is $k$-independent but evolves with redshift proportionally to the power spectrum. Finally, we account for any damped Lyman-alpha (DLA) systems we might not have removed in our pipeline by introducing a $k$-dependent multiplicative correction (see \cite{McDonald2005} for justification of the analytical form)
\begin{equation}
\label{eq:Nuis3}
\mathcal{C}_{\rm DLA}(k) = \left( \frac{1}{15,000 k - 8.9} + 0.018 \right) \times 0.2 \times \alpha_{\rm{DLA}}
\end{equation} where $\alpha_{\rm{DLA}}$ is free to vary in the likelihood fit.
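As an illustration, the sketch below composes the resolution correction of Eq.~\ref{eq:Nuis1} and the DLA term of Eq.~\ref{eq:Nuis3} with a model power spectrum; applying the DLA term as $(1 + \mathcal{C}_{\rm DLA})$, so that $\alpha_{\rm DLA}=0$ leaves the model unchanged, is our own assumption about the composition.
\begin{verbatim}
# Illustrative composition of two nuisance corrections; applying the
# DLA term as (1 + C_DLA) is an assumption, chosen so that
# alpha_dla = 0 leaves the model power spectrum unchanged.
import numpy as np

def c_resolution(k, z, alpha_reso=0.0, beta_reso=0.0):   # Eq. (Nuis1)
    return np.exp(-(alpha_reso + beta_reso * (z - 3.0)) * k**2)

def c_dla(k, alpha_dla=0.0):                             # Eq. (Nuis3)
    return (1.0 / (15000.0 * k - 8.9) + 0.018) * 0.2 * alpha_dla

def corrected_power(p_model, k, z, alpha_reso=0.0, beta_reso=0.0,
                    alpha_dla=0.0):
    return (p_model * c_resolution(k, z, alpha_reso, beta_reso)
                    * (1.0 + c_dla(k, alpha_dla)))

k = np.linspace(0.001, 0.02, 5)                          # in s/km
print(corrected_power(np.ones_like(k), k, z=3.0, alpha_dla=0.1))
\end{verbatim}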
We recall that the simulated flux power spectrum is obtained by a splicing technique that consists in constructing the power spectrum from a large-box simulation corrected for its low resolution, and a high-resolution simulation corrected for its small size. The residuals with respect to an exact simulation are modeled by a broken line with a pivot at $k^{\rm{min}}_{L} = 2 \pi / L$ where $L= 25 h^{-1}\rm{Mpc}$ is the box size of the high-resolution simulation introduced in Sec.~\ref{sec:simulations}. Since the relationship between $h \: \rm{Mpc}^{-1}$ and $s \: \rm{km}^{-1}$ is redshift-dependent, so is the aforementioned pivot scale, which represents the scale at which the large-box simulation correction switches from a $k$-independent to a $k$-dependent factor~\citep{Borde2014}. Similar to \cite{Palanque2015b}, the slope below the pivot scale is fixed. The vertical offset at the pivot scale $A_{\epsilon} (k=k_p)$ is left free and we allow for a redshift-dependence in the correction slope $\eta_{\epsilon}$ beyond the pivot scale. This adds two nuisance parameters to our third category of parameters in our likelihood analysis.
\subsubsection{Reionization History}
\label{subsec:z_reio}
As in most optically-thin hydrodynamics simulations, the ionizing (UV) background causes the Hydrogen to quickly become highly ionized. This is qualitatively inaccurate since reionization processes are non-instantaneous and operate inhomogeneously in space due to density contrasts. Although hydrodynamics simulations are required to tackle this shortcoming self-consistently, our study is concerned with modeling the Ly-$\alpha$ forest at $z<5$, well after reionization processes have completed. The redshift at which the UV background switches on, however, affects the Jeans smoothing scale of the baryon gas~\citep{Gnedin&Hui98} in a manner similar to the free streaming scale of warm dark matter particles. Altering the reionization redshift $z_{\star}$ impacts the amount of time that gas pressure has to suppress small-scale density fluctuations. It is therefore necessary to explore different thermal histories of the IGM in order to lift the degeneracy between the Jeans smoothing scale and the WDM free streaming scale.
Fig.~13 of \citep{McDonald2005} shows that an increase in the redshift of reionization from $z_{\star}=7$ to 17 suppresses the Ly$\alpha$ flux power spectrum in the largest $k$-modes present in the BOSS data ($k \sim 0.02~s \: \rm{km^{-1}}$) by about 1\% at $z=2.1$ and 4\% at $z=4.0$. Given the reduced range allowed for $z_{\star}$ by recent Planck measurements, these shifts are reduced to percent-level at most. We implement another multiplicative nuisance parameter $\mathcal{C}_\star(k)$ in our likelihood to take into account the effect of $z_{\star}$ on the IGM thermal history. This parameter is given by
\begin{equation}
\mathcal{C}_\star(k) = \alpha_\star(z) + \beta_\star(z) k+\gamma_\star(z) k^2 \;,
\end{equation}
where $\alpha_\star$, $\beta_\star$, and $\gamma_\star$ are taken from \citep{McDonald2005} and
interpolated with respect to the central model to our range of redshift.\\
Since the redshift of reionization is treated as a nuisance parameter in the fit, we add a $z_{\star} = 9.0 \pm 1.5$ prior to our likelihood. The central value and range of this prior are defined in order to encompass the most recent measurements of the redshift of reionization: $10.5\pm 1.1$ from WMAP9+BAO+H0~\cite{WMAP9}, $9.9\pm 1.8$ from Planck TT temperature data at all multipoles and LFI-based polarization data at low ($\ell<30$) multipoles (PlanckTT+`lowP')~\cite{Planck2015}, $10.0\pm 1.7$ when also including the $\ell>30$ HFI polarization data (PlanckTTEE+`lowP')~\cite{Planck2015}. The latter constraints were revised to $8.11\pm 0.93$ and $8.24\pm0.88$ respectively in the latest incarnation where the `lowP' likelihood was replaced with the `SIMlow' likelihood that includes the HFI-based polarization for $\ell\leq 20$~\cite{Planck2016PolarReio}. Values ranging from 7.8 to 8.8 are obtained by the Planck collaboration for a given choice of CMB temperature and polarization data set, when varying the model of reionization adopted~\cite{Planck2016Reio}.
\subsection{Results} \label{subsec:results}
Combining our analysis of the Ly-$\alpha$ forest flux power spectrum with the expansion rate value of $H_0 = 67.3 \pm 1.0 \; \rm{km~s^{-1}~Mpc^{-1}}$ issued by the 2015 Planck collaboration (Ly-$\alpha$ + $\rm H_0$ herein), we obtain the most stringent lower limit on WDM mass to date, set at $m_X > 4.09 \: \rm{keV}$ for thermal relics and $m_s > 24.4 \: \rm{keV}$ for DW sterile neutrinos (95\% CL), as shown in the first part of Tab.~\ref{tab:CL95_1D}.
The fitted values of the nuisance parameters are all well within the expected range. The IGM nuisance parameters, the corrections to our model of the splicing technique, and the corrections to the spectrograph resolution are all compatible with no correction at the $1\sigma$ level. The additive corrections to the estimate of the noise power spectra range from $-9\%$ to $+19\%$ with median at $-2.5\%$ and negligible correction in the redshift bins where the noise dominates over the signal (i.e., at low redshift). The IGM temperature parameters have large error bars and are thus poorly constrained by this data set. Their values are within $1-2~\sigma$ of typical measurements (see, e.g. \citep{Becker2011}). The optical depth amplitude and index have best-fit values consistent with those of the $\Lambda$-CDM$\nu$ case in~\cite{Palanque2015b}, although the uncertainty ranges are larger by factors of 2 and 4, respectively: $A^\tau = \left( 25.0 \pm 2.6 \right) \times 10^{-4}$ and $\eta^\tau = 3.728 \pm 0.074$ (68\% C.L.). \\
\begin{table}[htb]
\caption{95\% CL lower bounds on thermal relic mass $m_X$, in keV, obtained with three data configurations. When Ly-$\alpha$ is combined with other datasets, the limit is derived with (right) or without (left) running of the spectral index. In each case, the corresponding DW sterile neutrino mass (in keV) is given in parentheses (see Eq.~\ref{eq:MsMxrelation}).}
\begin{center}
\begin{tabular}{lcc}
\hline \\[-10pt]
\textbf{Data set} & \multicolumn{2}{c}{ \textbf{Lower bound on $\; \rm{\frac{m_X}{keV}} \left( \rm{\frac{m_s}{keV}} \right)$}}\\[2pt]
\hline \\[-10pt]
Ly-$\alpha$ + $H_0$ ($z \leq 4.5$) & \multicolumn{2}{c}{4.09 (24.4)} \\[2pt]
Ly-$\alpha$ + $H_0$ ($z \leq 4.1$) & \multicolumn{2}{c}{ 2.97 (16.1)} \\[2pt]
\hline \\[-10pt]
& no running & with running\\[2pt]
Ly-$\alpha$ + Planck {\scriptsize (TT + lowP)} & 2.96 (16.0) & 4.26 (25.7)\\[2pt]
Ly-$\alpha$ + Planck {\scriptsize (TT + lowP+ TE + EE)} + BAO & 2.93 (15.8) & 4.12 (24.6)\\[2pt]
\hline \\[-10pt]
\end{tabular}
\end{center}
\label{tab:CL95_1D}
\end{table}
The primary causes for the enhancement of our limit compared to prior studies (see Tab.~\ref{tab:studies}) are mostly two-fold. Our daughter QSO sample, for one, includes over four times as many medium-resolution spectra as previous SDSS studies, with all spectra selected for their high signal-to-noise ratio and good quality, and includes more objects in the highest redshift bins. As discussed at the end of Sec.~\ref{sec:simulations} and illustrated in Fig.~\ref{fig:TkFlux}, the damping of small-scale perturbations due to free-streaming is more prominent at higher redshifts. As such, the WDM particle mass is better constrained at higher redshifts, despite observations being more challenging. Whereas our predecessors used high-resolution spectra from the HIRES, UVES and MIKE spectrographs to probe higher redshifts, our QSO sample suffices to establish a competitive constraint on $m_X$ thanks to the addition of Ly-$\alpha$ forest power spectra in the $\langle z \rangle=4.2$ and $4.4$ bins (i.e., for Ly-$\alpha$ absorbers in the redshift range between $z=4.1$ and 4.5). Dropping these two bins from our sample yields $m_X \gtrsim 3.0 \: \rm keV$, which is illustrative of their significance. Despite the relatively low number of objects in these two upper $z$-bins ($26$ and $14$ respectively out of $\sim 14,000$ in total), their contribution enhances our constraint by over 30\%. Moreover, the unprecedented resolution of our SPH numerical simulations also contributes to the competitiveness of our result. Several systematic effects have been greatly reduced: the accuracy of the splicing method and the model of its residual by a scale-dependent feature, the quantification of the sampling variance, the model of the IGM by a broken power-law and the better accounting for the Hydrogen reionization history. The improvements on these simulation nuisance parameters are fully detailed in \cite{Palanque2015b}. Finally, we also slightly extend the range covered by the data to smaller scales, from $k=0.018$ in SDSS-I to $0.020 ~{s\: \rm{km}^{-1}}$ in SDSS-III.
\subsubsection{Spectral Index Running}
\begin{figure}[!]
\begin{center}
\epsfig{figure = plots/C2D_All_zreio9.png, width = 16cm}
\caption{Two-dimensional probability density functions. 68\% and 95\% confidence intervals with regards to $1 \rm{keV} / m_X$ and the 4 cosmological parameters in our grid. Blue contours depict Ly-$\alpha$ forest flux power spectrum data (see Sec.~\ref{sec:Lya}) combined with the $H_0$ constraints from the Planck 2015 collaboration. Yellow and red contours are established by adding low-$\ell$ polarization, temperature and E auto and cross-correlation power spectra from Planck and measurements of the baryon acoustic oscillations scale, with the spectral index running $n_{\rm run}$ fixed to 0 (yellow) or allowed to vary (red) and fitted as a free parameter in our multidimensional analysis (see Sec.~\ref{sec:constraints}).}
\label{fig:Contour_nrun}
\end{center}
\end{figure}
In addition to Ly-$\alpha$ forests, the cosmic microwave background and baryon acoustic oscillations are other powerful probes for constraining cosmological parameters. These observations cannot directly probe the small scales at which WDM plays a role, and are therefore not expected to provide a direct constraint on the mass of a WDM particle. However, they can impact our constraint on WDM mass through the correlation this parameter has with other cosmological parameters that CMB and BAO measure with a better precision than Ly-$\alpha$ data alone can. We thus also include, in a second step, the Planck temperature data at low and high multipoles (PlanckTT), the low-multipole LFI-based polarization data up to $\ell=29$ (`lowP'), and the $\ell \ge 30$ polarization data (denoted `TE' and `EE'), as well as measurements of the BAO scale by 6dFGS \cite{6dFGS}, the main galaxy sample of SDSS \cite{SDSSmainGalaxy}, the BOSS LOW-Z sample \cite{LOWZ-CMASS} and the CMASS DR11 sample \cite{LOWZ-CMASS} (`BAO' herein). \\
Although these additional sets have contributed to establishing competitive constraints on the sum of the masses of (standard) neutrinos $\Sigma m_\nu$ and the effective number of neutrino species $N_{\rm eff}$ \cite{Palanque2015a, Palanque2015b, Rossi2015}, they deteriorate our limit on WDM mass (last two rows of Tab.~\ref{tab:CL95_1D}, column labelled `no running'). This is the consequence of two factors. The first one is the tension on the value of the spectral index measured with different probes: $n_s = 0.934\pm 0.009$ obtained with the Ly-$\alpha$ forest and $n_s = 0.959\pm 0.004 $ with CMB. The second factor is that our limit on the WDM mass loosens with increasing $n_s$; the larger $n_s$ imposed by the combination with CMB therefore weakens our limit. \\
Ly-$\alpha$ and CMB data being relevant on different scales, however, we can remedy the disparity on $n_s$ by allowing a non-zero running of the spectral index $n_{\rm run}$ (cf. Eq.~\ref{eq:PS}). This additional free parameter in our multidimensional analysis reconciles the different values of $n_s$ measured at small (with Ly-$\alpha$) and large (with CMB) scales. The small discrepancy on $n_s$ between Ly-$\alpha$ and CMB measurements, and the subsequent detection of $n_{\rm run}$ at $\sim3\sigma$, were extensively discussed in~\cite{Palanque2015b}. In the present analysis, the best-fit value of running is unchanged compared to what we measured in the context of CDM with massive active neutrinos, which is no surprise since the detection of running is completely driven by the different values of $n_s$ measured on different scales ($n_s\sim 0.97$ at $k = 0.05~{\rm Mpc}^{-1}$ from Planck and $n_s \sim 0.94$ at $k = 0.7~{\rm Mpc}^{-1}$ from Ly-$\alpha$). We also measure no significant correlation between the value of running, which is set by the comparison of $n_s$ on large and small scales, and the value of the WDM particle mass, set by the shape and redshift-dependence of the power spectrum on scales probed by Ly-$\alpha$ data. \\
We feature in Tab.~\ref{tab:CL95_1D} the constraints on WDM mass obtained in the `no running' and `with running' configurations, which denote the cases in which the value of the spectral index running is either taken as fixed to zero or as a free parameter, respectively. As expected, our limits on WDM mass when running is allowed to vary are similar to the limits that were derived from Ly-$\alpha$ data alone, since the effective value of $n_s$ on small scales is then determined by Ly-$\alpha$ data (and not by CMB data as in the `no running' case). We list in Tab.~\ref{tab:CL95_1D} the constraints obtained for all three configurations (Ly-$\alpha$+$H_0$, Ly-$\alpha$+Planck with no running of $n_s$ and Ly-$\alpha$+Planck allowing for a running of $n_s$) to illustrate the impact of the value of $n_s$ on the sensitivity of our analysis. The `with running' configuration, however, is to be considered with caution. The detection of running is driven by the different values of $n_s$ measured on large (probed by CMB) and small (probed by Ly-$\alpha$) scales. As we explained in \cite{Palanque2015b}, the determination of $n_s$ in Ly-$\alpha$ data is prone to systematic effects in the measurement of the flux power spectrum, such as modeling of the spectrograph resolution or contributions from SN or AGN feedbacks, UV fluctuations... The measure of $n_s$ in Ly-$\alpha$ data is a delicate task that could still be affected by an unaccounted-for systematic. \\
Figure \ref{fig:Contour_nrun} displays our 68\% and 95\% likelihood intervals with respect to keV $/ m_X$ and our four main cosmological parameters in the `with running' (red) and `no running' (yellow) configurations. The contours for the Ly-$\alpha$ + CMB configuration are very similar to those for Ly-$\alpha$ + CMB + BAO (featured). The above discussion still holds true in the 2D case. More importantly, no significant correlation between our set of cosmological parameters and WDM mass is manifest, which comforts us in the interpretation that a small-scale power deficit in our simulated power spectrum would be due to the free-streaming of DM particles as opposed to a combined effect of $\Omega_M$, $H_0$, $\sigma_8$ and/or $n_s$.
\subsubsection{IGM Thermal History}
\begin{figure}[!]
\begin{center}
\epsfig{figure = plots/C2D_Zreio9.png, width = 16cm}
\caption{ Confidence intervals between the WDM mass (left) and the primordial spectral index (right) with the reionization redshift, given the same configurations as Fig.~\ref{fig:Contour_nrun}. The apparent lack of correlation is addressed in the text. The $z \leq 4.1$ (\textit{resp.} $4.5$) configuration corresponds to taking the first 10 (\textit{resp.} all 12) redshift bins from our data set (Ly-$\alpha$ only).}
\label{fig:Contour_Zreio}
\end{center}
\end{figure}
Our best-fit value for the redshift of reionization is $z_{\star} \simeq 8.2$ using Ly-$\alpha$ data only. The best-fit value shifts to $z_{\star} \simeq 8.8$ and $8.4$ in the fixed and fitted $n_{\rm{run}}$ Ly-$\alpha$+CMB configuration respectively, and to $z_{\star} \simeq 8.8$ and $ 8.5$ when BAO is also included, all consistent to within one standard deviation from the CMB constraint.
We detect no strong correlation between $z_{\star}$ and $m_X$, as shown on the left panel of Fig.~\ref{fig:Contour_Zreio}. The global correlation coefficient of $z_\star$, defined as the correlation between that parameter and that linear combination of all other parameters which is most strongly correlated with it, is 60\%. This correlation is due to the fact that several other nuisance parameters in our fit encompass --- even partially --- the effect of the IGM thermal history on the transmitted flux power spectrum, namely the splicing residual pivot offset $A_{\epsilon} (k=k_p)$ and slope $\eta_{\epsilon}$, as well as the uncertainties due to the redshift-dependence of the spectrograph resolution $\alpha_{\rm{reso}}$ and $\beta_{\rm{reso}}$. \\
Because the gas pressure has a similar effect on the power spectrum as the free streaming of WDM particles, $z_{\star}$ is expected to feature a correlation with $m_X$, as recently noted in \cite{Sherwood}. In the present situation, however, this correlation is strongly reduced for several reasons: the best-fit value for $m_X$ lies near the benchmark CDM model (showing no significant departure from $1/m_X=0$), the data points with the highest statistical significance lie at low redshift where the correlation is the lowest, and the many nuisance parameters that are fitted along with the cosmology and IGM parameters also contribute to absorbing the correlation. More generally, these nuisance parameters leave $z_\star$ with relatively small correlations with all cosmological and astrophysical parameters. The strongest residual correlation is with the primordial spectral index $n_s$, at the $\sim 20\%$ level, which is expected since the effect of alternate values of $n_s$ on the flux power spectrum is a shift in the slope with respect to spatial scales. This slight correlation is manifest in the right panel of Fig.~\ref{fig:Contour_Zreio}, where the semi-major axis of the quasi-elliptical blue Ly-$\alpha$ contours deviates from vertical. The correlation is damped when taking CMB data into account as it probes a distinct $k$ range from that of Ly-$\alpha$ forest data.
\subsubsection{Warm Dark Matter or Warmer IGM ?}
\begin{figure}[!]
\begin{center}
\epsfig{figure = plots/C2D_IGM_zreio9.png, width = 16cm}
\caption{ Confidence intervals between the WDM mass and IGM temperature intercept $T_0^{z=3}$ (left) and exponent (right) $\gamma^{z=3}$ in the Ly-$\alpha$ + $H_0$ configuration. We include the confidence intervals when only taking our 10 lowest redshift bins (excluding $\langle z \rangle = 4.2$ and $4.4$, e.g., `$z \leq 4.1$' configuration) in contrast with our current extension of the analysis which features the inclusion of these two highest redshift bins (`$z \leq 4.5$' configuration).}
\label{fig:Contour_T0gamma}
\end{center}
\end{figure}
It has been recently argued in \cite{warmIGM} that the small-scale cutoff in the power spectrum can be accounted for by a warm IGM rather than a warm DM particle. The temperature-density power-law defined in Eq.~\ref{eq:IGM} is a crude first-order assumption as the power-law intercept $T_0 \: (z)$ and exponent $\gamma \: (z)$ are poorly constrained. $T_0$ may not be a monotonic function of redshift at $z \gtrsim 5$. A more complete understanding of the thermal state of the IGM and of its history is crucial for investigating the lower velocity-space $k$ segments of the Ly-$\alpha$ flux power spectrum. In our likelihood computation, we allow the IGM temperature-density relation to obey two distinct power laws, above and below a $z=3$ break.
No degeneracy between the IGM temperature at $z=3$ and WDM mass is manifest, as is illustrated in Fig.~\ref{fig:Contour_T0gamma}. Including the $\langle z \rangle = 4.2$ and $4.4$ bins in our multidimensional analysis tightens our bounds on WDM mass and lowers $T_0 \: (z=3)$ from $\sim 14,000$ to $\sim 10,000$ Kelvins. The power-law exponent $\gamma \: (z=3)$ remains unaltered. By all accounts, the issues raised in \cite{warmIGM} do not apply to the redshift and velocity-space ranges that we probe. Our model of the IGM thermal state, although generic, is not the predominant limiting factor in the establishment of our bounds on WDM mass, as is apparent in Fig.~\ref{fig:Contour_T0gamma}. Our result is primarily limited by the sheer size and low resolution of our Ly-$\alpha$ power spectrum data sample.
\section{Introduction}
\label{sec:intro}
\input{introduction}
\section{Sterile Neutrinos as WDM}
\label{sec:NuS}
\input{neutrino}
\section{Flux Power Spectrum from the Ly-$\alpha$ Forest}
\label{sec:Lya}
\input{Lya}
\section{Numerical Simulations}
\label{sec:simulations}
\input{simulations}
\section{Constraints on Warm Dark Matter Mass}
\label{sec:constraints}
\input{constraints}
\section{Discussion and Conclusion}
\label{sec:discussion}
\input{discussion}
\acknowledgments
We thank Julien Lesgourgues for his precious feedback and input, as well as Volker Springel for making \texttt{GADGET-3} available to our team. Kevork Abazajian's remarks about the neutrino to relic mass mapping were useful and appreciated. We thank James Bolton for helpful discussions on the thermal history of the IGM and its implementation in simulations.
MV is supported by the ERC-StG "cosmoIGM" and by the INDARK PD51 grant.
We acknowledge PRACE (Partnership for Advanced Computing in Europe) for awarding us access to resource curie-thin and curie-xlarge nodes based in France at TGCC, under allocation numbers 2010PA2777, 2014102371 and 2012071264.\\
\\
\bibliographystyle{unsrtnat_arxiv}
\subsection{Simulation Pipeline}
We run a set of N-body + hydrodynamical simulations to model the flux power spectrum in the non-linear regime of density perturbations, using \textsf{Gadget-3}, an updated version of the publicly available \textsf{Gadget-2} code\footnote{\tt http://www.mpa-garching.mpg.de/gadget/}~\cite{Springel2001,Springel2005}.
Given a set of cosmological parameters, a user-specified box size $L$, and a number of particles per species $N^3$, the code simulates the evolution of baryonic, stellar and dark matter particles. The former undergo Smoothed Particle Hydrodynamics (SPH) treatment, a Lagrangian method of solving the hydrodynamics equations \citep{Monaghan2005, Rosswog2009, Springel2010}, whereas the latter two are treated as collisionless populations of fixed-mass point particles. Stars are a subset of the baryon population, which the code produces whenever a particle with a temperature less than $10^5\,{\rm K}$ reaches an overdensity with respect to the mean exceeding 1000 (as done for instance in \cite{Viel2010}).
The code is widely used for cosmological simulations and has been thoroughly tested.
For each simulation (\textit{i.e.,} each set of parameters), the initial matter power spectrum is generated using the \textsf{CAMB} software\footnote{\tt http://camb.info} \cite{Lewis2000}
down to $z=30$. The power spectrum is normalized to feature a chosen fluctuation amplitude $\sigma_8$ at $z=0$ (identical for all the simulations that do not explicitly probe the impact of $\sigma_8$), before positions and velocities are generated using second-order Lagrangian perturbation theory implemented by the \textsf{2LPTic}\footnote{\tt http://cosmo.nyu.edu/roman/2LPT/}
software. Using a splicing technique described in~\cite{McDonald2003} and first applied in~\cite{Borde2014,Palanque2015a}, we infer the flux power spectrum of an equivalent ($L=100 \:h^{-1}~ \rm Mpc$, $N=3072$) simulation from a combination of three lesser ones: a scaled-down (25, 768) simulation to provide high resolution on small scales, a large-box low-resolution (100, 768) for large scales, and a small-box low-resolution (25, 192), which bridges the preceding two at intermediate scales. In addition to saving considerable time and resource consumption, this splicing technique works around the limits of the software and packages we utilize in our pipeline. Our adaptation of the splicing technique has been successful at reproducing the `exact' spectrum for (100, 1024) and (100, 2048) configurations to within 3\% and 2\% respectively, as shown in~\cite{Palanque2015b}.
A detailed assessment of our methodology is extensively provided in~\cite{Borde2014, Palanque2015a} and reviewed in~\cite{Palanque2015b}, along with all other simulation specifics. In what follows, we provide the reader a run-through of the major characteristics of our pipeline.\\
\subsection{Parameter Space}
Henceforth, our `best guess' power spectrum refers to the \textit{spliced} (100, 3072) power spectrum obtained for our central model: a flat $\Lambda$CDM Universe whose cosmological parameters are listed in the column titled `central value' of Tab.~\ref{tab:params}, in agreement with the 2013 Planck collaboration's best-fit cosmological parameters~\cite{PlanckCollaboration2013}. We produce simulations for several values of the current expansion rate of the Universe $H_0 = 100 \: h\rm \: km~s^{-1}~Mpc^{-1}$, the current matter energy density $\rm \Omega_{\rm{M}}$, the spectral index of primordial density fluctuations $n_s$ and the current fluctuation amplitude of the matter power spectrum $\rm \sigma_8$. We explore two pure-$\Lambda$WDM models with $m_X = 2.5$ and 5 keV thermal relics implemented using the neutrino mass degeneracy parameters in \textsf{CAMB} to encode $\Delta N_{\rm eff} = \left( T / T_\nu \right)^4$, according to the mass-temperature relation given in Eq.~\ref{eq:OTXrelation}.
In addition to the above cosmological parameters, we also vary parameters that describe the physics of the IGM, such as its temperature. The temperature-density relation of the IGM is measured at each redshift from the low-density regions in the simulations and modeled with two parameters $T_0(z)$ and $\gamma(z)$ according to the relation
\begin{equation} \label{eq:IGM}
T(\rho, z) = T_0(z) \: \left( 1 + \delta \right)^{\gamma(z) - 1}
\end{equation}
where $\delta=(\rho-\langle\rho\rangle)/\langle\rho\rangle$ is the normalized density contrast. Our central model has logarithmic intercept $T_0^{z=3} = 14,000 \: \rm K$ and slope $\gamma^{z=3} = 1.3$, consistent with the measurements of~\cite{Becker2011}.
Since $T_0$ and $\gamma$ are poorly constrained parameters~\cite{Garzilli2012, Lidz2010, Schaye2000}, we allow them to vary by $50 \%$ and $25 \%$ from their central values, respectively, and we run simulations for each of these sets of values. When fitting data, however, we allow for additional freedom by modeling the IGM temperature with three parameters describing the redshift dependence, in addition to the values of $T_0$ and $\gamma$ taken at $z=3$ (cf., Sec.~\ref{subsec:methodology} for details).
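For illustration, Eq.~\ref{eq:IGM} with the central-model values translates directly into the following small Python sketch.
\begin{verbatim}
# Direct sketch of Eq. (IGM): the IGM temperature-density power law
# with the central-model values quoted above.
import numpy as np

def igm_temperature(delta, T0=14000.0, gamma=1.3):
    """T(rho) = T0 * (1 + delta)^(gamma - 1), delta the density contrast."""
    return T0 * (1.0 + np.asarray(delta)) ** (gamma - 1.0)

print(igm_temperature(0.0))   # mean-density gas sits at T0
print(igm_temperature(1.0))   # a factor-2 overdensity is ~23% hotter
\end{verbatim}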
Finally, we include in the grid two additional parameters, $A^\tau$ and $\eta^\tau$, to vary the effective optical depth of Ly-$\alpha$ absorption, although these dependences are obtained without requiring additional simulations (see Sec.~\ref{sec:pk} for details). The central value and range of each of the grid parameters are summarized in Tab.~\ref{tab:params}.\\
\begin{table}[htb]
\caption{List of parameters in the simulation grid, along with their range of values.}
\begin{center}
\begin{tabular}{lccl}
\textbf{Parameter} & \textbf{Central value} & \textbf{Range} & \textbf{Description} \\[2pt]
\hline \\[-10pt]
$ 1 {\rm keV} / m_X$ & $0.0$ & $+0.2 +0.4$ & Inverse of WDM mass when expressed in $\rm keV$ (thermal relic)\\[2pt]
$h$ & $0.675$ & $\pm 0.05$ & Current expansion rate in units of $100 \: \rm km~s^{-1}~Mpc^{-1}$\\[2pt]
$\rm \Omega_M$ & $0.31$ & $\pm 0.05$ & Current matter energy density in units of critical density\\[2pt]
$n_s$ & $0.96$ & $\pm 0.05$ & Scalar spectral index\\[2pt]
$\sigma_8$ & $0.83$ & $\pm 0.05$ & Current RMS of matter fluctuation amplitude at $8 \: h^{-1}~\rm Mpc$\\[2pt]
$T_0^{z=3}$ & $\rm 14k$ & $\pm \rm 7k$ & IGM temperature intercept at $z=3$ in K\\[2pt]
$\gamma^{z=3}$ & $1.3$ & $\pm 0.3$ & IGM temperature-density index at $z=3$ \\[2pt]
$A^\tau$ & $0.0025$ & $\pm 0.0020$ & Effective optical depth amplitude \\[2pt]
$\eta^\tau$ & $3.7$ & $\pm 0.4$ & Effective optical depth index \\[2pt]
\hline \\[-10pt]
\end{tabular}
\end{center}
\label{tab:params}
\end{table}
We evaluate the variations of the flux power spectrum with our set of parameters
$$\vec{x} = \left({\rm{keV}}/m_X, \: h, \: \rm{\Omega_M}, \: n_s, \: \sigma_8, \: T_0^{z=3}, \: \gamma^{z=3}, \: A^\tau, \: \eta^\tau \right)^T$$
around our central model $\vec{x}_0$ using a second-order Taylor expansion:
\begin{equation} \label{eq:Taylor}
f(\vec{x}_0 + \vec{\Delta x}) \simeq f(\vec{x}_0) + \sum_i \partial_i f(\vec{x}_0) \Delta x_i + \frac{1}{2} \sum_{i,j} \partial^2_{ij} f (\vec{x}_0) \Delta x_i \Delta x_j \;.
\end{equation}
We run a simulation for each parameter and each value in Tab.~\ref{tab:params}, as well as a cross-term simulation for every pair of parameters. Consequently, along with the best-guess configuration, our parameter space yields a total of 36 \textit{spliced} simulations to run, requiring over 4 Mhr CPU computing time. They are produced at the TGCC Curie machine at Bruy\`eres-le-Ch\^atel, France. All simulations that do not require $ 1 {\rm keV} / m_X$ to be different from its central value (\textit{i.e.,} 0) are common to this work and to the analysis of \cite{Palanque2015a, Palanque2015b}, and were not run anew.
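Schematically, once the first and second derivatives have been estimated from the grid by finite differences, evaluating Eq.~\ref{eq:Taylor} at an arbitrary point of parameter space is a cheap algebraic operation, as in the following sketch (toy numbers, not actual grid derivatives):
\begin{verbatim}
# Generic second-order Taylor evaluator in the spirit of Eq. (Taylor);
# the gradient and Hessian are assumed precomputed from the simulation
# grid by finite differences (toy placeholder values below).
import numpy as np

def taylor_power(dx, f0, grad, hess):
    """f(x0 + dx) ~ f0 + grad . dx + 0.5 dx^T hess dx."""
    dx = np.asarray(dx)
    return f0 + grad @ dx + 0.5 * dx @ hess @ dx

f0, grad = 1.0, np.array([0.3, -0.1])
hess = np.array([[0.02, 0.0], [0.0, 0.05]])
print(taylor_power([0.1, -0.2], f0, grad, hess))
\end{verbatim}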
\subsection{Constructing the Simulated Ly-$\alpha$ Power Spectrum}\label{sec:pk}
We extract 13 \textsf{Gadget} snapshots (output files) at equidistant redshifts between $z = 4.6$ and $z = 2.2$, and construct, for each snapshot, particle and line-of-sight (LOS) samples. The former serves to establish the temperature-density relation (see Eq.~\ref{eq:IGM}) of the baryon population, treated as a monoatomic gas, at the given redshift. \cite{Borde2014} provides details of the procedure and results. Consistent with standard unidimensional flux power studies, the purpose of the latter sample is to compute the \textsc{Hi} effective optical depth $ \tau_{\rm eff}(z) = - \ln \langle \varphi (z) \rangle $ on 100,000 randomly-seeded lines-of-sight.
At this stage, we fix the photo-ionization rate by rescaling the transmitted fluxes at each redshift in order to have the effective optical depth follow the empirical law
\begin{equation}
\label{optdepth}
\tau_{\rm eff} = A^\tau \times \left( 1 + z \right)^{\eta^\tau}
\end{equation}
where the amplitude $A^\tau$ and the index $\eta^\tau $ take the desired values from Tab.~\ref{tab:params}. The central values of $A^\tau$ and $\eta^\tau $ are in agreement with observations, and the allowed range encompasses observational uncertainties~\cite{Meiksin2009}. A previous study~\cite{Palanque2015b} constrained these parameters to $A^\tau = \left( 26 \pm 1 \right) \times 10^{-4}$ and $\eta^\tau = 3.734 \pm 0.015$ (68\% C.L.) using the Ly-$\alpha$ power spectrum in Sec.~\ref{sec:Lya} with a similar suite of SPH simulations. Although these uncertainty intervals are significantly narrower than the steps we chose for computing the derivative terms, we opt for the more conservative ranges because the cited values apply to a different cosmological model ($\Lambda$CDM$\nu$: cold dark matter and massive standard-model neutrinos) than the present one ($\Lambda$WDM).
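For illustration, a possible implementation of this rescaling step might look as follows; this is a sketch under simplifying assumptions (a single redshift bin, a hypothetical array \texttt{tau\_los} of \textsc{Hi} optical depths along the lines of sight), not the code actually used.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Illustrative sketch (not the actual pipeline) of the rescaling step:
# tau_los is a hypothetical array of HI optical depths along all lines
# of sight of one snapshot at redshift z.

def rescale_flux(tau_los, z, A_tau=0.0025, eta_tau=3.7):
    target = A_tau * (1.0 + z) ** eta_tau
    # Find s with -ln< exp(-s*tau) > = tau_eff(z); rescaling tau by s
    # amounts to rescaling the photo-ionization rate by 1/s.
    def mismatch(s):
        return -np.log(np.mean(np.exp(-s * tau_los))) - target
    s = brentq(mismatch, 1e-6, 1e6)
    return np.exp(-s * tau_los)      # rescaled transmitted flux
\end{verbatim}
Since $s\mapsto-\ln\langle\mathrm{e}^{-s\tau}\rangle$ is increasing in $s$, the root is unique.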
The LOS-averaged transmitted flux power spectrum is then computed from the LOS sample by virtue of $P_\varphi (k) = | \tilde{\delta}_\varphi (k) |^2$, where $\tilde{\delta}_\varphi$ is the Fourier transform of the transmitted-flux contrast $\delta_\varphi = \varphi/\langle\varphi\rangle - 1$, with the fluxes computed using smoothed particle hydrodynamics. Note that the simulations cover a broader redshift range than the SDSS-III/BOSS DR9 data. The $z=4.6$ bin will not be used for the analysis presented here, but it is available for future studies extending to higher redshift.
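A bare-bones version of this estimator (our illustration; normalization conventions differ between codes) is:
\begin{verbatim}
import numpy as np

# Bare-bones 1D flux power spectrum estimator (illustration only).
# flux is a hypothetical (n_los, n_pix) array of transmitted fluxes
# along sightlines of comoving length L_box.

def flux_power(flux, L_box):
    delta = flux / flux.mean() - 1.0         # flux contrast
    n_pix = flux.shape[1]
    pk = (np.abs(np.fft.rfft(delta, axis=1)) ** 2).mean(axis=0)
    pk *= L_box / n_pix**2                   # continuum normalization
    k = 2.0 * np.pi * np.arange(pk.size) / L_box
    return k[1:], pk[1:]                     # drop the k = 0 mode
\end{verbatim}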
Figure \ref{fig:TkFlux} compares the resulting flux power spectra for $1 \:{\rm keV} / m_X = 0.4$ and $0.2$ with that of the best-guess model at three redshifts within the Ly-$\alpha$ forest bounds probed by BOSS. As expected, heavier thermal relics lie closer to the standard cold dark matter model, which in this context corresponds to vanishing-velocity DM particles (equivalently, particles of infinite mass). The power suppression caused by their free-streaming is clearly quantifiable at all three displayed redshifts, despite simulation shot noise and statistical uncertainties. Furthermore, the free-streaming cutoff is more prominent at higher redshifts, as clearly visible in the plots at the bottom of Fig.~\ref{fig:TkFlux}. This is because structure formation at high redshift has departed less from the linear regime, so the power suppression lies nearer to the analytical approximation illustrated in Fig.~\ref{fig:Tk_CAMB}.
It is worth recalling that the flux power spectrum in each simulation is normalized to the same $\sigma_8$ value. The apparent excess of power in the low-$k$ modes should therefore not be interpreted as an actual accumulation of matter on large scales, but rather as an artifact of this normalization.
\begin{figure}[htbp]
\begin{center}
\epsfig{figure= plots/8panel.png, width = 16cm}
\caption{ {\bf Top \& Middle:} Visual inspection (see caption of Fig.~\ref{fig:Tk_CAMB}) at $z=2.2$, 3.4 and 4.6 of the best-guess, i.e., CDM, model (top) and, for visualization purposes, of a simulation assuming a 500~eV DM-particle mass (middle). Panels are $8\:h^{-1}~\rm Mpc$ across. {\bf Bottom:} Ratio of the WDM to the CDM Ly-$\alpha$ transmitted flux power spectra at redshifts $z = 2.2, 3.4$ and $4.6$, normalized to identical $\sigma_8$, for our $m_X = 2.5$~keV (left) and $5$~keV (right) grid values. Line thickness encodes the statistical simulation uncertainty.}
\label{fig:TkFlux}
\end{center}
\end{figure}
\section{Introduction}\label{sec:intro}
Given a nonincreasing null sequence $T=(T_{j})_{j\ge 1}$ of nonnegative random variables, the mapping $f\mapsto\Erw\prod_{j\ge 1}f(tT_{j})$ for suitable functions $f:\R_{\scriptscriptstyle\geqslant}\to\R$ with $\R_{\scriptscriptstyle\geqslant}=[0,\infty)$ is called the smoothing transform and any $f$ satisfying
\begin{equation}\label{eq:functional FPE}
f(t)\ =\ \Erw\Bigg(\prod_{j\ge 1}f(tT_{j})\Bigg)
\end{equation}
a fixed point of this mapping. The problem of identifying all fixed points within certain function classes, here Laplace transforms of probability measures on $\R_{\scriptscriptstyle\geqslant}$ or, more generally, survival functions of nonnegative random variables, has been dealt with in a host of articles such as \cite{DurLig:83,Liu:98,BigKyp:05}, with most general results obtained in \cite{AlsBigMei:12}. The last reference should also be consulted for a more detailed account of the earlier literature and for further background information.
\vspace{.1cm}
The existence of fixed points of the smoothing transform has been studied mostly under the following standard assumptions: First,
\begin{equation}
\Erw\Bigg(\sum_{j\ge 1}\mathbf{1}_{\{T_{j}>0\}}\Bigg)\ >\ 1,\label{eq:(T1)}
\end{equation}
which ensures that the product in \eqref{eq:functional FPE} remains nonempty with positive probability upon iterating the fixed point equation; second, the existence of $\alpha > 0$ such that
\begin{equation}
\Erw\Bigg(\sum_{j\ge 1} T_{j}^{\alpha}\Bigg)\,=\,1,\label{eqn:alphaDef}
\end{equation}
and third that
\begin{equation}
\Erw\left(\sum_{j\ge 1}T_{j}^{\alpha}\log T_{j}\right)\,\le\,0.\label{eq:derivative at alpha}
\end{equation}
We mention that Liu \cite[Thm.~1.1]{Liu:98} has shown by a truncation argument that nonnegative solutions to \eqref{eqn:smoothingTransform} may still exist if \eqref{eqn:alphaDef} is weakened to $\Erw(\sum_{j\ge 1}T_{j}^{\beta})\le 1$ for some $\beta\in [0,1]$.
\vspace{.1cm}
The aim of this article is to introduce a new and simple method that allows the characterization of all fixed points of the smoothing transform. But rather than doing so under most general conditions as in \cite{AlsBigMei:12}, we impose some extra integrability conditions in addition to \eqref{eq:derivative at alpha} so as to simplify and streamline the presentation of this method.
\vspace{.1cm}
If $f$ is the Laplace transform of a probability law $\nu$ on $\R_{\scriptscriptstyle\geqslant}$, then it solves Eq.~\eqref{eq:functional FPE} iff $\nu$ is a distributional fixed point of the (homogeneous) smoothing transform $\mathbb{S}$ which maps $\nu$ to the law of the random variable $\sum_{j\ge 1}T_{j}X_{j}$, where $X,X_{1},X_{2},\ldots$ denote i.i.d.~random variables with common law $\nu$ and independent of $T$. Hence, Eq.~\eqref{eq:functional FPE} corresponds to $\mathbb{S}(\nu)=\nu$ or, equivalently,
\begin{equation}\label{eqn:smoothingTransform}
X\ \stackrel{d}{=}\ \sum_{j\ge 1}T_{j}X_{j}
\end{equation}
where $\stackrel{d}{=}$ means equality in distribution.
\vspace{.1cm}
In the case when $f$ is the (left continuous) survival function of a probability distribution $\nu$ on $\R_{\scriptscriptstyle\geqslant}$, viz. $f(t)=\nu([t,\infty))$ for $t\ge 0$, the fixed-point property \eqref{eq:functional FPE} corresponds to the distributional fixed-point equation
\begin{equation}\label{eq:SFPE max transform}
X\ \stackrel{d}{=}\ \inf\{X_{j}/T_{j}:j\ge 1\text{ and }T_{j}>0\}
\end{equation}
with $X,X_{1},X_{2},\ldots$ as before. Here the infimum over the empty set is defined to be $\infty$.
\vspace{.1cm}
As in \cite{AlsBigMei:12}, let $\mathcal{S}(\mathcal{M})$ denote the set of solutions to Eq.~\eqref{eq:functional FPE} within the class
\begin{align*}
&\mathcal{M}\ =\ \left\{f:\R_{\scriptscriptstyle\geqslant}\to [0,1]:\,f\text{ is nonincreasing and left continuous},\right.\\
&\hspace{3cm}\left. f(0)=f(0+)=1\text{ and }0<f(t)<1\text{ for some }t>0\right\}
\end{align*}
which comprises all survival functions on $\R_{\scriptscriptstyle\geqslant}$ as well as its subclass $\mathcal{L}$ of Laplace transforms of probability measures on $\R_{\scriptscriptstyle\geqslant}$, ruling out only the trivial solutions $f\equiv 1$ and $f=\mathbf{1}_{\{0\}}+q\,\mathbf{1}_{\R_{\scriptscriptstyle >}}$, where $q$ denotes the extinction probability of the associated branching random walk (\textsf{BRW}) described below. Note that the last fact entails $\mathcal{S}(\mathcal{L})\subset\mathcal{S}(\mathcal{M})$. In order to determine $\mathcal{S}(\mathcal{M})$ (and thus $\mathcal{S}(\mathcal{L})$), which has already been done in \cite{AlsBigMei:12}, we provide here a new approach that works under relaxed conditions on the random sequence $T$ and also considerably simplifies some of the key arguments used in \cite{AlsBigMei:12}. This novel approach consists of three steps that will be outlined further below, after the introduction of some necessary notation, further background information and the most important assumptions.
\vspace{.1cm}
It is well-known that the fixed points of the smoothing transform are intimately related to certain martingale limits of an associated \textsf{BRW}\ that we describe next. Let $\mathbb{V}= \bigcup_{n\ge 0}\mathbb{N}^{n}$ be the Ulam-Harris tree of finite integer words, with the convention that $\mathbb{N}^{0}=\{\varnothing\}$ is the set containing only the empty word (the root of the tree). As is common, we use $v_{1}\ldots v_{n}$ as shorthand for $v=(v_{1},\ldots,v_{n})$, write $|v|=n$ for its length, and $uv$ for the concatenation of two vertices $u,v\in\mathbb{V}$. The restriction of $v$ to its first $j$ coordinates, that is, its ancestor at level $j$, is denoted $v(j)=v_{1}\ldots v_{j}$. We set $\partial \mathbb{V} = \mathbb{N}^{\mathbb{N}}$, which represents the boundary of the tree $\mathbb{V}$.
\vspace{.1cm}
Now let $(T_{j}^{v})_{j\ge 1}$ be i.i.d.~copies of $T$ for any $v\in\mathbb{V}$ and define the multiplicative \textsf{BRW}\ (also called weighted branching process) as the random map
$$ L:\mathbb{V}\to \R_{\scriptscriptstyle\geqslant},\quad v\ \mapsto\ L(v)\,:=\,\prod_{j=1}^{|v|}T^{v(j-1)}_{v_{j}}. $$
Note that Assumption \eqref{eq:(T1)} entails that $L$ forms a supercritical \textsf{BRW}\ and that with positive probability, for all $n\in\mathbb{N}$, there exists a vertex $v\in\mathbb{V}_{n}:=\{u:|u|=n\}$ such that $L(v)>0$. This event is called the survival set of the \textsf{BRW}. Thinking of $T_{j}^{v}$ as a weight attached to the edge connecting vertex $v$ with its child $vj$, we see that $L(v)$ equals the total weight of the unique shortest path from the root $\varnothing$ (with $L(\varnothing):=1$) to $v$ obtained by multiplying the edge weights along this path. The natural filtration of the \textsf{BRW}, reflecting its genealogical structure, is defined by
$$ \mathcal{F}_{n} = \sigma\big(L(v),\,|v|\le n\big),\quad n\in\mathbb{N}_{0}. $$
There is a deep and meaningful relationship between the fixed points of the smoothing transform and the \textsf{BRW}\ just introduced. This will be further explained in Section~\ref{sec:measure}.
\vspace{.1cm}
A second fundamental assumption for the existence of nontrivial solutions to \eqref{eq:functional FPE}, though not necessary as shown in \cite{Liu:98} and already mentioned, is the existence of a minimal positive $\alpha$, called \emph{characteristic exponent of $T$} in \cite{AlsBigMei:12}, such that \eqref{eqn:alphaDef} holds. Then these solutions can be expressed in terms of an associated martingale limit. Namely, by the branching property of the \textsf{BRW}, the process
\begin{equation}\label{eq:Biggins martingale}
W_{n}\ :=\ \sum_{|v|=n} L(v)^{\alpha},\quad n\ge 0
\end{equation}
constitutes a nonnegative martingale. It was shown by Biggins \cite{Biggins:77} (see also \cite{Lyons:97} for a simpler proof of his result) that $W_{n}$ converges a.s.~and in $L^{1}$ to a nondegenerate limit $W$ provided that, additionally,
\begin{gather}\label{eqn1:regular case}
\Erw\Bigg(\sum_{j\ge 1}T_{j}^{\alpha}\log T_{j}\Bigg)\,<\,0
\shortintertext{and}
\Erw\left(W_{1} \log W_{1}\right)\,<\,\infty\label{eqn2:regular case}
\end{gather}
hold. Alsmeyer and Iksanov \cite{AlsIks:09} further proved that these conditions are indeed necessary and sufficient whenever $\Erw\big(\sum_{j\ge 1}T_{j}^{\alpha}\log T_{j}\big)$ is well-defined. The law of $W$ forms a solution to the SFPE \eqref{eqn:smoothingTransform} with weight sequence given by $T^{\alpha}=(T_{j}^{\alpha})_{j\ge 1}$.
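To make this convergence concrete, consider the following toy example (ours, chosen only for illustration): binary branching with $T_{j}=\mathrm{e}^{-E_{j}}$, where $E_{1},E_{2}$ are i.i.d.~standard exponential. Then $\alpha=1$ satisfies \eqref{eqn:alphaDef}, and \eqref{eqn1:regular case} holds since $\Erw\big(\sum_{j}T_{j}\log T_{j}\big)=-\tfrac12$. A short simulation of $(W_{n})_{n\ge 0}$ along one realization of the \textsf{BRW}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy regular-case model: two children per vertex, each with weight
# T_j = exp(-E_j), E_j ~ Exp(1).  With alpha = 1, E[T_1 + T_2] = 1 and
# E[T_1 log T_1 + T_2 log T_2] = -1/2 < 0, so W_n converges a.s. and
# in L^1 to a nondegenerate limit W with E[W] = 1.

alpha, n_gen = 1.0, 14
weights = np.array([1.0])               # L(v) for the current generation
for n in range(1, n_gen + 1):
    parents = np.repeat(weights, 2)     # two children per vertex
    weights = parents * np.exp(-rng.exponential(1.0, size=parents.size))
    print(f"n = {n:2d}   W_n = {np.sum(weights**alpha):.4f}")
\end{verbatim}
In one run, $W_{n}$ settles around its (random) limit $W$ after a few generations, while individual weights tend to $0$.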
\vspace{.2cm}
Notice that Condition \eqref{eqn1:regular case} is more restrictive than Condition \eqref{eq:derivative at alpha}, for it requires $\Erw(\sum_{j\ge 1}T_{j}^{\alpha}\log T_{j})$ to be strictly negative. We call this situation the \emph{regular case}, as opposed to the more difficult \emph{boundary case} that will be discussed further below. Assuming \eqref{eqn1:regular case} and \eqref{eqn2:regular case} (naturally besides \eqref{eq:(T1)} and \eqref{eqn:alphaDef}) allows the construction of a natural solution to the SFPE. Moreover, we will see that under one more integrability condition, namely
\begin{equation}\label{eqn:finitemean}
\Erw\Bigg(\sum_{j\ge 1}T_{j}^{\alpha}\log T_{j}\Bigg)\,>\,-\infty,
\end{equation}
the set $\mathcal{S}(\mathcal{M})$, and thus all fixed points of Eqs.~\eqref{eqn:smoothingTransform} and \eqref{eq:SFPE max transform}, can be determined quite easily. The structure of the fixed points depends on the lattice-type of $T$. As in \cite{AlsBigMei:12}, $T$ is called geometric with span $r$ or $r$-geometric if $r>1$ is the maximal number such that
\begin{equation}\label{eqn:lattice}
\Prob\left(T_{j} \in\{r^{n}:n\in\mathbb{Z}\}\text{ for all }j\ge 1\right)\,=\,1,
\end{equation}
and nongeometric if no such $r$ exists. A function $h:\R_{\scriptscriptstyle\geqslant}\to\R_{\scriptscriptstyle >}$ is called multiplicatively $r$-periodic if $h(rt)=h(t)$ for all $t\ge 0$. Given $r>1$, let $\mathcal{H}_{r}$ denote the set of all such functions such that $t\mapsto h(t)t^{\alpha}$ is nondecreasing for $\alpha$ satisfying \eqref{eqn:alphaDef}. Let also $\mathcal{H}_{1}$ be the set of positive constant functions on $\R_{\scriptscriptstyle\geqslant}$. In order to be able to describe $\mathcal{S}(\mathcal{L})$, we put $\cP_{1}=\mathcal{H}_{1}$ and denote by $\cP_{r}$ for $r>1$ the set of $h\in\mathcal{H}_{r}$ such that $t^{\alpha}h(t)$ has a completely monotone derivative. When $\alpha=1$, the latter requirement forces $h$ to be constant, thus $\cP_{r}=\cP_{1}$. These classes were first introduced in \cite{DurLig:83}.
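For illustration (an example we add here, not taken from \cite{DurLig:83}): in the $r$-geometric case with $r>1$, a nonconstant element of $\mathcal{H}_{r}$ is given for $t>0$ by
$$ h(t)\ =\ \exp\Big(\varepsilon\cos\big(2\pi\log t/\log r\big)\Big),\qquad 0<\varepsilon\le\frac{\alpha\log r}{2\pi}, $$
since $h(rt)=h(t)$ and
$$ \frac{d}{dt}\,t^{\alpha}h(t)\ =\ t^{\alpha-1}h(t)\bigg(\alpha-\frac{2\pi\varepsilon}{\log r}\,\sin\big(2\pi\log t/\log r\big)\bigg)\ \ge\ 0. $$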
\begin{Theorem}\label{thm:mainNonBoundary}
Let $T$ satisfy \eqref{eq:(T1)}, \eqref{eqn:alphaDef}, \eqref{eqn1:regular case}, \eqref{eqn2:regular case}, \eqref{eqn:finitemean} and $r\ge 1$ denote its span. Then the elements of $\mathcal{S}(\mathcal{M})$ are the functions
$$ f(t)\ =\ \Erw(\exp(-h(t)t^{\alpha}W)),\quad t\ge 0 $$
with left continuous $h\in\mathcal{H}_{r}$. Moreover, $\mathcal{S}(\mathcal{L})$ is nonempty only if $0<\alpha\le 1$ in which case it consists of all $f\in\mathcal{S}(\mathcal{M})$ with $h\in\cP_{r}$.
\end{Theorem}
This result was first derived in \cite{AlsBigMei:12} by showing first that $t^{-1}(1-\Erw e^{-tW})$ is slowly varying at $t=0$ (Thm.~3.1) and then, given any $f\in\mathcal{S}(\mathcal{M})$, the same property for $(1-f(t))/t^{\alpha}h(t)$ for a suitable $h\in\mathcal{H}_{r}$ (Thms. 3.2 and 3.3). Our proof is simpler and does not rely on these properties.
\begin{Rem}\rm
If $\alpha=1$, then the only fixed point of Eq.~\eqref{eqn:smoothingTransform}, modulo positive multiplicative constants, is given by the limit $W$ of the Biggins martingale defined in \eqref{eq:Biggins martingale}. This fixed point is \emph{endogenous} in the sense of Aldous and Bandyopadhyay \cite{AldBan:05} which means that $W$ can be constructed as a measurable function of the \textsf{BRW}\ or, equivalently, that it is $\mathcal{F}_{\infty}$-measurable, where $\mathcal{F}_{\infty}:=\sigma(\mathcal{F}_{n},\,n\ge 0)$. If $\alpha < 1$, there are no endogenous fixed points.
\end{Rem}
Nontrivial solutions to Eq.~\eqref{eq:functional FPE} also exist in the following so-called boundary case when
\begin{equation}\label{eqn:boundaryCase}
\Erw\Bigg(\sum_{j\ge 1}T_{j}^{\alpha}\log T_{j}\Bigg) = 0,
\end{equation}
holds in the place of \eqref{eqn1:regular case}. In this case, the martingale limit $W$ is a.s.~zero. However, assuming \eqref{eqn:boundaryCase}, the process
\begin{equation}\label{eqn:derivativeMartingale}
Z_{n}\ =\ \sum_{|v|=n}L(v)^{\alpha}(-\log L(v)), \quad n \ge 0
\end{equation}
is also a martingale, known as the derivative martingale. A\"{\i}d\'ekon \cite{Aidekon:13} determined sufficient conditions, which Chen \cite{Chen:15} showed to be necessary as well, for $Z_{n}$ to converge to a nonnegative and nondegenerate limit. More precisely, if
\begin{equation}\label{eqn:ncsDerivative}
\Erw\Bigg(\sum_{j\ge 1}T_{j}^{\alpha}\log^{2}T_{j}\Bigg) \in (0,\infty), \quad \Erw W_{1}\log^{2}W_{1}\,+\,\Erw\widetilde{X}\log\widetilde{X}\ <\ \infty,
\end{equation}
where $\widetilde{X}:=\sum_{j\ge 1}T_{j}^{\alpha}\log_{+}\!T_{j}$, then
\begin{equation}\label{eqn:derivativeLimit}
Z\, :=\, \lim_{n\to\infty} Z_{n} \quad \text{a.s.}
\end{equation}
exists and is almost surely positive on the survival set of the \textsf{BRW}.
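A toy boundary-case example (our own, for illustration only) is binary branching with $T_{j}=\mathrm{e}^{-(\mu+\sigma N_{j})}$ for i.i.d.~standard normal $N_{j}$: choosing $\alpha=1$, $\sigma^{2}=2\log 2$ and $\mu=\sigma^{2}$ makes both \eqref{eqn:alphaDef} and \eqref{eqn:boundaryCase} hold. The following sketch traces $W_{n}$ and $Z_{n}$ along one realization:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

# Toy boundary-case model: two children per vertex with weights
# T_j = exp(-(mu + sig*N_j)), N_j ~ N(0,1), alpha = 1,
# sig^2 = 2*log(2), mu = sig^2.  Then E[T_1 + T_2] = 1 and
# E[T_1 log T_1 + T_2 log T_2] = 0, so W_n -> 0 a.s. while the
# derivative martingale Z_n approaches a positive random limit.

sig2 = 2.0 * np.log(2.0)
logw = np.array([0.0])              # log L(v), current generation
for n in range(1, 17):
    parents = np.repeat(logw, 2)
    steps = sig2 + np.sqrt(sig2) * rng.normal(size=parents.size)
    logw = parents - steps
    W = np.exp(logw).sum()
    Z = np.sum(np.exp(logw) * (-logw))
    print(f"n = {n:2d}   W_n = {W:.4f}   Z_n = {Z:.4f}")
\end{verbatim}
In one run, $W_{n}$ drifts slowly to $0$ while $Z_{n}$ fluctuates around a positive value; the convergence of $Z_{n}$ is notoriously slow, so the output is only indicative.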
\vspace{.1cm}
Biggins and Kyprianou \cite{BigKyp:05} and Alsmeyer et al. \cite{AlsBigMei:12} proved analogs of Theorem~\ref{thm:mainNonBoundary} in the boundary case, where the limit $Z$ of the derivative martingale replaces the limit $W$ of the additive martingale. Their proofs required some exponential integrability condition, see (A) in \cite{BigKyp:05} and (A4b) in \cite{AlsBigMei:12}. Using the same techniques as in the proof of Theorem~\ref{thm:mainNonBoundary}, we can extend their results without any additional assumption other than those ensuring the nondegeneracy of the derivative martingale. This is the principal achievement of this work.
\begin{Theorem}\label{thm:mainDerivative}
Let $T$ satisfy \eqref{eq:(T1)}, \eqref{eqn:alphaDef}, \eqref{eqn:boundaryCase} and \eqref{eqn:ncsDerivative}. Then all assertions of Theorem~\ref{thm:mainNonBoundary} remain valid when replacing $W$ with $Z$ in the definition of the solutions $f$.
\end{Theorem}
Although the proofs of Theorems~\ref{thm:mainNonBoundary} and~\ref{thm:mainDerivative} need different estimates, they follow the same three-step scheme that we now outline (in the regular case) and believe to work also in more general situations. Given any solution $f\in\mathcal{S}(\mathcal{M})$, it is easily checked that, for each $t\ge 0$,
\begin{align*}
M_{n}(t):=\prod_{|v|=n}f(tL(v)),\quad n\ge 0
\end{align*}
constitutes a positive bounded product martingale whose limit
\begin{align*}
M(t)\ :=\ \lim_{n\to\infty}\prod_{|v|=n}f(tL(v))
\end{align*}
exists a.s.~and in $L^{1}$. These martingales are called in \cite{AlsBigMei:12} the \emph{disintegration} of the fixed point $f$.
\begin{description}\itemsep2pt
\item[1. Tameness:] Using the convergence of the disintegration along a sequence of stopping lines (see Section~\ref{sec:brw}), the first step is to show that any nondegenerate fixed point $f$ must satisfy
$$ \limsup_{t\to0} \frac{-\log f(t)}{t^{\alpha}}\, <\, \infty. $$
\item[2. Harmonic analysis:] This property enables us to derive that $-\log M(t)$ is an integrable random variable with
$$ F(t)\,:=\,\Erw(-\log M(t))\,=\,\Erw\Bigg(\sum_{|v|=1}F(tL(v))\Bigg)\,\le\,C t^{\alpha} $$
for all $t>0$ and suitable $C\in (0,\infty)$. The shown equality can be translated as follows: the function $G(x):=e^{\alpha x}F(e^{-x})$ defines a bounded harmonic function of a certain random walk associated with the \textsf{BRW}\ (see Section \ref{sec:brw}). By a Choquet-Deny-type lemma, we then deduce that $G$ is constant on the subgroup generated by the walk.
\item[3. Identification of $M(t)$:] It follows by the previous step that $F$ is of the form $F(t)=h(t)t^{\alpha}$ for some $h\in\mathcal{H}_{r}$, $r$ the span of $T$. To find the value of $M(t)$, we finally observe that
\begin{align*}
\log M(t)\ &=\ \lim_{n\to\infty}\Erw\left(\log M(t)|\mathcal{F}_{n}\right)\\
&=\ -\lim_{n\to\infty}\sum_{|v| = n}F(tL(v))\ =\ -h(t)t^{\alpha} W.
\end{align*}
This completes the proof of the main theorem as $f(t)=\Erw e^{\log M(t)}$.
\end{description}
In the boundary case, this three-step method requires the identification of a harmonic function of at most linear growth for a killed random walk. This is done in Proposition~\ref{prop:uniqueness H(x)}, where we prove that there is only one such function modulo scalars, namely the renewal function of a random walk. This result, which may also be of independent interest, forms a generalization of a result by Spitzer \cite[Thm.~E3, p.~332]{Spitzer:76}. It shows that the Martin boundary of a centered random walk with finite variance and conditioned to stay positive reduces to a single point.
\vspace{.1cm}
We devote the next section to some classical \textsf{BRW}\ tools and then prove Theorem~\ref{thm:mainNonBoundary} in Section~\ref{sec:nonBoundary}. Before turning to the more difficult proof of Theorem~\ref{thm:mainDerivative} in Section~\ref{sec:derivative}, we need to show in Section~\ref{sec:RW negative halfline} a Choquet-Deny-type result asserting that any right-continuous and at most linearly growing harmonic function of a centered random walk killed upon entering $\R_{\scriptscriptstyle\geqslant}$ equals the Tanaka solution (see \eqref{eq:Tanaka function}) up to a constant, or periodic function in the lattice case. Finally, in Section~\ref{sec:measure} we briefly describe a one-to-one connection between the solutions of \eqref{eqn:smoothingTransform} and certain fractal random measures on the boundary of the weighted tree related to the \textsf{BRW}.
\section{Preliminary results for the classical branching random walk}
\label{sec:brw}
This section collects some well-known tools in the study of \textsf{BRW}'s that will be needed later on, namely the many-to-one lemma and some facts about stopping lines.
The many-to-one lemma is a widely known result, which can be traced back at least to Peyri\`ere \cite{Peyriere:74} and Kahane and Peyri\`ere \cite{KahanePey:76}. It links additive moments of the \textsf{BRW}\ to random walk estimates. Consider a zero-delayed random walk $(S_{n})_{n\ge 0}$ with increment distribution specified as
\begin{equation}\label{eqn:defManytooneRW}
\Erw g(S_{1})\ =\ \Erw\Bigg(\sum_{j\ge 1}T_{j}^{\alpha}g(-\log T_{j})\Bigg).
\end{equation}
for nonnegative measurable $g$.
\begin{Lemma}[Many-to-one lemma]\label{lem:manytoone}
For all $n\ge 0$ and all nonnegative measurable functions $g$, we have
\[\Erw g(S_{1},\ldots,S_{n})\ =\ \Erw\Bigg(\sum_{|v|=n} L(v)^{\alpha}g(-\log L(v(j)), j \le n)\Bigg).\]
\end{Lemma}
The result can be thought of as a first step towards the spinal decomposition due to Lyons \cite{Lyons:97} that describes the law of the \textsf{BRW}\ when size-biased by the martingale $(W_{n})_{n\ge 0}$ as a \textsf{BRW}\ with a designated path, called spine, along which offspring particles have displacement law defined by \eqref{eqn:defManytooneRW}.
As moments of functionals of the weights $tL(v)$ will often be considered hereafter, it is convenient to define $\Prob_{t}$ as the law of the above random walk $(S_{n})_{n \ge 0}$ when its delay equals $S_{0} = -\log t$. In other words, the random walk starts from $-\log t$ under $\Prob_{t}$, and its laws under $\Prob$ and $\Prob_{1}$ coincide. With this notation, Lemma~\ref{lem:manytoone} can be rewritten as
\begin{equation}
\label{eqn:startingPointmanytoone}
\Erw\Bigg(\sum_{|v|=n} L(v)^{\alpha}g( tL(v(j)), j \le n)\Bigg) \ =\ \Erw_t g(e^{-S_{1}},\ldots,e^{-S_{n}}).
\end{equation}
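As a sanity check, the identity \eqref{eqn:startingPointmanytoone} can be verified numerically; in the toy binary example above, the increment law \eqref{eqn:defManytooneRW} works out to be the exponential law with rate $2$. The following sketch (our illustration, with an arbitrary bounded test function) compares both sides for $n=2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Check of the many-to-one lemma for n = 2 in the toy binary model:
# T_j = exp(-E_j) with E_j ~ Exp(1) and alpha = 1, for which the
# increment law of S_1 works out to be Exp(2).

g = lambda s1, s2: np.exp(-0.3 * s1 - 0.5 * s2)  # bounded test function
m = 5 * 10**5

# Left-hand side: E g(S_1, S_2) with i.i.d. Exp(2) increments.
s1 = rng.exponential(0.5, size=m)
s2 = s1 + rng.exponential(0.5, size=m)
lhs = np.mean(g(s1, s2))

# Right-hand side: E sum_{|v|=2} L(v) g(-log L(v(1)), -log L(v)).
e1 = rng.exponential(1.0, size=(m, 2))           # edges below the root
rhs = 0.0
for j in range(2):                               # children of the root
    for k in range(2):                           # their own children
        x1 = e1[:, j]
        x2 = x1 + rng.exponential(1.0, size=m)
        rhs += np.mean(np.exp(-x2) * g(x1, x2))
print(lhs, rhs)   # the two estimates agree up to Monte Carlo error
\end{verbatim}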
We now recall some facts about stopping lines, in fact so-called very simple stopping lines, a name coined by Biggins and Kyprianou \cite[p.~557]{BigKyp:04}. A line is a set $\mathcal{L}\subset\mathbb{V}$ satisfying the two following assumptions:
\begin{equation}\label{eqn:line}
\begin{split}
&\hspace{.5cm}\forall u,v \in \mathcal{L}: u\preceq v\ \Rightarrow\ u=v,\\
&\forall v\in \partial\mathbb{V}: v(n)\in\mathcal{L}\text{ for some }n\in\mathbb{N}.
\end{split}
\end{equation}
In other words, a line is a minimal set separating the root $\varnothing$ from the boundary $\partial\mathbb{V}$. In \textsf{BRW}'s, stopping lines take the role of stopping times for random walks. In particular, a very simple stopping line is a random line such that for all $v\in\mathbb{V}$,
\begin{equation}\label{eqn:stoppingLine}
\{v\in\mathcal{L}\}\ \in\ \sigma(L(v(j)),j\le |v|).
\end{equation}
In other words, whether or not a vertex $v$ belongs to the line depends only on the weights of the tree along the unique path from the root to $v$.
\vspace{.1cm}
In this article, only the following first passage lines will be of interest. For all $a>0$, we set
\begin{equation}
\Upsilon_{a}\ :=\ \left\{v\in\mathbb{V}:L(v(j))\ge a\text{ for }j<|v|\text{ and }L(v)<a\right\}.
\end{equation}
Note that $\lim_{n\to\infty}\sup_{|v|=n}L(v)=0$ a.s. under the assumptions of our two theorems. Therefore, $\Upsilon_{a}$ is a well-defined very simple stopping line for any $a>0$ and consists of all particles entering the interval $[0,a)$ for the first time. Biggins and Kyprianou \cite{BigKyp:04} proved that a theorem similar to the optional stopping theorem holds for the additive martingale of the \textsf{BRW}. We state and use here a simplified version of their result. Further defining
\begin{align}\label{eq:def first passage filtration}
\mathcal{G}_{a}\ :=\ \sigma\left(L(v(j)),\,j\le |v|,\,v\in\Upsilon_{a}\right),
\end{align}
note that $(\mathcal{G}_{e^{-t}})_{t\ge 0}$ forms a filtration for the \textsf{BRW}.
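In the toy binary example, the line $\Upsilon_{a}$ is easy to generate by a depth-first exploration that freezes each vertex whose weight first drops below $a$; the sketch below (ours) illustrates that $\sum_{v\in\Upsilon_{a}}L(v)^{\alpha}$ has mean $1$, consistent with the conditional expectation $\Erw(W\,|\,\mathcal{G}_{a})=\sum_{v\in\Upsilon_{a}}L(v)^{\alpha}$ derived in Section~\ref{sec:nonBoundary}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# First-passage line Upsilon_a in the toy binary model (alpha = 1):
# explore the tree depth-first and freeze each vertex whose weight
# first drops below a.  Exploration terminates a.s. since the weight
# along every path tends to 0.

def line_weights(a):
    line, stack = [], [1.0]
    while stack:
        w = stack.pop()
        for _ in range(2):                  # two children per vertex
            child = w * np.exp(-rng.exponential(1.0))
            (line if child < a else stack).append(child)
    return np.array(line)

for a in (0.5, 0.1, 0.02):
    lv = line_weights(a)                    # a fresh tree per call
    print(a, lv.size, lv.sum())             # lv.sum() has mean 1
\end{verbatim}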
The following result is a version of the many-to-one lemma along the stopping lines $\Upsilon_{a}$.
\begin{Lemma}\label{lem:manytooneStopped}
For all $a \in (0,1]$ and all measurable nonnegative functions $g$, we have
\begin{align*}
\Erw\Bigg(\sum_{v\in\Upsilon_{a}}L(v)^{\alpha}g(L(v(j)),j \le |v|)\Bigg)\ =\ \Erw g\big(e^{-S_j},j \le\sigma(-\log a)\big),
\end{align*}
where $\sigma(b):=\inf\{n\ge 0: S_{n}>b\}$ and $(S_{n})_{n\ge 0}$ equals the random walk with increment law defined by \eqref{eqn:defManytooneRW}.
\end{Lemma}
\begin{proof}
The result is obtained by decomposing the stopping line generation-wise and then applying the many-to-one lemma, viz.
\begin{align*}
&\Erw\left(\sum_{v\in \Upsilon_{a}} L(v)^{\alpha}g( L(v(j)),j\le |v|)\right)\\
&=\ \sum_{n\ge 0} \Erw\Bigg(\sum_{|v|=n}\mathbf{1}_{\{v\in\Upsilon_{a}\}}L(v)^{\alpha} g( L(v(j)),j \le n)\Bigg)\\
&=\ \sum_{n\ge 0}\Erw g(e^{-S_{1}},\ldots,e^{-S_{n}})\mathbf{1}_{\{\sigma(-\log a)=n\}}\\
&=\ \Erw g\big(e^{-S_{j}}, j \le\sigma(-\log a)\big).
\end{align*}
This completes the proof.
\end{proof}
\section{The regular case: proof of Theorem~\ref{thm:mainNonBoundary}}\label{sec:nonBoundary}
Given any solution $f\in\mathcal{S}(\mathcal{M})$, recall that, for each $t\ge 0$, the disintegration
\begin{align*}
M_{n}(t):=\prod_{|v|=n}f(tL(v)),\quad n\ge 0
\end{align*}
constitutes a positive bounded product martingale whose limit $M(t)$ exists a.s.~and in $L^{1}$. We start by proving the tameness of any solution $f\in\mathcal{S}(\mathcal{M})$.
\begin{Lemma}\label{lem:tameness}
Under the assumptions of Theorem~\ref{thm:mainNonBoundary}, any $f\in\mathcal{S}(\mathcal{M})$ satisfies
\begin{equation}\label{eqn:tameness}
\sup_{0<t\le 1}\frac{-\log f(t)}{t^{\alpha}}\ \le\ C
\end{equation}
for some $0<C<\infty$.
\end{Lemma}
\begin{proof}
Assuming $\limsup_{t\to 0}\frac{-\log f(t)}{t^{\alpha}} = \infty$, we will derive that for all $t > 0$ we have $f(t)\le\Prob(W=0)$. Since $\Prob(W=0)<1$, this contradicts the property $f(0+)=1$.
By the stated assumption, there exists a decreasing null sequence $(t_{n})_{n\ge 1}$ such that
\begin{align*}
\frac{-\log f(t_{n})}{t_{n}^{\alpha}}\ \ge\ n^{2}\quad\text{for all }n\ge 1.
\end{align*}
Setting $c_{n}:=n^{1/\alpha}$ and observing that $t \mapsto -\log f(t)$ and $t \mapsto t^{\alpha}$ are both nonnegative and nondecreasing functions, we find that
\begin{align*}
\frac{-\log f(s)}{s^{\alpha}}\ \ge\ \frac{-\log f(t_{n})}{(c_{n}t_{n})^{\alpha}}\ \ge\ n\quad\text{for all }s \in [t_{n}, c_{n} t_{n}].
\end{align*}
Therefore, we can define decreasing null sequences $(\vartheta_{n})_{n\ge 1}$ and $(\rho_{n})_{n\ge 1}$ such that
\begin{equation}\label{eqn:extractedSequence}
\frac{-\log f(s)}{s^{\alpha}}\ \ge\ n\quad\text{for all }s\in [\rho_{n}\vartheta_{n}, \vartheta_{n}].
\end{equation}
\vspace{.1cm}
We now bound the conditional mean of $M(t)$ given $\mathcal{G}_{\vartheta_{n}/t}$, where $(\mathcal{G}_{\vartheta_{n}/t})_{n\ge 1}$ is the first passage filtration defined in \eqref{eq:def first passage filtration}. By dominated convergence,
\begin{align*}
\Erw\left(M(t)\middle|\mathcal{G}_{\vartheta_{n}/t} \right)\ =\ \lim_{m\to\infty} \Erw\left( M_{m}(t)\middle|\mathcal{G}_{\vartheta_{n}/t}\right),
\end{align*}
and the branching property of the \textsf{BRW}\ implies
\begin{align*}
\Erw\left(M_{m}(t) \middle|\mathcal{G}_{\vartheta_{n}/t}\right)\ = \exp\Bigg(\sum_{\substack{v\in\Upsilon_{\vartheta_{n}/t}\\|v| \le m}} \log f(tL(v))\,+\,\sum_{\substack{|v|=m\\ tL_{*}(v)\ge\vartheta_{n}}}\log f(tL(v))\Bigg),
\end{align*}
where $L_{*}(v):=\min_{0\le k\le|v|}L(v(k))$. Hence, using $\sup_{v\in \Upsilon_{\vartheta_{n}/t}}|v|<\infty$ a.s., we infer upon letting $m\to\infty$
\begin{align}
\Erw\left(M(t)\middle|\mathcal{G}_{\vartheta_{n}/t} \right)\ &=\ \exp\left(\sum_{v\in \Upsilon_{\vartheta_{n}/t}} \log f(tL(v))\right)\nonumber\\
&\le\ \exp\Bigg(-n\sum_{v\in\Upsilon_{\vartheta_{n}/t}}(tL(v))^{\alpha}\mathbf{1}_{\{tL(v)\ge\rho_{n}\vartheta_{n}\}}\Bigg),\label{eqn:firststep}
\end{align}
where we have bounded $-\log f(tL(v))$ by $0$ if $tL(v)\not\in [\rho_{n}\vartheta_{n},\vartheta_{n}]$, and with the help of \eqref{eqn:extractedSequence} otherwise.
\vspace{.1cm}
On the other hand, by another use of the branching property of the \textsf{BRW}, we obtain, for all $a>0$ and $m\in\mathbb{N}$,
$$ \Erw\left(W_m\middle|\mathcal{G}_{a} \right)\ =\ \sum_{\substack{v\in\Upsilon_{a}\\|v| \le m}} L(v)^{\alpha} + \sum_{\substack{|v|=m\\ L_{*}(v) \ge a}} L(v)^{\alpha}\quad\text{ a.s.} $$
and thus $\Erw\left(W\middle|\mathcal{G}_{a} \right)\ =\ \sum_{v\in\Upsilon_{a}} L(v)^{\alpha}$ a.s.~as $m\to\infty$. Now let $a\to 0$ and use $\mathcal{F}_\infty = \bigvee_{a > 0}\mathcal{G}_{a}$ to infer
\begin{equation}\label{eqn:1}
\lim_{a\to0}\sum_{v\in\Upsilon_{a}} L(v)^{\alpha}\ =\ W \text{ a.s.}
\end{equation}
Next, Lemma~\ref{lem:manytooneStopped} provides us with
\begin{align*}
\Erw\left(\sum_{v\in\Upsilon_{\vartheta_{n}/t}}(tL(v))^{\alpha}\mathbf{1}_{\{tL(v)<\rho_{n}\vartheta_{n}\}}\right)\,=\,t^{\alpha}\,\Prob\bigg(S_{\sigma(-\log(\vartheta_{n}/t))}> -\log\Big(\frac{\rho_{n}\vartheta_{n}}{t}\Big)\hspace{-4pt}\bigg),
\end{align*}
where $\sigma(a) = \inf\{ n \ge 0 : S_{n}>a\}$ should be recalled. Use \eqref{eqn1:regular case} and \eqref{eqn:finitemean} to infer
$$ \Erw S_{1}\ =\ -\,\Erw\Bigg(\sum_{j\ge 1}T_{j}^{\alpha}\log T_{j}\Bigg)\ \in\ (0,\infty). $$
As a consequence, the family $\{S_{\sigma(-\log s)}+\log s:0<s\le 1\}$ of overshoots of the random walk is tight (see e.g.~Gut \cite[Thm.~3.10.3]{Gut:09}) which in turn implies
\begin{align*}
\lim_{n\to\infty} \sum_{v\in\Upsilon_{\vartheta_{n}/t}}(tL(v))^{\alpha}\mathbf{1}_{\{tL(v)<\rho_{n}\vartheta_{n}\}}\ =\ 0 \quad \text{in probability}
\end{align*}
as $\rho_{n}\to0$. In combination with \eqref{eqn:1}, this further entails that
\begin{align*}
\lim_{n\to\infty} \sum_{v\in\Upsilon_{\vartheta_{n}/t}}(tL(v))^{\alpha}\mathbf{1}_{\{tL(v)\ge\rho_{n}\vartheta_{n}\}}\ =\ W \quad \text{in probability},
\end{align*}
and by combining the last conclusion with \eqref{eqn:firststep}, we obtain
\begin{align*}
\liminf_{n\to\infty}\Erw\big(M(t)\big|\mathcal{G}_{\vartheta_{n}/t}\big)\ \le\ \mathbf{1}_{\{W=0\}} \quad \text{a.s.}
\end{align*}
Finally using that $\left(\Erw(M(t)|\mathcal{G}_{\vartheta_{n}/t})\right)_{n\ge 0}$ forms a bounded martingale, we infer $M(t)\le\mathbf{1}_{\{W=0\}}$ a.s. for all $t>0$ and thereupon the announced contradiction $f(t)=\Erw M(t)\le\Prob(W=0)$ for all $t > 0$.
\end{proof}
\begin{Rem}\rm
In the above proof, Assumption \eqref{eqn:finitemean} is only needed for the conclusion that
\begin{align*}
\sum_{v\in\Upsilon_{\vartheta_{n}/t}}(tL(v))^{\alpha} \mathbf{1}_{\{tL(v)\ge\rho_{n}\vartheta_{n}\}}
\end{align*}
converges to a positive random variable with positive probability. We suspect that it can be replaced with a weaker assumption while keeping the assertions of Theorem~\ref{thm:mainNonBoundary}.
\end{Rem}
With the help of Lemma~\ref{lem:tameness}, we can now identify the function defined by
\begin{equation}\label{eqn:defineH}
F(t)\ :=\ \Erw\left(-\log M(t)\right)
\end{equation}
for a given $f\in\mathcal{S}(\mathcal{M})$. Doing so, we make use of the subsequent Choquet-Deny lemma, for convenience reformulated here in our setting, which identifies all bounded harmonic functions of the random walk.
\begin{Lemma}\label{lem:CD-lemma G}
Let $G : \R\to\R$ be a right-continuous bounded function satisfying
\begin{equation}\label{eq:CD-equation H}
G(x)\ =\ \Erw G(x+S_{1})
\end{equation}
for all $x\in\R$. Then $G$ is $d$-periodic if $(S_{n})_{n\ge 0}$ is $d$-arithmetic, and it is constant if $(S_{n})_{n\ge 0}$ is nonarithmetic.
\end{Lemma}
\begin{proof}
Note that, possibly upon adding a constant, we can assume $G$ to be nonnegative. We denote by $\nu$ the law of $-S_{1}$, and we let $\lambda$ denote the measure with density $G$ with respect to Lebesgue measure. Then \eqref{eq:CD-equation H} can be rewritten as
\[
\lambda = \lambda \ast \nu.
\]
The Choquet-Deny lemma \cite{Choquet+Deny:60} entails that, for each $a$ in the support of $\nu$, the measure $\lambda$ is $a$-periodic. In particular, $G(x+a)=G(x)$ for Lebesgue almost all $x \in \R$. Thus using the right-continuity of $G$, this equation in fact holds for all $x \in \R$ and $a$ in the support of $\nu$. The set of periods of a right-continuous function being a closed group, we deduce that $G$ is $d$-periodic if $(S_{n})_{n\ge 0}$ is $d$-arithmetic for some $d>0$, and that $G$ is constant otherwise.
\end{proof}
We now turn to the identification of the function $F$.
\begin{Lemma}\label{lem:identificationOfH}
Given the assumptions of Theorem~\ref{thm:mainNonBoundary}, let $f\in\mathcal{S}(\mathcal{M})$ with disintegration $(M_{n}(t))_{n\ge 0}$. Then there exists a function $h\in\mathcal{H}_{r}$, $r$ the span of $T$, such that
\begin{equation}\label{eq:form of F}
F(t)\ =\ h(t)t^{\alpha}
\end{equation}
for all $t\ge 0$.
\end{Lemma}
\begin{proof}
Since $M(t)=\lim_{n\to\infty}M_{n}(t)$ a.s., we see that
\begin{align*}
\lim_{n\to\infty}\sum_{|v|=n}-\log f(t L(v))\ =\ -\log M(t)\quad\text{a.s.}
\end{align*}
Moreover, by Lemma~\ref{lem:tameness}, there exists $C>0$ such that
\begin{align*}
\sum_{|v|=n}-\log f(t L(v))\ \le\ C t^{\alpha} W_{n}.
\end{align*}
for all $n$ so large that $\sup_{|v|=n}L(v)\le 1$. Hence, $0 \le-\log M(t)\le C t^{\alpha}W$ follows upon letting $n\to\infty$. Since $\Erw W=1$ under the assumptions of Theorem~\ref{thm:mainNonBoundary}, we infer
\begin{equation}\label{eqn:boundedFunction}
0\,\le\,F(t)\,\le\,C t^{\alpha},
\end{equation}
where $F$ is the function defined in \eqref{eqn:defineH} (for the given $f$). One can readily check that $F$ is nondecreasing and left continuous.
\vspace{.1cm}
Put $G(x):=e^{\alpha x}F(e^{-x})$ and use the branching property of the \textsf{BRW}\ to obtain, for any $m\in\mathbb{N}$,
\begin{equation}\label{eqn:inlaw}
M(t)\ =\ \prod_{|u|=m}M^{(u)}(tL(u)),
\end{equation}
where $M^{(u)}(s) = \lim_{n\to\infty}\prod_{|v|=n}f(s\frac{L(uv)}{L(u)})$ are i.i.d. copies of $M$ and independent of $\mathcal{F}_{m}$ for $u\in\mathbb{V}_{m}$. Equality in law is already enough to infer that
\begin{align*}
G(x)\ &=\ e^{\alpha x}\,\Erw\left(-\log M(e^{-x})\right)\ =\ e^{\alpha x}\,\Erw\Bigg(-\log \prod_{|v|=1} M^{(v)}(e^{-x}L(v))\Bigg)\\
&=\ \Erw\Bigg(\sum_{|v|=1} L(v)^{\alpha}\, G(x - \log L(v))\Bigg)\ =\ \Erw G(x + S_{1}),
\end{align*}
by the many-to-one lemma. Therefore, by \eqref{eqn:boundedFunction}, $G$ is a bounded, nonnegative and right continuous harmonic function of the random walk $(S_{n})_{n\ge 0}$, which is $\log r$-arithmetic if $r>1$ and nonarithmetic if $r=1$. It follows by Lemma~\ref{lem:CD-lemma G} that $G$ is $\log r$-periodic if $r>1$, and constant if $r=1$. Equivalently, \eqref{eq:form of F} holds with $h\in\mathcal{H}_{r}$ given by $h(t):=G(-\log t)$ for $t>0$.
\end{proof}
\begin{Rem}\rm
The previous proof has also shown that, if \eqref{eqn1:regular case} and \eqref{eqn2:regular case} fail and thus $W_{n}\to 0$ a.s., any solution $f\in\mathcal{S}(\mathcal{M})$ satisfying \eqref{eqn:tameness} must be trivial, i.e.~$f(t)=1$ for all $t\ge 0$. In particular, no nontrivial solution $f$ can satisfy \eqref{eqn:tameness} in the boundary case.
\end{Rem}
We are now ready to provide the proof of Theorem~\ref{thm:mainNonBoundary}.
\begin{proof}[Proof of Theorem~\ref{thm:mainNonBoundary}]
Given any $f\in\mathcal{S}(\mathcal{M})$, we denote by $M(t)$ its disintegration and put $F(t)=\Erw(-\log M(t))$. It follows from \eqref{eqn:inlaw} that
\begin{align*}
\Erw\left(-\log M(t)\middle|\mathcal{F}_{n}\right)\ =\ \sum_{|v|=n}F(tL(v))\ \quad \text{a.s.}
\end{align*}
for all $n\in\mathbb{N}$. By letting $n\to\infty$ and an appeal to Lemma~\ref{lem:identificationOfH}, we obtain
\begin{align*}
-\log M(t)\ =\ \lim_{n\to\infty}\sum_{|v|=n}F(tL(v))\ =\ h(t)t^{\alpha}\lim_{n\to\infty}W_{n}\ =\ h(t)t^{\alpha}W\quad\text{a.s.}
\end{align*}
for some $h\in\mathcal{H}_{r}$, $r$ the span of $T$, and then $f(t)=\Erw M(t)= \Erw e^{-h(t)t^{\alpha} W}$. If $f\in\mathcal{S}(\mathcal{L})$, we even infer $h\in\cP_{r}$ because $f$ is a Laplace transform.
\end{proof}
\section{Harmonic functions of random walks on the positive halfline}\label{sec:RW negative halfline}
We now turn to the proof of Theorem~\ref{thm:mainDerivative} and thus work under assumptions \eqref{eqn:boundaryCase} and \eqref{eqn:ncsDerivative}. In this case, instead of using the Choquet-Deny lemma, we need to identify harmonic functions of a centered random walk with finite variance, killed upon entering the nonpositive halfline. This is the content of the present section.
We recall that $(S_{n})_{n \ge 0}$ is the random walk associated with the \textsf{BRW}\ by the many-to-one lemma. Since $\Erw S_{1}=0$ by \eqref{eqn:boundaryCase} and $0<\Erw S_{1}^{2} < \infty$ by \eqref{eqn:ncsDerivative}, $(S_{n})_{n\ge 0}$ is a centered random walk with finite variance. A harmonic function $G$ of the walk, killed at the first time it leaves the positive halfline $\R_{\scriptscriptstyle >}$, is a function such that $G(x) = 0$ for $x \le 0$ and
\begin{equation}\label{eq:harmonic V(x)}
G(x)\ =\ \Erw G(x+S_{1})\mathbf{1}_{\{x+S_{1}>0\}}\ =\ \Erw_{x}G(S_{1})\mathbf{1}_{\{S_{1}>0\}}
\end{equation}
for all $x>0$.
Let us define
\begin{gather*}
\tau(a)\,:=\,\inf\{n\ge 0:S_{n}\le a\},\quad\tau\,:=\,\tau(0),
\shortintertext{and recall that}
\sigma(a)\,:=\,\inf\{n\ge 0:S_{n}>a\},\quad\sigma\,:=\,\sigma(0),
\end{gather*}
for $a\in\R$. Further, put $R_{a}:=S_{\sigma(a)}-a$, and let $(\tau_{n})_{n\ge 1}$ and $(\sigma_{n})_{n\ge 1}$ denote the sequences of weakly descending and strictly ascending ladder epochs, respectively. Note that $\Prob_{x}(\tau(a)\in\cdot)=\Prob_{0}(\tau(a-x)\in\cdot)$ for $a\le 0$ and $x\ge 0$ and recall that $\Prob=\Prob_{0}$.
Even without assuming finite variance, Tanaka \cite{Tanaka:89} obtained a solution of \eqref{eq:harmonic V(x)} defined by
\begin{equation}\label{eq:Tanaka function}
\widehat{H}(x)\ :=\ \Erw\left(\sum_{k=0}^{\sigma-1}\mathbf{1}_{(-x,0]}(S_{k})\right),\quad x>0.
\end{equation}
By the duality lemma, it also equals the renewal function of the weakly descending ladder heights $S_{n}^{*}=S_{\tau_{n}}$, $n\ge 1$, of the given walk (up to a reflection), viz.
\begin{equation*}
\widehat{H}(x)\ =\ \sum_{n\ge 0}\Prob(S_{n}^{*}>-x)\ =\ \sum_{n\ge 0}\Prob(\tau^{*}(-x)>n)\ =\ \Erw\tau^{*}(-x)
\end{equation*}
for $x>0$, where $\tau^{*}(a):=\inf\{n\ge 0:S_{n}^{*}\le a\}$. Now, if $\Erw|S_{1}^{*}|<\infty$, a sufficient condition being $\Erw S_{1}^{2}<\infty$, then Wald's identity further ensures
$$ \widehat{H}(x)\ =\ \frac{\Erw S_{\tau^{*}(-x)}^{*}}{\Erw S_{1}^{*}},\quad x>0, $$
and by finally observing $S_{\tau^{*}(-x)}^{*}=S_{\tau(-x)}$, we arrive at
\begin{equation}\label{eq:V(x)=ES_tau(-x)}
\widehat{H}(x)\ =\ \frac{\Erw S_{\tau(-x)}}{\Erw S_{1}^{*}}\ =\ \frac{\Erw_{x}S_{\tau}-x}{\Erw S_{1}^{*}}.
\end{equation}
In other words, if $0<\Erw S_{1}^{2}<\infty$, then $\widehat{H}(x)$ and $H(x):= x-\Erw_{x}S_{\tau}$ differ only by a multiplicative positive constant.
\vspace{.1cm}
An interesting aspect of this last observation is that, unlike $\widehat{H}$, the function $H$ is very easily shown to be harmonic. Namely, as $\Prob(\tau(-x)\ge 1)=1$ for $x>0$ and $\Erw S_{1}=0$, we infer by a standard renewal argument
\begin{align*}
H(x)\ =\ -\Erw S_{\tau(-x)}\ &=\ \int_{\R_{\scriptscriptstyle >}}-\Erw S_{\tau(-y)}\ \Prob_{x}(S_{1}\in dy)\ =\ \Erw_{x}H(S_{1})
\end{align*}
for all $x>0$ as required.
\vspace{.1cm}
A well-known result from renewal theory asserts that
$$ \frac{\Erw|S_{\tau^{*}(-y)}^{*}+y|}{y}\ \xrightarrow{y\to\infty}\ 0 $$
if $\Erw|S_{1}^{*}|<\infty$, see e.g.~\cite[Thm.~3.10.2]{Gut:09}, giving
\begin{equation}\label{eq:growth of H}
x\ \le\ H(x)\ \le\ x(1+o(1))\quad\text{as }x\to\infty.
\end{equation}
Moreover, we point out that, by definition, $H$ is right-continuous with left limits at each point.
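Since $H(x)=x-\Erw_{x}S_{\tau}$, the function can be estimated by plain Monte Carlo. The sketch below (our illustration, for a walk with standard normal increments) exhibits the linear growth \eqref{eq:growth of H}; note that $\tau$ is a.s.~finite but has infinite mean, so truncating at a maximal number of steps introduces a small bias.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo sketch of H(x) = x - E_x[S_tau] for a centered walk with
# standard normal increments.  Paths still alive after max_steps are
# counted with S_tau = 0; since undershoots are O(1) and the survival
# probability decays like 1/sqrt(n), the resulting bias is small.

def H_estimate(x, n_paths=50_000, max_steps=10_000):
    s = np.full(n_paths, float(x))
    alive = np.ones(n_paths, dtype=bool)
    stopped = np.zeros(n_paths)
    for _ in range(max_steps):
        if not alive.any():
            break
        s[alive] += rng.normal(size=int(alive.sum()))
        hit = alive & (s <= 0.0)
        stopped[hit] = s[hit]
        alive &= ~hit
    return x - stopped.mean()

for x in (0.5, 1.0, 2.0, 5.0):
    print(x, H_estimate(x))   # approximately x plus a constant
\end{verbatim}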
Our main result of this section is the following Choquet-Deny-type lemma. It states that any right-continuous function of at most linear growth and satisfying \eqref{eq:harmonic V(x)} equals $H$ up to multiplication by a constant, or $d$-periodic function if the walk is $d$-arithmetic.
\begin{Prop}\label{prop:uniqueness H(x)}
Given a nontrivial, centered random walk with lattice-span $d\ge 0$ and $\Erw S_{1}^{2}<\infty$, let $G : \R\to\R$ be a right-continuous function satisfying \eqref{eq:harmonic V(x)} and $\sup_{x>0} |G(x)|/(1+x) < \infty$. Then there exists a function $\kappa$, $d$-periodic if $d>0$ and constant if $d=0$, such that
\[
G(x)\,=\,\kappa(x) H(x)\quad\text{for all }x\in\R.
\]
\end{Prop}
For centered random walks on the integer lattice $\mathbb{Z}$, where \eqref{eq:harmonic V(x)} must only hold for $x\in\mathbb{Z}$, it was already shown by Spitzer \cite[Thm.~E3, p.~332]{Spitzer:76} that there is only one positive solution to \eqref{eq:harmonic V(x)} up to positive multiples (even without additional moment assumptions). More recent work by Doney \cite[Thm.~1]{Doney:98} also considers the case when the $\mathbb{Z}$-valued random walk has nonzero drift.
Before proving our result, we provide some useful estimates and begin with an extension of the harmonic property of $G$ at random times.
\begin{Lemma}\label{lem:harmonicStopped}
Under the assumptions of Proposition \ref{prop:uniqueness H(x)},
\[
G(x)\,=\,\Erw_{x} G(S_{\sigma(y)})\ind{\sigma(y)<\tau}
\]
holds for all $0<x<y$.
\end{Lemma}
\begin{proof}
By \eqref{eq:harmonic V(x)}, $(G(S_{\tau \wedge n}))_{n\ge 0}$ is a martingale under $\Prob_{x}$. Hence, the optional sampling theorem implies
\begin{align*}
G(x)\ &=\ \Erw_{x} G(S_{\sigma(y)\wedge \tau \wedge n})\\
&=\ \Erw_{x} G(S_{\sigma(y)}) \ind{\sigma(y)< \tau \wedge n}\,+\,\Erw_{x} G(S_{n}) \ind{n < \sigma(y) \wedge \tau}
\end{align*}
for all $0 < x<y$ and $n\in\mathbb{N}$, where the term involving $G(S_{\tau})$ vanishes because $G=0$ on the nonpositive halfline. As $n\to\infty$, we have
$$ \Erw_{x}G(S_{\sigma(y)}) \ind{\sigma(y)< \tau \wedge n}\ \to\ \Erw_{x} G(S_{\sigma(y)}) \ind{\sigma(y)< \tau} $$
by the monotone convergence theorem, and
$$ \Erw_{x}G(S_{n}) \ind{n < \sigma(y) \wedge \tau}\ \le\ C(y+1) \Prob_{x}(n < \sigma(y) \wedge \tau)\ \to\ 0. $$
This completes the proof.
\end{proof}
Next are some asymptotic estimates involving the level $a$ overshoot $R_{a}=S_{\sigma(a)}-a$ of the random walk killed upon entering the positive halfline. As a by-product, another formula for $H$ is obtained.
\begin{Lemma}\label{lem:overshoot L_{1}}
Let $(S_{n})_{n\ge 0}$ be a centered random walk with $0<\Erw S_{1}^{2}<\infty$. Then for all $x > 0$, we have
\begin{gather}
\lim_{b\to\infty} \limsup_{a\to\infty} \Erw_{x} S_{\sigma(a)} \ind{\sigma(a)<\tau,R_{a}>b}\ =\ 0 \label{eqn:overshoot1}\\
\shortintertext{and}
\lim_{a\to\infty}\,\Erw_{x}R_{a}\mathbf{1}_{\{\sigma(a)<\tau\}}\ =\ 0. \label{eqn:overshoot2}
\end{gather}
\end{Lemma}
\begin{proof}
\textsc{Step 1}. We first show that
\begin{equation}
\label{eqn:firstStep}
\lim_{a\to\infty}\,\Erw_{x} H(S_{\sigma(a)})\mathbf{1}_{\{\sigma(a)<\tau<\infty,\,S_{\sigma(a)}>(1+\varepsilon)a\}}\ =\ 0
\end{equation}
for all $\varepsilon>0$ and $x\ge 0$. As $H$ grows like the identity, we may replace $H(S_{\sigma(a)})$ with $S_{\sigma(a)}$. It is further no loss of generality to choose $x=0$. We then have
\begin{align}
\Erw S_{\sigma(a)}\mathbf{1}_{\{\sigma(a)<\tau,\,S_{\sigma(a)}>(1+\varepsilon)a\}}\ &=\ (1+\varepsilon)a\,\Prob\big(\sigma(a)<\tau,\,S_{\sigma(a)}>(1+\varepsilon)a\big)\nonumber\\
&\qquad+\ \int_{(1+\varepsilon)a}^{\infty}\Prob(\sigma(a)<\tau,\,S_{\sigma(a)}>y)\ dy\nonumber\\
&\le\ \bigg((1+\varepsilon)a\,\Prob(S_{1}>\varepsilon a)\,+\,\int_{(1+\varepsilon)a}^{\infty}\Prob(S_{1}>y-a)\ dy\bigg)\,\mathbb{U}_{a}([0,a]),\label{eq:crucial}
\end{align}
using that, for any $y\ge(1+\varepsilon)a$,
$$ \Prob(\sigma(a)<\tau,\,S_{\sigma(a)}>y)\ =\ \int_{0}^{a}\Prob(S_{1}>y-x)\ \mathbb{U}_{a}(dx)\ \le\ \Prob(S_{1}>y-a)\,\mathbb{U}_{a}([0,a]), $$
where
\begin{align*}
\mathbb{U}_{a}(dx)\ &=\ \sum_{n\ge 0}\Prob(S_{n}\in dx,\,0<S_{k}\le a\text{ for }k=1,\ldots,n)\\
&=\ \sum_{n\ge 0}\Prob(S_{n}\in dx,\,0<S_{n}-S_{n-k}\le a\text{ for }k=1,\ldots,n)\\
&=\ \sum_{n\ge 0}\sum_{k\ge 0}\Prob\left(\sigma_{k}=n,\,S_{\sigma_{k}}\in dx,\,S_{\sigma_{k}}\le a+\min_{0\le j\le\sigma_{k}}S_{j}\right)\\
&=\ \sum_{k\ge 0}\Prob\left(S_{\sigma_{k}}\in dx,\,S_{\sigma_{k}}\le a+\min_{0\le j\le\sigma_{k}}S_{j}\right)\\
&\le\ \sum_{k\ge 0}\Prob\left(S_{\sigma_{k}}\in dx\cap (0,a]\right).
\end{align*}
Since $\Erw S_{1}^{2}<\infty$ ensures $\Erw S_{\sigma_{1}}<\infty$, we infer that
$$ \mathbb{U}_{a}([0,a])\ \le\ \sum_{k\ge 0}\Prob(S_{\sigma_{k}}\le a)\ \le\ Ca $$
for some $C>0$ and all $a\ge 1$. Returning to \eqref{eq:crucial}, we now obtain
\begin{align*}
\Erw S_{\sigma(a)}\mathbf{1}_{\{\sigma(a)<\tau,\,S_{\sigma(a)}>(1+\varepsilon)a\}}\ &\le\ C(1+\varepsilon)a^{2}\,\Prob(S_{1}>\varepsilon a)\,+\,Ca\int_{(1+\varepsilon)a}^{\infty}\Prob(S_{1}>y-a)\ dy\\
&\le\ C(1+\varepsilon)a^{2}\,\Prob(S_{1}>\varepsilon a)\,+\,\frac{C}{\varepsilon}\int_{\varepsilon a}^{\infty}y\,\Prob(S_{1}>y)\ dy
\end{align*}
and both terms on the right-hand side go to 0 as $a\to\infty$ under the proviso $\Erw S_{1}^{2}<\infty$.
\vspace{.2cm}
\textsc{Step 2}. Next, we show that
\begin{equation*}
\lim_{b\to\infty}\limsup_{a\to\infty}\,\Erw_{x}H(S_{\sigma(a)})\mathbf{1}_{\{\sigma(a)<\tau<\infty,\,R_{a}>b\}}\ =\ 0
\end{equation*}
for all $x\ge 0$, thus proving \eqref{eqn:overshoot1}. Since $R_{a}>a$ implies $S_{\sigma(a)}>2a$, the contribution of the event $\{R_{a}>a\}$ vanishes as $a\to\infty$ by Step 1 (applied with $\varepsilon=1$), so that it suffices to consider the event $\{b<R_{a}\le a\}$. Using the strong Markov property at time $\sigma(a/3)$, we have
\begin{align*}
\Erw_{x}H(S_{\sigma(a)})\mathbf{1}_{\{\sigma(a)<\tau<\infty,\,b<R_{a}\le a\}}\ =\ \Erw_{x}H(S_{\sigma(a/3)})\Psi(S_{\sigma(a/3)})\mathbf{1}_{\{\sigma(a/3)<\tau<\infty\}},
\end{align*}
where
$$ \Psi(x)\ :=\ \Erw_{x}\left(\frac{H(S_{\sigma(a)})}{H(x)}\mathbf{1}_{\{\sigma(a)<\tau<\infty,\,b<R_{a}\le a\}}\right) $$
for $x>0$. Observe that
\begin{equation*}
\Psi(x)\ \le\ \Erw_{x}\left(\frac{H(S_{\sigma(a)})}{H(x)}\mathbf{1}_{\{\sigma(a)<\tau<\infty\}}\right)\ =\ \Prob_{x}^{\uparrow}(\sigma(a)<\infty)\ =\ 1,
\end{equation*}
where $\Prob_{x}^{\uparrow}$ denotes the harmonic transform with respect to $H$. Using this, we further obtain
\begin{align}
\begin{split}\label{eq:inequality step 2}
&\Erw_{x}H(S_{\sigma(a/3)})\Psi(S_{\sigma(a/3)})\mathbf{1}_{\{\sigma(a/3)<\tau<\infty\}}\\
&\le\ \Erw_{x}H(S_{\sigma(a/3)})\mathbf{1}_{\{S_{\sigma(a/3)}>2a/3\}}\\
&+\ \Erw_{x}H(S_{\sigma(a/3)})\mathbf{1}_{\{\sigma(a/3)<\tau<\infty,\,S_{\sigma(a/3)}\le 2a/3\}}\sup_{a/3\le y\le 2a/3}\Psi(y).
\end{split}
\end{align}
The first of the two terms on the right-hand side of this inequality converges to 0 as $a\to\infty$ by Step 1. As for the second one, we use that $H$ is harmonic and of linear growth together with Lemma \ref{lem:harmonicStopped} (which also holds for $H$ in the place of $G$) to bound it by
\begin{align*}
\Erw_{x}H(S_{\sigma(a/3)})\mathbf{1}_{\{\sigma(a/3)<\tau<\infty\}}\sup_{a/3\le y\le 2a/3}\Psi(y)\ =\ H(x)\sup_{a/3\le y\le 2a/3}\Psi(y).
\end{align*}
Furthermore,
\begin{align*}
\sup_{a/3\le y\le 2a/3}\Psi(y)\ &\le\ \frac{H(2a)}{H(a/3)}\sup_{a/3\le y\le 2a/3}\Prob_{y}(\sigma(a)<\tau<\infty,\,b<R_{a}\le a)\\
&\le\ \frac{H(2a)}{H(a/3)}\sup_{0\le y\le a}\Prob(R_{a-y}>b)\\
&=\ (6+o(1))\,\sup_{y\ge 0}\Prob(R_{y}>b)\quad\text{as }a\to\infty.
\end{align*}
Consequently, recalling that $\Erw S_{1}^{2}<\infty$ implies the tightness of the overshoots $R_{a}$ for $a\ge 0$, the second term on the right-hand side of \eqref{eq:inequality step 2} converges to 0 as well when first letting $a$ and then $b$ tend to infinity.
\vspace{.1cm}
\textsc{Step 3}. In order to finally prove the last assertion of the lemma, we first note that, by another appeal to \eqref{eqn:firstStep}, it suffices to show
\begin{equation*}
\lim_{a\to\infty}\,\Erw_{x}R_{a}\mathbf{1}_{\{\sigma(a)<\tau<\infty,\,R_{a}\le a\}}\ =\ 0
\end{equation*}
for all $x>0$. Fix an arbitrary $\varepsilon>0$. By Step 2 and \eqref{eq:growth of H}, we can pick $b>0$ so large that
$$ \limsup_{a\to\infty}\,\Erw_{x}H(S_{\sigma(a)})\mathbf{1}_{\{\sigma(a)<\tau<\infty,\,b<R_{a}\le a\}}\, <\,\frac{\varepsilon}{2} $$
and thus also $a_{0}>0$ such that
$$ \Erw_{x}H(S_{\sigma(a)})\mathbf{1}_{\{\sigma(a)<\tau<\infty,\,R_{a}>b\}}\,<\,\varepsilon $$
for all $a\ge a_{0}$. Consequently, as $a\to\infty$,
\begin{align*}
\Erw_{x}&R_{a}\mathbf{1}_{\{\sigma(a)<\tau<\infty,\,R_{a}\le a\}}\ \le\ \Erw_{x}H(R_{a})\mathbf{1}_{\{\sigma(a)<\tau<\infty,\,R_{a}\le a\}}\\
&\le\ H(b)\,\Prob_{x}(\sigma(a)<\tau<\infty)\,+\,\Erw_{x}H(S_{\sigma(a)})\mathbf{1}_{\{\sigma(a)<\tau<\infty,\,R_{a}>b\}}\\
&\le\ o(1)\,+\,\varepsilon,
\end{align*}
where the first inequality uses $H(x)\ge x$ and the second that $H$ is nondecreasing with $R_{a}\le S_{\sigma(a)}$. Since $\varepsilon>0$ was arbitrary, this completes the proof.
\end{proof}
This result particularly implies the following identity for $H$ that will be useful below in the proof of Proposition \ref{prop:uniqueness H(x)}.
\begin{Cor}\label{cor:alternativeFormula}
For all $x > 0$, we have
$$ H(x)\,=\,\lim_{y\to\infty} y\,\Prob_{x}(\sigma(y)<\tau). $$
\end{Cor}
\begin{proof}
By Lemma \ref{lem:harmonicStopped}, for all $y>x>0$, we have
$$ H(x)\,=\,\Erw_{x} H(S_{\sigma(y)}) \ind{\sigma(y) < \tau}. $$
Using \eqref{eq:growth of H}, for all $\epsilon>0$ and all $y$ large enough, we deduce
\begin{multline*}
y\,\Prob_{x}(\sigma(y)<\tau) + \Erw_{x} R_{y} \ind{\sigma(y)<\tau}\\
\ \le\ H(x)\ \le\ (1+\epsilon)\left(y\,\Prob_{x}(\sigma(y)<\tau) + \Erw_{x} R_{y} \ind{\sigma(y)<\tau}\right).
\end{multline*}
Now use Lemma \ref{lem:overshoot L_{1}} upon letting $y\to\infty$ and then $\epsilon\to0$ to arrive at the assertion.
\end{proof}
We are now ready to give the proof of the main result of this section.
\begin{proof}[Proof of Proposition \ref{prop:uniqueness H(x)}]
Our proof follows along the same lines as the original one by Choquet and Deny \cite{Choquet+Deny:60}. Note first that, for all $A>0$, the function $G+AH$ satisfies the same assumptions as $G$ and is nonnegative on $[1,\infty)$ for large enough $A$. Therefore, we may assume without loss of generality that $G$ is bounded from below.
We consider the following regularization of the function $G$. For $\delta>0$ and $x\in\R$, put
\begin{equation*}
G^{\delta}(x)\,:=\,\frac{1}{\delta}\int_{x}^{x+\delta} G(z) \mathrm{d} z.
\end{equation*}
The function $G^{\delta}$ is differentiable, and by assumption its derivative satisfies
\begin{equation*}
|(G^{\delta})'(x)|\,=\,\frac{|G(x+\delta)-G(x)|}{\delta}\,\le\,\frac{2C (1+x_{+}+\delta)}{\delta}
\end{equation*}
for some $C>0$. As a consequence, $x\mapsto G^{\delta}(x)/(1+x_{+})$ is uniformly continuous and bounded. Hence, by the Arzel\`a-Ascoli theorem, there exist $0<y_{n}\uparrow\infty$ such that $x \mapsto G^{\delta}(x+y_{n})/(1+(x+y_{n})_{+})$ converges, uniformly on compact sets, to a bounded and continuous limit denoted by $\kappa^{\delta}$. The $y_{n}$ may further be chosen from $d\mathbb{N}$ if the random walk is $d$-arithmetic. The next argument shows this function to be harmonic for the random walk (without killing).
Indeed, as $G(x)=0$ for $x\le 0$, we infer from \eqref{eq:harmonic V(x)} that
\begin{gather}
G(x)\ =\ \Erw G(x+S_{1})\ind{x+S_{1}>0}\ =\ \Erw G(x+S_{1})\nonumber
\intertext{and thereupon, by Fubini's theorem,}
G^{\delta}(x)\ =\ \Erw G^{\delta}(x + S_{1})\quad\text{for all }x>0.\label{eq:harmonic G^delta}
\end{gather}
As a consequence,
\begin{align}
\kappa^{\delta}(x)\ &=\ \lim_{n\to\infty} \frac{G^{\delta}(x + y_{n})}{1+(x+y_{n})_{+}}\ =\ \lim_{n\to\infty} \frac{G^{\delta}(x+y_{n})}{y_{n}}\label{eqn:ab}\\
&=\ \lim_{n\to\infty}\frac{\Erw G^{\delta}(x+y_{n}+S_{1})}{y_{n}}\ =\ \Erw\left(\lim_{n\to\infty} \frac{G^{\delta}(x+y_{n}+S_{1})}{y_{n}}\right)\nonumber\\
&=\ \Erw \kappa^{\delta} (x+S_{1}) \nonumber,
\end{align}
for all $x\in\R$, having used \eqref{eq:harmonic G^delta}, then the domination assumption on $G$, and finally the dominated convergence theorem. This proves that $\kappa^{\delta}$ is indeed harmonic for the random walk $(S_{n})_{n\ge 0}$ and thus, by Lemma \ref{lem:CD-lemma G}, either a $d$-periodic continuous function or a constant.
As the next step, we show that
\begin{equation}\label{eqn:nextStep}
G^{\delta}(x)\ =\ \kappa^{\delta}(x) H(x)\,+\,\Erw_{x} G^{\delta}(S_\tau).
\end{equation}
for all $x>0$. First, writing \eqref{eq:harmonic G^delta} as $G^{\delta}(x)=\Erw_{x}G^{\delta}(S_{1\wedge\tau})$ for all $x>0$, it follows that $(G^{\delta}(S_{n\wedge\tau}))_{n\ge 0}$ forms a martingale under $\Prob_{x}$ and then, as in the proof of Lemma \ref{lem:harmonicStopped}, that
\begin{equation*}
G^{\delta}(x)\ =\ \Erw_{x}G^{\delta}(S_{\sigma(y)\wedge\tau})\ =\ \Erw_{x}G^{\delta}(S_{\sigma(y)})\ind{\sigma(y)<\tau}\,+\,\Erw_{x} G^{\delta}(S_{\tau}) \ind{\tau<\sigma(y)}
\end{equation*}
for all $0<x<y$. Observe that $\Erw_{x}G^{\delta}(S_{\tau}) \ind{\tau< \sigma(y)}\,\to\,\Erw_{x} G^{\delta}(S_\tau)$ as $y\to\infty$, by dominated convergence.
\vspace{.1cm}
The uniform convergence on compact sets of $G^{\delta}(\,\cdot\,+y_{n})/y_{n}$ to $\kappa^{\delta}$ will now be utilized to compute $\lim_{n\to\infty}\Erw_{x} G^{\delta} (S_{\sigma(y_{n})})\ind{\sigma(y_{n})<\tau}$ by bounding it from above and from below separately. Let $\epsilon,K> 0$ and choose $n$ large enough such that
\begin{equation}\label{eqn:unifConv}
\sup_{x \in [0,K]} |G^{\delta}(x+y_{n})/y_{n} - \kappa^{\delta}(x)| \le \epsilon.
\end{equation}
Then
\begin{align*}
\Erw_{x}G^{\delta}(S_{\sigma(y_{n})})\ind{\sigma(y_{n})<\tau}\ &\ge\ \Erw_{x}G^{\delta} (S_{\sigma(y_{n})}) \ind{\sigma(y_{n})<\tau,\,R_{y_{n}}\le K}\\
&\ge\ (\kappa^{\delta}(x)-\epsilon) y_{n}\,\Prob_{x}\big( \sigma(y_{n})<\tau,R_{y_{n}}\le K\big),
\end{align*}
using that $\kappa^{\delta}(S_{\sigma(y_{n})}-y_{n}) = \kappa^{\delta}(x)$ $\Prob_{x}$-a.s. by the periodicity or constancy of $\kappa^{\delta}$ (recall here that the $y_{n}$ are all chosen from $d\mathbb{N}$ if the random walk has lattice-span $d>0$). By letting $n\to\infty$ and use of Corollary \ref{cor:alternativeFormula}, this yields
\begin{multline*}
\liminf_{n\to\infty}\Erw_{x} G^{\delta} (S_{\sigma(y_{n})})\ind{\sigma(y_{n})<\tau} \ge\\
(\kappa^{\delta}(x)-\epsilon)\left(H(x)\,-\,\limsup_{a\to\infty}\,a\,\Prob_{x}\left(\sigma(a)<\tau, R_{a}>K\right)\right),
\end{multline*}
and thereupon, with the help of \eqref{eqn:overshoot1},
$$ \liminf_{n\to\infty}\Erw_{x}G^{\delta}(S_{\sigma(y_{n})})\ind{\sigma(y_{n})<\tau}\ \ge\ H(x)\, \kappa^{\delta}(x) $$
upon letting $K\to\infty$ and then $\epsilon\to 0$.
\vspace{.1cm}
For the upper bound, we obtain by proceeding similarly
\begin{multline*}
\Erw_{x} G^{\delta} (S_{\sigma(y_{n})}) \ind{\sigma(y_{n})<\tau}\ \le\ (\kappa^{\delta}(x)+\epsilon)\,(y_{n}+K)\,\Prob_{x}\left( \sigma(y_{n})<\tau,R_{y_{n}}\le K\right)\\
+\ 2C\,\Erw_{x} S_{\sigma(y_{n})} \ind{\sigma(y_{n})<\tau, R_{y_{n}}>K},
\end{multline*}
for sufficiently large $n$, where $G^{\delta}(y) \le 2Cy$ for sufficiently large $y$ has been utilized. Now, by another use of Corollary \ref{cor:alternativeFormula}, \eqref{eqn:overshoot1} and also
$$ \lim_{n\to\infty}K\,\Prob_{x}\left( \sigma(y_{n})<\tau,R_{y_{n}}\le K\right) = 0, $$
we find
$$ \limsup_{n\to\infty}\Erw_{x} G^{\delta} (S_{\sigma(y_{n})})\ind{\sigma(y_{n})<\tau} \le H(x) \kappa^\delta(x) $$
upon letting $n\to\infty$, then $K\to\infty$ and finally $\epsilon\to0$. This completes the proof of \eqref{eqn:nextStep}.
Finally, we observe that the right continuity of $G$ implies $G^{\delta}(x)\to G(x)$ as $\delta\to0$ and in combination with $G^{\delta}(z)=0$ for $z < -\delta$ also
$$ |\Erw_{x} G^{\delta}(S_\tau)|\ \le\ \sup_{z \in [0,\delta]} |G(z)|\ \to\ 0\quad\text{as }\delta\to 0. $$
Consequently, by letting $\delta\to 0$ in \eqref{eqn:nextStep}, we conclude that $\kappa^{\delta}$ converges as well. Its limit $\kappa$ is also $d$-periodic or constant, right-continuous and satisfies $\kappa(x) = G(x)/H(x)$ for all $x > 0$. This finishes the proof.
\end{proof}
\section{The boundary case: proof of Theorem~\ref{thm:mainDerivative}}\label{sec:derivative}
In essence, the same techniques as those used in Section~\ref{sec:nonBoundary} can be used to prove Theorem~\ref{thm:mainDerivative}. However, additional complications arise because, as predicted by the theorem,
\begin{align*}
-\log M(t)\ =\ h(t)t^{\alpha}Z,
\end{align*}
is no longer integrable. As a consequence, it is impossible to directly give an analog of the function $F$ here. Instead, we will have to work with a truncated version of the \textsf{BRW}\ that only includes individuals in the tree that never went ``too high''.
\vspace{.1cm}
Again, let $f\in\mathcal{S}(\mathcal{M})$ be an arbitrary solution to Eq.~\eqref{eq:functional FPE} and $a>0$. For $n\ge 0$, define
\begin{equation}\label{eqn:defMnshaved}
M_{n}^{(a)}(t)\ :=\ \prod_{\substack{|v|=n\\ tL^{*}(v)<a}}f(tL(v)),
\end{equation}
where $L^{*}(v):=\max_{k\le |v|}L(v(k))$. By another appeal to the branching property of the \textsf{BRW}, it follows immediately that $(M_{n}^{(a)}(t))_{n\ge 0}$ constitutes a bounded submartingale and therefore converges a.s.~to a limit that we denote by $M^{(a)}(t)$.
\vspace{.1cm}
Recall that assumption \eqref{eqn:ncsDerivative} ensures $\lim_{n\to\infty} \sup_{|v|=n}L(v)=0$ a.s.~and thus $\sup_{v\in\mathbb{V}}L(v)<\infty$. As a consequence,
\[M^{(a)}(t)\,=\,M(t)\quad\text{a.s. for all }0 \le t<a/\sup_{v\in\mathbb{V}}L(v).\]
In particular, $M^{(a)}(t)$ is positive with positive probability for $a$ large enough.
\vspace{.1cm}
For $t\ge 0$, $a>1$ and $n\in\mathbb{N}_{0}$, we now define
\begin{align*}
Z_{n}^{(a)}(t)\ :=\ \sum_{|v|=n}(tL(v))^{\alpha}H(-\log(tL(v)/a))\,\mathbf{1}_{\{tL^{*}(v)<a\}}
\end{align*}
where $H(x)=-\Erw S_{\tau(-x)}=x-\Erw_{x}S_{\tau}$ for $x>0$ is the essentially unique right-continuous harmonic solution to \eqref{eq:harmonic V(x)}. The last property entails that $(Z_{n}^{(a)}(t))_{n\ge 0}$ constitutes a nonnegative martingale. It converges a.s. and in $L^{1}$ (see \cite[Proposition A.3]{Aidekon:13}) to a limit $Z^{(a)}(t)$. By similar arguments as the ones above for the martingale $M^{(a)}(t)$ in combination with $H(x)=0$ for $x\le 0$ and $H(x)\simeq x$ as $x\to\infty$, we see that, if $t\ge 0$ and $a>t\,\sup_{v\in\mathbb{V}}L(v)$, then almost surely
\begin{align*}
Z^{(a)}(t)\ &=\ \lim_{n\to\infty}\sum_{|v|=n}(tL(v))^{\alpha}H(-\log(tL(v)/a))\\
&=\ \lim_{n\to\infty}\sum_{|v|=n}(tL(v))^{\alpha}(-\log(tL(v)/a))\\
&=\ \lim_{n\to\infty}\sum_{|v|=n}(tL(v))^{\alpha}(-\log L(v)) + t^{\alpha}\log(a/t) \sum_{|v|=n}L(v)^{\alpha}\ =\ t^{\alpha}Z,
\end{align*}
where we have also used that the additive martingale $\sum_{|v|=n}L(v)^{\alpha}$, $n\ge 0$, converges to $0$ a.s.~as $n\to\infty$ (see \cite{Lyons:97}).
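\vspace{.1cm}
As a purely numerical illustration of the two limits just used (not part of the argument), the following Python sketch simulates a binary \textsf{BRW}\ with Gaussian increments, normalized so that the boundary-case conditions hold with $\alpha=1$; the branching law, the step distribution and all parameter values are our own illustrative assumptions. Along generations, the additive martingale decays to $0$ while the derivative martingale stabilizes at a positive limit:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0
sig2 = 2.0 * np.log(2.0)        # step variance (illustrative normalization)
mu = sig2                       # gives E[sum_v L(v)^alpha] = 1 and
                                # E[sum_v L(v)^alpha log L(v)] = 0
S = np.zeros(1)                 # positions -log L(v) in generation n
for n in range(1, 16):
    S = np.repeat(S, 2) + rng.normal(mu, np.sqrt(sig2), 2 * len(S))
    W = np.exp(-alpha * S).sum()            # additive martingale, -> 0 a.s.
    Z = (S * np.exp(-alpha * S)).sum()      # derivative martingale, -> Z > 0
    print(n, round(W, 5), round(Z, 5))
\end{verbatim}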
\vspace{.1cm}
Using these observations, we now prove the following tameness result and counterpart of Lemma~\ref{lem:tameness} in the boundary case.
\begin{Lemma}\label{lem:tamenessBoundary}
Under the assumptions of Theorem~\ref{thm:mainDerivative}, any function $f\in\mathcal{S}(\mathcal{M})$ satisfies
\begin{equation}\label{eqn:tamenessBoundary}
\sup_{0<t\le 1}\frac{\log f(t)}{t^{\alpha}\log t}\ \le\ C
\end{equation}
for some $0<C<\infty$.
\end{Lemma}
\begin{proof}
Proceeding in a similar manner as in the proof of Lemma~\ref{lem:tameness},
we prove that, given any $f\in\mathcal{S}(\mathcal{M})$, failure of \eqref{eqn:tamenessBoundary}, that is
\begin{equation}\label{eq:failure of tamenessBoundary}
\limsup_{t\to 0}\frac{\log f(t)}{t^{\alpha}\log t}\ =\ \infty,
\end{equation}
entails $M(t)\le \mathbf{1}_{\{Z=0\}}$ a.s.~for all $t>0$ and thus $f(0+)< 1$, which is impossible. We confine ourselves to the main steps of the proof as technical details are very similar to those in the proof of Lemma~\ref{lem:tameness}.
\vspace{.1cm}
Given \eqref{eq:failure of tamenessBoundary}, we can find two decreasing null sequences $(\vartheta_{n})_{n\ge 1}$ and $(\rho_{n})_{n\ge 1}$ such that, for all $n\in\mathbb{N}$ and $x\in [\rho_{n}\vartheta_{n},\vartheta_{n}]$,
\begin{align*}
-\log f(x)\,\ge\,n x^{\alpha}(-\log x).
\end{align*}
We then bound the conditional expectation of $M^{(a)}(t)$ given $\mathcal{G}_{\vartheta_{n}/t}$. Namely, by the branching property of the \textsf{BRW},
\begin{align*}
M^{(a)}(t)\ &=\ \lim_{n\to\infty}\prod_{\substack{v\in\Upsilon_{\vartheta_{n}/t}\\ tL^{*}(v)<a}}f(tL(v))\ =\ \exp\Bigg(-\lim_{n\to\infty}\sum_{\substack{v\in\Upsilon_{\vartheta_{n}/t}\\ tL^{*}(v)<a}}\hspace{-.2cm}-\log f(tL(v))\Bigg).
\end{align*}
Bounding $-\log f(x)$ by $nx^{\alpha}(-\log x)$ if $x\in [\rho_{n}\vartheta_{n},\vartheta_{n}]$, and by $0$ otherwise, we then obtain
\begin{align}\label{eq:bound M^a(t)}
M^{(a)}(t)\ \le\ \exp\Bigg(-\limsup_{n\to\infty}\ n\hspace{-.5cm}\sum_{\substack{v\in\Upsilon_{\vartheta_{n}/t}\\ tL(v)\ge\rho_{n}\vartheta_{n},\,tL^{*}(v)<a}}\hspace{-.1cm} (tL(v))^{\alpha}(-\log tL(v))\Bigg).
\end{align}
On the other hand, as $\bigvee_{n\ge 1} \mathcal{G}_{\vartheta_{n}/t} =\mathcal{F}_\infty$, we infer
\begin{align*}
Z^{(a)}(t)&\ =\ \lim_{n\to\infty}\Erw\big(Z^{(a)}(t)|\mathcal{G}_{\vartheta_{n}/t}\big)\\
&\ =\ \lim_{n\to\infty}\sum_{\substack{v\in\Upsilon_{\vartheta_{n}/t}\\ tL^{*}(v)<a}}(tL(v))^{\alpha}H\big(-\log(tL(v)/a)\big).
\end{align*}
Therefore, by another use of $\lim_{n\to\infty}\max_{|v|=n}L(v)=0$ and $H(x) \simeq x$ as $x \to\infty$, we obtain for all $t\ge 0$
\begin{align*}
&\lim_{n\to\infty} \sum_{\substack{v\in\Upsilon_{\vartheta_{n}/t}\\ tL^{*}(v)<a}}(tL(v))^{\alpha}(-\log (tL(v)))\\
&=\ \lim_{n\to\infty}\left(\sum_{v\in\Upsilon_{\vartheta_{n}/t}}(tL(v))^{\alpha}H(-\log (tL(v)/a))\ +\ \log a\sum_{v\in\Upsilon_{\vartheta_{n}/t}}L(v)^{\alpha}\right)\\
&=\ Z^{(a)}(t)\quad \text{a.s.}
\end{align*}
Next, the many-to-one lemma provides us with (recall $\Prob_{t}=\Prob(\cdot|S_{0}=-\log t)$)
\begin{align*}
&\Erw\Bigg(\sum_{\substack{v\in\Upsilon_{\vartheta_{n}/t}\\ tL(v)\le\rho_{n}\vartheta_{n},\,tL^{*}(v)<a}} (tL(v))^{\alpha}(-\log tL(v))\Bigg)\\
&\hspace{.5cm}=\ t^{\alpha}\,\Erw(S_{\sigma(\log(t/\vartheta_{n}))}-\log t)\mathbf{1}_{\{R_{\log(t/\vartheta_{n})}\ge-\log\rho_{n},\,\sigma(\log(t/\vartheta_{n}))\le\tau(\log(t/a))\}}\\
&\hspace{.5cm}=\ t^{\alpha}\,\Erw_{t}S_{\sigma(\log(1/\vartheta_{n}))}\mathbf{1}_{\{R_{\log(1/\vartheta_{n})}\ge-\log\rho_{n},\,\sigma(\log(1/\vartheta_{n}))\le\tau(\log(1/a))\}},
\end{align*}
which by Lemma \ref{lem:overshoot L_{1}} converges to $0$ as $n\to\infty$.
\vspace{.1cm}
By combining the previous facts, we obtain
\begin{align*}
\lim_{n\to\infty} \sum_{\substack{v\in\Upsilon_{\vartheta_{n}/t}\\ tL(v)\ge\rho_{n}\vartheta_{n},\,tL^{*}(v)<a}}\hspace{-.1cm} (tL(v))^{\alpha}(-\log tL(v))\ =\ Z^{(a)}(t) \quad \text{a.s.}
\end{align*}
and therefore, with the help of \eqref{eq:bound M^a(t)}, that $M^{(a)}(t)\le\mathbf{1}_{\{Z^{(a)}(t)=0\}}$ for all $a,t>0$. But the latter entails $M(t)\le\mathbf{1}_{\{Z=0\}}$ upon letting $a\to\infty$, which is impossible because it would imply $f(t)\le\Prob(Z=0)<1$ for all $t>0$, a contradiction to $f(0+)=1$.
\end{proof}
With the tameness result just established for fixed points of the smoothing transform, we can now identify the function defined by
\begin{equation}
\label{eqn:defineHa}
F^{(a)}(t) = \Erw\left(- \log M^{(a)}(t)\right).
\end{equation}
We prove this function to be harmonic for the random walk $(S_{n})_{n\ge 0}$ killed when hitting $(-\infty,0]$ and therefore to be a multiple of $H$.
\begin{Lemma}\label{lem:identifyHa}
Let $r\ge 1$ be the span of $T$. Assuming \eqref{eqn:ncsDerivative}, for any function $f \in \mathcal{S}(\mathcal{M})$ with disintegration $M(t)$ and all $a > 0$, there exists a function $h^{(a)}$, multiplicatively $r$-periodic if $r>1$ and constant otherwise, such that
$$ F^{(a)}(t)\,=\,t^{\alpha}\,H(-\log (t/a))\,h^{(a)}(t) $$
for all $t\ge 0$.
\end{Lemma}
\begin{proof}
The proof follows similar lines as the one of Lemma~\ref{lem:identificationOfH}. We prove that the function $F^{(a)}$ defined in \eqref{eqn:defineHa} is related to a harmonic function for the random walk conditioned to stay positive which in turn allows us to characterize it up to multiplication by a multiplicatively $r$-periodic or constant function.
Recalling that
\begin{align*}
-\log M^{(a)}(t)\ =\ \lim_{n\to\infty} \sum_{|v|=n} -\log f(tL(v))\mathbf{1}_{\{tL^{*}(v)<a\}}\quad\text{a.s.}
\end{align*}
for all $a\le 1$ and $t>0$, Lemma~\ref{lem:tamenessBoundary} implies
\begin{align}\label{eqna}
0\ \le\ -\log M^{(a)}(t)\ \le\ C Z^{(a)}(t)\quad\text{a.s.}
\end{align}
Consequently, as $(Z^{(a)}_{n}(t))_{n\ge 0}$ is uniformly integrable, we infer that
\begin{align*}
F^{(a)}(t)\ =\ \Erw\left(-\log M^{(a)}(t)\right)\ \in\ [0, C t^{\alpha}H(-\log (t/a))]
\end{align*}
for all $t\le a$.
By conditioning with respect to $\mathcal{F}_{1}$ and use of the many-to-one lemma, it follows that
\begin{align*}
F^{(a)}(t)\ &=\ \Erw\left(\sum_{|u|=1} F^{(a)}(tL(u))\mathbf{1}_{\{t L(u)<a\}}\right)\\
&=\ \Erw\left(F^{(a)}(te^{-S_{1}})e^{\alpha S_{1}}\mathbf{1}_{\{S_{1} - \log t > -\log a\}}\right)
\end{align*}
for all $t\le a$. Hence, the function $g^{(a)}(x):=a^{-\alpha}e^{\alpha x}F^{(a)}(ae^{-x})$ satisfies
$$ g^{(a)}(x)\ =\ \Erw g^{(a)}(x+S_{1}) \mathbf{1}_{\{x+S_{1} >0\}} $$
for all $x>0$, and it is right-continuous because $f$ is left-continuous, by dominated convergence. Furthermore, \eqref{eqna} implies that
$$ g^{(a)}(x)\ \le\ Ca^{-\alpha} e^{\alpha x} \Erw Z^{(a)}(ae^{-x})\ \le\ C H(x), $$
and thus the required boundedness of $(1+x)^{-1}g^{(a)}(x)$. Invoking Proposition \ref{prop:uniqueness H(x)}, we conclude that $g^{(a)}$ equals $\kappa^{(a)}H$ for some $\kappa^{(a)}$ which is $\log r$-periodic if $T$ has span $r\ne 1$, and is constant otherwise.
The proof is completed by rewriting this result in terms of $F^{(a)}(t)$ and putting $h^{(a)}(t):=\kappa^{(a)}(-\log(t/a))$.
\end{proof}
With the help of the last lemma, we are now able to give an explicit expression for $M^{(a)}(t)$ and thereby to find the value of $M(t)$ upon letting $a\to\infty$.
\begin{proof}[Proof of Theorem~\ref{thm:mainDerivative}]
By the branching property of the \textsf{BRW}, we have almost surely
\begin{align*}
&\Erw\left(-\log M^{(a)}(t)\middle|\mathcal{F}_{n}\right)\ =\ \sum_{|v|=n}F^{(a)}(tL(v))\,\mathbf{1}_{\{tL^{*}(v)<a\}}\\
&\quad=\ h^{(a)}(t)\sum_{|v|=n}(tL(v))^{\alpha}\,H(-\log (tL(v)/a))\,\mathbf{1}_{\{tL^{*}(v)<a\}}\ =\ h^{(a)}(t)Z_{n}^{(a)}(t)
\end{align*}
for all $n\in\mathbb{N}$. Letting $n\to\infty$, this yields
\begin{align*}
M^{(a)}(t)\ =\ e^{-h^{(a)}(t) Z^{(a)}(t)} \quad \text{a.s.}
\end{align*}
Therefore, for all $a$ large enough and all $t \in [0,a/\sup_{v\in\mathbb{V}}L(v)]$,
$$ M(t)\ =\ M^{(a)}(t)\ =\ e^{-h^{(a)}(t)t^{\alpha} Z}\quad\text{a.s.} $$
In particular, as $h^{(a)}$ is multiplicatively $r$-periodic or constant, we infer that $h^{(a)}=h$ does not depend on $a$ for $a$ large enough. Since
$$ \Psi(t)\ =\ \Erw\left(M(t)\right)\ =\ \Erw\left(e^{-h(t)t^{\alpha}Z}\right), $$
the proof is complete when finally noting that $h\in\mathcal{H}_{r}$ follows from the fact that $\Psi(t)$ is nonincreasing.
\end{proof}
\section{Fixed points of the smoothing transform and fractal measures on the boundary of the \textsf{BRW}}\label{sec:measure}
The purpose of this supplementary section is to show that any fixed point of the smoothing transform \eqref{eqn:smoothingTransform} can be thought of as the total mass of a random fractal measure on the boundary of the associated \textsf{BRW}. More precisely, this connection is established by a one-to-one map between these fixed points and random measures $\nu$ on the boundary $\partial\mathbb{V}$ of the tree (see below for details) such that, for all $u\in\mathbb{V}$,
$$ \frac{\nu\left(\left\{v\in\partial\mathbb{V}:v(|u|)=u\right\}\right)}{L(u)} $$
is independent of $\mathcal{F}_{|u|}$ and has the same law as $\nu(\mathbb{V})$. Any random measure $\nu$ of this kind is called \emph{fractal}.
Let $X$ be a random variable with Laplace transform $f\in\mathcal{S}(\mathcal{L})$, its law thus a fixed point of the smoothing transform. Then there exists a family $(X(v))_{v\in\mathbb{V}}$ of copies of $X$, defined on the same probability space as the multiplicative \textsf{BRW}\ $L=(L(v))_{v\in\mathbb{V}}$ (possibly enlarged) such that
\begin{equation}\label{eqn:coupling}
X(v)\ =\ \sum_{j\ge 1}T_{j}^{v}X(vj)\ =\ \sum_{j\ge 1}\frac{L(vj)}{L(v)} X(vj)
\end{equation}
for all $v\in\mathbb{V}$. Namely, let $\{X^{(n)}(v):|v|=n\}$ for any $n\in\mathbb{N}$ denote a family of independent copies of $X$ which are also independent of $\{L(v):|v|=n\}$. For $v\in\mathbb{V}$ with $|v| < n$, we then define recursively
\begin{align*}
X^{(n)}(v)\ =\ \sum_{j\ge 1}\frac{L(vj)}{L(v)}X^{(n)}(vj).
\end{align*}
As $X$ satisfies \eqref{eqn:smoothingTransform} and by the branching property of the \textsf{BRW}, we see that each $X^{(n)}(v)$ is a copy of $X$ and depends only on the variables defined on the subtree rooted at vertex $v$. The existence of $(X(v))_{v\in\mathbb{V}}$ with the claimed properties is now ensured by Kolmogorov's consistency theorem because the laws of
$$ \{(X^{(n)}(v),L(v)):|v|\le n\},\quad n\in\mathbb{N} $$
constitute a projective family.
\vspace{.1cm}
Recall that $\partial\mathbb{V}=\mathbb{N}^\mathbb{N}$ denotes the boundary of the tree $\mathbb{V}$ and becomes a complete metric space when endowed with the ultrametric distance
$$ d(u,v)\,=\,\exp(-\min\{k\ge 1:u_{k}\ne v_{k}\}). $$
Putting
\begin{align*}
B(u)\,:=\,\left\{v\in\partial\mathbb{V}:v(|u|)=u\right\}
\end{align*}
for $u\in\mathbb{V}$, the family $(B(u))_{u\in\mathbb{V}}$ forms a basis of the topology on $\partial\mathbb{V}$ and its Borel $\sigma$-field.
\vspace{.1cm}
With the help of the family $(X(v))_{v\in\mathbb{V}}$ introduced above, a one-to-one map between the fixed points of the smoothing transform and the random fractal measures on $\partial\mathbb{V}$ can now be constructed as follows. Observe that, for any such measure $\nu$, the total mass $\nu(\partial\mathbb{V})$ is a fixed point of the smoothing transform associated with $L$. This follows because, by $\sigma$-additivity of $\nu$,
\begin{align*}
\nu(\partial\mathbb{V})\ =\ \nu\Bigg(\bigcup_{|v|=1}B(v)\Bigg)\ =\ \sum_{|v|=1}\nu(B(v))\ =\ \sum_{|v|=1}L(v)\frac{\nu(B(v))}{L(v)},
\end{align*}
and the fractal property of $\nu$.
\vspace{.1cm}
Conversely, the above construction allows us to define a fractal measure $\nu$ for each fixed point of the smoothing transform such that the law of $\nu(\partial\mathbb{V})$ equals this fixed point.
Namely, with $(X(v))_{v\in\mathbb{V}}$ as defined above, we put
\begin{align*}
\nu(B(v))\,:=\,X(v)L(v)
\end{align*}
for any $v\in\mathbb{V}$. By \eqref{eqn:coupling}, this provides a well-defined consistent $\sigma$-additive measure on $\mathbb{N}^{k}$ for each $k\in\mathbb{N}$. Thus, by another use of Kolmogorov's consistency theorem, we can extend $\nu$ to a random measure on $\partial\mathbb{V}$, and it has the fractal property by definition as $X(v)$ is independent of $\mathcal{F}_{|v|}$ for any $v$.
\vspace{.1cm}
Under the assumptions of Theorem~\ref{thm:mainNonBoundary} or~\ref{thm:mainDerivative}, the fractal measure $\nu$ can be even explicitly defined as a marked Poisson point process, namely
\begin{align*}
\nu\ =\ \sum_{j\ge 1}\xi_{j}\delta_{v^{j}},
\end{align*}
where $(\xi_{j}, v^{j})$ are the atoms of a bivariate Poisson point process on $\R_{\scriptscriptstyle >}\times\partial\mathbb{V}$ with intensity $\pi(\mathrm{d} x)\mu_{\alpha}(\mathrm{d} v)$. Here $\pi$ equals the L\'evy jump measure of a L\'evy process with characteristic exponent $h(t)t^{\alpha}$, $h$ the function associated with $f$ by the respective theorem.
Moreover, $\mu_{\alpha}$ denotes the random measure on $\partial\mathbb{V}$ defined by
\begin{align*}
\mu_{\alpha}(B(v))\ =\ \lim_{n\to\infty} \sum_{|u|=n,u\succ v}L(u)^{\alpha}
\end{align*}
in the regular case (assuming \eqref{eqn1:regular case} and \eqref{eqn2:regular case}), and by
\begin{align*}
\mu_{\alpha}(B(v))\ =\ \lim_{n\to\infty}\sum_{|u|=n,u\succ v}(-\log L(u))L(u)^{\alpha}
\end{align*}
in the boundary case (assuming \eqref{eqn:boundaryCase} and \eqref{eqn:ncsDerivative}) where the former definition would only give the null measure. Indeed, with $\mu_{\alpha}$ thus defined and Campbell's formula, we obtain that
\begin{align*}
\Erw e^{-t\nu(B(v))}\ =\ \Erw \exp(-t^{\alpha} h(t) \mu_\alpha(B(v)))\ =\ f(t),
\end{align*}
for all $v \in \mathbb{V}$ as expected.
\section*{Acknowledgments}
Most of this work was done during a visit of the second author in March 2019 at the University of M\"unster. Financial support and kind hospitality are gratefully acknowledged.
\section{Introduction}
Fokker-Planck (FP) equations model the time evolution of a probability density.
The general set up is as follows.
Given an open subset of $\RR^d$, $\Omega$, a terminal time, $T>0$,
and a (\emph{drift})
vector field, $b(x,t):\Omega \times[0,T]\rightarrow \Omega$,
we seek to find a time-dependent probability distribution,
$\rho:\Omega \times [0,T] \rightarrow \RR$, solving
\begin{equation}\label{nonlinearFP}
\left\{\begin{array}{ll}
\partial_t \rho -\varepsilon\Delta \rho+ \mbox{div}(b(x,t) \rho) =0 &\mbox{in } \Omega \times [0,T], \\[6pt]
\rho(\cdot, 0)= \rho_0(\cdot) &\mbox{in } \Omega.
\end{array}\right.
\end{equation}
In addition, we supplement the above problem with
boundary conditions on $\partial \Omega\times [0,T]$, where $\partial \Omega$ is
the boundary of $\Omega$.
The Fokker-Planck equation was introduced in statistical mechanics. Yet,
this equation has multiple applications in economics \cite{aime10,Gueant09}, crowd motion models \cite{hughes2000flow,Lachapelle10}, and biological models \cite{chavanis2008nonlinear, goudon1998fokker}. Due to the complex structure of those equations, the computation of explicit solutions is not possible. Hence, effective numerical methods to approximate solutions of FP equations have a broad interest.
Here, we propose a technique to obtain approximation schemes for FP equations using their representation as the adjoint of the linearization of Hamilton-Jacobi (HJ) equations. In this way, all monotone numerical schemes proposed in the context of HJ equations give rise to consistent schemes for FP equations. In particular, these schemes preserve positivity and total mass, as required by the nature of the problem.
Previously, the adjoint structure of the FP equation was used by several authors, for example, in \cite{AchdouCapuzzo10} and in \cite{AchdouCapuzzoCamilli12}. In those references, the authors propose a finite-difference scheme which is the adjoint of the linearization of the upwind scheme used to approximate a convex Hamiltonian. In \cite{carlinisilvafp2016} and \cite{carlini2016DGA}, the authors propose a semi-Lagrangian numerical method using a slightly different procedure, but based on a similar principle.
The main contribution of the present paper is to show how to use the adjoint structure with a wide class of numerical solvers, and without limitations on the problem dimension.
Further, in contrast to the above references, we do not discretize the time variable. Thus, the evolution
in time corresponds to a system of ordinary differential equations (ODE). These can be solved with different methods, depending on the smoothness of the solution and desired accuracy. Finally,
the implementation of our method uses a symbolic-numeric approach. Here, the numerical schemes are created by exact formula manipulation, thus reducing the implementation time and complexity.
\smallskip
\noindent {\bf Outline of the paper.} We end this introduction with an outline of the paper. The adjoint structure is examined in Section \ref{eaa}. Next, in Section \ref{eee}, we prove key features of the method: positivity and mass-conservation. In Section \ref{num}, we describe the numerical method and its properties. Some sample schemes are studied in detail.
Finally, in Section \ref{mf}, we consider some problems where our schemes apply.
These include mean-field games and a crowd motion model.
\section{Adjoint structure}\label{eaa}
The relation between a FP equation and its adjoint equation is well known. In recent works, \cite{evans2010adjoint, gomes2016regularity,MR3303946,MR3146863,MR3092361,MR2873233,MR2796233}, this relation was used to study regularity properties, vanishing viscosity limits, and rates of convergence of numerical methods.
Those results are based on the observation that a FP equation is the adjoint of the linearization of a certain HJ equation.
\subsection{Adjoint structure}
Here, we discuss
the relation between FP and HJ equations. First, we consider the HJ operator
\begin{equation}\label{hj1}
HJ(u) := - u_t(x,t) + H(x,Du(x,t)) - \varepsilon \Delta u(x,t),
\end{equation}
with the Hamiltonian $H=H(x,p):\RR^d\times \RR^d\rightarrow~\RR$. Further, we define the nonlinear generator $$A^{HJ} u := H(x,Du(x,t)) - \varepsilon \Delta u(x,t).$$ Here, we write $Du = D_x u$ for the gradient in the variable $x = (x_1,\cdots,x_d)$. The parameter $\varepsilon$ is called the viscosity.
To linearize \eqref{hj1} around $u_0$, we expand $u = u_0 + \lambda w$, then take the derivative in $\lambda$, and, finally, consider the limit $\lambda \to 0$.
For now, we proceed formally to compute this linearization. Later, we discuss
functional spaces and boundary conditions.
The expansion $HJ(u_0 + \lambda w)$ gives
\begin{multline*}
- \partial_t (u_0 + \lambda w) + H(x,D(u_0 + \lambda w)) - \varepsilon \Delta(u_0 + \lambda w)\\[6pt]
= - {(u_0)}_t - \lambda w_t + H(x,Du_0 + \lambda Dw) - \varepsilon \Delta u_0 - \lambda \varepsilon \Delta w.
\end{multline*}
By taking the derivative of the preceding expression with respect to $\lambda$, and letting $\lambda \to 0$, we obtain the operator
\begin{equation}\label{linearized_eq}
L(w):= - w_t + D_p H(x,Du) \cdot Dw - \varepsilon \Delta w,
\end{equation}
the linearization of the HJ operator. The (linear) generator of $L$ is
$$A^L w := D_pH(x,Du) \cdot Dw - \varepsilon \Delta w.$$
Finally, we compute
the adjoint of $L$ by integration by parts. We fix smooth functions, $w$ and $\rho$, and
derive the identity
\begin{align}\label{aeq}
& \iint\limits_{[0,T] \times \Omega} (-w_t + D_pH(x, Du) \cdot Dw - \varepsilon \Delta w) \ \rho \\\notag
= & \iint\limits_{[0,T] \times\Omega} \left(\rho_t - \div_x(D_p H(x, Du) \ \rho)
- \varepsilon \Delta \rho \right) w\\\notag
+ & \iint\limits_{[0,T] \times \partial \Omega}\left(D_p H(x, Du) \right)\cdot n \ \rho \ w + \varepsilon \frac{\partial \rho}{\partial n} w - \varepsilon \rho \frac{\partial w}{\partial n}\\\notag
- & \int\limits_{\Omega} \rho(x,T) \ w(x,T) - \rho(x,0) \ w(x,0),
\end{align}
where $n$ is the normal vector to the boundary, $\partial \Omega$. The last calculation shows that the adjoint of $L$ is the following FP operator
\begin{equation}\label{fp1}
L^* \rho := \rho_t - \div_x(D_p H(x, Du) \ \rho) - \varepsilon \Delta \rho,
\end{equation}
whose generator is $A^{FP} \rho := - \div_x(D_p H(x, Du) \ \rho) - \varepsilon \Delta \rho$.
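As a sanity check of the computation above, the following sketch (purely illustrative; it uses \texttt{sympy} and works in one space dimension on the torus, where all boundary terms vanish) verifies the duality $\langle A^{L}w,\rho\rangle=\langle w,A^{FP}\rho\rangle$ for the concrete choice $H(p)=p^2/2$; the test functions are arbitrary smooth periodic data of our own choosing:
\begin{verbatim}
import sympy as sp

x = sp.symbols("x")
u = sp.sin(2 * sp.pi * x)               # arbitrary smooth periodic data
w = sp.cos(4 * sp.pi * x)
rho = 2 + sp.sin(2 * sp.pi * x)
eps = sp.Rational(1, 100)
Hp = u.diff(x)                          # D_p H for H(p) = p^2/2

ALw = Hp * w.diff(x) - eps * w.diff(x, 2)           # A^L applied to w
AFPr = -(Hp * rho).diff(x) - eps * rho.diff(x, 2)   # A^FP applied to rho
gap = sp.integrate(ALw * rho - w * AFPr, (x, 0, 1))
print(sp.simplify(gap))                 # 0: the boundary terms vanish
\end{verbatim}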
\subsection{Boundary conditions}
Now, we address the boundary conditions for \eqref{nonlinearFP} on $\partial \Omega\times [0,T]$.
The discussion of initial conditions is straightforward.
Two common boundary conditions for FP equations are Dirichlet data
and a prescribed flow via Neumann conditions.
Typically, the Dirichlet data vanishes on the boundary. These boundary conditions correspond to the case where particles exit once they reach the boundary.
The prescribed flow case represents a current of particles or agents crossing the boundary.
With a zero flow, the mass is conserved.
Each of these choices
of boundary conditions
determines cancellations in the boundary integrals
in \eqref{aeq}. This suggests different functional spaces for the HJ operator, its linearized version, and its adjoint, the FP operator.
The first case corresponds to a FP equation with Dirichlet boundary conditions:
\begin{equation*}
\left\{\begin{array}{ll}
\rho_t(x,t) - \div(D_p H(x, Du) \ \rho) = \varepsilon \Delta \rho, \hspace{0.15cm} \mbox{in } \Omega \times [0,T],& \\[6pt]
\rho(\cdot,t) = 0, \hspace{3.94cm} \mbox{on } \partial \Omega \times [0,T].&
\end{array}\right.
\end{equation*}
We consider the HJ operator on a functional space with the boundary conditions
\begin{equation*}
\left\{\begin{array}{ll}
- u_t(x,t) + H(x,Du(x,t)) - \varepsilon \Delta u(x,t), \hspace{0.15cm} \mbox{in } \Omega \times [0,T],& \\[6pt]
u(\cdot,t) = g_1(\cdot,t), \ \ \text{for any } g_1, \ \hspace{1.39cm} \mbox{on } \partial \Omega \times [0,T],&
\end{array}\right.
\end{equation*}
and the linearized operator as
\begin{equation*}
\left\{\begin{array}{ll}
- w_t + D_p H(x,Du) \cdot Dw - \varepsilon \Delta w, \hspace{0.15cm} \mbox{in } \Omega \times [0,T],& \\[6pt]
w(\cdot,t) = 0, \hspace{3.25cm} \mbox{on } \partial \Omega \times [0,T].&
\end{array}\right.
\end{equation*}
The second case corresponds to a FP equation with a flux through the boundary
\begin{equation*}
\left\{\begin{array}{ll}
\rho_t(x,t) - \div(D_p H(x, Du) \ \rho) - \varepsilon \Delta \rho(x,t), \hspace{0.15cm} \mbox{in } \Omega \times [0,T],& \\[6pt]
\left(D_p H(x, Du)\cdot n\right) \rho+\varepsilon
\frac{\partial \rho}{\partial n}(x,t) = g_2(x,t), \hspace{0.65cm} \mbox{on } \partial \Omega \times [0,T],&
\end{array}\right.
\end{equation*}
where $g_2$ is the desired in/out-flow through $\partial \Omega$. We can consider diverse boundary conditions for the HJ operator: Dirichlet type, state-constraint, reflection at the boundary, and Neumann type.
In the following example, we use Neumann conditions. The Hamilton-Jacobi operator is
\begin{equation*}
\left\{\begin{array}{ll}
- u_t(x,t) + H(x,Du(x,t)) - \varepsilon \Delta u(x,t), \hspace{0.15cm} \mbox{in } \Omega \times [0,T],& \\[6pt]
\frac{\partial u}{\partial n}(x,t) = 0, \hspace{3.88cm} \mbox{on } \partial \Omega \times [0,T],&
\end{array}\right.
\end{equation*}
with the corresponding linearization
\begin{equation*}
\left\{\begin{array}{ll}
- w_t + D_p H(x,Du) \cdot Dw - \varepsilon \Delta w, \hspace{0.15cm} \mbox{in } \Omega \times [0,T],& \\[6pt]
\frac{\partial w}{\partial n}(\cdot,t) = 0, \hspace{3.075cm} \mbox{on } \partial \Omega \times [0,T].&
\end{array}\right.
\end{equation*}
We do not address the initial conditions for the above operators because we use them only to discretize the HJ generator in space.
A nonlinear FP equation is related to the solution of a stochastic differential equation of McKean-Vlasov type (or mean-field type), see \cite{MR0221595,MR0233437,MR1431299,MR1108185}. More precisely, we consider the stochastic differential equation (SDE)
\begin{equation}\label{Mackeanvlasov}
\left\{\begin{array}{ll}
d X(t)= b(X(t), \rho(X(t),t),t)\,d t+ \sqrt{2\varepsilon}\, d W(t),& \\[6pt]
X(0)= X^0,&
\end{array}\right.
\end{equation}
where $b: \RR^d \times \RR_+ \times \RR_+ \to \RR^d$ is a regular vector-valued function, $X^0$ is a random vector in $\RR^d$, independent of the Brownian motion $W(\cdot)$, with density $\rho_0$, and $\rho(\cdot,t)$ is the density of $X(t)$. It can be shown (see \cite{MR1653393}) that, under certain growth conditions for $b$, \eqref{Mackeanvlasov} admits a unique solution and $\rho$ is the unique classical solution of the nonlinear FP equation
\begin{equation*}\label{nonlinearFP2}
\partial_t \rho -\varepsilon\Delta \rho+ \mbox{div}(b(x,\rho,t) \rho) =0.
\end{equation*}
Therefore, if we set $b(x,\rho,t):=-D_p H(x,Du)$ and impose appropriate boundary conditions, \eqref{Mackeanvlasov} provides a probabilistic interpretation of the optimal trajectories for \eqref{hj1}.
With Dirichlet conditions, those trajectories end at the boundary; for zero-flux conditions, they are reflected, see \cite{Bossy2004}, and \cite{Gobet2001}.
\begin{remark}
Our methods can be extended to study stationary FP equations. In this case, the associated Hamilton-Jacobi operator is stationary. Small modifications can be added to the HJ operator to guarantee the existence of solutions.
\end{remark}
\section{Properties}\label{eee}
In this section, we show that the evolution of an initial density by the FP equation preserves positivity and mass. We use arguments from nonlinear semigroup theory to illustrate how these properties are related to corresponding
properties of the Hamilton-Jacobi equation. The arguments detailed here
are valid without any substantial changes for the discretized problems.
We denote by
$\langle f, g \rangle = \int_{\Omega} f \ g$ the duality product, and by $S_t$ the semigroup associated to the linearized operator \eqref{linearized_eq}. This semigroup preserves order and commutes with constants.
We define the adjoint $S^*_t$ by
\[
\langle S^*_tu, v\rangle= \langle u, S_tv\rangle.
\]
We have then the following results:
\begin{pro}[Positivity]
The evolution of the initial density $\rho_0$ through the adjoint semigroup, $S_t^*$, preserves positivity. Denote by $w_T$ the terminal condition for the linearized operator. Then, for $w_T \geq 0$ and $\rho_0 \geq 0$, we have $S^*_t \rho_0 \geq 0$ for all $t \in [0,T]$.
\end{pro}
\noindent \emph{Proof: } First, note that $w_T \geq 0$ implies $S_t w_T \geq 0$. This follows from the maximum principle for HJ equations. Thus, for $w_T \geq 0$, we have $$ \langle S_t^* \rho, w_T \rangle = \langle \rho, S_t w_T \rangle \geq 0,$$
since $\rho \geq 0$, and $S_t w_T \geq 0$. Accordingly, $S_t^* \rho \geq 0$.
\cqd
\begin{pro}[Conservation of Mass]
Let $\rho_0$ be the initial probability distribution, i.e. $\int_{\Omega} \rho_0 = 1$. Then, for all $t \in [0,T]$, the evolution of this probability measure through the adjoint semigroup, $S_t^* \rho_0$, is also a probability measure.
\end{pro}
\noindent \emph{Proof: } First, observe that $S_t 1 = 1$. Then,
\begin{align*}
\int\limits_{\Omega} S_t^* \rho_0 = \langle S_t^* \rho_0,1 \rangle = \langle \rho_0, S_t 1 \rangle = \langle \rho_0,1 \rangle = \int\limits_{\Omega} \rho_0 = 1.
\end{align*}
\cqd
We conclude this section with some remarks.
\begin{remark}
In the computations of the previous sections, we assume that $Du(x,t)$ does not depend on $\rho$. Further, the relation between a general FP equation whose drift depends on the density, and its associated HJ equation is still a research topic. We do not address this case in the present work. Still, particular cases of drift depending on the density and numeric approaches to solve them are discussed in the literature, see for instance \cite{MR2724518}, and \cite{MR2966923}.
\end{remark}
\begin{remark}
If the viscosity vanishes ($\varepsilon=0$), the same approach is valid. A first-order HJ operator gives rise to a continuity equation (CE), i.e. a FP equation without viscosity. This case is considered in section \ref{mf}, where we extend our numerical scheme to address systems of partial differential equations (PDEs). Those systems arise in multiple applications such as mean-field games (MFG), population models, traffic flow problems, and modeling in chemotaxis.
\end{remark}
\section{Numerical Approach}\label{num}
Our numerical approach relies on the relation between the HJ framework and the corresponding adjoint FP equation. Given a semi-discrete (discrete in space) numerical scheme for \eqref{hj1}, the same scheme can be used to construct a consistent approximation for \eqref{fp1}.
Before proceeding, we define additional notation. To simplify, we consider a scheme for the case where the domain $\Omega$ is $\mathbb T^2$ (2-D torus). Let $\mathbb T^2_{\Delta x}$ be a uniform grid on $\mathbb T^2$ with constant discretization parameter $\Delta x>0$. Let $x_{i,j}$ denote a generic point in $\mathbb T^2_{\Delta x}$. The space of grid functions defined on $\mathbb T^2_{\Delta x}$ is denoted by $\mathcal{G}(\mathbb T^2_{\Delta x})$, and the functions $U$ and $M$ in $\mathcal{G}(\mathbb T^2_{\Delta x})$ (approximations of respectively $u$ and $\rho$)
are called $U_{i,j}$ and $M_{i,j}$, when evaluated at $x_{i,j}$.
We utilize a monotone and consistent semi-discrete numerical scheme $N(x,p):\mathbb T^2_{\Delta x}\times \RR^d\rightarrow \mathbb R$ to approximate the operator $H(x,p)$, such that $U$ is the solution of the ODE
\begin{equation}
U_t=N(x,\mathcal D U),
\end{equation}
where $\mathcal D U$ is a discretization of the gradient operator on $U$.
Thanks to the adjoint structure, we modify this scheme to approximate the solution of \eqref{fp1}. The discrete approximation, $M$, is the solution of the following system of ODEs $$ M_t = K,$$ where
\begin{equation}\label{fp3}
K(x,\mathcal D U,M):=(D_p N(x,\mathcal D U))^T M+\varepsilon \Delta_d M.
\end{equation}
Here, the nonlinear part of the operator corresponds to
the discrete operator $D_p N(x_{i,j},\mathcal D U)$; $\Delta_d M$ is a discretization of the Laplacian. We note that this operator depends on the monotone approximation scheme used to discretize the HJ equation, and can be computed numerically or using a symbolic differentiation operator. This is the case in our examples in section \ref{mf}.
We stress that the features of positivity and mass conservation are valid at the discrete level. This is a consequence of the semigroup arguments in section~\ref{eee}, independently of the manner in which space or time is discretized.
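The following Python sketch illustrates the construction on a 1-D periodic grid; all concrete choices ($H(p)=|p|^2$, the grid size, and a numerical Jacobian standing in for the symbolic differentiation operator) are our own illustrative assumptions. The sign of the dual dynamics is fixed by requiring $\frac{d}{dt}\langle M,W\rangle=0$ when $W_t=AW$, and mass conservation holds by construction because the scheme commutes with additive constants:
\begin{verbatim}
import numpy as np

J = 50; dx = 1.0 / J; eps = 0.01
def N_h(U):                              # monotone upwind scheme for
    Dp = (np.roll(U, -1) - U) / dx       # H(p) = |p|^2, minus eps*Laplacian
    Dm = (U - np.roll(U, 1)) / dx
    lap = (np.roll(U, -1) - 2 * U + np.roll(U, 1)) / dx ** 2
    return np.minimum(Dp, 0) ** 2 + np.maximum(Dm, 0) ** 2 - eps * lap

def linearize(U, h=1e-6):                # numerical stand-in for the
    A = np.empty((J, J))                 # symbolic differentiation of N
    for k in range(J):
        e = np.zeros(J); e[k] = h
        A[:, k] = (N_h(U + e) - N_h(U - e)) / (2 * h)
    return A

U = np.cos(2 * np.pi * dx * np.arange(J))   # any frozen value function
A = linearize(U)
fp_rhs = lambda M: -A.T @ M                 # dual dynamics M_t = -A^T M
M = np.ones(J) / J                          # discrete probability density
print(abs(fp_rhs(M).sum()) < 1e-8)          # True: mass is conserved
\end{verbatim}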
\subsection{Finite Differences}\label{ssct:FinDif}
Now, we consider an explicit scheme using our method. We describe an upwind discretization for the Hamiltonian, which we assume to be
\begin{equation}\label{ham}
H(x,p)=g(x)+|p|^\alpha, \quad \alpha>1.
\end{equation}
We define the standard finite-difference operators as
$$(\mathcal D_1^\pm u)_{i,j}=\pm\frac{u_{i\pm 1,j}-u_{i,j}}{\Delta x}, \hbox{ and } (\mathcal D_2^\pm u)_{i,j}=\pm\frac{u_{i,j\pm 1}-u_{i,j}}{\Delta x},$$
and
$$\Delta_d u=\frac{1}{\Delta x^2}\left( u_{i+1,j}+u_{i,j+1}+u_{i-1,j}+u_{i,j-1}-4 u_{i,j} \right).$$
The approximation of the operator $H(x,p)-\varepsilon \div(p)$ is
\begin{equation*}
N(x,p)=g(x)+G(p_1^-,p_2^+,p_3^-,p_4^+)\\-\varepsilon \left(\frac{p_1-p_2}{\Delta x}+\frac{p_3-p_4}{\Delta x}\right),
\end{equation*}
where for a real number $r$, we define the operators
\begin{equation}\label{eq:monotonicity operators}
r^+ :=\max(0,r), \ \ \ r^-:=\max(0,-r),
\end{equation}
and
$$G(p)=G(p_1,p_2,p_3,p_4):=(p^2_1+p_2^2+p^2_3+p_4^2)^\frac{\alpha}{2}.$$
The operators $r^+$ and $r^-$ are chosen to preserve the monotonicity of the scheme for the HJ operator, which is well defined backward in time.
Now, we compute the operator $K(x,\mathcal D U,M)$,
and we obtain
\begin{multline*}
K(x_{i,j},[\mathcal D U]_{i,j},M_{i,j})= \\[6pt]
\frac{1}{\Delta x} \left[M_{i,j}\frac{\partial N}{\partial p_1}(x_{i,j},[\mathcal D U]_{i,j})-M_{i-1,j}\frac{\partial N}{\partial p_1}(x_{i-1,j},[\mathcal D U]_{i-1,j})\right.\\[6pt]
+M_{i+1,j}\frac{\partial N}{\partial p_2}(x_{i+1,j},[\mathcal D U]_{i+1,j})-M_{i,j}\frac{\partial N}{\partial p_2}(x_{i,j},[\mathcal D U]_{i,j})\\[6pt]
+M_{i,j}\frac{\partial N}{\partial p_3}(x_{i,j},[\mathcal D U]_{i,j})-M_{i,j-1}\frac{\partial N}{\partial p_3}(x_{i,j-1},[\mathcal D U]_{i,j-1})\\[6pt]
\left.+M_{i,j+1}\frac{\partial N}{\partial p_4}(x_{i,j+1},[\mathcal D U]_{i,j+1})-M_{i,j}\frac{\partial N}{\partial p_4}(x_{i,j},[\mathcal D U]_{i,j}) \right] \\[6pt]
+\varepsilon \ \Delta_d M_{i,j}.
\end{multline*}
We use this operator in \eqref{fp3}. This scheme is similar to the one in \cite{AchdouCapuzzo10}.
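For concreteness, a 1-D analogue of this operator can be sketched as follows (illustrative only: $\alpha=2$, the viscosity is written as a single centered term, and the grid parameters are our own choices); an explicit Euler step under a suitable time-step restriction then preserves positivity and total mass, in line with Section \ref{eee}:
\begin{verbatim}
import numpy as np

J = 100; dx = 1.0 / J; eps = 0.01
x = dx * np.arange(J)
U = 0.3 * np.cos(2 * np.pi * x)          # frozen value function
p1 = (np.roll(U, -1) - U) / dx           # D^+ U
p2 = (U - np.roll(U, 1)) / dx            # D^- U
a = 2 * np.minimum(p1, 0)                # dG/dp1, G = (p1^-)^2 + (p2^+)^2
b = 2 * np.maximum(p2, 0)                # dG/dp2

def K(M):
    transport = (M * a - np.roll(M * a, 1)
                 + np.roll(M * b, -1) - M * b) / dx
    lap = (np.roll(M, -1) - 2 * M + np.roll(M, 1)) / dx ** 2
    return transport + eps * lap

M = np.exp(-50 * (x - 0.5) ** 2); M /= M.sum() * dx
dt = 0.1 * dx ** 2 / eps                 # step size respecting positivity
for _ in range(2000):
    M = M + dt * K(M)                    # explicit Euler in time
print(M.min() >= 0.0, round(M.sum() * dx, 8))   # True 1.0
\end{verbatim}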
\subsection{Semi-Lagrangian scheme}
To describe a semi-Lagrangian scheme appropriate to approximate \eqref{ham}, we introduce the operator
\begin{equation}
\mathcal D^{\gamma}u:=\max_{\gamma\in B(0,1)}\frac{\mathcal I[u](x,\gamma)-u(x)}{\Delta x},
\end{equation}
where $B(0,1)$ is the unitary ball in $\RR^2$, and
\begin{equation*}
\mathcal I [u](x,\gamma)=\frac{1}{2}\sum_{i=1}^2 \left( \mathbb I[u](x+\gamma\Delta x +e_i\sqrt{2\varepsilon\Delta x})\right.\\
\left.+\mathbb I[u](x+\gamma\Delta x -e_i\sqrt{2\varepsilon\Delta x})\right).
\end{equation*}
Here, $\mathbb I[u](x)$ is an interpolation operator on the matrix $u$, and $e_i$ is the $i$-th unit vector of an orthonormal basis of the space. The approximation of $H(x,p)-\varepsilon \div(p)$ is then simply
\begin{equation*}
N(x,p)=g(x)+p^\alpha.
\end{equation*}
We take the adjoint of the linearization of $N$,
and use it in \eqref{fp3}, analogously to the finite-difference scheme.
This scheme differs from the one proposed in \cite{carlini2016DGA}, where an estimation on the volumes of the density distribution $M$ was necessary. We note that the operator $N(x,p)$ is monotone, see \cite{falcone2013semi}.
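A minimal 1-D evaluation of the operator $\mathcal D^{\gamma}$ can be sketched as follows (illustrative only: linear periodic interpolation and a uniformly discretized control set replace the exact maximization over $B(0,1)$):
\begin{verbatim}
import numpy as np

J = 100; dx = 1.0 / J; eps = 0.01
x = dx * np.arange(J)
interp = lambda U, q: np.interp(q, x, U, period=1.0)   # linear, periodic

def D_gamma(U, n_controls=21):
    best = np.full(J, -np.inf)
    for g in np.linspace(-1.0, 1.0, n_controls):       # discretized controls
        s = x + g * dx                                 # characteristic foot
        I = 0.5 * (interp(U, s + np.sqrt(2 * eps * dx))
                   + interp(U, s - np.sqrt(2 * eps * dx)))
        best = np.maximum(best, (I - U) / dx)
    return best                          # then N(x, p) = g(x) + p**alpha
\end{verbatim}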
\section{Applications to Systems of PDEs} \label{mf}
One immediate application of our numerical scheme is to solve ``measure-potential'' systems of PDEs. These systems comprise an equation for the evolution of a measure coupled with a second equation for a potential or value function. Typically, this potential determines the drift for the convection in the first equation. Many problems have this structure: mean-field games, traffic-flow models, crowd motion, and chemotaxis. Here, we describe how to use our method in the following examples: two 1-D forward-forward mean-field game (FFMFG) problems and a 2-D crowd motion model.
\subsection{Example: 1-D forward-forward mean-field games}\label{sct:1dmfg}
Here, we consider two one-dimensional forward-forward mean-field game problems, see \cite{AchdouCapuzzo10,MR3575617,GomesSedjroFFMFGCongestion}. The general form of such systems is
\begin{equation}\label{sys:FF MFG}
\begin{cases}
u_t + H(u_x)= \varepsilon u_{xx} + g(\rho), \\[6pt]
\rho_t-(H'(u_x) \rho)_x=\varepsilon \rho_{xx}, \end{cases}
\end{equation}
together with the {\em initial-initial conditions}:
\begin{equation*}\label{ini-ini}
\begin{cases}
u(x,0) = u_0(x), \\[5pt]
\rho(x,0) = \rho_0 (x).
\end{cases}
\end{equation*}
In this example, we use periodic boundary conditions. For the first problem, we set $H(u_x) = \frac{u_x^2}{2}$, $g(\rho) = \ln \rho$, and $\varepsilon=0.01$. We then solve:
\begin{equation}\label{sys:FF MFG Solved1}
\begin{cases}
u_t + \frac{u_x^2}{2} = 0.01 \ u_{xx} + \ln \rho,\\[6pt]
\rho_t-(u_x \rho)_x= 0.01 \ \rho_{xx}. \end{cases}
\end{equation}
We choose the {\em initial-initial conditions}:
\begin{equation*}
\begin{cases}
u_0(x)=0.3 \cos(2\pi x), \\[5pt]
\rho_0(x)=1.
\end{cases}
\end{equation*}
We depict the solution of this problem in Figure~\ref{fig:sol FF MFG log rho}.
\begin{figure}[htb]
\centering
\begin{subfigure}[b]{\sizefigure\textwidth}
\includegraphics[width=\textwidth]{figures/FFCase1Density.pdf}
\caption{Density}
\label{fig:solDensity1Drho}
\end{subfigure}
~
\begin{subfigure}[b]{\sizefigure\textwidth}
\includegraphics[width=\textwidth]{figures/FFCase1ValueFunction.pdf}
\caption{Value function}
\label{fig:solEikonal1Drho}
\end{subfigure}
\caption{Solutions for $g(\rho) = \ln \rho$.}\label{fig:sol FF MFG log rho}
\end{figure}
Now, for the second case, we choose $H(u_x,\rho) = \frac{(p+u_x)^2}{2\rho^\alpha}$, $g(\rho)=\frac 3 2 \rho^\alpha$, and $\varepsilon=0$. This is a first-order FFMFG with congestion, which is equivalent to a system of conservation laws. Setting $v = p + u_x$, the equivalent system is
\begin{equation}\label{sys:FF MFG Solved2V}
\begin{cases}
v_t + \left(\frac{v^2}{2 \rho^\alpha} - \frac{3}{2} \rho^\alpha\right)_x = 0, \\[6pt]
\rho_t- \left( \rho^{1-\alpha}v \right)_x=0. \end{cases}
\end{equation}
For $\alpha=1$, and for the {\em initial-initial conditions}
\begin{equation*}
\left\{\begin{array}{ll}
u_0= -0.5 \ \frac{\cos(2 \pi x)}{2 \pi},& \\[5pt]
\rho_0=1+0.5 \sin(2 \pi x),&
\end{array}\right.
\end{equation*}
the solution for the density in \eqref{sys:FF MFG Solved2V} is a traveling wave; as shown in \cite{GomesSedjroFFMFGCongestion}, and depicted in Figure~\ref{fig:sol FF MFG Travelling Wave}.
\begin{figure}[htb]
\centering
\begin{subfigure}[b]{\sizefigure\textwidth}
\includegraphics[width=\textwidth]{figures/FFMFG2Density.pdf}
\caption{Density}
\end{subfigure}
~
\begin{subfigure}[b]{\sizefigure\textwidth}
\includegraphics[width=\textwidth]{figures/FFMFG2ValueFunction.pdf}
\caption{Value function}
\end{subfigure}
\caption{Solutions for the FFMFG with congestion.}\label{fig:sol FF MFG Travelling Wave}
\end{figure}
Now, we explain how we treated such systems numerically. MFGs have built-in the adjoint structure we consider here. Hence, we can use the same spatial discretization for both the FP and HJ equations. Each of the discretizations requires solving an ODE in time. Since we must solve the system of FP coupled to a HJ equation, we treat these ODEs as a system, and we can apply a suitable solver for the time discretization. In our examples, we use finite differences for the spatial discretization, as in section~\ref{ssct:FinDif}. The simulations corresponding to
Figure \ref{fig:sol FF MFG log rho} and Figure~\ref{fig:sol FF MFG Travelling Wave} were produced with a spatial grid with 80 points, final time $T~=~3$, and 50 sample points in time.
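For illustration, a simplified method-of-lines sketch of system \eqref{sys:FF MFG Solved1} is given below. It is not the code used for the figures: centered differences replace the upwind/adjoint pair, and \texttt{scipy} integrates the resulting ODE system in time; the semi-discrete total mass is nonetheless conserved exactly by the periodic difference operators:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

J = 80; dx = 1.0 / J; x = dx * np.arange(J); eps = 0.01
Dc = lambda v: (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)  # centered d/dx
Lap = lambda v: (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx ** 2

def rhs(t, y):
    u, rho = y[:J], y[J:]
    du = -0.5 * Dc(u) ** 2 + eps * Lap(u) + np.log(np.maximum(rho, 1e-12))
    drho = Dc(Dc(u) * rho) + eps * Lap(rho)
    return np.concatenate([du, drho])

y0 = np.concatenate([0.3 * np.cos(2 * np.pi * x), np.ones(J)])
sol = solve_ivp(rhs, (0.0, 3.0), y0, t_eval=np.linspace(0.0, 3.0, 50))
print(round(sol.y[J:, -1].sum() * dx, 6))    # total mass stays ~ 1
\end{verbatim}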
\subsection{Example: Hughes Model in 2-D}\label{Example_Hughes}
In this example, we present a crowd motion model due to Hughes \cite{hughes2002continuum,hughes2000flow}. The model comprises a FP equation, describing the evolution of the density of pedestrians/agents, coupled to an Eikonal (EK) equation that gives the optimal movement direction. This two-dimensional system is
\begin{equation}\label{Hughes_System}
\begin{cases}
\displaystyle \rho_t(x,t) - \div(\rho(1-\rho)^2 Du)= 0,\\[6pt]
\displaystyle |Du(x)|^2=\frac{1}{(1-\rho)^2},
\end{cases}
\end{equation}
together with an initial condition for the density. The goal is to exit a domain $\Omega$ in minimal time taking into account congestion effects. Due to the stationary character of the EK equation, this system is not of mean-field game type. The density, $\rho$, evolves as if at each instant of time the EK equation sees a frozen density. Then the agents choose the direction that leads to the shortest time to evacuation, and this process determines the evolution of $\rho$.
Now, we describe how the Hughes system fits our framework. Performing the same steps as in section \ref{eaa}, with the HJ operator
\begin{equation}\label{HJ_with_f}
- u_t + f(\rho) H(x,Du) - \varepsilon \Delta u,
\end{equation}
where $f(\rho)$ is a regular function of the density, we obtain the associated FP equation
\begin{equation}\label{FP_with_f}
\rho_t - \div \left( f(\rho) D_pH(x,Du) \rho \right) = \varepsilon \Delta \rho.
\end{equation}
By setting $f(\rho) = (1-\rho)^2$ and $H(x,p) = \displaystyle \frac{|p|^2}{2}$, \eqref{FP_with_f} becomes the first equation of \eqref{Hughes_System}; and \eqref{HJ_with_f} is the adjoint operator we must study. Since the EK equation is a particular case of a HJ equation, we discretize it in space as with the HJ operator associated to the FP equation. In the following example, we use finite differences to discretize the generator of the HJ operator. For the time discretization, we use an explicit Euler method.
The domain is a rectangle $[0,3] \times [0,1]$, with an exit on $[2.25,3] \times \{1\}$, corresponding to a typical proportional size of a door in a room. We set the value of $u$ to $+\infty$ on the whole boundary except at the exit, where we fix it equal to zero. The density is set equal to zero on the boundary.
In contrast with MFG problems, the Hughes model does not have the adjoint structure built-in. Again, the numerical solution of the FP equation requires solving an ODE in time. However, the EK equation must be treated in another way; at each iteration of the solver for the FP equation, we solve the EK equation. We use a fixed-point approach, as described in \cite{MR2218974}. Alternatively, fast marching or policy iteration methods could also be applied. We depict the initial condition and its evolution in Figure~\ref{fig:Hughes_2D}. The spatial grid contains $100$ points, and we choose the final time $T=1.0$.
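The structure of the solver can be sketched in 1-D as follows (illustrative only: a corridor with a single exit at the right end, fixed-point sweeps for the EK equation, and an upwind transport step, for which the velocity points toward the exit; the 2-D configuration used for the figures is analogous):
\begin{verbatim}
import numpy as np

J = 100; dx = 3.0 / J; dt = 2e-4
x = dx * np.arange(J)
rho = np.minimum(0.8, np.exp(-5.0 * (x - 1.0) ** 2))   # initial crowd

def solve_eikonal(rho, sweeps=3):
    c = dx / np.maximum(1.0 - rho, 1e-3)   # local cost 1/(1-rho) per cell
    u = np.full(J, np.inf); u[-1] = 0.0    # u = 0 at the exit
    for _ in range(sweeps):                # fixed-point sweeps
        for i in range(J - 2, -1, -1):
            u[i] = min(u[i], u[i + 1] + c[i])
        for i in range(1, J):
            u[i] = min(u[i], u[i - 1] + c[i])
    return u

for _ in range(int(1.0 / dt)):             # evolve up to T = 1
    u = solve_eikonal(rho)
    v = -(1.0 - rho) ** 2 * np.gradient(u, dx)   # optimal velocity, v >= 0
    F = rho * v
    rho[1:] -= dt * (F[1:] - F[:-1]) / dx  # upwind transport step
    rho[0] -= dt * F[0] / dx
    rho[-1] = 0.0                          # agents leave at the exit
print(round(rho.sum() * dx, 4))            # mass still inside the corridor
\end{verbatim}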
We end this section by remarking that in the last three problems our simulations preserve mass and positivity, as expected.
\begin{figure}[htb]
\centering
\begin{subfigure}[b]{\sizefigure\textwidth}
\includegraphics[width=\textwidth]{figures/hughes2DRhot0.pdf}
\caption{Initial Density.}
\end{subfigure}
~
\begin{subfigure}[b]{\sizefigure\textwidth}
\includegraphics[width=\textwidth]{figures/hughes2DRhotT12.pdf}
\caption{Density at time $0.33$.}
\end{subfigure}
\qquad
\begin{subfigure}[b]{\sizefigure\textwidth}
\includegraphics[width=\textwidth]{figures/hughes2DRhotT1.pdf}
\caption{Density at time $0.5$.}
\end{subfigure}
~
\begin{subfigure}[b]{\sizefigure\textwidth}
\includegraphics[width=\textwidth,scale=0.5]{figures/Hughes2DSRhoT2.pdf}
\caption{Final density at time $1.0$.}
\end{subfigure}
\caption{Evolution of the density for the Hughes model.} \label{fig:Hughes_2D}
\end{figure}
\section{Conclusions}
Here, we develop numerical methods to solve nonlinear Fokker-Planck equations via its adjoint Hamilton-Jacobi operator. Our method preserves mass and positivity, and we use it to solve systems of PDEs with a Fokker-Planck equation coupled to a Hamilton-Jacobi equation. Our methods apply to a broad range of problems with a measure-potential structure that include mean-field games, crowd and traffic models, and chemotaxis.
In future work, we plan to
adapt further schemes developed for HJ equations to the study of FP equations, thus reversing the process that gave rise to effective numerical schemes for HJ equations, such as Discontinuous Galerkin or ENO schemes, originally developed for conservation laws. Nevertheless, it is clear that, without monotonicity and stability properties, convergence results for such schemes are difficult to achieve.
\section{Introduction}\label{intro}
\input{sections/intro}
\section{Background}\label{bg}
\input{sections/background}
\section{Approach}\label{approach}
\input{sections/approach}
\section{Case Study}\label{case}
\input{sections/case}
\section{Discussion}\label{discuss}
\input{sections/discuss}
\section{Related Work}\label{related}
\input{sections/related}
\section{Conclusions and Future Work}\label{conclusion}
User reviews convey client-side requirements for mobile app products. Accurate recovery of user concerns and automatic localization of relevant source code based on this feedback is of great importance to facilitate rapid development. In this paper, we present an approach to localize potential change files based on user reviews for mobile applications. We conducted experiments on 10 popular mobile apps and used a comprehensive set of metrics to assess the performance of our approach. Experimental results show that our approach greatly outperforms the state-of-the-art baseline work.
In the immediate future, we plan to develop comprehensive environment support for change file localization so as to improve the applicability of our approach. Moreover, our current case studies are all about open-source apps, while our future plan includes collaboration with commercial app developers and applying our approach to these industry cases.
\section*{Acknowledgements}
This work was partially supported by the National Key R\&D Program of China (No. 2018YFB1003902), the National Natural Science Fundation of China (NSFC, No.\ 61972197), the Collaborative Innovation Center of Novel Software Technology and Industrialization, and the Qing Lan Project. T. Chen is partially supported by Birkbeck BEI School Project (ARTEFACT), NSFC grant (No.\ 61872340), and Guangdong Science and Technology Department grant (No. 2018B010107004), and an oversea grant from the State Key Laboratory of Novel Software Technology, Nanjing University (KFKT2018A16).
\bibliographystyle{IEEEtran}
\subsection{User review clustering} \label{sec:cluster}
Most user reviews are short textual snippets consisting of multiple sentences. These raw sentences may address different aspects of apps and need to be preprocessed before clustering. Based on their content, the reviews can mainly be classified into four categories, i.e., information giving, information seeking, feature request and problem discovery~\cite{panichella2015can,palomba17}. In particular, ``Information giving'' denotes those sentences that inform or update users or developers about an aspect related to the app; ``information seeking'' denotes those which attempt to obtain information or help; ``feature request'' denotes those expressing ideas, suggestions or needs for improving the app's functionalities and performance; ``problem discovery'' denotes the sentences describing issues with the apps or their unexpected behaviors~\cite{panichella2015can}.
Since our aim is to identify those reviews which are directly relevant to apps' evolution, following \cite{palomba17} we only focus on the last two categories, i.e., feature request and problem discovery. To this end, we first employ ARDOC, a \emph{user review classifier} developed in the previous work~\cite{panichella2015can} which transforms user reviews into individual sentences and then classifies these sentences into one of the aforementioned four categories. We then collect those sentences of the last two categories. ARDOC is built upon the functions of AR-Miner~\cite{Chen:2014:AMI:2568225.2568263}, which can filter noisy and uninformative user reviews in the first place. Such capability contributes another benefit to our approach.
To improve the accuracy of clustering, two tactics are employed, i.e., finer granularity review segmentation and textual processing, which will be elaborated in the following two subsections.
\subsubsection{Fine-grained review segmentation}
Clustering user reviews
is usually conducted at the sentence level.
We observe that, even inside an individual sentence, there still may be multiple topics involved which possibly address quite different concerns. As an example, one user review of AcDisplay reads ``I wish there was a pattern lock feature and a camera shortcut for the lockscreen.'' Apparently, the user prefers two more features (a pattern lock and a shortcut utility). Moreover, for composite sentences in user reviews, if they contain adversative conjunctions such as 'but', the content after `but' usually discloses the real information. As an example from K9-Mail\footnote{https://play.google.com/store/apps/details?id=com.fsck.k9}, one user states that ``This app is good, but it is lacking a key feature for anyone who uses mailing lists: Reply-To-List.'' In this case, for the purpose of localization, the content before `but' is not informative at all, and may introduce noises to the follow-up process. As a result, we propose to have a more fine-grained text analysis. In particular, we split the composite sentences into \textsc{atomic} ones each of \textit{which expresses a single concern only}, and remove the irrelevant part of the sentence.
To achieve that, we employ a statistical parser from the Stanford NLP toolkit\footnote{https://nlp.stanford.edu/software/lex-parser.shtml} to generate grammatical structures of sentences, i.e., phrase structure trees. We then traverse the leaf nodes of the phrase structure tree to determine whether or not the sentence contains conjunctions. Particularly, we focus on two types of conjunctions, i.e., copulative conjunctions and adversative conjunctions. The former (e.g., `and', `as well as' and `moreover') mainly expresses the addition while the latter (e.g., `but', `yet') denotes contrasts.
For the first type, we recursively parse the nodes to identify the layer where the copulative conjunctions are located. We then obtain the copulative conjunction's sibling nodes. The two parts connected by the conjunction may be two sentences, two noun phrases, two verb phrases, etc. Given different conditions, we can generate two atomic sentences based on the parts which are connected by the conjunctions. As a concrete example, if the conjunction `and' connects two noun objectives, then the two objectives are split as the only objective of each atomic sentence, but they share the same subjective and verb. (e.g. I wish there was a pattern lock feature and a camera shortcut for the lockscreen. $\rightarrow$ I wish there was a pattern lock feature for the lockscreen. I wish there was a camera shortcut for the lockscreen). If the conjunction 'and' connects two sentences, then the two sentences will be simply split into two atomic sentences (e.g. There are only 2 things I'd change for a 5 star review; I wish it had audio controls, and I wish there was a camera shortcut from the lock screen. $\rightarrow$ There are only 2 things I'd change for a 5 star review; I wish it had audio controls. There are only 2 things I'd change for a 5 star review; I wish there was a camera shortcut from the lock screen).
For the second type, since we believe that the content after the adversative conjunction conveys the real information, we only preserve the leaf nodes after the conjunction node and simply leave out the other parts.
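A simplified, heuristic stand-in for this traversal is sketched below (illustrative only: the phrase-structure tree is hand-written here instead of being produced by the Stanford parser, and only the case of a copulative conjunction joining two sibling phrases is handled; the full implementation covers the other configurations described above):
\begin{verbatim}
from nltk import Tree

parse = Tree.fromstring(
    "(ROOT (S (NP (PRP I)) (VP (VBP wish) (SBAR (S (NP (EX there))"
    " (VP (VBD was) (NP (NP (DT a) (NN pattern) (NN lock) (NN feature))"
    " (CC and) (NP (DT a) (NN camera) (NN shortcut))))))) (. .)))")

def split_on_cc(tree):
    for pos in tree.treepositions():
        sub = tree[pos]
        if isinstance(sub, Tree) and any(
                isinstance(c, Tree) and c.label() == "CC" for c in sub):
            out = []
            for part in [c for c in sub
                         if isinstance(c, Tree) and c.label() != "CC"]:
                clone = tree.copy(deep=True)
                clone[pos] = part          # one atomic sentence per conjunct
                out.append(" ".join(clone.leaves()))
            return out
    return [" ".join(tree.leaves())]

for s in split_on_cc(parse):
    print(s)
\end{verbatim}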
\subsubsection{Textual processing}
User reviews are generally informal and unstructured, mixed with typos, acronyms and even emojis~\cite{vu2015}.
The noisy data inevitably degrades the performance of clustering and localization which necessitates further textual processing.
We first filter out the emoji characters and other punctuation content. Some emojis which were published as icons are stored in a text format, and their encoding appears as a combination of question marks. Some others also use a combination of common punctuation marks, such as smiley faces. These patterns are matched by using regular expressions. Particularly, we propose two regular expressions to extract the pattern. The first one is "$\backslash\backslash p\left\{P\right\}\backslash\backslash s ^\ast$". It removes all punctuation and replaces it with a space; the second one is "$\left[^\wedge a-zA-Z0-9\backslash\backslash s\right] ^\ast$" which removes non-alphanumeric parts. Furthermore, we also convert all letters to lowercase uniformly.
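A minimal sketch of this cleaning step is given below (illustrative only; the third-party Python \texttt{regex} package stands in for a regular-expression engine that supports Unicode property classes, which the original patterns rely on):
\begin{verbatim}
import regex   # third-party package with Unicode property support

def clean(sentence):
    s = regex.sub(r"\p{P}\s*", " ", sentence)    # punctuation -> space
    s = regex.sub(r"[^a-zA-Z0-9\s]*", "", s)     # drop remaining symbols
    return s.lower().strip()

print(clean("Love it!!! But... it crashes :( 5/5"))
\end{verbatim}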
Given the above steps, sentences are transformed into lists of words (i.e., tokens). We then use the Stanford NLP toolkit\footnote{https://stanfordnlp.github.io/CoreNLP/} to transform the inflected words to their lemmatization form. Here a dictionary-based instead of a rule-based approach is used to convert words into tokens which can avoid over-processing of words. (For instance, ``images'' is transformed correctly to image instead of to imag).
User reviews may contain stopwords that could introduce noise for clustering and need to be removed. We note that the existing English stopword list cannot be well applied here for two reasons: first, a large number of user reviews contain irregular acronyms (e.g., asap--as soon as possible, cuz--cause) which cannot be processed by the existing stopword list. Second, some words are in the regular stopword list, but for specific apps, they may convey important information. For example, some words, such as ``home'', listed in strings.xml which encodes the string literals used by the GUI components, are of this kind. Therefore, we manually edit the English stopword list\footnote{The customized stopword list is also available online with the replication package.} accordingly (e.g., by adding some acronyms commonly used and removing some words that appear in strings.xml). We also delete repeated words and sentences which contain fewer than two words, because in short documents like user reviews, sentences with fewer than two words hardly convey any useful information for evolution purposes.
Note that review segmentation is executed before textual processing, because the textual processing, which includes transforming the inflected words to their lemmatization form, removing stopwords, etc, would affect the grammatical structures of sentences which are crucial for review segmentation.
\subsubsection{User Review Clustering}
Although ARDOC could classify reviews into ``problem discovery'' and ``feature request'', such coarse-grained classification provides limited guidance for developers when confronted with specific maintenance tasks. A more fine-grained approach is highly desirable. Firstly, it is not uncommon that the number of user reviews makes addressing every concern practically infeasible. Therefore, developers would like to identify the most common issues or requests raised by the end users, which are supposed to be treated with higher priority~\cite{villarroel2016release}. Secondly, not all user reviews are meaningful, especially in the problem discovery category. In practice, some complaints are actually caused by users' misunderstanding. By grouping similar issues together, such cases would be easier to identify. Both of these motivate clustering of the pre-processed user reviews.
\medskip
\noindent\textbf{Construction of word-review matrix.}
We adopt the widely used Vector Space Model (VSM~\cite{baeza1999modern}) to represent the pre-processed texts.
We fix a vocabulary $\Sigma$, each word of which represents a feature in our approach.
Let $n=|\Sigma|$ be the size of the vocabulary and $m$ the number of atomic sentences. We first construct a raw matrix $WR_{m\times n}$ where each entry $WR[r,w]$ is the number of occurrences of the word $w$ in the review $r$.
For each word $w\in \Sigma$, let $f_w$ denote the occurrence of $w$ in all reviews, i.e., $f_w:=\sum_{r} WR[r, w]$,
and we use logarithmically scaled document frequency ($df(w)$) as the weight assigned to the corresponding word:
\[
df(w) = \log (1 + f_{w})
\]
Finally we can construct the scaled word-review matrix $R_{m\times n}$, where each entry
\[ R[r,w]:= WR[r,w]\cdot df(w).\]
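For concreteness, the following Python sketch illustrates this construction; the function names and the in-memory representation are ours and not taken from the replication package.
\begin{verbatim}
import numpy as np

def build_matrix(sentences, vocab):
    # sentences: list of token lists; vocab: list of words (the features)
    index = {w: j for j, w in enumerate(vocab)}
    WR = np.zeros((len(sentences), len(vocab)))
    for i, tokens in enumerate(sentences):
        for t in tokens:
            if t in index:
                WR[i, index[t]] += 1         # raw occurrence counts
    df = np.log(1.0 + WR.sum(axis=0))        # df(w) = log(1 + f_w)
    return WR * df                           # R[r,w] = WR[r,w] * df(w)
\end{verbatim}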
We remark that some related work uses the traditional tf-idf as the weighting strategy~\cite{villarroel2016release,adelina17}. However, we use the document frequency (df)~\cite{baeza1999modern} for two reasons: (1) Clustering in our approach is done at the sentence level. In particular, these sentences are short and an individual word usually occurs only once, so tf would be meaningless for clustering in most cases.
(2) In general, the purpose of idf is to give less weight to common words than to less common ones. In our preliminary experiments, we found that some words which appear only once do not yield useful suggestions for developers, because they only represent personal feedback/opinions and lack a general ground; yet they carry a high idf simply because they are rare. On the other hand, the words which can indeed represent common issues encountered, or new functions required, by a majority of users carry low weights. To offset this, we adopt df rather than the more common tf-idf. Besides, one of the strong reasons to use idf is to reduce the weight of stopwords, which have already been removed in the data preprocessing steps.
Due to the large number of user reviews and the short nature of individual atomic sentences, the word vectors are of very high dimension but very sparse. To reduce the dimension, we use the principal component analysis (PCA) technique~\cite{cohen14,ding04}, which is one of the most widely used techniques for dimension reduction. Essentially, PCA replaces the original $n$ features with a (usually much smaller) number $r$ of features. The new features are linear combinations of the original ones that maximize the sample variance and try to make the new $r$ features uncorrelated. The conversion between the two feature spaces captures the inherent variability of the data. Finally, the resulting matrix of $m\times r$ dimension gives rise to the data set $D$, which is the collection, as vectors, of the rows of the matrix.
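A minimal sketch of this reduction step, assuming the scikit-learn library is available; the number of components $r$ is a free parameter not fixed by the text above.
\begin{verbatim}
from sklearn.decomposition import PCA

def reduce_dimension(R, r=100):
    # project the m x n df-scaled matrix onto r principal components
    return PCA(n_components=r).fit_transform(R)   # the m x r data set D
\end{verbatim}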
\medskip
\noindent\textbf{COP-Kmeans.}
After obtaining the vector models, we are in a position to cluster similar texts based on their content. Existing approaches mainly employ fully automatic clustering algorithms to divide the reviews into multiple groups. However, we postulate that clustering would benefit from leveraging domain knowledge about the mobile app dataset: by investing limited human effort, the performance of clustering can be further boosted. For example, two reviews of AcDisplay state ``I do wish you could set a custom background, though.'' and ``Would be nice to be able to customize the wallpaper too.'' For traditional clustering algorithms, since the two keywords (i.e., background and wallpaper) are quite different in regular contexts, these two sentences would have a very low similarity score and thus be clustered into two different categories. However, professional developers would easily recognize that ``wallpaper'' and ``background'' refer to similar things in UI design, which suggests that the two reviews address the same issue and should be put into the same cluster.
On the other hand, some reviews might address quite irrelevant issues using the same words. For example, again in AcDisplay, two reviews read: ``I would love the option of having different home screen.'' and ``First I'd like to suggest to disable that home button action because it turns the lock screen off ..., I hope you do it in next update.''. These two reviews have completely different meanings, but since they both contain the keywords ``home'' and ``screen'', they are very likely to be clustered together by traditional clustering algorithms.
Domain knowledge of developers could thus improve the precision of clustering, but it has not been exploited by traditional clustering algorithms. To remedy this shortcoming, we annotate a subset of instances with two types of link information, i.e., must-link and cannot-link constraints, as a priori knowledge, and then apply the constrained K-means clustering technique~\cite{wagstaff01}.
The must-link constraints specify the instance pairs that discuss semantically similar or the same concerns, judged by professional developers with rich development expertise. Likewise, the cannot-link constraints specify the instance pairs that are \emph{not} supposed to be clustered together.
Besides, the must-link constraints define a transitive binary relation over the instances~\cite{wagstaff01}.
When making use of the constraints (of both kinds), we take a transitive closure over them. (Note that although only the must-link constraints are transitive, the closure is performed over both kinds because, e.g., if $d_i$ must link to $d_j$ and $d_j$ cannot link to $d_k$, then we also know that $d_i$ cannot link to $d_k$.)
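This closure can be computed with a standard union-find pass; the sketch below is our own helper, not part of the replication package. It merges must-linked instances into components and lifts the cannot-links to the component level.
\begin{verbatim}
def close_constraints(must, cannot):
    # must, cannot: iterables of (a, b) pairs of instance ids
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    for a, b in must:
        parent[find(a)] = find(b)           # must-link is transitive
    # a cannot-link between two members extends to their whole components
    return {frozenset((find(a), find(b))) for a, b in cannot}
\end{verbatim}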
To use the K-means family of algorithms, one needs to determine the value of the hyper-parameter $K$. There are some traditional, general-purpose approaches~\cite{hamerly03,tibshi00,pelleg00}, but they do not take the topic distribution into consideration and hence cannot provide a satisfactory solution in our setting. We instead use a heuristic-based method to infer $K$. The heuristic is derived from the n-gram model of the review texts, since we believe the cluster number should strongly correlate with the topic distribution. An n-gram is a sequence of $n$ words in a particular sentence; the n-gram model is a widely adopted statistical model which predicts the occurrence of the $n$-th word from its previous $n-1$ words, based on the probability distribution of these words.
Concretely, we obtain the 2-gram phrases of all user reviews. Then we merge identical phrases and record the number of occurrences of each phrase. If two phrases share a word, the less frequent phrase is deleted. We also delete the phrases which occur only once. $K$ is then set to the number of the remaining phrases.
(2-gram is used as we empirically found that this yields the best performance.)
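The following sketch captures this heuristic; the tie-breaking order among equally frequent phrases is our choice and is left unspecified in the text.
\begin{verbatim}
from collections import Counter

def infer_k(sentences):
    grams = Counter()
    for tokens in sentences:
        grams.update(zip(tokens, tokens[1:]))     # 2-grams per sentence
    kept = []
    for g, c in sorted(grams.items(), key=lambda x: -x[1]):
        if c <= 1:
            continue                              # drop phrases occurring once
        if all(not set(g) & set(h) for h in kept):
            kept.append(g)                        # drop less frequent phrases
                                                  # sharing a word with kept ones
    return len(kept)
\end{verbatim}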
The COP-Kmeans algorithm takes the must-link and cannot-link dataset, $K$ value and atomic sentence vectors as input and produces
the clustering results. The pseudo-code
is given in Algorithm~\ref{al1}. First, it randomly selects $k$ samples $\left\{\mu_1,\ldots,\mu_k\right\}$ from the data set $D$ as the initial cluster centers. Then, for each sample $x_i$ in $D$, it assigns $x_i$ to the closest cluster $C_j$ such that the assignment does not violate any constraint in $M$ and $C$; if no such cluster exists, an error message is returned (lines 4--21). Then, for each cluster $C_j$, it updates the centroid by averaging all of the points $x\in C_j$ (lines 22--24). This process iterates until the mean vectors no longer change.
\begin{algorithm}
\SetKwInOut{Input}{\textbf{Input}}\SetKwInOut{Output}{\textbf{Output}}
\Input{
\\
The Data set $D = \left\{x_1,x_2,...,x_m\right\}$\;\\
The Must-link constraints $M$\;\\
The Cannot-link constraints $C$\;\\
The K-value $k$\;\\}
\Output{
\\
The clustering results $\left\{C_1,C_2,...,C_k\right\}$\;\\}
\BlankLine
Randomly select $k$ samples $\left\{\mu_1,\mu_2,...,\mu_k\right\}$ from $D$ as the initial cluster centers\;
\Repeat
{\text{Mean vectors are no longer updated}}
{$C_j = \varnothing(1 \leq j \leq k)$\;
\For {$i = 1,2,...,m$}{
Calculate the distance between the sample $x_i$ and each mean vector $\mu_j(1 \leq j \leq k):d_{ij}= \left \|x_i-\mu_j \right \|_2$\;
$\mathcal{K} = \left\{ 1,2,...,k\right\}$\;
is\_merged = false\;
\While{$\urcorner$ is\_merged}{
Find the cluster closest to the sample $x_i$ based on $\mathcal{K}$: $r = \arg\min_{j \in \mathcal{K}}d_{ij}$\;
Detect whether classifying $x_i$ into cluster $C_r$ violates any constraint in $M$ and $C$, and set is\_violated accordingly\;
\uIf{$\urcorner$ is\_violated}{
$C_r = C_r \cup \left\{ x_i\right\}$\;
is\_merged=true\;
}
\Else{
$\mathcal{K} = \mathcal{K} \setminus \left\{ r\right\}$\;
\If{$\mathcal{K} = \varnothing$}{
\textbf{Break} Return error message\;
}
}
}
}
\For {$j = 1,2,...,k$}{
$\mu_j = \frac{1}{\left|C_j\right|}\sum\limits_{x\in C_j}{x}$\;
}
}
\caption{Constrained K-means Algorithm\label{al1}}
\end{algorithm}
\subsection{Change file localization} \label{sec:localization}
For localizing potential change files, our approach combines information from both the commit messages and the source code. To get the commit messages of mobile apps, we exploit the open-source projects
to collect (i) the title, (ii) the description, (iii) the set of files involved, and (iv) the timestamp of each commit. For source code, we mainly use the file path, class summary, method summary, method names and field declarations. Class and method summaries can be extracted based on the Javadoc tags. Method names and field declarations are parsed through abstract syntax tree (AST) analysis.
In both cases, we remove non-textual information, split identifiers based on camel case style, convert letters to lowercase, stem, and remove stopwords and repeated words.
Finally, the bag-of-words (BoW) models of the target app's source code and commit messages are generated respectively.
\subsubsection{Tag Source Code Files} \label{sect:tag}
As mentioned earlier, we propose to leverage historical commit information to bridge the semantic gap between user reviews and source code. To this end, we first tag the source code with the historical change information. In particular, for each commit, we extract the title, description, timestamps, and the involved file paths. From the file paths, we traverse the corresponding source code files in the project, and all the collected information, i.e., the title, description, and timestamps, is attached to the source file. As a result, each source code file can be regarded as a pair,
\[file=(code, commit)\]
where both $code$ and $commit$ are bag of words.
Fig.~\ref{fig:commit} shows a commit example from AcDisplay. We extract the title, description and timestamps (in the blue rectangle) and the relevant file paths (in the red rectangle). All the files are tagged with this information. In this step we only consider the source code files and their related commit messages; the irrelevant commits (e.g., those that do not involve source code file changes and are usually associated with `.html', `.properties', or `.md' files) are removed in the first place.
\begin{figure}[h]
\centering
\centering
\includegraphics[width=8.8cm, height=5.8cm]{figs/tags}
\vspace{-3mm}
\caption{Commit Message Illustration} \label{fig:commit}
\end{figure}
\subsubsection{Localization}
\noindent\textbf{Similarity Computation}.
As mentioned earlier, due to the semantic gap between natural language and programming language, the direct similarity matching cannot precisely localize potential change files. We introduce the commit information to bridge the gap. Therefore, the similarity is attributed to the following two parts:
\begin{itemize}
\item the similarity between the user review and the code components extracted from one class of the target app;
\item the similarity between the user review and the commit tags of one class whose timestamps are earlier than the user review.
\end{itemize}
Palomba et al.~\cite{palomba15} used the asymmetric Dice coefficient~\cite{baeza1999modern} to compute a textual similarity between a user review and a commit, as well as between a user review and an issue. Since user reviews are usually much shorter than source code files and commits, asymmetric Dice coefficient based similarity measures are usually employed (as opposed to alternatives such as the cosine similarity or the Jaccard coefficient~\cite{jaccard1901etude}). However, the original asymmetric Dice coefficient treats all words equally and ignores the fact that some words occur more frequently than others. Hence, we introduce a weighted asymmetric Dice coefficient as follows:
\begin{equation} \label{eq:sim1}
sim(r_i,code_j) = \frac{\sum\limits_{w_k\in W_{r_i}\cap W_{code_j}} df_{w_k} }{\min \left(\sum\limits_{w_r\in W_{r_i}}df_{w_r}, \sum\limits_{w_c\in W_{code_j}}df_{w_c}\right)}
\end{equation}
where $W_{r_i}$ is the set of words within the review $r_i$, $W_{code_j}$ is the set of words within the code components of class $j$, $df_{w_k}$ represents the document frequency (df) of the word $w_k$,
and the $\min(\cdot,\cdot)$ function returns the smaller of its two arguments.
In \eqref{eq:sim1}, we use the df value of a word as its weight.
The intuition is that the more frequently a word occurs, the more important the word is.
The similarity between a user review and commit tags is computed analogously, by replacing $W_{code_j}$ by $W_{commit_j}$ as shown in \eqref{eq:sim2}, where $W_{commit_j}$ is the set of words within the commit tags of class $j$.
\begin{equation} \label{eq:sim2}
sim(r_i,commit_j) = \frac{\sum\limits_{w_k\in W_{r_i}\cap W_{commit_j}} df_{w_k} }{\min\left(\sum\limits_{w_r\in W_{r_i}}df_{w_r}, \sum\limits_{w_c\in W_{commit_j}}df_{w_c}\right)}
\end{equation}
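A direct transcription of \eqref{eq:sim1} and \eqref{eq:sim2} as a Python sketch; the word sets and the df table are assumed to come from the preprocessing steps described above.
\begin{verbatim}
def weighted_dice(review_words, doc_words, df):
    # review_words, doc_words: sets of words; df: word -> df weight
    shared = review_words & doc_words
    num = sum(df[w] for w in shared)
    den = min(sum(df[w] for w in review_words),
              sum(df[w] for w in doc_words))
    return num / den if den else 0.0
\end{verbatim}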
\medskip
\noindent\textbf{Dynamic Interpolation Weights}.
The similarity score between user reviews and source code files is calculated by a linear combination of the similarity score between the reviews and the source code contained in the files and the one between the reviews and the commit messages associated with the files (cf.\ Section~\ref{sect:tag}).
However, in the initial stage of the project life cycle, there is not enough commit information, which is reminiscent of the cold-start problem.
During the course of the project, commit messages accumulate. In light of this,
we dynamically assign the weights to the two parts, inspired by dynamic interpolation weights~\cite{tu14,knight09}:
\[
sim(r_i,file_j) = \frac{L-\gamma}{L}sim(r_i,code_j) + \frac{\gamma}{L}sim(r_i,commit_j)
\]
where $\gamma$ is the number of common words which appear in both the user review $r_i$ and the commit tags $commit_j$, and $L$ is the number of words in the user review $r_i$. We use $L$ instead of a concentration parameter because $\gamma$ is bounded above by $L$. Based on the above equation, if $class_j$ does not have enough commit tags (when $\gamma$ is small), then the code components of $class_j$ are preferred, which copes with the cold-start problem in which there are few or even no commits at the beginning of the project life cycle. As the number of commit tags grows (when $\gamma$ is large), the commits are preferred. This strategy gradually increases the weight of the commit messages in the similarity calculation over time.
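Combining the pieces, the localization score can be sketched as follows, reusing the hypothetical \texttt{weighted\_dice} helper defined above:
\begin{verbatim}
def review_file_similarity(review_tokens, code_words, commit_words, df):
    L = len(review_tokens)                          # words in the review
    gamma = len(set(review_tokens) & commit_words)  # overlap with commit tags
    r = set(review_tokens)
    return ((L - gamma) / L) * weighted_dice(r, code_words, df) \
         + (gamma / L) * weighted_dice(r, commit_words, df)
\end{verbatim}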
\subsection{Discussion}
Identifying meaningful user reviews from app markets is a non-trivial task, since a majority of them are not informative. Furthermore, linking and localizing potential change files based on those meaningful feedbacks is highly desirable for software developers. Compared with state-of-the-art baseline work like ChangeAdvisor, RISING gives more fine-grained clustering results and more accurate localization performance. Specifically, after a closer qualitative analysis of the clusters generated by both approaches, we found interesting characteristics of the clusters generated by RISING when compared to those of ChangeAdvisor. First of all, ChangeAdvisor tends to discard a wide majority of reviews, clustering only a small subset of informative reviews. For instance, if we consider the app AcDisplay, we observe that the total number of reviews included in all generated clusters, considering both the feature request and problem discovery categories, is 150. This value is drastically smaller than the number of informative reviews composing the clusters generated by RISING for this app, i.e., 2,053 reviews. As a consequence, the number of clusters for ChangeAdvisor tends to be very small for most projects compared to our approach, as reported in Table \ref{tab:cluster}. On the other hand, the sizes of the clusters generated by RISING tend to be more balanced, i.e., not too large. Indeed, for RISING, the average number of reviews for each cluster tends to be smaller (11--12 reviews on average vs. 18--24 for ChangeAdvisor). In our experiments, we also observe that distinct runs of ChangeAdvisor give noticeably different clustering results, making the clustering less stable, less deterministic and less accurate (the clusters tend to be less semantically related). The clustering result of RISING is much more stable.
To see this, we made two new runs of RISING and ChangeAdvisor for the app AcDisplay. The results show that the size of the clusters generated by RISING remains very balanced, i.e., not too large, with a similar number of clusters and almost the same average number of reviews per cluster (11--12 reviews on average). Conversely, for the same app ChangeAdvisor still produces very large clusters. Interestingly, in the ChangeAdvisor re-runs, the number of clusters was reduced in one case and increased in the other, with in some cases higher or similar average numbers of reviews per cluster.
In the localization phase, RISING leverages the commit information to bridge the lexical gap. Note that the commit history contains all the relevant files for a change transaction, including not only source files but also configuration-related files (such as XML files). Our approach is thus able to locate multiple files which are necessary for a problem fix or feature request. In contrast, ChangeAdvisor does not take into account the association between files, and would miss, for instance, configuration files.
\subsection*{Threats to Validity}
\noindent {\bf Internal validity.}
We conclude that, with domain knowledge, marginal human effort can greatly boost the clustering performance. Such effectiveness has already been demonstrated in various scenarios~\cite{bilenko04,basu08}. In the clustering phase, we only annotate a small portion of the whole review set with must-link and cannot-link constraints, reducing the threat of over-fitting. The recovery of missing traceability links between various software artefacts has also been actively studied in the literature~\cite{Antoniol:tse2002}. Commit messages contain rich information about the change history and the motivation of the change itself; this information thus helps bridge the vocabulary gap between professional developers and ordinary users.
Another threat arises from the selection bias of the dataset. In our experiments, we strive to reuse as many of the apps from the baseline work as possible.
To reduce the noise from the raw data and the bias in the results, we take standard measures to pre-process the raw texts, and involve five professional developers with over three years of mobile app development experience to resolve subjective conflicts.
\medskip
\noindent{\bf External validity.}
In our case study, we deliberately selected 10 apps across different categories instead of being limited within a narrow domain. To give a fair comparison, we use a combination of multiple evaluation metrics, including both objective and subjective ones.
Similar to other empirical studies, no evidence can theoretically prove that our approach always accurately localizes change files in all scenarios. However, our approach is open to different scenarios: domain knowledge can be leveraged via new constraints and heuristics incorporated into our approach, which could improve the clustering and localization performance on new datasets as well.
Finally, even if the underlying assumption of our work is that commit messages contribute to increasing the number of common terms matching the user review vocabulary, there could be a mismatch between the commit message vocabulary and the user review vocabulary too. For instance, commit message terms like bug ids (\textit{``this commit fixes bug X''}) are not present in user reviews. Hence, for future work, we plan to investigate the potential mismatch between such vocabularies, with the goal of improving the results of our approach.
\section{A brief survey of word embedding and related techniques}
Everything starts from the \emph{distributional hypothesis} (Harris, 1954): ``a feature is characterized by the words in its context''.
Low-dimensional vector embeddings are a popular approach for capturing the ``meaning'' of text and a form of unsupervised learning useful for downstream tasks.
We focus on linear embedding schemes.
\subsection{Word embedding}
Distributional word embeddings, which represent the ``meaning'' of a word via a low-dimensional vector, have been widely applied by many NLP pipelines and algorithms.
There are two representative lines of work:
\begin{itemize}
\item the neural approach (word2vec; Mikolov et al., NIPS 2013)
\item the matrix factorization approach (GloVe; Pennington et al., EMNLP 2014)
\end{itemize}
Word2vec is a group of related models that are used to produce word embeddings. These models are shallow, two-layer neural networks that are trained to reconstruct linguistic contexts of words. Word2vec takes as its input a large corpus of text and produces a vector space, typically of several hundred dimensions, with each unique word in the corpus being assigned a corresponding vector in the space. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located in close proximity to one another in the space.
Word2vec was created by a team of researchers led by Tomas Mikolov at Google. The algorithm has been subsequently analysed and explained by other researchers. Embedding vectors created using the Word2vec algorithm have many advantages compared to earlier algorithms such as latent semantic analysis.
\textbf{Software.} Software for training and using word embeddings includes Tomas Mikolov's Word2vec, Stanford University's GloVe, AllenNLP's ELMo, fastText, Gensim, Indra and Deeplearning4j. Principal Component Analysis (PCA) and T-distributed Stochastic Neighbour Embedding (t-SNE) are both used to reduce the dimensionality of word vector spaces and to visualize word embeddings and clusters.
\textbf{CBOW and skip grams}
Word2vec can utilize either of two model architectures to produce a distributed representation of words: continuous bag-of-words (CBOW) or continuous skip-gram. In the continuous bag-of-words architecture, the model predicts the current word from a window of surrounding context words. The order of context words does not influence prediction (bag-of-words assumption). In the continuous skip-gram architecture, the model uses the current word to predict the surrounding window of context words. The skip-gram architecture weighs nearby context words more heavily than more distant context words. According to the authors' note, CBOW is faster while skip-gram is slower but does a better job for infrequent words.
\paragraph{Basic principle of word2vec}
The unsupervised skip-gram model learns vector representations of words that are useful for predicting the surrounding words in a sentence.
The vector representation of $w_t$ is used as the parameter vector of a binary logistic regression model $P[w_k\in C_t\mid w_t]$
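In the common negative-sampling parametrization (our notation here; the original formulation uses separate input and output vectors), this probability reads
\[
P[w_k\in C_t\mid w_t] = \sigma\big(v_{w_k}^{\top} v_{w_t}\big),
\qquad \sigma(x)=\frac{1}{1+e^{-x}}\ .
\]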
\subsection{Extensions}
Word embeddings computed using diverse methods are basic building blocks for Natural Language
Processing (NLP) and Information Retrieval (IR).
Following the success of word embeddings, researchers seek to \textbf{extend them to other text features, from subword elements to n-grams to sentences}.
Issues: the performance of both word embeddings and their extensions is known to degrade \textbf{in small-corpus settings} or when embedding sparse, low-frequency features.
For computational efficiency it is desirable that methods be able to induce embeddings for only those features (e.g. bigrams or synsets) needed by the downstream task, rather than having to pay a computational prix fixe to learn embeddings for all features occurring frequently enough in a corpus.
\paragraph{À la carte embedding} A method which bootstraps existing high-quality word vectors to learn a feature representation in the same semantic space via a linear transformation of the average word embeddings in the feature's available contexts.
Assume a large text corpus $C_V$ consisting of contexts $c$ of words $w$ in a vocabulary $V$, with contexts themselves being sequences of words in $V$ (e.g. a fixed-size window around the word or feature). We further assume that we have trained word embeddings $v_w\in R^d$ on this collocation information using a standard algorithm (e.g. word2vec or GloVe). The goal is to construct a good embedding $v_f\in R^d$ of a text feature $f$ given a set $C_f$ of contexts it occurs in. (Both $f$ and its contexts are assumed to arise via the same process that generates the large corpus $C_V$.) In many settings, the number $|C_f|$ of contexts available for a feature $f$ of interest is much smaller than the number $|C_w|$ of contexts that a typical word $w\in V$ occurs in. This could be because the feature is rare or due to limited human annotation.
Additive Approach: take the average over all contexts of a feature $f$ of the average word vector in each context
\[v_f:=\frac{1}{|C_f|} \sum_{c\in C_f} \frac{1}{|c|}\sum_{w\in c} v_w \]
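A sketch of this additive baseline; the vector lookup table and the context lists are assumed given.
\begin{verbatim}
import numpy as np

def additive_embedding(contexts, vecs):
    # contexts: list of contexts, each a list of words; vecs: word -> vector
    ctx_means = [np.mean([vecs[w] for w in c], axis=0) for c in contexts]
    return np.mean(ctx_means, axis=0)       # average of context averages
\end{verbatim}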
A natural extension is to use an arbitrary linear transformation, which is learned from the data and hence guaranteed to do at least as well as the additive approach.
This can be posed as the following linear regression problem:
\[v_w\approx Av_w^a = A\left(\frac{1}{|C_w|} \sum_{c\in C_w}\frac{1}{|c|}\sum_{w'\in c} v_{w'}\right)\]
After learning the matrix $A$, we can embed any text feature in the same semantic space as the word embeddings via $v_f := A\, v_f^a$, where $v_f^a$ is the corresponding average over the contexts in $C_f$.
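One way to fit $A$ is ordinary least squares; in the sketch below the rows of \texttt{V} are word vectors $v_w$ and the rows of \texttt{Vavg} the corresponding context averages $v_w^a$.
\begin{verbatim}
import numpy as np

def learn_transform(V, Vavg):
    # solve min_A || Vavg @ A.T - V ||_F, i.e. v_w ~ A v_w^a
    At, *_ = np.linalg.lstsq(Vavg, V, rcond=None)
    return At.T

def embed_feature(A, v_f_avg):
    return A @ v_f_avg                      # v_f = A v_f^a
\end{verbatim}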
\subsection{A SIMPLE BUT TOUGH-TO-BEAT BASELINE FOR SENTENCE EMBEDDINGS}
\paragraph{Phrase/Sentence/Paragraph embeddings.}
Previous works have computed phrase or sentence
embeddings by
\begin{itemize}
\item composing word embeddings using operations on vectors and matrices e.g.,
(Mitchell \& Lapata, 2008; 2010; Blacoe \& Lapata, 2012). They found that coordinate-wise multiplication of the vectors performed very well among the binary operations studied. Unweighted averaging is also found to do well in representing short phrases (Mikolov et al., 2013a).
\item Another approach is recursive neural networks (RNNs) defined on the parse tree, trained with supervision (Socher et al., 2011) or without (Socher et al., 2014). Simple RNNs can be viewed as a special case where the parse tree is replaced by a simple linear chain. For example, the skip-gram model (Mikolov et al., 2013b) is extended to incorporate a latent vector for the sequence, or to treat the sequences rather than the words as basic units. In (Le \& Mikolov, 2014) each paragraph was assumed to have a latent paragraph vector which influences the distribution of the words in the paragraph. Skip-thought (Kiros et al., 2015) tries to reconstruct the surrounding sentences from the one they surround and treats the hidden parameters as its vector representation. RNNs using long short-term memory (LSTM) capture long-distance dependencies and have also been used for modeling sentences (Tai et al., 2015). Other neural network structures include convolutional neural networks, such as (Blunsom et al., 2014), which uses dynamic pooling to handle input sentences of varying length and does well in sentiment prediction and classification tasks.
\end{itemize}
The sentence embedding algorithm:
Input: word embeddings $\{v_w\mid w\in V\}$, a set of sentences $S$, a parameter $a$, and estimated probabilities $\{p(w) \mid w\in V\}$ of the words. Output: sentence embeddings $\{v_s\mid s\in S\}$.
\[v_s:= \frac{1}{|s|}\sum_{w\in s} \frac{a}{a+p(w)} v_w\]
Form a matrix $X$ whose columns are $\{v_s\mid s\in S\}$, let $u$ be its first singular vector, and then remove from each $v_s$ its projection onto $u$, i.e., $v_s \leftarrow v_s - uu^{\top}v_s$.
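A compact rendering of the whole algorithm as a sketch; the default $a=10^{-3}$ is a typical choice from the literature, not fixed by the text above, and every word is assumed to be present in the lookup tables.
\begin{verbatim}
import numpy as np

def sif_embeddings(sentences, vecs, p, a=1e-3):
    # sentences: token lists; vecs: word -> vector; p: word -> probability
    V = np.array([np.mean([a / (a + p[w]) * vecs[w] for w in s], axis=0)
                  for s in sentences])
    u = np.linalg.svd(V.T, full_matrices=False)[0][:, 0]  # 1st sing. vector
    return V - np.outer(V @ u, u)       # remove the common component
\end{verbatim}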
The authors compare their method with the following
\begin{itemize}
\item Unsupervised: ST, avg-GloVe, tfidf-GloVe. ST denotes the skip-thought vectors (Kiros
et al., 2015), avg-GloVe denotes the unweighted average of the GloVe vectors (Pennington
et al., 2014) and tfidf-GloVe denotes the weighted average of GloVe vectors using TF-IDF weights.
[In information retrieval, tf–idf or TFIDF, short for term frequency–inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in searches of information retrieval, text mining, and user modeling. The tf–idf value increases proportionally to the number of times a word appears in the document and is offset by the number of documents in the corpus that contain the word, which helps to adjust for the fact that some words appear more frequently in general.]
\item Semi-supervised: avg-PSL. This method uses the unweighted average of the PARAGRAMSL999
(PSL) word vectors from (Wieting et al., 2015). The word vectors are trained using
labeled data, but the sentence embedding are computed by unweighted average without
training.
\item Supervised: PP, PP-proj., DAN, RNN, iRNN, LSTM (o.g.), LSTM (no). All these methods
are initialized with PSL word vectors and then trained on the PPDB dataset. PP and PPproj.
are proposed in (Wieting et al., 2016). The first is an average of the word vectors, and
the second additionally adds a linear projection. The word vectors are updated during the
training. DAN denotes the deep averaging network of (Iyyer et al., 2015). RNN denotes the
classical recurrent neural network, and iRNN denotes a variant with the activation being the
identity, and the weight matrices initialized to identity. The LSTM is the version from (Gers
et al., 2002), either with output gates (denoted as LSTM(o.g.)) or without (denoted as
LSTM (no)).
\end{itemize}
\subsection{A framework of text embedding}
Let $V$ be the number of words in the vocabulary and $V_n$ be the number of $n$-grams (independent of word order), so that $V=V_1$.
\textbf{Bag-of-n-grams vector} Assigning to each word a unique index $i\in [V]$, we define the BoW representation of a document to be the $V$-dimensional vector whose $i$-th entry is the number of times word $i$ occurs in the document.
The n-gram extension is the Bag-of-n-grams (BonG) representation, which counts the number of times any $k$-gram for $k\leq n$ appears in a document.
\begin{itemize}
\item unigram embedding
\end{itemize}
Useful references:
John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. Towards universal paraphrastic
sentence embeddings. In International Conference on Learning Representations, 2016.
\section{document2word}
The algorithms use either hierarchical softmax or negative sampling; see Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean: "Efficient Estimation of Word Representations in Vector Space, in Proceedings of Workshop at ICLR, 2013" and Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean: "Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of NIPS, 2013".
For a usage example, see the Doc2vec tutorial.
https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/doc2vec-lee.ipynb
Make sure you have a C compiler before installing Gensim, to use the optimized doc2vec routines (70x speedup compared to plain NumPy implementation, https://rare-technologies.com/parallelizing-word2vec-in-python/).
A gentle introduction to Doc2Vec:
https://medium.com/scaleabout/a-gentle-introduction-to-doc2vec-db3e8c0cce5e
Currently: many machine learning algorithms require the input to be represented as a \textbf{fixed-length feature vector}. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, \textbf{bag-of-words features} have two major weaknesses: they lose the ordering of the words and they ignore the semantics of the words. For example, ``powerful'', ``strong'' and ``Paris'' are equally distant.
Now: \emph{Paragraph Vector}, an unsupervised algorithm that learns \textbf{fixed-length feature representations} from variable-length pieces of texts, such as sentences, paragraphs, and documents. The algorithm represents each document by a dense vector which is trained to predict words in the document.
Even though bag-of-n-grams considers the word order in short contexts, it suffers from data sparsity and high dimensionality. Bag-of-words and bag-of-n-grams have very little sense of the semantics of the words or, more formally, of the distances between the words. This means that the words ``powerful'', ``strong'' and ``Paris'' are equally distant despite the fact that, semantically, ``powerful'' should be closer to ``strong'' than to ``Paris''.
\cite{LM14} proposed Paragraph Vector, an unsupervised
framework that learns continuous distributed vector
representations for pieces of texts. The texts can be of
variable-length, ranging from sentences to documents. The
name Paragraph Vector is to emphasize the fact that the
method can be applied to variable-length pieces of texts,
anything from a phrase or sentence to a large document.
Following these successful techniques, researchers have
tried to extend the models to go beyond word level
to achieve phrase-level or sentence-level representations
(Mitchell \& Lapata, 2010; Zanzotto et al., 2010;
Yessenalina \& Cardie, 2011; Grefenstette et al., 2013;
Mikolov et al., 2013c).
A particular implementation of neural network based algorithm
for training the word vectors is available at code.google.com/p/word2vec/ (Mikolov et al.,
2013a).
\begin{itemize}
\item Paragraph Vector: A distributed memory model
\item Paragraph Vector without word ordering:
Distributed bag of words
\end{itemize}
\section{Topic model}
In order to learn structure one has to posit the existence of a structure. In topic models, one assumes a generative model for a collection of documents. Specifically, each document is represented as a vector of word frequencies (the bag-of-words representation).
The documents arise as convex combinations of (i.e., distributions over) a small number of topic vectors, where each topic vector is a distribution over words (i.e., a vector of word frequencies).
Subsequent work makes specific choices for the distribution used to generate topic combinations (the well-known LDA model of Blei et al. hypothesizes a Dirichlet distribution).
\section{NMF meets word embedding}
Setting: document clustering. The traditional approach is NMF, but such methods use the bag-of-words representation and thus do not account for the sequential order in which words occur in documents.
Word embeddings have proven effective in learning continuous representations of words that successfully capture meaningful syntactic and semantic regularities between words. Given a corpus of unannotated text documents, each represented as a sequence of words, the contexts of a word $w$ are the words surrounding it in an $L$-sized window in any document of the corpus. We assume that words frequently co-occurring in an $L$-sized window in any document are likely to have a common meaning.
Method: Semantics-NMF, which jointly decomposes the document-word and word-context co-occurrence matrices with shared word factors. The decomposition of the word-context matrix has been shown to be equivalent to a state-of-the-art neural word embedding method, namely the skip-gram model.
\subsection{The Role of User Feedback Analysis in the Mobile App Success}
\textbf{App Rating \& App Success}. Previous research widely investigated the relationship between the rating and particular characteristics (or features) of mobile applications \cite{Corral:2015:BCB:2825041.2825045, BavotaSE15, VasquezBBPOP13, Taba2014, TianNLH15}. Recent research efforts have been devoted to investigating the reliability of the app rating when used as a proxy for user satisfaction. For example, Luiz \etal~\cite{luiz2018feature} proposed a framework performing sentiment analysis on a set of relevant features extracted from user reviews. Although the star rating was considered to be a good metric for user satisfaction, their results suggest that sentiment analysis might be more accurate in capturing the sentiment transmitted by the users. Hu \etal~\cite{hu2018studying} studied the consistency of reviews and star ratings for hybrid Android and iOS apps, discovering that they are not consistent across different app markets.
Finally, Catolino \cite{catolino2018does} preliminarily investigated the extent to which source code quality can be used as a predictor of commercial success of mobile apps.
\medskip
\noindent \textbf{User Feedback Analysis \& App Success}.
Several approaches have been proposed with the aim of classifying user reviews that are useful for app success. \textsc{AR-Miner} \cite{Chen:2014:AMI:2568225.2568263} was the first one able to classify informative reviews. Panichella \etal adopted natural language processing, text analysis and sentiment analysis to automatically classify user reviews \cite{panichella2015can,Panichella:2016:AAR:2950290.2983938} according to a User Review Model (URM). Gu and Kim \cite{Gu:ASE15} proposed an approach that summarizes the sentiments and opinions of reviews.
Following the general idea of incorporating user feedback into the typical development process, Di Sorbo \etal \cite{sorbo16fse, SorboPAVC17} and Scalabrino \etal \cite{villarroel2016release,scalabrino2017listening} proposed \textsc{SURF} and \textsc{CLAP}, two approaches aiming at recommending the most important reviews to take into account while planning a new release of a mobile application. \textsc{CLAP} improves \textsc{AR-Miner} by clustering reviews into specific categories (\eg reports of security issues) and by learning from the app history (or from similar apps) which reviews should be addressed \cite{scalabrino2017listening}. \textsc{SURF} proposed a first strategy to automatically summarize user feedback in more structured and recurrent topics \cite{SorboPAVC17,Panichella18} (e.g., GUI, app pricing, app content, bugs, etc.). Finally, Palomba \etal \cite{palomba17}, inspired by the work of Scalabrino \etal, proposed ChangeAdvisor, a tool that clusters user reviews of mobile applications. In this paper we considered ChangeAdvisor as baseline since, similarly to our approach, it is based on clustering user review feedback. In evaluating our approach, we discovered that ChangeAdvisor tends to generate rather different user review clusters under the same study setting and user review data, which highlights the higher reliability of our approach compared to this state-of-the-art tool.
\subsection{Information Retrieval in SE \& the Mobile Context}
Information Retrieval techniques have been widely adopted to handle several SE problems. Specifically, strategies for recovering traceability links between textual artefacts and the source code were widely studied in the past \cite{Antoniol:tse2002,DeLucia2012}. In the same way, several approaches have been proposed for locating features in the source code \cite{Dit:JSEP}, and for tracing informal textual documentation, such as e-mails \cite{BacchelliLR10,DiSorboASE2015,SorboPVPCG16}, forum discussions \cite{Parnin:2012, Panichella:ICPC12,VassalloICPC14}, and bug reports \cite{Saha:ase13}, to the source code. However, as previously demonstrated by Panichella \etal \cite{Panichella:2013}, the configuration used to set the clustering algorithm is an important component of the topic modeling techniques used in several traceability recovery approaches, and an optimal choice of the parameters generally results in better performance.
Duan \etal \cite{duan08} proposed a consensus-based approach to constrained clustering requirement documents. This is a different software engineering task than ours, but both approaches employ the semi-supervised clustering technique at a high level. In Duan \etal's work \cite{duan08}, consensus clustering is firstly performed to generate an ensemble of multiple clusters, and then a voting mechanism is applied to select the constraints. In our approach, we leverage domain knowledge to help generate the constraints.
In the context of mobile computing research, two pieces of work are closest to ours. Ciurumelea \etal \cite{ciurumelea2018automated,adelina17} employed machine learning techniques for the automatic categorization of user reviews on a two-level taxonomy, adopting a modified version of the Vector Space Model (VSM) to automatically link user reviews to code artefacts. Similarly, Palomba \etal \cite{palomba17} cluster user reviews of mobile applications and suggest the source-code artefacts to maintain. We mainly compare our approach against ChangeAdvisor \cite{palomba17} as, similar to our approach, it leverages clustering approaches for user review feedback analysis and IR-based methods for suggesting the source-code artefacts to maintain according to user change-requests. However, different from \cite{palomba17} and \cite{ciurumelea2018automated,adelina17}, the similarity score between user reviews and source code files in our approach is calculated as a linear combination of the similarity score between the reviews and the source code and the similarity score between the reviews and the commit messages. Indeed, \cite{palomba17,ciurumelea2018automated,adelina17} mainly rely on textual analysis techniques such as VSM and the Dice coefficient to compute the similarity between reviews and source code files directly. Moreover, our word-review matrix is built on a subset of textual features selected using PCA, which allows selecting more meaningful features from the user review textual content.
\section{Semi-supervised clustering methods}
We will now briefly outline several semi-supervised clustering methods. These methods
will be organized according to the nature of the known outcome data.
\subsection{The data is partially labeled}
Basu et al. developed a generalization of k-means clustering (which they called ``constrained k-means'') for the situation where class labels are known for a subset of the observations.
\subsection{Relationships between the observations are known}
One may also wish to cluster when more complex relationships among the observations are known. In particular, two types of constraints among observations are commonly considered: ``must-link constraints'' require that two observations be placed in the same cluster, and ``cannot-link constraints'' require that two observations not be placed in the same cluster.
Numerous methods have been proposed for solving the problem of constrained clustering; see the survey book: Basu S, Davidson I, Wagstaff K. \emph{Constrained Clustering: Advances in Algorithms, Theory, and Applications}. Chapman \& Hall/CRC Data Mining and Knowledge Discovery Series. CRC Press, Boca Raton, FL, 2009.
Some methods modify an existing clustering method (namely k-means clustering) such that the constraints are satisfied; such methods are sometimes referred to as ``constraint-based methods'' in the literature. In contrast, ``distance-based methods'' (or ``metric-based methods'') use an existing clustering method but modify the metric used to measure the ``distance'' between a pair of observations such that the constraints are satisfied. (For example, rather than using the simple Euclidean distance, one may use an alternative distance metric such that two observations with a must-link constraint necessarily have a smaller distance between them.)
Moreover, other constraint-based methods have been proposed, and still other
methods combine both of these approaches into a single model. Other forms of
constrained clustering are also possible, such as clustering on graph data.
\subsection{Identify clusters associated with a particular outcome variable}
In recent work \cite{KR1,KRx,KR1/2,KR3D,RT} it was advocated that
Levinson's theorem is of topological nature, namely that it should be viewed as an index theorem.
The relevant index theorem occurs naturally in the framework of non-commutative topology, that is,
$C^*$-algebras, their $K$-theory and higher traces (unbounded cyclic cocycles). The analytical hypothesis
which has to be fulfilled for the index theoretic formulation to hold is that
the wave operators of the scattering system lie in a certain $C^*$-algebra. In the examples considered until now, the index theorem substantially extends the usual Levinson's theorem which relates the number of bound states of a physical system to an expression depending on the scattering part of the system. In particular it sheds new light on the corrections due to resonances and on the regularizations which are often involved in the proof of this relation. It also emphasizes the influence of the restriction of the wave operators at threshold energies.
In the present paper we extend these investigations in two directions. On the one hand, we apply the general idea for the first time to a magnetic system.
Indeed, the Aharonov-Bohm operators describe a two-dimensional physical system involving a singular magnetic field located at the origin and perpendicular to the plane of motion. On the other hand, due to the large number of parameters present in this model, we can develop a new topological equality involving higher degree traces. Such an equality, which we call a {\em higher degree Levinson's theorem}, extends naturally the usual Levinson's theorem (which corresponds to a relation between a $0$-trace and a $1$-trace) and it is apparently the first time that a relation between a $2$-trace and a $3$-trace is put into evidence in a physical context. While the precise physical meaning of this equality deserves more investigation, we have no doubt that it can play a role in the theory of topological transport and/or of adiabatic pumping \cite{Graf}.
Let us describe more precisely the content of this paper. In Section \ref{recall} we recall the construction of the Aharonov-Bohm operators and present part of the results obtained in \cite{PR}. Earlier references for the basic properties of these operators are \cite{AT,AB,DS,R,Rui}. In particular, we recall the explicit expressions for the wave operators in terms of functions of the free Laplacian and of the generator of the dilation group in ${\mathbb R}^2$. Let us mention that the theory of boundary triples, as presented in \cite{BGP}, was extensively used in reference \cite{PR} for the computation of these explicit expressions.
In Section \ref{secLev} we state and prove a version of Levinson's theorem adapted to our model, see Theorem \ref{Lev0}. It will become clear at that moment that a naive approach to this theorem, involving only the scattering operator, would lead to a completely wrong result. Indeed, the corrections due to the restriction of the wave operators at $0$-energy and at energy equal to $+\infty$ will be explicitly computed. Adding these different contributions leads to a first proof of Levinson's theorem. All the various situations, which depend on the parameters related to the flux of the magnetic field and to the description of the self-adjoint extensions, are summarized in Section \ref{Sectionfinal}. Let us stress that this proof is rather lengthy but that it leads to a very precise result. Note that up to this point no $C^*$-algebraic knowledge is required; all proofs are purely analytical.
The last two sections of the paper contain the necessary algebraic framework, the two topological statements and their proofs. Section \ref{secK} thus contains a very short introduction to $K$-theory, cyclic cohomology, $n$-traces, Connes' pairing and the dual boundary maps. Obviously, only the minimal necessary information on these subjects is presented, and parts of the constructions are over-simplified. However, the authors have tried to give a flavor of this necessary background for non-experts, and any reader familiar with these constructions can skip Section \ref{secK} without any loss of understanding of the last part of the paper.
In the first part of Section \ref{secAlgebra}, we construct a suitable $C^*$-algebra $\mathcal{E}$ which contains the wave operators. For computational reasons, this algebra should neither be too small nor too large. In the former case, the computation of its quotient by the ideal of compact operators would be too difficult and possibly not understandable; in the latter case the deducible information would become too vague. In fact, the algebra we propose is very natural once the explicit form of the wave operators is known. Once the quotient of the algebra $\mathcal{E}$ by the compact operators is computed, the new topological version of Levinson's theorem can be stated. This is done in Theorem \ref{Ktheo}, and in that case its proof is contained in a few lines. Note furthermore that there is a big difference between Theorem \ref{Lev0} and the topological statement (and its corollary). In the former case, the proof consists in checking that the sum of various explicit contributions is equal to the number of bound states of the corresponding system. In the latter case, the proof involves a topological argument and it clearly shows the topological nature of Levinson's theorem. However, the statement is global, and the contributions due to the scattering operator and to the restrictions at $0$-energy and at energy $+\infty$ cannot be distinguished. For that reason, both approaches are complementary. Note that the topological approach opens the way towards generalisations which could hardly be guessed from the purely analytical approach.
Up to this point, the flux of the magnetic field as well as the parameters involved in the description of the self-adjoint extension were fixed. In the second topological statement, we shall consider a smooth boundaryless submanifold of the parameter space and perform some computations as these parameters vary on the manifold. More precisely, we first state an equality between a continuous family of projections on the bound states and the image through the index map of a continuous family of unitary operators deduced from the wave operators, see Theorem \ref{thm-ENN}. These unitary operators contain a continuous family of scattering operators, but also the corresponding continuous family of restrictions at energies $0$ and $+\infty$. Note that this result is still abstract, in the sense that it gives an equality between an equivalence class in the $K_0$-theory related to the bound part of the system and an equivalence class in the $K_1$-theory related to the scattering part
of the system, but nothing prevents this equality from being trivial in the sense that it yields $0=0$.
In the final part of the paper, we choose a $2$-dimensional submanifold and show that the second topological result is not trivial. More precisely, we explicitly compute the pairings of the $K$-equivalent classes with their respective higher degree traces. On the one hand this leads to the computation of the Chern number of a bundle defined by the family of projections. For the chosen manifold this number is equal to $1$, and thus is not trivial. By duality of the boundary maps, it follows that the natural $3$-trace applied on the family of unitary operators is also not trivial. The resulting statement is provided in Proposition \ref{propfinal}. Note that this statement is again global. A distinction of each contribution could certainly be interesting for certain applications, but its computation could be rather tedious and therefore no further investigations have been performed in that direction.
\section*{Acknowledgements}
S. Richard was supported by the Swiss National Science Foundation and is now supported by the Japan Society for the Promotion of Sciences.
\section{The Aharonov-Bohm model}\label{recall}
In this section, we briefly recall the construction of the Aharonov-Bohm operators and
present a part of the results obtained in \cite{PR} to which we refer for details.
We also mention \cite{AT,DS,Rui} for earlier works on these operators.
\subsection{The self-adjoint extensions}\label{ssec21}
Let $\mathcal{H}$ denote the Hilbert space $L^2(\mathbb{R}^2)$ with its scalar product
$\langle \cdot,\cdot\rangle$ and its norm $\|\cdot \|$.
For any $\alpha \in (0,1)$, we set $A_\alpha: \mathbb{R}^2\setminus\{0\} \to \mathbb{R}^2$ by
\begin{equation*}
A_\alpha(x,y)= -\alpha \Big(\frac{-y}{x^2+y^2},
\frac{x}{x^2+y^2}
\Big),
\end{equation*}
corresponding formally to the magnetic field $B=\alpha\delta$ ($\delta$ is the Dirac
delta function), and consider the operator
\begin{equation*}
H_\alpha:=(-i\nabla -A_\alpha)^2,
\qquad \mathop{\mathcal{D}}(H_\alpha)=C_c^\infty\big(\mathbb{R}^2\setminus\{0\}\big)\ .
\end{equation*}
Here $C_c^\infty(\Xi)$ denotes the set of smooth functions on $\Xi$ with compact support.
The closure of this operator in $\mathcal{H}$, which is denoted by the same symbol,
is symmetric and has deficiency indices $(2,2)$.
We briefly recall the parametrization of the self-adjoint extensions of $H_\alpha$ from \cite{PR}.
Some elements of the domain of the adjoint operator $H_\alpha^*$ admit singularities
at the origin. For dealing with them, one defines linear functionals $\Phi_0$, $\Phi_{-1}$, $\Psi_0$, $\Psi_{-1}$
on $\mathop{\mathcal{D}}(H_\alpha^*)$ such that for $\mathfrak{f}\in\mathop{\mathcal{D}}(H_\alpha^*)$ one has, with $\theta \in [0,2\pi)$ and $r \to 0_+$,
\[
2\pi \mathfrak{f}(r\cos\theta,r\sin\theta)= \Phi_0(\mathfrak{f})r^{-\alpha}+\Psi_0(\mathfrak{f}) r^\alpha
+e^{-i\theta} \Big(
\Phi_{-1}(\mathfrak{f})r^{\alpha-1}+\Psi_{-1}(\mathfrak{f}) r^{1-\alpha}
\Big) +O(r).
\]
The family of all self-adjoint extensions of the operator $H_\alpha$ is then indexed by two matrices
$C,D \in M_2(\mathbb{C})$ which satisfy the following conditions:
\begin{equation}
\label{eq-mcd}
\text{(i) $CD^*$ is self-adjoint,\qquad (ii) $\mathrm{det}(CC^* + DD^*)\neq 0$,}
\end{equation}
and the corresponding extensions $H^{C\!D}_\alpha$ are the restrictions of $H_\alpha^*$
onto the functions $\mathfrak{f}$ satisfying the boundary conditions
\[
C \begin{pmatrix}
\Phi_0(\mathfrak{f})\\ \Phi_{-1}(\mathfrak{f})
\end{pmatrix}
=2D \begin{pmatrix}
\alpha \Psi_0(\mathfrak{f})\\ (1-\alpha)\Psi_{-1}(\mathfrak{f})
\end{pmatrix}.
\]
For simplicity, we call \emph{admissible} a pair of
matrices $(C,D)$ satisfying the above conditions.
\begin{rem}\label{1to1}
The parametrization of the self-adjoint extensions of $H_\alpha$ with all admissible
pairs $(C,D)$ is very convenient but highly non-unique.
At a certain point, it will be useful to have a one-to-one parametrization of all
self-adjoint extensions.
So, let us consider $U \in U(2)$ and set
\begin{equation*}
C(U) := {\textstyle \frac{1}{2}}(1-U) \quad \hbox{ and }
\quad D(U) = {\textstyle \frac{i}{2}}(1+U).
\end{equation*}
It is easy to check that $C(U)$ and $D(U)$ satisfy both conditions \eqref{eq-mcd}.
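For completeness, the short computation behind this claim reads
\[
C(U)D(U)^* = -\tfrac{i}{4}\,(1-U)(1+U^*) = -\tfrac{i}{4}\,(U^*-U) = \tfrac{i}{4}\,(U-U^*),
\]
which is manifestly self-adjoint, while
\[
C(U)C(U)^* + D(U)D(U)^* = \tfrac{1}{4}\big[(1-U)(1-U^*)+(1+U)(1+U^*)\big] = 1,
\]
so that the determinant condition in \eqref{eq-mcd} holds trivially.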
In addition, two different elements $U,U'$ of $U(2)$ lead to two different self-adjoint
operators $H_\alpha^{C(U)\;\!D(U)}$ and $H_\alpha^{C(U')\;\!D(U')}$, {\it cf.}~\cite{Ha}.
Thus, without ambiguity we can write $H_\alpha^U$ for the operator $H_\alpha^{C(U)\;\!D(U)}$.
Moreover, the set $\{H_\alpha^U\mid U \in U(2)\}$ describes all
self-adjoint extensions of $H_\alpha$.
Let us also mention that the normalization of the above maps has been chosen such that
$H_\alpha^{-1}\equiv H_\alpha^{10}= H_\alpha^{A\!B}$ which corresponds to the standard Aharonov-Bohm operator
studied in \cite{AB,Rui}.
\end{rem}
The essential spectrum of $H^{C\!D}_\alpha$ is absolutely continuous and covers the positive half line $[0,+\infty)$.
The discrete spectrum consists of at most two negative eigenvalues. More precisely, the number of negative eigenvalues
of $H^{C\!D}_\alpha$ coincides with the number of negative eigenvalues of the matrix $CD^*$.
The negative eigenvalues are the real negative solutions of the equation
\[
\mathrm{det}\big(
DM(z)-C\big)=0
\]
where $M(z)$ is, for $z<0$,
\[
M(z)=- \frac{2}{\pi} \sin (\pi \alpha)\,
\begin{pmatrix}
\Gamma(1-\alpha)^2 \Big( -\dfrac{z}{4}\Big)^\alpha & 0 \\
0& \Gamma(\alpha)^2 \Big( -\dfrac{z}{4}\Big)^{1-\alpha}
\end{pmatrix},
\]
and there exists an injective map $\gamma(z):\mathbb{C}^2\to\mathcal{H}$ depending continuously on $z \in {\mathbb C} \setminus [0,+\infty)$ and calculated explicitly in \cite{PR}
such that for each $z<0$ one has $\ker(H^{C\!D}_\alpha-z)=\gamma(z)\ker\big(
DM(z)-C\big)$.
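For concreteness, the eigenvalue equation can be solved numerically. The sketch below is illustrative only (it assumes numpy/scipy, and the sample pair $C=-1$, $D=1$ is our own choice; for it $CD^*=-1$ has two negative eigenvalues, so two bound states are expected).

\begin{verbatim}
# Illustrative sketch: locate the negative eigenvalues of H^{CD}_alpha
# as the roots z < 0 of det(D M(z) - C) = 0, with M(z) as displayed above.
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def M(z, alpha):
    pref = -(2.0 / np.pi) * np.sin(np.pi * alpha)
    return pref * np.diag([gamma(1 - alpha)**2 * (-z / 4)**alpha,
                           gamma(alpha)**2 * (-z / 4)**(1 - alpha)])

def det_DMC(z, alpha, C, D):
    # real-valued for the (real, self-adjoint) sample data below
    return np.linalg.det(D @ M(z, alpha) - C).real

alpha, C, D = 0.3, -np.eye(2), np.eye(2)   # sample admissible pair
zs = -np.logspace(-3, 2, 2000)[::-1]       # grid from -100 up to -0.001
vals = [det_DMC(z, alpha, C, D) for z in zs]
roots = [brentq(det_DMC, zs[i], zs[i + 1], args=(alpha, C, D))
         for i in range(len(zs) - 1) if vals[i] * vals[i + 1] < 0]
print("negative eigenvalues:", roots)      # two roots, as predicted
\end{verbatim}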
\subsection{Wave and scattering operators}
One of the main results of \cite{PR} is an explicit description of the wave operators.
We shall recall this result below, but we first need
to introduce the decomposition of the Hilbert space $\mathcal{H}$ with respect to the
spherical harmonics.
For any $m \in \mathbb{Z}$, let $\phi_m$ be the complex function defined by
$[0,2\pi)\ni \theta \mapsto \phi_m(\theta):= \frac{e^{im\theta}}{\sqrt{2\pi}}$.
One has then the canonical isomorphism
\begin{equation}\label{decomposition}
\mathcal{H} \cong \bigoplus_{m \in \mathbb{Z}} \mathcal{H}_r \otimes [\phi_m] \ ,
\end{equation}
where $\mathcal{H}_r:=L^2(\mathbb{R}_+, r\;\!\mathrm{d} r)$ and $[\phi_m]$ denotes the one-dimensional space spanned by $\phi_m$.
For shortness, we write $\mathcal{H}_m$ for $\mathcal{H}_r \otimes [\phi_m]$, and often consider it as a subspace of $\mathcal{H}$.
Let us still set $\mathcal{H}_\mathfrak{int}:=\mathcal{H}_0\oplus\mathcal{H}_{-1}$ which is clearly isomorphic to $\mathcal{H}_r\otimes {\mathbb C}^2$.
Let us also recall that the unitary dilation group
$\{U_\tau\}_{\tau \in \mathbb{R}}$ is defined on any $\mathfrak{f} \in \mathcal{H}$ and $x \in \mathbb{R}^2$ by
\begin{equation*}
[U_\tau \mathfrak{f}](x) = e^\tau \mathfrak{f}(e^\tau x)\ .
\end{equation*}
Its self-adjoint generator $A$ is formally given by
$\frac{1}{2}(X\cdot (-i\nabla) + (-i\nabla)\cdot X)$, where $X$ is the position operator and $-i\nabla$
is its conjugate operator. All these operators are essentially self-adjoint on the Schwartz space on $\mathbb{R}^2$.
Clearly, the group of dilations as well as its generator leave each subspace $\mathcal{H}_m$ invariant.
Let us now consider the wave operators
\begin{equation*}
\Omega^{C\!D}_-:=\Omega_-(H^{C\!D}_\alpha,H_0)=s-\lim_{t\to - \infty}e^{itH^{C\!D}_\alpha }\;\!e^{-itH_0 }\ ,
\end{equation*}
where $H_0:=-\Delta$.
It is well known that for any admissible pair $(C,D)$ the operators $\Omega_\pm^{C\!D}$ are reduced by
the decomposition $\mathcal{H}=\mathcal{H}_\mathfrak{int} \oplus \mathcal{H}_\mathfrak{int}^\bot$ and that
$\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}^\bot} = \Omega_-^{A\!B}|_{\mathcal{H}_\mathfrak{int}^\bot}$.
The restriction to $\mathcal{H}_\mathfrak{int}^\bot$ is further reduced by the decomposition \eqref{decomposition}
and it is proved in \cite[Prop.~10]{PR} that the channel wave operators satisfy for each $m \in {\mathbb Z}$,
\begin{equation*}
\Omega_{-,m}^{A\!B} = \varphi_m^-(A)\ ,
\end{equation*}
with $\varphi_m^-$ explicitly given for $x \in {\mathbb R}$ by
\begin{equation*}
\varphi^-_m(x):=e^{i\delta_m^\alpha}\;\!
\frac{\Gamma\big(\frac{1}{2}(|m|+1+ix)\big)}{\Gamma\big(\frac{1}{2}(|m|+1-ix)\big)}
\;\!
\frac{\Gamma\big(\frac{1}{2}(|m+\alpha|+1-ix)\big)}{\Gamma\big(\frac{1}{2}(|m+\alpha|+1+ix)\big)}
\end{equation*}
and
\begin{equation*}
\delta_m^\alpha = \hbox{$\frac{1}{2}$}\pi\big(|m|-|m+\alpha|\big)
=\left\{\begin{array}{rl}
-\hbox{$\frac{1}{2}$}\pi\alpha & \hbox{if }\ m\geq 0 \\
\hbox{$\frac{1}{2}$}\pi\alpha & \hbox{if }\ m< 0
\end{array}\right.\ .
\end{equation*}
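Since $\varphi^-_m$ involves only ratios of Gamma functions, its properties are easy to inspect numerically. The following sketch is illustrative only (scipy's Gamma function accepts complex arguments); it checks that $\varphi^-_m$ is unimodular and exhibits the limits $\varphi^-_m(-\infty)=1$ and $\varphi^-_m(+\infty)=e^{2i\delta^\alpha_m}$ recalled below.

\begin{verbatim}
# Illustrative sketch: evaluate phi_m^-(x) and check |phi_m^-(x)| = 1
# together with its limits at -infinity and +infinity.
import numpy as np
from scipy.special import gamma

def phi_minus(m, alpha, x):
    delta = 0.5 * np.pi * (abs(m) - abs(m + alpha))
    g = lambda s: gamma(0.5 * s)
    return (np.exp(1j * delta)
            * g(abs(m) + 1 + 1j * x) / g(abs(m) + 1 - 1j * x)
            * g(abs(m + alpha) + 1 - 1j * x) / g(abs(m + alpha) + 1 + 1j * x))

alpha = 0.3
for m in (0, -1, 2):
    v = phi_minus(m, alpha, np.linspace(-30, 30, 7))
    assert np.allclose(np.abs(v), 1.0)           # values on the unit circle
delta0 = -0.5 * np.pi * alpha                    # delta^alpha_m for m >= 0
print(phi_minus(0, alpha, -40.0))                # close to 1
print(phi_minus(0, alpha, 40.0), np.exp(2j * delta0))   # nearly equal
\end{verbatim}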
It is also proved in \cite[Thm.~11]{PR} that
\begin{equation}\label{yoyo}
\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}} = \Big(
\begin{smallmatrix}\varphi^-_0(A) & 0 \\ 0 & \varphi^-_{-1}(A) \end{smallmatrix}\Big) +
\Big(
\begin{smallmatrix}\tilde{\varphi}_0(A) & 0 \\ 0 & \tilde{\varphi}_{-1}(A) \end{smallmatrix}\Big)
\widetilde{S}^{C\!D}_\alpha\big(\sqrt{H_0}\big)
\end{equation}
with $\tilde{\varphi}_m(x)$ given for $m \in \{0,-1\}$ by
\begin{eqnarray*}
\frac{1}{2\pi}\;\!e^{-i\pi|m|/2} \;\!e^{\pi x/2}\;\!
\frac{\Gamma\big(\frac{1}{2}(|m|+1+ix)\big)}{\Gamma
\big(\frac{1}{2}(|m|+1-ix)\big)} \Gamma\big({\textstyle \frac{1}{2}}(1+|m+\alpha|-ix)\big)
\;\!\Gamma\big({\textstyle \frac{1}{2}}(1-|m+\alpha|-ix)\big)\ .
\end{eqnarray*}
Clearly, the functions $\varphi^-_m$ and $\tilde{\varphi}_m$ are continuous on ${\mathbb R}$. Furthermore,
these functions admit limits at $\pm \infty$: $\varphi^-_m(-\infty)=1$, $\varphi^-_m(+\infty)=e^{2i\delta^\alpha_m}$,
$\tilde{\varphi}_m(-\infty)=0$ and $\tilde{\varphi}_m(+\infty)=1$.
Note also that the expression for the function $\widetilde{S}^{C\!D}_\alpha(\cdot)$ is given
for $\kappa \in {\mathbb R}_+$ by
\begin{eqnarray*}
\widetilde{S}_\alpha^{C\!D}(\kappa)
&:=& 2i\sin(\pi\alpha)
\left(\begin{matrix}
\frac{\Gamma(1-\alpha)\;\!e^{-i\pi\alpha/2}}{2^\alpha}\;\!\kappa^{\alpha} & 0 \\
0 & \frac{ \Gamma(\alpha)\;\!e^{-i\pi(1-\alpha)/2}}{2^{1-\alpha}}\;\!\kappa^{(1-\alpha)}
\end{matrix}\right) \\
&&\cdot \left( D\,
\left(\begin{matrix}
\frac{\Gamma(1-\alpha)^2 \;\!e^{ -i\pi\alpha}}{4^\alpha}\;\!\kappa^{2\alpha} & 0 \\
0& \frac{\Gamma(\alpha)^2\;\! e^{ -i\pi(1-\alpha)}}{4^{1-\alpha}}\;\!\kappa^{2(1-\alpha)}
\end{matrix}\right)
+\frac{\pi}{2\sin(\pi\alpha)}C\right)^{-1} D \\
&& \cdot
\left(\begin{matrix}
\frac{ \Gamma(1-\alpha)\;\!e^{-i\pi\alpha/2}}{2^\alpha}\;\!\kappa^{\alpha} & 0 \\
0 & -\frac{ \Gamma(\alpha)\;\!e^{-i\pi(1-\alpha)/2}}{2^{1-\alpha}}\;\!\kappa^{(1-\alpha)}
\end{matrix}\right)\ .
\end{eqnarray*}
As usual, the scattering operator is defined by the formula
\begin{equation*}
S^{C\!D}_\alpha:=\big[\Omega^{C\!D}_+\big]^* \Omega^{C\!D}_-.
\end{equation*}
Then, the relation between this operator and $\widetilde{S}^{C\!D}_\alpha$ is of the form
\begin{equation} \label{eq-stilde}
S_\alpha^{C\!D}|_{\mathcal{H}_\mathfrak{int}}=
S_\alpha^{C\!D}(\sqrt{H_0})
\quad\hbox{with}\quad
S_\alpha^{C\!D}(\kappa):=\begin{pmatrix}
e^{-i\pi\alpha} & 0 \\
0 & e^{i\pi\alpha}
\end{pmatrix}
+ \widetilde{S}_\alpha^{C\!D}(\kappa)\ .
\end{equation}
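As a consistency check, these formulas can be assembled numerically. The sketch below is illustrative (the diagonal sample $U$ is our own choice): it builds $\widetilde{S}^{C\!D}_\alpha(\kappa)$ from the displayed expression, adds the constant diagonal part of \eqref{eq-stilde}, and verifies unitarity at a few energies.

\begin{verbatim}
# Illustrative sketch: assemble S^{CD}_alpha(kappa) and check unitarity.
import numpy as np
from scipy.special import gamma

def S_CD(kappa, alpha, C, D):
    a, b = alpha, 1 - alpha
    # diagonal factor appearing on both sides of the middle inverse
    B = np.diag([gamma(b) * np.exp(-0.5j * np.pi * a) / 2**a * kappa**a,
                 gamma(a) * np.exp(-0.5j * np.pi * b) / 2**b * kappa**b])
    J = np.diag([1.0, -1.0])
    middle = D @ (B @ B) + np.pi / (2 * np.sin(np.pi * alpha)) * C
    tilde = 2j * np.sin(np.pi * alpha) * B @ np.linalg.solve(middle, D) @ B @ J
    return np.diag([np.exp(-1j*np.pi*alpha), np.exp(1j*np.pi*alpha)]) + tilde

alpha = 0.3
U = np.diag([np.exp(0.7j), np.exp(-1.1j)])       # sample element of U(2)
C, D = 0.5 * (np.eye(2) - U), 0.5j * (np.eye(2) + U)
for kappa in (0.1, 1.0, 10.0):
    S = S_CD(kappa, alpha, C, D)
    assert np.allclose(S.conj().T @ S, np.eye(2), atol=1e-8)
print("S^{CD}_alpha(kappa) is unitary at the sampled energies")
\end{verbatim}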
The following result has been obtained in \cite[Prop.~13]{PR} and will be necessary further on:
\begin{prop}\label{propSurS}
The map
\begin{equation*}
\mathbb{R}_+\ni \kappa \mapsto S^{C\!D}_\alpha(\kappa) \in U(2)
\end{equation*}
is continuous and has explicit asymptotic values for $\kappa=0$ and $\kappa = +\infty$.
More explicitly, depending on $C, D$ and $\alpha$ one has:
\begin{enumerate}
\item[i)] If $D=0$, then $S^{C\!D}_\alpha(\kappa)=\left(\begin{smallmatrix}
e^{-i\pi \alpha} & 0\\
0 & e^{i\pi\alpha}
\end{smallmatrix}\right)$,
\item[ii)] If $\mathrm{det}(D)\neq 0$, then
$S^{C\!D}_\alpha(+\infty)=\left(\begin{smallmatrix}
e^{i\pi \alpha} & 0\\
0 & e^{-i\pi\alpha}
\end{smallmatrix}\right)$,
\item[iii)] If
$\dim[\ker(D)]=1$ and $\alpha =1/2$, then
$S^{C\!D}_\alpha(+\infty)=(2{\mathrm P} -1)\;\!\left(\begin{smallmatrix}
i & 0\\
0 & -i
\end{smallmatrix}\right)$,
where ${\mathrm P}$ is the orthogonal projection onto $\ker(D)^\bot$,
\item[iv)] If $\ker(D)= \left(\begin{smallmatrix}
\mathbb{C}\\ 0 \end{smallmatrix}\right)$ or if
$\dim[\ker(D)]=1$, $\alpha < 1/2$ and $\ker(D)\neq \left(\begin{smallmatrix}
0\\ \mathbb{C}
\end{smallmatrix}\right)$,
then
$S^{C\!D}_\alpha(+\infty)=\left(\begin{smallmatrix}
e^{-i\pi \alpha} & 0\\
0 & e^{-i\pi\alpha}
\end{smallmatrix}\right)$,
\item[v)] If $\ker(D)= \left(\begin{smallmatrix}
0\\ \mathbb{C}
\end{smallmatrix}\right)$ or if
$\dim[\ker(D)]=1$, $\alpha > 1/2$ and $\ker(D)\neq \left(\begin{smallmatrix}
\mathbb{C}\\ 0
\end{smallmatrix}\right)$,
then
$S^{C\!D}_\alpha(+\infty)=\left(\begin{smallmatrix}
e^{i\pi \alpha} & 0\\
0 & e^{i\pi\alpha}
\end{smallmatrix}\right)$.
\end{enumerate}
Furthermore,
\begin{enumerate}
\item[a)] If $C=0$, then
$S^{C\!D}_\alpha(0)=\left(\begin{smallmatrix}
e^{i\pi \alpha} & 0\\
0 & e^{-i\pi\alpha}
\end{smallmatrix}\right)$,
\item[b)] If $\mathrm{det}(C)\ne 0$, then
$S^{C\!D}_\alpha(0)=\left(\begin{smallmatrix}
e^{-i\pi \alpha} & 0\\
0 & e^{i\pi\alpha}
\end{smallmatrix}\right)$,
\item[c)] If $\dim[\ker (C)]=1$ and $\alpha=1/2$, then
$S^{C\!D}_\alpha(0)=
(1-2\Pi)\left(\begin{smallmatrix}
i & 0\\
0 & -i
\end{smallmatrix}\right)$,
where $\Pi$ is the orthogonal projection on $\ker(C)^\perp$.
\item[d)] If $\ker (C)=\left(\begin{smallmatrix}0\\ \mathbb{C} \end{smallmatrix}\right)$ or if $\dim[\ker(C)]=1$,
$\alpha>1/2$ and
$\ker (C)\ne \left(\begin{smallmatrix}\mathbb{C} \\ 0 \end{smallmatrix}\right)$,
then
$S^{C\!D}_\alpha(0)=\left(\begin{smallmatrix}
e^{-i\pi \alpha} & 0\\
0 & e^{-i\pi\alpha}
\end{smallmatrix}\right)$,
\item[e)] If $\ker (C)=\left(\begin{smallmatrix} \mathbb{C} \\ 0\end{smallmatrix}\right)$ or if
$\dim[\ker (C)]=1$, $\alpha<1/2$ and $\ker (C)\ne \left(\begin{smallmatrix}0 \\ \mathbb{C} \end{smallmatrix}\right)$,
then
$S^{C\!D}_\alpha(0)=\left(\begin{smallmatrix}
e^{i\pi \alpha} & 0\\
0 & e^{i\pi\alpha}
\end{smallmatrix}\right)$.
\end{enumerate}
\end{prop}
\section{The $\boldsymbol{0}$-degree Levinson's theorem, a pedestrian approach}\label{secLev}
In this section, we state a Levinson-type theorem adapted to our model. The proof is quite ad hoc and will look like a recipe, but a much more conceptual one will be given subsequently.
The main interest of this pedestrian approach is that it shows the importance of the restrictions of the wave operators at energy $0$ and at energy $+\infty$.
The reader interested in the algebraic approach can skip the present proof without any loss of understanding of the following sections.
Let us start by considering again the expression \eqref{yoyo} for the operator $\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}$. It follows from the explicit expressions for the functions $\varphi^-_m$, $\tilde{\varphi}_m$ and $\widetilde{S}^{C\!D}_\alpha$ that $\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}$ is a linear combination of products of functions of two non-commuting operators, the factors being continuous on $[-\infty,\infty]$ and on $[0,\infty]$, respectively, and taking values in $M_2({\mathbb C})$. For a reason that will become clear in the algebraic framework, we shall consider the restrictions of these products of functions to the asymptotic values of the closed intervals. Namely, let us set for $x \in {\mathbb R}$ and $\kappa\in {\mathbb R}_+$
\begin{eqnarray}
\label{Gma1}\Gamma_1(C,D,\alpha,x)&:=&
\Big(
\begin{smallmatrix}\varphi^-_0(x) & 0 \\ 0 & \varphi^-_{-1}(x) \end{smallmatrix}\Big) +
\Big(
\begin{smallmatrix}\tilde{\varphi}_0(x) & 0 \\ 0 & \tilde{\varphi}_{-1}(x) \end{smallmatrix}\Big)
\widetilde{S}^{C\!D}_\alpha(0)\ ,\\
\label{Gma2}\Gamma_2(C,D,\alpha,\kappa)&:=&S^{C\!D}_\alpha(\kappa)\ ,\\
\label{Gma3}\Gamma_3(C,D,\alpha,x)&:=&
\Big(
\begin{smallmatrix}\varphi^-_0(x) & 0 \\ 0 & \varphi^-_{-1}(x) \end{smallmatrix}\Big) +
\Big(
\begin{smallmatrix}\tilde{\varphi}_0(x) & 0 \\ 0 & \tilde{\varphi}_{-1}(x) \end{smallmatrix}\Big)
\widetilde{S}^{C\!D}_\alpha(+\infty)\ ,\\
\label{Gma4}\Gamma_4(C,D,\alpha,\kappa)&:=& 1.
\end{eqnarray}
We also set
\begin{equation}\label{fctGamma}
\Gamma(C,D,\alpha,\cdot):= \big(\Gamma_1(C,D,\alpha,\cdot), \Gamma_2(C,D,\alpha,\cdot),\Gamma_3(C,D,\alpha,\cdot),
\Gamma_4(C,D,\alpha,\cdot)\big)\ .
\end{equation}
In fact, $\Gamma(C,D,\alpha,\cdot)$ is a continuous function on the edges $\square$ of the square $[0,\infty]\times[-\infty,\infty]$ and takes values in $U(2)$. Thus, since $\Gamma(C,D,\alpha,\cdot) \in C\big(\square,U(2)\big)$,
we can define the winding number $\mathrm{wind}\big[\Gamma(C,D,\alpha,\cdot)\big]$ of the map
\begin{equation*}
\square \ni \zeta \mapsto \mathrm{det} [\Gamma(C,D,\alpha,\zeta)]\in {\mathbb T}
\end{equation*}
with orientation of $\square$ chosen clockwise. Here ${\mathbb T}$ denotes the set of complex numbers of modulus $1$.
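Numerically, this winding number can be obtained by sampling $\mathrm{det}[\Gamma(C,D,\alpha,\cdot)]$ along the four edges and accumulating principal-value phase increments. A generic routine of this kind is sketched below (our own illustration, assuming numpy; the edge functions themselves would be assembled from the formulas above).

\begin{verbatim}
# Generic winding-number routine (illustrative): values are samples of a
# nowhere-vanishing continuous function along a closed contour, fine
# enough that consecutive phase jumps stay below pi.
import numpy as np

def winding(values):
    w = values / np.abs(values)
    incr = np.angle(w[1:] / w[:-1])          # principal phase increments
    closing = np.angle(w[0] / w[-1])         # close the loop
    return (incr.sum() + closing) / (2 * np.pi)

# sanity check on z -> z^3 around the unit circle
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
print(round(winding(np.exp(3j * t))))        # 3
\end{verbatim}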
The following statement is our Levinson-type theorem.
\begin{theorem}\label{Lev0}
For any $\alpha\in (0,1)$ and any admissible pair $(C,D)$ one has
\begin{equation*}
\mathrm{wind} \big[\Gamma(C,D,\alpha,\cdot)\big] = - \# \sigma_p(H_\alpha^{C\!D}) =
-\#\{\hbox{negative eigenvalues of }CD^*\}\ .
\end{equation*}
\end{theorem}
\begin{proof}
The first equality is proved below by a case-by-case study.
The equality between the cardinality of $\sigma_p(H_\alpha^{C\!D})$ and the number of
negative eigenvalues of the matrix $CD^*$ has been shown in \cite[Lem.~4]{PR}.
\end{proof}
We shall now calculate separately the contributions to the winding number from the functions $\Gamma_1(C,D,\alpha,\cdot)$, $\Gamma_2(C,D,\alpha,\cdot)$ and $\Gamma_3(C,D,\alpha,\cdot)$. The contribution due to the scattering operator is the one given by $\Gamma_2(C,D,\alpha,\cdot)$. It will be rather clear that a naive approach to Levinson's theorem involving only the contribution of the scattering operator would lead to a completely wrong result. The final results are presented in Section \ref{Sectionfinal}.
\subsection{Contributions of $\Gamma_1(C,D,\alpha,\cdot)$ and $\Gamma_3(C,D,\alpha,\cdot)$}
In this section we calculate the contributions due to $\Gamma_1(C,D,\alpha,\cdot)$ and $\Gamma_3(C,D,\alpha,\cdot)$ which were introduced in \eqref{Gma1} and \eqref{Gma3}. For that purpose, recall first the relation
\begin{equation*}
S_\alpha^{C\!D}(\kappa):=\left(\begin{smallmatrix}
e^{-i\pi\alpha} & 0 \\
0 & e^{i\pi\alpha}
\end{smallmatrix}\right)
+ \widetilde{S}_\alpha^{C\!D}(\kappa)\ .
\end{equation*}
Since $S^{C\!D}_\alpha(0)$ and $S^{C\!D}_\alpha(+\infty)$ are diagonal
in most of the situations, as easily observed in Proposition \ref{propSurS}, let us define for $a \in \mathbb{C}$ and $m \in \{0,-1\}$ the following functions:
\begin{equation*}
\varphi_m(\cdot,a):= \varphi_m^-(\cdot)+ a\;\! \tilde{\varphi}_m(\cdot)\ .
\end{equation*}
Then, by a simple computation one obtains
\begin{eqnarray*}
\varphi_m(x,a) &=&
\frac{\Gamma\big(\frac{1}{2}(|m|+1+ix)\big)}{\Gamma\big(\frac{1}{2}(|m|+1-ix)\big)}
\;\!
\frac{\Gamma\big(\frac{1}{2}(|m+\alpha|+1-ix)\big)}{\Gamma\big(\frac{1}{2}(|m+\alpha|+1+ix)\big)}
\ \cdot \\
&& \cdot \ \Big[
e^{i\delta_m^\alpha} + a \;\!e^{-i\pi|m|/2} \;\!
\frac{e^{\pi x/2}}{2\sin\big(\frac{\pi}{2}(1+|m+\alpha|+ix)\big)}\;\! \Big].
\end{eqnarray*}
Let us mention that the equality
\begin{equation}\label{relGamma}
\Gamma(z) \;\!\Gamma(1-z)=\frac{\pi}{\sin(\pi z)}
\end{equation}
for $z={\textstyle \frac{1}{2}}(1+|m+\alpha|+ix)$ has been used for this calculation.
In the case $a=0$, the function $\varphi_m(\cdot,0)$ clearly takes its values in ${\mathbb T}$.
We shall now consider the other two special cases $\varphi_0(\cdot,e^{i\pi\alpha}-e^{-i\pi\alpha})$
and $\varphi_{-1}(\cdot,e^{-i\pi\alpha}-e^{i\pi\alpha})$ which will appear naturally subsequently.
A few more calculations involving some trigonometric relations and
the same relation \eqref{relGamma} lead to
\begin{eqnarray*}
\varphi_0(x,e^{i\pi\alpha}-e^{-i\pi\alpha})&=&e^{i\pi\alpha/2}\;\!
\frac{\Gamma\big(\frac{1}{2}(1+ix)\big)}{\Gamma\big(\frac{1}{2}(1-ix)\big)}
\;\!
\frac{\Gamma\big(\frac{1}{2}(1+\alpha-ix)\big)}{\Gamma\big(\frac{1}{2}(1+\alpha+ix)\big)}
\;\!\frac{\sin\big(\frac{\pi}{2}(1+\alpha-ix)\big)}{\sin\big(\frac{\pi}{2}(1+\alpha+ix)\big)}\\
&=&
e^{i\pi\alpha/2}\;\!
\frac{\Gamma\big(\frac{1}{2}(1+ix)\big)}{\Gamma\big(\frac{1}{2}(1-ix)\big)}
\;\!
\frac{\Gamma\big(\frac{1}{2}(1-\alpha-ix)\big)}{\Gamma\big(\frac{1}{2}(1-\alpha+ix)\big)}
\end{eqnarray*}
and to
\begin{eqnarray*}
\varphi_{-1}(x,e^{-i\pi\alpha}-e^{i\pi\alpha}) &=&-e^{-i\pi\alpha/2}\;\!
\frac{\Gamma\big(1+\frac{1}{2}ix\big)}{\Gamma\big(1-\frac{1}{2}ix\big)}
\;\!
\frac{\Gamma\big(1-\frac{1}{2}(\alpha+ix)\big)}{\Gamma\big(1-\frac{1}{2}(\alpha-ix)\big)}
\;\!
\frac{\sin\big(\frac{\pi}{2}(\alpha+ix)\big)}{\sin\big(\frac{\pi}{2}(\alpha-ix)\big)} \\
&=&-e^{-i\pi\alpha/2}\;\!
\frac{\Gamma\big(1+\frac{1}{2}ix\big)}{\Gamma\big(1-\frac{1}{2}ix\big)}
\;\!
\frac{\Gamma\big(\frac{1}{2}(\alpha-ix)\big)}{\Gamma\big(\frac{1}{2}(\alpha+ix)\big)}\ .
\end{eqnarray*}
Clearly, both functions are continuous and take values in ${\mathbb T}$. Furthermore, since
$\varphi^-_m$ and $\tilde{\varphi}_m$ have limits at $\pm \infty$, so do the functions
$\varphi_m(\cdot,a)$. It follows that the variation of the arguments of the previous functions
can be defined. More generally, for any continuously differentiable function $\varphi:[-\infty,\infty]\to {\mathbb T}$ we set
\begin{equation*}
\mathrm{Var}[\varphi]:=\frac{1}{i}\int_{-\infty}^\infty \varphi(x)^{-1}\;\!\varphi'(x)\;\! \mathrm{d} x\ .
\end{equation*}
Let us first state a convenient formula. Its proof is given in the Appendix~\ref{appb}.
\begin{lemma}\label{variationarg}
Let $a,b>0$. For $\varphi_{a,b}(x):=\frac{\Gamma(a+ix)}{\Gamma(a-ix)}\frac{\Gamma(b-ix)}{
\Gamma(b+ix)}$ one has $\mathrm{Var}[\varphi_{a,b}]=2\pi(a-b)$.
\end{lemma}
As an easy corollary one obtains
\begin{corol}
The following equalities hold:
\begin{enumerate}
\item[i)] $\mathrm{Var}[\varphi_m(\cdot,0)]=2\delta_m^\alpha$ for $m \in \{0,-1\}$,
\item[ii)] $\mathrm{Var}[\varphi_0(\cdot,e^{i\pi\alpha}-e^{-i\pi\alpha})]=\pi\alpha$,
\item[iii)] $\mathrm{Var}[\varphi_{-1}(\cdot,e^{-i\pi\alpha}-e^{i\pi\alpha})]=\pi(2-\alpha)$.
\end{enumerate}
\end{corol}
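Both the lemma and the corollary lend themselves to a quick numerical sanity check; the sketch below (illustrative, assuming scipy's complex Gamma function) accumulates the phase increments of $\varphi_{a,b}$ over a large interval and compares $\mathrm{Var}[\varphi_{a,b}]/2\pi$ with $a-b$.

\begin{verbatim}
# Illustrative check of the lemma: Var[phi_{a,b}] = 2 pi (a - b).
import numpy as np
from scipy.special import gamma

def var_phi(a, b, X=100.0, n=100001):
    x = np.linspace(-X, X, n)
    f = (gamma(a + 1j*x) / gamma(a - 1j*x)
         * gamma(b - 1j*x) / gamma(b + 1j*x))
    return np.sum(np.angle(f[1:] / f[:-1]))  # accumulated phase increments

for a, b in [(0.5, 0.75), (1.0, 0.75), (1.3, 0.2)]:
    print((a, b), var_phi(a, b) / (2 * np.pi), "vs", a - b)
\end{verbatim}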
Let us now set
$$
\phi_1(C,D,\alpha):=\mathrm{Var}\big[\mathrm{det}\big(\Gamma_1(C,D,\alpha,\cdot)\big)\big]
$$
and
$$\phi_3(C,D,\alpha):=-\mathrm{Var}\big[\mathrm{det}\big(\Gamma_3(C,D,\alpha,\cdot)\big)\big].$$
The sign $"-"$ in the second definition comes from the sense of the computation of the winding
number: from $+\infty$ to $-\infty$.
By taking into account the above information and the expressions for $S^{C\!D}_\alpha(0)$ and $S^{C\!D}_\alpha(+\infty)$ recalled in Proposition \ref{propSurS}
one can prove:
\begin{prop}
One has
\begin{enumerate}
\item[i)] If $D=0$, then $\phi_3(C,D,\alpha)=0$,
\item[ii)] If $\mathrm{det}(D)\neq 0$, then $\phi_3(C,D,\alpha)=-2\pi$,
\item[iii)] If $\ker(D)= \left(\begin{smallmatrix}
\mathbb{C}\\ 0 \end{smallmatrix}\right)$ or if
$\dim[\ker(D)]=1$, $\alpha < 1/2$ and $\ker(D)\neq \left(\begin{smallmatrix}
0\\ \mathbb{C}
\end{smallmatrix}\right)$,
then $\phi_3(C,D,\alpha)=-2\pi(1-\alpha)$,
\item[iv)] If $\ker(D)= \left(\begin{smallmatrix}
0\\ \mathbb{C}
\end{smallmatrix}\right)$ or if
$\dim[\ker(D)]=1$, $\alpha > 1/2$ and $\ker(D)\neq \left(\begin{smallmatrix}
\mathbb{C}\\ 0
\end{smallmatrix}\right)$,
then $\phi_3(C,D,\alpha)=-2\pi\alpha$,
\item[v)] If
$\dim[\ker(D)]=1$ and $\alpha =1/2$, then $\phi_3(C,D,\alpha)=-\pi$.
\end{enumerate}
Furthermore,
\begin{enumerate}
\item[a)] If $C=0$, then $\phi_1(C,D,\alpha)=2\pi$,
\item[b)] If $\mathrm{det}(C)\ne 0$, then $\phi_1(C,D,\alpha)=0$,
\item[c)] If $\ker (C)=\left(\begin{smallmatrix}0\\ \mathbb{C} \end{smallmatrix}\right)$ or if $\dim[\ker(C)]=1$,
$\alpha>1/2$ and
$\ker (C)\ne \left(\begin{smallmatrix}\mathbb{C} \\ 0 \end{smallmatrix}\right)$,
then $\phi_1(C,D,\alpha)=2\pi(1-\alpha)$,
\item[d)] If $\ker (C)=\left(\begin{smallmatrix} \mathbb{C} \\ 0\end{smallmatrix}\right)$ or if
$\dim[\ker (C)]=1$, $\alpha<1/2$ and $\ker (C)\ne \left(\begin{smallmatrix}0 \\ \mathbb{C} \end{smallmatrix}\right)$,
then $\phi_1(C,D,\alpha)=2\pi\alpha$,
\item[e)] If $\dim[\ker (C)]=1$ and $\alpha=1/2$, then
$\phi_1(C,D,\alpha)=\pi$.
\end{enumerate}
\end{prop}
\begin{proof}
Statements i) to iv) as well as statements a) to d) are easily obtained simply by
taking the asymptotic values of $S^{C\!D}_\alpha(\cdot)$ into account. So let us concentrate
on the remaining statements.
Let $p=(p_1,p_2)\in {\mathbb C}^2$ with $\|p\|=1$, and let
\[
{\mathrm P}=\begin{pmatrix}
|p_2|^2 & - p_1 \Bar p_2\\
-\Bar p_1 p_2 & |p_1|^2
\end{pmatrix}
\]
be the orthogonal projection onto $p^\bot$. For $x \in {\mathbb R}$, let us also set
\begin{equation*}
\varphi({\mathrm P},x):=
\Big(
\begin{smallmatrix}\varphi^-_0(x) & 0 \\ 0 & \varphi^-_{-1}(x) \end{smallmatrix}\Big) +
\Big(
\begin{smallmatrix}\tilde{\varphi}_0(x) & 0 \\ 0 & \tilde{\varphi}_{-1}(x) \end{smallmatrix}\Big)
2{\mathrm P}
\Big(\begin{smallmatrix} i & 0 \\ 0 & -i \end{smallmatrix}\Big)
\end{equation*}
whose determinant is equal to
\begin{equation*}
g(x):= \varphi^-_0(x)\;\!\varphi^-_{-1}(x) + 2i\tilde{\varphi}_0(x)\;\!
\varphi^-_{-1}(x)\;\!|p_2|^2
-2i\varphi^-_0(x)\;\!\tilde{\varphi}_{-1}(x)|p_1|^2\ .
\end{equation*}
By taking the explicit expressions for these functions one obtains
\begin{eqnarray*}
g(x)&=& \frac{\Gamma\big(\frac{1}{2}(1+ix)\big)}{\Gamma\big(\frac{1}{2}(1-ix)\big)}
\;\!
\frac{\Gamma\big(\frac{1}{2}(\frac{3}{2}-ix)\big)}{\Gamma\big(\frac{1}{2}(\frac{3}{2}+ix)\big)}\;\!
\frac{\Gamma\big(\frac{1}{2}(2+ix)\big)}{\Gamma\big(\frac{1}{2}(2-ix)\big)}
\frac{\Gamma\big(\frac{1}{2}(\frac{3}{2}-ix)\big)}{\Gamma\big(\frac{1}{2}(\frac{3}{2}+ix)\big)}\;\!
\\
&&\cdot\, \Big(
1+ i\;\!e^{i\pi/4}\;\!\frac{e^{\pi x /2}}{\pi}\;\!
\Gamma\big({\textstyle \frac{1}{2}}({\textstyle \frac{1}{2}}-ix)\big)\Gamma\big({\textstyle \frac{1}{2}}({\textstyle\frac{3}{2}}+ix)\big)
\Big).
\end{eqnarray*}
Now, by setting $z=\frac{3}{4}+i\frac{x}{2}$ and by some algebraic computations one obtains
\begin{eqnarray*}
&&1+ie^{i\pi/4}\;\!\frac{e^{\pi x /2}}{\pi}\;\!
\Gamma\big({\textstyle \frac{1}{2}}({\textstyle \frac{1}{2}}-ix)\big)\;\!\Gamma\big({\textstyle \frac{1}{2}}({\textstyle\frac{3}{2}}+ix)\big)\\
&=& 1+\frac{i}{\pi}e^{-i\pi(z-1)}\;\!\Gamma(1-z)\;\!\Gamma(z)
=1-i\frac{e^{-i\pi z}}{\sin(\pi z)} \\
&=&-i\frac{\cos(\pi z)}{\sin(\pi z)}
= -i\frac{1}{\tan\big(\frac{3\pi}{4}+i\frac{\pi x}{2}\big)}\\
&=&-i\frac{\tanh(\frac{\pi x}{2})-i}{\tanh(\frac{\pi x}{2})+i}\ .
\end{eqnarray*}
Thus, one finally obtains that
\begin{equation*}
g(x)=-i\frac{\Gamma\big(\frac{1}{2}(1+ix)\big)}{\Gamma\big(\frac{1}{2}(1-ix)\big)}
\;\!
\frac{\Gamma\big(\frac{1}{2}(\frac{3}{2}-ix)\big)}{\Gamma\big(\frac{1}{2}(\frac{3}{2}+ix)\big)}\;\!
\frac{\Gamma\big(\frac{1}{2}(2+ix)\big)}{\Gamma\big(\frac{1}{2}(2-ix)\big)}\;\!
\frac{\Gamma\big(\frac{1}{2}(\frac{3}{2}-ix)\big)}{\Gamma\big(\frac{1}{2}(\frac{3}{2}+ix)\big)}
\frac{\tanh(\frac{\pi x}{2})-i}{\tanh(\frac{\pi x}{2})+i}\ .
\end{equation*}
Note that this function does not depend on the projection ${\mathrm P}$ at all.
Clearly one has
\[
\mathrm{Var}[g]=\mathrm{Var}[\varphi_{\frac{1}{2},\frac{3}{4}}\,] +\mathrm{Var}[\varphi_{1,\frac{3}{4}}\,]
+\mathrm{Var}\Big[ \frac{\tanh(\frac{\pi \cdot}{2})-i}{\tanh(\frac{\pi \cdot}{2})+i}\,\Big]=
-\dfrac{\pi}{2}+\dfrac{\pi}{2}+\pi=\pi.
\]
Now, by observing that $\phi_3(C,D,\alpha)=-\mathrm{Var}[g]$ in the case v), one concludes
that in this special case $\phi_3(C,D,\alpha)=-\pi$.
For the case e), observe that by setting ${\mathrm P}:=1-\Pi$, one easily obtains that
in this special case $\Gamma_1(C,D,\alpha,\cdot)=\varphi({\mathrm P},\cdot)$. It follows
that $\phi_1(C,D,\alpha)=\mathrm{Var}[g]$ and then $\phi_1(C,D,\alpha)=\pi$.
\end{proof}
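The value $\mathrm{Var}[g]=\pi$ obtained in the proof can be cross-checked numerically from the closed form for $g$; here is a small sketch (illustrative only).

\begin{verbatim}
# Illustrative cross-check: Var[g] = pi for the closed form of g above.
import numpy as np
from scipy.special import gamma

def g(x):
    r = (gamma(0.5 * (1 + 1j*x)) / gamma(0.5 * (1 - 1j*x))
         * (gamma(0.5 * (1.5 - 1j*x)) / gamma(0.5 * (1.5 + 1j*x)))**2
         * gamma(0.5 * (2 + 1j*x)) / gamma(0.5 * (2 - 1j*x)))
    t = np.tanh(0.5 * np.pi * x)
    return -1j * r * (t - 1j) / (t + 1j)

x = np.linspace(-60, 60, 120001)
v = g(x)
print(np.sum(np.angle(v[1:] / v[:-1])) / np.pi)   # close to 1
\end{verbatim}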
\subsection{Contribution of $\Gamma_2(C,D,\alpha,\cdot)$}
Recall first that $\Gamma_2(C,D,\alpha,\cdot)$ defined in \eqref{Gma2} is equal to $S^{C\!D}_\alpha(\cdot)$.
We are interested here in the phase of $\mathrm{det} \big(S^{C\!D}_\alpha(\kappa)\big)$ acquired
as $\kappa$ runs from $0$ to $+\infty$; we denote this phase by $\phi_2(C,D,\alpha)$.
Note that if $\mathrm{det} \big(S^{C\!D}_\alpha(\kappa)\big)=\frac{\bar f(\kappa)}{f(\kappa)}$ for a non-vanishing continuous function $f:{\mathbb R}_+ \to {\mathbb C}^*$, then
\begin{equation*}
\phi_2(C,D,\alpha)=-2\big(\arg f(+\infty)-\arg f(0)\big)\ ,
\end{equation*}
where $\arg:{\mathbb R}_+ \to {\mathbb R}$ is a continuous function defined by the argument of $f$.
In the sequel, we shall also use the notation $\theta:{\mathbb C}^*\to (-\pi,\pi]$ for the principal argument of a complex number different from $0$.
Now, let us consider $\kappa>0$ and set $S(\kappa):=S^{C\!D}_\alpha(\kappa)$.
For shortness, we also set $L:=\frac{\pi}{2\sin(\pi\alpha)}\;\!C$ and
\begin{gather*}
B:=\left(\begin{smallmatrix}
b_1(\kappa) & 0 \\
0 & b_2(\kappa)
\end{smallmatrix}\right)=\left(\begin{smallmatrix}
\frac{\Gamma(1-\alpha)}{2^\alpha}\;\!\kappa^{\alpha} & 0 \\
0 & \frac{ \Gamma(\alpha)}{2^{1-\alpha}}\;\!\kappa^{(1-\alpha)}
\end{smallmatrix}\right),
\quad
\Phi:=\left(\begin{smallmatrix}
e^{-i\pi\alpha/2} & 0 \\
0 & e^{-i\pi(1-\alpha)/2}
\end{smallmatrix}\right),\quad
J:=\left(\begin{smallmatrix}
1 & 0 \\ 0 & -1
\end{smallmatrix}\right).
\end{gather*}
Note that the matrices $B$, $\Phi$ and $J$ commute with each other, that the matrix $B$
is self-adjoint and invertible, and that $J$ and $\Phi$ are unitary.
I) If $D=0$, then $S^{C\!D}_\alpha$ is constant and $\phi_2(C,D,\alpha)=0$.
II) Let us assume $\mathrm{det}(D)\neq 0$, {\it i.e.}~$D$ is invertible.
Without loss of generality, we may assume that $D=1$, as explained in \cite[Sec.~3]{PR}, and that $C$ and hence $L$
are self-adjoint. We write $C=(c_{jk})$, $L=(l_{jk})$ and we then use the expression
\begin{equation}\label{eq-SL}
S(\kappa)=\Phi \;\!\frac{B^{-1}\;\! L\;\!B^{-1} +\cos(\pi\alpha)J
+i\sin(\pi\alpha)}{B^{-1}\;\! L\;\!B^{-1} +\cos(\pi\alpha)J -i\sin(\pi\alpha)}
\;\!\Phi\;\!J\ ,
\end{equation}
derived in \cite{PR}. By direct calculation one obtains
$\mathrm{det} \big(S(\kappa)\big)= \frac{\bar f(\kappa)}{f(\kappa)}$
with
\begin{eqnarray}\label{f1}
\nonumber f(\kappa) &=&
\mathrm{det}\big(B^{-1} L B^{-1} +\cos(\pi\alpha)J -i\sin(\pi\alpha)\big) \\
\nonumber &=&\mathrm{det}(L)\;\! b_1^{-2}(\kappa) \;\!b_2^{-2}(\kappa) -1 +\cos(\pi\alpha) \big(l_{22}\;\!
b_2^{-2}(\kappa) - l_{11} \;\!b_1^{-2}(\kappa)\big)\\
&&-i\sin(\pi\alpha) \big(l_{11} \;\!b_1^{-2}(\kappa)+l_{22} \;\!b_2^{-2}(\kappa)\big)
\end{eqnarray}
and $f$ is non-vanishing, being the determinant of an invertible matrix.
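Before the case-by-case discussion, note that $\phi_2$ is easy to evaluate numerically from \eqref{f1} by tracking a continuous argument of $f$; the sketch below is illustrative (the sample matrix $C$ is our own choice and falls into case II.1 below).

\begin{verbatim}
# Illustrative sketch: phi_2 = -2 (arg f(+inf) - arg f(0)) via a
# continuous argument of f along a logarithmic grid in kappa.
import numpy as np
from scipy.special import gamma

def f_of_kappa(kappa, alpha, L):
    b1 = gamma(1 - alpha) / 2**alpha * kappa**alpha
    b2 = gamma(alpha) / 2**(1 - alpha) * kappa**(1 - alpha)
    l11, l22, detL = L[0, 0], L[1, 1], np.linalg.det(L)
    return (detL / (b1 * b2)**2 - 1
            + np.cos(np.pi * alpha) * (l22 / b2**2 - l11 / b1**2)
            - 1j * np.sin(np.pi * alpha) * (l11 / b1**2 + l22 / b2**2))

alpha = 0.3
C = np.array([[2.0, 0.5], [0.5, 1.0]])     # tr C > 0, det C > 0 (case II.1)
L = np.pi / (2 * np.sin(np.pi * alpha)) * C
kappas = np.logspace(-6, 6, 100001)
vals = f_of_kappa(kappas, alpha, L)        # vectorized over the grid
phi2 = -2 * np.sum(np.angle(vals[1:] / vals[:-1]))
print(phi2 / np.pi)                         # close to 2
\end{verbatim}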
For the computation of $\phi_2(C,D,\alpha)$ we shall have to consider several cases. We first assume that $\mathrm{det}(C)\neq 0$, which is equivalent to $\mathrm{det}(L)\neq 0$. In that case one clearly has $\mathrm{det}\big(S^{C\!D}_\alpha(0)\big) = \mathrm{det}\big(S^{C\!D}_\alpha(+\infty)\big)$, and then $\phi_2(C,D,\alpha)$ will be a multiple of $2\pi$. Furthermore, note that $\theta \big(f (+\infty)\big)= \pi$ and that $\theta\big( f(0)\big)=0$ if $\mathrm{det}(L)>0$ and $\theta\big( f(0)\big)=\pi$ if $\mathrm{det}(L)<0$.
Assuming that $l_{11}\;\!l_{22}\geq 0$ (which means that $\Im f$ is either non-negative or non-positive, and its sign is opposite to that of $\mathop{\mathrm{tr}}(L)$), one has the following cases:
\begin{enumerate}
\item[II.1)] If $\mathop{\mathrm{tr}} (C)>0$ and $\mathrm{det}(C)>0$, then $\Im f<0$ and
$\phi_2(C,D,\alpha)=2\pi$,
\item[II.2)] If $\mathop{\mathrm{tr}} (C)>0$ and $\mathrm{det}(C)<0$, then $\Im f<0$ and
$\phi_2(C,D,\alpha)=0$,
\item[II.3)] If $\mathop{\mathrm{tr}} (C)<0$ and $\mathrm{det}(C)>0$, then $\Im f>0$ and
$\phi_2(C,D,\alpha)=-2\pi$,
\item[II.4)] If $\mathop{\mathrm{tr}} (C)<0$ and $\mathrm{det}(C)<0$, then $\Im f>0$ and
$\phi_2(C,D,\alpha)=0$,
\item[II.5)] If $c_{11}=c_{22}=0$ (automatically $\mathrm{det}(C)<0$), then $f$ is real and non-vanishing, hence $\phi_2(C,D,\alpha)=0$.
\end{enumerate}
Now, if $l_{11} l_{22}<0$ the main difference is that the parameter $\alpha$
has to be taken into account. On the other hand, one has $\mathrm{det}(L)<0$ which implies that $\arg f(+\infty)-\arg f(0)$ has to be a multiple of $2\pi$. For the computation of this difference, observe that the equation $\Im f(\kappa)=0$ (for $\kappa\ge 0$) is equivalent to
\begin{equation}\label{eq-kkk}
\dfrac{b_1^{-2}(\kappa)}{b_2^{-2}(\kappa)}=-\dfrac{l_{22}}{l_{11}}\Longleftrightarrow
\kappa^{2\alpha-1} = 2^{2\alpha-1} \dfrac{\Gamma(\alpha)}{\Gamma(1-\alpha)}\sqrt{-\dfrac{l_{11}}{l_{22}}}.
\end{equation}
For $\alpha\ne 1/2$ this equation has a unique solution $\kappa_0$, and it follows that the sign of $\Im f(\kappa)$ will be different for $\kappa<\kappa_0$ and for $\kappa>\kappa_0$ (and will depend on $\alpha$ and on the relative sign of $l_{11}$ and $l_{22}$).
Let us now estimate $\Re f(\kappa_0)$. We have
\begin{eqnarray*}
\Re f(\kappa)&=&\mathrm{det} (L)\;\! b_1^{-2}(\kappa)\;\! b_2^{-2}(\kappa) -1 +\cos(\pi\alpha) \big(l_{22} \;\!b_2^{-2}(\kappa) - l_{11}
\;\!b_1^{-2}(\kappa)\big)\\
&\leq& -|l_{11}\;\!l_{22}|\;\! b_1^{-2}(\kappa) \;\!b_2^{-2}(\kappa) -1 +|\cos(\pi\alpha)|\big( |l_{22}|\;\! b_2^{-2}(\kappa) + |l_{11}|\;\! b_1^{-2}(\kappa)\big)\\
&=& -\Big(|l_{11}\;\!l_{22}|\;\! b_1^{-2}(\kappa) \;\!b_2^{-2}(\kappa) +1 -|\cos(\pi\alpha)|\big( |l_{22}| \;\!b_2^{-2}(\kappa) + |l_{11}| \;\! b_1^{-2}(\kappa)\big)\Big)\\
&=&-\big(1-|\cos(\pi\alpha)|\big)\big(|l_{11}\;\!l_{22}|\;\! b_1^{-2}(\kappa)\;\! b_2^{-2}(\kappa) +1\big)\\
&&-|\cos(\pi\alpha)| \big( |l_{11}|\;\!b_1^{-2}(\kappa)-1\big)\big( |l_{22}|\;\!b_2^{-2}(\kappa)-1\big).
\end{eqnarray*}
Hence using \eqref{eq-kkk} and the equality $-\frac{l_{22}}{l_{11}}= \frac{|l_{22}|}{|l_{11}|}$ one obtains
\[
\Re f(\kappa_0)\le -\big(1-|\cos(\pi\alpha)|\big)\big(|l_{11}l_{22}|\, b_1^{-2}(\kappa_0) b_2^{-2}(\kappa_0) +1\big)
-|\cos(\pi\alpha)| \big( |l_{22}|b_2^{-2}(\kappa_0)-1\big)^2<0.
\]
This estimate implies that $0$ is not contained in the interior of the curve $f({\mathbb R}_+)$, which means that
$\arg f(+\infty)-\arg f(0)=0$ for all $\alpha \neq 1/2$.
For the special case $\alpha=1/2$, the equation \eqref{eq-kkk} either has no solution or holds for all $\kappa\in {\mathbb R}_+$. In the former situation, $\Im f$ always has the same sign, which means that $\arg f(+\infty)-\arg f(0)=0$. In the latter situation, $f$ is real, and obviously $\arg f(+\infty)-\arg f(0)=0$.
In summary, one has obtained:
\begin{enumerate}
\item[II.6)] If $c_{11}\;\!c_{22}<0$, then $\phi_2(C,D,\alpha)=0$.
\end{enumerate}
Let us now assume that $\mathrm{det} (C)=0$ but $C\ne 0$, {\it i.e.}~$\mathrm{det}(L)=0$ but $L\neq 0$. In that case one simply has
\[
f(\kappa)= -1 +\cos(\pi\alpha) \big(l_{22}\;\! b_2^{-2}(\kappa) - l_{11}\;\! b_1^{-2}(\kappa)\big)
-i\sin(\pi\alpha) \big(l_{11} \;\!b_1^{-2}(\kappa)+l_{22} \;\!b_2^{-2}(\kappa)\big).
\]
Furthermore, one always has $l_{11}l_{22}\ge 0$, which means
that $\Im f$ is either non-negative or non-positive.
Then, since $\theta \big(f(+\infty)\big)=\pi$, it will be sufficient
to calculate the value $\theta \big(f(0)\big)$.
i) Assume first that $l_{11}=0$, which automatically implies that $l_{22}\neq 0$ and $l_{12}=l_{21}=0$. Then one has
\[
f(\kappa)= -1 +\cos(\pi\alpha) \;\!l_{22}\;\! b_2^{-2}(\kappa) -i\sin(\pi\alpha)\;\! l_{22}\;\! b_2^{-2}(\kappa)
\]
and
\[
\theta \big(f(0)\big)=\begin{cases}
-\pi\alpha & \text{ if } l_{22}>0\\
\pi(1-\alpha) & \text{ if } l_{22}<0
\end{cases}\ .
\]
By taking into account the sign of $\Im f$, one then obtains
\[
\arg f(+\infty)-\arg f(0)=
\begin{cases}
-\pi(1-\alpha) & \text{ if } l_{22}>0\\
\pi \alpha & \text{ if } l_{22}<0
\end{cases}\ .
\]
ii) Similarly, if we assume now that $l_{22}=0$, we then have $l_{11}\ne 0$, $l_{12}=l_{21}=0$ and
\[
f(\kappa)= -1 -\cos(\pi\alpha) \;\!l_{11} \;\!b_1^{-2}(\kappa)
-i\sin(\pi\alpha) \;\!l_{11}\;\! b_1^{-2}(\kappa).
\]
It then follows that
\[
\theta \big(f(0)\big)=\begin{cases}
\pi\alpha & \text{ if } l_{11}<0\\
-\pi(1-\alpha) & \text{ if } l_{11}>0
\end{cases}
\]
and
\[
\arg f(+\infty)-\arg f(0)=
\begin{cases}
\pi(1-\alpha) & \text{ if } l_{11}<0\\
-\pi \alpha & \text{ if } l_{11}>0
\end{cases}\ .
\]
iii) Assume now that $l_{11}\;\!l_{22}\ne 0$ (which means automatically $l_{11}l_{22}>0$) and that $\alpha=1/2$.
Since $b_1(\kappa)=b_2(\kappa)=:b(\kappa)$ one then easily observes that
$f(\kappa)=-1-i\mathop{\mathrm{tr}} (L)\;\! b^{-2}(\kappa)$,
$\theta \big(f(0)\big)=-\frac{\pi}{2} \;\!\mathrm{sign} \big(\mathop{\mathrm{tr}} (L)\big)$ and
$\arg f(+\infty)-\arg f(0)=-\frac{\pi}{2} \;\!\mathrm{sign} \big(\mathop{\mathrm{tr}} (L)\big)$.
iv) Assume that $l_{11}\;\!l_{22}\ne 0$ and that $\alpha<1/2$.
In this case one can rewrite
\[
f(\kappa)= -1 +\cos(\pi\alpha)\;\! b_2^{-2}(\kappa)\Big(l_{22} - l_{11} \frac{b_2^2(\kappa)}{b_1^2(\kappa)}\Big)
-i\sin(\pi\alpha)\;\!b_2^{-2}(\kappa) \Big(l_{22}+l_{11} \frac{b_2^2(\kappa)}{b_1^2(\kappa)}\Big).
\]
Since $b_2(\kappa)/b_1(\kappa)\to 0$ as $\kappa\to 0+$, one has the same limit values and phases as in i).
v) Similarly, if $l_{11}\;\!l_{22}\ne 0$ and $\alpha>1/2$, we have the same limit and phases as in ii).
In summary, if $\mathrm{det}(C)=0$ and $C\neq 0$ one has obtained:
\begin{enumerate}
\item[II.7)] If $c_{11}=0$ and $\mathop{\mathrm{tr}}(C)>0$, or if $c_{11}\;\!c_{22}\neq 0$, $\mathop{\mathrm{tr}}(C)>0$ and $\alpha<1/2$, then $\phi_2(C,D,\alpha)=2\pi(1-\alpha)$,
\item[II.8)] If $c_{11}=0$ and $\mathop{\mathrm{tr}}(C)<0$, or if $c_{11}\;\!c_{22}\neq 0$, $\mathop{\mathrm{tr}}(C)<0$ and $\alpha<1/2$, then $\phi_2(C,D,\alpha)=-2\pi\alpha$,
\item[II.9)] If $c_{22}=0$ and $\mathop{\mathrm{tr}}(C)>0$, or if $c_{11}\;\!c_{22}\neq 0$, $\mathop{\mathrm{tr}}(C)>0$ and $\alpha>1/2$, then $\phi_2(C,D,\alpha)=2\pi\alpha$,
\item[II.10)] If $c_{22}=0$ and $\mathop{\mathrm{tr}}(C)<0$, or if $c_{11}\;\!c_{22}\neq 0$, $\mathop{\mathrm{tr}}(C)<0$ and $\alpha>1/2$, then $\phi_2(C,D,\alpha)=-2\pi(1-\alpha)$,
\item[II.11)] If $c_{11}\;\!c_{22}\neq 0$, $\mathop{\mathrm{tr}}(C)>0$ and $\alpha = 1/2$, then $\phi_2(C,D,\alpha)=\pi$,
\item[II.12)] If $c_{11}\;\!c_{22}\neq 0$, $\mathop{\mathrm{tr}}(C)<0$ and $\alpha = 1/2$, then $\phi_2(C,D,\alpha)=-\pi$.
\end{enumerate}
III) If $C=0$, then $S^{C\!D}_\alpha$ is constant and $\phi_2(C,D,\alpha)=0$.
IV) We shall now consider the situation $\mathrm{det}(D)=0$ but $D\ne 0$.
Obviously, $\ker(D)$ is of dimension $1$. So let $p=(p_1,p_2)$ be a vector
in $\ker(D)$ with $\|p\|=1$.
Let us also introduce
\begin{equation*}
c(\kappa)=b_1^2(\kappa)\;\! |p_2|^2\;\!e^{-i\pi\alpha} - b_2^2(\kappa)\;\! |p_1|^2\;\!e^{i\pi\alpha}
\end{equation*}
and
\begin{equation*}
X_-:= \big(b_1^2(\kappa)\;\!|p_2|^2 - b_2^2(\kappa)\;\!|p_1|^2\big),\qquad
X_+:= \big(b_1^2(\kappa)\;\!|p_2|^2 + b_2^2(\kappa)\;\!|p_1|^2\big)\ .
\end{equation*}
In that case it has been shown in \cite{PR} that
\begin{equation*}
S= \Phi\;\!\big(c(\kappa)+\ell\big)^{-1} M(\kappa)\;\!\Phi\;\!J\ ,
\end{equation*}
where
\[
M(\kappa):=
\begin{pmatrix}
e^{i\pi\alpha}\;\!X_- + \ell & -2\;\!i\;\!\sin(\pi\alpha)\;\!b_1(\kappa)\;\!b_2(\kappa)\;\!p_1\;\!\bar p_2 \\
-2\;\!i\;\!\sin(\pi\alpha)\;\!b_1(\kappa)\;\!b_2(\kappa)\;\!\bar p_1\;\! p_2 &
e^{-i\pi\alpha}\;\!X_- + \ell
\end{pmatrix}
\]
and $\ell$ is a real number which will be specified below.
Note that $\mathrm{det} \big(M(\kappa)\big)=|c(\kappa)+\ell|^2$ which ensures that $S$ is a unitary operator.
Therefore, by setting
\begin{eqnarray*}
g(\kappa)&:=&c(\kappa)+\ell\\
&=&\cos(\pi\alpha)\Big(
b_1^2(\kappa)\;\!|p_2|^2-b_2^2(\kappa)\;\!|p_1|^2
\Big)+\ell
-i\sin(\pi\alpha)
\Big(
b_1^2(\kappa)\;\!|p_2|^2+b_2^2(\kappa)\;\!|p_1|^2
\Big),
\end{eqnarray*}
one has
\[
\phi_2(C,D,\alpha)=-2\big(\arg g(+\infty)- \arg g(0)\big),
\]
where $\arg:{\mathbb R}_+ \to {\mathbb R}$ is a continuous function defined by the argument of $g$.
Note already that we always have $\Im g< 0$.
We first consider the special case $\alpha=1/2$. In that case we have $b_1(\kappa)=b_2(\kappa)=:b(\kappa)$, and then
\[
g(\kappa)=\ell-i b^2(\kappa).
\]
If $\ell\ne 0$, we have $\theta \big(g(0)\big)=\theta(\ell)$ and $\theta \big(g(+\infty)\big)=-\pi/2$.
Therefore
\[
\arg g(+\infty)-\arg g(0)=\begin{cases}
-\pi/2 & \text{if } \ell>0\\
\pi/2 & \text{if } \ell<0
\end{cases} \ .
\]
If $\ell=0$, then $g$ is pure imaginary, hence $\arg g(+\infty)-\arg g(0)=0$.
In summary, for $\mathrm{det}(D)=0$ but $D \neq 0$, one has already obtained:
\begin{enumerate}
\item[IV.1)] If $\ell>0$ and $\alpha=1/2$, then $\phi_2(C,D,\alpha)=\pi$,
\item[IV.2)] If $\ell=0$ and $\alpha=1/2$, then $\phi_2(C,D,\alpha)=0$,
\item[IV.3)] If $\ell<0$ and $\alpha=1/2$, then $\phi_2(C,D,\alpha)=-\pi$.
\end{enumerate}
Let us now consider the case $\alpha<1/2$, and assume first that $\ell \ne 0$. It follows that $\theta \big(g(0)\big)=\theta (\ell)$. To calculate $\theta \big(g(+\infty)\big)$ one has to consider two subcases.
So, on the one hand let us assume in addition that $p_1\ne 0$. Then one has
\[
g(\kappa)= \ell -
\cos(\pi\alpha)\;\!b_2^2(\kappa)\Big(|p_1|^2-\dfrac{b_1^2(\kappa)}{b_2^2(\kappa)}\;\!|p_2|^2\Big)
-i\sin(\pi\alpha)\;\!b_2^2(\kappa)\Big(|p_1|^2+\dfrac{b_1^2(\kappa)}{b_2^2(\kappa)}\;\!|p_2|^2\Big)
.
\]
Since $b_1(\kappa)/b_2(\kappa)\to 0$ as $\kappa\to +\infty$,
one obtains $\theta \big(g(+\infty)\big)=-\pi(1-\alpha)$
and
\[
\arg g(+\infty)-\arg g(0)=\begin{cases}
-\pi(1-\alpha), & \text{if } \ell>0\\
\pi\alpha, & \text{if } \ell<0
\end{cases} \ .
\]
On the other hand, if $p_1=0$, then one has
\[
g(\kappa)=\ell + b_1^2(\kappa)\big( \cos(\pi\alpha) -i\sin(\pi\alpha)\big),
\]
which implies that $\theta \big(g(+\infty)\big)=-\pi\alpha$ and that
\[
\arg g(+\infty)-\arg g(0)=\begin{cases}
-\pi\alpha, & \text{if } \ell>0\\
\pi(1-\alpha), & \text{if } \ell<0
\end{cases}\ .
\]
Now, let us assume that $\ell=0$. In this case the above limits for $\kappa \to + \infty$ still hold, so we only need
to calculate $\theta \big(g(0)\big)$. Firstly, if $p_2\ne 0$, we have
\[
g(\kappa)= \cos(\pi\alpha)\;\!b_1^2(\kappa)\Big( |p_2|^2-\dfrac{b_2^2(\kappa)}{b_1^2(\kappa)}\;\!|p_1|^2\Big)
-i\sin(\pi\alpha)\;\!b_1^2(\kappa) \Big(|p_2|^2+\dfrac{b_2^2(\kappa)}{b_1^2(\kappa)}\;\!|p_1|^2 \Big),
\]
and since $b_2(\kappa)/b_1(\kappa)\to 0$ as $\kappa \to 0+$ it follows that $\theta \big(g(0)\big)=-\pi\alpha$. Secondly, if $p_2=0$, then
\[
g(\kappa)=-b_2^2(\kappa)\big(\cos(\pi\alpha) +i\sin(\pi\alpha)\big),
\]
and we get $\theta \big(g(0)\big)=-\pi(1-\alpha)$.
In summary, for $\mathrm{det}(D)=0$, $D\neq 0$ and $\alpha<1/2$, we have
obtained
\begin{enumerate}
\item[IV.4)] if $\ell <0$ and $p_1\neq 0$, then $\phi_2(C,D,\alpha)=-2\pi\alpha$,
\item[IV.5)] if $\ell<0 $ and $p_1= 0$, then $\phi_2(C,D,\alpha)=-2\pi(1-\alpha)$,
\item[IV.6)] if $\ell>0 $ and $p_1\neq 0$, then $\phi_2(C,D,\alpha)=2\pi(1-\alpha)$,
\item[IV.7)] if $\ell >0$ and $p_1= 0$, then $\phi_2(C,D,\alpha)=2\pi\alpha$,
\item[IV.8)] if $\ell=0 $, $p_1\neq 0$ and $p_2\neq 0$ then $\phi_2(C,D,\alpha)=2\pi(1-2\alpha)$,
\item[IV.9)] if $\ell=0$ and $p_1= 0$ or if
$\ell=0$ and $p_2=0$, then $\phi_2(C,D,\alpha)=0$.
\end{enumerate}
The case $\mathrm{det}(D)=0$, $D\neq 0$ and $\alpha>1/2$ can be treated analogously. We simply state the results:
\begin{enumerate}
\item[IV.10)] if $\ell <0$ and $p_2\neq 0$, then $\phi_2(C,D,\alpha)=-2\pi(1-\alpha)$,
\item[IV.11)] if $\ell<0 $ and $p_2= 0$, then $\phi_2(C,D,\alpha)=-2\pi\alpha$,
\item[IV.12)] if $\ell>0 $ and $p_2\neq 0$, then $\phi_2(C,D,\alpha)=2\pi\alpha$,
\item[IV.13)] if $\ell >0$ and $p_2= 0$, then $\phi_2(C,D,\alpha)=2\pi(1-\alpha)$,
\item[IV.14)] if $\ell=0 $, $p_1\neq 0$ and $p_2\neq 0$ then $\phi_2(C,D,\alpha)=-2\pi(1-2\alpha)$,
\item[IV.15)] if $\ell=0$ and $p_1= 0$ or if
$\ell=0$ and $p_2=0$, then $\phi_2(C,D,\alpha)=0$.
\end{enumerate}
Let us finally recall the relationship between the constant $\ell$ and the matrices $C$ and $D$ in the case IV).
As explained before, we can always assume that $C=(1-U)/2$ and $D=i(1+U)/2$ for some $U\in U(2)$. Recall that in deriving the equalities (IV.1)--(IV.15)
we assumed $\dim[\ker(D)]=1$, {\it i.e.}~$-1$ is an eigenvalue of $U$ of multiplicity $1$.
Let $e^{i\theta}$, $\theta\in(-\pi,\pi)$ be the other eigenvalue of $U$.
Then by the construction explained in \cite{PR}, one has
\[
\ell = \dfrac{\pi}{2\sin(\pi\alpha)} \,\dfrac{1-e^{i\theta}}{i(1+e^{i\theta})}=
- \dfrac{\pi}{2\sin(\pi\alpha)} \frac{\sin \big(\frac{\theta}{2}\big)}{\cos \big(\frac{\theta}{2}\big)}.
\]
On the other hand, the eigenvalues of the matrix $CD^*=i(U-U^*)/4$
are $\lambda_1=0$ and
\[
\lambda_2=i(e^{i\theta}-e^{-i\theta})/4=-{\textstyle \frac{1}{2}} \sin(\theta)=
- \sin\big({\textstyle \frac{\theta}{2}}\big)\;\!\cos\big( {\textstyle \frac{\theta}{2}}\big)\ .
\]
It follows that $\lambda_2$ and $\ell$ have the same sign.
Therefore, in (IV.1)--(IV.15) one has:
$\ell <0$ if $CD^*$ has one zero eigenvalue and one negative eigenvalue,
$\ell =0$ if $CD^*=0$ and $\ell>0$ if $CD^*$ has one zero eigenvalue and one positive eigenvalue.
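This sign relation is immediate to verify numerically; a short sketch (illustrative, assuming numpy):

\begin{verbatim}
# Illustrative check: sign(ell) = sign(lambda_2) for U = diag(-1, e^{i th}).
import numpy as np

alpha = 0.3
for theta in (-2.0, -0.5, 0.0, 0.5, 2.0):
    ell = -np.pi / (2 * np.sin(np.pi * alpha)) * np.tan(theta / 2)
    lam2 = -np.sin(theta / 2) * np.cos(theta / 2)
    assert np.sign(round(ell, 12)) == np.sign(round(lam2, 12))
print("sign(ell) = sign(lambda_2) on the sampled thetas")
\end{verbatim}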
\subsection{Case-by-case results}\label{Sectionfinal}
In this section we finally collect all previous results and prove the case-by-case version of Levinson's theorem. The interest of this analysis is that the contribution of the $0$-energy operator $\Gamma_1(C,D,\alpha,\cdot)$ and the contribution of the operator $\Gamma_3(C,D,\alpha,\cdot)$ at $+\infty$-energy are explicit.
Here, Levinson's theorem corresponds to the equality between the number of bound states of $H_\alpha^{C\!D}$ and $-\frac{1}{2\pi} \sum_{j=1}^4\phi_j(C,D,\alpha)$. This is proved again by comparing column $3$ with column $7$ (the contribution of $\Gamma_4(C,D,\alpha,\cdot)$ defined in \eqref{Gma4} is always trivial).
For simplicity, we shall write $H$ for $H_\alpha^{C\!D}$ and $\phi_j$ for $\phi_j(C,D,\alpha)$. We also recall that the number $\#\sigma_p(H)$ of eigenvalues of $H$ is equal to the number of negative eigenvalues of the matrix $CD^*$ \cite[Lem.~4]{PR}.
We consider first the very special situations:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
No & Conditions & $\#\sigma_p(H)$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\sum_j\phi_j$
\\ \hline\hline
I &$ D=0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $&$ 0 $\\\hline
III & $C=0 $&$ 0 $&$ 2\pi $&$ 0 $&$
-2\pi $&$ 0$ \\ \hline
\end{tabular}
\end{center}
Now, if $\mathrm{det}(D)\neq 0$ and $\mathrm{det}(C)\ne 0$, we set $E:=D^{-1}C=:(e_{jk})$ and obtain:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
No & Conditions & $\#\sigma_p(H)$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\sum_j\phi_j$
\\ \hline\hline
II.1 &$ e_{11} e_{22}\ge 0$, $\mathop{\mathrm{tr}}(E)>0$, $\mathrm{det}(E)>0$ &$ 0 $&$ 0 $&$ 2\pi $&$ -2\pi $&$ 0$ \\\hline
II.2 &$ e_{11} e_{22}\ge 0$, $\mathop{\mathrm{tr}}(E)>0$, $\mathrm{det}(E)<0$ &$ 1$ &$ 0 $&$ 0 $&$ -2\pi $&$ -2\pi $\\\hline
II.3 &$ e_{11} e_{22}\ge 0$, $\mathop{\mathrm{tr}}(E)<0$, $\mathrm{det}(E)>0$ &$ 2 $&$ 0 $&$ -2\pi $&$ -2\pi $&$ -4\pi$ \\\hline
II.4 &$ e_{11} e_{22}\ge 0$, $\mathop{\mathrm{tr}}(E)<0$, $\mathrm{det}(E)<0$ &$ 1 $&$ 0 $&$ 0 $&$ -2\pi $&$ -2\pi$ \\\hline
II.5 &$ e_{11}=e_{22}=0,\mathrm{det}(E)< 0$&$ 1 $&$ 0 $&$ 0 $&$ -2\pi $&$ -2\pi$ \\\hline
II.6 &$ e_{11}\;\! e_{22}<0 $&$ 1 $&$ 0 $&$ 0 $&$ -2\pi $&$ -2\pi$ \\\hline
\end{tabular}
\end{center}
If $\mathrm{det}(D)\neq 0$, $\mathrm{det}(C)=0$ and if we still set $E:=D^{-1}C$ one has:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
No & Conditions & $\#\sigma_p(H)$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\sum_j\phi_j$
\\ \hline\hline
II.7.a &$ e_{11}=0, \mathop{\mathrm{tr}}(E)>0 $&$ 0 $&$ 2\pi\alpha $&
$ 2\pi(1-\alpha) $&$ -2\pi $&$ 0 $\\\hline
II.7.b &$e_{11}\;\!e_{22}\neq 0,\mathop{\mathrm{tr}}(E)>0,\alpha<1/2 $&$ 0 $&$ 2\pi\alpha $&
$ 2\pi(1-\alpha) $&$ -2\pi $&$ 0 $\\\hline
II.8.a &$ e_{11}=0, \mathop{\mathrm{tr}}(E)<0 $&$ 1 $&$ 2\pi\alpha $&$ -2\pi\alpha $&$ -2\pi $&$ -2\pi$ \\\hline
II.8.b &$ e_{11}\;\!e_{22}\neq0, \mathop{\mathrm{tr}}(E)<0,\alpha<1/2 $&$ 1 $&$ 2\pi\alpha $&$ -2\pi\alpha $&$ -2\pi $&
$ -2\pi$ \\\hline
II.9.a &$ e_{22}=0, \mathop{\mathrm{tr}}(E)>0 $&$ 0 $&$ 2\pi(1-\alpha)$&$ 2\pi\alpha $&$ -2\pi $&$ 0$ \\\hline
II.9.b &$ e_{11}\;\!e_{22}\neq0, \mathop{\mathrm{tr}}(E)>0,\alpha>1/2 $&$ 0 $&$ 2\pi(1-\alpha)$&$ 2\pi\alpha $&
$ -2\pi $&$ 0$ \\\hline
II.10.a &$ e_{22}=0, \mathop{\mathrm{tr}}(E)<0 $&$ 1 $&$ 2\pi(1-\alpha) $&$ -2\pi(1-\alpha) $&$ -2\pi $&$ -2\pi $\\\hline
II.10.b &$ e_{11}\;\!e_{22}\neq0, \mathop{\mathrm{tr}}(E)<0,\alpha>1/2 $&$ 1 $&$ 2\pi(1-\alpha) $&$ -2\pi(1-\alpha) $&
$ -2\pi $&$ -2\pi $\\\hline
II.11 &$ e_{11}\;\!e_{22}\neq0,\mathop{\mathrm{tr}}(E)> 0 ,\alpha=1/2$&$ 0 $&$ \pi $&$ \pi $&$ -2\pi $&$ 0$ \\\hline
II.12 &$ e_{11}\;\! e_{22}\neq 0,\mathop{\mathrm{tr}}(E)<0,\alpha=1/2 $&$ 1 $&$ \pi $&$ -\pi $&$ -2\pi $&$ -2\pi$ \\\hline
\end{tabular}
\end{center}
On the other hand, if $\dim[\ker(D)]=1$ and $\alpha=1/2$ one has:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
No & Conditions & $\#\sigma_p(H)$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\sum_j\phi_j$
\\ \hline\hline
IV.1 &$ \ell>0 $&$ 0 $&$ 0 $&$ \pi $&$ -\pi $&$ 0$ \\\hline
IV.2 &$ \ell=0 $&$ 0 $&$ \pi $&$ 0 $&$ -\pi $&$ 0$ \\\hline
IV.3 &$ \ell<0 $&$ 1 $&$ 0 $&$ -\pi $&$ -\pi$&$ -2\pi$ \\\hline
\end{tabular}
\end{center}
If $\dim[\ker(D)]=1$, $\alpha<1/2$ and if $(p_1,p_2)\in \ker(D)$ one obtains:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
No & Conditions & $\#\sigma_p(H)$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\sum_j\phi_j$
\\ \hline\hline
IV.4 &$ \ell<0,p_1\neq 0 $&$ 1 $&$ 0 $&$ -2\pi\alpha $&$ -2\pi(1-\alpha) $&$ -2\pi$ \\\hline
IV.5 &$ \ell<0,p_1=0 $&$ 1 $&$ 0 $&$ -2\pi(1-\alpha) $&$ -2\pi\alpha $&$ -2\pi$ \\\hline
IV.6 &$ \ell>0,p_1\neq 0 $&$ 0 $&$ 0 $&$ 2\pi(1-\alpha) $&$ -2\pi(1-\alpha)$&$ 0$ \\\hline
IV.7 &$ \ell>0,p_1= 0 $&$ 0 $&$ 0 $&$ 2\pi\alpha $&$ -2\pi\alpha $&$ 0$ \\\hline
IV.8 &$ \ell=0,p_1\;\!p_2\neq 0 $&$ 0 $&$ 2\pi\alpha $&$ 2\pi(1-2\alpha) $&$ -2\pi(1-\alpha) $&$ 0$ \\\hline
IV.9.a &$ \ell=0,p_1= 0 $&$ 0 $&$ 2\pi\alpha $&$ 0 $&$ -2\pi\alpha$&$ 0$ \\\hline
IV.9.b &$ \ell=0,p_2= 0 $&$ 0 $&$ 2\pi(1-\alpha) $&$ 0 $&$ -2\pi(1-\alpha)$&$ 0$ \\\hline
\end{tabular}
\end{center}
Finally, if $\dim[\ker(D)]=1$, $\alpha>1/2$ and $(p_1,p_2)\in \ker(D)$ one has:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
No & Conditions & $\#\sigma_p(H)$ & $\phi_1$ & $\phi_2$ & $\phi_3$ & $\sum_j\phi_j$
\\ \hline\hline
IV.10 &$ \ell<0,p_2\neq 0 $&$ 1 $&$ 0 $&$ -2\pi(1-\alpha) $&$ -2\pi\alpha $&$ -2\pi$ \\\hline
IV.11 &$ \ell<0,p_2=0 $&$ 1 $&$ 0 $&$ -2\pi\alpha $&$ -2\pi(1-\alpha) $&$ -2\pi$ \\\hline
IV.12 &$ \ell>0,p_2\neq 0 $&$ 0 $&$ 0 $&$ 2\pi\alpha $&$ -2\pi\alpha$&$ 0$ \\\hline
IV.13 &$ \ell>0,p_2= 0 $&$ 0 $&$ 0 $&$ 2\pi(1-\alpha) $&$ -2\pi(1-\alpha) $&$ 0$ \\\hline
IV.14 &$ \ell=0,p_1\;\!p_2\neq 0 $&$ 0 $&$ 2\pi(1-\alpha) $&$ -2\pi(1-2\alpha) $&$ -2\pi\alpha $&$ 0$ \\\hline
IV.15.a &$ \ell=0,p_1= 0 $&$ 0 $&$ 2\pi\alpha $&$ 0 $&$ -2\pi\alpha$&$ 0$ \\\hline
IV.15.b &$ \ell=0,p_2= 0 $&$ 0 $&$ 2\pi(1-\alpha) $&$ 0 $&$ -2\pi(1-\alpha)$&$ 0$ \\\hline
\end{tabular}
\end{center}
\section{$\boldsymbol{K}$-groups, $\boldsymbol{n}$-traces and their pairings}\label{secK}
In this section, we give a very short account of $K$-theory
for $C^*$-algebras and on various constructions related to it.
Our aim is not to present a thorough introduction to these subjects but
to recast the result obtained in the previous section in the most
suitable framework. For the first part, we refer to \cite{Roerdam} for an enjoyable
introduction to the subject.
\subsection{$K$-groups and boundary maps}
The $K_0$-group of a unital $C^*$-algebra $\mathcal{E}$ is constructed from
the homotopy classes
of projections in the set of square matrices with entries in $\mathcal{E}$.
Its addition is induced from the addition of two orthogonal projections:
if $p$ and $q$ are orthogonal projections, {\it i.e.}~$pq=0$, then also $p+q$ is a
projection. Thus, the sum of two homotopy classes $[p]_0+[q]_0$ is
defined as the class $[p \oplus q]_0$ of the block-diagonal sum. This new class
does not depend on the choice of the representatives $p$ and $q$.
$K_0(\mathcal{E})$ is defined as the Grothendieck group of
this set of homotopy classes of projections endowed with the mentioned addition.
In other words, the elements of the $K_0$-group are
given by formal differences:
$[p]_0-[q]_0$ is identified with $[p']_0-[q']_0$ if there exists a projection $r$
such that $[p]_0+[q']_0+[r]_0 = [p']_0+[q]_0+[r]_0$.
In the general non-unital case the
construction is a little bit more subtle.
The $K_1$-group of a $C^*$-algebra $\mathcal{E}$ is constructed from the homotopy classes
of unitaries in the set of square matrices with entries in the unitisation of $\mathcal{E}$.
Its addition is again defined by: $[u]_1+[v]_1 = [u\oplus v]_1$ as a block matrix on the
diagonal. The homotopy class of the added identity is the neutral element.
Now, let us consider three $C^*$-algebras $\mathcal{J},\mathcal{E}$ and $\mathcal{Q}$ such that
$\mathcal{J}$ is an ideal of $\mathcal{E}$ and $\mathcal{Q}$ is isomorphic to the quotient
$\mathcal{E}/\mathcal{J}$. Another way of saying this is that $\mathcal{J}$ and $\mathcal{Q}$ are the
left and right part of an exact sequence of $C^*$-algebras
\begin{equation}\label{shortexact}
0\to \mathcal{J}\stackrel{{\tt i }}{\to} \mathcal{E} \stackrel{{\tt q}}{\to} \mathcal{Q}\to 0,
\end{equation}
${\tt i }$ being an injective morphism and ${\tt q}$ a surjective morphism satisfying $\hbox{ker}\;\!
{\tt q} = \hbox{im}\;\! {\tt i }$.
There might not be any reasonable algebra morphism between $\mathcal{J}$
and $\mathcal{Q}$ but algebraic topology provides us with homomorphisms between their
$K$-groups: $\mathrm{ind}:K_1(\mathcal{Q})\to K_0(\mathcal{J})$ and $\exp:K_0(\mathcal{Q})\to
K_1(\mathcal{J})$, the index map and the exponential map. These maps are also referred to
as boundary maps.
For the sequel we shall be concerned only with the index map. It can be computed as follows: If $u$ is a unitary in $\mathcal{Q}$ then there exists a unitary
$w\in M_{2}(\mathcal{E})$ such that
${\tt q}(w)=\left( \begin{smallmatrix} u & 0 \\ 0 & u^*\end{smallmatrix}\right)$. It turns out that
$w\left(\begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\right)w^*$ lies in the unitisation of ${\tt i }\big(M_2(\mathcal{J})\big)$ so that
$\left(\left[w\left(\begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}
\right)w^*\right]_0
-\left[\left(\begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}
\right)\right]_0\right)$ defines an element of $K_0(\mathcal{J})$. $\mathrm{ind}([u]_1)$ is that element.
With a little luck there exists even a partial isometry $w\in \mathcal{E}$ such that ${\tt q}(w)=u$. Then $(1-w^* w)$ and
$(1-w w^*)$ are projections in $\mathcal{J}$ and we have the simpler formula
\begin{equation}\label{eqformula}
\mathrm{ind}[u]_1 = \big[1-w^* w\big]_0-\big[1-w w^*\big]_0\ .
\end{equation}
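As a standard illustration of \eqref{eqformula} (a textbook example, not specific to \cite{PR}): in the Toeplitz extension $0\to \mathcal K\big(\ell^2({\mathbb N})\big)\to \mathcal T\stackrel{{\tt q}}{\to} C(\mathbb{S}^1)\to 0$, the unilateral shift $w\in\mathcal T$ is an isometry with ${\tt q}(w)=u$, where $u(\theta)=e^{i\theta}$. Since $1-w^*w=0$ while $1-ww^*$ is the rank-one projection $P_0$ onto the first basis vector, formula \eqref{eqformula} gives
\[
\mathrm{ind}[u]_1 = [0]_0-[P_0]_0=-[P_0]_0\ ,
\]
an element of $K_0(\mathcal K)\cong {\mathbb Z}$ which pairs to $-1$ with the $0$-trace of Example \ref{exam1} below.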
\subsection{Cyclic cohomology, $n$-traces and Connes' pairing}\label{pourlesconstantes}
For this part, we refer to \cite[Sec.~III]{Connes} or to the short surveys presented in \cite[Sec.~5]{KRS} or in \cite[Sec.~4 \& 5]{KS}. For simplicity, we denote by ${\mathbb N}$ the set of natural numbers including $0$.
Given a complex algebra $\mathcal{B}$ and any $n \in {\mathbb N}$, let $C^n_\lambda(\mathcal{B})$ be the set of $(n+1)$-linear functionals on $\mathcal{B}$ which are cyclic in the sense that any $\eta \in C^n_\lambda(\mathcal{B})$ satisfies for each $w_0,\dots,w_n\in \mathcal{B}$:
\begin{equation*}
\eta(w_1,\dots,w_n,w_0)=(-1)^n \eta(w_0,\dots,w_n)\ .
\end{equation*}
Then, let ${\tt b}: C^n_\lambda(\mathcal{B}) \to C^{n+1}_\lambda(\mathcal{B})$ be the Hochschild coboundary map defined for $w_0,\dots,w_{n+1}\in \mathcal{B}$ by
\begin{equation*}
[{\tt b} \eta](w_0,\dots,w_{n+1}):=
\sum_{j=0}^n (-1)^j \eta(w_0,\dots,w_jw_{j+1},\dots ,w_{n+1}) +
(-1)^{n+1}\eta(w_{n+1}w_0,\dots,w_n)\ .
\end{equation*}
An element $\eta \in C^n_\lambda(\mathcal{B})$ satisfying ${\tt b}\eta=0$ is called a cyclic $n$-cocycle, and the cyclic cohomology $HC(\mathcal{B})$ of $\mathcal{B}$ is the cohomology of the complex
\begin{equation*}
0\to C^0_\lambda(\mathcal{B})\to \dots \to C^n_\lambda(\mathcal{B}) \stackrel{{\tt b}}{\to}
C^{n+1}_\lambda(\mathcal{B}) \to \dots \ .
\end{equation*}
A convenient way of looking at cyclic $n$-cocycles is in terms of characters of a graded
differential algebra over $\mathcal{B}$. So, let us first recall that a graded differential
algebra $(\mathcal{A},{\tt d})$ is a graded algebra $\mathcal{A}$ together with a map ${\tt d}:\mathcal{A}\to \mathcal{A}$ of degree $+1$.
More precisely, $\mathcal{A}:=\oplus_{j=0}^\infty \mathcal{A}_j$ with each $\mathcal{A}_j$ an algebra over ${\mathbb C}$
satisfying the property $\mathcal{A}_j \;\!\mathcal{A}_k \subset \mathcal{A}_{j+k}$, and ${\tt d}$ is a graded
derivation satisfying ${\tt d}^2=0$.
In particular, the derivation satisfies
${\tt d}(w_1w_2)=({\tt d} w_1)w_2 + (-1)^{\deg(w_1)}w_1 ({\tt d} w_2)$, where $\deg(w_1)$
denotes the degree of the homogeneous element $w_1$.
A cycle $(\mathcal{A},{\tt d},\int)$ of dimension $n$ is a graded differential algebra
$(\mathcal{A},{\tt d})$, with $\mathcal{A}_j=0$ for $j>n$, endowed with a linear functional
$\int : \mathcal{A} \to {\mathbb C}$ satisfying $\int {\tt d} w=0$ if $w \in \mathcal{A}_{n-1}$ and for
$w_j\in \mathcal{A}_j$, $w_k \in \mathcal{A}_k$ :
$$
\int w_j w_k = (-1)^{jk}\int w_k w_j\ .
$$
Given an algebra $\mathcal{B}$, a cycle of dimension $n$ over $\mathcal{B}$ is a
cycle $(\mathcal{A},{\tt d},\int)$ of dimension $n$ together with a homomorphism $\rho:\mathcal{B} \to \mathcal{A}_0$.
In the sequel, we will assume that this map is injective and hence identify $\mathcal{B}$ with a subalgebra of $\mathcal{A}_0$ (and do not write $\rho$ anymore).
Now, if $w_0, \dots, w_n$ are $n+1$ elements of $\mathcal{B}$, one can define the character $\eta(w_0,\dots,w_n) \in {\mathbb C}$ by the formula:
\begin{equation}\label{defeta}
\eta(w_0,\dots,w_n):=\int w_0\;\!({\tt d} w_1)\dots ({\tt d} w_n)\ .
\end{equation}
As shown in \cite[Prop.III.1.4]{Connes}, the map $\eta: \mathcal{B}^{n+1}\to {\mathbb C}$ is a cyclic $(n+1)$-linear functional on $\mathcal{B}$ satisfying
${\tt b}\eta=0$, {\it i.e.} $\eta$ is a cyclic $n$-cocycle.
Conversely, any cyclic $n$-cocycle arises as the character of a cycle of dimension $n$ over $\mathcal{B}$. Let us also mention that a third description of any cyclic $n$-cocycle is presented in \cite[Sec.~III.1.$\alpha$]{Connes} in terms of the universal differential algebra associated with $\mathcal{B}$.
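These abstract definitions can be tested on a concrete cycle: anticipating Example \ref{exam2} below, the character $\eta(w_0,w_1)=\int\mathop{\mathrm{tr}}[w_0(\theta)\;\!w_1'(\theta)]\;\!\mathrm{d}\theta$ on $C^1\big(\mathbb{S}^1,M_q({\mathbb C})\big)$ should be cyclic and satisfy ${\tt b}\eta=0$. The sketch below (illustrative only, assuming numpy) verifies both identities on random matrix-valued trigonometric polynomials.

\begin{verbatim}
# Illustrative check: the character eta(w0, w1) = int tr(w0 w1') d theta
# is cyclic and satisfies b eta = 0 (here n = 1, so
# (b eta)(w0,w1,w2) = eta(w0 w1, w2) - eta(w0, w1 w2) + eta(w2 w0, w1)).
import numpy as np

q, n = 3, 2048
theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
rng = np.random.default_rng(1)

def rand_loop():
    # w(theta) = A + B cos(theta) + C sin(theta), with exact derivative
    A, B, Cm = (rng.standard_normal((q, q)) for _ in range(3))
    c, s = np.cos(theta)[:, None, None], np.sin(theta)[:, None, None]
    return A[None] + B[None]*c + Cm[None]*s, -B[None]*s + Cm[None]*c

def eta(w0, dw1):
    # periodic Riemann sum, exact for trigonometric polynomials
    return np.einsum('tij,tji->t', w0, dw1).sum() * (2 * np.pi / n)

(w0, dw0), (w1, dw1), (w2, dw2) = rand_loop(), rand_loop(), rand_loop()
print(eta(w1, dw0) + eta(w0, dw1))                 # cyclicity: ~ 0
b_eta = (eta(w0 @ w1, dw2) - eta(w0, dw1 @ w2 + w1 @ dw2)
         + eta(w2 @ w0, dw1))
print(b_eta)                                       # b eta = 0: ~ 0
\end{verbatim}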
We can now introduce the precise definition of an $n$-trace over a Banach algebra. For an algebra $\mathcal{B}$ that is not necessarily unital, we denote by $\widetilde{\mathcal{B}}:= \mathcal{B} \oplus {\mathbb C}$ the algebra obtained by adding a unit to $\mathcal{B}$.
\begin{defin}\label{ntrace}
An $n$-trace on a Banach algebra $\mathcal{B}$ is the character of a cycle $(\mathcal{A},{\tt d},\int)$ of dimension $n$ over a dense subalgebra $\mathcal{B}'$ of $\mathcal{B}$ such that for all $w_1,\dots,w_n\in \mathcal{B}'$ and any $x_1,\dots,x_n \in \widetilde{\mathcal{B}}'$ there exists a constant $c= c(w_1,\dots,w_n)$ such that
$$
\left|\int (x_1 {\tt d} w_1)\dots (x_n {\tt d} w_n)\right| \leq c \|x_1\|\dots \|x_n\|\ .
$$
\end{defin}
\begin{rem}
Typically, the elements of $\mathcal{B}'$ are suitably smooth elements of $\mathcal{B}$
on which the derivation ${\tt d}$ is well defined and for which the r.h.s.~of \eqref{defeta}
is also well defined.
However, the action of the $n$-trace $\eta$ can sometimes be extended to
more general elements $(w_0, \dots,w_n) \in \mathcal{B}^{n+1}$ by a suitable reinterpretation
of the l.h.s.~of \eqref{defeta}.
\end{rem}
The importance of $n$-traces lies in their duality relation with $K$-groups.
Recall first that $M_q(\mathcal{B})\cong M_q({\mathbb C})\otimes \mathcal{B}$ and that $\mathop{\mathrm{tr}}$ denotes the standard
trace on matrices. Now, let $\mathcal{B}$ be a $C^*$-algebra and let $\eta_n$ be a $n$-trace
on $\mathcal{B}$ with $n \in {\mathbb N}$ even.
If $\mathcal{B}'$ is the dense subalgebra of $\mathcal{B}$ mentioned in Definition \ref{ntrace} and if
$p$ is a projection in $M_q(\mathcal{B}')$, then one sets
\begin{equation*}
\langle \eta_n,p\rangle := c_n\;\![\mathop{\mathrm{tr}}\otimes \eta_n](p,\dots,p) .
\end{equation*}
Similarly, if $\mathcal{B}$ is a unital $C^*$-algebra and if $\eta_n$ is a $n$-trace with $n \in {\mathbb N}$ odd,
then for any unitary $u$ in $M_q(\mathcal{B}')$ one sets
\begin{equation*}
\langle \eta_n,u\rangle := c_n \;\![\mathop{\mathrm{tr}}\otimes \eta_n](u^*,u,u^*,\dots,u)
\end{equation*}
the entries on the r.h.s.~alternating between $u$ and $u^*$. The constants $c_n$ are given by
$$
c_{2k}\;=\;\frac{1}{(2\pi i)^k}\,\frac{1}{k!}\ ,
\qquad
c_{2k+1}\;=\;\frac{1}{(2\pi i)^{k+1}}\,\frac{1}{2^{2k+1}}\,
\frac{1}{(k+\frac{1}{2})(k-\frac{1}{2})\cdots\frac{1}{2}}\ .
$$
These relations are referred to as Connes' pairing between $K$-theory and cyclic cohomology
of $\mathcal{B}$ because of the following property, see \cite[Thm.~2.7]{Connes86} for a precise
statement and for its proof:
In the above framework, the values $\langle \eta_n,p\rangle$ and $\langle \eta_n,u\rangle$
depend only of the $K_0$-class $[p]_0$ of $p$ and of the $K_1$-class $[u]_1$ of $u$,
respectively.
We now illustrate these notions with two basic examples which will be of importance
in the sequel.
\begin{example}\label{exam1}
If $\mathcal{B}=\mathcal K(\mathcal{H})$, the algebra of compact operators on a Hilbert space $\mathcal{H}$,
then the linear functional $\int$ on $\mathcal{B}$ is given by the usual trace $\mathrm{Tr}$
on the set $\mathcal K_1$ of trace class elements of $\mathcal K(\mathcal{H})$. Furthermore, since any projection
$p\in \mathcal K(\mathcal{H})$ is trace class, it follows that $\langle \eta_0,p\rangle\equiv
\langle \mathrm{Tr},p\rangle$ is well defined
for any such $p$ and that this expression gives the dimension of the projection $p$.
\end{example}
For the next example, let us recall that $\mathrm{det}$ denotes the usual determinant of elements of $M_q({\mathbb C})$.
\begin{example}\label{exam2}
If $\mathcal{B}=C\big(\mathbb{S}^1, M_q({\mathbb C})\big)$ for some $q\geq 1$,
let us fix $\mathcal{B}':=C^1\big(\mathbb{S}^1,M_q({\mathbb C})\big)$. We parameterize $\mathbb{S}^1$ by the real numbers modulo $2\pi$ using $\theta$ as local coordinate.
As usual, for any $w \in \mathcal{B}'$
(which corresponds to a homogeneous element of degree $0$), one sets
$[{\tt d} w](\theta):=w'(\theta)\;\!\mathrm{d} \theta$ (which is now a homogeneous element of degree $1$).
Furthermore, we define the graded trace $\int v \;\!\mathrm{d} \theta :=\int_{-\pi}^{\pi} \mathop{\mathrm{tr}}[v(\theta)]\;\!\mathrm{d} \theta$
for an arbitrary element $v \;\!\mathrm{d} \theta$ of degree $1$. This defines the $1$-trace $\eta_1$.
A unitary element $u\in C^1\big(\mathbb{S}^1,M_q({\mathbb C})\big)$ (or rather its class) pairs as follows:
\begin{equation}\label{wn1}
\langle \eta_1,u \rangle = c_1[\mathop{\mathrm{tr}}\otimes\eta_1](u^*,u) := \frac{1}{2\pi i}
\;\!\int_{-\pi}^\pi \mathop{\mathrm{tr}}[u(\theta)^*\;\!u'(\theta)]\;\! \mathrm{d} \theta\ .
\end{equation}
For this example, the extension of this expression for any unitary $u \in
C\big(\mathbb{S}^1,M_q({\mathbb C})\big)$
is quite straightforward. Indeed, let us first rewrite $u=:e^{i\varphi}$ for some
$\varphi \in C^1\big(\mathbb{S}^1,M_q({\mathbb R})\big)$ and set $\beta(\theta):=\mathrm{det}[u(\theta)]$.
By using the equality $\mathrm{det}[e^{i\varphi}]=e^{i\mathop{\mathrm{tr}}[\varphi]}$, one then easily observes that
the quantity \eqref{wn1} is equal to
\begin{equation*}
\frac{1}{2\pi i}\int_{-\pi}^\pi \beta(\theta)^*\;\!\beta'(\theta)\;\!\mathrm{d} \theta\ .
\end{equation*}
But this quantity is known to be equal to the winding number of the map
$\beta:\mathbb{S}^1 \to {\mathbb T}$, a quantity which is of topological nature and which only requires
that the map $\beta$ is continuous.
Altogether, one has thus obtained that the l.h.s.~of \eqref{wn1} is nothing but
the winding number of the map $\mathrm{det}[u]: \mathbb{S}^1 \to {\mathbb T}$, valid for any unitary $u \in
C\big(\mathbb{S}^1,M_q({\mathbb C})\big)$.
\end{example}
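The pairing \eqref{wn1} is also easy to evaluate numerically. The following minimal sketch (an illustration only, not part of the construction above; the sample loop $u(\theta)=\mathrm{diag}(e^{3i\theta},1)$ is an arbitrary choice, and nothing beyond standard NumPy is assumed) discretizes the integral and recovers the winding number of $\mathrm{det}[u]$:
\begin{verbatim}
import numpy as np

# Discretize <eta_1, u> = (1/2 pi i) \int tr[u(t)* u'(t)] dt over [-pi, pi).
M = 20000
theta = np.linspace(-np.pi, np.pi, M, endpoint=False)
dtheta = 2 * np.pi / M

def u(t):
    # Sample unitary loop in C(S^1, M_2(C)); det[u] winds 3 times.
    return np.array([[np.exp(3j * t), 0.0], [0.0, 1.0]])

U = np.array([u(t) for t in theta])                # shape (M, 2, 2)
dU = (np.roll(U, -1, axis=0) - U) / dtheta         # periodic forward difference
Udag = U.conj().transpose(0, 2, 1)
integrand = np.einsum('mij,mji->m', Udag, dU)      # tr[u* u'] at each theta
pairing = integrand.sum() * dtheta / (2j * np.pi)
print(pairing.real)                                # approximately 3
\end{verbatim}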
\subsection{Dual boundary maps}
We have seen that an $n$-trace $\eta$ over $\mathcal{B}$ gives rise to a functional on $K_i(\mathcal{B})$ for $i=0$ or $i=1$, {\it i.e.}~the map $ \langle \eta,\cdot \rangle$ is an element of $Hom(K_i(\mathcal{B}),{\mathbb C})$. In that sense $n$-traces are dual to the elements of the (complexified) $K$-groups. An important question is whether this duality is functorial in the sense that morphisms between the $K$-groups of different algebras yield dual morphisms on higher traces. Here we are in particular interested in a map on higher traces which is dual to the index map, {\it i.e.}~a map $\#$ which assigns to an even trace $\eta$ an odd trace $\#\eta$ such that
\begin{equation}\label{comp}
\langle \eta,\mathrm{ind} (\cdot) \rangle = \langle \#\eta,\cdot \rangle.
\end{equation}
This situation gives rise to
equalities between two numerical topological invariants.
Such an approach for relating two topological invariants has already been used on a few occasions. For example, it has been recently shown that Levinson's theorem corresponds to an equality of the form \eqref{comp} for a $0$-trace and a $1$-trace \cite{KR1/2}.
In Section \ref{Sechigh} we shall develop such an equality for a $2$-trace and a $3$-trace.
On the other hand, let us mention that similar equalities have also been developed for the exponential map in \eqref{comp} instead of the index map. In this framework, an equality involving a $0$-trace and a $1$-trace has been exhibited in \cite{Kel}. It gives rise to a relation
between the pressure on the boundary of a quantum system and the integrated density of states.
Similarly, a relation between a $2$-trace and a $1$-trace was used in the proof of the equality between the bulk Hall conductivity and the conductivity of the current along the edge of the sample, see \cite{KRS,KS}.
\section{Non-commutative topology and topological Levinson's theorems}\label{secAlgebra}
In this section we introduce the algebraic framework suitable for the Aharonov-Bohm model.
In fact, the following algebras were already introduced in
\cite{KRx} for the study of the wave operators in potential scattering on ${\mathbb R}$.
The similar form of the wave operators in the Aharonov-Bohm model and in the model studied
in that reference allows us to reuse part of the construction and the abstract results.
Let us stress that the following construction holds for fixed $\alpha$ and $(C,D)$.
These parameters will vary only at the end of the section.
\subsection{The algebraic framework}\label{lesalgebres}
For the construction of the $C^*$-algebras, let us introduce the operator
$B:=\frac{1}{2}\ln(H_0)$, where $H_0=-\Delta$ is the usual Laplace operator
on $\mathbb{R}^2$. The crucial property of the operators $A$ and $B$ is that they satisfy the canonical commutation
relation $[A,B]=i$ so that $A$ generates translations in $B$ and vice versa,
\begin{equation*}
e^{iBt} A e^{-iBt} = A+t, \quad e^{iAs} B e^{-iAs} = B-s.
\end{equation*}
Furthermore, both operators leave the subspaces $\mathcal{H}_m$ invariant.
More precisely, for any essentially bounded functions $\varphi$ and $\eta$ on ${\mathbb R}$,
the operator $\varphi(A)\eta(B)$ leaves each of these subspaces invariant.
Since all the interesting features of the Aharonov-Bohm model take place in the subspace
$\mathcal{H}_\mathfrak{int}\cong L^2({\mathbb R}_+,r\;\!\mathrm{d} r)\otimes {\mathbb C}^2$, we shall subsequently restrict our
attention to this subspace and consider functions $\varphi, \eta$ defined on ${\mathbb R}$
and taking values in $M_2({\mathbb C})$.
Now, let $\mathcal{E}$ be the closure in $\mathcal{B}(\mathcal{H}_\mathfrak{int})$ of the algebra generated by elements of the form
$\varphi(A)\psi(H_0)$, where $\varphi$ is a continuous function on ${\mathbb R}$ with values in
$M_2({\mathbb C})$ which converges at $\pm \infty$, and $\psi$ is a continuous function on ${\mathbb R}_+$
with values in $M_2({\mathbb C})$ which converges at $0$ and at $+\infty$.
Stated differently, $\varphi\in C\big(\overline{{\mathbb R}},M_2({\mathbb C})\big)$ with $\overline{{\mathbb R}}=[-\infty,+\infty]$,
and $\psi \in C\big(\overline{{\mathbb R}_+},M_2({\mathbb C})\big)$ with $\overline{{\mathbb R}_+}=[0,+\infty]$.
Let $\mathcal{J}$ be the norm closed algebra generated by $\varphi(A)\psi(H_0)$ with functions
$\varphi$ and $\psi$ for which the above limits vanish. Obviously, $\mathcal{J}$ is an ideal in $\mathcal{E}$,
and the same algebras are obtained if $\psi(H_0)$ is replaced by $\eta(B)$ with
$\eta \in C\big(\overline{{\mathbb R}},M_2({\mathbb C})\big)$ or $\eta \in C_0\big({\mathbb R},M_2({\mathbb C})\big)$, respectively.
Furthermore, the ideal $\mathcal{J}$ is equal to the algebra of compact operators $\mathcal K(\mathcal{H}_\mathfrak{int})$,
as shown in \cite[Sec.~4]{KRx}.
Let us already mention the reason of our interest in defining the above algebra $\mathcal{E}$.
Since for $m \in \{0,-1\}$ the functions $\varphi^-_m$ and $\tilde{\varphi}_{m}$ have limits
at $\pm \infty$, and since $\widetilde{S}^{C\!D}_\alpha$ also has limits at $0$ and $+\infty$,
it follows from \eqref{yoyo} that the operator $\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}$ belongs to $\mathcal{E}$.
Since $\mathcal{J} = \mathcal K(\mathcal{H}_\mathfrak{int})$, the image ${\tt q}\big(\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}\big)$ in $\mathcal{E}/\mathcal{J}$
corresponds to the image of $\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}$ in the Calkin algebra.
This motivates the following computation of the quotient $\mathcal{E}/\mathcal{J}$.
To describe the quotient $\mathcal{E}/\mathcal{J}$ we consider the square
$\blacksquare:=\overline{{\mathbb R}_+}\times \overline{{\mathbb R}}$ whose boundary $\square$ is the
union of four parts: $\square =B_1\cup B_2\cup B_3\cup
B_4$, with $B_1 = \{0\}\times \overline{{\mathbb R}}$, $B_2 = \overline{{\mathbb R}_+} \times \{+\infty\}$,
$B_3 = \{+\infty\}\times \overline{{\mathbb R}}$ and $B_4 = \overline{{\mathbb R}_+}\times \{-\infty\}$. We
can then view $\mathcal{Q}:=C\big(\square,M_2({\mathbb C})\big)$ as the subalgebra
of
\begin{equation*}
C\big(\overline{{\mathbb R}},M_2({\mathbb C})\big)\oplus C\big(\overline{{\mathbb R}_+},M_2({\mathbb C})\big)
\oplus C\big(\overline{{\mathbb R}},M_2({\mathbb C})\big)\oplus C\big(\overline{{\mathbb R}_+},M_2({\mathbb C})\big)
\end{equation*}
given by elements
$(\Gamma_1,\Gamma_2,\Gamma_3,\Gamma_4)$ which coincide at the
corresponding end points, that is,
$\Gamma_1(+\infty)=\Gamma_2(0)$,
$\Gamma_2(+\infty)=\Gamma_3(+\infty)$,
$\Gamma_3(-\infty)=\Gamma_4(+\infty)$,
$\Gamma_4(0)=\Gamma_1(-\infty)$.
The following lemma corresponds to results obtained in
\cite[Sec.~3.5]{Georgescu} rewritten in our framework.
\begin{lemma}\label{image}
$\mathcal{E}/\mathcal{J}$ is isomorphic to $\mathcal{Q}$.
Furthermore, for any $\varphi\in C\big(\overline{{\mathbb R}},M_2({\mathbb C})\big)$ and for any
$\psi \in C\big(\overline{{\mathbb R}_+},M_2({\mathbb C})\big)$, the image of $\varphi(A)\psi(H_0)$
through the quotient map ${\tt q}: \mathcal{E} \to \mathcal{Q}$ is given by
$\Gamma_1(\cdot) = \varphi(\cdot)\psi(0)$, $\Gamma_{2}(\cdot) =
\varphi(+\infty)\psi(\cdot)$,
$\Gamma_{3}(\cdot) = \varphi(\cdot)\psi(+\infty)$ and $\Gamma_{4}(\cdot) =
\varphi(-\infty)\psi(\cdot)$.
\end{lemma}
Stated differently, the algebras $\mathcal{J}, \mathcal{E}$ and $\mathcal{Q}$ are part of the short exact sequence of $C^*$-algebras \eqref{shortexact}.
And as already mentioned, the operator $\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}$ clearly belongs to $\mathcal{E}$. Furthermore, its image through the quotient map ${\tt q}$ can be easily computed, and in fact has already been computed. Indeed, the function $\Gamma(C,D,\alpha,\cdot)$ presented in \eqref{fctGamma} is precisely ${\tt q}\big(\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}\big)$, as we shall see in the following section.
\begin{rem}\label{remautre}
We still would like to provide an alternative description of the above algebras and of the corresponding short exact sequence.
Recall that
$\mathcal{Q}$ is isomorphic to $C\big({\mathbb T},M_2({\mathbb C})\big)$ and that $C({\mathbb T})$ is, as a
$C^*$-algebra, generated by a single
continuous bijective function $u:{\mathbb T}\to {\mathbb T}\subset{\mathbb C}$ with winding number $1$.
There are, up to a natural equivalence, not so many $C^*$-algebras
$\mathcal{B}$ which fit into an exact sequence of the form
$0\to \mathcal K\stackrel{}\to\mathcal{B}\stackrel{}\to C({\mathbb T})\to 0$, with $\mathcal K$ the algebra of compact operators.
In fact, it turns out that they are classified by the Fredholm-index of a
lift $\hat u$ of $u$ \cite[Thm.~IX.3.3]{Davidson}.
In the present case we can use an exactly solvable model to find out
that $\hat u$ can be taken to be an isometry of co-rank $1$ and hence
this index is $-1$ \cite{KRx}. Our extension is thus what one refers
to as the Toeplitz extension.
This means that $\mathcal{E}$ is the tensor product of $M_2({\mathbb C})$ with
the $C^*$-algebra generated by an element $\hat u$
satisfying $\hat u^*\hat u = 1$ and $\hat u\hat u^* = 1 - e_{00}$
where $e_{00}$ is a rank $1$ projection. The surjection ${\tt q}$ is
uniquely defined by ${\tt q}(\hat u) = u$. Our exact sequence is thus the tensor
product with $M_2({\mathbb C})$ of the exact sequence
\begin{equation}\label{eqautre}
0 \to \mathcal K \stackrel{{\tt i }}\to C^*(\hat u) \stackrel{{\tt q} }{\to}C^*(u) \to 0.
\end{equation}
\end{rem}
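The defining relations of the Toeplitz generator can be made concrete in a finite truncation. The sketch below (an illustration only; the truncation size is arbitrary, and the defect of the isometry relation at the last basis vector is a pure finite-size artifact) realizes $\hat u$ as the unilateral shift:
\begin{verbatim}
import numpy as np

# Truncated unilateral shift: u_hat e_k = e_{k+1}.
N = 6
u_hat = np.zeros((N, N))
for k in range(N - 1):
    u_hat[k + 1, k] = 1.0

e00 = np.zeros((N, N))
e00[0, 0] = 1.0

# u_hat u_hat* = 1 - e00 holds exactly in the truncation:
print(np.allclose(u_hat @ u_hat.T, np.eye(N) - e00))            # True
# u_hat* u_hat = 1 holds on the first N-1 basis vectors; the
# missing corner is the finite-truncation artifact:
print(np.allclose((u_hat.T @ u_hat)[:N-1, :N-1], np.eye(N-1)))  # True
\end{verbatim}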
\subsection{The $0$-degree Levinson's theorem, the topological approach}
We can now state the topological version of Levinson's theorem.
\begin{theorem}\label{Ktheo}
For each $\alpha \in (0,1)$ and each admissible pair $(C,D)$, one has
$\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}\in \mathcal{E}$. Furthermore,
${\tt q}\big(\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}\big)=\Gamma(C,D,\alpha,\cdot) \in \mathcal{Q}$
and the following equality holds
\begin{equation*}
\mathrm{ind}[\Gamma(C,D,\alpha,\cdot)]_1 = -[P^{C\!D}_\alpha]_0\ ,
\end{equation*}
where $P^{C\!D}_\alpha$ is the orthogonal projection on the space spanned by the bound states of $H^{C\!D}_\alpha$.
\end{theorem}
\begin{rem}
Recall that by Atkinson's theorem the image of
any Fredholm operator $F\in\mathcal{B}(\mathcal{H}_\mathfrak{int})$ in the Calkin algebra $\mathcal{B}(\mathcal{H}_\mathfrak{int})/ \mathcal K(\mathcal{H}_\mathfrak{int})$
is invertible. Then, since the wave operator $\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}$ is an isometry
and a Fredholm operator,
it follows that each function $\Gamma_j(C,D,\alpha,\cdot)$ takes values in $U(2)$.
In fact, this was already mentioned when the functions $\Gamma_j(C,D,\alpha,\cdot)$ were introduced.
\end{rem}
\begin{proof}[Proof of Theorem~\ref{Ktheo}]
The image of $\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}$ through the quotient map ${\tt q}$ is easily obtained by taking
the formulae recalled in Lemma \ref{image} into account.
Then, since $\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}$ is a lift for $\Gamma(C,D,\alpha,\cdot)$,
the image of $[\Gamma(C,D,\alpha,\cdot)]_1$ through the index map is obtained by the formula \eqref{eqformula}:
\begin{eqnarray*}
\mathrm{ind}[\Gamma(C,D,\alpha,\cdot)]_1 &=& \big[1-\big(\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}\big)^* \;\!\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}\big]_0-\big[1-\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}\;\! \big(\Omega_-^{C\!D}|_{\mathcal{H}_\mathfrak{int}}\big)^*\big]_0\\
&=& [0]_0-\big[P^{C\!D}_\alpha\big]_0\ .
\end{eqnarray*}
\end{proof}
Theorem~\ref{Ktheo} covers the $K$-theoretic part of Levinson's theorem. In order to get a genuine Levinson's theorem, by which we mean an equality between topological numbers, we need to add the dual description, {\it i.e.}~identify higher traces on $\mathcal{J}$ and $\mathcal{Q}$ and a dual boundary map. As a matter of fact, the algebras considered so far are too simple to allow for non-trivial results in higher degree
and so we must content ourselves here with identifying a suitable $0$-trace and $1$-trace which can be
applied to $P^{C\!D}_\alpha$ and $\Gamma(C,D,\alpha,\cdot)$, respectively.
Clearly, only the usual trace $\mathrm{Tr}$ can be applied to the former term, {\it cf.}~Example \ref{exam1}
of Section \ref{secK}.
On the other hand, since $\Gamma(C,D,\alpha,\cdot) \in C\big(\square,U(2)\big)$,
we can define the winding number $\mathrm{wind}\big[\Gamma(C,D,\alpha,\cdot)\big]$ of the map
\begin{equation*}
\square \ni \zeta \mapsto \mathrm{det} [\Gamma(C,D,\alpha,\zeta)]\in {\mathbb T}
\end{equation*}
with orientation of $\square$ chosen clockwise, {\it cf.}~Example \ref{exam2}
of Section \ref{secK}.
Then, the already stated Theorem \ref{Lev0} essentially reformulates the fact that the $0$-trace is mapped to the $1$-trace
by the dual of the index map. The first equality of Theorem \ref{Lev0} can then be found in Proposition 7 of \cite{KRx} and the equality between the cardinality of $\sigma_p(H_\alpha^{C\!D})$ and the number of
negative eigenvalues of the matrix $CD^*$ has been shown in \cite[Lem.~4]{PR}.
\subsection{Higher degree Levinson's theorem}\label{Sechigh}
The previous theorem is a pointwise $0$-degree Levinson's theorem. More precisely, it was obtained for fixed $C,D$ and $\alpha$. However, it clearly calls for making these parameters
degrees of freedom and thus to include them into the description of the algebras. In the context of our physical model this amounts to considering
families of self-adjoint extensions of $H_\alpha$.
For that purpose we use the one-to-one parametrization of these extensions with elements $U \in U(2)$
introduced in Remark \ref{1to1}. We denote the self-adjoint extension corresponding to $U \in U(2)$ by $H_\alpha^U$.
So, let us consider a smooth and compact orientable $n$-dimensional manifold $X$ without boundary.
Subsequently, we will choose for $X$ a two-dimensional submanifold of $U(2)\times (0,1)$.
Taking continuous functions over $X$ we get a new short exact sequence
\begin{equation}\label{degresup}
0 \to C(X,\mathcal{J})\stackrel{}\to C(X,\mathcal{E}) \stackrel{}\to C(X,\mathcal{Q})\to 0\ .
\end{equation}
Furthermore, recall that $\mathcal{J}$ is endowed with a $0$-trace and the algebra $\mathcal{Q}$ with
a $1$-trace.
There is a standard construction in cyclic cohomology, the cup
product, which provides us with
a suitable $n$-trace on the algebra $C(X,\mathcal{J})$ and a corresponding $n+1$-trace
on the algebra $C(X,\mathcal{Q})$, see \cite[Sec.~III.1.$\alpha$]{Connes}. We describe it here in terms of cycles.
Recall that any smooth and compact manifold $Y$ of dimension $d$
naturally defines a structure of a graded differential algebra $(\mathcal{A}_Y,{\tt d}_Y)$,
the algebra of its smooth differential $k$-forms.
If we assume in addition that $Y$ is orientable so that we can choose
a global volume form,
then the linear form $\int_Y$ can be defined by integrating the
$d$-forms over $Y$.
In that case, the algebra $C(Y)$ is naturally endowed with the $d$-trace defined
by the character of the cycle $(\mathcal{A}_Y,{\tt d}_Y,\int_Y)$ of dimension $d$
over the dense subalgebra $C^\infty(Y)$.
For the algebra $C(X,\mathcal{J})$, let us recall that $\mathcal{J}$ is equal to the
algebra $\mathcal K(\mathcal{H}_\mathfrak{int})$
and that the $0$-trace on $\mathcal{J}$ was simply the usual trace $\mathrm{Tr}$.
So, let $\mathcal K_1$ denote the trace class elements of $\mathcal K(\mathcal{H}_\mathfrak{int})$.
Then, the natural graded differential algebra associated with
$C^\infty(X,\mathcal K_1)$ is given by $(\mathcal{A}_X\otimes \mathcal K_1,{\tt d}_X)$.
The resulting $n$-trace on $C(X,\mathcal{J})$ is then defined by the character
of the cycle $(\mathcal{A}_X\otimes \mathcal K_1,{\tt d}_X,\int_X\otimes \mathrm{Tr})$ over the dense
subalgebra $C^\infty(X,\mathcal K_1)$ of $C(X,\mathcal{J})$. We denote it by $\eta_X$.
For the algebra $C(X,\mathcal{Q})$, let us recall that $\mathcal{Q} =C\big(
\square,M_2({\mathbb C})\big)$
with $ \square \cong \mathbb{S}^1$, and thus
$C(X,\mathcal{Q}) \cong C\big(X\times\mathbb{S}^1,M_2({\mathbb C})\big) \cong C(X \times \mathbb{S}^1)
\otimes M_2({\mathbb C})$.
Since $X\times \mathbb{S}^1$ is a compact orientable manifold without
boundary, the above construction
applies also to $C\big(X\times\mathbb{S}^1,M_2({\mathbb C})\big)$. More precisely,
the exterior derivation on $X\times \mathbb{S}^1$ is the sum of ${\tt d}_X$ and
${\tt d}_{\mathbb{S}^1}$ (the latter was denoted simply by ${\tt d}$ in Example
\ref{exam2}). Furthermore, we consider the natural volume form on $X\times \mathbb{S}^1$.
Note that because of the factor $M_2({\mathbb C})$, the graded trace of the cycle
involves the usual matrix trace $\mathop{\mathrm{tr}}$.
Thus the resulting $n+1$-trace is the character of the cycle
$(\mathcal{A}_{X\times \mathbb{S}^1}\otimes M_2({\mathbb C}),{\tt d}_X+{\tt d}_{\mathbb{S}^1},\int_{X\times
\mathbb{S}^1}\otimes \mathop{\mathrm{tr}})$. We denote it by $\#\eta_X$.
Having these constructions at our disposal we can now state the main result of this section.
For the statement, we use the one-to-one parametrization of the extensions of $H_\alpha$
introduced in Remark \ref{1to1} and let $\alpha\in(0,1)$.
We consider a family $\{\Omega_-(H_\alpha^U,H_0)\}_{(U,\alpha)\in X} \in \mathcal{B}(\mathcal{H}_\mathfrak{int})$,
parameterized by some compact orientable and boundaryless submanifold $X$ of $U(2)\times (0,1)$. This family defines a map ${\bf \Omega}:X\to \mathcal{E}$,
${\bf \Omega}(U,\alpha) = \Omega_-(H_\alpha^U,H_0)$, a map ${\bf \Gamma}:X\to \mathcal{Q}$,
${\bf \Gamma}(U,\alpha,\cdot) = \Gamma\big(C(U),D(U),\alpha,\cdot\big)= {\tt q}(\Omega_-(H_\alpha^U,H_0))$, and
a map ${\bf P}:X\to \mathcal{J}$,
${\bf P}(U,\alpha) = P_\alpha^U$, the orthogonal
projection onto the subspace of $\mathcal{H}_\mathfrak{int}$ spanned by the bound states of $H_\alpha^U$.
\begin{theorem}\label{thm-ENN}
Let $X$ be a smooth, compact and orientable $n$-dimensional submanifold of $U(2)\times (0,1)$ without boundary.
Let us assume that the map ${\bf \Omega}:X\to \mathcal{E}$
is continuous. Then the following equality holds:
\begin{equation*}
\mathrm{ind}[{\bf \Gamma}]_1 = -[{\bf P}]_0
\end{equation*}
where $\mathrm{ind}$ is the index map from $K_1\big(C(X,\mathcal{Q})\big)$ to $K_0\big(C(X,\mathcal{J})\big)$.
Furthermore, the numerical equality
\begin{equation}\label{eq23}
\big\langle \#\eta_{X},[{\bf \Gamma}]_1 \big\rangle
=-
\big \langle \eta_{X},[{\bf P}]_0 \big\rangle
\end{equation}
also holds.
\end{theorem}
\begin{proof}
For the first equality we can simply repeat pointwise the proof of Theorem~\ref{Ktheo}. Since we required ${\bf \Omega}$ to be continuous, its kernel projection ${\bf P}$ is continuous as well.
The second equality follows from a more general formula stating that the map $\eta_X\mapsto \#\eta_X$ is dual to the boundary maps \cite{ENN88}. We also mention that another proof can be obtained by mimicking the calculation given in the Appendix of \cite{KRS}. For the convenience of the reader, we
sketch it in the Appendix~\ref{appa} and refer to \cite{KRS} for details.
\end{proof}
Let us point out that the r.h.s.~of \eqref{eq23} is the Chern number of the vector bundle given by the eigenstates of $H_\alpha^U$. The next section is devoted to a computation of this number for a special choice of manifold $X$.
\subsection{An example of a non trivial Chern number}
We shall now choose a $2$-dimensional manifold $X$ and show that the above relation between the corresponding $2$-trace and $3$-trace is not trivial. More precisely, we shall choose a manifold $X$ such that the r.h.s. of \eqref{eq23} is not equal to $0$.
For that purpose, let us fix two complex numbers $\lambda_1,\lambda_2$ of modulus $1$
with $\Im \lambda_1<0< \Im \lambda_2$ and consider the set $X \subset U(2)$ defined by:
\begin{equation*}
X = \left\{ V
\left(\begin{smallmatrix}
\lambda_1 & 0 \\
0 & \lambda_2
\end{smallmatrix}\right)
V^* \mid V\in U(2)\right\}.
\end{equation*}
Clearly, $X$ is a two-dimensional smooth and compact manifold without boundary, which can be parameterized by
\begin{equation}\label{param}
X= \left\{
\left(\begin{matrix}
\rho^2 \lambda_1 + (1-\rho^2) \lambda_2 &
\rho(1-\rho^2)^{1/2} \;\!e^{i\phi}(\lambda_1-\lambda_2) \\
\rho(1-\rho^2)^{1/2} \;\!e^{-i\phi}(\lambda_1-\lambda_2) &
(1-\rho^2) \lambda_1 + \rho^2 \lambda_2
\end{matrix}\right)
\mid \rho \in [0,1] \hbox{ and } \phi \in [0,2\pi)\right\}.
\end{equation}
Note that the $(\rho,\phi)$-parametrization of $X$ is complete in the sense that it covers the whole manifold injectively away from a subset of codimension $1$,
but it has coordinate singularities at $\rho\in \{0,1\}$.
By \cite[Lem.~15]{PR}, for each $U\equiv U(\rho,\phi)\in X$ the operator $H^U_\alpha$ has a single negative eigenvalue $z\equiv z(U)$
defined by the equality $\mathrm{det} \big( (1+U) M(z) +i (1-U)\big)=0$, and one has
\begin{equation}\label{eq-khu}
\ker (H^U_\alpha -z)=\gamma(z) \ker \big( (1+U) M(z) +i (1-U)\big).
\end{equation}
Here, $M(z)$ is the Weyl function which is a $2\times 2$ diagonal matrix
and $\gamma(z):{\mathbb C}^2\to \mathcal{H}$ an injective linear map (see subsection \ref{ssec21}).
The orthogonal projection onto $\ker (H^U_\alpha -z)$ is denoted by $P^U_\alpha$ and we shall consider $E=\{\mathop{\mathrm{im}} P^U_\alpha\mid U \in X\}$ which is a subbundle of the trivial bundle $X\times \mathcal{H}$.
Our next aim is to calculate its Chern number $\mathrm{ch}(E)$, first in terms of the Chern number of a simpler bundle. In view of \eqref{eq-khu}, the map $X\times{\mathbb C}^2 \ni (U,\xi) \mapsto (U,\gamma(z(U))\xi)\in X\times\mathcal{H}$ defines a continuous isomorphism between $E$ and the subbundle $F$ of the trivial bundle $X\times \mathbb{C}^2$ defined by
\begin{equation*}
F=\big\{\ker \big( (1+U) M(z(U)) +i (1-U)\big)\mid U \in X\big\}\ ,
\end{equation*}
and hence
$\mathrm{ch}(E)=\mathrm{ch}(F)$.
Now, the assumptions on $\lambda_1$ and $\lambda_2$ imply that for any $U \in X$ the matrix $(1-U)$ is invertible and one can then consider the self-adjoint operator
\[
T(U)=i\,\dfrac{1-U}{1+U}\ .
\]
By setting $\lambda_j =: e^{i\varphi_j}$ with $\varphi_1 \in (-\pi,0)$ and $\varphi_2 \in (0,\pi)$, and then $r_j = \tan \frac{\varphi_j}{2}$, we get
\[ T(U)
= \left(\begin{matrix}
\rho^2r_1 +(1-\rho^2)r_2 & \rho (1-\rho^2)^{1/2}e^{i\phi}(r_1-r_2) \\ \rho (1-\rho^2)^{1/2}e^{-i\phi}(r_1-r_2) &
(1-\rho^2)r_1 +\rho^2 r_2
\end{matrix}\right)
\]
for some $\rho \in [0,1]$ and $\phi \in [0,2\pi)$ given by \eqref{param}.
Thus, by using the parametrization of $U$ and $z$ in terms of $(\rho,\phi)$ one obtains that the bundle $E$ is
isomorphic to the bundle $G$ defined by
\begin{equation*}
G=\big\{\ker \big( G(\rho,\phi)\big)
\mid \rho \in [0,1] \hbox{ and } \phi \in [0,2\pi)\big\}
\end{equation*}
with
\[G(\rho,\phi)
:= \left(\begin{matrix}
M_{11}\big(z(\rho,\phi)\big)+\rho^2r_1 +(1-\rho^2)r_2 & \rho (1-\rho^2)^{1/2}e^{i\phi}(r_1-r_2) \\ \rho (1-\rho^2)^{1/2}e^{-i\phi}(r_1-r_2) &
M_{22}\left(z(\rho,\phi)\right)+ (1-\rho^2)r_1 +\rho^2 r_2
\end{matrix}\right)\ .
\]
Recall that $z(\rho,\phi)$ is defined by the condition $\mathrm{det} \big(G(\rho,\phi)\big)=0$, {\it i.e.}
\begin{equation}\label{eq-annul}
\big(
M_{11}\big(z(\rho,\phi)\big)+\rho^2r_1 +(1-\rho^2)r_2
\big)\cdot
\big(
M_{22}\left(z(\rho,\phi)\right)+ (1-\rho^2)r_1 +\rho^2 r_2
\big)=(r_1-r_2)^2 (1-\rho^2)\rho^2.
\end{equation}
Finally, since $M(z)$ is self-adjoint for $z \in {\mathbb R}_-$, the matrix $G(\rho,\phi)$ is self-adjoint, and hence
$\ker G(\rho,\phi) = \big(\mathop{\mathrm{im}} G(\rho,\phi)\big)^\perp$.
In particular, if one defines the bundle
\begin{equation}\label{eq-bf}
H=\big\{\mathop{\mathrm{im}} G(\rho,\phi)\mid \rho \in [0,1] \hbox{ and } \phi \in [0,2\pi)\big\}
\end{equation}
one obviously has $G\oplus H=X\times \mathbb{C}^2$, and then
$\mathrm{ch}(G)=-\mathrm{ch}(H)$ as the Chern number of the trivial bundle $X\times \mathbb{C}^2$ is zero.
In summary, $\mathrm{ch}(E) = -\mathrm{ch}(H)$, which we are going to calculate after the following remark.
\begin{rem}\label{exchern}
Let $A:X\to M_2(\mathbb{C})$ be a continuously differentiable map with $A(x)$ of rank $1$ for all $x \in X$. Let us recall how to
calculate the Chern number of the bundle $B=\{\mathop{\mathrm{im}} A(x)\mid x\in X\}$.
Assume that the first column $A_1$ of $A$ vanishes only on a finite set $Y$.
If $Y$ is empty, the bundle is trivial and $\mathrm{ch} (B)=0$.
So let us assume that $Y$ is non-empty.
Let $P(x)$ be the matrix of the orthogonal projection onto $\mathop{\mathrm{im}} A(x)$ in $\mathbb{C}^2$.
By definition, one has
\[
\mathrm{ch}(B)=\dfrac{1}{2\pi i}\int_X \mathop{\mathrm{tr}} \big(P \;{\tt d}_X P\wedge {\tt d}_X P\big).
\]
Now, for $\epsilon>0$ consider an open set $V_\epsilon\subset X$ with $Y\subset V_\epsilon$, having a $C^1$ boundary
and such that $\mathop{\mathrm{vol}}_X V_\epsilon\to 0$ as $\epsilon\to 0$.
By continuity and compactness, the differential form $\mathop{\mathrm{tr}} \big(P \; {\tt d}_X P \wedge{\tt d}_X P\big)$ is bounded, and then
\[
\mathrm{ch}(B)=\dfrac{1}{2\pi i}\lim_{\epsilon\to 0}\int_{X \setminus V_\epsilon} \mathop{\mathrm{tr}} \big(P \;{\tt d}_X P\wedge {\tt d}_X P\big).
\]
For $x\in X \setminus V_\epsilon$ one can consider the vector
\[
\psi(x)=\dfrac{A_1(x)}{\|A_1(x)\|}
\]
and by a direct calculation, one obtains $\mathop{\mathrm{tr}} \big(P \; {\tt d}_X P\wedge {\tt d}_X P\big)= {\tt d}_X\Bar \psi \wedge {\tt d}_X\psi$.
Since the differential form ${\tt d}_X\Bar \psi \wedge {\tt d}_X\psi$ is exact, one has ${\tt d}_X \Bar \psi \wedge {\tt d}_X\psi={\tt d}_X(\Bar \psi \;{\tt d}_X\psi)$, and by Stokes' theorem,
one obtains
\[
\mathrm{ch}(B)=
\dfrac{1}{2\pi i}\lim_{\epsilon\to 0}
\int_{\partial V_\epsilon} \Bar \psi\;{\tt d}_X\psi.
\]
\end{rem}
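Before applying Remark \ref{exchern} to the bundle \eqref{eq-bf}, it may be instructive to see the defining integral at work numerically on a standard test case. The sketch below (our own illustration, independent of the matrices $G(\rho,\phi)$ above) discretizes $\frac{1}{2\pi i}\int \mathop{\mathrm{tr}} \big(P \,{\tt d} P\wedge {\tt d} P\big)$ for the rank-one projector family $P=\frac{1}{2}(1+\hat n\cdot\sigma)$ over the sphere, whose Chern number equals $1$ for this orientation:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def P(t, p):
    # Projector onto the +1 eigenspace of n.sigma, n the unit vector on S^2.
    n = np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
    return 0.5 * (np.eye(2) + n[0] * sx + n[1] * sy + n[2] * sz)

M = 200
ts = np.linspace(0, np.pi, M)
ps = np.linspace(0, 2 * np.pi, M)
dt, dp = ts[1] - ts[0], ps[1] - ps[0]
ch = 0.0 + 0.0j
for t in ts[:-1]:
    for p in ps[:-1]:
        Pt = (P(t + dt, p) - P(t, p)) / dt       # finite-difference dP
        Pp = (P(t, p + dp) - P(t, p)) / dp
        ch += np.trace(P(t, p) @ (Pt @ Pp - Pp @ Pt)) * dt * dp
print((ch / (2j * np.pi)).real)                  # approximately 1
\end{verbatim}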
Let us apply the above constructions to the bundle \eqref{eq-bf}.
Since $(r_1 -r_2)\neq 0$ the first column $G_1(\rho,\phi)$ of the matrix $G(\rho,\phi)$
can potentially vanish only for $\rho=0$ or for $\rho=1$.
As already mentioned, these two points are the coordinate singularities of the parametrization. But by a local change of parametrization, one easily gets rid of this pathology.
Thus, we first consider $\rho=1$ and let $(\theta_1, \theta_2) \in (-1,1)^2$ be a local parametrization of a neighbourhood of the point $\rho=1$, with this point corresponding to $(\theta_1, \theta_2)=(0,0)$.
Let $\widetilde{G}$ be the expression of the function $G$ in the coordinates $(\theta_1, \theta_2)$ and in a neighbourhood of the point $\rho=1$. For this function one has
\[
\widetilde{G}_1(0,0) = \begin{pmatrix}
M_{11} \big(z(0,0)\big) + r_1\\
0
\end{pmatrix}\ .
\]
Now, note that under our assumptions one has $r_1<0$ and $r_2>0$.
As seen from the explicit expressions for $M$,
the entries of $M(z)$ are negative for $z<0$.
Then the term $M_{11} \big(z(0,0)\big) + r_1$ cannot be equal to
$0$, and hence the first coefficient of $\widetilde{G}_1(0,0)$ does not vanish.
For $\rho =0$, let $(\vartheta_1, \vartheta_2) \in (-1,1)^2$ be a local parametrization of a neighbourhood of the point $\rho=0$, with this point corresponding to $(\vartheta_1, \vartheta_2)=(0,0)$.
Let again $\widehat{G}$ be the expression of the function $G$ in the coordinates $(\vartheta_1, \vartheta_2)$ and in a neighbourhood of the point $\rho=0$. Then one has
\[
\widehat{G}_1(0,0) = \begin{pmatrix}
M_{11} \big(z(0,0)\big) + r_2\\
0
\end{pmatrix}\ .
\]
In that case, since $M_{22}(z)+ r_1$ is strictly negative for any $z\in {\mathbb R}_-$ one must have $M_{11} \big(z(0,0)\big) + r_2=0$ in order to satisfy Equation \eqref{eq-annul}.
Therefore, the corresponding point $\rho=0$ belongs to $Y$, as introduced in Remark \ref{exchern};
in our case, $Y$ thus consists of the single point $y$ corresponding to $\rho=0$.
Now, for $\epsilon>0$ consider the set
\begin{equation*}
V_\epsilon= \left\{
\left(\begin{matrix}
\rho^2 \lambda_1 + (1-\rho^2) \lambda_2 &
\rho(1-\rho^2)^{1/2} \;\!e^{i\phi}(\lambda_1-\lambda_2) \\
\rho(1-\rho^2)^{1/2} \;\!e^{-i\phi}(\lambda_1-\lambda_2) &
(1-\rho^2) \lambda_1 + \rho^2 \lambda_2
\end{matrix}\right)
\mid \rho \in [0,\epsilon) \hbox{ and } \phi \in [0,2\pi)\right\}.
\end{equation*}
Obviously, this set satisfies the conditions of Remark \ref{exchern}. We can then represent
\[
G_1(\rho,\phi) = \begin{pmatrix}
M_{11}\big(z(\rho,\phi)\big)+\rho^2r_1 +(1-\rho^2)r_2 \\
\rho (1-\rho^2)^{1/2}e^{-i\phi}(r_1-r_2)
\end{pmatrix}
=:
\begin{pmatrix}
g(\rho,\phi)\\
f(\rho) e^{-i\phi}
\end{pmatrix}
\]
with $f,g$ real, and set
\[
\psi(\rho,\phi):=\dfrac{G_1(\rho,\phi)}{\|G_1(\rho,\phi)\|}=
\begin{pmatrix}
\dfrac{g(\rho,\phi)}{\sqrt{f^2(\rho)+g^2(\rho,\phi)}}\\
\dfrac{f(\rho) e^{-i\phi}}{\sqrt{f^2(\rho)+g^2(\rho,\phi)}}
\end{pmatrix}.
\]
Then one has
\begin{eqnarray*}
\int_{\partial V_\epsilon} \Bar\psi\,{\tt d}_X \psi
&=&
\int_0^{2\pi} \Big[ \dfrac{g}{\sqrt{f^2+g^2}}\;\!\partial_\phi \Big(\dfrac{g}{\sqrt{f^2+g^2}}\Big)
+ \dfrac{f e^{i\cdot}}{\sqrt{f^2+g^2}}\;\!
\partial_\phi\Big(\dfrac{f e^{-i\cdot}}{\sqrt{f^2+g^2}}\Big)
\Big](\epsilon,\phi) \;\!\mathrm{d}\phi \\
&=&
-i\int_0^{2\pi} \Big[\dfrac{f^2}{f^2+g^2}\Big](\epsilon,\phi) \,\mathrm{d}\phi\ .
\end{eqnarray*}
Thus, one has obtained that
\begin{equation}\label{eq-cf}
\mathrm{ch}(H)=-\dfrac{1}{2\pi}\lim_{\epsilon\to 0} \int_0^{2\pi} \dfrac{f^2(\epsilon)}{f^2(\epsilon)+g^2(\epsilon,\phi)} \,\mathrm{d}\phi.
\end{equation}
Furthermore, note that Equation \eqref{eq-annul}
can be rewritten as
$g(\rho,\phi) h(\rho,\phi)=f^2(\rho)$,
where $h(\rho,\phi)=\big(M_{22}\left(z(\rho,\phi)\right)+ (1-\rho^2)r_1 +\rho^2 r_2\big)$ does not vanish in a sufficiently small neighbourhood of the point $\rho=0$.
Then one has $g(\rho,\phi)=o\big(f(\rho)\big)$ uniformly in $\phi$ as $\rho$ tends to $0$. By
substituting this observation into \eqref{eq-cf}
one obtains
\[
\mathrm{ch}(H)=-\dfrac{1}{2\pi} \int_0^{2\pi} \lim_{\epsilon\to 0}\dfrac{f^2(\epsilon)}{f^2(\epsilon)+g^2(\epsilon,\phi)} \,\mathrm{d}\phi
=-\dfrac{1}{2\pi} \int_0^{2\pi} \,\mathrm{d}\phi=-1.
\]
As a consequence, by returning to the original bundle $E$, one has obtained $\mathrm{ch}(E)=-\mathrm{ch}(H)=1$.
As a corollary, one easily proves:
\begin{prop}\label{propfinal}
Let $\lambda_1,\lambda_2$ be two complex numbers of modulus $1$
with $\Im \lambda_1<0< \Im \lambda_2$ and consider the set $X \subset U(2)$ defined by \eqref{param}.
Then the map ${\bf \Omega}: X \to \mathcal{E}$
is continuous and the following equality holds:
\begin{equation*}
\frac{1}{24\pi^2} \int_{X\times \square}\mathop{\mathrm{tr}}\big[{\bf \Gamma}^* \; {\tt d}_{X\times\square}{\bf \Gamma}\wedge
{\tt d}_{X\times\square}{\bf \Gamma}^* \wedge{\tt d}_{X\times\square}{\bf \Gamma} \big]
= 1
\end{equation*}
\end{prop}
\begin{proof}
Continuity of $X \ni U \mapsto \Omega_-(H_\alpha^U,H_0) \in \mathcal{E}$ is proved in Appendix~\ref{appc}.
The equation is an application of Theorem~\ref{thm-ENN} with $n=2$, with $\eta_X$ defined by the first Chern character over $X$:
$\langle\eta_X, [{\bf P}]_0 \rangle = \frac{1}{2\pi i} \int_{X}\mathrm{Tr}\big[ {\bf P} \;{\tt d}_X{\bf P} \wedge{\tt d}_X{\bf P}\big] = \mathrm{ch}(E)$.
\end{proof}
\section*{Introduction}\label{sec:intro}
A valuable way of understanding a many-body system is to characterise its phase diagram and its associated transitions. This approach is useful across a broad class of domains, from the classical to the quantum realms, at zero temperature, and both in and out of thermal equilibrium. Although typically such domains are clearly separated, there are situations where phase transitions in one such domain can influence the behaviour of another.
A useful framework to address such issues is the Lindblad master equation \cite{Gorini1976, Lindblad1976}, through which one may combine both Hamiltonian and classical stochastic dynamics. This methodology has been used, for example, to explore mixed classical-quantum transport \cite{Prosen2008, Prosen2008b, Eisler2011, Temme2012}. However, despite this success, it is difficult to find systems where an interesting interplay can be maintained between classical/stochastic and quantum phases. For example, for a spin chain with stochastic Lindblad processes only at the boundary spins, the typical steady state behaviour is dictated by the quantum properties of the bulk Hamiltonian (see e.g. \cite{Prosen2008,Prosen2008b}). On the other hand, if bulk stochastic processes are also allowed, these typically dominate \cite{Eisler2011, Temme2012} and leave little or no trace of the quantum phase transition to survive at late times.
In this paper we discuss a spin chain model where both classical stochastic and quantum phases are simultaneously relevant to a degree that allows for a genuine interplay between them in the long-time dynamics. The model is a combination of the transverse XY (TXY) Hamiltonian, or equivalently the Kitaev chain~\cite{Kitaev2001}, with a one-way classical stochastic hopping process, modelled by the Totally Asymmetric Simple Exclusion Process (TASEP). We refer to the combination of these two models as the TXY-TASEP.
The TASEP, considered in isolation, has a phase diagram for its non-equilibrium steady state (NESS) that is determined by the stochastic hop-on/hop-off rates at its boundaries. The TXY Hamiltonian undergoes a quantum phase transition in its ground state as the transverse magnetic field parameter is increased, assuming a non-zero XY anisotropy parameter $\delta$, at $\delta = 0$ the model is critical for any magnetic field value. When the two models are combined, we find that for zero anisotropy, $\delta=0$, the NESS retains many of the properties associated with the classical TASEP and, as such, its behaviour can be essentially controlled via the stochastic boundary (hop-on/hop-off) rates. On the other hand, in the regime associated with the anti-ferromagnetic (topological) phase of the XY model, the steady-state more closely resembles a perturbed infinite temperature state, but where the stochastic hop-on/hop-off rates do still dictate some key properties of the perturbation.
The essential feature that allows for the balance between quantum and classical effects to be maintained is the non-zero XY anisotropy, which together with the bulk stochastic hopping, opens a constant Liouvillian gap that persists even for large system sizes. The precise scaling of the gap depends on the underlying quantum phase and is thus controlled by the bulk topology of the transverse XY model band-structure. This results in steady state properties that are very different in each of the quantum regimes.
From the perspective of the TASEP phase diagram \cite{Derrida1992}, we see that steady states of the low- and high-density phases are far more susceptible to the pair creation/annihilation associated with the XY anisotropy. This effect is much less pronounced in the maximal current phase, where the tendency of the XY anisotropy to drive the system towards half-filling is complementary to the maximal current micro-states.
Crucially, because of the constant gap, even in the thermodynamic limit one can move quickly between these limiting cases by simply tuning the transverse field. Systems with a finite gap in this limit are described as \emph{rapidly mixing} and it can be shown that the resultant steady states are robust to local perturbations and uncorrelated at a scale equivalent to the inverse gap size~\cite{Znidaric2015, Poulin2010, Nachtergaele2011, Kastoryano2013, Lucia2015, Cubitt2015}. Our results, obtained by similar methods to prior studies of a dissipative XY model~\cite{Bardyn2012, Joshi2013}, suggest that the XY system parameters can be used to quickly engineer and tune specific features into the steady state and as such have the potential to be used as a means of rapid state preparation.
The TXY-TASEP system does not allow for a direct analytical treatment, as available for related models \cite{Gwa1992, Kim1995, deGier2005, deGier2006, Prosen2008, Zunkovic2010, Crampe2010, Crampe2012, Lazarescu2014, Znidaric2015, Prolhac2016, Brattain2017, Zhang2019, Essler2020, Ishiguro2021, Robertson2021}. Our results are therefore arrived at by using a mix of numerical methods and approximate approaches. On a numerical level we apply matrix product state (MPS) methods~\cite{Nagy2002, Schollwock2011, Paeckel2019} to study steady states and the Liouvillian gap~\cite{Orus2008, Prosen2009, Joshi2013}. However, we also use operator quantization \cite{Prosen2008, Zunkovic2010}, and exploit the block structure that occurs naturally via the associated canonical Majorana representation \cite{Goldstein2012, Kells2015}, to make concrete perturbative statements.
An overview of the paper is as follows:
In section~\ref{sect:model} we introduce key aspects of the transverse XY and TASEP models, providing in addition a detailed summary of our main results and the physical picture that emerges. In section~\ref{sect:NESS_results} we detail our main numerical results, focusing in particular on the relationship of the non-equilibrium steady state (NESS) with both the TASEP steady state and the maximally mixed state. In section~\ref{sect:gap_results} we discuss the Liouvillian super-operator of the model from the perspective of operator quantization and outline its block structure in what is called the canonical Majorana representation. This sets up our perturbative analysis of the NESS in the weak-stochastic limit \cite{Temme2012} and the subsequent focus on the two-quasiparticle super-operator block \cite{Prosen2008, Kells2015}. We provide a number of appendices for peripheral discussions. App.~\ref{app:discrete_TASEP} derives the continuous time TASEP master equation from the discrete time process. The remaining appendices (App.~\ref{app:BPT}, \ref{app:oddevengaps} \& \ref{app:oddspectrum}) expand on the technical aspects and interpretations of the block perturbation theory used in section~\ref{sect:gap_results}.
\section{Model and methods}\label{sect:model}
\subsection{Combining the TXY \& TASEP Models}
Our model of study is the combination of two paradigmatic models for transport in 1-dimensional systems: the transverse-field XY model (TXY) and the Totally Asymmetric Simple Exclusion Process (TASEP). Separately, both TXY and TASEP are well understood; the quantum XY spin model with a transverse magnetic field can be solved exactly by mapping to a free fermion model with superconducting terms present due to the XY anisotropy. Likewise, the classical TASEP is solvable in the sense that there is an ansatz solution for the NESS. Although this ansatz solution predates the tensor network concept, it takes the form of a matrix product state~\cite{Derrida1992}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{setup_and_phases.pdf}
\caption{\label{fig:model} Top: Our model is a chain of two-level quantum systems evolving by the combination of the Transverse XY Hamiltonian (TXY) and the totally asymmetric simple exclusion process (TASEP). The TXY model parameters are in red and TASEP parameters are in black. Bottom: the phase diagrams for the ground state of the TXY Hamiltonian (left), and for non-equilibrium steady state of the TASEP (right) where: LD = Low Density, HD = High Density, MC = Maximal Current.}
\end{figure}
We can incorporate both models into a single Lindblad master equation \cite{Gorini1976, Lindblad1976}
\begin{eqnarray}
\frac{d\hat{\rho}}{dt} &=& - i \lambda \mathbb{H}(\hat{\rho}) + \epsilon \mathbb{L}(\hat{\rho}), \nonumber \\
&=& \mathcal{L}(\hat{\rho}). \label{eq:GKSL}
\end{eqnarray}
The TXY model is represented by the following commutator \mbox{$\mathbb{H}(\hat{\rho}) = [\hat{H}, \hat{\rho}]$}, with overall strength $\lambda$ and the Hamiltonian
\begin{equation}
\hat{H} = - h_{z} \sum_{j=1}^{N} \hat{\sigma}_{j}^{z} + \sum_{j=1}^{N-1} \left( \frac{1 + \delta}{2} \hat{\sigma}_{j}^{x}\hat{\sigma}_{j+1}^{x} + \frac{1 - \delta}{2} \hat{\sigma}_{j}^{y}\hat{\sigma}_{j+1}^{y} \right) . \label{eq:H}
\end{equation}
Here $h_{z}$ is the transverse magnetic field and $0 \leq \delta \leq 1$ the anisotropy parameter. We note that if $\delta \neq 0$, the TXY-Hamiltonian has a quantum phase transition at $|h_{z}| = 1$ (see Fig. \ref{fig:model}). With the standard raising/lowering operators $\hat{\sigma}^{\pm} = (\hat{\sigma}^{x} \pm i \hat{\sigma}^{y})/2$, the anisotropic terms can be rewritten as $\delta (\hat{\sigma}^{+}_{i}\hat{\sigma}^{+}_{i+1} + \hat{\sigma}^{-}_{i}\hat{\sigma}^{-}_{i+1})$, so they can be seen to introduce pair creation/annihilation when $\delta$ is non-zero. We make this statement in view of the fact that, after a Jordan-Wigner transformation, $\hat{H}$ can be rewritten in terms of spinless fermions, which is known as the Kitaev chain \cite{Kitaev2001}. Then the spin model can be reinterpreted as particles hopping on a 1-dimensional lattice where spin-up corresponds to an occupied state and spin-down to an unoccupied state.
In the second term of Eq.~\ref{eq:GKSL} we have the totally asymmetric simple exclusion process (TASEP), with overall strength $\epsilon$ and modelled by the Lindblad super-operator \cite{Temme2012}
\begin{equation}
\mathbb{L}(\hat{\rho}) = \alpha \mathcal{D}[\hat{\sigma}_{1}^{+}](\hat{\rho}) + \beta \mathcal{D}[\hat{\sigma}_{N}^{-}](\hat{\rho}) +\, \sum_{j=1}^{N-1} \mathcal{D}[\hat{\sigma}_{j}^{-} \hat{\sigma}_{j+1}^{+}](\hat{\rho}), \label{eq:cl_TASEP}
\end{equation}
\noindent where $\mathcal{D}[\hat{\ell}](\hat{\rho}) = \hat{\ell}\hat{\rho}\hat{\ell}^{\dagger} - \frac{1}{2}\hat{\ell}^{\dagger}\hat{\ell}\hat{\rho} - \frac{1}{2}\hat{\rho}\hat{\ell}^{\dagger}\hat{\ell}$. The TASEP is a classical stochastic process that involves hard-core particles hopping onto the first site of the chain with rate $\alpha$, hopping off the end of the chain with rate $\beta$, and hopping in one direction through the bulk with rate equal 1. The TASEP has three distinct phases with respect to $\alpha$ and $\beta$ (see Fig. \ref{fig:model}): the maximal current (MC) phase ($\alpha > 1/2$ and $\beta > 1/2$), the low density (LD) phase ($\alpha < 1/2$ and $\beta > \alpha$), and the high density (HD) phase ($\beta < 1/2$ and $\beta < \alpha$). This phase diagram can be deduced from an exact MPS solution for the TASEP steady state, with infinite dimensional matrices \cite{Derrida1992}. However, the exact solution can also be accurately approximated by a MPS with relatively small bond dimension \cite{Temme2012}. In this way we can generate an efficient matrix product state description of the TASEP steady state in a way that can be further extended to find the steady state $\hat{\rho}_\text{NESS}$ of the full Liouvillian $\mathcal{L}$, where an exact MPS is not known. Away from the purely classical model, we can obtain the full NESS by a density matrix renormalisation group (DMRG) implementation modified for open quantum systems~\cite{Orus2008, Prosen2009, Joshi2013}.
We note that Eq.~\ref{eq:cl_TASEP} is part of a continuous-time master equation, while TASEP is often considered as a discrete time stochastic process. In Appendix~\ref{app:discrete_TASEP} we outline the derivation of Eq.~\ref{eq:cl_TASEP} from the underlying discrete time stochastic process. By viewing the TASEP as a discrete time Markov process one can translate the model to a non-Hermitian spin chain for which Bethe ansatz methods can be applied to determine analytic results, see e.g.~\cite{Gwa1992, Kim1995, deGier2005, deGier2006}. We note also that our approach is not the only one with the aim to introduce quantum effects into classical exclusion processes. A number of recent works have proposed quantum modified versions of the SSEP~\cite{Bernard2019, Bernard2021} and ASEP~\cite{Bernard2021sp} which employ a non-Hermitian Hamiltonian formulation of the exclusion process that introduces noise in the particle hopping amplitudes.
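For small chains, the full master equation \eqref{eq:GKSL} can also be diagonalized directly, which provides a useful benchmark for the MPS computations described above. The following sketch (illustrative only; it assumes the QuTiP library, and the identification of spin-up with an occupied site depends on the chosen basis convention) builds $\mathcal{L}$ from Eqs.~\eqref{eq:H} and \eqref{eq:cl_TASEP} and extracts the NESS together with the spectral gap of $\mathcal{L}$:
\begin{verbatim}
import numpy as np
from qutip import (sigmax, sigmay, sigmaz, sigmap, sigmam,
                   qeye, tensor, liouvillian, steadystate)

N, hz, delta, eps, alpha, beta = 4, 0.5, 0.1, 0.1, 0.1, 0.3

def op(single, j):
    # Embed a single-site operator at site j of the N-site chain.
    return tensor([single if k == j else qeye(2) for k in range(N)])

H = -hz * sum(op(sigmaz(), j) for j in range(N))
for j in range(N - 1):
    H += 0.5 * (1 + delta) * op(sigmax(), j) * op(sigmax(), j + 1)
    H += 0.5 * (1 - delta) * op(sigmay(), j) * op(sigmay(), j + 1)

# TASEP jump operators; sqrt(eps * rate) reproduces the eps * D[l] terms.
c_ops = [np.sqrt(eps * alpha) * op(sigmap(), 0),
         np.sqrt(eps * beta) * op(sigmam(), N - 1)]
c_ops += [np.sqrt(eps) * op(sigmam(), j) * op(sigmap(), j + 1)
          for j in range(N - 1)]

rho_ness = steadystate(H, c_ops)          # the NESS of the master equation
ev = np.linalg.eigvals(liouvillian(H, c_ops).full())
gap = -np.sort(ev.real)[-2]               # Liouvillian gap
print(gap)
\end{verbatim}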
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{sparse4.pdf}
\caption{\label{fig:Lblocks}(Color Online) (a) The structure of $\mathcal{L}$ in the canonical basis for a system size of $N = 4$. (b) The $s = 0$ block that corresponds to the maximally mixed/thermal state is connected via terms dependent on the bulk and boundary driving to states $\Ket{\phi_L} = \Ket{\gamma_1 \gamma_2}$, $\Ket{\phi_R} = \Ket{\gamma_{N-1} \gamma_{N}}$. These elements are highlighted, on the left within the $\mathcal{L}^{(2,0)}$ sub-block, by the upper {\color{orange} \emph{orange}} dot and lower {\color{cyan} \emph{cyan}} dot, which have respective values $-\epsilon (\beta-1/2)$ and $ \epsilon (\alpha-1/2)$. (c) One of our main observations is that the complex spectrum near $\mathcal{E} = 0$ is dominated by the states generated from the extremal blocks $\mathcal{L}^{(0)}$, $\mathcal{L}^{(1)}$, $\mathcal{L}^{(2)}$ and $\mathcal{L}^{(2N-1)}$ and that the eigenvalues of these states are well approximated by diagonalizing within each block separately. This can be seen via a non-Hermitian perturbative analysis where the effects of off-diagonal blocks appear only at second order, see Sec.~\ref{sect:perturbation_NESS}. In the figure, we give spectral gaps for $s = 1$ (red), $s = 2$ (black) and $s = 2N-1$ (blue) for a system of length $N = 100$, with $\alpha = 0.1$, $\beta = 0.3$, and $\epsilon = 0.1$.}
\end{figure*}
Our goal in this paper is to study the steady state and the Liouvillian gap of the corresponding TXY-TASEP model's Liouvillian super-operator $\mathcal{L}$, as we vary the model parameters, including the parameter $\epsilon/\lambda$ which controls the relative strength of the quantum TXY model and the classical TASEP in Eq.~\ref{eq:GKSL}. We set $\lambda = 1$ for the remainder of this paper, essentially allowing $\lambda$ to define the unit of frequency. We note that the steady state of $\mathcal{L}$ for the isotropic Hamiltonian, with $h_{z} = \delta = 0$ and TASEP, has been previously explored by other methods \cite{Temme2012}. Also, the case of zero bulk TASEP hopping has been explored in the more general scenario where particles can hop on or off either end of the chain \cite{Prosen2008b}.
\subsection{Operator Quantization - A Hilbert Schmidt Formulation}\label{sect:OQ_basis}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{new_overlap_fig.pdf}
\caption{\label{fig:New_Overlap_fig} (Color Online) These figures contain numerical data for the overlap, as defined in Eq.~\ref{eq:overlapDef}, for three cases of $\epsilon =\lbrace 0.1, 1, 10\rbrace$ and capturing features of the three TASEP phases. In (a)-(c), for the low density (LD) phase [$\alpha = 0.1$ \& $\beta = 0.3$] we observe a strong effect on the overlap with changing $\delta$; in the high density (HD) phase one can see similar features. In (d)-(f), for the maximal current (MC) phase [$\alpha = 0.7$ \& $\beta = 0.9$] we show the relatively weak effect of increasing $\delta$; note the restricted color range of values for this row of figures. In (g), the overlap is shown against system size, $N$, showing an exponential decay with system size within the high density (HD) phase (see inset showing $\log_{10}(\mathcal{O})$). In the MC phase the overlap decays at a slower rate with respect to system size. For (a)-(f), $N=50$. For (g), $\delta = 0.1, h_{z} = 0.5$ and $\epsilon = 0.1$.}
\end{figure*}
In the following, it will be useful to represent the superoperator $\mathcal{L}$ in Eq. \ref{eq:GKSL} as a matrix that acts on a vectorized representation of the quantum state $\hat{\rho}$. We do this by choosing a convenient basis of orthonormal operators $\{ \Gamma_i \}$ with respect to the Hilbert-Schmidt inner product, i.e., $\BraKet{\Gamma_i}{\Gamma_j} \equiv \mbox{Tr}(\Gamma_i^\dagger \Gamma_j) = \delta_{i,j}$. We choose the so-called canonical Majorana basis \cite{Goldstein2012, Kells2015}:
\begin{eqnarray}\label{eq:MFbasis}
\Gamma^{(0)} : & \quad & I/ \sqrt{2^{N}},\nonumber \\
\Gamma^{(1)} : & & \gamma_{1}/\sqrt{2^{N}},\gamma_{2}/\sqrt{2^{N}},\dots,\gamma_{2N}/ \sqrt{2^{N}},\\
\Gamma^{(2)} : & & i\gamma_{1}\gamma_{2}/\sqrt{2^{N}},i\gamma_{1}\gamma_{3}/\sqrt{2^{N}},\dots,i\gamma_{2N-1}\gamma_{2N}/\sqrt{2^{N}},\nonumber \\
& &\text{etc.}\nonumber
\end{eqnarray}
These Majorana operators are defined from the spin operators as:
\begin{equation}
\gamma_{2n-1} = \left(\prod^{2n-2}_{k=1}\sigma^{z}_{k}\right)\sigma^{x}_{2n-1}, \gamma_{2n}= \left(\prod^{2n-1}_{k=1}\sigma^{z}_{k}\right)\sigma^{y}_{2n},
\end{equation}
for $n=1,2,\hdots,N$. As shown in Eq. \ref{eq:MFbasis}, an element $\Gamma^{(s)}_{a}$ of this basis is a product of Majorana operators, where the upper index $s$ is the number of $\gamma$'s in the product, and $a$ labels the basis elements within each $s$ subspace. The factors of $1/\sqrt{2^N}$ ensure the normalisation $\BraKet{\Gamma_a^{(s)}}{\Gamma_{b}^{(s')}} = \delta_{s,s'}\delta_{a,b}$. In this basis the Liouvillian superoperator $\mathcal{L}$ has the matrix elements
\begin{equation}
\mathcal{L}^{(s,s')}_{ab} = \llangle \Gamma^{(s)}_a | \mathcal{L}(\Gamma^{(s')}_b) \rrangle,\label{eq:Labss}
\end{equation}
where the upper indices $(s,s')$ label blocks in the matrix and the lower indices $a,b$ label the matrix elements within the $(s,s')$ block [see Fig. \ref{fig:Lblocks}(a,b) for an illustration of the matrix structure]. Likewise, the vectorized density operator in this operator basis has the vector elements $\rho_a^{(s)} = \mbox{Tr} (\Gamma_a^{(s)}\rho)$.
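For small $N$ the basis \eqref{eq:MFbasis} can be constructed explicitly, which is convenient for checking the block structure shown in Fig.~\ref{fig:Lblocks}. The sketch below (illustrative only; it presumes a dense Liouvillian matrix \texttt{L\_mat} in the same column-stacking vectorization, e.g. obtained as \texttt{liouvillian(H, c\_ops).full()} in the QuTiP sketch above) builds the Jordan-Wigner strings and reduces Eq.~\eqref{eq:Labss} to ordinary vector-matrix-vector contractions:
\begin{verbatim}
import numpy as np
from itertools import combinations

N = 3
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def chain(ops):                       # tensor product over the N sites
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

gammas = []
for n in range(N):                    # gamma_{2n+1}, gamma_{2n+2} (0-indexed n)
    left, right = [sz] * n, [I2] * (N - n - 1)
    gammas.append(chain(left + [sx] + right))
    gammas.append(chain(left + [sy] + right))

norm = np.sqrt(2.0 ** N)
basis = {s: [] for s in range(2 * N + 1)}
for s in range(2 * N + 1):
    for idx in combinations(range(2 * N), s):
        G = np.eye(2 ** N, dtype=complex)
        for i in idx:
            G = G @ gammas[i]
        # the phase i^{s(s-1)/2} renders each basis element Hermitian
        basis[s].append(1j ** (s * (s - 1) // 2) * G / norm)

def vec(A):                           # column-stacking vectorization
    return A.reshape(-1, order='F')

# With L_mat the (4^N x 4^N) Liouvillian in the same convention, the
# (s, s') block of the superoperator matrix is:
#   block = [[vec(Ga).conj() @ L_mat @ vec(Gb)
#             for Gb in basis[sp]] for Ga in basis[s]]
\end{verbatim}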
The superoperator matrix $\mathcal{L}_{ab}^{(s,s')}$ can be non-Hermitian, resulting in a set of complex eigenvalues $\{ \mathcal{E}_0, \mathcal{E}_1, \mathcal{E}_2, \dots \}$, which we assume are ordered according to their real parts, $0 \geq \text{Re}(\mathcal{E}_0) \geq \text{Re}(\mathcal{E}_1) \geq \hdots$. The steady state corresponds to the eigenvalue with zero real part, $\text{Re}(\mathcal{E}_{0}) = 0$, and the Liouvillian gap is defined as
\begin{equation}
\mathcal{E}_{gap} \equiv -\text{Re}(\mathcal{E}_{1}). \label{eq:L_gap}
\end{equation}
The superoperator $\mathcal{L}$ has some other interesting features that are worth pointing out. First, we note that it preserves the parity of the label $s$ (i.e., the operator $\mathcal{L}(\Gamma^{(s)})$ is a linear combination of operator basis elements with the same parity as $s$). This is seen clearly in Fig. \ref{fig:Lblocks}(a,b), where $\mathcal{L}^{(s,s')} = 0$ if $s$ and $s'$ have different parity. Also, we highlight the $s=s'=0$ block [upper-left corner of Fig. \ref{fig:Lblocks}(a,b)], corresponding to the operator basis element $\Gamma^{(0)} = I/\sqrt{2^N}$. Using the master equation \eqref{eq:GKSL}, it is straightforward to show that this matrix element is always zero, $\mathcal{L}^{(0,0)} = 0$. Similarly, it can be shown that this element is only connected to two others in the $\mathcal{L}^{(2,2)}$ block, via the off-diagonal block $\mathcal{L}^{(2,0)}$ [as illustrated in Fig. \ref{fig:Lblocks}(b)]. The two non-zero elements are
\begin{eqnarray}
\llangle 2^{-\frac{N}{2}}\, \gamma_1 \gamma_2 \,\big|\, \mathcal{L}\big(2^{-\frac{N}{2}} I\big) \rrangle &=& \epsilon(\alpha- 1/2),\nonumber \\
\llangle 2^{-\frac{N}{2}}\, \gamma_{2N-1} \gamma_{2N} \,\big|\, \mathcal{L}\big(2^{-\frac{N}{2}} I\big) \rrangle &=& -\epsilon(\beta- 1/2) . \nonumber
\end{eqnarray}
If these two matrix elements are zero (i.e., if $\alpha=\beta=1/2$ or if $\epsilon = 0$) then the maximally mixed state $\rho \sim \Gamma^{(0)} \sim I$ is a valid steady state of the Liouvillian. If both matrix elements are non-zero but small then we expect the NESS to be close to the maximally mixed state. This intuition is based partly on the structure produced in our expression of the Liouvillian superoperator (see Fig.~\ref{fig:Lblocks} and Eq.~\ref{eq:Labss}) and on prior work for another system which allows for a NESS ansatz~\cite{Znidaric2011} with the maximally mixed state as the zeroth order state. We will exploit this feature later in Sections \ref{sect:perturbation_NESS} and \ref{sect:gap_results} to perturbatively estimate the steady state and the gap scaling in the small $\epsilon$ limit.
Furthermore, generically speaking, for a Lindblad equation composed of a Hamiltonian which is quadratic and Lindblad jump operators that are linear in fermion operators, one finds that the Liouvillian super-operator admits a block diagonal matrix form. As a result, the super-operator can be solved block-by-block. There are cases, however, where exact treatments of the super-operator are possible despite the underlying Lindblad equation not being entirely quadratic. Asymmetric boundary driving~\cite{Prosen2008, Prosen2008b} and quartic stochastic processes~\cite{Eisler2011, Zunkovic2014} are two such examples. Although similar approaches cannot be directly applied to TXY-TASEP, we will show that using the canonical representation yields a useful block structure which allows for perturbative estimation of the Liouvillian gap in the weak classical regime.
\section{Non-Equilibrium Steady State}\label{sect:NESS_results}
The non-equilibrium steady state (NESS) is defined as the state $\hat{\rho}_\text{NESS}$ for which $\mathcal{L}(\hat{\rho}_\text{NESS}) = 0$. The case for studying the NESS is straightforward: it typically governs the system's late time behaviour. There are various examples of open quantum spin chains for which the NESS can be calculated exactly through analytical methods. One important class are those for which matrix product ansatz solutions exist for the NESS \cite{Znidaric2010c, Znidaric2011, Prosen2011b, Karevski2013, Prosen2015}. This includes, for example, the purely classical TASEP ($\lambda = 0$ in our model) for which a matrix product ansatz solution was found by Derrida \emph{et al.} \cite{Derrida1992}. Other formulations allow one to utilise the methodology from the Bethe Ansatz \cite{Gwa1992, Kim1995, deGier2005, deGier2006, Crampe2010, Crampe2012, Lazarescu2014, Prolhac2016, Brattain2017, Zhang2019, Essler2020, Ishiguro2021} or operator quantization \cite{Prosen2008, Prosen2008b, Eisler2011}. However, these exact analytical methods cannot be applied to the full TXY-TASEP to determine the NESS. Instead, in this section we employ the density matrix renormalisation group (DMRG) algorithm to numerically determine $\hat{\rho}_\text{NESS}$.
\subsection{Obtaining NESS from DMRG}
We begin by comparing $\hat{\rho}_\text{NESS}$ to the classical TASEP steady $\hat{\rho}_{cl}$, defined as the state for which $\mathbb{L}(\hat{\rho}_{cl}) = 0$ (where $\mathbb{L}$ is defined in Eq. \ref{eq:cl_TASEP}). For given TASEP boundary hopping rates $\left(\alpha, \beta\right)$ we know from the work of Derrida \emph{et al.} \cite{Derrida1992} how to construct $\hat{\rho}_{cl}$ from its exact matrix product ansatz. However, introducing the Hamiltonian term in Eq.~\ref{eq:GKSL} typically modifies the steady state so that it is no longer equal to the classical TASEP steady state $\hat{\rho}_{cl}$. For a given $(\alpha, \beta)$ we quantify the difference between $\hat{\rho}_\text{NESS}$ and $\hat{\rho}_{cl}$ with the overlap
\begin{equation}\label{eq:overlapDef}
\mathcal{O}(\hat{\rho}_\text{NESS}, \hat{\rho}_{cl}) =
\frac{\llangle \rho_\text{NESS} | \rho_{cl} \rrangle}{\sqrt{\llangle \rho_\text{NESS} | \rho_\text{NESS} \rrangle \llangle \rho_{cl} | \rho_{cl} \rrangle}},
\end{equation}
where $\llangle A | B \rrangle = \mbox{Tr} (\hat{A}^\dagger \hat{B} )$ is the Hilbert-Schmidt inner product for operators $\hat{A}$ and $\hat{B}$. This overlap takes values in the interval $\mathcal{O} \in [0,1]$, with $\mathcal{O} = 1$ if $\hat{\rho}_\text{NESS} = \hat{\rho}_{cl}$ and $\mathcal{O} = 0$ if the states $\hat{\rho}_\text{NESS}$ and $\hat{\rho}_{cl}$ are orthogonal (i.e., $\llangle \rho_\text{NESS} | \rho_{cl} \rrangle = 0$).
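For small systems this overlap can be evaluated directly from dense density matrices. The following minimal Python sketch (purely illustrative; in our study the states are matrix-product objects, and all function names here are hypothetical) implements Eq.~\ref{eq:overlapDef}:

```python
import numpy as np

def hs_inner(A, B):
    """Hilbert-Schmidt inner product <<A|B>> = Tr(A^dagger B)."""
    return np.trace(A.conj().T @ B)

def overlap(rho_ness, rho_cl):
    """Normalised overlap O(rho_NESS, rho_cl); real and in [0, 1]
    for positive semi-definite density matrices."""
    num = hs_inner(rho_ness, rho_cl).real
    den = np.sqrt(hs_inner(rho_ness, rho_ness).real
                  * hs_inner(rho_cl, rho_cl).real)
    return num / den

# Toy example: maximally mixed state vs. a pure state on two qubits;
# the overlap evaluates to 1/sqrt(d) = 0.5 here.
d = 4
rho_mixed = np.eye(d) / d
psi = np.zeros(d); psi[0] = 1.0
rho_pure = np.outer(psi, psi)
print(overlap(rho_mixed, rho_pure))
```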
Assuming $(\alpha, \beta)$ in the LD phase, in Fig. \ref{fig:New_Overlap_fig}~[(a)-(c)] we plot the overlap $\mathcal{O}$ as a function of the TXY-model parameters $(\delta,h_{z})$, for the three different TASEP strengths $\epsilon = \lbrace 0.1, 1, 10\rbrace$. For $\epsilon = 10$ the Liouvillian $\mathcal{L}$ is dominated by the TASEP component of the model. It is not surprising, therefore, that in Fig. \ref{fig:New_Overlap_fig}(c) we see large regions in parameter space where $\mathcal{O} \approx 1$. In particular, for small anisotropy $\delta$ we see that $\hat{\rho}_\text{NESS}$ and $\hat{\rho}_{cl}$ are very similar. This is consistent with previous work by Temme \emph{et al.} \cite{Temme2012}, which considered the transport properties for the TXY-TASEP in the special case of zero anisotropy $\delta = 0$, and found that the isotropic Hamiltonian has very little effect. However, even for $\epsilon = 10$ where TASEP dominates, we see in Fig. \ref{fig:New_Overlap_fig}(c) that increasing the TXY anisotropy parameter to relatively small values $\delta \gtrsim 0.5$ can lead to a significant decrease in the overlap $\mathcal{O}$. This suggests that, in the LD phase, the TXY anisotropy $\delta$ plays an important role in driving the NESS away from the TASEP steady state. Similar results are obtained for $(\alpha, \beta)$ chosen in the HD phase.
When $\epsilon = 0.1$, on the other hand, the TASEP is relatively weak compared to the TXY Hamiltonian in Eq. \ref{eq:GKSL}, and so the steady state $\hat{\rho}_\text{NESS}$ may be very different from $\hat{\rho}_{cl}$. This is borne out in Fig.~\ref{fig:New_Overlap_fig}(a), where $\mathcal{O} \ll 1$ for most values of $(\delta,h_{z})$. However, even in this parameter regime we see a significant overlap $\mathcal{O}$ when $h_{z} > 1$ and $\delta$ is small, i.e., for parameters in the paramagnetic phase of the TXY-Hamiltonian. This indicates that the quantum phase transition affects the properties of the NESS.
\begin{figure}
\centering
\subfigure[~$\epsilon = 0.1$]{
\includegraphics[width=0.225\textwidth]{figMix_e01.pdf}
}
\subfigure[~$\epsilon = 10$]{
\includegraphics[width=0.225\textwidth]{figMix_e10.pdf}
}
\caption{\label{fig:mixednessLD} (Color Online) At classical rates $(\alpha,\beta)=(0.1,0.3)$, LD phase, we show the purity/mixedness of the NESS at two relative strengths $\epsilon$ representative of the weak/strong classical limits. In (a) $\epsilon = 0.1$, weak classical regime, we can see that increased $\delta$ quickly produces a more mixed state for all $h_{z}$, though more slowly for $h_{z} > 1$. In (b) $\epsilon = 10$, strong classical regime, the value of $h_{z}$ has less relevance, yet the effect of increasing $\delta$ remains apparent. $N=50$ for both figures.}
\end{figure}
As mentioned above, our numerical results in Fig.~\ref{fig:New_Overlap_fig}~[(a)-(c)] are plotted for $(\alpha, \beta)$ in the LD phase, and similar results are obtained in the HD phase. However, the results are different for $(\alpha, \beta)$ in the MC phase. In Fig.~\ref{fig:New_Overlap_fig}[(d)-(f)] we can see that the overlap does not go to zero as in the LD phase for all $(\delta,h_{z})$. While an attempt has been made to highlight the different regions in $(\delta,h_{z})$, the overlap is largely similar across the parameter space. In Fig.~\ref{fig:New_Overlap_fig}(g) we plot the overlap $\mathcal{O}$ as a function of system size $N$, for various choices of $(\alpha, \beta)$. We see that the overlap decays much more slowly with system size for $(\alpha,\beta)$ in the MC phase.
What can we say about $\hat{\rho}_\text{NESS}$ when it is driven away from $\hat{\rho}_{cl}$ by the TXY-Hamiltonian? We can gain some insight by studying the mixedness $\mbox{Tr}(\hat{\rho}_\text{NESS}^2)$ of the steady state. In Fig.~\ref{fig:mixednessLD}[(a)-(b)] we plot the mixedness of the steady state $\hat{\rho}_\text{NESS}$ of the full Liouvillian in the LD regime. As the parameter $\epsilon$ decreases, corresponding to increasing relative strength of the TXY-Hamiltonian, we see that the NESS is driven away from $\hat{\rho}_{cl}$ to a much more mixed state. Moreover, with decreasing $\epsilon$ one can clearly resolve signatures of the phase transition of the XY-model at $h_{z} = 1$ and $\delta > 0$, see Fig.~\ref{fig:mixednessLD}(a).
We have shown then that increasing the TXY anisotropy can drive the NESS away from the classical TASEP steady state, for $(\alpha, \beta)$ in the LD/HD phase. To better understand this, we examine the overlap $\llangle \rho_{cl} | \mathcal{L}^\dagger \mathcal{L} | \rho_{cl} \rrangle = \big| \frac{d}{dt} |\rho_{cl} \rrangle \big|^2$, which quantifies the susceptibility of the TASEP steady state $\hat{\rho}_{cl}$ to dynamics of the full Liouvillian. Since $\mathbb{L}|\rho_{cl}\rrangle = 0$ we observe that $\llangle \rho_{cl} | \mathcal{L}^\dagger \mathcal{L} | \rho_{cl} \rrangle = \lambda^2 \llangle \rho_{cl} | \mathcal{H}^2 | \rho_{cl} \rrangle$, so that the susceptibility depends only on the Hamiltonian part of the Liouvillian. In Fig.~\ref{fig:commutator_expect}(a) we see that the isotropic Hamiltonian $\delta = 0$ has a relatively small effect on the classical steady state. However, for $\delta > 0$, Fig. \ref{fig:commutator_expect}(b) shows that $\hat{\rho}_{cl}$ responds very strongly to the TXY-Hamiltonian in the LD and HD phases, although not in the MC phase. This is reinforced by Fig. \ref{fig:sus_cl_L_a7b19}, which shows the susceptibility scales linearly with system size $N$ in the HD phase, but sub-linearly in the MC phase.
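Since the commutator superoperator is self-adjoint with respect to the Hilbert-Schmidt inner product, the susceptibility reduces to $\lambda^2$ times the squared Frobenius norm of $[H,\hat{\rho}_{cl}]$. A minimal dense-matrix sketch (a stand-in for the MPS computation actually used; names are illustrative):

```python
import numpy as np

def susceptibility(H, rho_cl, lam):
    """lambda^2 <<rho_cl| H^2 |rho_cl>> = lambda^2 ||[H, rho_cl]||_F^2."""
    comm = H @ rho_cl - rho_cl @ H            # [H, rho_cl]
    return lam**2 * np.trace(comm.conj().T @ comm).real

# Consistency check: a state diagonal in the H eigenbasis commutes
# with H and is therefore insusceptible.
H = np.diag([0.0, 1.0, 2.0])
rho_diag = np.diag([0.5, 0.3, 0.2])
print(susceptibility(H, rho_diag, lam=0.1))   # 0.0
```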
\begin{figure}
\centering
\subfigure[~$N=50$, $\delta=0$]{
\includegraphics[width=.225\textwidth,height=0.2\textwidth]{L50d0.pdf}
\label{fig:KZ}
}
\subfigure[~$N=50$, $\delta=0.2$]{
\includegraphics[width=.225\textwidth,height=0.2\textwidth]{L50d0p2.pdf}
\label{fig:KNZ}
}
\subfigure[~$\alpha = 0.7$, $\delta=0.1$]{
\includegraphics[width=0.45\textwidth]{sus_cl_L_a7b19.pdf}\label{fig:sus_cl_L_a7b19}}
\caption[]{\label{fig:commutator_expect} (Color online): The susceptibility of $\hat{\rho}_{cl}$ to dynamics by the Liouvillian $\, \Bra{\rho_{cl}} \mathcal{L}^\dagger \mathcal{L}\Ket{\rho_{cl}} = \lambda^{2}\Bra{\rho_{cl}} \mathcal{H}^2 \Ket{\rho_{cl}} $. [(a),(b)] The introduction of pairing $\delta$ allows the classical steady state to couple strongly to the quantum commutator in both low and high density phases. (c) The strength of this coupling scales linearly with the system size in the low and high density phases (upper two lines). We emphasize this by plotting the susceptibility divided by $N$, so that the upper lines remain largely constant and the lower lines decrease. All data in this figure were plotted with $h_{z} = 0.5$ and $\epsilon = 0.1$.}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=.95\textwidth]{Tr_rhoH.pdf}
\caption{\label{fig:Tr_rhoH}The energy expectation values $\langle E \rangle = \mbox{Tr}(\rho_{\text{NESS}}H)$ together with the many-body eigen-spectrum $E_n$ of $H$, where only band edges are shown. (a) $\alpha=0.1$, $\beta=0.3$, $\delta=0.1$; (b) $\alpha=0.3$, $\beta=0.1$, $\delta=0.05$; (c) $\alpha=0.7$, $\beta=0.9$, $\delta=0.1$. In all figures, $\epsilon = 0.1$ and $N = 50$. In the paramagnetic regimes $|h_{z}| > 1$ the classical densities determined by the boundary rates result in steady states with a clear low/high energy imbalance [(a) and (b)]. This imbalance is suppressed in the ferromagnetic regime $|h_{z}| < 1$ and also throughout the maximal current phase (c).}
\end{figure*}
One can intuit the reasons for the strong response of $\hat{\rho}_{cl}$ in this case by considering the steady state configuration \cite{Derrida1992, Rajewsky97, Evans1999, Nagy2002} in those classical phases. In the LD phase, as the name suggests, there are many empty sites. Rewriting the anisotropic terms of the Hamiltonian in Eq.~\ref{eq:H} as $\delta(\hat{\sigma}^{x}_{i}\hat{\sigma}^{x}_{i+1} - \hat{\sigma}^{y}_{i}\hat{\sigma}^{y}_{i+1}) = 2\delta (\hat{\sigma}^{+}_{i}\hat{\sigma}^{+}_{i+1} + \hat{\sigma}^{-}_{i}\hat{\sigma}^{-}_{i+1})$, the operator $\hat{\sigma}^{+}_{i}\hat{\sigma}^{+}_{i+1}$ associated with the anisotropy can successfully be applied to the state at many locations on the chain. Similarly, in the HD phase there are many occupied sites and the pair annihilation operator, $\hat{\sigma}^{-}_{i}\hat{\sigma}^{-}_{i+1}$, can be applied without annihilating the state. However, in the MC phase the steady state is largely comprised of half-filled configurations which will not couple as strongly to the anisotropic terms.
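This operator rewriting is elementary but easy to verify numerically; a short check of the two-site identity (illustrative only):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sp = (sx + 1j * sy) / 2          # sigma^+ (raising operator)
sm = (sx - 1j * sy) / 2          # sigma^- (lowering operator)

lhs = np.kron(sx, sx) - np.kron(sy, sy)
rhs = 2 * (np.kron(sp, sp) + np.kron(sm, sm))
print(np.allclose(lhs, rhs))     # True
```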
Another interesting property of the steady state $\rho_{\text{NESS}}$ is the expectation value $\langle E \rangle = \mbox{Tr}( \rho_{\text{NESS}} H)$, which gives an indication of which Hamiltonian eigenstates take part in the steady state. In Fig.~\ref{fig:Tr_rhoH} we show how the expectation value changes relative to the full eigen-spectrum of the system Hamiltonian. In the LD and HD regimes the expectation value drifts towards the extrema of the many-body Hamiltonian spectrum, provided the Hamiltonian is tuned to the paramagnetic region. This occurs due to the energetic importance of either filled or empty sites (up or down spins) in this quantum phase. On the other hand, in the ferromagnetic/topological regimes, the energy of the steady state coincides with the centre of the many-body spectrum, backing up the idea that here the system favours something close to the maximally mixed state. In the maximal current phase, this behaviour dominates for all values of the transverse field.
\subsection{NESS as a Perturbation of the Maximally Mixed State}\label{sect:perturbation_NESS}
The perturbation theory utilised here for non-Hermitian systems is based on \cite{Sternheim1972,Li2014nat, Li2016}. For additional technical details see App.~\ref{app:BPT}. As a starting point, one defines a ``bare'' unperturbed Liouvillian $\mathcal{L}_0$ with eigenvalues $\mathcal{E}_{n}$ and left and right eigenvectors $\Bra{\tilde{v}_{n}}$ and $\Ket{v_{n}}$ such that $\Bra{\tilde{v}_m} \mathcal{L}_0 \Ket{v_n} = \delta_{nm} \mathcal{E}_n$. We write the perturbation as $\mathcal{L}_{1}$ and an expansion of the steady state as $\Ket{\rho} =\sum_{j} \Ket{\rho_{j}}$, the terms of which are produced iteratively according to
\begin{equation}
\Ket{\rho_{j}} = \mathcal{L}_0^{-1} \mathcal{L}_1 \Ket{\rho_{j-1}},
\label{eq:it_expand}
\end{equation}
where $\mathcal{L}_0^{-1}$ is the pseudo-inverse defined as
\begin{equation}
\mathcal{L}_0^{-1} =\sum_{\mathcal{E}_n \neq 0} \frac{\Ket{v_n} \Bra{\tilde{v}_n} }{ \mathcal{E}_n}.\label{eq:pseudo_inv}
\end{equation}
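For small blocks, both the pseudo-inverse in Eq.~\ref{eq:pseudo_inv} and the iteration in Eq.~\ref{eq:it_expand} can be realised directly from a dense eigendecomposition. The sketch below assumes a diagonalisable $\mathcal{L}_0$ and follows the sign convention of Eq.~\ref{eq:it_expand}; all names are illustrative:

```python
import numpy as np

def pseudo_inverse(L0, tol=1e-12):
    """Sum over |v_n><v~_n| / E_n restricted to E_n != 0 (Eq. pseudo_inv)."""
    E, V = np.linalg.eig(L0)      # right eigenvectors as columns of V
    W = np.linalg.inv(V)          # rows of W are the dual left eigenvectors
    inv_E = np.zeros_like(E)
    keep = np.abs(E) > tol        # exclude the steady-state kernel
    inv_E[keep] = 1.0 / E[keep]
    return V @ np.diag(inv_E) @ W

def perturbative_state(L0, L1, rho0, orders=5):
    """Accumulate the iterates |rho_j> = L0^{-1} L1 |rho_{j-1}>."""
    L0inv = pseudo_inverse(L0)
    rho = rho0.astype(complex)
    term = rho0.astype(complex)
    for _ in range(orders):
        term = L0inv @ (L1 @ term)
        rho = rho + term
    return rho
```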
At this point one might expect that $\mathbb{H}$ is chosen as the unperturbed piece of the Liouvillian and subsequently that $\epsilon\mathbb{L}$ becomes the perturbation. However, one can immediately see an obstacle arising from this choice. Given Eq.~\ref{eq:pseudo_inv}, since $\mathbb{H}$ corresponds to the commutator with the Hamiltonian, its spectrum is massively degenerate and all eigenvectors of the Hamiltonian yield zero eigenvalues in the commutator. As a result we would be left with a highly degenerate situation that is difficult to deal with.
We propose a way to circumvent this obstacle by exploiting the structure of $\epsilon\mathbb{L}$. We know that once any part of $\mathbb{L}$ is switched on, the system will immediately have a preferred steady state. As such, we proceed by first treating the diagonal ($\backslash$) components of the TASEP term $\mathbb{L}$ differently from the off-diagonal ($\backslash \backslash$) ones. Namely, we split the total $\mathbb{L}$ as the sum
\begin{equation}
\epsilon \mathbb{L} \rightarrow \epsilon_{\backslash} \mathbb{L}_{\backslash} + \epsilon_{\backslash \backslash} \mathbb{L}_{\backslash \backslash}.
\end{equation}
We note here that this splitting of $\epsilon\mathbb{L}$ is an equality; however, we introduce separate $\epsilon$ variables for the two components for this calculation. In the end they will be set equal to the original $\epsilon$ variable. Our unperturbed system will then consist of the collective diagonal blocks
\begin{equation}
\mathcal{L}_0 = \sum_{s \in \text{even}} \mathcal{L}^{(s)} = \sum^{2N}_{s \in \text{even}} \left( \epsilon_{\backslash} \mathbb{L}^{(s)}_{\backslash} - i \lambda \mathbb{H}^{(s)} \right),
\end{equation}
and the perturbation as the remaining off-diagonal components
\begin{align}
\mathcal{L}_1 = \mathcal{L}-\mathcal{L}_0 &= \sum_{s \in \text{even}} \left( \mathcal{L}^{(s,s+2)} + \mathcal{L}^{(s+2,s)} \right),\nonumber \\
&= \epsilon_{\backslash \backslash } \sum_{s \in \text{even}} \left( \mathbb{L}^{(s,s+2)} + \mathbb{L}^{(s+2,s)} \right).
\end{align}
The block diagonal form of $\mathcal{L}_0$ means that we can write down its eigen-spectrum block by block. In practice we observe numerically that the real component of $\mathcal{E}_{n}^{(s)}$ grows linearly for small $\epsilon_{\backslash}$, such that in what follows it will be useful to write this dependence explicitly and expand the complex eigenvalue as $\mathcal{E}_n^{(s)} = \epsilon_{\backslash} r_{n}^{(s)} + i E_{n}^{(s)}$.
Another property of our unperturbed operator is that the pseudo-inverses of the blocks only act locally within each block. This will allow us to simplify some expressions in the following and implies for example that
\begin{equation}
\mathcal{L}_0^{-1} = \sum_{s \in \text{even}} [\mathcal{L}^{(s)}]^{-1}.
\end{equation}
Then, with the maximally mixed state as our starting state $\Ket{\rho_0} =\Ket{I}$ we can proceed according to the iterative procedure \eqref{eq:it_expand}:
\begin{align}
\Ket{\rho_1} &= - [\mathcal{L}^{(2)}]^{-1} \mathcal{L}^{(2,0)} \Ket{I}, \\
\Ket{\rho_2} &= - [\mathcal{L}^{(4)}]^{-1} \mathcal{L}^{(4,2)} \Ket{\rho_1}, \nonumber \\
\Ket{\rho_3} &= - ([\mathcal{L}^{(2)}]^{-1} \mathcal{L}^{(2,4)} + [\mathcal{L}^{(6)}]^{-1} \mathcal{L}^{(6,4)} ) \Ket{\rho_2}, \nonumber \\
&\vdots & \nonumber
\end{align}
where only the non-zero $\mathcal{L}^{(s,s^{\prime})}$ blocks/elements have been kept. Plugging in the dependence on the overall weights, we have for the first-order expression:
\begin{align}
\Ket{\rho_1}
&= - \epsilon_{\backslash \backslash} [\mathcal{L}^{(2)}]^{-1} \mathbb{L}^{(2,0)} \Ket{I}, \nonumber \\
&= - \epsilon_{\backslash \backslash} \sum_{n } \frac{ \Ket{v_n^{(2)}}}{\mathcal{E}_n^{(2)}} \Bra{\tilde{v}^{(2)}_n} \mathbb{L}^{(2,0)} \Ket{I}, \nonumber \\
&= - \epsilon_{\backslash \backslash} \sum_{n } \frac{ \bar{\alpha} \BraKet{\tilde{v}^{(2)}_n}{\phi_L} - \bar{\beta} \BraKet{\tilde{v}^{(2)}_n}{\phi_R} }{\epsilon_{\backslash} r^{(2)}_{n} + i E^{(2)}_{n} } \Ket{v^{(2)}_{n}},
\end{align}
with $\Ket{\phi_L} = \Ket{\gamma_1 \gamma_2}$, $\Ket{\phi_R} = \Ket{\gamma_{2N-1} \gamma_{2N}}$, $\bar{\alpha} = \alpha -1/2$, $\bar{\beta} = \beta -1/2$ and where on the last line we have also expanded the $s=2$ block eigenvalues into their real and imaginary components.
Leaving the inner products in the numerator to one side for a moment we can consider which terms are relevant in this first iterative correction by looking at cases for the coefficients in the sum:
\begin{equation}\label{eq:firstordercorr}
\frac{-\epsilon_{\backslash \backslash}}{\epsilon_{\backslash} r^{(2)}_{n} + i E^{(2)}_{n}} \sim
\begin{cases}
\frac{-1}{\,r^{(2)}_{n}},\, E^{(2)}_{n} \ll \epsilon_{\backslash} r^{(2)}_{n},\\
\frac{i \epsilon_{\backslash}}{E^{(2)}_{n}}, \text{otherwise.}\\
\end{cases}
\end{equation}
Evidently as we reinstate $\epsilon_{\backslash \backslash} = \epsilon_{\backslash} \rightarrow \epsilon$ and approach $\epsilon \rightarrow 0$ the second case is irrelevant and only those coefficients with small to negligible imaginary parts contribute to the correction.
What about the terms $\BraKet{\tilde{v}^{(2)}_n}{\phi_{L/R}}$? An unusual feature of the block-decomposition is that we could in principle have additional $\epsilon_{\backslash}$ dependences occurring through the $\Ket{ \tilde{v}^{(2)}_n}$. However, in practice we see via direct evaluation that, to leading order, these vector elements are independent of $\epsilon$. This means that, in the limit $ \epsilon_{\backslash \backslash} = \epsilon_{\backslash} \rightarrow 0$, we approach a fixed steady state that is not the infinite temperature state $\Ket{I}$. Moreover, the magnitude of this deviation from the infinite temperature state is dictated primarily by the scale given by the $1/r^{(2)}_{n}$, of which the term $1/r_1^{(2)}$ is the largest.
This outcome runs counter to typical perturbative statements where, as the small parameter tends to zero, we approach the bare unperturbed state (in this case $\Ket{I}$). Recall however that, to avoid dealing with the massive degeneracy of the commutator $\mathbb{H}$, we also allowed the small parameter $\epsilon$ to enter into the bare Liouvillian. In this iterative construction, then, we do not necessarily expect that each successive iteration will result in contributions that scale according to some positive power of $\epsilon$. Indeed, one expects that further iterations would eventually lead to additional corrections in other $s$-even sectors that, similarly to the explicit first iterative correction above, do not vanish as $\epsilon \rightarrow 0$.
\section{The Liouvillian Gap}\label{sect:gap_results}
The next feature that we explore is the Liouvillian gap, which one can consider as a key indicator of relaxation times towards the NESS~\cite{Dudzinski2000, Nagy2002, deGier2006, Kessler2012}. As shown in Eq. \ref{eq:L_gap}, this is defined as $\mathcal{E}_{\text{gap}} \equiv -\text{Re}(\mathcal{E}_{1})$, where $\mathcal{E}_{1}$ is the eigenvalue of $\mathcal{L}$ with the largest non-zero real component. For a review of gap behaviour in a variety of related models see~\cite{Znidaric2015}. Generically in such studies the key indicator is how the Liouvillian gap scales as a function of the system size $N$, e.g., $\mathcal{E}_{\text{gap}} \sim N^{-z}$, where the dynamical exponent $z$ depends on the particular model studied.
\subsection{Emergence of an Open Gap from XY Anisotropy and Bulk Dissipation}
Utilising a convenient basis for the Liouvillian super-operator (Sec.~\ref{sect:OQ_basis}), we find that the gap for this system can be obtained via an MPS-based approach~\cite{Orus2008, Prosen2009, Joshi2013}. Moreover we find that, in the weak classical limit, the full Liouvillian gap is closely shadowed by the gap obtained by restricting to the $s=2$ sector only - the $\mathcal{E}_n^{(2)}$ gap used in the last section. Analysing the scaling of the $s=2$ sector, we find that it, and therefore the full Liouvillian gap, scales as
\begin{equation}
\mathcal{E}_{\text{gap}} \sim f(\delta,h_{z}) + \mathcal{O}(N^{-1}),
\end{equation}
where $f(\delta,h_{z})$ is non-zero when $|\delta|>0$. This implies that the relaxation time is finite in the thermodynamic limit when $|\delta| > 0$, since in this case the gap remains non-zero. This non-zero gap is not present in either XX + TASEP \cite{Temme2012}, XX + symmetric simple exclusion process (SSEP) \cite{Eisler2011} or XY + boundary driving \cite{Prosen2009} models. As such we can infer that it is a consequence of combining both an XY anisotropy and bulk stochastic hopping. The precise functional form of the gap function $f(\delta, h_{z})$ for different types of dissipation, including TASEP, remains an interesting question which we explore in future work~\cite{Kavanagh2021b}.
\subsection{MPS obtained $\mathcal{E}_{\text{gap}} $ versus $\mathcal{E}^{(2)}_{1}$ }\label{sect:TLGap}
Our key claims on the scaling of the gap are based on the assertion that, in the weak classical limit, the full Liouvillian gap can be estimated by only solving the $s=2$ sub-block. Our key tool here is an MPS calculation where we can effectively project out the steady state from the variational algorithm. Here we exploit the structure that the Liouvillian super-operator takes in the so-called canonical Majorana representation (see Fig.~\ref{fig:Lblocks}), specifically using the fact that the $s=0$ block is only connected to the $s=2$ block via a single off-diagonal block, $\mathcal{L}^{(2,0)}$. This allows one to project out the steady state from the MPO that represents the full Liouvillian operator, while leaving all other eigenvalues unaffected.
In Fig.~\ref{fig:MPScompare} we compare the results from $ \mathcal{E}^{(2)}_{1}$ with the eigenvalues obtained from a full MPS treatment of a system of size $N=30$ and see excellent agreement right across the phase diagram. In App.~\ref{app:BPT} we also detail a perturbative argument for why these values are so close, using the Rayleigh-Schr\"{o}dinger non-Hermitian formulation \cite{Sternheim1972} of the TXY-TASEP system. A synopsis of this calculation is that in the small $\epsilon$ regime, we can consider $s$-blocks as only being weakly connected to their $(s \pm 2)$-block neighbours. Here the gap $\mathcal{E}_{\text{gap}}$ can be expanded as
\begin{equation}
\mathcal{E}_{\text{gap}} = \mathcal{E}^{(2)}_{1} + \mathcal{E}^{(2)\prime}_{1} + \mathcal{E}^{(2) \prime \prime}_{1} + \dots,
\end{equation}
where $\mathcal{E}^{(s)}_{i}$ is the $i^{\text{th}}$ eigenvalue from the $s$ diagonal block and $\mathcal{E}^{(s) \prime}_i$ and $\mathcal{E}^{(s) \prime\prime}_i$ are the first and second order corrections. Crucially, one finds that the first order correction $\mathcal{E}^{(2)\prime}_{1}$ is zero and that the second order correction is much smaller than the zeroth order estimate, and typically scales as $\epsilon^{p}$ where $p > 2$, see App.~\ref{app:BPT}.
\begin{figure}
\centering
\includegraphics[width=0.42\textwidth]{N30Gapcompare.pdf}
\caption{Comparison of projection and MPS methods for a line cut at: $\alpha=0.1, \beta=0.3, \delta=0.7, \epsilon =0.1$ and $N=30$. A low virtual bond dimension ($\chi=20$ in this case) can be used to estimate gapped low lying states in both sectors by adding a weighted parity operator to $\mathcal{L}$. The even-sector gap can be estimated directly due to the specific form the Liouvillian takes in the canonical basis, which means that one can decouple the $s=0$ block without affecting any other eigenvalues. }
\label{fig:MPScompare}
\end{figure}
\subsection{Analysis of the $s=2$ Spectrum}
In the weak classical limit we can use $\mathcal{E}^{(2)}_{1}$ as a proxy for the full gap, analyse the parameter space of the model more fully, and assess its scaling as a function of system size, see Fig.~\ref{fig:gap_scaling}. Our main result is that, in the thermodynamic limit $N \rightarrow \infty$, the gap $\mathcal{E}_{\text{gap}} \rightarrow f(\delta,h_{z})$ remains open if the anisotropy parameter is non-zero. However, the dependence of $\mathcal{E}_{\text{gap}}$ on $\delta$ also relies strongly on the magnetic field parameter, with clear differences occurring between the different quantum phases of the Hamiltonian.
When $\delta = 0$ we find that $f(0,h_{z}) = 0$ and thus $\mathcal{E}_{\text{gap}} \sim N^{-1}\xrightarrow[ ]{N\rightarrow \infty} 0$. This value is completely unaffected by changes in the magnetic field $h_{z}$, as a result of a Lindblad symmetry that is present; see, e.g., \cite{Albert2014}. However, for non-zero $\delta$ and when $|h_{z}| < 1$ (where the underlying Hamiltonian has a topological gap and boundary zero-energy modes) the Liouvillian gap develops linearly with $\delta$ (the superconducting order parameter in the fermionic picture). On the other hand, where $|h_{z}| > 1$ the system Hamiltonian is non-topological and the gap develops $\propto \delta^2$. This smaller gap means that perturbations to the thermal state are far more dramatic in this quantum regime. For a discussion of the odd sector blocks $s=1$ and $s=2N-1$ see App.~\ref{app:oddevengaps} and App.~\ref{app:oddspectrum}.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth,height=0.36\textwidth]{1overNscale.pdf}
\includegraphics[width=.48\textwidth,height=0.36\textwidth]{Egap_infty.pdf}
\caption[]{(Color online) [Top] Spectral gap scaling with $\alpha=0.1$, $\beta=0.3$, $\epsilon =0.1$, $h_z=0.5$. A nonzero $\delta$ introduces a persistent gap in the $N \rightarrow \infty$ limit. [Bottom] A scan of the projected $N \rightarrow \infty$ limit. The character of the gap changes when one traverses the quantum phase transition at $h_{z} = 1$ (red line). We note that the quantity plotted in the bottom figure is precisely $r^{(2)}_{1}$ of Sec.~\ref{sect:perturbation_NESS}.}
\label{fig:gap_scaling}
\end{figure}
\subsection{Relaxation Rate Compared to Related Models}
The interpretation of the gap as an inverse relaxation time leads one to consider the scaling of the gap with system size. If one has an inverse relation between the gap and the system size then in the large $N$ limit the system will not relax to the steady state in finite time. As such one often aims to determine the dynamical exponent, $z$, in the scaling relation $\mathcal{E}_{\text{gap}} \sim N^{-z}$. If $z = 0$, the longest relaxation times for the dynamics remain finite in the thermodynamic limit, while if $z > 0$ they diverge.
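Given gap estimates at several system sizes, the exponent $z$ can be extracted by a linear fit in log-log coordinates. A minimal sketch (the data below are synthetic and purely illustrative):

```python
import numpy as np

# Synthetic gap data mimicking E_gap ~ c * N^{-3/2} with small noise.
rng = np.random.default_rng(0)
N = np.array([20, 40, 80, 160, 320])
gap = 3.0 * N**-1.5 * (1 + 0.05 * rng.standard_normal(N.size))

slope, intercept = np.polyfit(np.log(N), np.log(gap), 1)
z = -slope
print(f"estimated dynamical exponent z = {z:.2f}")  # close to 1.5
```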
Generically, the dynamical exponent depends on a variety of factors from the model in question. The gap scaling of our model has been found in certain special cases. It has been determined analytically \cite{Prosen2008b, Prosen2008} for the quantum XY model, with bi-directional dissipation on boundary sites only, that the gap scales as $\mathcal{E}_{\text{gap}} \sim N^{-3}$ everywhere except at $|h_{z}| = 1- \delta^2 $, where the gap closes more rapidly, as $\mathcal{E}_{\text{gap}} \sim N^{-5}$. This can be contrasted with the TASEP gap scalings, which differ depending on the phase of the classical model. There, one finds by various approaches \cite{deGier2005, deGier2006, Znidaric2015, Dudzinski2000, Nagy2002} gap scalings of $\mathcal{E}_{\text{gap}} \sim N^{0}$ in the LD/HD phases, $\mathcal{E}_{\text{gap}} \sim N^{-3/2}$ in the MC phase, and $\mathcal{E}_{\text{gap}} \sim N^{-2}$ on the critical line, where $\alpha = \beta$.
A relatively generic bound for the gap scaling of $\mathcal{E}_{\text{gap}} \sim N^{-1}$ can be found for systems with only boundary dissipation~\cite{Znidaric2015} which is a component of the model studied in this paper. However, the existence of a finite gap for appropriate Hamiltonian parameters places the TXY-TASEP outside of the scope of these results and indeed also outside the scope of integrable systems results~\cite{Ziolkowska2020, deLeeuw2021}. One can draw the conclusion that both bulk and boundary dissipation together are necessary for a non-vanishing gap in all phases of the TASEP.
\section{Summary and Conclusion}\label{sect:conclusion}
In this work we have explored the transverse XY TASEP system, showing how the interplay between XY anisotropy and transverse field affect both the non-equilibrium steady state and the gap that separates it from the rest of the Liouvillian spectrum.
An interesting aspect of the model is the ability to tune between different steady states that derive key properties from the underlying quantum phase. These quantum effects are most profound in the parameter regions of low magnetic field ($h_{z} < 1$), where the XY terms open a Liouvillian gap that is approximately linear in the anisotropy $\delta$. On the other hand, in the regimes associated with high transverse field ($h_{z} > 1$) we see that the steady state essentially reverts to something like the purely stochastic NESS, mimicking the scenario also found with no XY anisotropy, albeit with a gap now proportional to $\delta^2$.
The low field deviations from the classical NESS, most pronounced in the TASEP low and high density regimes, can be understood by viewing the XY anisotropy $\delta$ as a source of pair creation/annihilation which seeks to drive the system towards half filling, and pin the energy expectation value to energies close to the centre of the many-body spectrum. The high magnetic field reduces this anisotropic drive toward half filling, allowing the particle densities to be largely determined by the classical boundary driving. This coincides with NESS energy expectation values drifting towards the extremes of the Hamiltonian many-body spectra.
Our observations of a constant gap show that the TXY-TASEP system constitutes what is called a rapidly mixing system. In this respect the canonical Majorana basis provides an intuitive way to understand this in terms of the perturbations to the maximally mixed state and the gaps found in successive even-parity excitation number blocks. In cases where the even quasi-particle gap is constant we can argue that, in the weak classical limit, successive perturbations will decay order-by-order. On the other hand, when the gap closes with system size, or is at least very small (as in the large transverse field limit), one expects that successive perturbations to the maximally mixed state will not completely decay, so that the resulting NESS remains very different from it.
The mechanism we describe shows how the quantum system can be used to rapidly switch between radically different steady states, either by tuning $\delta$ or the transverse field $h_{z}$. A natural question to ask then is what types of pre-determined states can be easily prepared in this fashion? Moreover, can TXY-TASEP be a template from which one can develop such schemes? In this paper we argue that this so-called rapidly mixing aspect is due to both the bulk driving and the XY anisotropy (due to the lack of similar effects for boundary-driven-only or XX systems). An interesting question is whether the XY model gives similar results for other types of bulk driving/dissipation. Other work in this area suggests that this may indeed be a general feature. In Joshi \emph{et al.} \cite{Joshi2013} the XY model with bulk dissipation showed distinctly different behaviours of the steady-state negativity in the different quantum regimes. Moreover, they also observed rapidly decaying correlations in the topological/ferromagnetic regime. This is consistent with a robust Liouvillian gap and thus indicates that XY systems, together with bulk Lindblad operators generally, may prove a promising avenue for rapid state preparation.
\begin{acknowledgments}
We acknowledge Ian Hughes for inspiring discussions in the early stages of this work. K.K., S.D., and G.K. acknowledge Science Foundation Ireland for financial support through Career Development Award 15/CDA/3240. G.K. was also supported by a Schr{\"o}dinger Fellowship. J.K.S. was supported through SFI Principal Investigator Award 16/IA/4524.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction}\label{sec1}
Monitoring the adequacy of stochastic models is undoubtedly of great importance in many areas of application. A change in the dynamics of the model results in misspecifications and consequently influences conclusions drawn from the model output, e.g., forecasts. Since false conclusions generally imply increasing costs, the speed of detection is crucial for the construction of sequential change-point procedures.\medskip\\
As one of the founding fathers of sequential change-point analysis, \citet{1954} introduced the Page CUSUM detector, motivated by certain problems in quality control. In this context the properties of detectors were explored mainly employing constant thresholds. In the literature, a comparison of the speed of detection of this type of procedure is in many cases based on the {\it average} detection delay. Even the optimality criteria that were introduced for such procedures refer to the {\it average} run length (ARL) to false alarm. However, procedures designed with respect to this criterion typically stop with (asymptotic) probability one.
For further reading on optimality criteria for Page's CUSUM we refer to the work of \citet{lorden1971}. In addition the monograph of \citet{BassevilleNikiforov1993} gives an extensive overview of the contributions made since the CUSUM procedure was introduced by \citet{1954}.\medskip\\
\citet{1996} argue that in many contemporary applications, in particular in an economic context, there are no sampling costs under structural stability. They therefore propose an approach, in which the probability of false alarm is controlled. This approach initiated a multitude of contributions to the field of sequential change-point analysis.
E.g., \citet{2004} propose various sequential tests for the stability of a linear model with independent errors. These tests are based on a stopping rule which stops as soon as a CUSUM detector crosses a given boundary function. In a time series regression model, \citet{F2012-art} proposed ---in a similar fashion--- a stopping rule given by the first-passage time of Page's CUSUM over this very boundary function. Based on these procedures we will consider a location model, i.e., we will investigate changes in the mean of certain time series. The latter model is, as a special case, included in the time series regression model of \citet{F2012-art}. To compare the performance of the CUSUM procedure of \citet{2004} (which in the sequel will be referred to as {\it ordinary CUSUM}) and the Page CUSUM, we will investigate the limit distributions of the corresponding stopping times for early as well as for late change scenarios. Until now, only a few contributions have been made regarding the complete asymptotic distribution of the stopping times, which obviously provides more information about the behavior of the stopping times than results on the average run length.\medskip\\
Ordinary CUSUM detectors are defined as the partial sum of, e.g., model residuals from the beginning of the monitoring to the present. These procedures have been studied extensively in the literature, cf., e.g., \citet{2004}, \citet{HKS2007} or \citet{AHHK2006}. For the location model, results on the asymptotic distribution of the ordinary CUSUM procedure in an early change scenario were given by \citet{A2003} and \citet{AH2004}. \citet{AHR2007} then also provided an extension to a linear regression model. To prove these results on the asymptotic normality of the ordinary CUSUM detector, strong conditions on the time of change as well as on the magnitude of change were imposed. In this context we also want to mention the work of \citet{huskova2005}, who introduced monitoring procedures based on quadratic forms of weighted cumulative sums, and \citet{HKPS2011}, who showed the asymptotic normality of the corresponding stopping time under assumptions on the time of change similar to those used in \citet{AH2004} and \citet{AHR2007}.\medskip\\ Building on the work of \citet{AH2004}, we will derive the asymptotic distribution of the Page as well as the ordinary CUSUM procedure under weaker conditions on the time and magnitude of the change. In doing so, we will show that the Page procedure is more robust to the location and the size of the change than ordinary CUSUM procedures. The corresponding limit distributions for the Page CUSUM are novel in this context and provide a classification of the behavior of the stopping time depending on the interplay of the magnitude and location of the change.\medskip\\
The paper is organized as follows. In Section \ref{sec3} we will introduce our model setting and required assumptions and formulate our main results. Section \ref{sec4} contains the results of a small simulation study which compares the ordinary and Page CUSUM in various scenarios. Furthermore, it illustrates particularly how the performance of the procedures depends on the time of change. The proofs of our main results from Section \ref{sec3} are postponed to the appendix.
\section{Asymptotic distribution of the stopping times}\label{sec3}
Let $X_i, i = 1,2,\ldots$ follow the location (or ``change-in-the-mean'') model
\begin{equation} X_i = \begin{cases}
\mu + \varepsilon_i, & i = 1,\ldots, m+{k^{*}} - 1,\\
\mu + \varepsilon_i + \Delta_m, & i= m+{k^{*}},m+{k^{*}}+1,\ldots,
\end{cases}\label{x_i}
\end{equation}
where $\mu$ and $\Delta_m$ are real numbers, $1\leq {k^{*}}<\infty$ denotes the unknown time of change and the $\{\varepsilon_i\}_{i=1,2,\ldots}$ are zero-mean random variables. In this setting, the number $m$ is the length of a so-called {\it training period}, in which the model is assumed to be stable. This {\it noncontamination assumption} (cf. \citet{1996}) is used to estimate the model parameters and the asymptotics considered here are (if not stated otherwise) with respect to $m\to\infty$. Furthermore we want to allow for certain dependence structures like autoregressive or GARCH-type dependencies. For this purpose, we assume that the random variables $\varepsilon_i$ satisfy the following weak invariance principles
\begin{align}
&\left|\sum_{i = 1}^m \varepsilon_i\right| = \Opo{\sqrt{m}}
,\tag{A1}\label{A1}\\
&\text{There is a sequence of Wiener processes }\{W_m(t):t\geq 0\}_{m\geq 1} \text{ and a positive constant }\sigma\tag{A2}\label{A2}\\
&\text{such that}\notag\\
&\sup_{\frac{1}{m}\leq t<\infty}\frac{1}{(mt)^{1/\nu}}\left|\sum_{i=m+1}^{m+mt}\varepsilon_i - \sigma W_m(mt)\right| = \mathcal{O}_P(1) \quad
\text{ with some }\nu>2. \notag
\end{align}
Examples for sequences of random variables satisfying Assumptions \eqref{A1} and \eqref{A2} are given in \citet{AH2004}. Besides i.i.d.\ sequences these include, e.g., martingale difference sequences and certain stationary mixing sequences. \citet{AHHK2006} showed that the class of augmented GARCH processes, which were introduced by \citet{duan1997}, also satisfies Assumptions \eqref{A1} and \eqref{A2}. This class contains most of the conditionally heteroskedastic time series models applied to describe financial time series. A selection of GARCH models from this class can be found in \citet{ABH2006}.\medskip\\
We are interested in testing the hypothesis
\begin{align}
&H_0: \quad\Delta_m = 0\notag\\
\intertext{against the alternatives}
&H_{A,1}:\quad\Delta_m> 0\qquad \text{and}\qquad H_{A,2}:\quad\Delta_m\neq 0.\notag
\end{align}
An (asymptotic) $\alpha$-level sequential test of power one (cf. \citet{1996}) is then given via a stopping rule $\tau = \tau_m$ such that
\begin{align*}
&\lim\limits_{m\rightarrow\infty} P(\tau_m < \infty) = \alpha\quad\textnormal{under }H_0
\intertext{and}
&\lim\limits_{m\rightarrow\infty} P(\tau_m < \infty) = 1\quad\textnormal{under the alternative.}
\end{align*}
For the location model, \cite{AH2004} defined the CUSUM detector of the (centered) $X_i$
\begin{align*}
\Qm{k} &= \sum_{i=m+1}^{m+k}X_i -\frac{k}{m}\sum_{i = 1}^m X_i, \notag\\
\intertext{and as a corresponding stopping time }
\tau^Q_{1,m} &= \min\{k\geq 1: ~\Qm{k} \geq \hat{\sigma}_mc^Q_{1,\alpha} g(m,k)\}.
\end{align*}
Here, $\hat{\sigma}_m$ is a weakly consistent estimator for the constant $\sigma$ from \eqref{A2} (calculated from the data of the training period $1,\ldots,m$),
\begin{equation}
g(m,k) = \sqrt{m}\left(1+k/m\right)\left(k/(k+m)\right)^\gamma\quad\text{for }\gamma\in[0,1/2)\label{g-def}
\end{equation}
and the critical value $c^Q_{1,\alpha} = c^Q_{1,\alpha}(\gamma)$ is a constant derived from the asymptotic distribution of the detector under the null hypothesis. This asymptotic distribution can be obtained by adapting the proof of Theorem 2.1 in \citet{2004} with respect to Assumptions \eqref{A1} and \eqref{A2}. Under $H_0$, we find
\begin{equation*}
\lim_{m\rightarrow\infty} P\lr{\frac{1}{\hat{\sigma}_m}\mathop{\sup}\limits_{1\leq k<\infty}\frac{\Qm{k}}{g(m,k)}> c^Q_{1,\alpha}} = P\lr{\mathop{\sup}\limits_{0< t< 1} \frac{W(t)}{t^\gamma} > c^Q_{1,\alpha}}=\alpha.
\end{equation*}
The parameter $\gamma$ in this construction is a so-called tuning parameter, which allows a practitioner to regulate the sensitivity of the procedure with respect to the time of change. A value of $\gamma$ close to $1/2$ increases detection speed in early change scenarios, while for later changes smaller values lead to a faster detection. For a discussion of this we refer to, e.g., \citet{2004}.\bigskip\\
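The critical value can be approximated by Monte Carlo simulation of the limit functional, discretising the Wiener process on a fine grid and taking the empirical $(1-\alpha)$-quantile of $\sup_{0<t<1}W(t)/t^\gamma$. A minimal sketch (grid and replication sizes are illustrative):

```python
import numpy as np

def critical_value(gamma, alpha, n_grid=10_000, n_rep=20_000, seed=0):
    """Approximate c with P(sup_{0<t<1} W(t)/t^gamma > c) = alpha."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n_grid + 1) / n_grid
    sups = np.empty(n_rep)
    for r in range(n_rep):
        # W(k/n) simulated as a scaled Gaussian random walk.
        W = np.cumsum(rng.standard_normal(n_grid)) / np.sqrt(n_grid)
        sups[r] = np.max(W / t**gamma)
    return np.quantile(sups, 1 - alpha)

print(critical_value(gamma=0.25, alpha=0.05))
```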
\citet{AH2004} showed that for the stopping time $\tau^Q_{1,m}$ under the more restrictive {\it local change} assumption $\Delta_m\to 0$ and for early change alternatives
\begin{align}
{k^{*}} = \mathcal{O}(m^\beta)\quad\text{with some }0\leq\beta<\left(\frac{\frac{1}{2}-\gamma}{1-\gamma}\right)^2,
\label{AH-Ass}
\end{align}
one can find (deterministic) sequences $a_m$ and $b_m$ such that $(\tau^Q_{1,m}-a_m)/b_m$ is asymptotically normal.\medskip\\
Our main results, on the one hand, extend the theorem of \citet{AH2004} for the ordinary CUSUM to a wider range of change scenarios, on the other hand, they provide the corresponding limit distribution for the Page CUSUM. Not only do we weaken the assumptions to allow for a larger class of change-size scenarios, but the range for the value $\beta$ in \eqref{AH-Ass} is extended to the upper limit 1 (see Assumption \eqref{A3} below).
These results permit a comparison of the two procedures on a theoretical basis. From this the superiority of the Page versus the ordinary CUSUM in late change scenarios can be seen, while the similarity of these procedures in early change scenarios is implied. \bigskip\\
The Page CUSUM detector for the one-sided alternative $H_{A,1}$ is given via
\[S_1(m,k) = \Qm{k} - \min\limits_{0\leq i\leq k} \Qm{i}\qquad\textnormal{(with }\Qm{0} = 0)\]
and the corresponding stopping time with $g$ from \eqref{g-def} as
\[\tau^\page_{1,m} = \min\{k\geq 1: ~S_1(m,k){} \geq \hat{\sigma}_m c^{\textnormal{Page}}_{1,\alpha} g(m,k)\}.\]
Here, according to \citet{F2012-art}, the critical value $c^{\textnormal{Page}}_{1,\alpha}=c^{\textnormal{Page}}_{1,\alpha}(\gamma)$ for a given confidence level $\alpha \in (0,1)$ can be chosen such that under $H_0$
\begin{align*}
&\lim_{m\rightarrow\infty} P\lr{\frac{1}{\hat{\sigma}_m}\mathop{\sup}\limits_{1\leq k<\infty}\frac{S_1(m,k)}{g(m,k)}> c^{\textnormal{Page}}_{1,\alpha}} \\
= &P\lr{\mathop{\sup}\limits_{0< t< 1} \ofrac{t^\gamma}\left[W(t)-\mathop{\inf}\limits_{0\leq s\leq t}\lr{\frac{1-t}{1-s}W(s)}\right] > c^{\textnormal{Page}}_{1,\alpha}}\notag\\
=&\alpha.\notag
\end{align*}
A table containing simulated versions of the critical values $c^{\textnormal{Page}}_{1,\alpha}(\gamma)$ for selected values of $\gamma$ and $\alpha$ can be found in \citet{F2012-art}.\medskip\\
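In practice, $S_1(m,k)$ can be updated recursively from the running minimum of $\Qm{k}$, so that monitoring requires only $\mathcal{O}(1)$ work per new observation. A minimal sketch of the resulting monitoring loop (the variance estimator shown is a simple stand-in for $\hat{\sigma}_m$, and all names are illustrative):

```python
import numpy as np

def page_stopping_time(x_train, x_monitor, c, gamma):
    """First boundary crossing of the one-sided Page CUSUM detector."""
    m = len(x_train)
    mean_train = x_train.mean()
    sigma_hat = x_train.std(ddof=1)         # stand-in for sigma_hat_m
    Q, Q_min = 0.0, 0.0                     # Q(m,0) = 0
    for k, x in enumerate(x_monitor, start=1):
        Q += x - mean_train                 # Q(m,k)
        Q_min = min(Q_min, Q)               # min_{0<=i<=k} Q(m,i)
        g = np.sqrt(m) * (1 + k / m) * (k / (k + m))**gamma
        if Q - Q_min >= sigma_hat * c * g:  # S_1(m,k) crosses the boundary
            return k
    return np.inf                           # no alarm within the sample
```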
It can easily be seen that the stopping times $\tau^Q_{1,m}$ and $\tau^\page_{1,m}$ are suitable for alternative $H_{A,1}$. The (two-sided) detectors and stopping rules for $H_{A,2}$ are given via
\begin{align*}
&S_2(m,k) = \max\limits_{0\leq i\leq k} |\Qm{k} - \Qm{i}|,\quad\tau^{\textnormal{Page}}_{2,m} = \min\{k\geq 1: ~S_2(m,k) \geq \hat{\sigma}_m c^{\textnormal{Page}}_{2,\alpha} g(m,k)\}
\intertext{and}
&Q_2(m,k) = |\Qm{k}|,\quad\tau^Q_{2,m} = \min\{k\geq 1: ~Q_2(m,k) \geq \hat{\sigma}_mc^Q_{2,\alpha} g(m,k)\}.
\end{align*}
For a proof of consistency and the derivation of the limit distributions under $H_0$ we refer to \citet{F2012-art} and \citet{2004}.\medskip\\
The asymptotic distribution of the stopping times $\tau^\page_{1,m}$ and $\tau^{\textnormal{Page}}_{2,m}$ will depend strongly on the interplay of $\Delta_m$ and ${k^{*}}$. To specify this, we need the following (technical) assumptions:
\begin{align}
&\quad \textnormal{there exists a }\theta>0\text{ such that }{k^{*}} = \lfloor\theta m^\beta\rfloor\text{ with }0\leq\beta<1,\tag{A3}\label{A3}\\
&\quad\sqrt{m}|\Delta_m|\stackrel{(m\rightarrow\infty)}{\longrightarrow} \infty,\tag{A4} \label{A4}\\
&\quad\Delta_m = \mathcal{O}(1).\tag{A5}\label{A5}
\end{align}
Assumption \eqref{A5} only requires $\Delta_m$ to be bounded and therefore includes {\it local} as well as {\it fixed} alternatives. From Assumption \eqref{A3}, we have a given order of the change-point ${k^{*}}$ in terms of $m$ depending on the exponent $\beta$. As we will see later on, the asymptotic distribution of the stopping times depends crucially on the decay of $\Delta_m$, which is implicitly allowed by Assumptions \eqref{A4} and \eqref{A5}. This dependence can be expressed in terms of the asymptotic behavior of the quantities $|\Delta_m| m^{\gamma-1/2}{k^{*}}^{1-\gamma}$. In view of Assumption \eqref{A3} we distinguish the following three cases:
\begin{align}
& m^{\beta(1-\gamma)-1/2+\gamma}|\Delta_m|\stackrel{(m\rightarrow\infty)}{\longrightarrow} 0, \tag{I}\label{I}\\
& m^{\beta(1-\gamma)-1/2+\gamma}|\Delta_m|\stackrel{(m\rightarrow\infty)}{\longrightarrow} \tilde{C}_1\in (0,\infty),
\tag{II}\label{II}\\
& m^{\beta(1-\gamma)-1/2+\gamma}|\Delta_m|\stackrel{(m\rightarrow\infty)}{\longrightarrow} \infty. \tag{III}\label{III}
\end{align}
\begin{re}\label{re1}
\begin{enumerate}[a)]
\item We note that under Assumptions \eqref{A4} and \eqref{A5} we have \eqref{I} for
\begin{align}
0\leq \beta < \frac{\frac 12 - \gamma}{1-\gamma} \tag{Ia}\label{Ia}.
\end{align}
\item Assumption \eqref{A3} implies that under \eqref{II} we have
\begin{eqnarray}
{|\Delta_m| {m^{\gamma-1/2}}{k^{*}}^{1-\gamma}}\stackrel{(m\rightarrow\infty)}{\longrightarrow} \theta^{1-\gamma}\tilde{C}_1 = C_1\in(0,\infty).\label{re1-eq2}
\end{eqnarray}
\end{enumerate}
\end{re}
The limit distribution in our main results will depend on the given case \eqref{I}, \eqref{II} or \eqref{III}. In order to define this limit distribution, we first introduce for all real $x$
\begin{eqnarray*}
\overline{\Psi}(x) = \begin{cases}
\Phi(x), &\text{under }\eqref{I},\\
P\lr{\mathop{\sup}\limits_{d_1 < t<1}W(t)\leq x},&\text{under} \eqref{II},\\
P\lr{\mathop{\sup}\limits_{0<t<1}W(t)\leq x} = \begin{cases}
0,&x<0,\\
2\Phi(x) - 1,& x\geq 0,
\end{cases}
,&\text{under }\eqref{III}.
\end{cases}
\end{eqnarray*}
Here $\Phi(x)$ denotes the standard normal distribution function and under \eqref{II} we denote by $d_1 = d_1(c)$ the unique solution of
\begin{align}
d_1 = 1 - \frac{c\sigma}{C_1}d_1^{1-\gamma}.\label{d1}
\end{align}
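Since $f(d) = d - 1 + (c\sigma/C_1)\,d^{1-\gamma}$ is strictly increasing on $(0,1)$ with $f(0) = -1$ and $f(1) = c\sigma/C_1 > 0$, the solution of \eqref{d1} is unique and can be computed, e.g., by bisection; a minimal sketch (parameter values in the example call are illustrative):

```python
def solve_d1(c, sigma, C1, gamma, tol=1e-12):
    """Unique root in (0,1) of d - 1 + (c*sigma/C1) * d**(1-gamma) = 0."""
    K = c * sigma / C1
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - 1 + K * mid**(1 - gamma) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(solve_d1(c=2.0, sigma=1.0, C1=1.0, gamma=0.25))
```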
\begin{Th}\label{AD}
Let $\{X_n\}_{n\geq 1}$ be a sequence of random variables according to \eqref{x_i} such that \eqref{A1} -- \eqref{A5} are satisfied, and let $\gamma\in[0,1/2)$.
\begin{enumerate}[(a)]
\item Then, for all real $x$ under $H_{A,1}$ and with $\Psi(x)= 1-\overline{\Psi}(-x)$,
\begin{equation*}
\lim\limits_{m\rightarrow\infty} P\left(\frac{\tau^\page_{1,m} - a_m(c^{\textnormal{Page}}_{1,\alpha})}{b_m(c^{\textnormal{Page}}_{1,\alpha})}\leq x\right) =\Psi(x),
\end{equation*}
where $a_m(c)$ is the unique solution of
\begin{align*}
a_m(c) &= \lr{\frac{\sigma cm^{1/2-\gamma}}{|\Delta_m|}+\frac{{k^{*}}}{(a_m(c))^\gamma}}^{1/(1-\gamma)}\mspace{-30mu}\\
\intertext{and }b_m(c) &= \sigma\sqrt{a_m(c)}|\Delta_m|^{-1}\lr{1-\gamma\lr{1-\frac{{k^{*}}}{a_m(c)}}}^{-1}\mspace{-23mu}.
\end{align*}
\item Additionally it holds that, for all real $x$ under $H_{A,1}$,
\begin{equation*}
\lim\limits_{m\rightarrow\infty} P\left(\frac{\tau^Q_{1,m} - a_m(c^Q_{1,\alpha})}{b_m(c^Q_{1,\alpha})}\leq x\right) = \Phi(x).
\end{equation*}
\end{enumerate}
\end{Th}
\begin{Th}\label{AD2}
Let $\{X_n\}_{n\geq 1}$ be a sequence of random variables according to \eqref{x_i} such that \eqref{A1} -- \eqref{A5} are satisfied and let $\gamma\in[0,1/2)$.
Then, for all real $x$ under $H_{A,2}$, the limit results of Theorem \ref{AD} are retained if $\tau^\page_{1,m}, c^{\textnormal{Page}}_{1,\alpha}, \tau^Q_{1,m}$ and $c^Q_{1,\alpha}$ are replaced with the respective objects $\tau^{\textnormal{Page}}_{2,m}, c^{\textnormal{Page}}_{2,\alpha}, \tau^Q_{2,m}$ and $c^Q_{2,\alpha}$.
\end{Th}
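The normalizing sequences $a_m(c)$ and $b_m(c)$ in Theorems \ref{AD} and \ref{AD2} are only defined implicitly, but can be computed numerically, e.g., by a fixed-point iteration on the defining equation of $a_m(c)$. A minimal sketch (the starting value and the parameter choices are heuristic and illustrative):

```python
import numpy as np

def normalising_sequences(m, c, sigma, delta, k_star, gamma, n_iter=200):
    """Fixed-point iteration for a_m(c); b_m(c) then follows directly."""
    A = sigma * c * m**(0.5 - gamma) / abs(delta)
    a = (A + k_star)**(1 / (1 - gamma))      # heuristic starting value
    for _ in range(n_iter):
        a = (A + k_star / a**gamma)**(1 / (1 - gamma))
    b = sigma * np.sqrt(a) / abs(delta) / (1 - gamma * (1 - k_star / a))
    return a, b

print(normalising_sequences(m=1000, c=2.0, sigma=1.0,
                            delta=1.0, k_star=31, gamma=0.25))
```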
Theorems \ref{AD} and \ref{AD2} show that in early change scenarios such as case \eqref{Ia}, the behavior of the ordinary and Page CUSUM is comparable. This implies that the same limit distribution is obtained using normalizing sequences which differ only by the critical values. Since, clearly, $c^{\textnormal{Page}}_{1,\alpha}>c^Q_{1,\alpha}$, this suggests a slightly superior behavior of the ordinary CUSUM.\medskip\\
Under late change scenarios, however, the difference in the limit distribution shows how the detection delay improves for the Page CUSUM, since the Page limit distribution is stochastically smaller than a standard normal one (cf.\ \eqref{I} compared to \eqref{II} and \eqref{III}). This underlines the detection properties of the Page CUSUM which by construction is intended to show a higher stability in particular under later changes. Furthermore, the construction of the normalizing sequences $a_m(c)$ and $b_m(c)$ identifies the driving factors for the detection delay and therefore adds to the understanding of the dynamics of the different procedures.\medskip\\
In the next section we will find that the large sample results from above can be confirmed empirically. In addition they can help to explain effects observed in the small sample behavior of the considered procedures.
\section{A small simulation study}\label{sec4}
In this section, we present the outcome of a small simulation study to illustrate the theoretical findings from Section \ref{sec3} and compare ordinary and Page CUSUM in this framework. The simulations were carried out for various types of sequences $\{\varepsilon_i\}_{i=1,2,\ldots}$, mainly leading to similar findings. Since the statements of Theorems \ref{AD} and \ref{AD2} yield the behavior of ordinary and Page CUSUM in different scenarios, we will focus here on these scenarios and the transition between them. We will therefore only provide one exemplary setting. A more extensive empirical analysis of the limit behavior with respect to the influence of the different parameters and the dependence structure of the series $\{\varepsilon_i\}_{i=1,2,\ldots}$ would be desirable. Yet this exceeds by far the scope of this paper. However, it should be mentioned that, in particular in small samples, the quality of estimation of the parameter $\sigma$ is crucial to the behavior of the procedures. Inaccurate estimation may lead to increasing sizes under the null hypothesis, which bias the estimated densities of the stopping times as well. \medskip\\
We will focus here on results for $\mu = 0$, using a GARCH(1,1) sequence
\begin{align*}
&\varepsilon_i = \sigma_i z_i,\quad\sigma_i^2 = \overline{\omega} + \overline{\alpha}\varepsilon_{i-1}^2 + \overline{\beta}\sigma_{i-1}^2
\intertext{where $\{z_i\}_{i = 1,2,\ldots}$ are i.i.d. standard normally distributed and the parameters were specified as}
&\overline{\omega} = 0.5,~\overline{\alpha} = 0.2~\text{and}~\overline{\beta} = 0.3,
\end{align*}
which implies (unconditional) unit variance. Due to the uncorrelated error terms, the ordinary least squares estimator for the sample variance is a weakly consistent estimator for the parameter $\sigma$. All simulations were carried out using 5000 replications.\medskip\\
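A minimal sketch generating one such path (training sample plus monitoring period, with GARCH(1,1) errors of unit unconditional variance and a level shift of size $\Delta_m$ at observation $m+{k^{*}}$; all names are illustrative) reads:

```python
import numpy as np

def simulate_path(m, n_monitor, k_star, delta_change, seed=0):
    """GARCH(1,1) errors with a mean change at observation m + k_star."""
    rng = np.random.default_rng(seed)
    omega, a, b = 0.5, 0.2, 0.3              # unit unconditional variance
    n = m + n_monitor
    eps = np.empty(n)
    sig2 = omega / (1 - a - b)               # start at stationary variance
    prev = 0.0
    for i in range(n):
        sig2 = omega + a * prev**2 + b * sig2
        prev = rng.standard_normal() * np.sqrt(sig2)
        eps[i] = prev
    x = eps.copy()
    x[m + k_star - 1:] += delta_change       # change-point at m + k_star
    return x[:m], x[m:]                      # training / monitoring split
```

The resulting training and monitoring samples can then be passed to a stopping-time routine such as the Page sketch above.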
Since the limit behavior of the stopping times depends strongly on the interplay of the model parameters, a rather simple simulation design was chosen to increase the traceability and make the outcome easier to interpret. For all presented results a fixed change alternative with $\Delta_m = 1$ was considered, which implies that the behavior of $m^{\beta(1-\gamma)-1/2+\gamma}\Delta_m$, and thereby the determination of the corresponding case \eqref{I} -- \eqref{III}, depends only on the exponent $\eta = \eta(\gamma,\beta) = \beta(1-\gamma)-1/2+\gamma$. Hence, the cases correspond to $\eta\lesseqqgtr 0$. In all presented figures, the respective density of the limit distribution is plotted as a solid line and denoted by $\Phi(=\Psi^{\text{(I)}}), \Psi^{\text{(II)}}_{d_1}$ and $\Psi^{\text{(III)}}$. Since the one- and two-sided procedures mainly differ in the critical values, we only present the results for the one-sided procedures.\medskip\\
Figures \ref{fig:1-1}--\ref{fig:4} each show (in black lines) estimated density plots of the normalized stopping time
\begin{align*}
\nu^{\textnormal{Page}}_{1,m} &= (\tau^\page_{1,m} - a_m(c^{\textnormal{Page}}_{1,\alpha}))/b_m(c^{\textnormal{Page}}_{1,\alpha})
\end{align*}
in the left column and
\begin{align*}
\nu^Q_{1,m} &= (\tau^Q_{1,m} - a_m(c^Q_{1,\alpha}))/b_m(c^Q_{1,\alpha})
\end{align*}
in the right column.
These illustrate the limit results of Theorem \ref{AD} (a) and (b), respectively. For a direct comparison of ordinary and Page CUSUM the normalized stopping time
\begin{equation*}
\tilde{\nu}_m = (\tau^Q_{1,m} - a_m(c^{\textnormal{Page}}_{1,\alpha}))/b_m(c^{\textnormal{Page}}_{1,\alpha})
\end{equation*}
can be used as a benchmark. In both columns of Figures \ref{fig:1-1}--\ref{fig:4} the corresponding density estimates for $\tilde{\nu}_m$ are provided in gray lines. The rows in Figures \ref{fig:1-1}--\ref{fig:4} correspond to the choice of the tuning parameter $\gamma$, where from top to bottom, $\gamma = 0.00,0.25$ and $0.45$. We will first interpret the findings with respect to the limit results themselves and then conclude the analysis with a comparison of $\tau^\page_{1,m}$ and $\tau^Q_{1,m}$ in the different scenarios.\medskip\\
Since the limit distribution of $\nu^Q_{1,m}$ is standard normal under the assumptions of Theorem \ref{AD}, it is not surprising that we will find the convergence to be relatively robust with respect to the given scenario. This, however, does not hold for the convergence of $\tau^\page_{1,m}$, where the transition from case \eqref{I} to case \eqref{III} strongly influences the convergence even for large sample sizes. We will therefore restrict the following discussion to the behavior of the Page CUSUM.\medskip\\
Figure \ref{fig:1-1} shows the estimated density plots for a fixed change-point, i.e., $\beta=0$ in \eqref{A3}, choosing ${k^{*}} = \theta = 1$. Figure \ref{fig:1-1a} provides the related results for ${k^{*}} = 100$. These belong to case \eqref{I} for all values of $\gamma$ and therefore have the standard normal distribution as a limit (for $\nu^{\textnormal{Page}}_{1,m}$ and $\nu^Q_{1,m}$). For ${k^{*}} = 1$ a fast and clear convergence to the standard normal can be found for $\gamma = 0.00$ and $\gamma = 0.25$. For $\gamma = 0.45$, a deviation is visible due to a slower convergence to the standard normal. This deviation can be explained in terms of the transition from case \eqref{I} to case \eqref{II} (and \eqref{III}) at $\eta = 0$, and thus at $\tilde{\beta} = 1/11$ (for $\gamma = 0.45$). Consider, e.g., the alternative scenario ${k^{*}} = \lfloor m^{0.1}\rfloor$. For $m = 100$ and $m = 1000$ we then find ${k^{*}} = 1$, while for $m = 10,000$ we have ${k^{*}} = 2$. The two scenarios are therefore rather indistinguishable for the given sample sizes. Yet they have different limit distributions. Consequently, a fast convergence to the standard normal can hardly be expected. The influence of the parameter $\theta$ from Assumption \eqref{A3}, which leads to a bias in the limiting behavior, is shown in Figure \ref{fig:1-1a}. While for $\gamma = 0.00$ the convergence to the standard normal distribution can be seen nicely, for the larger values of $\gamma$ it is not obvious. The explanation for this is again the influence of $\gamma$ on $\eta$, e.g., we have for $m = 10,000$ that $k^*_1 = 100m^0$ and $k^*_2 = m^{0.5}$ are two possible alternatives with different limit distributions for larger values of $\gamma$. The parameter $\theta$ was used in earlier works to justify the assumption of an early change scenario. The results from Figure \ref{fig:1-1a} show, however, that this argument has to be handled with caution.\medskip\\
Figure \ref{fig:1-2} shows the density plots for ${k^{*}} = \lfloor m^{0.45}\rfloor$, which implies $\eta<0$ for $\gamma = 0.00$ and $\eta>0$ for $\gamma = 0.25$ and $\gamma = 0.45$. For $\gamma = 0.00$ the convergence to the standard normal distribution is again obvious. For $\gamma = 0.25$ and $\gamma = 0.45$, the convergence away from the standard normal distribution towards $\Psi^{\text{(III)}}$ is also clearly visible for $\nu^{\textnormal{Page}}_{1,m}$. Figure \ref{fig:2} shows results under case \eqref{II}, which corresponds to $\eta = 0$ and $\beta$ taking values $\beta = 1/2, 1/3$ and $1/11$, for $\gamma = 0.00,0.25,0.45$. The limiting densities in the left column show the dependence on $d_1$ and therefore implicitly on $\gamma$. The model setting implies $C_1 = 1$, consequently $d_1$ takes the following values:
\begin{center}
\begin{tabular}{c|c|c|c}
$\gamma$ & 0.00 & 0.25 & 0.45\\\hline
$d_1(\gamma)$ & 0.3714 & 0.1887 & 0.1051
\end{tabular}
\end{center}
Finally, in Figure \ref{fig:4} case \eqref{III} is considered. The change-point is set to ${k^{*}} = \lfloor m^{0.75}\rfloor$ and, in accordance with the theoretical results, only a small influence of $\gamma$ can be seen. The convergence to the limit distribution $\Psi^{\text{(III)}}$ is again obvious in the left column.
\begin{figure}
\centering
\includegraphics[page=1, scale = 0.8]{plot_densities_Fig-1.pdf}
\includegraphics[page=2, scale = 0.8]{plot_densities_Fig-1.pdf}
\includegraphics[page=3, scale = 0.8]{plot_densities_Fig-1.pdf}
\includegraphics[page=4, scale = 0.8]{plot_densities_Fig-1.pdf}
\includegraphics[page=5, scale = 0.8]{plot_densities_Fig-1.pdf}
\includegraphics[page=6, scale = 0.8]{plot_densities_Fig-1.pdf}
\caption{Estimated density plots for ${k^{*}} = \lfloor \theta m^0\rfloor$ with $\theta = 1$ for $\nu^{\textnormal{Page}}_{1,m}$ (left column) and $\nu^Q_{1,m}$ (right column) in black lines. In both columns the estimated density of $\tilde{\nu}_m$ can be found in gray lines. The rows from top to bottom correspond to the tuning parameter $\gamma = 0.00,0.25$ and $0.45$.}
\label{fig:1-1}
\end{figure}
\begin{figure}
\centering
\includegraphics[page=7, scale = 0.8]{plot_densities_Fig-1.pdf}
\includegraphics[page=8, scale = 0.8]{plot_densities_Fig-1.pdf}
\includegraphics[page=9, scale = 0.8]{plot_densities_Fig-1.pdf}
\includegraphics[page=10, scale = 0.8]{plot_densities_Fig-1.pdf}
\includegraphics[page=11, scale = 0.8]{plot_densities_Fig-1.pdf}
\includegraphics[page=12, scale = 0.8]{plot_densities_Fig-1.pdf}
\caption{Estimated density plots for ${k^{*}} = \lfloor \theta m^0\rfloor$ with $\theta = 100$ for $\nu^{\textnormal{Page}}_{1,m}$ (left column) and $\nu^Q_{1,m}$ (right column) in black lines. In both columns the estimated density of $\tilde{\nu}_m$ can be found in gray lines. The rows from top to bottom correspond to the tuning parameter $\gamma = 0.00,0.25$ and $0.45$.}
\label{fig:1-1a}
\end{figure}
\begin{figure}
\centering
\includegraphics[page=1, scale = 0.8]{plot_densities_Fig-2.pdf}
\includegraphics[page=2, scale = 0.8]{plot_densities_Fig-2.pdf}
\includegraphics[page=3, scale = 0.8]{plot_densities_Fig-2.pdf}
\includegraphics[page=4, scale = 0.8]{plot_densities_Fig-2.pdf}
\includegraphics[page=5, scale = 0.8]{plot_densities_Fig-2.pdf}
\includegraphics[page=6, scale = 0.8]{plot_densities_Fig-2.pdf}
\caption{Estimated density plots for ${k^{*}} = \lfloor m^{0.45}\rfloor$ for $\nu^{\textnormal{Page}}_{1,m}$ (left column) and $\nu^Q_{1,m}$ (right column) in black lines. In both columns the estimated density of $\tilde{\nu}_m$ can be found in gray lines. The rows from top to bottom correspond to the tuning parameter $\gamma = 0.00,0.25$ and $0.45$.}
\label{fig:1-2}
\end{figure}
\begin{figure}
\centering
\includegraphics[page=1, scale = 0.8]{plot_densities_Fig-3.pdf}
\includegraphics[page=2, scale = 0.8]{plot_densities_Fig-3.pdf}
\includegraphics[page=3, scale = 0.8]{plot_densities_Fig-3.pdf}
\includegraphics[page=4, scale = 0.8]{plot_densities_Fig-3.pdf}
\includegraphics[page=5, scale = 0.8]{plot_densities_Fig-3.pdf}
\includegraphics[page=6, scale = 0.8]{plot_densities_Fig-3.pdf}
\caption{Estimated density plots for $\nu^{\textnormal{Page}}_{1,m}$ (left column) and $\nu^Q_{1,m}$ (right column) in black lines. In both columns the estimated density of $\tilde{\nu}_m$ can be found in gray lines. The rows from top to bottom correspond to the tuning parameter $\gamma = 0.00,0.25$ and $0.45$. In each row the change-point ${k^{*}}$ was set to ${k^{*}} = \lfloor m^\beta\rfloor$ such that $\beta = (1/2-\gamma)/(1-\gamma)$. The limit distribution in the left column is therefore determined by case \eqref{II}.}
\label{fig:2}
\end{figure}
\begin{figure}
\centering
\includegraphics[page=1, scale = 0.8]{plot_densities_Fig-4.pdf}
\includegraphics[page=2, scale = 0.8]{plot_densities_Fig-4.pdf}
\includegraphics[page=3, scale = 0.8]{plot_densities_Fig-4.pdf}
\includegraphics[page=4, scale = 0.8]{plot_densities_Fig-4.pdf}
\includegraphics[page=5, scale = 0.8]{plot_densities_Fig-4.pdf}
\includegraphics[page=6, scale = 0.8]{plot_densities_Fig-4.pdf}
\caption{Estimated density plots for ${k^{*}} = \lfloor m^{0.75}\rfloor$ for $\nu^{\textnormal{Page}}_{1,m}$ (left column) and $\nu^Q_{1,m}$ (right column) in black lines. In both columns the estimated density of $\tilde{\nu}_m$ can be found in gray lines. The rows from top to bottom correspond to the tuning parameter $\gamma = 0.00,0.25$ and $0.45$.}
\label{fig:4}
\end{figure}
\paragraph{Comparison of $\tau^\page_{1,m}$ and $\tau^Q_{1,m}$.}
Now we compare the stopping rules $\tau^\page_{1,m}$ and $\tau^Q_{1,m}$ explicitly. Aside from the large sample behavior provided in Theorem \ref{AD}, the simulations yield further information on the small sample behavior. The discussion of the convergence given above helps to explain the findings of this comparison.\medskip\\
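For reproducibility, the two stopping rules can be simulated along the following lines. This is a schematic sketch only: a mean change in an i.i.d.\ standard normal sequence is assumed, the boundary function $g(m,k)=\sqrt{m}\,(1+k/m)\left(k/(k+m)\right)^{\gamma}$ of \citet{2004} is used, and the precise detectors and critical values have to be taken from the definitions used above.
\begin{verbatim}
# Schematic Monte Carlo sketch of the two stopping rules (assumptions:
# mean change of size delta at k* in an i.i.d. N(0,1) sequence; ordinary
# CUSUM of the centered observations and a Page-type running-extremum
# variant; the exact detectors must follow the definitions in the text).
import numpy as np

rng = np.random.default_rng(0)

def stopping_times(m, kstar, delta, gamma, c, n_max=10**5):
    xbar = rng.normal(size=m).mean()        # training-sample mean
    e = rng.normal(size=n_max)
    e[kstar - 1:] += delta                  # change at observation k*
    s = np.cumsum(e - xbar)                 # CUSUM of centered obs.
    k = np.arange(1, n_max + 1)
    g = np.sqrt(m) * (1 + k / m) * (k / (k + m)) ** gamma
    q = np.abs(s) / g                       # ordinary CUSUM detector
    s0 = np.concatenate(([0.0], s[:-1]))    # partial sums S_0,...,S_{k-1}
    p = np.maximum(s - np.minimum.accumulate(s0),
                   np.maximum.accumulate(s0) - s) / g   # Page variant
    hit = lambda d: int(np.argmax(d > c)) + 1 if np.any(d > c) else np.inf
    return hit(p), hit(q)                   # (tau_Page, tau_Q)
\end{verbatim}
Kernel density estimates of the delays over many repetitions, normalized as in Theorem \ref{AD}, then yield plots analogous to Figures \ref{fig:1-1}--\ref{fig:4}.\medskip\\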
Since the normalizations in Theorem \ref{AD} allow for a direct comparison of the relative values, we will not discuss the absolute values of the delay times and, in particular, the influence of the parameter $\gamma$ on these. Table \ref{am-bm-table} shows the values of the normalizing sequences $a_m(c^{\textnormal{Page}}_{1,\alpha})$, $b_m(c^{\textnormal{Page}}_{1,\alpha})$ and $a_m(c^Q_{1,\alpha})$, $b_m(c^Q_{1,\alpha})$ for $\alpha = 0.1$ in the considered scenarios. These confirm the findings of \citet{F2012-art} and \citet{2004}; we therefore refer to those works for a further discussion of this topic.\medskip\\
To compare the stopping times $\tau^\page_{1,m}$ and $\tau^Q_{1,m}$ directly, we consider the same normalization for both, i.e., we compare $\nu^{\textnormal{Page}}_{1,m}$ and $\tilde{\nu}_m$. Thus, the left columns of Figures \ref{fig:1-1}--\ref{fig:4} provide the basis for this comparison.\medskip\\
As expected, Figure \ref{fig:1-1} shows that for a change-point at the beginning of the monitoring, the ordinary CUSUM detects slightly faster than the Page CUSUM. This is mainly due to the difference of the critical values. In Figure \ref{fig:1-1a} it can be seen how the parameter $\theta$ influences the detection of the two procedures. While both have the same limit distribution, the Page CUSUM shows a faster detection in small as well as large samples. Additionally, for $\gamma > 0$ the shape of the density estimates of $\nu^{\textnormal{Page}}_{1,m}$ reveals the sensitivity of the procedure to the transition between the different cases.\medskip\\
Figure \ref{fig:1-2} particularly highlights the dependence of the limit distribution on the parameter $\gamma$. While for $\gamma = 0$ only a slightly faster detection of the Page CUSUM can be observed, for larger values of $\gamma$ the effect is more evident, due to the difference in the limit distribution.\medskip\\
The decreasing difference between the two procedures with increasing $\gamma$ in Figure \ref{fig:2} is due to the change-point specification. For $\gamma = 0$ the change-point ${k^{*}}$ (for $m = 100,1000,10000$) takes the values ${k^{*}} = 10,31,100$, for $\gamma = 0.25$ we have ${k^{*}} = 4,9,21$, and for $\gamma = 0.45$ finally ${k^{*}} = 1,1,2$. The observed behavior is therefore not surprising.\medskip\\
Finally, Figure \ref{fig:4} clearly shows how the Page CUSUM improves compared to the ordinary CUSUM, the later a change occurs. Even for small training samples the density estimates of the Page CUSUM dominate those of the ordinary CUSUM and the effect is reinforced with increasing sample size.\medskip\\
To summarize, we find that the simulations in general confirm the theoretical results. They even explain why in certain (biased) early change scenarios, despite the same standard normal limit for both procedures, the Page CUSUM outperforms the ordinary CUSUM. Furthermore, the similarity of the Page and ordinary CUSUM in early change scenarios, on the one hand, and the superiority of the Page CUSUM in late change scenarios, on the other hand, underline why in general (i.e., without additional assumptions on the change-point) the usage of the Page CUSUM procedure can be recommended.
\begin{table}[hptb]
\begin{center}
\begin{tabular}{cccccccccc}
&&&\multicolumn{3}{c}{$\tau^\page_{1,m}$}&&\multicolumn{3}{c}{$\tau^Q_{1,m}$}\\
\cmidrule(lr){3-6}\cmidrule(lr){7-10}
${k^{*}}$&$\gamma$&$m$&100&1000&10000&$m$&100&1000&10000\\
\cmidrule(lr){1-6}\cmidrule(lr){7-10}
1&0.00&$a_m$&~~17.92&~~54.52&~170.24&$a_m$&~~17.45&~~53.01&~165.49\\
&&$b_m$&~~~4.23&~~~7.38&~~13.05&$b_m$&~~~4.18&~~~7.28&~~12.86\\
\cmidrule(lr){2-6}\cmidrule(lr){7-10}
&0.25&$a_m$&~~12.23&~~24.84&~~52.00&$a_m$&~~11.55&~~23.39&~~48.86\\
&&$b_m$&~~~4.54&~~~6.56&~~~9.55&$b_m$&~~~4.41&~~~6.36&~~~9.26\\
\cmidrule(lr){2-6}\cmidrule(lr){7-10}
&0.45&$a_m$&~~~9.54&~~11.37&~~13.63&$a_m$&~~~8.57&~~10.18&~~12.15\\
&&$b_m$&~~~5.17&~~~5.72&~~~6.33&$b_m$&~~~4.86&~~~5.37&~~~5.94\\
\cmidrule(lr){1-6}\cmidrule(lr){7-10}
100&0.00&$a_m$&~116.92&~153.52&~269.24&$a_m$&~116.45&~152.01&~264.49\\
&&$b_m$&~~10.81&~~12.39&~~16.41&$b_m$&~~10.79&~~12.33&~~16.26\\
\cmidrule(lr){2-6}\cmidrule(lr){7-10}
&0.25&$a_m$&~119.87&~136.51&~168.42&$a_m$&~118.90&~134.68&~164.87\\
&&$b_m$&~~11.42&~~12.52&~~14.44&$b_m$&~~11.36&~~12.40&~~14.24\\
\cmidrule(lr){2-6}\cmidrule(lr){7-10}
&0.45&$a_m$&~127.43&~131.18&~135.49&$a_m$&~125.31&~128.75&~132.70\\
&&$b_m$&~~12.50&~~12.82&~~13.20&$b_m$&~~12.31&~~12.61&~~12.96\\
\cmidrule(lr){1-6}\cmidrule(lr){7-10}
$\lfloor m^{0.45}\rfloor$&0.00&$a_m$&~~23.92&~~75.52&~232.24&$a_m$&~~23.45&~~74.01&~227.49\\
&&$b_m$&~~~4.89&~~~8.69&~~15.24&$b_m$&~~~4.84&~~~8.60&~~15.08\\
\cmidrule(lr){2-6}\cmidrule(lr){7-10}
&0.25&$a_m$&~~19.64&~~50.47&~126.72&$a_m$&~~18.94&~~48.92&~123.32\\
&&$b_m$&~~~5.28&~~~8.27&~~12.88&$b_m$&~~~5.17&~~~8.11&~~12.65\\
\cmidrule(lr){2-6}\cmidrule(lr){7-10}
&0.45&$a_m$&~~18.51&~~40.34&~~92.96&$a_m$&~~17.41&~~38.75&~~90.53\\
&&$b_m$&~~~5.97&~~~7.98&~~11.28&$b_m$&~~~5.71&~~~7.73&~~11.02\\
\cmidrule(lr){1-6}\cmidrule(lr){7-10}
$\lfloor m^{0.5}\rfloor$&0.00&$a_m$&~~26.92&~~84.52&~269.24&$a_m$&~~26.45&~~83.01&~264.49\\ &&$b_m$&~~~5.19&~~~9.19&~~16.41&$b_m$&~~~5.14&~~~9.11&~~16.26\\
\cmidrule(lr){1-6}\cmidrule(lr){7-10}
$\lfloor m^{1/3}\rfloor$&0.25&$a_m$&~~16.01&~~34.97&~~77.32&$a_m$&~~15.33&~~33.49&~~74.11\\
&&$b_m$&~~~4.93&~~~7.26&~~10.75&$b_m$&~~~4.80&~~~7.08&~~10.49\\
\cmidrule(lr){1-6}\cmidrule(lr){7-10}
$\lfloor m^{1/11}\rfloor$&0.45&$a_m$&~~~9.54&~~11.37&~~15.30&$a_m$&~~~8.57&~~10.18&~~13.81\\
&&$b_m$&~~~5.17&~~~5.72&~~~6.43&$b_m$&~~~4.86&~~~5.37&~~~6.04\\
\cmidrule(lr){1-6}\cmidrule(lr){7-10}
$\lfloor m^{0.75}\rfloor$&0.00&$a_m$&~~47.92&~230.52&1169.24&$a_m$&~~47.45&~229.01&1164.49\\ &&$b_m$&~~~6.92&~~15.18&~~34.19&$b_m$&~~~6.89&~~15.13&~~34.12\\
\cmidrule(lr){2-6}\cmidrule(lr){7-10}
&0.25&$a_m$&~~46.70&~218.04&1109.61&$a_m$&~~45.90&~216.03&1104.35\\
&&$b_m$&~~~7.46&~~15.50&~~34.15&$b_m$&~~~7.37&~~15.39&~~34.04\\
\cmidrule(lr){2-6}\cmidrule(lr){7-10}
&0.45&$a_m$&~~48.81&~216.02&1090.74&$a_m$&~~47.33&~213.06&1084.14\\
&&$b_m$&~~~8.36&~~16.00&~~34.31&$b_m$&~~~8.14&~~15.80&~~34.12
\end{tabular}
\caption{
Values of the normalizing sequences $a_m(c^{\textnormal{Page}}_{1,\alpha})$, $b_m(c^{\textnormal{Page}}_{1,\alpha})$ (left) and $a_m(c^Q_{1,\alpha})$, $b_m(c^Q_{1,\alpha})$ (right) for $\alpha = 0.1$ in the scenarios considered in Figures \ref{fig:1-1}--\ref{fig:4}.
}
\label{am-bm-table}
\end{center}
\end{table}
\section{Introduction}
The impressive progress in atomic quantum gases can be largely
attributed to technical advances in the control of atomic quantum
states utilizing electromagnetic interactions. While laser cooling
with near-resonant light narrows the atomic momentum distribution, forced evaporation of trapped atoms favors spatial confinement into
the quantum degenerate regime. Since the first successful experiment
on atomic Bose-Einstein condensation, or the controlled production
of many bosonic atoms into the same quantum ground state of a
trapping potential, the field of atomic quantum gases has blossomed
into one of the leading frontiers in physics, attracting ever increasing
interest. The ultimate goal along this line of thought
is to control the state of
an arbitrary assembly of an interacting many body system, and either use
it to simulate other physical systems or to perform quantum
computations.
Significant efforts have been directed toward the above ambitious goal,
with more recent endeavors targeting the generation, control, and
study of cold molecules. These endeavors include important successes
such as the BCS to BEC cross-over due to tunable interactions
induced through magnetic (B-field) Feshbach resonances and the
production of quantum degenerate heteronuclear molecules in
specific low lying ro-vibrational states of
their electronic ground states \cite{ni08}. The latter advance
is especially illuminating, highlighting the
prospects for observing and utilizing
anisotropic and long range interactions from the permanent electric dipoles.
Molecule quantum fluids bring in new possibilities due to their
richer internal degrees of freedom, such as alignment and rotation
\cite{friedrich95}, even after their hyperfine spins like those of
atomic spinor quantum gases are neglected. In this short note, we
propose a simple mechanism to generate a vortex state, relying on
the internal degrees of freedom of a heteronuclear molecule, through
adiabatically flipping the axial bias of an external E-field
Ioffe-Pritchard trap (IPT) \cite{pritchard83}. The
physics of our proposal can be analogously mapped onto the
successful protocol first proposed \cite{isoshima00} and
demonstrated for magnetically trapped atomic condensates
\cite{leanhardt02,leanhardt03,isoshima07}. Our theory hopefully will
lead to parallel experiments that begin to address new opportunities
afforded by the molecular internal degrees of freedom.
This paper is organized as follows. First, we describe our model for
dc E-field trapping of cold heteronuclear molecules based on the interaction
between their permanent electric dipoles with a spatially varying static
E-field IPT. This is analogous to the operation of the
famous IPT for magnetic dipoles with
an inhomogeneous B-field \cite{pritchard83}. The heteronuclear diatomic molecule is
modeled as a 3D rigid rotor \cite{friedrich95}. We then discuss the eigen structure of
a heteronuclear molecule inside a dc E-field and identify states capable
of being trapped by a local E-field minimum and their associated quantum
numbers. Next, the rotational properties of such eigen states are
studied, which helps to illustrate the proposed vortex pump mechanism based
on the flipping of the axial bias E-field
\cite{isoshima00,leanhardt02,leanhardt03,isoshima07,mottonen07,xu08}. Finally we conclude.
It is worthwhile to present this study in detail because the analogy
between E-field and B-field, and between the molecular E-dipole and the
atomic B-dipole, is not a simple one-to-one map.
The magnetic dipole of an atom contains both an electronic and
a nuclear component, respectively, proportional to the electronic spin ($\vec S$) and
the nuclear spin ($\vec I$). Its projection onto the local B-field is
approximately proportional to the projection of its hyperfine spin $\vec F=\vec S+\vec I$
when the B-field is weak. In contrast, the
molecular dipole is a fixed constant and pointed along the molecule
axis connecting the two nuclei. The internal
rotational angular momentum of the molecule stays in the plane
perpendicular to the molecular axis in the absence of an E-field. Despite
these differences, our study finds that the Isoshima, {\it et al.},
protocol \cite{isoshima00,leanhardt02,leanhardt03},
originally developed for B-field trapped atoms or
molecules, remains effective for heteronuclear molecules in an
E-field IPT. Many of these comparisons and analogies will
become clear from the materials presented in the third and fourth
sections.
\section{Trapping a heteronuclear molecule with static electric field}
We consider a diatomic molecule composed of two different atomic
nuclei and described in terms of their center of mass $\vec R$ and
relative coordinate $\vec r$. Neglecting the nuclear and electronic
spins and their associated interactions, or assume we consider an eigen
state of the above internal degrees of freedom, the remaining
Hamiltonian reads
\begin{eqnarray}
\mathcal{H}&=&\frac{\mathbf{P}_{R}^2}{2M}+\frac{\mathbf{P}^2_{r}}{2\mu}
-\vec{D}(\vec r)\cdot\vec{E}(\vec{R})
\nonumber\\
&=&\frac{\mathbf{P}_{R}^2}{2M}-\frac{\hbar^2}{2\mu r^2}\frac{\partial}{\partial r}
\left( r^2\frac{\partial}{\partial r} \right)+\frac{\mathbf{J}^2(\theta_r,\phi_r)}{2\mu r^2}
-\vec{D}(\vec r)\cdot\vec{E}(\vec{R}),
\label{ham}
\end{eqnarray}
where $\vec{E}(\vec{R})$ denotes the inhomogeneous static E-field,
which is slowly varying over the molecule size of $r$, and
$\vec{D }(\vec r)$ is the permanent electric dipole moment operator
for this specific electronic and internal spin eigen-state \cite{friedrich95,cohen84,micheli07}.
$M=M_1+M_2$ and $\mu=M_1M_2/(M_1+M_2)$ are, respectively, the total
and the reduced mass of the two atoms making up the molecule.
We adopt the Born-Oppenheimer approximation to study the effective
trapping of the molecule center of mass motion. Within this
approximation, the molecular wave function for $\vec R$ and $\vec r$
is decomposed into the form
$\Psi(\vec{r},\vec{R})=\psi(\vec{r})\Phi(\vec{R})$, and we find
\begin{eqnarray}
\fl\qquad\left[-\frac{\hbar^2}{2\mu r^2}\frac{\partial}{\partial r}
\left( r^2\frac{\partial}{\partial r} \right)+\frac{\mathbf{J}^2(\theta_r,\phi_r)}{2\mu r^2}
-\vec{D}(\vec r)\cdot\vec{E}(\vec{R})\right]\psi_{M_J}(\vec{r})
&=&\mathcal{V}_{M_J}(\vec{R})\psi_{M_J}(\vec{r}), \label{rot}\\
{\hskip 112pt \left(\frac{\mathbf{P}_{R}^2}{2M}+\mathcal{V}_{M_J}(\vec{R})\right)
\Phi_n(\vec{R})}&=&E_{M_J}^{(n)}\Phi_n(\vec{R}).
\end{eqnarray}
Now, for a specific vibrational state, Eq.~(\ref{rot})
above reduces to the simple form of a 3D rotator in a dc E-field
with a rotational constant $B=\langle\hbar^2/2\mu r^2\rangle$. If we
further assume the local E-field direction to be the quantization
z-axis, then the Schr\"odinger equation for the internal part of the
diatomic molecule becomes
\begin{eqnarray}
\left[BJ^2-DE(\vec R)\cos\theta_r\right]\psi_{M_J}(\theta_r,\phi_r)
=\mathcal{V}_{M_J}(\vec R)\psi_{M_J}(\theta_r,\phi_r),
\end{eqnarray}
where we have neglected the dependence of the permanent dipole moment on the
internuclear distance $\vec r$.
Some intuition can be gained for weak E-fields if a simple perturbation
theory is adopted as is done in Ref. \cite{micheli07}. We take the
unperturbed Hamiltonian to be $H_0=BJ^2$ with
$H_0|J,M_J\rangle=E_{JM_J}|J,M_J\rangle$, where the eigen-energies are
$E_{JM_J}=BJ(J+1)$ and the eigen-functions $|J,M_J\rangle$ are spherical
harmonics, i.e., functions of $\hat r$.
The dipole interaction $-DE(\vec R)\cos\theta_r$
is treated as a perturbation.
To first order the wave function becomes
\begin{eqnarray}
\psi_{M_J}&=&|J,M_J\rangle+\frac{DE}{B}\frac{1}{2(J+1)}
\sqrt{\frac{(J+1)^2-M_J^2}{(2J+1)(2J+3)}}\,|J+1,M_J\rangle \nonumber\\
&& -\frac{DE}{B}\frac{1}{2J}\sqrt{\frac{J^2-M_J^2}{(2J-1)(2J+1)}}\,|J-1,M_J\rangle,
\label{psifo}
\end{eqnarray}
and the corresponding eigen-energy to second order in the external E-field
becomes
\begin{eqnarray}
\mathcal{V}_{M_J}&=&BJ(J+1)+\frac{D^2E^2}{B}\frac{1-3M_J^2/J(J+1)}{2(2J-1)(2J+3)}.
\end{eqnarray}
For this model system, $M_J$ is a conserved quantity.
Assuming the center of mass motion is adiabatic with respect to the
internal state, the above eigen energy from the internal state
$\mathcal{V}_{M_J}$ then clearly acts as an effective potential
due to its dependence on $\vec R$. In particular, we find any state with
$1-3M_J^2/J(J+1)>0$ is a weak field seeking state, {\it i.e.}, free
space E-field traps analogous to the B-field IPT can be constructed.
These states, of course, are meta-stable, and trapping is
possible because of the dynamic stability from the Larmor-like
precession of the permanent electric dipole along the direction of
a local E-field. On the other hand, the strong field seeking states with
$1-3M_J^2/J(J+1)<0$ cannot be used to spatially confine
molecules with static fields, because it is impossible to construct a local
E-field maximum.
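As a concrete illustration, consider the $J=2$ manifold used below:
\begin{eqnarray}
1-\frac{3M_J^2}{J(J+1)} = 1,\ \frac{1}{2},\ -1
\qquad \mbox{for} \quad |M_J|=0,\,1,\,2\,,\nonumber
\end{eqnarray}
so that the $M_J=0,\pm1$ states are weak field seeking, while the
$M_J=\pm2$ states are strong field seeking.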
The significant progress gained over the years
in trapping and manipulating neutral atoms with static B-fields \cite{TB}
provides important enabling technologies for the experimental successes of
cold atom physics research. A variety of B-field traps have been implemented
for various applications \cite{pritchard83,TB}.
Despite the analogy described earlier
of the B-field confinement of magnetic dipoles with
the E-field confinement of electric dipoles (of polar molecules),
experimental efforts at trapping and manipulating polar molecules
with electrostatic E-fields became an active research topic
only in recent years, as the interest in cold atoms broadened into
cold molecules. Meijer's group pioneered the research on cooling and
trapping of molecules with (3D quadrupole) E-fields \cite{GM00}.
They also demonstrated the guiding of
cold molecules with a torus shaped (2D) E-field hexapole \cite{GM01}.
With suitable modifications, an analogous IPT of E-field can be
constructed for polar molecules \cite{pritchard83,TB,ab,GM}.
In Fig. \ref{fig1}, we show the calculated dc Stark shift for
the $M_J=1$ state of heteronuclear molecule KRb
that is connected to the $J=2$ manifold when the E-field is absent.
We adopt the
parameters of its ground state $X^1\Sigma(\nu=0)$ as
measured recently \cite{ni08} with the electric dipole
moment $D=1.36\times 10^4$\,$\rm Hz\cdot m/V$ and the
rotational constant $B=1.1139\, \rm GHz$.
The calculation is accomplished by a numerical diagonalization of the
Hamiltonian over a truncated basis of common eigen states for
operators $\mathbf{J}^2$ and $J_z$. The diagonalization
is carried out in the subspace of a conserved
$M_J$ \cite{cohen84}, and the procedure is found to be rapidly
convergent. At the E-field strength we consider, the
perturbation result $\psi_{M_J}$ in Eq. (\ref{psifo}) with $M_J=1$ and $J=2$
is found to be an excellent approximation. This particular state,
as illustrated in Fig.~\ref{fig1}, is clearly a weak field seeking state.
The maximum trap height can become as high as $173\,\rm MHz$.
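The numerical diagonalization just described can be sketched in a few
lines (an illustration only; the basis is truncated at $J_{\max}$, the
off-diagonal elements are the matrix elements of $\cos\theta_r$ appearing
in Eq.~(\ref{psifo}), and convergence with $J_{\max}$ should be checked):
\begin{verbatim}
# Minimal sketch of the Stark map in a fixed-M_J subspace
# (B in Hz and D in Hz m/V as quoted in the text).
import numpy as np

B, D = 1.1139e9, 1.36e4

def stark_levels(E, MJ=1, Jmax=40):
    J = np.arange(abs(MJ), Jmax + 1)
    H = np.diag(B * J * (J + 1.0))
    # <J+1,MJ|cos(theta_r)|J,MJ>
    c = np.sqrt(((J[:-1] + 1.0)**2 - MJ**2)
                / ((2*J[:-1] + 1.0) * (2*J[:-1] + 3.0)))
    H -= D * E * (np.diag(c, 1) + np.diag(c, -1))
    return np.linalg.eigvalsh(H)

# Stark shift of the M_J = 1 level connected to J = 2 (the second-lowest
# level of the M_J = 1 subspace); the field E is given in V/m:
for E in (1e5, 2e5, 3e5):
    print(E, stark_levels(E)[1] - stark_levels(0.0)[1])
\end{verbatim}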
\begin{figure}[H]
\centering
\includegraphics[width=4.5in]{fig1.eps}
\caption{The Stark shift of the $M_J=1$ state for the polar molecule KRb in the
electronic ground state $X^1\Sigma(\nu=0)$, for realistic electrostatic
E-trap field strength \cite{GM,ab}. }
\label{fig1}
\end{figure}
\section{Rotational transformation of the wave function $|J,M_J\rangle$}
Before we extend the Isoshima, {\it et al.}, protocol \cite{isoshima00}
from B-field trapped atoms to E-field trapped polar
molecules, we will briefly review the rotational properties of the
spherical harmonics $|J,M_J\rangle$. Under a rotation
along a unit vector $\hat{n}$ by an angle $\theta$,
the wave function changes to
\begin{eqnarray}
R(\theta\hat{n})|J,M\rangle=e^{-i\theta\hat{n}\cdot\mathbf{J}}|J,M\rangle.
\end{eqnarray}
Expressed in terms of the Euler angles, the most general 3D rotation
takes the following form
\begin{eqnarray}
R(\alpha,\beta,\gamma)=\exp(-i\alpha J_z)\exp(-i\beta J_y)\exp(-i\gamma J_z).
\end{eqnarray}
According to the representation theory of rotation, we find
\begin{eqnarray}
R(\alpha,\beta,\gamma)|J,M\rangle&=&\sum\limits_{M'}D^{J}_{M'M}(\alpha,\beta,\gamma)
|J,M'\rangle\nonumber\\
&=&\sum\limits_{M'}\exp[-iM'\alpha]d^{J}_{M'M}(\beta)\exp[-iM\gamma]|J,M'\rangle,
\end{eqnarray}
where
\begin{eqnarray}
d^{J}_{M'M}(\beta)&=&\langle J,M'|\exp[-i\beta J_y]|J,M\rangle\nonumber\\
&=&[(J+M)!(J-M)!(J+M')!(J-M')!]^{1/2}\nonumber\\
&&\times\sum\limits_{\nu}[(-1)^{\nu}(J-M'-\nu)!(J+M-\nu)! (\nu+M'-M)!\nu!]^{-1}\nonumber\\
&&\times\left(\cos\frac{\beta}{2}\right)^{2J+M-M'-2\nu}\left(-\sin\frac{\beta}{2}\right)^{M'-M+2\nu}.
\end{eqnarray}
The summation index $\nu$ is restricted such that all factorial
arguments stay non-negative. For example, when $\beta=\pi$ and
$\cos\frac{\beta}{2}=0$, for $d^{J}_{M'M}$ to be nonzero, we need to
let $2J+M-M'-2\nu=0$. This gives
\begin{eqnarray}
(J-M'-\nu)!=\left[ -\frac{1}{2}(M'+M) \right]!,\\
(J+M-\nu)!=\left[ \frac{1}{2}(M'+M) \right]!.
\end{eqnarray}
In order to ensure that both $J-M'-\nu$ and $J+M-\nu$ are
non-negative, we must have $M'+M=0$. As a result we find
\begin{eqnarray}
d^{J}_{M'M}(\pi)=(-1)^{J-M'}\delta_{M',-M},
\end{eqnarray}
which gives
\begin{eqnarray}
R(\alpha,\pi,\gamma)|J,M\rangle=(-1)^{J+M}\exp[iM\alpha]\exp[-iM\gamma]|J,-M\rangle.
\end{eqnarray}
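The $\beta=\pi$ identity above is straightforward to verify numerically;
a minimal sketch using SymPy's Wigner $d$-function reads:
\begin{verbatim}
# Check d^J_{M'M}(pi) = (-1)^(J-M') delta_{M',-M} for J = 2 (sketch).
from sympy import pi, simplify
from sympy.physics.quantum.spin import Rotation

J = 2
for M in range(-J, J + 1):
    for Mp in range(-J, J + 1):
        val = simplify(Rotation.d(J, Mp, M, pi).doit())
        assert val == ((-1)**(J - Mp) if Mp == -M else 0)
print("check passed for J =", J)
\end{verbatim}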
We want to stress at this point that the angles $\alpha$, $\beta$,
and $\gamma$ are parameters used to specify arbitrary rotations, and
they have nothing to do with the internal state angles $\theta_r$
and $\phi_r$. To understand the physics behind the vortex generation protocol,
which is directly related to the geometrical properties of the
static E-field, we first consider several special cases of
unit vectors as rotation axes: for
$\hat{n}=\hat{X}\cos\phi+\hat{Y}\sin\phi$,
$\hat{n}_{\perp}=-\hat{X}\sin\phi+\hat{Y}\cos\phi$, and
$\hat{n}_q=\hat{X}\sin\phi+\hat{Y}\cos\phi$, respectively, with $\hat{X}$ and
$\hat{Y}$ the unit vectors along the Cartesian $x$- and
$y$-directions. $\theta$ and $\phi$ denote the polar and
azimuthal angles of the center of mass coordinate $\vec R$. The last
case, $\hat{n}_q$, corresponds simply to the direction of an IPT
E-field in the plane $Z=0$. Using the
relationship derived above, we find that
\begin{eqnarray}
R_{\hat{n}}(\pi)|J,M_J\rangle&=&\exp(-i\pi\hat{n}\cdot \mathbf{J})|J,M_J\rangle\nonumber\\
&=&R_{\hat{z}}(\phi-\pi/2)R_{\hat{y}}(\pi)R_{\hat{z}}(\pi/2-\phi)|J,M_J\rangle\nonumber\\
&=&(-1)^{J}\exp[2iM_J\phi]|J,-M_J\rangle, \\
R_{\hat{n}_{\perp}}(\pi)|J,M_J\rangle&=&\exp(-i\pi\hat{n}_{\perp}\cdot \mathbf{J})|J,M_J\rangle\nonumber\\
&=&R_{\hat{z}}( (\pi/2+\phi)-\pi/2)R_{\hat{y}}(\pi)R_{\hat{z}}(\pi/2-(\pi/2+\phi))|J,M_J\rangle\nonumber\\
&=&(-1)^{J+M_J}\exp[2iM_J\phi]|J,-M_J\rangle,\\
R_{\hat{n}_q}(\pi)|J,M_J\rangle&=&\exp(-i\pi\hat{n}_q\cdot \mathbf{J})|J,M_J\rangle\nonumber\\
&=&R_{\hat{z}}( (\pi/2-\phi)-\pi/2)R_{\hat{y}}(\pi)R_{\hat{z}}(\pi/2-(\pi/2-\phi))|J,M_J\rangle\nonumber\\
&=&(-1)^{J+M_J}\exp[-2iM_J\phi]|J,-M_J\rangle.\label{rotipt}
\end{eqnarray}
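These phase relations can be verified directly by exponentiating the
angular momentum matrices within a given $J$ multiplet; a minimal sketch
checking Eq.~(\ref{rotipt}) for $J=2$, $M_J=1$ reads:
\begin{verbatim}
# Verify exp(-i pi n_q.J)|J,M> = (-1)^(J+M) exp(-2iM phi)|J,-M> (sketch).
import numpy as np
from scipy.linalg import expm

J, M = 2, 1
m = np.arange(J, -J - 1, -1)                     # basis |J,m>, m = J..-J
cp = np.sqrt(J * (J + 1) - m[1:] * (m[1:] + 1))  # <m+1|J_+|m>
Jp = np.diag(cp, 1)
Jx, Jy = (Jp + Jp.T) / 2, (Jp - Jp.T) / 2j
psi = np.zeros(2 * J + 1, complex)
psi[list(m).index(M)] = 1.0
for phi in (0.3, 1.2, 2.5):
    nqJ = np.sin(phi) * Jx + np.cos(phi) * Jy    # n_q . J
    out = expm(-1j * np.pi * nqJ) @ psi
    phase = out[list(m).index(-M)] * (-1) ** (J + M)
    print(np.angle(phase) % (2*np.pi), (-2 * M * phi) % (2*np.pi))
\end{verbatim}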
The above transformation properties show that by
enforcing a rotation of the internal state, the wave function gains
an appropriate topological phase specified by the azimuthal angle
$\phi$ of the center of mass (molecule) coordinate.
Provided the internal state flipping of
the permanent dipole moment and the center of mass motion are both
adiabatic, the above results show that different vortical phases are generated
as in the protocol of Isoshima, {\it et al.}, for B-field trapped
atomic spinor condensates \cite{isoshima00,leanhardt02,leanhardt03}.
\section{Vortex creation in a condensate of heteronuclear molecules}
In this section, we will confirm numerically the vortex creation
protocol for a condensate of heteronuclear molecules in an E-field
IPT. Basically, we will simulate the axial E-field bias flip and
check for the adiabatic conditions of the associated internal state.
There are two main points we need to watch for. First, the proper
vortical phase structure must be created on the condensate during
the time evolution of the E-field. Second, adiabaticity must be
maintained during the flip of the internal state of a polar
molecule.
Considering the first question, assume the initial state
corresponds to a polar molecule locally aligned along the field
direction of an E-field IPT, which close to the $z$-axis points
essentially along the axial $z$-direction; its local internal state
can then be described approximately as the
eigen state of the system in the local E-field. Since
$M_J$ is a conserved quantity, this eigen state $\psi_{M_J}(\theta_r,\phi_r)$
can be expanded in a series of $|J,M_J\rangle$ over $J$,
\begin{eqnarray}
\psi_{M_J}(\theta_r,\phi_r)=\sum\limits_J C_J|J,M_J\rangle.
\label{psimj}
\end{eqnarray}
Adiabatically flipping the E-field bias then corresponds
to nothing but a rotation of the initial state about
the unit vector $\hat{n}_q(\theta,\phi)$ in the transverse plane
by an angle of $\pi$. According to the rotation properties
discussed above, this flipping of the bias gives
\begin{eqnarray}
R_{\hat{n}_q}(\pi)\psi_{M_J}(\theta_r,\phi_r)=\sum\limits_J (-1)^{J+M_J}
C_J\exp[-2iM_J\phi]|J,-M_J\rangle,
\label{rotpsimj}
\end{eqnarray}
which clearly gains a vortex phase with winding number $-2M_J$.
Turning to the second question concerning the adiabaticity, we can
simply propagate the initial wave function $\psi_{M_J}$ in real time,
simulating the complete process of the bias flip. To test the level
of adiabaticity and the validity of Eq. (\ref{rotpsimj}), we
consider points $(X_0, Y_0, 0)$ along a circle in the $X$-$Y$ plane.
The absolute value for the E-field along the circle is then a
constant, although their local directions are all different. In
fact, for the E-field IPT being considered here, the E-field can be
written as $\vec{E}=E'(X,-Y,L)$ with $L$ a constant. Choosing the
weak field seeking state $\psi_{M_J=1}$ with $J=2$ at $\vec{E}=3\,
\rm kV/cm\ \hat{Z}$ as considered in Fig. \ref{fig1}, we simulate
the flip protocol over various time intervals. The time evolution of
the bias E-field is taken to be the same as that in the optimal
B-field protocol \cite{xu08}, and the E-field in the transverse
$X$-$Y$ plane is treated as a constant.
\begin{figure}[H]
\centering
\includegraphics[width=4.5in]{fig2.eps}
\caption{The topological phase gained after numerically flipping the
bias E-field, which is calculated by taking the inner product of the
time evolved state after flip with the nominally down polarized
($M_J=-1$) state. $\fullsquare$ denotes numerically computed data,
and the red solid curve represents a linear fit. Both axes are
dimensionless in units of $2\pi$. }
\label{fig2}
\end{figure}
In our simulation, we take the E-field at $t=0$ as $E'L=3\, \rm
kV/cm$ and $E'\sqrt{X_0^2+Y_0^2}=0.3\, \rm kV/cm$. After the flip of
the E-bias over $10^5/B$ ($\sim 0.1$ ms at $B=1.1139\, \rm GHz$), which is
a reasonably
fast time scale, we calculate the inner product of the final state
with the eigen-state $\psi_{M_J=-1}$ (of $J=2$) when the E-field
$\vec{E}=-3 \rm\,kV/cm\,\hat{Z}$ is pointed in the downward
direction. Using these parameters, the adiabaticity is found to be
fully satisfied, and the state overlap is essentially 100\% except
for the extra topological phase gained as shown in Fig. \ref{fig2}.
A numerical fit gives a slope of the phase versus $\phi$
exactly equal to $-2$, which confirms the high fidelity
operation of our vortex pump proposal.
Before concluding, we point out that, like the system of
magnetically trapped atomic spinors inside a B-field IPT
\cite{peng07,xu08}, the quantity of $J_z-L_z$ is found to commute
with the Hamiltonian $B\mathbf{J}^2-\mathbf{D}\cdot E'(X,-Y,L)$,
where $E'$ is the spatial gradient of the E-field IPT, and $L_z$ is
the mechanical angular momentum of the heteronuclear molecule. $\vec
J$ is the rotational angular momentum of the molecule. If the flip
of the E-bias is indeed adiabatic, the respective quantum numbers
conserve the combination $M_J-L_z$. After going through the flip,
the internal rotational state is changed from $M_J$ to $-M_J$, which
then must be accompanied by an increase of $2M_J$ to its mechanical
angular momentum. Thus we see the appearance of a vortex state.
In conclusion, by analogy with B-field trapping of neutral atoms
with magnetic dipoles, we study and identify weak field trapping
states of polar molecules inside a spatially inhomogeneous dc
E-field. Further, we suggest that an effective and efficient vortex
pump protocol can be envisioned for condensates of polar molecules
\cite{mottonen07,xu08}, based on the flipping of the axial bias
field, originally suggested \cite{isoshima00} and experimentally
demonstrated for atomic spinor condensates inside a B-field IPT
\cite{leanhardt02,leanhardt03,isoshima07}. We have confirmed that
the E-field bias flip remains effective, and the vortex state created
maintains a vorticity proportional to the $M_J$ quantum number of
the trapped molecule state. When a diatomic molecule is placed
inside a homogeneous E-field, $M_J$ is a good quantum number. The
weak field trapping states can be associated with very large values
of $M_J$, thus the amount of vorticity created could become very
significant even after a single bias flip. This could possibly open
up a practical approach to reach the rapid rotation limit of
atomic/molecular quantum gases.
More generally, we find that this protocol for vortex creation
remains effective if the diatomic molecule is taken as a symmetric
top \cite{herzberg}. Interestingly we find the E-bias flip also
works for the strong field trapping states provided the polar
molecules are confined through other means not relying on its
permanent electric dipole. Finally, other improvements to the
B-field bias flip protocol \cite{isoshima00}, such as
those developed for cyclically operated continuous vortex pumping schemes can
be analogously extended to the case of heteronuclear molecules
\cite{mottonen07,xu08}.
\section{Acknowledgement}
This work is supported by US NSF, NSF of China under Grant
10640420151, and NKBRSF of China under Grants 2006CB921206 and
2006AA06Z104.
\section*{References}
\section{Introduction and model}
During the last three decades there has been an increasing number of papers
devoted to the study of random graphs and complex networks, in view of the fact
that they describe systems in many knowledge areas: from maths and physics to
finance and social sciences, passing through biology and chemistry \cite{BA99,S01,AB02,B13}.
In particular, some of those works report studies of spectral and eigenfunction
properties of complex networks; see for example Refs.~\cite{DR93a,ZX00,DR93b,GGS05,SKHB05,BJ07a,BJ07b,F02,DGMS03,GT06a,GT06b,EK09,HS11,AMM}.
That is, since complex networks composed of nodes and the bonds joining them can
be represented by sparse matrices, it is quite natural to ask about the spectral
and eigenfunction properties of such {\it adjacency} matrices.
Then, in fact, studies originally motivated on physical systems represented by
Hamiltonian sparse random matrices \cite{RB88,RD90,FM91,EE92,JMR01} can be directly
applied to complex networks.
In contrast to the numerous works devoted to study spectral and
eigenfunction properties of complex netwoks, to our knowledge,
just a few focus on some of their scattering and transport properties
\cite{MPB07a,MPB07b,XLL08,SRS10,PW13}.
So, in the present work we study numerically several statistical properties
of the scattering matrix and the electronic transport across disordered
tight-binding networks described by sparse real symmetric matrices.
We stress that we use a scattering approach to electronic transport; see
for example \cite{MK04}. In addition, we concentrate on the case of a small
number of attached leads (or terminals), each of them supporting one open channel.
We also note that tight-binding complex networks have also been studied in
Refs.~\cite{DR93a,ZX00,GT06a,GT06b}.
The tight-binding random networks we shall study here are described by the
tight-binding Hamiltonian
\begin{equation}
\label{TBH}
H = \sum^N_{n=1} h_{nn} | n \rangle \langle n| +
\sum^N_{n=1} \sum^N_{m=1} h_{nm} \left( | n \rangle \langle m| + |m \rangle \langle n| \right) \ ,
\end{equation}
where $N$ is the number of nodes or vertices in the network, $h_{nn}$ are on-site
potentials and $h_{nm}$ are the hopping integrals between sites $n$ and $m$.
Then we choose $H$ to be a member of an ensemble of $N\times N$ sparse
real symmetric matrices whose nonvanishing elements are statistically independent random
variables drawn from a normal distribution with zero mean $\left\langle h_{nm} \right\rangle=0$
and variance $\left\langle |h_{nm}|^2 \right\rangle=(1+\delta_{nm})/2$. As in Refs.~\cite{AMM,JMR01},
here we define the sparsity of $H$, $\alpha$, as the fraction of the $N(N-1)/2$ nonvanishing
off-diagonal matrix elements. I.e., $\alpha$ is the network average connectivity.
Thus, our random network model corresponds to an ensemble of adjacency
matrices of Erd\H{o}s-R\'enyi--type random graphs \cite{ER59,AB02,note1}.
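For concreteness, the ensemble can be sketched as follows (an
illustration only; note that the diagonal elements are always present,
while each off-diagonal pair is occupied with probability $\alpha$):
\begin{verbatim}
# Sparse real symmetric H with <h_nm> = 0 and
# <|h_nm|^2> = (1 + delta_nm)/2 (sketch).
import numpy as np

rng = np.random.default_rng()

def sample_H(N, alpha):
    mask = np.triu(rng.random((N, N)) < alpha, k=1)
    H = np.where(mask, rng.normal(0.0, np.sqrt(0.5), (N, N)), 0.0)
    H = H + H.T                                      # off-diag. var. 1/2
    H[np.diag_indices(N)] = rng.normal(0.0, 1.0, N)  # diagonal var. 1
    return H
\end{verbatim}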
Notice that with the prescription given above our network model displays {\it
maximal disorder} since averaging over the network ensemble implies average over
connectivity and over on-site potentials and hopping integrals.
With this averaging procedure we get rid off any individual network characteristic
(such as {\it scars} \cite{SK03} which in turn produce topological resonances
\cite{GSS13}) that may lead to deviations from random matrix theory (RMT) predictions
which we use as a reference. I.e., we choose this network model to retrieve well known
random matrices in the appropriate limits: a diagonal random matrix is obtained
for $\alpha=0$ when the nodes in the network are isolated, while a member of the
Gaussian Orthogonal Ensemble (GOE) is recovered for $\alpha=1$ when the network
is fully connected.
However, it is important to add that the {\it maximal disorder} we consider is not
necessary for a graph/network to exhibit universal RMT behavior. In fact:
(i) It is well known that tight-binding cubic lattices with on-site disorder (known
as the three-dimensional Anderson model \cite{3DAM}), forming networks with fixed
regular connectivity having a very dilute Hamiltonian matrix, show RMT behavior in
the {\it metallic phase} (see for example Refs.~\cite{metallic1,metallic2}).
(ii) It has been demonstrated numerically and theoretically that graphs
with fixed connectivity show spectral \cite{spectral,TS01} and scattering
\cite{PW13,scattering} universal properties corresponding to RMT predictions,
where in this case the disorder is introduced either by choosing random bond lengths
\cite{spectral,PW13,scattering} (which is a parameter not present in our network model)
or by randomizing the vertex-scattering matrices \cite{TS01}
(somehow equivalent to consider random on-site potentials).
Moreover, some of the RMT properties of quantum graphs have already been tested
experimentally by the use of small ensembles of small microwave networks with fixed
connectivity \cite{Sirko}.
(iii) Complex networks having specific topological properties (such as small-world
and scale-free networks, among others), where randomness is
applied only to the connectivity, show signatures of RMT behavior in their
spectral and eigenfunction properties \cite{BJ07a,GT06a,MPB07a}.
The organization of this paper is as follows.
In the next section we define the scattering setup as well as the scattering
quantities under investigation and provide the corresponding analytical
predictions from random scattering-matrix theory for systems with
time-reversal symmetry. These analytical results will be used as a
reference along the paper. In Section III we analyze the average
scattering matrix elements $\left\langle |S_{mn}|^2 \right\rangle$, the conductance probability distribution
$w(T)$, the average conductance $\left\langle T \right\rangle$, the shot noise power $P$, and
the elastic enhancement factor $F$ for
tight-binding networks as a function of $N$ and $\alpha$.
We show that all scattering and transport quantities listed above are invariant
once $\xi\equiv\alpha N$ is fixed.
Moreover, we propose a heuristic and universal relation between
$\left\langle |S_{mn}|^2 \right\rangle$, $\left\langle T \right\rangle$, and $P$ and the disorder parameter $\xi$.
Finally, Section IV is left for conclusions.
\section{The scattering setup and RMT predictions}
We open the isolated samples, defined above by the tight-binding random network
model, by attaching $2M$ semi-infinite single channel leads. Each lead is
described by the one-dimensional semi-infinite tight-binding Hamiltonian
\begin{equation}
\label{leads}
H_{\tbox{lead}}=\sum^{-\infty}_{n=1} (| n \rangle \langle n+1| + |n+1 \rangle \langle n|) \ .
\end{equation}
Using standard methods one can write the scattering
matrix ($S$-matrix) in the form \cite{MW69}
\begin{equation}
\label{smatrix}
S(E) =
\left(
\begin{array}{cc}
r & t' \\
t & r'
\end{array}
\right)
={\bf 1}-2i \sin (k)\, {\cal W}^{\,T} (E-{\cal H}_{\rm eff})^{-1} {\cal W} \ ,
\end{equation}
where $t$, $t'$, $r$, and $r'$ are $M\times M$ transmission and reflection
matrices; ${\bf 1}$ is the $2M\times 2M$ unit matrix, $k=\arccos(E/2)$ is
the wave vector supported in the leads, and ${\cal H}_{\rm eff}$ is
an effective non-hermitian Hamiltonian given by
\begin{equation}
\label{Heff}
{\mathcal{H}}_{\rm eff}=H- e^{ik} {\cal W}{\cal W}^{\,T} \ .
\end{equation}
Here, ${\cal W}$ is an $N\times 2M$ matrix that specifies the positions
of the attached leads to the network. However, in the random network model
we are studying here all nodes are equivalent; so, we attach the $2M$
leads to $2M$ randomly chosen nodes. The elements of ${\cal W}$ are equal
to zero or
$\epsilon$, where $\epsilon$ is the coupling strength. Moreover, assuming
that the wave vector $k$ does not change significantly at the center of the
band, we set $E=0$ and neglect the energy dependence of
${\mathcal{H}}_{\rm eff}$ and $S$.
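A minimal numerical sketch of Eqs.~(\ref{smatrix}) and (\ref{Heff}) at
$E=0$, where $k=\pi/2$ so that $\sin k=1$ and ${\rm e}^{ik}=i$, reads
(using \verb|sample_H| and \verb|rng| from the sketch above; the lead
positions are drawn at random as described):
\begin{verbatim}
# S-matrix at the band center (sketch).
def s_matrix(H, M, eps):
    N = H.shape[0]
    W = np.zeros((N, 2 * M))
    ports = rng.choice(N, size=2 * M, replace=False)  # 2M random nodes
    W[ports, np.arange(2 * M)] = eps
    Heff = H - 1j * (W @ W.T)                   # H_eff with e^{ik} = i
    G = np.linalg.inv(-Heff)                    # (E - H_eff)^{-1}, E = 0
    return np.eye(2 * M) - 2j * (W.T @ G @ W)   # sin(k) = 1

def conductance(S, M):
    t = S[M:, :M]             # transmission block t (first M leads "in")
    return np.trace(t @ t.conj().T).real        # T = Tr(t t^dagger)
\end{verbatim}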
Since in the limit $\alpha=1$ the random network model reproduces the
GOE, in that limit we expect the
statistics of the scattering matrix, Eq.~(\ref{smatrix}), to be determined
by the Circular Orthogonal Ensemble (COE) which is the appropriate scattering matrix
ensemble for {\it internal} systems $H$ with time reversal symmetry. Thus,
below, we provide the
statistical results for the $S$-matrix and the transport quantities
to be analyzed in the following sections, assuming the orthogonal symmetry.
In all cases, we also assume the absence of direct processes (also known as
perfect coupling condition), i.e., $\langle S \rangle=0$.
We start with the average of the $S$-matrix elements.
It is known that
\begin{equation}
\label{Saa}
\left\langle |S_{mn}|^2 \right\rangle_{\tbox{COE}} = \frac{1+\delta_{mn}}{2M+1} \ ,
\end{equation}
where $\left\langle \cdot \right\rangle$ means ensemble average over the COE.
Within a scattering approach to the electronic transport, once the
scattering matrix is known one can compute the dimensionless
conductance \cite{Landauer}
\[
T={\mbox{Tr}}(tt^\dagger)=\sum_m\sum_n|t_{mn}|^2
\]
and its distribution $w(T)$.
For $M=1$, i.e. considering two single-channel leads attached to
the network, $w(T)$ is given by
\begin{equation}
\label{wofTM1}
w(T)_{\tbox{COE}} = \frac{1}{2\sqrt{T}} \ ,
\end{equation}
while for $M=2$,
\begin{equation}
w(T)_{\tbox{COE}} = \left\{
\label{wofTM2}
\begin{array}{ll}
3T/2 \ , & 0<T<1 \\
3\left( T-2\sqrt{T-1}\right)/2 \ , & 1<T<2 \\
\end{array} \right. \ . \\
\end{equation}
For arbitrary $M$, the prediction for the average value of $T$ is
\begin{equation}
\label{avT}
\left\langle T \right\rangle_{\tbox{COE}} = \frac{M}{2}-\frac{M}{2(2M+1)} \ .
\end{equation}
For the derivation of the expressions above see for example
Ref.~\cite{MK04}.
A related transport quantity is the shot noise power
\[
P = \left\langle {\mbox{Tr}}(tt^\dagger - tt^\dagger tt^\dagger) \right\rangle \ ,
\]
which as a function of $M$ reads \cite{evgeny}
\begin{equation}
\label{P}
P_{\tbox{COE}} = \frac{M(M+1)^2}{2(2M+1)(2M+3)} \ .
\end{equation}
Another scattering quantity of interest, which measures {\it cross-section}
fluctuations, is the elastic enhancement factor \cite{EF}
\begin{equation}
\label{F}
F = \frac{\left\langle |S_{mm}|^2 \right\rangle}{\left\langle |S_{mn}|^2 \right\rangle} \ ,
\end{equation}
that in the RMT limit becomes
\begin{equation}
\label{FCOE}
F_{\tbox{COE}}=2 \ .
\end{equation}
In the following sections we focus on $\left\langle |S_{mn}|^2 \right\rangle$,
$\left\langle T \right\rangle$, $P$, and $F$ for the tight-binding random network model.
\section{Results}
In all cases below we set the coupling strength $\epsilon$ such that
\begin{equation}
\label{avS}
\left\langle S \right\rangle \equiv \frac{1}{2M} \sum_{mn} | \left\langle S_{mn} \right\rangle |
\end{equation}
is approximately zero in order to compare our results, in the limit
$\alpha\to 1$, with the RMT predictions reviewed above, see
Eqs.~(\ref{Saa}-\ref{P}) and (\ref{FCOE}).
To find the perfect coupling condition we plot $\left\langle S \right\rangle$ vs.~$\epsilon$
for fixed $N$ and $\alpha$ and look for the minimum.
As an example, in Fig.~\ref{Fig1} we plot $\left\langle S \right\rangle$ vs.~$\epsilon$
for random networks having $N=50$ nodes with $\alpha=0.2$,
0.44, and 0.99. Notice that for $\epsilon=0$, $\left\langle S \right\rangle=1$; i.e., since there is no
coupling between the network and the leads, there is total reflection
of the waves incoming from the leads. In contrast, for any $\epsilon>0$
the waves do interact with the random network and $\left\langle S \right\rangle<1$.
It is clear from Fig.~\ref{Fig1} that the curves $\left\langle S \right\rangle$ vs.~$\epsilon$ behave
similarly. In fact we identify two regimes:
When $0<\epsilon<\epsilon_0$, $\left\langle S \right\rangle$ decreases with $\epsilon$; while
for $\epsilon>\epsilon_0$, $\left\langle S \right\rangle$ increases with $\epsilon$.
Since $\epsilon_0$ is the coupling strength value at which $\left\langle S \right\rangle\approx 0$,
we set $\epsilon=\epsilon_0$ to achieve the perfect coupling condition.
In addition, as in previous studies \cite{MMV09,AMV09}, here we found
that the curves $\left\langle S \right\rangle$ vs.~$\epsilon$ are well fitted by the expression
\begin{equation}
\label{Svse}
\left\langle S \right\rangle = \frac{C_0}{1+(C_1\epsilon)^{\pm C_2}} - C_3 \ ,
\end{equation}
where $C_i$ are fitting constants and the plus and minus signs correspond
to the regions $0<\epsilon<\epsilon_0$ and $\epsilon>\epsilon_0$, respectively.
With the help of Eq.~(\ref{Svse}) we can find $\epsilon_0$ with a relatively
small number of data points.
Moreover, we heuristically found that
\begin{equation}
\epsilon_0 \approx (\alpha \cdot N)^{1/4} \ .
\end{equation}
Then, we use this prescription to compute $\epsilon_0$ which is the
value for the coupling strength that we set in all the calculations below.
In the following, all quantities and histograms were computed by the use
of $10^6$ random network realizations for each combination of $N$ and $\alpha$.
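The procedure just described can be sketched as follows (a small
ensemble is shown for illustration only; in the actual calculations
$10^6$ realizations were used):
\begin{verbatim}
# Locate eps_0 by minimizing <S>, Eq. (avS), over a grid of couplings.
def avg_S(N, alpha, M, eps, reps=500):
    acc = np.zeros((2 * M, 2 * M), complex)
    for _ in range(reps):
        acc += s_matrix(sample_H(N, alpha), M, eps)
    return np.abs(acc / reps).sum() / (2 * M)

eps_grid = np.linspace(0.5, 4.0, 15)
avS = [avg_S(50, 0.44, 1, e) for e in eps_grid]
print("eps_0 ~", eps_grid[int(np.argmin(avS))])  # compare (alpha N)^(1/4)
\end{verbatim}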
\begin{figure}[t]
\centerline{\includegraphics[width=7cm]{Fig1.eps}}
\caption{Average $S$-matrix, as defined in Eq.~(\ref{avS}), for tight-binding
random networks having $N=50$ nodes as a function of the coupling strength
$\epsilon$. We found $\epsilon_0\approx 1.76$, 2.15, and 2.63 for $\alpha=0.2$,
0.44, and 0.99, respectively. Dashed lines are fittings of Eq.~(\ref{Svse}) to
the data. Each point was computed by averaging over $10^6$
random network realizations.}
\label{Fig1}
\end{figure}
\subsection{Average scattering matrix elements}
First we consider the case $M=1$, where the $S$-matrix is a $2\times 2$
matrix. In Fig.~\ref{Fig2}(a) we plot the ensemble average of the elements
$|S_{11}|^2$ (average reflexion) and $|S_{12}|^2$ (average transmission)
as a function of the connectivity $\alpha$ for three different network sizes.
The COE limit, Eq.~(\ref{Saa}), expected for $\alpha\to 1$ is
also plotted (dot-dashed lines) as reference.
Notice that for all three network sizes the behavior is similar: there is a
strong $\alpha$-dependence of the average $S$-matrix elements driving
the random network from a localized or insulating regime [$\left\langle |S_{11}|^2 \right\rangle \approx 1$ and
$\left\langle |S_{12}|^2 \right\rangle \approx 0$; i.e., the average conductance is close
to zero] for $\alpha\to 0$, to a delocalized or metallic regime
[$\left\langle |S_{11}|^2 \right\rangle \approx 2/3$ and $\left\langle |S_{12}|^2 \right\rangle \approx 1/3$;
i.e., RMT results are already recovered] for $\alpha \to 1$.
Moreover, the curves $\left\langle |S_{mn}|^2 \right\rangle$ vs.~$\alpha$ are displaced along
the $\alpha$-axis: the larger the network size $N$ the smaller the value of
$\alpha$ needed to approach the COE limit.
We now recall that the parameter
\begin{equation}
\xi \equiv \alpha \times N
\label{xi}
\end{equation}
was shown to fix (i) spectral properties of sparse random matrices \cite{JMR01},
(ii) the percolation transition of Erd\H{o}s-R\'enyi random graphs, see for
example Ref.~\cite{AB02}, where $\xi$ is known as the average degree;
and (iii) the nearest-neighbor energy level spacing distribution and the entropic
eigenfunction localization length of sparse random matrices \cite{AMM}.
So, it makes sense to explore the dependence of $\left\langle |S_{mn}|^2 \right\rangle$ on $\xi$.
Then, in Fig.~\ref{Fig2}(b) we plot again $\left\langle |S_{11}|^2 \right\rangle$ and
$\left\langle |S_{12}|^2 \right\rangle$ but now as a function of
$\xi$. We observe that curves for different $N$ now fall on top of a universal
curve.
\begin{figure}[t]
\centerline{\includegraphics[width=7cm]{Fig2.eps}}
\caption{(Color online) Average $S$-matrix elements $\left\langle |S_{11}|^2 \right\rangle$ and
$\left\langle |S_{12}|^2 \right\rangle$ for tight-binding random networks having $N$ nodes as
a function of (a) $\alpha$ and (b) $\xi$, for $M=1$.
The dot-dashed lines correspond to 2/3 and 1/3; the RMT prediction
for $\left\langle |S_{11}|^2 \right\rangle$ and $\left\langle |S_{12}|^2 \right\rangle$, respectively,
given by Eq.~(\ref{Saa}). Red dashed lines in (b) are
Eqs.~(\ref{S11x}) and (\ref{S12x}) with $\delta \approx 0.198$.
Error bars in this and the following figures are not shown since they are much
smaller than symbol size.}
\label{Fig2}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=7cm]{Fig3.eps}}
\caption{(Color online) Average $S$-matrix elements $\left\langle |S_{mm}|^2 \right\rangle$
[with $mm=11$, 22, 33, and 44] and $\left\langle |S_{mn}|^2 \right\rangle$ [with $mn=12$,
23, 34, and 41] for tight-binding random networks having $N=200$
nodes as a function of $\xi$ for (a) $M=2$ and (b) $M=3$.
The dot-dashed lines correspond to the RMT prediction for
$\left\langle |S_{mm}|^2 \right\rangle$ and $\left\langle |S_{mn}|^2 \right\rangle$; see Eq.~(\ref{Saa}).
Red dashed lines are Eqs.~(\ref{Smmx}) and (\ref{Smnx}) with
(a) $\delta \approx 0.237$ and (b) $\delta \approx 0.242$.}
\label{Fig3}
\end{figure}
Moreover, we have found that the universal behavior of $\left\langle |S_{11}|^2 \right\rangle$
and $\left\langle |S_{12}|^2 \right\rangle$, as a function of $\xi$, is well described by
\begin{eqnarray}
\label{S11x}
\left\langle |S_{11}|^2 \right\rangle & = & 1 - \left\langle |S_{12}|^2 \right\rangle \ , \\
\label{S12x}
\left\langle |S_{12}|^2 \right\rangle & = & \frac{1}{3}
\left[ \frac{1}{1+(\delta \xi)^{-2}} \right] \ ,
\end{eqnarray}
where $\delta$ is a fitting parameter. Eq.~(\ref{S11x}) is a
consequence of the unitarity of the scattering matrix,
$SS^\dagger =\mathbf 1$, while the factor 1/3 in Eq.~(\ref{S12x})
comes from Eq.~(\ref{Saa}) with $M=1$.
In Fig.~\ref{Fig2}(b) we also include Eqs.~(\ref{S11x}) and (\ref{S12x})
(red dashed lines) and observe that they reproduce very well the corresponding
numerical results.
In fact, we have to add that Eqs.~(\ref{S11x}) and
(\ref{S12x}) also work well for other random
matrix models showing a metal-insulator phase transition \cite{AMV09}.
For $M>1$ we observe the same scenario as for $M=1$: all $S$-matrix elements
undergo a localization-delocalization transition as a function of $\xi$.
See Fig.~\ref{Fig3} where we plot some of the average $S$-matrix elements
for $M=2$ and 3. Moreover, we were able to generalize Eqs.~(\ref{S11x}) and
(\ref{S12x}) to any $M$ as
\begin{eqnarray}
\label{Smmx}
\left\langle |S_{mm}|^2 \right\rangle & = & 1 - (2M-1)\left\langle |S_{mn}|^2 \right\rangle \ , \\
\label{Smnx}
\left\langle |S_{mn}|^2 \right\rangle & = & \left\langle |S_{mn}|^2 \right\rangle_{\tbox{COE}}
\left[ \frac{1}{1+(\delta \xi)^{-2}} \right] \ .
\end{eqnarray}
Then, in Fig.~\ref{Fig3} we also plot Eqs.~(\ref{Smmx}) and (\ref{Smnx})
and observe very good correspondence with the numerical data.
We also note that the fitting parameter $\delta$ slightly depends on $M$.
Finally we want to remark that concerning $\left\langle |S_{mn}|^2 \right\rangle$, the RMT
limit, expected for $\alpha\to 1$ or $\xi\to N$, is already recovered for
$\xi\ge 30$.
\begin{figure}[t]
\centerline{\includegraphics[width=7.5cm]{Fig4.eps}}
\caption{(Color online) Conductance probability distribution $w(T)$
for tight-binding random networks having $N$ nodes, in the case $M=1$, for some
values of $\xi$. Dashed lines are $w(T)_{\tbox{COE}}$; the RMT
prediction for $w(T)$ given by Eq.~(\ref{wofTM1}).}
\label{Fig4}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=7.5cm]{Fig5.eps}}
\caption{(Color online) Conductance probability distribution $w(T)$
for tight-binding random networks having $N$ nodes, in the case $M=2$, for some
values of $\xi$. Dashed lines are $w(T)_{\tbox{COE}}$; the RMT
prediction for $w(T)$ given by Eq.~(\ref{wofTM2}).}
\label{Fig5}
\end{figure}
\subsection{Conductance and shot noise power}
Now we turn to the conductance statistics.
In Figs.~\ref{Fig4} and \ref{Fig5} we present conductance probability
distributions $w(T)$ for $M=1$ and $M=2$, respectively. In both cases
we include the corresponding RMT predictions. We report histograms for
four values of $\xi$ and three network sizes.
From these figures, it is clear that $w(T)$ is invariant once $\xi$ is
fixed; i.e., once $\xi$ is set to a given value, $w(T)$ does not depend
on the size of the network.
We also recall that in the limit $\alpha\to 1$, $w(T)$ is expected to
approach the RMT predictions of Eqs.~(\ref{wofTM1}) and (\ref{wofTM2}).
However, we observe that $w(T)$ is already well described by
$w(T)_{\tbox{COE}}$ once $\xi\ge 30$.
We observe an equivalent scenario for $w(T)$ when $M>2$ (not shown here).
We now increase further the number of attached leads.
Then, in Figs.~\ref{Fig6}(a) and \ref{Fig7}(a) we plot
the average conductance $\left\langle T \right\rangle$ and the shot noise power $P$ for
tight-binding random networks having $N=200$ nodes, for several values
of $\xi$ with $M\in [1,5]$ (we recall that for $M=5$, ten single-channel
leads are attached to the networks).
It is clear from these plots that changing $\xi$ from small ($\xi< 1$) to
large ($\xi\gg 1$) values produces a transition from localized to
delocalized behavior in the scattering properties of random networks.
That is, (i) for $\xi<0.5$, $\left\langle T \right\rangle \approx 0$ and $P \approx 0$;
and (ii) for $\xi\ge 30$, $\left\langle T \right\rangle$ and $P$ are well given by the
corresponding RMT predictions given by Eqs.~(\ref{avT}) and (\ref{P}),
respectively.
Equivalent plots are obtained (not shown here) for other network sizes.
Moreover, we have observed that $\left\langle T \right\rangle$ and $P$ as a function of $\xi$
behave (for all
$M$) as $\left\langle |S_{mn}|^2\right\rangle$ does. I.e., they show a universal behavior
as a function of $\delta\xi$ that can be well described by
\begin{equation}
\label{Xxi}
X(\xi) = X_{\tbox{COE}} \left[ \frac{1}{1+(\delta\xi)^{-2}}
\right] \ ,
\end{equation}
where $X$ represents $\left\langle T \right\rangle$ or $P$ and $\delta$ is the fitting
parameter.
Then, in Figs.~\ref{Fig6}(b) and \ref{Fig7}(b) we plot $\left\langle T \right\rangle$
and $P$ normalized to their respective COE average values, as a function
of $\delta\xi$ for $M\in [1,5]$.
Notice that all curves for different $M$ fall on top of the universal
curve given by Eq.~(\ref{Xxi}).
\begin{figure}[t]
\centerline{\includegraphics[width=8cm]{Fig6.eps}}
\caption{(Color online) (a) Average conductance $\left\langle T \right\rangle$ as a function of $M$
for tight-binding random networks having $N=200$ nodes for several values
of $\xi$. (b) $\left\langle T \right\rangle/\left\langle T \right\rangle_{\tbox{COE}}$ as a function of
$\delta\xi$ for $M\in[1,5]$. Insert: $\delta$ versus $M$. $\delta$
is obtained from the fitting of Eq.~(\ref{Xxi}) to the $\left\langle T \right\rangle$
vs.~$\xi$ data. Thick full lines correspond to $\left\langle T \right\rangle = 0$.
Dashed lines are (a) the RMT prediction for $\left\langle T \right\rangle$, given by
Eq.~(\ref{avT}); and (b) one. The red dashed line in (b) on top of the
data is Eq.~(\ref{Xxi}).}
\label{Fig6}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=8cm]{Fig7.eps}}
\caption{(Color online) (a) Shot noise power $P$ as a function of $M$ for tight-binding
random networks having $N=200$ nodes for several values of $\xi$.
(b) $P/P_{\tbox{COE}}$ as a function of $\delta\xi$ for $M\in[1,5]$.
Inset: $\delta$ versus $M$. $\delta$ is obtained from the fitting
of Eq.~(\ref{Xxi}) to the $P$ vs.~$\xi$ data. Thick full lines
correspond to $P = 0$. Dashed lines are (a) the RMT
prediction for $P$, given by Eq.~(\ref{P}); and (b) one. The red dashed
line in (b) on top of the data is Eq.~(\ref{Xxi}).}
\label{Fig7}
\end{figure}
\subsection{Enhancement factor}
Finally, in Fig.~\ref{Fig8} we plot the elastic enhancement factor $F$ as a function of
$\xi$ for random networks with $N=50$ nodes for $M=1$, 2, and 4.
From this figure we observe that, for any $M$ (and also for any $N$, not shown
here), $F$ decreases as a function of $\xi$ and approaches smoothly, for large
$\xi$ ($\xi\to N$), the RMT limit value of $F_{\tbox{COE}}=2$. Also note that when
$\xi\ll 1$, $F\propto \xi^{-2}$, which appears to be a signature of our random network model.
\begin{figure}[t]
\centerline{\includegraphics[width=7cm]{Fig8.eps}}
\caption{(Color online) Elastic enhancement factor $F$ as a function of
$\xi$ for tight-binding random networks having $N=50$ nodes for $M=1$,
2, and 4. Black full line is Eq.~(\ref{F1}) with $M=1$ and $\delta=0.198$.
Red dashed lines are fittings of Eq.~(\ref{F2}) to the data with $C=205$,
138, and 106 for $M=1$, 2 and 4, respectively. The horizontal black dashed
line corresponds to the RMT limit value of $F_{\tbox{COE}}=2$.}
\label{Fig8}
\end{figure}
To provide analytic support for the observations made above, we substitute
Eqs.~(\ref{Smmx}) and (\ref{Smnx}) into Eq.~(\ref{F}) to obtain the following
estimate for $F$:
\begin{equation}
\label{F1}
F \approx (2M+1)(\delta\xi)^{-2} + 2 \ .
\end{equation}
Notice that Eq.~(\ref{F1}) properly reproduces the behavior of $F$ for small
and large $\xi$: $F\propto \xi^{-2}$ and $F\to 2$, respectively.
Unfortunately, Eq.~(\ref{F1}) does not describe the curves of
Fig.~\ref{Fig8} quantitatively; see, for example, the black full line in this
figure, which corresponds to Eq.~(\ref{F1}) with $M=1$.
The reason for this discrepancy, as a detailed analysis shows, is that
Eq.~(\ref{Smnx}) overestimates the magnitude of $\left\langle |S_{mn}|^2 \right\rangle$ when
$\xi\ll 1$; as a consequence, Eq.~(\ref{F1}) underestimates the magnitude
of $F$ for those $\xi$ values.
Then, to fix this issue we propose the following expression
\begin{equation}
\label{F2}
F \approx C \xi^{-2} + 2 \ ,
\end{equation}
where $C$ is a fitting constant, to describe the curves $F$ vs.~$\xi$.
In Fig.~\ref{Fig8} we also show that Eq.~(\ref{F2}) fits the numerical data
reasonably well.
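As a consistency check of the two estimates, the following minimal sketch
(Python) compares the prefactors of Eqs.~(\ref{F1}) and (\ref{F2}); the
quoted values of $\delta$ and $C$ are those of the fits above:
\begin{verbatim}
def F_estimate(xi, M, delta):
    # Eq. (F1): F ~ (2M+1)*(delta*xi)**(-2) + 2
    return (2 * M + 1) * (delta * xi) ** (-2) + 2.0

def F_fit(xi, C):
    # Eq. (F2): F ~ C*xi**(-2) + 2, with C a fitting constant
    return C * xi ** (-2) + 2.0

# For M = 1 and delta = 0.198, Eq. (F1) gives a prefactor
# (2M+1)/delta**2 ~ 77, well below the fitted C = 205 of Eq. (F2);
# this reflects the underestimate of F at small xi discussed above.
\end{verbatim}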
\section{Conclusions}
We studied the scattering and transport properties of tight-binding random
networks characterized by the number of nodes $N$ and the average
connectivity $\alpha$.
We observed a smooth crossover from localized to delocalized behavior
in the scattering and transport properties of the random
network model upon varying $\alpha$ from small ($\alpha\to 0$)
to large ($\alpha\to 1$) values.
We showed that all the scattering and transport quantities studied here
are independent of $N$ once $\xi=\alpha N$ is fixed.
Moreover, we proposed a heuristic and universal relation between
the disorder parameter $\xi$ and the average scattering matrix elements
$\left\langle |S_{mn}|^2 \right\rangle$, the
average conductance $\left\langle T \right\rangle$, and the shot noise
power $P$; see Eq.~(\ref{Xxi}).
As a consequence, we observed that the onset of the transition takes place at
$\delta\xi\approx 0.1$; i.e., for $\delta\xi< 0.1$ the networks are in
the insulating regime. The onset of the Random Matrix Theory limit
is located at $\delta\xi\approx 10$; that is, for $\delta\xi>10$ the networks
are in the metallic regime. Also, the metal-insulator transition point
is clearly located at $\delta\xi\approx 1$; see the red dashed curves in
Figs.~\ref{Fig6}(b) and \ref{Fig7}(b). Here, $\delta\in[0.2,0.4]$ is a
parameter that
depends slightly on the number of leads attached to the network and also
on the quantity under study; see the insets of Figs.~\ref{Fig6}(b) and \ref{Fig7}(b).
Since our random network model is represented by an ensemble of sparse real
symmetric random Hamiltonian matrices, we expect our results, in addition to
describing random graphs of the Erd\H{o}s--R\'enyi type and complex networks,
to be applicable to physical systems characterized by sparse Hamiltonian
matrices, such as quantum chaotic and many-body systems.
\begin{acknowledgments}
This work was partially supported by VIEP-BUAP grant MEBJ-EXC13-I
and PIFCA grant BUAP-CA-169.
\end{acknowledgments}
\section{Acknowledgements}
This work was supported in part by the National Science Foundation (DIBBs
1443054, CAREER IIS-1253549), and used the Romeo cluster, supported by Indiana
University and NSF RaPyDLI 1439007. We acknowledge the use of data from CReSIS
with support from the University of Kansas and Operation IceBridge (NNX16AH54G).
\section{Conclusion}
To the best of our knowledge, this paper is the first to propose an automated
approach to reconstruct 3D ice features using graphical models.
We showed our technique can
effectively estimate ice-bottom surfaces from noisy
radar observations.
The technique also demonstrated its accuracy and efficiency in extracting
bedrock layers from radar echograms, compared against the state of the art.
\section{Experiments}
\begin{figure}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[height=2.0cm, width=8.5cm]{figures/echogram_44_gt.eps}}
\vspace{-20pt}
\centerline{\textcolor{white}{Ground truth}}\medskip
\vspace{1.5pt}
\end{minipage}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[height=2.0cm, width=8.5cm]{figures/echogram_44_djc.eps}}
\vspace{-20pt}
\centerline{\textcolor{white}{Result of \cite{crandall2012layer}}}\medskip
\end{minipage}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[height=2.0cm, width=8.5cm]{figures/echogram_44_mcmc.eps}}
\vspace{-20pt}
\centerline{\textcolor{white}{Result of \cite{lee2014estimating}}}\medskip
\end{minipage}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[height=2.0cm, width=8.5cm]{figures/echogram_44_trws.eps}}
\vspace{-20pt}
\centerline{\textcolor{white}{Ours with TRW}}\medskip
\end{minipage}
\vspace*{-15pt}
\caption{Results of bedrock layer finding on a sample echogram. In each image,
the upper (red) boundary is the ice-air layer, and the lower
(green) boundary is the ice-bottom layer. The ice-air layer in our
result is from the radar.}
\label{fig:echogram}
\vspace{-10pt}
\end{figure}
We tested our surface extraction algorithm on the basal topography of
the Canadian Arctic Archipelago (CAA) ice caps, collected by the
Multichannel Coherent Radar Depth Sounder (MCoRDS)
instrument~\cite{rodriguez2014advanced}. We used a total of 7
topographic sequences, each with over 3000 radar images, corresponding
to about 50 km of flight data per sequence. For these
images, we also have the associated ice-air surface ground truth, a subset
(excluded from the testing data)
of which we used to learn the parameters of the template model and the weights
of the binary costs.
We then ran inference on each topographic sequence and measured the accuracy
by comparing our estimated surfaces to the ground truth, which was produced
by human annotators.
However, these labels are not always accurate at the pixel-level,
since the radar images are often full of noise, and some boundaries
simply cannot be tracked precisely even by experts.
To relax the effect of inaccuracies in
ground truth, we consider a label to be correct when it is within
a few pixels.
We evaluated with three summary statistics: mean
deviation, median mean deviation, and the percentage of correctly labeled
pixels over the whole surface (Table 1(a)). The mean error is about 11.9
pixels and the median-of-means error is about 12.2 pixels. The
percentage of correct pixels is 35.9\%, or about 63.9\% within 5 pixels,
which we consider the more meaningful statistic
given noise in the ground truth.
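A minimal sketch (Python) of how these statistics can be computed is given
below; interpreting the median mean deviation as the median of the
per-slice mean errors is an assumption made here for illustration:
\begin{verbatim}
import numpy as np

def surface_errors(est, gt, tol=5):
    # est, gt: arrays of shape (l, phi) with row coordinates s_{i,j}
    err = np.abs(est.astype(float) - gt.astype(float))
    mean_err = err.mean()                          # mean deviation
    median_mean_err = np.median(err.mean(axis=1))  # median mean dev.
    precision = (err <= tol).mean()                # fraction in tol
    return mean_err, median_mean_err, precision
\end{verbatim}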
To give some context, we compare our results with three baselines.
Since no existing methods solve the 3D reconstruction problem that we consider here,
we adapted three methods from 2D layer finding to the 3D case.
Crandall et al.~\cite{crandall2012layer} use a fixed weight for the
pairwise conditional probabilities in the Viterbi algorithm, which
cannot automatically adjust the shape of the layer in each image
slice. Lee et al.~\cite{lee2014estimating} generate better results by
using Markov-Chain Monte Carlo (MCMC). However, neither of these approaches
considers constraints between adjacent slices. We introduce
Dynamic Viterbi (DV) as an additional baseline that incorporates a
dynamic weight for the pairwise term, but it still lacks the ability
to smooth the whole surface in 3D. As shown in Table 1(a) and Figure
2, our technique performs significantly better than any of these
baselines on 3D surface reconstruction. We also used our technique to
estimate layers in 2D echograms, so that we could compare directly to
the published source code of~\cite{crandall2012layer,
lee2014estimating} (i.e.\ using our approach to solve the problem
they were designed for). Figure 3 and Table 1(b) present results,
showing a significant improvement over these baselines also.
Similar to~\cite{crandall2012layer, lee2014estimating}, additional evidence
can be easily added into our energy function. For instance, ground truth
data (e.g.\ ice masks) may be available for some particular slices, and human
operators can also provide feedback by marking true surface boundaries for
a set of pixels. Either of these can be implemented by putting additional
terms into the unary term defined in equation ($\ref{eq:5}$).
\section{Introduction}
\begin{figure*}
\begin{center}
\includegraphics[width=17.5cm]{figures/overall.eps}
\end{center}
\vspace{-15pt}
\caption{Illustration of our task. Radar flies along the X-axis,
collecting noisy evidence about the ice surface distance and depth immediately below it. This yields a 2D echogram (Sample (c)), with depth on one axis and flight
path on the other,
and prior work has used these echograms to estimate 2D ice structure but only along the flight path. Our approach also includes (very noisy) evidence
from either side of the radar, yielding a sequence of 2D topographic slices (e.g.\ Sample (a) and (b)).
Each slice is represented in polar coordinates,
where the Y- and Z-axes denote the direction of arrival of the radar waves and
the distance from each voxel to the plane, respectively. We combine this noisy evidence with prior information to produce
3D ice reconstructions.}
\label{fig:overall}
\vspace{-10pt}
\end{figure*}
Scientists increasingly use visual observations of the world in their work:
astronomers collect telescope images at unprecedented scale~\cite{szalay2001world},
biologists image live cells~\cite{jaiswal2003long, stephens2003light},
sociologists record social interactions~\cite{wedekind2000cooperation},
ecologists collect large-scale remote sensing data~\cite{bamber2013new}, etc.
Although progress in technology has made \textit{collecting} this imagery
affordable, actually \textit{analyzing} it is often done by hand. But with
recent progress in computer vision, automated techniques may soon work well
enough to remove this bottleneck, letting scientists analyze visual data more
thoroughly, quickly, and economically.
As a particular example, glaciologists need large-scale data about the polar
ice sheets and how they are changing over time in order to understand and
predict the effects of melting glaciers.
Aerial ground-penetrating radar systems have been developed that can
fly over an ice sheet and collect evidence about its subsurface structure.
The raw radar return data is typically mapped into 2D radar echogram images
which are easier for people to interpret, and then manually labeled for
important semantic properties (ice thickness and structure,
bedrock topography, etc.) in a slow, labor-intensive
process~\cite{freeman2010automated, ilisei2012technique, ferro2013automatic,
mitchell2013semi}. Some recent work has shown promising results on the specific problem of layer-finding in 2D echograms~\cite{crandall2012layer,
lee2014estimating, carrer2017automatic}, although the accuracy is still far
below that of a trained human annotator. The echograms are usually quite noisy
and complex, requiring experience and intuition that is difficult to
encode in an algorithm. Using echograms as input data also inherently limits
the analysis to the ice structure immediately under the radar's
flight path.
In this paper we take an alternative approach, using additional data collected
by the radar in order to actually estimate the 3D structure of the ice sheet,
including a large area on either side, instead of simply tracing 2D
cross-sections (Figure~\ref{fig:overall}). In particular, the Multichannel Coherent Radar Depth Sounder
(MCoRDS) instrument~\cite{rodriguez2014advanced} uses three transmit beams
(left, nadir, right) to collect data from below the airplane
and to either side (for a total swath width of about 3km). Although an expert
may be able to use intuition and experience to produce a
reasonable estimate of the 3D terrain from this data, the amount of weak
evidence that must be considered at once is overwhelming. As with
structure-from-motion in images~\cite{disco2013pami}, this gives
automatic algorithms an advantage: while humans are better
at using intuition to estimate from weak evidence, algorithms can
consider a large, heterogeneous set of evidence to make better overall decisions.
We formulate the problem as one of discrete energy minimization in order to
combine weak evidence into a 3D reconstruction of the bottom of the ice sheet.
We first estimate layer boundaries to generate a seed surface, and then
incorporate additional sources of evidence, such as ice masks, surface digital
elevation models, and optional feedback from humans to refine it.
We investigate the performance of the algorithm using
ground truth from humans, showing that our technique significantly
outperforms several strong baselines.
\section{Methodology}
As the radar system flies over ice, it collects a
sequence of topographic slices $I = \{I_1, \cdots, I_l\}$ that characterizes
the returned radar signals (Figure 1). Each slice $I_i$ is a 2D radar image that describes a
distribution of scattered energy in
polar coordinates (with dimensions $\phi \times \rho)$ at a discrete position $i$ of the aircraft along its flight path.
Given such a topographic sequence of dimension $l \times \phi \times \rho$,
we wish to infer the 3D ice-bottom surface.
We parameterize the surface as a sequence of slices $S = \{S_1, \cdots, S_l\}$
and
$S_i = \{s_{i,1}, \cdots, s_{i,\phi}\}$, where $s_{i,j}$ denotes the
row coordinate of the boundary of the ice-bottom for column $j$ of slice $i$,
and $s_{i,j} \in [1,\rho]$
since the ice-bottom layer can occur anywhere within a column.
\vspace{-4pt}
\subsection{A graphical model for surface extraction}
Because radar is so noisy, our goal is to find a surface that
not only fits the observed data well but that is also smooth
and satisfies other prior knowledge. We formulate this as an
inference problem on a Markov Random Field. In particular, we
look for a surface that minimizes an energy function,
\begin{align} \label{eq:1}
E(S|I) = & \sum\limits_{i=1}^l \sum\limits_{j=1}^\phi \psi_1(s_{i,j}|I) \ + \\
& \sum\limits_{i=1}^l \sum\limits_{j=1}^\phi \biggl[ \sum_{i' \in \pm 1} \psi_2(s_{i,j}, s_{i+i', j}) + \sum_{j' \in \pm 1} \psi_2(s_{i,j}, s_{i, j+j'}) \biggr]
\end{align}
where
$\psi_1(\cdot)$
defines a unary cost function which measures how well a given labeling agrees
with the observed image in $I$, and $\psi_2(\cdot, \cdot)$ defines a
pairwise interaction potential on the labeling which encourages the surface to
be continuous and smooth. Note that each column of each slice contributes one
term to the unary part of the energy function, while the pairwise terms
are a summation over the four neighbors of a column (two columns on either side
within the same slice, and two slices within the same column in neighboring slices).
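For concreteness, the following minimal sketch (Python) evaluates the
energy of Eq.~(\ref{eq:1}) for a candidate surface, counting each
neighboring pair once (the double-counted sum of Eq.~(\ref{eq:1}) differs
only by a factor of two in the pairwise part); the callables \texttt{psi1}
and \texttt{psi2} stand for the cost functions defined below:
\begin{verbatim}
def energy(S, psi1, psi2):
    # S: integer array of shape (l, phi) with S[i][j] = s_{i,j}
    l, phi = len(S), len(S[0])
    E = sum(psi1(i, j, S[i][j])
            for i in range(l) for j in range(phi))
    for i in range(l):
        for j in range(phi):
            if j + 1 < phi:            # intra-slice neighbor
                E += psi2(S[i][j], S[i][j + 1])
            if i + 1 < l:              # inter-slice neighbor
                E += psi2(S[i][j], S[i + 1][j])
    return E
\end{verbatim}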
\vspace{8pt}
\noindent
\textbf{Unary term.}
Our unary term $\psi_1(\cdot)$ consists of three parts,
\begin{equation} \label{eq:5}
\psi_1(\cdot) = \psi^{temp}(\cdot) + \psi^{air}(\cdot) + \psi^{bin}(\cdot).
\end{equation}
First, similar to~\cite{crandall2012layer}, we define a template model $T$
of fixed size $1 \times t$ (we use $t = 11$ pixels) for the vertical profile
of the ice-bottom surface in each slice. For each pixel $p$ in the template,
we estimate a mean $\mu_p$ and a variance $\sigma_p$ on greyscale intensity
assuming that the template is centered at the location of the ice-bottom
surface, suggesting a template energy,
\begin{equation} \label{eq:2}
\psi^{temp}(s_{i,j}|I) = \sum\limits_{p \in T} (I(s_{i,j} + p) - \mu_p)^2 / \sigma_p.
\end{equation}
We learn the parameters of this model with a small set of labeled
training data.
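A minimal sketch (Python) of the template energy of Eq.~(\ref{eq:2}),
assuming the template is centered at the candidate boundary row and that
out-of-range rows are clipped to the image (both conventions are
assumptions for illustration):
\begin{verbatim}
import numpy as np

def psi_template(I_col, s, mu, sigma):
    # I_col: intensities of column j of slice i; s: candidate row;
    # mu, sigma: per-pixel template mean and variance (length t = 11)
    t = len(mu)
    offsets = np.arange(t) - t // 2    # center the template at row s
    rows = np.clip(s + offsets, 0, len(I_col) - 1)
    return np.sum((I_col[rows] - mu) ** 2 / sigma)
\end{verbatim}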
\newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}}
\begin{table}\centering
{\footnotesize{
\ra{1.1}
\begin{tabular}{@{}lcccccc@{}} \toprule
& & \multicolumn{2}{c}{Error} & \phantom{a}& \multicolumn{2}{c}{Precision}\\
\cmidrule{3-4} \cmidrule{6-7}
& & \textbf{Mean} & \textbf{Median Mean} && \textbf{1 pixel} & \textbf{5 pixels} \\ \midrule
\multicolumn{7}{l}{(a) Ice-bottom surfaces:} \\
Crandall \cite{crandall2012layer} & & 101.6 & 95.9 & & 0.2\% & 2.5\% \\
Lee \cite{lee2014estimating} & & 35.6 & 30.5 & & 3.6\% & 29.9\% \\
Ours with \textbf{DV} & & 13.3 & 13.4 & & 20.2\% & 58.3\% \\
Ours with \textbf{TRW} & & 11.9 & 12.2 & & 35.9\% & 63.9\% \\ \midrule
\multicolumn{7}{l}{(b) Bedrock layers:} \\
Crandall \cite{crandall2012layer} & & 75.3 & 42.6 & & 0.5\% & 21.5\% \\
Lee \cite{lee2014estimating} & & 47.6 & 36.6 & & 2.2\% & 20.5\% \\
Ours with \textbf{TRW} & & 4.1 & 4.2 & & 28.8\% & 81.4\% \\
\bottomrule
\end{tabular}
}}
\vspace*{-5pt}
\caption{Error in terms of the mean and median mean absolute
column-wise difference compared to ground truth, in pixels. Precision is
the percentage of correct labeled pixels.}
\vspace{-10pt}
\end{table}
\begin{figure}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[height=2.5cm, width=8.5cm]{figures/surface_44_gt.eps}}
\vspace{-25pt}
\centerline{\textcolor{white}{Ground truth}}\medskip
\vspace{8pt}
\end{minipage}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[height=2.5cm, width=8.5cm]{figures/surface_44_mcmc.eps}}
\vspace{-25pt}
\centerline{\textcolor{white}{Result of \cite{lee2014estimating}}}\medskip
\vspace{7pt}
\end{minipage}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[height=2.5cm, width=8.5cm]{figures/surface_44_viterbi.eps}}
\vspace{-25pt}
\centerline{\textcolor{white}{Ours with DV}}\medskip
\vspace{8pt}
\end{minipage}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[height=2.5cm, width=8.5cm]{figures/surface_44_trws.eps}}
\vspace{-25pt}
\centerline{\textcolor{white}{Ours with TRW}}\medskip
\vspace{8pt}
\end{minipage}
\vspace*{-18pt}
\caption{Results of ice-bottom surface extraction on a sample dataset. The color
represents the depth from the plane (Z).}
\label{fig:surface}
\vspace{-10pt}
\end{figure}
Second, to capture the fact that the ice-bottom surface should always be below the
ice-air surface by a non-trivial margin, we add a cost to penalize intersecting
surfaces,
\begin{equation} \label{eq:3}
\psi^{air}(s_{i,j}) =
\begin{cases}
+\infty & s_{i,j} - a_{i,j} < 0 \\
0 & s_{i,j} - a_{i,j} > \tau \\
\tau - |s_{i,j} - a_{i,j}| & \mbox{otherwise,}
\end{cases}
\end{equation}
with $a_{i,j}$ the label of the air-ice boundary of slice
$i$, column $j$.
Finally, we incorporate an additional weak source of evidence produced
by the radar system. The \emph{bottom bin} gives a constraint on a
\textit{single} column in each slice, specifying a single coordinate $(j, b_i)$
that the true surface boundary must be below. Despite how weak this evidence
is, it helps to distinguish between the ice-air and ice-bottom surface boundary
in practice.
Formally, we formulate
this cost function as,
\begin{equation} \label{eq:4}
\psi^{bin}(s_{i,j}) =
\begin{cases}
+\infty & s_{i,j} < b_i \\
0 & \mbox{otherwise.}
\end{cases}
\end{equation}
\vspace{6pt}
\noindent
\textbf{Pairwise term.}
The ice-bottom surface is
encouraged to be smooth
across both adjacent columns and adjacent slices,
\begin{equation} \label{eq:6}
\psi_2(s, \hat{s}) =
\begin{cases}
-\beta_j \ln \mathcal{N} (s - \hat{s}; \, 0, \hat{\sigma}) & |s - \hat{s}| < \alpha \\
+\infty & \mbox{otherwise,}
\end{cases}
\end{equation}
where $\hat{s}$ denotes the labeling of an adjacent pixel of ($i, j$), and
parameters $\alpha$ and $\hat{\sigma}$ are learned from labeled training data.
Parameter $\beta_j$ models smoothness on
a per-slice basis, which is helpful if some slices are known to be noisier
than others (or set to a constant if this information is not known).
This term models the similarity of the labeling of two adjacent pixels by a
zero-mean Gaussian that is truncated to zero outside a fixed interval $\alpha$.
Since all terms in the energy function are treated as penalties, we
transform the Gaussian probability into a quadratic cost by taking the
negative logarithm.
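A minimal sketch (Python) of this pairwise cost, treating
$\hat{\sigma}$ as a standard deviation (a convention assumption):
\begin{verbatim}
import numpy as np

def psi_pairwise(s, s_hat, beta_j, sigma_hat, alpha):
    # Eq. (6): truncated negative log-Gaussian smoothness cost
    d = s - s_hat
    if abs(d) >= alpha:
        return np.inf
    return beta_j * (0.5 * d ** 2 / sigma_hat ** 2
                     + np.log(sigma_hat * np.sqrt(2.0 * np.pi)))
\end{verbatim}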
\vspace{8pt}
Our energy function introduces several important improvements over
that of Crandall et al.~\cite{crandall2012layer} and Lee et
al.~\cite{lee2014estimating}. First, while their model gives all
pairs of adjacent pixels the same pairwise weight ($\beta$), we have
observed that layers in different slices usually have particular
shapes, such as straight lines and parabolas, depending on the local ice
topography. By using a dynamic weight $\beta_j$, we can roughly
control the shape of the layer and adjust how smooth two adjacent
pixels should be. More importantly, those techniques consider a single
image at a time, which could cause discontinuities in the ice reconstruction.
We correct this by defining pairwise terms along both the intra- and inter-slice
dimensions.
\vspace{-6pt}
\subsection{Statistical inference}
The minimization of equation ($\ref{eq:1}$) can
be formulated as discrete energy minimization on a first-order
Markov Random Field (MRF)~\cite{koller2009probabilistic}.
Given the large size of this MRF, we use Sequential Tree-reweighted
Message Passing (TRW)~\cite{kolmogorov2006convergent}, which breaks the MRF
into several monotonic chains, and perform belief propagation (BP) on
each chain. TRW only passes messages within each of these chains, rather than
to all four directions (like Loopy BP~\cite{murphy1999loopy}). Benefiting from
this, TRW converges faster and requires half as much memory as traditional
message passing methods. We assign a row-major order for pixels in the graph
and define the monotonic chains based on this order. In each iteration, TRW first
passes messages in increasing order, and then back in
decreasing order.
We pre-define a maximum number
of iterations to be the same as the width of each slice, $\phi$, which allows
evidence from one side of the slice to reach the other.
When message
passing is finished, we assign a label to each pixel in row-major order:
for pixel $(i, j)$, we choose the label $s_{i,j}$ that minimizes
$\boldsymbol{M}(s_{i,j}) + \psi_1(s_{i,j}) + \psi_2(s_{i,j}, s_{i,j-1}) + \psi_2(s_{i,j}, s_{i-1,j})$,
where $\boldsymbol{M}(s_{i,j})$ is the summation of messages from four adjacent neighbors.
The usual implementation of TRW has time complexity $O(l \phi \rho^2)$ per
iteration. To speed this up, we use linear-time generalized distance
transforms~\cite{felzenszwalb2006efficient},
yielding a total running time of $O(l \phi \rho L)$, where
$L$ is the number of iterations.
This is possible because our
pairwise potentials are log-Gaussian, i.e.\ quadratic in the label difference.
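For quadratic costs $w(p-q)^2$, each message update
$\min_q\,[f(q) + w(p-q)^2]$ over $\rho$ labels can be computed in $O(\rho)$
with the lower-envelope construction of~\cite{felzenszwalb2006efficient}.
A minimal sketch (Python), ignoring the hard truncation at $\alpha$, which
can be imposed afterwards:
\begin{verbatim}
import numpy as np

def dt_quadratic(f, w):
    # d[p] = min_q ( f[q] + w*(p-q)**2 ), computed in O(n)
    n = len(f)
    d = np.empty(n)
    v = np.zeros(n, dtype=int)  # parabola vertices on the envelope
    z = np.empty(n + 1)         # boundaries between parabolas
    k = 0
    z[0], z[1] = -np.inf, np.inf
    for q in range(1, n):
        while True:             # where parabola q enters the envelope
            vk = v[k]
            s = ((f[q] + w*q*q) - (f[vk] + w*vk*vk)) \
                / (2.0 * w * (q - vk))
            if s <= z[k]:
                k -= 1
            else:
                break
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, np.inf
    k = 0
    for p in range(n):          # read off the lower envelope
        while z[k + 1] < p:
            k += 1
        d[p] = f[v[k]] + w * (p - v[k]) ** 2
    return d
\end{verbatim}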
\section{Related work}
Detecting boundaries between material layers in noisy radar images is important
for glaciology. Semi-automated and automated methods have been introduced for
identifying features of subsurface imaging. For example, in echograms from Mars,
Freeman et al.~\cite{freeman2010automated} find layer boundaries by applying
band-pass filters and thresholds to find linear subsurface structures, while
Ferro and Bruzzone \cite{ferro2011novel} identify subterranean features using
iterative region-growing. For the specific case of ice, Crandall et al.~\cite{crandall2012layer}
detect the ice-air and ice-bottom layers in echograms along the flight path by
combining a pretrained template model for the vertical profile of each layer
and a smoothness prior in a probabilistic graphical model. Lee et al.~\cite{lee2014estimating}
present a more accurate technique that uses Gibbs sampling from a joint
distribution over all possible layers. Carrer and Bruzzone~\cite{carrer2017automatic}
reduce computational complexity with a divide-and-conquer strategy. In contrast
to the above work, all of which infers 2D cross-sections, we attempt to
reconstruct 3D subsurface features; we are not aware of other work that does
this. We pose
this as an inference problem on a Markov Random Field similar to that proposed
for vision problems (e.g.\ stereo~\cite{felzenszwalb2006efficient}), except
that we have a large set of images and wish to produce a 3D surface, whereas
they perform inference on a single 2D image at a time.
\section{Introduction}\label{sec:intro}
\setcounter{equation}{0}
\setcounter{footnote}{1}
Softly-broken supersymmetry is a leading candidate to explain the
hierarchy of the Planck mass scale and other high-energy scales to the
electroweak symmetry breaking mass scale \cite{hierarchyproblem}. In
extensions of the Standard Model with a fundamental Higgs scalar,
obtaining this hierarchy would seem to require tuning of the Higgs squared
mass parameter to about one part in $10^{32}$. The Minimal Supersymmetric
Standard Model (MSSM) \cite{primer} solves this problem by introducing
superpartners with masses near the electroweak scale. In addition, with
the assumption of $R$-parity conservation, the most dangerous
(renormalizable) contributions to proton decay are eliminated, and the
lightest supersymmetric particle (LSP) can serve
\cite{neutralinodarkmatter}-\cite{DarkSUSY} as the cold dark matter
required by cosmology \cite{WMAP}-\cite{PDG}.
However, the fact that the CERN LEP $e^+e^-$ collider did not discover a
Standard Model-like light neutral Higgs scalar boson,
placing a limit $M_{h^0} > 114$ GeV
\cite{LEPHiggsbounds}, has put some tension on the allowed parameter space
in the MSSM. This is because $M_{h^0}$ is bounded above at tree level by
$m_Z$, and radiative corrections depend on the superpartner masses, which
we assume cannot be too large without reintroducing the hierarchy problem.
Including the largest radiative corrections at one-loop
order\footnote{This approximation is subject to significant further
corrections, which are not necessary for the present argument.} gives:
\beq
M^2_{h^0} \>=\> m_Z^2 \cos^2(2\beta) +
\frac{3 }{4 \pi^2} \sin^2\!\beta \>y_t^2 \biggl [
m_t^2 \, {\rm ln}\left (m_{\tilde t_1} m_{\tilde t_2} / m_t^2 \right )
+ c_{\tilde t}^2 s_{\tilde t}^2 (m_{\tilde t_2}^2 - m_{\tilde t_1}^2)
\, {\rm ln}(m_{\tilde t_2}^2/m_{\tilde t_1}^2) &&
\nonumber
\\
+ c_{\tilde t}^4 s_{\tilde t}^4 \Bigl \lbrace
(m_{\tilde t_2}^2 - m_{\tilde t_1}^2)^2 - \frac{1}{2}
(m_{\tilde t_2}^4 - m_{\tilde t_1}^4)
\, {\rm ln}(m_{\tilde t_2}^2/m_{\tilde t_1}^2)
\Bigr \rbrace/m_t^2 \biggr ]. && \phantom{xx}
\label{hradcorrmix}
\eeq
where $c_{\tilde t}$ and $s_{\tilde t}$ are the cosine and sine of a
top-squark mixing angle, $m_{\tilde t_{1,2}}$ are the top-squark mass
eigenvalues, $y_t$ and $m_t$ are the top-quark Yukawa coupling and mass,
and $\tan\beta = v_u/v_d$ is the ratio of Higgs vacuum expectation values,
and for simplicity the Higgs sector is treated in a decoupling
approximation with ${h^0}$ much lighter than the other Higgs bosons $A^0,
H^0, H^\pm$. (In this paper, I follow the notations and conventions of
\cite{primer}.) In order to evade the LEP bound, it is clearly helpful to
have $m_t$ as large as possible, but the experimental central value
\cite{Tevatrontopmass} has
fallen recently. It is also required that $\tan\beta$ is not too small.
For fixed values of the superpartner masses, it follows that an upper
bound within the approximation of eq.~(\ref{hradcorrmix}) is
\beq
M^2_{h^0} \><\> m_Z^2 \cos^2(2\beta) +
\frac{3 }{4 \pi^2} \sin^2\!\beta \>y_t^2 m_t^2 \left [
{\rm ln}(m_{\tilde t_2}^2 / m_t^2 ) + 3 \right]
\eeq
in the case that the top-squark mixing is adjusted to have the maximum
positive impact on $M_{h^0}$. In specific model frameworks without
carefully adjusted top-squark mixing it is typically found that this bound
is not close to saturated, so while a non-zero top-squark mixing is quite
useful for satisfying the LEP bounds for a Standard Model-like lightest
Higgs scalar, it is also usually necessary for $m^2_{\tilde t_2}/m_t^2$
to be fairly large.
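For illustration only, eq.~(\ref{hradcorrmix}) is easy to evaluate
numerically; the following minimal sketch (Python) assumes the convention
$y_t = m_t/(v\sin\beta)$ with $v \simeq 174$ GeV, with all masses in GeV:
\begin{verbatim}
import math

def Mh2_one_loop(mZ, mt, yt, tan_beta, mst1, mst2, ct, st):
    # One-loop M_{h^0}^2 of eq. (hradcorrmix), decoupling limit
    beta = math.atan(tan_beta)
    L = math.log(mst2**2 / mst1**2)
    d2 = mst2**2 - mst1**2
    tree = mZ**2 * math.cos(2.0 * beta)**2
    loop = (3.0 / (4.0 * math.pi**2)) * math.sin(beta)**2 * yt**2 * (
        mt**2 * math.log(mst1 * mst2 / mt**2)
        + (ct * st)**2 * d2 * L
        + (ct * st)**4 * (d2**2 - 0.5*(mst2**4 - mst1**4)*L) / mt**2
    )
    return tree + loop

# e.g. tan_beta = 10, mt = 175,
# yt = 175.0 / (174.0 * math.sin(math.atan(10.0)))
\end{verbatim}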
This is to be contrasted with the condition for electroweak symmetry
breaking, which for $\tan\beta$ not too small takes the form:
\beq
m_Z^2 &=& -2 \left ( |\mu|^2 + m^2_{H_u} \right ) -
\frac{1}{v_u} \frac{\partial}{\partial v_u} \Delta V
\,+\, {\cal O}(1/\tan^2\!\beta).
\label{eq:mZ}
\eeq
Here $\Delta V$ is the radiative part of the effective potential with
$v_u$ treated as a real variable in the differentiation, $\mu$ is the
supersymmetry-preserving Higgs mass parameter, and $m_{H_u}^2$ is the soft
supersymmetry breaking mass term for the Higgs field that couples to the
top quark, which must be negative near the electroweak scale. The
``supersymmetric little hierarchy problem" is that if supersymmetry
breaking parameters are large enough to make $M_{h^0}$ exceed the LEP
bounds, then a tuning at the several percent-level (or worse) might seem
to be needed in eq.~(\ref{eq:mZ}), so that $|\mu|^2$ and $-m^2_{H_u}$
nearly cancel. It has been argued that the level of fine tuning required
can be quantified with various measures, but it is my view that any such
metrics are inherently and unavoidably subjective, so they will not be
used here. Although the little hierarchy problem does not admit
of rigorous judgments, it can and does cause discomfort and
doubt regarding the likelihood of finding supersymmetric particles in
present and future collider searches.
There is no sense in which $|\mu|$ is naturally large, in fact it could
naturally be 0 even in the presence of arbitrary supersymmetry breaking if
it were not for experimental constraints. The radiative effective
potential contribution to eq.~(\ref{eq:mZ}) is not negligible, but since
it is loop-suppressed, it does not imply a drastic fine tuning. Therefore,
the supersymmetric
little hierarchy problem, if indeed there is one, is implied by the
fact that $|m^2_{H_u}|$ might be expected to be much larger than $m_Z^2$
in models with heavy top squarks. This indeed occurs in popular models
with few parameters with universal soft supersymmetry breaking terms
imposed near the scale of apparent gauge coupling unification (the GUT
scale), hereafter referred to as mSUGRA. However, it has long been
appreciated that this connection is modified or lost in more general
models of supersymmetry breaking. In section \ref{sec:compressed}, I will
review the arguments that suggest that the little hierarchy problem is
ameliorated in particular by models that predict a smaller gluino mass
than in unified models.
A further source of tension on the parameters of the MSSM is provided by
the opportunity of explaining the cold dark matter by the thermal
relic density of a neutralino LSP ($\tilde N_1$). Roughly, the
annihilation rate for neutralinos decreases with increasing supersymmetry
breaking masses in the absence of special mechanisms dependent on
particular mass ratios. If the LSP is bino-like, as predicted by many
mSUGRA models, then the predicted thermal relic abundance is often found
to be too high\footnote{It is also important that the dark matter need not
be neutralinos with a thermal relic density. The LSP might be a gravitino
or axino, or something else. Or, if the predicted thermal relic abundance
of neutralino dark matter is too low or too high, it can be enhanced or
diluted by some non-thermal effect; see for example \cite{nonthermal}.
However, models that can explain the dark matter without multiplying
hypotheses should be accorded special interest.} compared to the
results of WMAP and other experiments \cite{WMAP}-\cite{PDG}.
The exceptional possibilities have lately been classified qualitatively in
four main categories, depending on the mechanism most responsible for
reducing the predicted dark matter density to an acceptable level.
First, in the ``bulk region" of parameter space, there is a relatively
light neutralino LSP, which pair annihilates by the $t$-channel and
$u$-channel exchange of sleptons. However, in mSUGRA and similar models,
this bulk region often predicts that $M_{h^0}$ is too small, or that other
states should have been detected at LEP or the Fermilab Tevatron
$p\overline p$ collider, or gives trouble with other indirect constraints.
Second, in the Higgs resonance (or funnel) region, neutralino pairs
annihilate through the $s$-channel exchange of the pseudo-scalar Higgs
boson $A^0$ in an $s$-wave final state. Because the relevant coupling is
proportional to $m_b \tan\beta$, this usually entails large values of
$\tan\beta$ \cite{Drees:1992am}. (There is also the possibility
of annihilating near the $h^0$ pole \cite{hzeropole}.)
Third, there is the possibility that the LSP has a significant higgsino
component, allowing larger neutralino pair annihilation and
co-annihilation with the heavier neutralinos and charginos, to and through
weak bosons \cite{Edsjo:1997bg}. This occurs for example in the ``focus
point" region of parameter space, in which $|\mu|$ is not too large, even
if the sfermions are very heavy \cite{focuspoint}.
A fourth possibility, the ``sfermion co-annihilation region" of parameter
space \cite{GriestSeckel}, is obtained if there is a sfermion (typically a
tau slepton \cite{staucoannihilation} in mSUGRA, but possibly a top squark
\cite{stopcoannihilationone}-\cite{stopcoannihilationfive}) that happens
to be slightly heavier than the LSP. A significant density of this
sfermion will then coexist with the LSP around the freeze-out time, and so
annihilations involving the sfermion with itself or with the LSP will
further dilute the number of superpartners and so the eventual dark matter
density. The co-annihilation region generally requires just the right mass
difference between the stau or top squark and the LSP, and so is often
considered to be fine tuned.
If the LSP is mostly higgsino or wino, then the annihilation of
superpartners in the early universe is typically too efficient to provide
for thermal relic density in agreement with WMAP. However, one can always
adjust the higgsino or wino contents of $\tilde N_1$ to be just right,
at the expense of some fine tuning. In
recent years, there have been many studies of the properties of
neutralino dark matter that follow from
abandoning the strict boundary conditions of
mSUGRA models to allow non-universal gaugino masses
\cite{Corsetti:2000yq}-\cite{Bae:2007pa} or
scalar masses \cite{Berezinsky:1995cj}-\cite{Evans:2006sj}
at the GUT scale.
By increasing the wino or higgsino content of the neutralino,
one can increase the cross-section for annihilations and co-annihilations
to weak bosons, and those mediated by the $Z$ boson and $h^0$ and $A^0$ in
the $s$-channel.
In section \ref{sec:dark} of this paper, I will study a different
possibility with rather distinctive properties, namely the possibility
that the LSP is mostly bino-like, but pair annihilates efficiently to
top-antitop quark pairs due predominantly to the exchange of light
top squarks.
In the models discussed in section \ref{sec:compressed} (unlike in mSUGRA
and similar models) this mechanism turns out to give a thermal relic dark
matter density in agreement with the WMAP measurements for a wide range of
input parameters, much more than in the stop co-annihilation (or stau
co-annihilation) regions to which it is continuously connected. This
scenario also has important implications for collider searches at the
Fermilab Tevatron, CERN Large Hadron Collider (LHC), and a future linear
collider, discussed briefly in section \ref{sec:colliders}. Section
\ref{sec:outlook} contains some concluding remarks.
\section{Compressed supersymmetry}\label{sec:compressed}
\setcounter{equation}{0}
\setcounter{footnote}{1}
In this section, I review the argument that a suppression of the gluino
mass parameter ameliorates the little hierarchy problem in supersymmetry.
(This has been observed in various papers; a particularly clear and early
explanation was given in ref.~\cite{KaneKing}.)
As noted in the Introduction, the issue is essentially to explain why the
running parameter $-m_{H_u}^2$ should be small and positive near the
electroweak scale, in the same theory that allows large positive
corrections to $M_{h^0}^2$. The parameter $|\mu|^2$, which relies on a
different sector of the theory, can then be chosen without too much fine
tuning to give the right $m_Z^2$ in eq.~(\ref{eq:mZ}). In terms of the
MSSM soft supersymmetry breaking parameters at the apparent GUT scale, one
finds:
\beq
-m_{H_u}^2 &=&
1.92 \barM_3^2
+ 0.16 \barM_2 \barM_3
-0.21 \barM_2^2
-0.33 \barM_3 \barA_t
-0.074 \barM_2 \barA_t
+ 0.11 \barA_t^2
\nonumber \\ &&
+0.024 \barM_1 \barM_3
+0.006 \barM_1 \barM_2
- 0.006 \barM_1^2
- 0.012 \barM_1 \barA_t
+ 0.002 \barM_3 \barA_b
\nonumber \\ &&
- 0.63 \barm^2_{H_u} + 0.36 \barm^2_{Q_3} +0.28 \barm^2_{u_3}
-0.027 \barm^2_{H_d} +0.025 \barm^2_{d_3} - 0.026 \barm^2_{L_3}
\nonumber \\ &&
+ 0.026 \barm^2_{e_3}
+ 0.05 \barm^2_{Q_1} -0.11 \barm^2_{u_1} +0.05 \barm^2_{d_1}
- 0.05 \barm^2_{L_1} + 0.05 \barm^2_{e_1}
\label{eq:mHu}
\eeq
Here, the hats on the parameters on the right-hand side denote that they
are inputs at the apparent GUT scale, while $m^2_{H_u}$ on
the left-hand side denotes the
result at the renormalization
scale $Q=400$ GeV (where the corrections due to the effective
potential are presumed moderate), using $\tan\beta=10$ and the
SPS1a$'$ benchmark point \cite{sillybenchmarkpoint} values for the Yukawa
and gauge couplings and unification scale,
and using two-loop renormalization group equations \cite{twoloopRGEs}.
The input parameters consist of independent gaugino masses $\barM_1$,
$\barM_2$, $\barM_3$, scalar trilinear coupling parameters $\barA_t$,
$\barA_b$, $\barA_\tau$, and scalar squared masses for the Higgs bosons,
third family sfermions, and first family sfermions (with each second
family sfermion assumed degenerate with the first family counterpart
having the same quantum numbers).
Some contributions with very small coefficients have
been omitted from eq.~(\ref{eq:mHu}).
The reason for applying boundary
conditions at the GUT scale is that the apparent unification of couplings
provides some justification that it is meaningful to extrapolate running
parameters up to that scale.
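Since eq.~(\ref{eq:mHu}) is a fixed quadratic form in the GUT-scale
inputs, it is straightforward to scan numerically. A minimal sketch
(Python), with the omitted small coefficients dropped as in the text:
\begin{verbatim}
def minus_mHu2(M1, M2, M3, At, Ab, m2):
    # -m_{H_u}^2 at Q = 400 GeV from GUT-scale inputs, eq. (eq:mHu);
    # m2: dict of GUT-scale scalar squared masses
    return (1.92*M3**2 + 0.16*M2*M3 - 0.21*M2**2 - 0.33*M3*At
            - 0.074*M2*At + 0.11*At**2 + 0.024*M1*M3 + 0.006*M1*M2
            - 0.006*M1**2 - 0.012*M1*At + 0.002*M3*Ab
            - 0.63*m2['Hu'] + 0.36*m2['Q3'] + 0.28*m2['u3']
            - 0.027*m2['Hd'] + 0.025*m2['d3'] - 0.026*m2['L3']
            + 0.026*m2['e3'] + 0.05*m2['Q1'] - 0.11*m2['u1']
            + 0.05*m2['d1'] - 0.05*m2['L1'] + 0.05*m2['e1'])
\end{verbatim}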
In the so-called mSUGRA framework, the input parameters are usually taken
to obey the much stronger conditions:
\beq
&&
\barM_1 = \barM_2 = \barM_3 = m_{1/2},
\\
&&
\barA_t = \barA_b = \barA_\tau = A_0,
\\
&&
\barm^2_{\phi} = m_0^2
\eeq
for all scalars $\phi = H_u, H_d, Q_i, u_i, d_i, L_i, e_i$, with family
index $i=1,2,3$. It is then clear that the largest contribution to
$-m^2_{H_u}$ at the weak scale is due to the input gluino mass $\barM_3$;
furthermore, there is a significant cancellation between the scalar
contributions within the mSUGRA framework.
Generalizing the input parameters can provide a relative reduction in
$-m_{H_u}^2$, therefore lowering both the predicted value of $|\mu|^2$ and
the cancellation needed to obtain the observed value of $m_Z$. From consideration of the first five terms on the right-hand side, one
learns that small values of $|\mu|$ result from (roughly)
$\barM_3 \approx
0.29 \barM_2 + 0.13 \barA_t$ or
$\barM_3 \approx -0.38 \barM_2 + 0.04 \barA_t$, provided that $\barM_1$,
$\barA_t$, $\barm_{H_u}^2$ etc.~are not too large.
A complete cancellation is
actually not desirable from our present point of view, since a mostly
higgsino-like neutralino has too small a thermal relic density, and
$M_{h^0}$ often comes out too small. There are many types of models
already in the literature
that can predict a small ratio of $\barM_3/\barM_2$.
The scenario for dark matter to be studied in the next section
does not depend crucially on which framework is used,
but for concreteness I will review one here.
In the usual mSUGRA framework, one assumes that the gaugino masses are all
the same; in $SU(5)$ GUT language this corresponds to a supersymmetry
breaking $F$-term in a singlet of $SU(5)$ [or $SO(10)$]. More generally,
one can consider non-universal gaugino masses arising from an $F$ term VEV
in arbitrary linear combinations of the symmetric product of two adjoint
representations of the GUT group that contain a Standard Model singlet
\cite{Ellis:1985jn}-\cite{Anderson:1999ui}.
For $SU(5)$:
\beq
({\bf 24}\times {\bf 24})_S = {\bf 1} + {\bf 24} + {\bf 75} + {\bf 200}.
\eeq
The resulting gaugino mass terms
have the form
\beq
{\cal L} = -\sum_{R} \frac{\langle F_{R} \rangle}{2M_P}
\sum_{n} c^{(n)}_{R} \lambda_n \lambda_n + {\rm c.c.}
\eeq
where the coefficients $c^{(n)}_{R}$
(with $n=1,2,3$ for bino, wino, and gluino
respectively, and $R = {\bf 1}, {\bf 24}, {\bf 75}, {\bf 200}$)
are determined by group theory, leading to the
parameterization:
\beq
\barM_1 &=& m_{1/2} (1 + \ctf + 5 \csf + 10 \cth),
\label{eq:M1fromadj}
\\
\barM_2 &=& m_{1/2} (1 + 3 \ctf -3 \csf + 2 \cth),
\\
\barM_3 &=& m_{1/2} (1 -2 \ctf - \csf + \cth).
\label{eq:M3fromadj}
\eeq
The special case $\ctf = \csf = \cth$ recovers the mSUGRA model. In
eqs.~(\ref{eq:M1fromadj})-(\ref{eq:M3fromadj}), I have assumed that
there is at least some $SU(5)$ singlet component to the $F$ term,
although this is not strictly necessary. Note that this parameterization
is already general enough to fit any observed gaugino mass hierarchy,
simply because it contains three linearly independent contributions.
One particularly simple way to achieve a ratio $\barM_3/\barM_2 \sim
1/3$ is to choose $\ctf \sim 0.22$, with $\csf = \cth = 0$. This is the
inspiration for the model space studied in the next section, although it
cannot be overemphasized that there are many other reasonable ways to
achieve such a ratio. One point in favor of this type of model
is that in GUT theories like $SU(5)$, there is a chiral superfield
in the ${\bf 24}$ (adjoint) representation anyway; once the
scalar component acquires
a VEV, it is actually unnatural for the $F$ term to not develop a VEV
as well. Moreover, this can make the theory consistent with
proton decay requirements and help to obtain precise
gauge coupling unification \cite{Huitu:1999eh,TobeWells}.
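A minimal numerical check of
eqs.~(\ref{eq:M1fromadj})--(\ref{eq:M3fromadj}) (Python sketch):
\begin{verbatim}
def gaugino_masses(m_half, c24=0.0, c75=0.0, c200=0.0):
    # GUT-scale gaugino masses, eqs. (M1fromadj)-(M3fromadj)
    M1 = m_half * (1 + c24 + 5*c75 + 10*c200)
    M2 = m_half * (1 + 3*c24 - 3*c75 + 2*c200)
    M3 = m_half * (1 - 2*c24 - c75 + c200)
    return M1, M2, M3

# C_24 = 2/9 ~ 0.22 with C_75 = C_200 = 0 gives M3/M2 = 1/3 exactly
M1, M2, M3 = gaugino_masses(1.0, c24=2.0/9.0)
\end{verbatim}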
As evidenced by the special case of eq.~(\ref{eq:mHu}), soft supersymmetry
breaking mass parameters at the weak scale are substantially driven by the
gaugino mass parameters through large logarithmic effects that are
summarized in the renormalization group. In mSUGRA models, this effect
typically causes squarks and the gluino to be much heavier than the
superpartners that do not have $SU(3)_C$ interactions, except when $m_0$
is very large. In the case of a small ratio $\barM_3/\barM_2$ motivated by
a solution to the little hierarchy problem, however, the resulting mass
spectrum will be ``compressed" in comparison to mSUGRA, with a smaller
ratio of the masses of the heaviest and lightest superpartners. Two
aspects of this that will be important in the next section are that it
becomes much more likely that the lighter top squark can be the
next-to-lightest supersymmetric particle (NLSP), and that the LSP is
rather heavy.
It will be assumed here
that the trilinear scalar couplings are sizable and negative
at the GUT scale (in the convention of \cite{primer}). This can be
motivated as the
effect of strong
renormalization group running between the GUT scale and
the Planck scale, which would produce both negative scalar trilinear
couplings and positive scalar squared masses, and often
prefers positive $\mu$
\cite{muispositive} (independently of low-energy $(g-2)_\mu$ or $b
\rightarrow s \gamma$ considerations), when the running is dominated by
positive gaugino masses.
It is worth noting that in many viable supersymmetric GUT theories,
the naive running of the gauge couplings above the unification scale quickly
becomes non-perturbative. For example, in the minimal missing partner
supersymmetric $SU(5)$ theory
\cite{missingpartner}, the two-loop gauge beta function has a Landau
pole, and the three- and four-loop beta functions appear to have
strongly-coupled
UV-stable fixed points \cite{UVstable}.
While the breakdown of perturbation theory
renders such calculations untrustworthy in detail, this suggests that
gaugino mass
dominance could eliminate the problem of supersymmetric flavor-violation,
while giving essentially arbitrary flavor-preserving soft supersymmetry
breaking terms at the GUT scale.
As a simplistic assumption
made only for convenience, the scalar trilinear
couplings $\barA_t$, $\barA_b$, $\barA_\tau$ will be taken to be unified
in the models below; the parameter $\barA_t$ has the most direct
importance in most cases.
Likewise I will assume, as a matter of convenience,
that at the GUT scale all scalars have
a common squared mass $m_0^2$ as in mSUGRA. While it is clearly
worthwhile to
consider scalar mass non-universality at the GUT scale,
I expect that the results obtained here will be realized at least
qualitatively in a variety of different schemes without universal scalar
squared masses imposed at the GUT scale.
\section{Dark matter density and pair annihilation to
top quarks}\label{sec:dark}
\setcounter{equation}{0}
\setcounter{footnote}{1}
In order to explain a large value of $M_{h^0}$ in models of compressed
supersymmetry, it is favored that the mass spectrum is compressed up,
rather than down. This means that the LSP will have to be heavier than
usually found in the mSUGRA ``bulk" region for dark matter.
It has been suggested in models of this type with small
$|M_3/M_2|$ that the thermal relic
abundance of dark matter can be explained by an enhanced Higgsino component of
$\tilde N_1$, leading to enhanced annihilations $\tilde N_1 \tilde N_1
\rightarrow W^+W^-$ or $ZZ$ \cite{Bertin:2002sq,Baer:2006dz},
or by $s$-channel annihilation through the
pseudo-scalar Higgs $A^0$ near resonance
\cite{Bertin:2002sq,Belanger:2004ag,Mambrini:2005cp}, or
by co-annihilations with heavier neutralinos and charginos
\cite{Belanger:2004ag,Baer:2006dz},
or by $s$-channel annihilations to $t\overline t$
through the $Z$ boson \cite{Bertin:2002sq,Mambrini:2005cp}.
In this paper, I will consider instead the case that the thermal relic
density is suppressed primarily by $\tilde N_1$ pair annihilation to top
quark-antiquark pairs, mediated mostly by top-squark exchange. As
mentioned in the previous section, it is not difficult in compressed
supersymmetric models to obtain a top squark NLSP. This is in
contradistinction to mSUGRA and similar models, where achieving the
required suppression in $\Omega_{\rm DM} h^2$ from top-squark exchange
requires that $|A_0|$ is very large in absolute terms and must be rather
finely adjusted so that $\tilde t_1$ is not much heavier than $\tilde N_1$
(see for example
refs.~\cite{stopcoannihilationtwo,stopcoannihilationthree,%
stopcoannihilationfive}). In the points
in mSUGRA parameter space where this can occur, $\tilde t_1 \tilde t_1$
and $\tilde N_1 \tilde t_1$ co-annihilations are also generally very
important (unlike here). Compressed supersymmetry models with
small\footnote{Hats will be omitted from GUT scale input parameters
throughout this section.} $|M_3/M_2|$ at the GUT scale have the crucial
distinction that achieving comparable $m_{\tilde t_1}$ and $m_{\tilde
N_1}$ is much easier, requires smaller values of $|A_t|$ in absolute
terms, and admits a wider range of $A_t$.
The tree-level Feynman
diagrams that contribute to the process $\tilde N_1 \tilde N_1 \rightarrow
t \overline t$ are shown in Figure \ref{fig:annihilation}.
\begin{figure}
\begin{picture}(120,55)(0,0)
\SetWidth{0.85}
\Line(0,0)(100,0)
\Line(0,50)(100,50)
\DashLine(50,0)(50,50){5}
\rText(-13,8)[][]{$\widetilde N_1$}
\rText(-13,45)[][]{$\widetilde N_1$}
\rText(57,25)[][]{$\tilde t_{1,2}$}
\rText(101,48)[][]{$ t$}
\rText(101,6)[][]{$\overline t$}
\end{picture}
\hspace{2.7cm}
\begin{picture}(120,55)(0,0)
\SetWidth{0.85}
\Line(0,0)(22,22)
\Line(28,28)(50,50)
\Line(0,50)(50,0)
\Line(50,0)(100,0)
\Line(50,50)(100,50)
\DashLine(50,50)(50,0){5}
\rText(-13,9)[][]{$\widetilde N_1$}
\rText(-13,44)[][]{$\widetilde N_1$}
\rText(57,25)[][]{$\tilde t_{1,2}$}
\rText(101,48)[][]{$ t$}
\rText(101,6)[][]{$\overline t$}
\end{picture}
\vspace{0.9cm}
\begin{picture}(120,60)(0,0)
\SetWidth{0.85}
\Line(0,50)(33,25)
\Line(77,25)(110,50)
\Photon(33,25)(77,25){2.1}{4.5}
\Line(0,0)(33,25)
\Line(77,25)(110,0)
\rText(-12,8)[][]{$\tilde N_1$}
\rText(-12,47)[][]{$\tilde N_1$}
\rText(51.9,34)[][]{$Z$}
\rText(110,47.5)[][]{$t$}
\rText(109.5,3)[][]{$\overline{t}$}
\end{picture}
\hspace{2.5cm}
\begin{picture}(120,60)(0,0)
\SetWidth{0.85}
\Line(0,0)(30,25)
\Line(0,50)(30,25)
\DashLine(30,25)(90,25){5}
\Line(120,0)(90,25)
\Line(120,50)(90,25)
\rText(-12,8)[][]{$\tilde N_1$}
\rText(-12,47)[][]{$\tilde N_1$}
\rText(55.3,32.8)[][]{$A^0, h^0, H^0$}
\rText(120.5,48)[][]{$t$}
\rText(121,3)[][]{$\bar t$}
\end{picture}
\vspace{0.1cm}
\caption{Contributions to the annihilation of neutralino
dark matter LSP pairs into top quark-antiquark pairs, from top squark,
$Z$ boson, and Higgs boson exchange.
\label{fig:annihilation}}
\end{figure}
In order to obtain $\Omega_{\rm DM} h^2$ compatible with WMAP by this
mechanism (without undue fine tuning), it is necessary that:
\beq
&&
m_t \,<\, m_{\tilde N_1} \,\lesssim\, m_t + 100\>{\rm GeV},
\\
&&m_{\tilde N_1} + 25\>{\rm GeV} \,\lesssim\,
m_{\tilde t_1}
\,\lesssim\,
m_{\tilde N_1} + 100\>{\rm GeV}.
\label{eq:stopbounds}
\eeq
The first inequality in eq.~(\ref{eq:stopbounds}) is the approximate
requirement that the relic density not be suppressed too much by
top-squark co-annihilations. The upper bounds here are also necessarily
fuzzy, because of the connection to the thin co-annihilation region. For
models satisfying these criteria, the top-squark exchange is most
important for bino-like $\tilde N_1$ and $\tilde t_1$ with a high $\tilde
t_R$ component. As one increases the small Higgsino component of $\tilde
N_1$ (by lowering $|\mu|$), the contribution from the $\tilde t_1$
exchange diagrams becomes
enhanced, due to the top Yukawa coupling. In the
models to be considered below, the $s$-channel
$Z$ exchange diagram is subdominant but
not negligible; using the analytic formulas provided in
\cite{Drees:1992am}, one can show that the most important effect is a
significant destructive interference with the dominant top-squark exchange
diagram amplitude.
For a more detailed study, I have used the program {\tt micrOMEGAs 2.0.1}
\cite{micrOMEGAs} to evaluate the thermal relic abundance of dark matter
for supersymmetric models generated using {\tt SOFTSUSY 2.0.11}
\cite{softsusy} (and checked for approximate agreement with {\tt SuSpect}
\cite{suspect}). In the following, I consider a rather conservative
thermal relic density constraint:
\beq
0.09 < \Omega_{\rm DM} h^2 < 0.13,
\label{eq:WMAPconstraint}
\eeq
and impose a Higgs mass constraint:
\beq
M_{h^0} > 113\>{\rm GeV}.
\label{eq:Mhbound}
\eeq
This is slightly lower than the LEP bound for Standard Model-like Higgs
scalars, justified by the significant uncertainties involved in the
theoretical Higgs mass calculation. In addition, I adopt the slightly
optimistic value of $m_t = 175$ GeV rather than the somewhat lower latest
combined
central value $m_t = 171.4 \pm 1.8\,{\rm(syst.)} \pm 1.2 \,{\rm(stat.)}$
GeV from the Tevatron \cite{Tevatrontopmass}. In each model, I require
that the LSP is a neutralino. Then all limits on supersymmetric particles
from LEP turn out to be automatically satisfied. No constraint from the
anomalous magnetic moment of the muon is applied, since for all models
considered here, the predicted value is actually closer to the
experimental central value(s) from the BNL E821 experiment
\cite{BNLmuongminustwo} than the Standard Model prediction is (but not by
a very statistically significant amount). I do not impose any bound coming
from $b \rightarrow s \gamma$, since the measurement can be easily
accommodated \cite{bsgammaisbogus} by introducing some extra small
GUT-scale flavor violation in the supersymmetry-breaking parameters, in a
way that would not affect the rest of the model in any appreciable way.
Results for a typical two-parameter model space are shown in Figure
\ref{fig:LSPstop}. Here I consider models with boundary conditions at the
GUT scale:
\beq
1.5 M_1 = M_2 = 3 M_3,
\eeq
with $M_1$ and $m_0$ allowed to vary independently,
$\tan\beta = 10$ and $\mu>0$,
and two values for the ratio $A_0/M_1 = -0.75$ (outlined region) and
$A_0/M_1 = -1$ (shaded region). The allowed regions are cut off on the
lower left by the Higgs mass bound constraint eq.~(\ref{eq:Mhbound}). The
upward bulges in the regions are the places where top-squark exchange
plays the dominant role in $\tilde N_1 \tilde N_1 \rightarrow t \overline
t$, which in turn is the most important annihilation process for the dark
matter. Typically, in the bulge regions
$\tilde N_1 \tilde N_1 \rightarrow t \overline t$
accounts for 70\% to 90\% of the contributions to $1/\Omega_{\rm DM} h^2$.
Each of these regions is continuously connected to much narrower
co-annihilation regions on either side. For $A_0/M_1 = -1$, stop
co-annihilation is the dominant effect in these thin strips, but for
$A_0/M_1 = -0.75$, stau co-annihilations are also important there.
For smaller values of $-A_0/M_1$, not shown here, the $\tilde N_1 \tilde
N_1 \rightarrow t \overline t$ bulge region still exists, but continuously
connects instead to a thin stau co-annihilation strip.
As can be seen from Figure \ref{fig:LSPstop}, the smaller $-A_0$ case
requires a larger mass difference $m_{\tilde t_1} - m_{\tilde N_1}$ in the
allowed bulge region. This can be traced to the fact that $\mu$ decreases
as $-A_0$ decreases [see eqs.~(\ref{eq:mZ}) and (\ref{eq:mHu})], slightly
enhancing the small Higgsino component of the LSP, which substantially
increases the top-squark exchange amplitude contributions to the
annihilation cross section as mentioned above.
The $s$-channel Higgs scalar annihilation diagrams shown in Figure
\ref{fig:annihilation} play only a small role in the LSP pair annihilation
in these models. This is because in all cases $m_{\tilde N_1}$ is well
below the resonance point $m_{A^0}/2 \,\approx \, m_{H^0}/2$, and well
above the resonance point $m_{h^0}/2$.
\begin{figure*}[!p]
\vspace{-0.2cm}
\centering
\mbox{\includegraphics[width=11cm]{LSPstop}}
\caption{\label{fig:LSPstop}
Allowed regions in the $m_{\tilde N_1}$, $m_{\tilde t_1}$ plane
that predict a thermal relic abundance of neutralino LSP
dark matter $0.09 < \Omega_{\rm DM} h^2 < 0.13$ and satisfy other
constraints
given in the text.
The GUT-scale parameters satisfy
$1.5 M_1 = M_2 = 3 M_3$, with variable $m_0$, and
$A_0 = -0.75 M_1$ (region outlined in black) and $A_0 = -M_1$
(shaded region). Also, $\tan\beta = 10$ and $\mu>0$.
The lowest dashed line is $m_{\tilde t_1} = m_{\tilde N_1}$.
Below the upper dashed line, the
decay $\tilde t_1 \rightarrow t \tilde N_1 $ is
forbidden.
Below the middle dashed line, the three-body
decay $\tilde t_1 \rightarrow W b \tilde N_1 $ is also forbidden.}
\vspace{0.8cm}
\mbox{\includegraphics[width=11.3cm]{LSPmzero}}
\caption{\label{fig:LSPmzero}
The allowed regions depicted in Figure 1 arise from the values of
$m_0$ shown here.}
\end{figure*}
Also shown in Figure \ref{fig:LSPstop} are the critical lines that govern
the decay of $\tilde t_1$. The lowest dashed line indicates the region
allowed by the requirement that $\tilde N_1$ is the LSP. In all cases, the
decay $\tilde t_1 \rightarrow t \tilde N_1$ is kinematically closed, as
indicated by the upper dashed line. Above the middle dashed line, the
decay
\beq
\tilde t_1 \rightarrow Wb\tilde N_1
\eeq
is kinematically open, and should dominate. Below that line,
the flavor-violating 2-body decay
\beq
\tilde t_1 \rightarrow c \tilde N_1
\eeq
is expected to win \cite{Hikasa:1987db} over the four-body decays
$
\tilde t_1 \rightarrow q \overline q' b\tilde N_1
$
and
$
\tilde t_1 \rightarrow \ell \nu b \tilde N_1
$.
The charginos ($\tilde C_i$) and sleptons are heavier in the models
shown, so they cannot appear in $\tilde t_1$ decays.
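The interplay of these kinematic thresholds can be made concrete with a small sketch; the Standard Model masses below are assumed round values, and phase-space and mixing effects near threshold are ignored:
\begin{verbatim}
# Classify the kinematically dominant stop decay, as described above.
# Masses in GeV; m_t, m_W, m_b are assumed round values.
M_T, M_W, M_B = 173.0, 80.4, 4.8

def stop_decay(m_stop, m_N1):
    if m_stop <= m_N1:
        return "stop would be the LSP (excluded here)"
    if m_stop > m_N1 + M_T:
        return "two-body decay to t N1"
    if m_stop > m_N1 + M_W + M_B:
        return "three-body decay to W b N1"
    return "flavor-violating two-body decay to c N1"

print(stop_decay(220.0, 200.0))
# -> flavor-violating two-body decay to c N1
\end{verbatim}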
The relative thickness of the allowed regions
cannot be regarded as a direct measure of fine-tuning, even
subjectively. In fact, a perfectly accurate determination of $\Omega_{\rm
DM} h^2$ would make the allowed regions arbitrarily thin, up to model
assumptions, theoretical errors, and input parameter
uncertainties.\footnote{Moreover, one could
regard the entire parameter space between the indicated regions and the
$m_{\tilde t_1}=m_{\tilde N_1}$ line as allowed, in the sense that the
thermal relic abundance would be less than or equal to the observed value.
Then something else, for example axions, would make up the remaining dark
matter.}
(The
Planck satellite mission experiment \cite{Planck} should indeed
significantly improve the determination.) However, it seems clear that the
observed dark matter density is more naturally accommodated in the $\tilde
N_1 \tilde N_1 \rightarrow t \overline t$ bulge regions, since a larger
range of $m_0$ values (for each fixed $M_1$) lie within the range
specified by eq.~(\ref{eq:WMAPconstraint}). This is illustrated for the
same models in Figure \ref{fig:LSPmzero}. Notably, all of the soft
supersymmetry breaking parameters are less than the gaugino masses $M_1$
and $M_2$; this includes the holomorphic term $b = B\mu$, which is of
order (250 GeV)$^2$ in the bulge region of these models.
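The sense in which the bulge regions are ``more natural'' can be phrased algorithmically: for fixed $M_1$, one asks how wide a window of $m_0$ reproduces the observed relic abundance. A schematic version follows; the function \texttt{omega\_h2} is only a placeholder for a full relic-density calculation and is not implemented here:
\begin{verbatim}
def omega_h2(M1, m0):
    # Placeholder: in practice this would call a full relic-density
    # calculation for the model point (M1, m0).
    raise NotImplementedError

def allowed_m0_window(M1, m0_grid):
    ok = [m0 for m0 in m0_grid if 0.09 < omega_h2(M1, m0) < 0.13]
    return (min(ok), max(ok)) if ok else None

# e.g. allowed_m0_window(500.0, [5.0 * k for k in range(121)]);
# a wide window for fixed M1 (as in the bulge regions) indicates
# that Omega h^2 varies slowly with the inputs there.
\end{verbatim}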
\begin{figure*}[tpb] \centering \mbox{\includegraphics[width=12cm]{c24m0}}
\caption{\label{fig:c24m0} Allowed regions that predict a thermal relic
abundance of neutralino LSP dark matter $0.09 < \Omega_{\rm DM} h^2 <
0.13$ and satisfy other constraints given in the text. At the GUT scale,
the bino mass parameter $M_1 = 500$ GeV is fixed, and the wino and gluino
mass parameters vary while obeying
eqs.~(\ref{eq:M1fromadj})-(\ref{eq:M3fromadj})
with $C_{75} = C_{200} = 0$. The horizontal direction
is parameterized by $M_3$ at the GUT scale (lower horizontal axis) or
equivalently by $\ctf$ (upper horizontal axis). The vertical axis is the
common scalar mass $m_0$. The region outlined in black has
$A_0 = -375$ GeV and the shaded region has $A_0 = -500$ GeV, with
$\tan\beta=10$ and $\mu > 0$ in each case. The very thin, nearly
horizontal regions with $m_0$ near 110 GeV feature co-annihilation of
sleptons and the LSP. In the thicker sloping areas on the left, the
dominant contribution to $1/\Omega_{\rm DM} h^2$ is $\tilde N_1 \tilde N_1
\rightarrow t \overline t$, mostly due to the $\tilde t_1$ exchange
diagram amplitudes.} \end{figure*}
Another handle on the $\tilde N_1 \tilde N_1 \rightarrow t \overline t$
annihilation scenario is provided by Figure \ref{fig:c24m0}, which shows
dark matter allowed regions in the plane of $m_0$ vs. $M_3$, for models
with fixed $M_1 = 500$ GeV at the GUT scale
(so that the LSP mass is approximately $200$
GeV) and obeying the boundary condition of
eqs.~(\ref{eq:M1fromadj})-(\ref{eq:M3fromadj}) with $\ctf$ varying and
$\csf = \cth = 0$. I again require $\mu>0$ and $\tan\beta=10$, and the
allowed regions are shown for $A_0 = -M_1$ and $A_0 = -0.75 M_1$. The thin
horizontal regions achieve the observed dark matter density by
co-annihilations of sleptons and the LSP; as is well-known, this requires
a rather precise adjustment of the slepton squared masses. For $\ctf \gtrsim
0.19$, or equivalently $M_3 \lesssim 260$ GeV, the $\tilde N_1 \tilde N_1
\rightarrow t \overline t$ annihilation scenario takes over, leading to
the thicker, sloping allowed regions. They are cut off on the left by the
imposed Higgs mass constraint eq.~(\ref{eq:Mhbound}).
The distinctive features of the $\tilde N_1 \tilde N_1 \rightarrow t
\overline t$ annihilation scenario for dark matter in compressed
supersymmetry are illustrated in the superpartner spectrum for a typical
model point shown in Figure \ref{fig:spectrum}, with $M_1 = 500$ GeV and
$m_0 = 342$ GeV in order to give $\Omega_{\rm DM} h^2 = 0.11$.
\begin{figure*}[tpb]
\centering
\mbox{\includegraphics[width=11cm]{spectrum}}
\caption{\label{fig:spectrum}
A typical sample ``compressed" Higgs and superpartner
mass spectrum with $\Omega_{\rm DM} h^2 = 0.11$ brought
about by $\tilde N_1 \tilde N_1 \rightarrow t \overline t$
through $\tilde t_1$ exchange.
The GUT scale parameters of the model
are $M_{1,2,3} = 500, 750, 250$, $A_0 = -500$, and $m_0 = 342$ GeV,
with $\tan\beta= 10$ and $\mu>0$ at the weak scale. The ratio of the
largest superpartner mass to the smallest is less than 4. An
unfortunate feature, quite common to this scenario for dark matter,
is that no visible superpartners would be
within reach of a linear collider with $\sqrt{s} = 500$ GeV.}
\end{figure*}
In this model, $\tilde N_1 \tilde N_1 \rightarrow t
\overline t$ contributes about 89\% to $1/\Omega_{\rm DM} h^2$.
The amplitude from $\tilde t_1$ exchange is largest, with an amplitude
from $Z$ exchange about $0.3$ times as big in the velocity-independent
part of the ${}^1S_0$ channel, with destructive interference.
The superpartner mass spectrum shows compression compared to mSUGRA
models, with the ratio of masses of the largest superpartners (nearly
degenerate $\tilde u_L$, $\tilde c_L$ and $\tilde d_L$, $\tilde s_L$) to
the LSP being less than 4, with all superpartners between 200 GeV and 800
GeV. The NLSP is $\tilde t_1$. The lightest chargino $\tilde C_1$ and the
neutralinos $\tilde N_2$ and $\tilde N_3$ are higgsino-like; this is a
consequence of $\mu$ being not too large as discussed in section
\ref{sec:compressed}. Another consequence of the choice of a relatively
large wino mass to ameliorate the little hierarchy problem is that the
wino-like states $\tilde N_4$ and $\tilde C_2$ are comparatively heavy,
just below the gluino mass, and there is a wide split between left-handed
squarks and sleptons and their right-handed counterparts.
\section{Implications for colliders}\label{sec:colliders}
\setcounter{equation}{0}
\setcounter{footnote}{1}
In this section I will make some brief remarks on the implications of the
scenario outlined above for collider searches. Figure \ref{fig:spectrum}
shows a typical model of this type, with the characteristic feature that
the decay $\tilde t_1 \rightarrow c \tilde N_1$ has a 100\% branching
fraction. For this section,
I will use this as a benchmark, with the important caveat that
search strategies will be qualitatively different if the decay $\tilde t_1
\rightarrow Wb \tilde N_1$ is open. Other notable decays, with approximate
branching fractions computed by {\tt ISAJET 7.74} \cite{ISAJET}, are:
\beq
\tilde g \rightarrow
\Biggl \{ \begin{array}{l}
t \,\tilde t_1^*\qquad(\sim 50\%)
\\
\overline t \,\tilde t_1\qquad(\sim 50\%)
\end{array}\Biggr.
\label{eq:gluinodecay}
\eeq
for the gluino, and
\beq
&&
\tilde N_2 \rightarrow
\Biggl \{ \begin{array}{ll}
\tilde N_1 h\quad &(\sim 90\%)
\\
\tilde N_1 Z\quad &(\sim 10\%)
\end{array}\Biggr.
\\
&&\tilde N_3 \rightarrow \tilde N_1 Z\qquad (\sim 97\%)
\\
&&
\tilde C_1 \rightarrow \, \tilde t_1 b\qquad (\sim 95\%)
\eeq
for the Higgsino-like neutralinos and charginos. The wino-like neutralino
and chargino $\tilde N_4$ and $\tilde C_2$ are so heavy in this class of
models that they are unlikely to be directly produced in great numbers at
any foreseeable colliders, but may appear (with small branching fractions)
in decays of left-handed squarks. This scarcity of winos is
different from the expectation in many mSUGRA models, for example. The
left-handed squarks of the first two families decay predominantly through
the gluino, while the right-handed squarks decay mostly directly to the
LSP:
\beq
&&\tilde q_L \rightarrow
\Biggl \{ \begin{array}{ll}
q \tilde g \quad &(\sim 78\%)
\\
q' \tilde C_2\quad &(\sim 11\%)
\end{array}\Biggr.
\\
&&\tilde q_R \rightarrow\, q \tilde N_1 \qquad (\sim 90\%)
\eeq
However, the latter large branching fraction is due here to the very small
phase space allowed for decays to the gluino, so while it is a real
possibility, it cannot be considered a robust prediction for this class of
models in general.
The sleptons decay mostly directly to the bino-like LSP $\tilde N_1$.
Unfortunately, they almost completely decouple from the other superpartner
decay chains, so they must be produced directly to be observed. This
is clearly a challenge, given their large masses.
Because of the large masses of the entire superpartner spectrum, it will
be difficult to probe the scenario outlined here at the Tevatron. The
process
\beq
p \overline p \rightarrow \tilde t_1 \tilde t_1^* \rightarrow
c \overline c \tilde N_1 \tilde N_1 \rightarrow c\,\overline c + \missingET
\eeq
is one of the searches being actively pursued by both the D\O{}
\cite{Dzerostops} and CDF \cite{CDFstops} collaborations, using heavy
flavor tags. However, the sensitivity appears to be well short
\cite{Demina:1999ty} of that needed to probe most of the region favored by
neutralino annihilations to top quarks as the solution for dark matter.
The same process without the heavy flavor tag requirement, $p\overline p
\rightarrow$ (acoplanar $jj$) $+ \missingET$ \cite{Dzeroacoplanarjets},
may also be interesting. However, in the present case the jets will not
have a particularly high transverse momentum compared to the typical
situation in mSUGRA benchmark models. The clean trilepton and $\missingET$
signal from wino-like $\tilde C_1 \tilde N_2$
production that is found in mSUGRA is
unavailable here. The other superpartners generally are too heavy to be
produced in sufficient numbers at the Tevatron with the anticipated
integrated luminosity. The Higgs sector consists of a Standard-Model like
Higgs boson $h^0$
and a heavy isotriplet of Higgs bosons ($H^0, A^0, H^\pm$),
so enhanced signals are not
expected there.
At the Large Hadron Collider, squark and gluino production will dominate
as usual. The latter leads to the signal of two tops and two charm jets,
but with 50\% probability for like-sign tops, because of the Majorana
nature of the gluino decays:
\beq
pp \rightarrow \tilde g \tilde g \rightarrow
\left \{ \begin{array}{ll}
t \,\overline t\, \tilde t_1\, \tilde t_1^* \rightarrow
t \,\overline t\, c\, \overline c + \missingET\qquad (50\%)
\\
t \,t\, \tilde t_1^*\, \tilde t_1^*\rightarrow
t \,t\, \overline c \,\overline c + \missingET\qquad(25\%)
\\
\overline t \,\overline t\, \tilde t_1\, \tilde t_1
\rightarrow
\overline t\, \overline t\, c \, c + \missingET\qquad(25\%)
\end{array}
\right.
\label{eq:likesigntops}
\eeq
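The charge combinatorics in eq.~(\ref{eq:likesigntops}), and the resulting like-sign dilepton rate, follow from simple counting, as in the following sketch; the leptonic branching fraction per top (to $e$ or $\mu$) is an assumed round value:
\begin{verbatim}
from itertools import product

P_LEP = 0.21  # assumed BR(t -> b l nu), l = e or mu combined

# Each gluino decays to t stop* or tbar stop with equal probability.
outcomes = {}
for c1, c2 in product(["t", "tbar"], repeat=2):
    key = tuple(sorted((c1, c2)))
    outcomes[key] = outcomes.get(key, 0.0) + 0.25
print(outcomes)
# {('t','t'): 0.25, ('t','tbar'): 0.5, ('tbar','tbar'): 0.25}

# Like-sign tops occur half the time, so the like-sign dilepton
# fraction per gluino pair is about:
print(0.5 * P_LEP**2)  # ~ 0.022
\end{verbatim}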
This LHC signal has been studied in \cite{Kraml:2005kb} using
the like-charge
lepton signal arising from the
leptonic decay modes for both top quarks, with the result
that both discovery and mass measurements are possible up to
a gluino mass
of about 900 GeV. The benchmark models used in that paper assumed a
neutralino and a stop that were both lighter than in the scenario
discussed in the present paper, but the results seem likely to be
qualitatively the same.
Since many of the squarks produced at the LHC will decay through gluinos
and then into top squarks and top quarks by eqs.~(\ref{eq:gluinodecay}),
one expects also the same signal as in eq.~(\ref{eq:likesigntops}) with
two extra (usually light-flavor) jets, some of which may however be
relatively soft. Other squark-squark and gluino-squark events will yield
the typical jets plus leptons plus $\missingET$ signatures.
For models like the one in Figure \ref{fig:spectrum}, sleptons will not
appear with significant multiplicity in the decays of the squarks and
sleptons that are produced directly at the highest rates at the LHC. For
these masses, the direct production of sleptons would also be very
difficult to observe in either Drell-Yan production \cite{LHCsleptonsDY}
or vector boson fusion \cite{LHCsleptonsVBF}.
If the decay mode $\tilde t_1 \rightarrow W b \tilde N_1$ is open, as can
occur for models with lower $|A_0|$ and/or higher $\Omega_{\rm DM} h^2$,
then a different like-charge lepton signal results from $pp \rightarrow
\tilde g \tilde g \rightarrow t \,t \overline b\, \overline b \,\ell^-
\ell^- + \missingET$ and $\overline t \,\overline t\, b\, b\, \ell^+
\ell^+ + \missingET$ at the LHC. The resulting events with four potential
$b$ tags, like-charge dileptons, and large missing energy should provide
for a striking signal.
Finally, it is important to note that the scenario for neutralino LSP
annihilation to a top quark does not present a particularly promising
situation for a linear collider with $\sqrt{s} = 500$ GeV. In the model
shown in Figure \ref{fig:spectrum}, and in a great deal of the allowed
parameter space in Figure \ref{fig:LSPstop}, the only supersymmetric
particle that can be produced at such a machine is
$\tilde N_1$, which does not lead to a visible signal (except possibly
through initial state radiation $e^+ e^- \rightarrow \tilde N_1 \tilde N_1
\gamma$ \cite{ISRLSPs}). With that collision energy, only an $h^0$ with
properties nearly indistinguishable from a Standard Model Higgs boson will
be in direct evidence. Fortunately, if this is the course that Nature has
chosen, the LHC should be able to identify the problem in advance, and
allow for informed decisions regarding required beam energy. However, the
difficulty in seeing sleptons at the LHC as noted above will present, as
it does in many models, a challenge for assessing the capabilities of a
linear collider.
\section{Outlook}\label{sec:outlook}
\setcounter{equation}{0}
\setcounter{footnote}{1}
In this paper, I have argued that there is a particularly appealing region
of parameter space in which the right amount of dark matter can be
obtained naturally, in the sense that $\Omega_{\rm DM} h^2$ varies rather
slowly with changes in the input parameters. The key feature is
suppression of $\Omega_{\rm DM} h^2$ due primarily to neutralino pair
annihilation to top quarks through top squark exchange, which can occur
with moderate top-squark mixing in models that have a relatively light
gluino compared to the predictions of models with universal gaugino mass
boundary conditions as in mSUGRA. The resulting superpartner spectrum can
therefore be described as compressed, and leads to rather distinctive
predictions. The fact that the $\tilde N_1$ mass
has to exceed the top quark
mass in this scenario for dark matter makes the discovery of supersymmetry
impossible\footnote{This is hardly a bold prediction now, but it does have
the virtue of being inevitable in this scenario.} at the past CERN LEP
$e^+ e^-$ collider, very difficult for the ongoing Tevatron, but quite
promising for the LHC. Since the $\tilde t_1$ squark has to be still
heavier by tens of GeV, there is considerable doubt that a linear collider
would be able to see it, or any other superpartners, unless the
center-of-mass energy is higher than $\sqrt{s} = 500$ GeV.
In this paper, I have not attempted any detailed study of LHC potential
for discovery or precision measurements, or of the possibilities for
direct or indirect detection of the dark matter. These issues will
hopefully be addressed in future work.
{\bf Acknowledgments:} I am grateful to James Wells for useful
conversations. This work was supported by the National Science
Foundation under Grant No.~PHY-0456635.
\section{Introduction}The weight part of Serre's conjecture has been
much-studied over the last two decades, and while the original
problem has been resolved, a great deal remains to be proved in more general
settings. In the present paper we address the question of the weight
part of Serre's conjecture for totally real fields. Here one has the
Buzzard--Diamond--Jarvis conjecture \cite{bib:BDJ} and its various
generalisations, much of which has now been established
(cf. \cite{gee08serrewts}, \cite{blgg11-serre-weights-for-U2}). One
case that has not (so far as we are aware) been considered at all
over totally real fields is the case of forms of (partial) weight
one. This case is markedly different to the case of higher weights,
for the simple reason that mod $p$ modular forms of weight one
cannot necessarily be lifted to characteristic zero modular forms of
weight one; as the methods of \cite{gee08serrewts} and
\cite{blgg11-serre-weights-for-U2} are centered around modularity
lifting theorems, and in particular depend on the use of lifts to
characteristic zero modular forms, they cannot immediately say
anything about the weight one case.
In this paper we generalise a result of Gross \cite{MR1074305}, and
prove a companion forms theorem for Hilbert modular forms of parallel
weight one in the unramified $p$-distinguished case. To explain what this means,
and how it (mostly) resolves the weight one part of Serre's conjecture
for totally real fields, we return to the case of classical modular
forms. Serre's original formulation \cite{MR885783} of his
conjecture only considered mod $p$ modular forms which lift to
characteristic zero, and in particular ignored the weight one
case. However, Serre later observed that one could further refine his
conjecture by using Katz's definition \cite{MR0447119} of mod $p$
modular forms, and thus include weight one forms. He then conjectured
that a modular Galois representation should arise from a weight one
form (of level prime to $p$) if and only if the Galois representation
is unramified at $p$. The harder direction is to prove that the Galois
representation being unramified implies that there is a weight one
form; this was proved by Gross \cite{MR1074305}, under the further
hypothesis that the eigenvalues of a Frobenius element at $p$ are
distinct (i.e. we are in the $p$-distinguished case). It is this
result that we generalise in this paper, proving the following
theorem.
\begin{ithm}\label{thm: main result, introduction version}
Let $p>2$ be prime, let $F$ be a totally real field in which $p$ is
unramified, and let $\rhobar:G_F\to\operatorname{GL}_2(\overline{\F}_p)$ be an irreducible
modular representation such that $\rhobar|_{G_{F(\zeta_p)}}$ is
irreducible. If $p=3$ (respectively $p=5$), assume further that the projective image of
$\rhobar(G_{F(\zeta_p)})$ is not conjugate to $\operatorname{PSL}_2({\mathbb{F}}_3)$ (respectively $\operatorname{PSL}_2({\mathbb{F}}_5)$ or
$\operatorname{PGL}_2({\mathbb{F}}_5)$).
Suppose that
for each place $v|p$, $\rhobar|_{G_{F_v}}$ is unramified, and that
the eigenvalues of $\rhobar({\operatorname{Frob}}_v)$ are distinct.
Then there is a mod $p$ Hilbert modular form $f$ of
parallel weight $1$ and level prime to $p$ such that $\rhobar_f\cong\rhobar$.
\end{ithm}
(See Theorem \ref{thm: main result}, and see the body of the paper
for any unfamiliar notation or terminology.) The condition on
$\rhobar|_{G_{F(\zeta_p)}}$ is mild, and the only other hypothesis
which is not expected to be necessary (other than that $p$ is
unramified in $F$, which we assume for our geometric arguments) is
that the eigenvalues of $\rhobar({\operatorname{Frob}}_v)$ are distinct for all
$v|p$. This condition appears to be essential to our method, as we
explain below.
Our method of proof is a combination of modularity lifting theorem
techniques and geometric methods.
The first part of the argument, using modularity lifting theorems
to produce Hilbert modular forms of parallel weight $p$, was carried
out in \cite{gee051} under some additional hypotheses (in
particular, it was assumed that $\rhobar$ arose from an ordinary
Hilbert modular form), and in Section \ref{sec: BLGG} below we use
the techniques of \cite{BLGGT} (which involve the use of automorphy
lifting theorems for rank 4 unitary groups) to remove these
hypotheses. This gives us $2^n$ Hilbert modular forms of parallel
weight $p$ and level prime to $p$, where there are $n$ places of $F$
above $p$, corresponding to the different possible choices of
Frobenius eigenvalues at places above $p$. In Section \ref{section:
weight one} we take a suitable linear combination of these forms,
and show that it is divisible by the Hasse invariant of parallel weight $p-1$, by explicitly calculating the $p$-th power of the quotient. It is easy to show that the quotient is the sought-after Hilbert modular form of parallel weight one. If we do not assume that $\rhobar$ has
distinct Frobenius eigenvalues at each place dividing $p$, the
weight one form we obtain in this manner is actually zero.
In Section \ref{sec: Artin} we give an application of our
main theorem to Artin's conjecture, generalising the results (and
arguments) of \cite{MR1434905} to prove the following result, which
shows that the weak form of Serre's conjecture for totally real fields
implies the strong form of Artin's conjecture for totally odd
two-dimensional representations.
\begin{ithm}
Let $F$ be a totally real field. Assume that every irreducible,
continuous and totally odd representation
$\rhobar:G_F\to\operatorname{GL}_2(\overline{\F}_p)$ is modular, for every prime $p$. Then
every irreducible, continuous and totally odd representation
$\rho:G_F\to\operatorname{GL}_2({\mathbb{C}})$ is modular.
\end{ithm}
(See Theorem \ref{thm: Serre implies Artin}.) Finally, we remark that
it is possible that our results can be applied to Artin's conjecture
more directly (that is, without assuming Serre's conjecture), as an
input to modularity lifting theorems in parallel weight one; see the
forthcoming work of Calegari--Geraghty for some results in this
direction. Additionally, it would be of interest to generalise our
results to forms of partial weight one; the geometric techniques we
use in this paper amount to determining the intersection of the
kernels of the $\Theta$-operators of \cite{AG}, and it is possible
that a determination of the kernels of the individual
$\Theta$-operators could shed some light on this.
We are grateful to Shu Sasaki for suggesting that our results could be
used to generalise the arguments of \cite{MR1434905} to totally real
fields. We are also grateful to Kevin Buzzard for several helpful
conversations, and to Frank Calegari for asking a question which led
to our writing this paper.
\subsection{Notation} If $M$ is a field, we let $\overline{M}$ denote
an algebraic closure of $M$, and we let $G_M:={\operatorname{Gal}\,}(\overline{M}/M)$
denote its absolute Galois group. Let $p$ be a prime number, and let
$\varepsilon$ denote the $p$-adic cyclotomic character; our choice of
convention for Hodge--Tate weights is that $\varepsilon$ has all
Hodge--Tate weights equal to $1$. Let $F$ be a totally real field and
$f$ a cuspidal Hilbert modular eigenform of parallel weight $k$. If
$v$ is a finite place of $F$ which is coprime to the level of $f$,
then we have, in particular, the usual Hecke operator $T_v$
corresponding to the double coset \[\operatorname{GL}_2(\mathcal{O}_{F_v})
\begin{pmatrix}
\varpi_v&0\\0&1
\end{pmatrix}\operatorname{GL}_2(\mathcal{O}_{F_v}),\]where $\varpi_v$ is a uniformiser of $\mathcal{O}_{F_v}$,
the ring of integers of $F_v$.
There is a Galois representation $\rho_f:G_F\to\operatorname{GL}_2(\overline{\Q}_p)$
associated to $f$; we adopt the convention that if $v\nmid p$ is
as above, and ${\operatorname{Frob}}_v$ is an \emph{arithmetic} Frobenius element
of $G_{F_v}$ then ${\operatorname{tr}\,}\rho_f({\operatorname{Frob}}_v)$ is the $T_v$-eigenvalue of
$f$, so that in particular the determinant of $\rho_f$ is a finite
order character times $\varepsilon^{k-1}$.
We say that
$\rhobar:G_F\to\operatorname{GL}_2(\overline{\F}_p)$ is \emph{modular} if it arises as
the reduction mod $p$ of the Galois representation
$\rho_f:G_F\to\operatorname{GL}_2(\overline{\Q}_p)$ for some $f$.
\section{Modularity lifting in weight $p$}\label{sec: BLGG}\subsection{}In this section we apply the
modularity lifting techniques first used in \cite{gee051} to produce
companion forms in (parallel) weight $p$. We make use of the
techniques of \cite{BLGGT} in order to weaken the hypotheses
(for example, to avoid the necessity of an assumption of
ordinarity). In this section, we do not assume that $p$ is unramified
in $F$. Note that the definition of a mod $p$ modular form is recalled
in Section~\ref{section: weight one} below.
\begin{thm}\label{thm: lifting to weight p by blgg}
Let $p>2$ be prime, let $F$ be a totally real field, and let
$\rhobar:G_F\to\operatorname{GL}_2(\overline{\F}_p)$ be an irreducible modular
representation such that $\rhobar|_{G_{F(\zeta_p)}}$ is irreducible.
If $p=3$ (respectively $p=5$), assume further that the projective image of
$\rhobar(G_{F(\zeta_p)})$ is not conjugate to $\operatorname{PSL}_2({\mathbb{F}}_3)$ (respectively $\operatorname{PSL}_2({\mathbb{F}}_5)$ or
$\operatorname{PGL}_2({\mathbb{F}}_5)$).
Suppose that
for each place $v|p$, $\rhobar|_{G_{F_v}}$ is unramified, and in
fact suppose that \[\rhobar|_{G_{F_v}}\cong
\begin{pmatrix}
\lambda_{\alpha_{1,v}} &0\\0&\lambda_{\alpha_{2,v}}
\end{pmatrix}\]where $\lambda_x$ is the unramified character sending
an arithmetic Frobenius element to $x$. For each place $v|p$, let
$\gamma_v$ be a choice of one of $\alpha_{1,v}$,
$\alpha_{2,v}$. Let $N$ be an integer coprime to $p$ and divisible
by the Artin conductor of $\rhobar$. Then there is a mod $p$ Hilbert modular eigenform $f$ of
parallel weight $p$ such that:
\begin{itemize}
\item $f$ has level $\Gamma_1(N)$; in particular, $f$ has level prime
to $p$.
\item $\rhobar_f\cong\rhobar$.
\item $T_v f =\gamma_vf$ for each place $v|p$.
\end{itemize}
\end{thm}
\begin{proof} It suffices to prove that there is a (characteristic zero) Hilbert
modular eigenform $\mathcal{F}$ of parallel weight $p$ such that $\mathcal{F}$ has level
$N$, the Galois representation $\rho_{\mathcal{F}}$ associated to
$\mathcal{F}$ satisfies $\rhobar_{\mathcal{F}}\cong\rhobar$, and for each place
$v|p$ we have $T_v\mathcal{F}=\gammat_v \mathcal{F}$ for some lift $\gammat_v$ of
$\gamma_v$. (We remark that in the argument below, we will feel free
to let $\gammat_v$ denote \emph{any} lift of $\gamma_v$, rather than
a fixed lift.) By local-global compatibility at places dividing $p$
(cf. Theorem 4.3 of \cite{kisinpst}), it is enough to find a lift
$\rho$ of $\rhobar$ which is automorphic, which is minimally
ramified outside $p$, and has the further property that for each place
$v|p$ we have \[\rho|_{G_{F_v}}\cong
\begin{pmatrix}
\varepsilon^{p-1}\lambda_{x_v} & *\\ 0 & \lambda_{\gammat_v}
\end{pmatrix}\]for some $x_v$ and some $\gammat_v$ (such a
representation is automatically crystalline with Hodge--Tate weights
$0$, $p-1$ at each place $v|p$ and thus corresponds to a Hilbert
modular form of parallel weight $p$ and level prime to $p$).
The existence of such a representation $\rho$ is a straightforward
application of the results of \cite{blgg11-serre-weights-for-U2}, as
we now explain. This argument is very similar to the proof of
Proposition 6.1.3 of \cite{blggord}. Firstly, choose a quadratic
imaginary CM extension $F_1/F$ which splits at all places dividing
$p$ and all places where $\rhobar$ is ramified, and which is
linearly disjoint from $\overline{F}^{\ker\rhobar}(\zeta_p)$ over
$F$. Let $S$ denote a finite set of places of $F$, consisting of the
infinite places, and the union of set of places of $F$ at which
$\rhobar$ is ramified and the places which divide $p$. From now on
we will consider $\rhobar$ as a representation of $G_{F,S}$, the
Galois group of the maximal extension of $F$ unramified outside of
$S$. Let $\chi$ be the Teichm\"uller lift of
$\varepsilonbar^{1-p}\det\rhobar$.
Fix a finite extension $E/\Q_p$ with ring of integers $\mathcal{O}$ and
residue field ${\mathbb{F}}$ such that $\rhobar$ is valued in $\operatorname{GL}_2({\mathbb{F}})$. For
each finite place $v$ of $F$, let $R^\chi_{G_{F_v}}$ denote the
universal $\mathcal{O}$-lifting ring for lifts of $\rhobar|_{G_{F_v}}$ of
determinant $\chi\varepsilon^{p-1}$. By (for example) Lemma 3.1.8 of \cite{GG}, for
each place $v|p$ of $F$ there is a quotient
$R_{G_{F_v}}^{\chi,\gamma}$ of $R_{G_{F_v}}^\chi$ whose
$\overline{\Q}_p$-points correspond precisely to those lifts of
$\rhobar|_{G_{F_v}}$ which are conjugate to a representation of the
form \[\begin{pmatrix} \varepsilon^{p-1}\chi/\lambda_{\gammat_v} & *\\
0 & \lambda_{\gammat_v}
\end{pmatrix}\] for some $\gammat_v$ lifting $\gamma_v$. For each
finite place $v\in S$ with $v\nmid p$, let $\overline{R}_{G_{F_v}}^\chi$ be a
quotient of $R_{G_{F_v}}^\chi$ corresponding to an irreducible
component of $R^\chi_{G_{F_v}}[1/p]$, the points of which correspond to
lifts of $\rhobar|_{G_{F_v}}$ with the same Artin conductor as
$\rhobar|_{G_{F_v}}$. Let $R^{\chi,\gamma}$ denote the universal
deformation ring for deformations of $\rhobar$ of determinant
$\chi\varepsilon^{p-1}$, which have the additional property that for
each place $v|p$, the deformation corresponds to a point of
$R_{G_{F_v}}^{\chi,\gamma}$, and for each finite place $v\in S$,
$v\nmid p$, it corresponds to a point of $\overline{R}_{G_{F_v}}^{\chi}$. In
order to construct the representation $\rho$ that we seek, it is enough
to find a $\overline{\Q}_p$-point of $R^{\chi,\gamma}$ that is
automorphic. We will do this by showing that $R^{\chi,\gamma}$ is a
finite $\mathcal{O}$-algebra of dimension at least one (so that it has
$\overline{\Q}_p$-points), and that all its $\overline{\Q}_p$-points are automorphic.
We can and do extend $\rhobar|_{G_{F_1}}$ to a representation
$\rhobar:G_F\to\mathcal{G}_2({\mathbb{F}})$, where $\mathcal{G}_2$ is the group scheme
introduced in Section 2.1 of \cite{cht} (cf. Section 3.1.1 of
\cite{blggord}). In the notation of Section 2.3 of \cite{cht}, we
let $\widetilde{{S}}$ be a set of places of $F_1$ consisting of exactly one
place ${\widetilde{{v}}}$ above each place $v$ of $S$, and we let $\mathcal{S}$ denote the
deformation
problem \[(F_1/F,S,\widetilde{{S}},\mathcal{O},\rhobar,\varepsilon^{p-1}\chi,\{\overline{R}^\chi_{G_{F_{1,{\widetilde{{v}}}}}}\}_{v\in
S, v\nmid p},\{R_{G_{F_{1,{\widetilde{{v}}}}}}^{\chi,\gamma}\}_{v|p}).\] Let
$R_\mathcal{S}$ denote the corresponding universal deformation ring. Exactly
as in section 7.4 of \cite{GG}, the process of ``restriction to
$G_{F_1}$ and extension to $\mathcal{G}_2$'' makes $R^{\chi,\gamma}$ a
finite $R_\mathcal{S}$-module in a natural way. By Proposition 3.1.4 of
\cite{gee061} we have $\dim R^{\chi,\gamma}\ge 1$, so that (by
cyclic base change for $\operatorname{GL}_2$) it suffices to show that $R_\mathcal{S}$ is
finite over $\mathcal{O}$, and that all its $\overline{\Q}_p$-points are automorphic.
By Proposition A.2.1 of \cite{blgg11-serre-weights-for-U2} and our
assumptions on $\rhobar|_{G_{F(\zeta_p)}}$,
$\rhobar(G_{F(\zeta_p)})$ is adequate in the sense of
\cite{jack}. By Theorems 7.1 and 10.1 of \cite{jack}, it is enough
to check that $R_\mathcal{S}$ has an automorphic $\overline{\Q}_p$-point. We claim
that we can do this by applying Theorem A.4.1 of
\cite{blgg11-serre-weights-for-U2} to $\rhobar$. The only hypotheses
of {\em loc. cit.} that are not obviously satisfied are those
pertaining to potential diagonalizability. By Lemma 3.1.1 of
\cite{blgg11-serre-weights-for-U2}, we may choose a finite solvable
extension $F'/F$ of CM fields which is linearly disjoint from
$\overline{F}^{\ker\rhobar}(\zeta_p)$, such that $\rhobar|_{G_{F'}}$
has an automorphic lift which is potentially diagonalizable at all
places dividing $p$. All $\overline{\Q}_p$-points of each
$R_{G_{F_v}}^{\chi,\gamma}$ are potentially diagonalizable by Lemma
1.4.3 of \cite{BLGGT}, so Theorem A.4.1 of
\cite{blgg11-serre-weights-for-U2} produces a $\overline{\Q}_p$-point of
$R_{\mathcal{S}}$ which is automorphic upon restriction to $G_{F'}$. Since
$F'/F$ is solvable, the result follows by solvable base change
(Lemma 1.4 of \cite{BLGHT}).
\end{proof}
\section{Weight one forms}\label{section: weight one}\subsection{}
Let $p>2$ be a prime number. Let $F/\mathbb{Q}$ be a totally real
field of degree $d>1$, $\mathcal{O}_F$ its ring of integers, and
${\frak{d}}_F$ its different ideal. Let $\mathbb{S}=\{v|p\}$ be the set of all primes lying above $p$. We assume that $p$ is unramified in $F$. Let $N>3$ be an integer prime to $p$.
In this section we make our geometric arguments. We begin by
recalling some standard definitions. Let $K$ be a finite extension of
$\mathbb{Q}_p$ (which we will assume to be sufficiently large without further
comment), and let ${\mathcal{O}}_K$, $\kappa$ denote, respectively, its ring
of integers and its residue field. Let $X/{\mathcal{O}}_K$ be the Hilbert
modular scheme representing the functor which associates to an
${\mathcal{O}}_K$-scheme $S$, the set of all polarized abelian schemes with
real multiplication and $\Gamma_{00}(N)$-structure
$\underline{A}/S=(A/S,\iota,\lambda,\alpha)$ as follows:
\begin{itemize}
\item $A$ is an abelian scheme of relative
dimension $d$ over $S$;
\item the real multiplication $\iota\colon\mathcal{O}_F \hookrightarrow {\rm End}_S(A)$ is a ring
homomorphism endowing $A$ with an action of $\mathcal{O}_F$;
\item the map $\lambda$ is a polarization as in \cite{DP};
\item $\alpha$ is a rigid
$\Gamma_{1}(N)$-level structure, that is, $\alpha\colon \mu_N
\otimes_{\mathbb{Z}} {\frak{d}}_F^{-1} {\; \rightarrow \;} A$, an ${\mathcal{O}}_F$-equivariant
closed immersion of group schemes.
\end{itemize}
Let $X_K/K$, $\overline{X}/\kappa$ respectively denote the generic and the special fibres of $X$.
Let $\widetilde{X}$ denote a toroidal compactification of $X$. Similarly, we define $\widetilde{X}_K$ and $\widetilde{\overline{X}}$.
Let $Y$ be the scheme representing the functor which associates to an ${\mathcal{O}}_K$-scheme $S$, the set of all $(\underline{A}/S,C)=(A/S,\iota,\lambda,\alpha,C)$, where $(A/S,\iota,\lambda,\alpha)$ is as above, and $C$ is an ${\mathcal{O}}_F$-invariant
isotropic finite flat subgroup scheme of $A[p]$ of order $p^d$. Let $\widetilde{Y}$ denote a toroidal compactification of $Y$ obtained using the same choices of polyhedral decompositions as for $\overline{X}$. We introduce the notation $Y_K,\overline{Y},\widetilde{Y}_K,\widetilde{\overline{Y}}$ in the same way as we did for $X$. The ordinary locus in $\overline{X}$ is denoted by $\overline{X}^{ord}$. It is Zariski dense in $\overline{X}$.
There are two finite flat maps
\[
\pi_{1},\pi_2:\widetilde{Y} {\; \rightarrow \;} \widetilde{X},
\]
where $\pi_1$ forgets the subgroup $C$, and $\pi_2$ quotients out by $C$. We define the Atkin--Lehner involution $w:\widetilde{Y} \rightarrow \widetilde{Y}$ to be the map which sends $({\underline{A}},C)$ to $({\underline{A}}/C, A[p]/C)$; it is an automorphism of $\widetilde{Y}$. We have $\pi_2=\pi_1 \circ w$. We also define the Frobenius section $s:\widetilde{\overline{X}} \rightarrow \widetilde{\overline{Y}}$ which sends ${\underline{A}}$ to $({\underline{A}},{\rm Ker}({\operatorname{Frob}}_{A}))$. Our convention is to use the same notation to denote maps between the various versions of $X,Y$.
Let $\epsilon: \underline{\mathcal A}^{\rm univ} {\; \rightarrow \;} X$ be the universal abelian scheme. Let
\[
\underline{\Omega}=\epsilon_\ast \Omega^1_{\underline{\mathcal A}^{\rm
univ}/X}
\]
be the Hodge bundle on $X$. Since $p$ is assumed unramified in $F$, $\underline{\Omega}$ is a locally free $({\mathcal{O}}_F \otimes_\mathbb{Z} {\mathcal{O}}_X)$-module of rank one. We define $\underline{\omega}=\wedge^d \underline{\Omega}$. The sheaf $\underline{\omega}$ naturally extends to $\widetilde{X}$ as an invertible sheaf. Let $\epsilon^\prime: \underline{\mathcal B}^{\rm univ} {\; \rightarrow \;} Y$ be the universal abelian scheme over $Y$ with the designated subgroup $\mathcal C$. We have
\[
\pi_1^\ast\underline{\omega}=\wedge^d \epsilon^\prime_\ast \Omega^1_{{\mathcal B}^{\rm
univ}/Y},
\]
\[
\pi_2^\ast\underline{\omega}=\wedge^d \epsilon^\prime_\ast \Omega^1_{ ({\mathcal B}^{\rm
univ}/{\mathcal C})/Y }.
\]
Let
\[
\rm{pr}^\ast: \pi_1^\ast \underline{\omega} \rightarrow \pi_2^\ast\underline{\omega}
\]
denote the pullback under the natural projection ${\rm{pr}}: \underline{\mathcal B}^{\rm univ} \rightarrow \underline{\mathcal B}^{\rm univ}/{\mathcal C}$. We will often denote $\pi_1^\ast \underline{\omega}$ by $\underline{\omega}$.
Let $R$ be an ${\mathcal{O}}_K$-algebra. A (geometric) Hilbert modular form of parallel weight $k \in \mathbb{Z}$ and level $\Gamma_1(N)$ defined over $R$ is a section of $\underline{\omega}^k$ over $X\otimes_{{\mathcal{O}}_K} R$. Every such section extends to $\widetilde{X} \otimes_{{\mathcal{O}}_K} R$ by the Koecher principle. We denote the space of such forms by $M_k(\Gamma_1(N),R)$, and the subspace of cuspforms (those sections vanishing on the cuspidal locus) by $S_k(\Gamma_1(N),R)$. If $R$ is a $\kappa$-algebra, then elements of $M_k(\Gamma_1(N),R)$ are referred to as \emph{mod $p$ Hilbert modular forms}. Every such form is a section of $\underline{\omega}^k$ over $\overline{X}\otimes_{\kappa} R$, and extends automatically to $\widetilde{\overline{X}} \otimes_{\kappa} R$.
Given any fractional ideal ${\frak{a}}$ of ${\mathcal{O}}_F$, let $X_{\frak{a}}$ denote the subscheme of $X$ where the polarization module of the abelian scheme is isomorphic to $({\frak{a}},{\frak{a}}^+)$ as a module with notion of positivity. Then $X_{\frak{a}}$ is a connected component of $X$, and every connected component of $X$ is of the form $X_{\frak{a}}$ for some ${\frak{a}}$. The same statement is true for $Y_{\frak{a}}$, $Y$. Let
\[
Ta_{{\frak{a}}}=\underline{(\mathbb{G}_m\! \otimes\! {\frak{d}}_F^{-1})/q^{{\frak{a}}^{-1}}}
\]
denote a cusp on $X_{\frak{a}}$, where underline indicates the inclusion of standard PEL structure. Pick $c \in p^{-1}{\frak{a}}^{-1}-{\frak{a}}^{-1}$. Then
\[
Ta_{{\frak{a}},c}=(Ta_{\frak{a}},<\! q^c\!>)
\]
is a cusp on $Y_{\frak{a}}$, where $<\!q^c\!>$ denotes the ${\mathcal{O}}_F$-submodule of $Ta_{\frak{a}}[p]$ generated by $q^c$.
We now prove our main result, the following theorem.
\begin{thm}\label{thm: main result} Let $p>2$ be a prime which is
unramified in $F$, a totally real field. Let
$\rhobar:G_F\to\operatorname{GL}_2(\overline{\F}_p)$ be an irreducible modular
representation such that $\rhobar|_{G_{F(\zeta_p)}}$ is irreducible.
If $p=3$ (respectively $p=5$), assume further that the projective
image of $\rhobar(G_{F(\zeta_p)})$ is not conjugate to
$\operatorname{PSL}_2({\mathbb{F}}_3)$ (respectively $\operatorname{PSL}_2({\mathbb{F}}_5)$ or $\operatorname{PGL}_2({\mathbb{F}}_5)$).
Suppose that
for each prime $v|p$, $\rhobar|_{G_{F_v}}$ is unramified, and that
the eigenvalues of $\rhobar({\operatorname{Frob}}_v)$ are distinct.
Then there is a mod $p$ Hilbert modular form $h$ of
parallel weight $1$ and level prime to $p$ such that
$\rhobar_h\cong\rhobar$. Furthermore, $h$ can be chosen to have
level bounded in terms of the Artin conductor of $\rhobar$.
\end{thm}
\begin{proof}
For each $v\in \mathbb{S}$, let the eigenvalues of $\rhobar({\operatorname{Frob}}_v)$ be
$\gamma_{v,1}\ne\gamma_{v,2}$. Let $\mathfrak{N}$ denote the Artin
conductor of $\rhobar$, and $N>3$ an integer prime to $p$ and divisible by $\mathfrak{N}$. By Theorem \ref{thm: lifting to weight p
by blgg}, we see that for each subset $I\subset \mathbb{S}$, there is a
mod $p$ Hilbert modular eigenform $f_I$ of weight $p$ and level
$\Gamma_1(N)$ such that $\rhobar_{f_I}\cong\rhobar$, and for each prime
$v\in \mathbb{S}$, we have $T_vf_I=\gamma_{v,1}f_I$ if $v\in I$, and
$T_vf_I=\gamma_{v,2}f_I$ otherwise. Since $\rhobar_{f_I}\cong\rhobar$,
we see that for each prime ${\frak{l}} \nmid Np$ of $F$, the $f_I$ are
eigenvectors for $T_{\frak{l}}$ with eigenvalues $\lambda_{\frak{l}}$ which are
independent of $I$. By a standard argument, using Proposition 2.3
of \cite{MR507462}, we can furthermore assume (at the possible
cost of passing to forms of level $N^2$) that for each prime
${\frak{l}}|N$, we have $T_{\frak{l}} f_I=0$ for all $I$.
We can and do assume that each $f_I$ is normalised, in the sense
that (in the notation of \cite{MR507462}) $c(\mathcal{O}_F,f_I)=1$. For
any $I\subset \mathbb{S}$, let $\gamma_I=\prod_{v \in I}
\gamma_{v,1} \prod_{v \not\in I} \gamma_{v,2}$; this is the $T_p$-eigenvalue of $f_I$. Set
\[
f=\sum_{I\subset \mathbb{S}}(-1)^{|I|}\gamma_I f_I,
\]
\[
g=\sum_{I\subset \mathbb{S}}(-1)^{|I|}f_I.
\]
We begin with a Lemma.
\begin{lemma}\label{Lemma: q-expansion} The section $\pi_1^\ast f-{\rm pr}^\ast \pi_2^\ast g$ of $\underline{\omega}^p$ on $\widetilde{Y}$ has $q$-expansion divisible by $p$ at every cusp of the form $Ta_{{\frak{a}},c}$.
\end{lemma}
\begin{proof} We let $\eta$ denote a generator of the sheaf $\underline{\omega}$ on the base of $Ta_{\frak{a}}$ or $Ta_{{\frak{a}},c}$. We first remark that by \cite[(2.23)]{MR507462}, if $h$ is a normalized Hilbert modular eigenform of
parallel weight $k$, and $h(Ta_{\frak{a}})=\sum_{\xi \in ({\frak{a}}^{-1})^+}
c_\xi q^\xi\eta^k$, then $c_\xi=c(\xi{\frak{a}},h)$ is the eigenvalue of the $T_{\xi{\frak{a}}}$ operator on $h$, for all $\xi\in ({\frak{a}}^{-1})^+$.
Write $f(Ta_{{\frak{a}}})=\sum_{\xi \in ({\frak{a}}^{-1})^+} a_\xi({\frak{a}}) q^\xi \eta^p$ and $g(Ta_{{\frak{a}}})=\sum_{\xi \in ({\frak{a}}^{-1})^+} b_\xi({\frak{a}}) q^\xi \eta^p$. We have:
\[
\pi_1^\ast f(Ta_{{\frak{a}},c})=f(Ta_{{\frak{a}}})=\sum_{\xi \in ({\frak{a}}^{-1})^+} a_\xi({\frak{a}}) q^\xi \eta^p,
\]
\[
{\rm pr}^\ast \pi_2^\ast g(Ta_{{\frak{a}},c})={\rm pr}^\ast g(Ta_{p{\frak{a}}})=\sum_{\xi \in p^{-1}({\frak{a}}^{-1})^+} b_\xi(p{\frak{a}}) q^\xi \eta^p.
\]
It is therefore enough to show that $b_\xi(p{\frak{a}})=0$ if $\xi \in
p^{-1}({\frak{a}}^{-1})^+-({\frak{a}}^{-1})^+$, and that $a_\xi({\frak{a}})\equiv
b_\xi(p{\frak{a}})\ {\rm mod}\ p$ for $\xi \in ({\frak{a}}^{-1})^+$.
For the first statement, let $v\in \mathbb{S}$ be such that $v$ does not divide $\xi p{\frak{a}}$. Then we can write
\[
b_\xi(p{\frak{a}})=\sum_{I\subset \mathbb{S}} (-1)^{|I|} c(\xi p{\frak{a}},f_I)=\sum_{v \not\in I}(-1)^{|I|}\left( c(\xi p{\frak{a}},f_I) - c(\xi p{\frak{a}},f_{I\cup\{v\}})\right)=0,
\]
using $c(\xi p{\frak{a}},f_I)=c(\xi p{\frak{a}},f_{I \cup \{v\}})$, since $v$
does not divide $\xi p {\frak{a}}$.
For the second statement, note firstly that for $\xi \in ({\frak{a}}^{-1})^+$ we can write
\[
b_\xi(p{\frak{a}})=\sum_{I \subset \mathbb{S}} (-1)^{|I|}c(\xi
p{\frak{a}},f_I),\]\[a_\xi({\frak{a}})= \sum_{I \subset \mathbb{S}}
(-1)^{|I|}\gamma_I c(\xi {\frak{a}},f_I).
\]
Now, if $h$ is a Hilbert modular eigenform, then for any integral
ideal ${\frak{m}}$ of ${\mathcal{O}}_F$, we have $c(p{\frak{m}},h)\equiv c((p),h)
c({\frak{m}},h)\ {\rm mod}\ p$. Since for any $I \subset \mathbb{S}$, we
have $c((p),f_I)=\gamma_I$, the result follows.
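(For completeness: this congruence can be deduced from the standard recursion for the Fourier coefficients of a Hilbert modular eigenform of level prime to $p$, cf.\ \cite{MR507462}. For a prime ${\frak{p}}|p$ one has
\[
c({\frak{p}}{\frak{m}},h)=c({\frak{p}},h)c({\frak{m}},h)-{\rm N}{\frak{p}}^{k-1}\,\psi({\frak{p}})\,c({\frak{p}}^{-1}{\frak{m}},h)
\]
for a suitable character $\psi$, and the last term vanishes mod $p$ because ${\rm N}{\frak{p}}$ is a power of $p$ and $k=p\geq 2$; applying this at each prime above $p$ gives the claim.)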
\end{proof}
For any section $h \in H^0(\widetilde{X},\underline{\omega}^k)$, we denote its image in $H^0(\widetilde{\overline{X}}, \underline{\omega}^k)$ by $\bar{h}$.
\begin{cor}\label{Corollary: equality of section on Xbar} We have the following equality of sections of $\underline{\omega}^p$ on $\widetilde{\overline{X}}$:
\[
s^\ast \pi_2^\ast \bar{f}=s^\ast w^\ast {\rm pr}^\ast \pi_2^\ast \bar{g}.
\]
\end{cor}
\begin{proof} Reducing the equation in Lemma \ref{Lemma: q-expansion}
mod $p$, we obtain an equality of sections on every irreducible
component of $\widetilde{\overline{Y}}$ which contains the reduction of a
cusp of the form $Ta_{{\frak{a}},c}$. These are exactly the irreducible
components of $ws(\widetilde{\overline{X}})$, since
$w(Ta_{{\frak{a}},c})=(Ta_{p{\frak{a}}},<\!\zeta\!>)=s(Ta_{p{\frak{a}}})$ (where
$<\!\zeta\!>$ is the ${\mathcal{O}}_F$-module generated by a $p$-th root of
unity $\zeta$). Pulling back under $w\circ s$, we obtain the desired equality on $\widetilde{\overline{X}}$.
\end{proof}
For any scheme $Z$ over $\kappa$, let ${\rm Fr}: Z {\; \rightarrow \;} Z^{(p)}$ denote the relative Frobenius morphism. Since $\widetilde{\overline{X}}$ has a model over $\mathbb{F}_p$, we have $\widetilde{\overline{X}}^{(p)}=\widetilde{\overline{X}}$. For any non-negative integer $k$, we define a morphism
\[
V: H^0(\widetilde{\overline{X}},\underline{\omega}^k) \rightarrow H^0(\widetilde{\overline{X}},\underline{\omega}^{kp})
\]
as follows: choose a trivialization $\{(U_i,\eta_i)\}$ for $\underline{\omega}$ on $\widetilde{\overline{X}}$. Let $f\in H^0(\widetilde{\overline{X}},\underline{\omega}^k)$ be given by $f_i \eta_i^k$ on $U_i$. Then, there is a unique section $V(f)$ in $H^0(\widetilde{\overline{X}},\underline{\omega}^{kp})$ whose restriction to $U_i$ is ${\rm Fr}^\ast (f_i) \eta_i^{kp}$.
Calculating on points, we see easily that for $\widetilde{\overline{X}}$, we
have $\pi_2\circ s={\rm Fr}$. Let $\bf{h}$ denote the Hasse invariant
of parallel weight $p-1$. It can be defined as follows: let $U$ be an open subset of $\overline{X}$ over which $\omega$ is trivial, and let $A_U$ denote the universal abelian scheme over $U$. Let $\eta$ be a non vanishing section of $\omega$ on $U$; it can be thought of as a section of $\Omega_{A_U/U}$. We let $\eta^{(p)}$ denote the induced section of $\Omega_{A_U^{(p)}/U}$. Let ${\rm Ver}: A_U^{(p)} \rightarrow A_U$ be the Verschiebung morphism. Then, there is a unique $\lambda \in {\mathcal{O}}_{\overline{X}}(U)$, such that ${\rm Ver}^\ast \eta=\lambda \eta^{(p)}$. We define a section of $\omega^{p-1}$ on $U$ via
\[
{\bf h}_{U,\eta}:=\lambda\eta^{p-1}.
\]
It is easy to see that there is a unique section of $\omega^{p-1}$ on $\overline{X}$, denoted ${\bf h}$, such that ${\bf h}_{|_U}={\bf h}_{U,\eta}$ for any choice of $(U,\eta)$ as above. See \cite[\S 7.11]{AG} for an equivalent construction.
\begin{prop} We have $V(\bar{f})=V({\bf h})\bar{g}$ as sections of $\underline{\omega}^{p^2}$ on $\overline{X}$. Furthermore, $\bar{f}$ is divisible by ${\bf h}$, and $\bar{f}/{\bf h}$ is a mod $p$ Hilbert modular form of parallel weight one defined over $\kappa$.
\end{prop}
\begin{proof} Let $U$ be an open subset of $\overline{X}$ over which $\underline{\omega}$ is trivializable, and $\eta$ a non-vanishing section of $\underline{\omega}$ over $U$. We claim that if ${\bf h}=\lambda \eta^{p-1}$ on $U$, then
\[
\lambda s^\ast \pi_2^\ast \eta= s^\ast w^\ast {\rm pr}^\ast \pi_2^\ast \eta.
\]
Evaluating both sides at a point corresponding to ${\underline{A}}$, we need to show that for the natural projection
\begin{eqnarray}\label{Equation: Verschiebung projection}
{\rm pr}: A/{{\rm Ker}({\rm Frob}_A)} \rightarrow A/A[p] \cong A,
\end{eqnarray}
we have ${\rm pr}^\ast \eta=\lambda \eta^{(p)}$, which follows from the definition of the Hasse invariant, since ${\rm pr}$ is the Verschiebung morphism of $A$.
Now, writing $\bar{f}=F \eta^p$ and $\bar{g}=G\eta^p$ on $U$, and using the above claim, over $U$, we can write
\[
\lambda^p s^\ast \pi_2^\ast \bar{f}=(s^\ast \pi_2^\ast F)(\lambda^p s^\ast \pi_2^\ast \eta^p)={\operatorname{Fr}}^\ast(F)( s^\ast w^\ast {\rm pr}^\ast \pi_2^\ast \eta^p).
\]
On the other hand, we have
\[
\lambda^ps^\ast w^\ast {\rm pr}^\ast \pi_2^\ast \bar{g}=\lambda^p(s^\ast w^\ast \pi_2^\ast G) (s^\ast w^\ast {\rm pr}^\ast \pi_2^\ast \eta^{p})=\lambda^pG(s^\ast w^\ast {\rm pr}^\ast \pi_2^\ast \eta^{p}).
\]
At a point corresponding to an {\it ordinary} abelian variety $A$, the section $s^\ast w^\ast {\rm pr}^\ast \pi_2^\ast \eta$ specializes to ${\operatorname{pr}\,}^\ast \eta$, where ${\operatorname{pr}\,}$ is as in (\ref{Equation: Verschiebung projection}), and is, hence, non vanishing. Corollary \ref{Corollary: equality of section on Xbar} now implies that over $\overline{X}^{ord} \cap U$ we have
\[
{\rm Fr}^\ast(F)=\lambda^p G={\rm
Fr^\ast}(\lambda)G,\]
the last equality because $\bf h$ is defined on a model of $\overline{X}$ over $\mathbb{F}_p$. Running over a trivializing open covering of $\overline{X}$ for $\omega$, we conclude that $V(\bar{f})=V({\bf
h})\bar{g}$ on $\overline{X}^{ord}$. Since $\overline{X}^{ord}$ is Zariski dense in $\overline{X}$, it follows that
\[
V(\bar{f})=V({\bf h})\bar{g}
\]
as sections of $\omega^{p^2}$ over $\overline{X}$.
Viewing $F/\lambda$ as a function on the ordinary part of $U$, we need
to show that it extends to all of $U$. Since $U$ is smooth over
$\kappa$, it is enough to show that the Weil divisor of $F/\lambda$
is effective. But the coefficients appearing in that divisor are the
coefficients of the Weil divisor of $G$ divided by $p$. Since $G$
has an effective Weil divisor, so does $F/\lambda$, and hence $F$ is
divisible by $\lambda$ on $U$. Repeating this argument over an open
covering of $\widetilde{\overline{X}}$, we obtain that $\bar{f}$ is divisible
by $\bf h$, and $\bar{f}/{\bf h}$ is a mod $p$ Hilbert modular form
of parallel weight one defined over $\kappa$, as required.
\end{proof}
We can now finish the proof of Theorem \ref{thm: main result}. The desired mod $p$ Hilbert modular form of parallel weight one $h$ is $\bar{f}/{\bf h}$. Since ${\bf h}$ has $q$-expansion $1$ at all unramified cusps, it follows that $h$ satisfies the desired assumptions.
\end{proof}
\section{Serre's Conjecture implies Artin's Conjecture}\label{sec:
Artin}\subsection{}In this final section we generalise the arguments of
\cite{MR1434905} to show that for a fixed totally real field $F$, the
weak form of Serre's conjecture for $F$ implies the strong form of
Artin's conjecture for two-dimensional totally odd representations
over $F$. To
be precise, the weak version of Serre's conjecture that we have in
mind is the following (cf. Conjecture 1.1 of \cite{bib:BDJ}, where it
is described as a folklore conjecture).
\begin{conj}\label{conj:Serre}
Suppose that $\rhobar:G_F\to\operatorname{GL}_2(\overline{\F}_p)$ is continuous,
irreducible and totally odd. Then $\rhobar\cong\rhobar_f$ for some
Hilbert modular eigenform $f$.
\end{conj}
Meanwhile, we have the following strong form of Artin's conjecture.
\begin{conj}
\label{conj:Artin}Suppose that $\rho:G_F\to\operatorname{GL}_2({\mathbb{C}})$ is continuous,
irreducible and totally odd. Then $\rho\cong\rho_f$ for some
Hilbert modular eigenform $f$ (necessarily of parallel weight one).
\end{conj}
In order to show that Conjecture \ref{conj:Serre} implies Conjecture
\ref{conj:Artin}, we follow the proof of Proposition 1 of
\cite{MR1434905}, using Theorem \ref{thm: main result} in place of the
results of Gross and Coleman--Voloch used in \cite{MR1434905}. The
argument is slightly more involved than in \cite{MR1434905}, because
we have to be careful to show that the $p$-distinguishedness
hypothesis in Theorem \ref{thm: main result} is satisfied.
\begin{thm}\label{thm: Serre implies Artin}
Fix a totally real field $F$. Then Conjecture \ref{conj:Serre}
implies Conjecture \ref{conj:Artin}.
\end{thm}
\begin{proof}Suppose that $\rho:G_F\to\operatorname{GL}_2({\mathbb{C}})$ is continuous,
irreducible and totally odd. Then $\rho(G_F)$ is finite, so after
conjugation we may assume that $\rho:G_F\to\operatorname{GL}_2(\mathcal{O}_K)$, where
$\mathcal{O}_K$ is the ring of integers in some number field $K$. We will
show that there are a fixed integer $N$ and infinitely many rational
primes $p$ such that for each such $p$, if $\rhobar_p$ denotes the
reduction of $\rho$ mod $p$ (or rather, modulo a prime of $\mathcal{O}_K$
above $p$), then $\rhobar_p$ arises from the reduction mod $p$ of the
Galois representation associated to an eigenform in
$S_1(\Gamma_1(N),{\mathbb{C}})$, the space of cuspidal Hilbert modular forms of parallel weight one and level $\Gamma_1(N)$ over $\mathbb{C}$. Since
$S_1(\Gamma_1(N),{\mathbb{C}})$ is finite-dimensional, there are only finitely many
such eigenforms, so we see that one eigenform $f$ must work for
infinitely many $p$; but then it is easy to see that
$\rho\cong\rho_f$, as required.
Firstly, we claim that it suffices to prove that there is a fixed
$N$ and infinitely many primes $p$ such that $\rhobar\cong\rhobar_f$
for some eigenform $f\in S_1(\Gamma_1(N),\overline{\F}_p)$ (using the
notation of Section \ref{section: weight one}). To see this, note
that for all but finitely many primes $p$, the finitely generated ${\mathbb{Z}}$-module
$H^1(X,\underline{\omega})$ is $p$-torsion free, so that for all but
finitely many $p$ the reduction map
$H^0(X,\underline{\omega})\to
H^0(\overline{X},\underline{\omega})$ is surjective, and the
Deligne--Serre lemma (Lemma 6.11 of \cite{deligne-serre}) allows us
to lift from $S_1(\Gamma_1(N),\overline{\F}_p)$ to $S_1(\Gamma_1(N),{\mathbb{C}})$.
We are thus reduced to showing that there are infinitely many primes
$p$ for which $\rhobar_p$ satisfies the hypotheses of Theorem
\ref{thm: main result}. Firstly, note that there is at most one prime
$p$ for which $\rho|_{G_{F(\zeta_p)}}$ is reducible, so if we exclude
any such prime, as well as the (finitely many) primes dividing
$\#\rho(G_F)$, the primes which ramify in $F$, and the
primes less than $7$, then $\rhobar_p$ will satisfy the requirements
of the first paragraph of Theorem \ref{thm: main result}.
If we also exclude the finitely many primes $p$ for which
$\rho|_{G_{F_v}}$ is ramified for some $v|p$, we see that it is enough
to show that there are infinitely many $p$ such that for all $v|p$,
$\rhobar_p({\operatorname{Frob}}_v)$ has distinct eigenvalues.
In fact, we claim that it is enough to see that there are infinitely
many $p$ such that for all $v|p$, $\rho({\operatorname{Frob}}_v)$ is not scalar. To
see this, suppose that $\rho({\operatorname{Frob}}_v)$ is not scalar, but
$\rhobar_p({\operatorname{Frob}}_v)$ is scalar. Then it must be the case that the
difference of the eigenvalues of $\rho({\operatorname{Frob}}_v)$ is divisible by some
prime above $p$. Now, there are only finitely many non-scalar elements
in $\rho(G_F)$, and for each of these elements there are only finitely
many primes dividing the difference of their eigenvalues, so excluding
this finite set of primes gives the claim.
Let ${\operatorname{proj}} \rho$ be the projective representation
${\operatorname{proj}}\rho:G_F\to\operatorname{PGL}_2({\mathbb{C}})$ obtained from $\rho$. We must show that
there are infinitely many primes $p$ such that for each place $v|p$ of
$F$, ${\operatorname{proj}}\rho({\operatorname{Frob}}_v)\ne 1$. Letting
$L=\overline{F}^{\ker{\operatorname{proj}}\rho}$, we must show that there are
infinitely many primes $p$ such that no place $v|p$ of $F$ splits
completely in $L$. Let $M$ be the normal closure of $F$ over ${\mathbb{Q}}$, and
$N$ the normal closure of $L$ over ${\mathbb{Q}}$. Since $\rho$ is totally odd,
we see that $M$ is totally real and $N$ is totally imaginary. Consider
a complex conjugation $c\in{\operatorname{Gal}\,}(N/{\mathbb{Q}})$. By the Cebotarev density
theorem there are infinitely many primes $p$ such that ${\operatorname{Frob}}_p$ is
conjugate to $c$ in ${\operatorname{Gal}\,}(N/{\mathbb{Q}})$, and it is easy to see that each such
prime splits completely in $M$ and thus in $F$, and that no place $v|p$ of $F$ splits
completely in $L$, as required.
\end{proof}
\bibliographystyle{amsalpha}
\section{Introduction}
Time Series Classification (TSC) involves building models to predict a discrete target class from ordered input variables. Over recent years, new TSC algorithms have made significant improvement over the previous state of the art; however, many works were limited to univariate data, which consists of only a single input channel. Recent work has brought attention to Multivariate Time Series (MTS), where each example of data has multiple channels of information over time. However, MTS datasets are often small, with some training sets smaller than 100 examples, see \cite{bagnall2018uea}. This severely disadvantages deep learning methods, such as InceptionTime \cite{fawaz2020inceptiontime}, that have millions of parameters and can easily overfit on small datasets.
Recent work by \cite{ruiz2021great} has shown that non-deep learning methods achieve higher MTS classification accuracy on a suite of benchmarks, citing these disadvantages. However, this work demonstrates that deep learning methods can outperform non-deep methods with proper data augmentation. With the augmentations presented in this work, the InceptionTime network (introduced by \cite{fawaz2020inceptiontime}) exceeds the state-of-the-art classification accuracy for 13 of the 26 benchmark datasets from \cite{bagnall2018uea} that were examined by \cite{ruiz2021great} and matches it on another 2 datasets.
Comparison studies, such as the one by \cite{ruiz2021great}, often utilize multiple MTS datasets to compare multiple methods. When considering MTS classification methods for practical application, these studies may be biased towards methods that avoid overfitting on small datasets. By adding augmentations for deep networks, it is possible to reduce this bias and provide more accurate guidance for the application of methods to real world problems.
In this paper, we evaluate four MTS augmentations on the 26 benchmark datasets using three models. We show statistically significant improvements to classification accuracy when using augmentations. Additionally, we demonstrate that these four augmentations are applicable across a wide variety of dataset types and deep learning models. Furthermore, we provide theories as to why certain augmentations perform well on certain datasets and others do not. In addition to the above, we provide all the code necessary to reproduce the results and a Jupyter Notebook that anyone can run using Google Colab for replication\footnote{https://github.com/2022ICMLRAMTSC/CODE}. This repository was created anonymously for the double-blind review process.
\section{Related Work}
\subsection{Time Series Augmentation}
Augmentation in time series data is a developing area of research. \cite{wen2020time} identifies 3 major time series tasks for data augmentation: classification, forecasting, and anomaly detection. They also note that augmentation methods can be broadly split into two categories, basic and advanced. The authors of this paper interpret basic augmentation methods as those that are applied to a batch without information from outside of the batch being augmented, while advanced augmentation methods require outside-of-batch information. For example, a flip augmentation applied to a batch without being influenced by any data outside of the batch should be considered basic. The Feature Space augmentation described by \cite{devries2017dataset} would be considered advanced, as it requires a feature space computed outside of the batch.
\begin{figure}[H]
\centering
\includegraphics[width=0.47\textwidth]{augs.JPG}
\caption{Examples of each augmentation examined in this paper in action. Blue indicates the targeted section of the time series and red indicates how the time series changed. For cutmix and mixup, purple indicates a reference from another time series, where the purple values are used in calculating the augmentation.}
\label{fig:augs}
\end{figure}
While more detailed taxonomies of time series augmentations exist, see \cite{wen2020time} and \cite{iwana2021empirical}, the separation into basic and advanced is sufficient for this paper, which only deals with basic augmentations. The work by \cite{wen2020time} provides extremely limited empirical results, and the empirical results from \cite{iwana2021empirical} and \cite{le2016data} are limited to univariate datasets. There is a lack of extensive empirical evidence as to the effectiveness of augmentations on MTS datasets.
\subsection{Image Augmentation}
Numerous methods have been developed for the augmentation of image data, and with over 2000 citations recorded on Google Scholar, the survey of image augmentations by \cite{shorten2019survey} shows much greater interest than similar surveys in time series augmentation \cite{wen2020time} \cite{iwana2021empirical}. Before the popularization of transformers for vision by \cite{dosovitskiy2020image}, most computer vision tasks involved convolutional neural networks, with augmentations designed to improve the robustness of such networks. Similarly, many TSC networks utilize convolutions, suggesting that image augmentation methods may also apply to time series data.
\subsection{Time Series Classification}
Several methods have been developed for MTS classification; for a review, see \cite{fawaz2019deep}. Neural network MTS classifiers tend to utilize some combination of Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) methods \cite{karim2017lstm}, or Temporal CNN (TCNN) methods \cite{wang2017time}. Attention based models have also been used in time series, see \cite{song2018attend,yang2021predictive}. These methods involve deep neural networks with millions of parameters, which often leads to overfitting on smaller time series datasets.
\section{Augmentations}
The augmentations examined in this work were implemented for TPU compilation, which limits the available operations in comparison to GPU and CPU augmentations. These augmentations had marginal impact on training time, which is expected because augmentation occurs before a batch of data is placed on the GPU or TPU. Figure \ref{fig:augs} shows an example of how the augmentations modify a time series. To the authors' knowledge, window warping has not yet been investigated as an augmentation for time series data, and cutout, cutmix and mixup have not been mentioned in TSC literature prior to \cite{yang2021predictive}.
\subsection{Window Warp}
Window warping is a data augmentation technique that stretches or shrinks data in the temporal dimension, as described by \cite{le2016data}. After extracting a slice of the time series, the slice is resized using bilinear interpolation. The slice is reinserted into the same location, resulting in a lengthened or shortened time series. This paper's implementation crops or zero-pads the end of the sequence back to its original length after window warping. Bilinear interpolation is implemented using Tensorflow's image resize function by reshaping the time series from a shape of $(l, c)$ to $(1, l, c)$. The position of the window is randomly determined. The size of the window is randomly determined and ranges between the max and min hyperparameter values. The scale factor is a hyperparameter, with 2.0 indicating a doubling of the length of the window. This operation applies across all channels.
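As a rough illustration of this procedure, the following minimal NumPy sketch (ours, not the repository's TPU implementation; function names, defaults, and the use of linear interpolation in place of the bilinear image resize are our own simplifications) warps a single series of shape $(l, c)$:
\begin{verbatim}
import numpy as np

def window_warp(x, scale=2.0, min_frac=1/8, max_frac=1/3,
                rng=None):
    # x: array of shape (length, channels).
    rng = rng or np.random.default_rng()
    l, c = x.shape
    # Random window size and position, as described above.
    w = max(2, int(rng.uniform(min_frac, max_frac) * l))
    start = int(rng.integers(0, l - w))
    new_w = max(1, int(w * scale))
    # Interpolate each channel of the window to the new length.
    src = np.linspace(0.0, w - 1.0, new_w)
    window = x[start:start + w]
    warped = np.stack(
        [np.interp(src, np.arange(w), window[:, ch])
         for ch in range(c)], axis=1)
    out = np.concatenate([x[:start], warped, x[start + w:]],
                         axis=0)
    # Crop or zero-pad the end back to the original length.
    if out.shape[0] >= l:
        return out[:l]
    return np.pad(out, ((0, l - out.shape[0]), (0, 0)))
\end{verbatim}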
\subsection{Cutout}
Cutout is an image augmentation technique that removes contiguous sections of data, as described by \cite{devries2017improved}. Temporal cutout selects a random time segment and set of channels from a MTS and sets selected values to 0. The size of the random time segment is randomly determined and ranges between the max and min hyperparameter values. The chance of each channel being selected for this operation is a hyperparameter called \emph{channel probability}.
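A minimal NumPy sketch of temporal cutout (ours; names and defaults are illustrative) could read:
\begin{verbatim}
import numpy as np

def cutout(x, min_frac=0.5, max_frac=1.0,
           channel_prob=0.3, rng=None):
    # x: array of shape (length, channels).
    rng = rng or np.random.default_rng()
    l, c = x.shape
    w = int(rng.uniform(min_frac, max_frac) * l)
    start = int(rng.integers(0, max(1, l - w)))
    # Each channel is selected independently with
    # probability channel_prob.
    mask = rng.random(c) < channel_prob
    out = x.copy()
    out[start:start + w, mask] = 0.0
    return out
\end{verbatim}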
\subsection{Mixup}
Mixup is an image augmentation technique that combines two examples of data, as described by \cite{zhang2017mixup}. Temporal mixup multiplies a selected set of channels in the first MTS by a value, $m$, and then adds the values from the same set of channels of a randomly selected second MTS of any label multiplied by $1 - m$. The value of $m$ is randomly selected between a min and max hyperparameter. In this implementation the operation applies to all channels, and the resulting label is the average of the two series' labels.
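A corresponding NumPy sketch of temporal mixup (ours; illustrative names) is:
\begin{verbatim}
import numpy as np

def mixup(x1, y1, x2, y2, m_min=0.0, m_max=1.0, rng=None):
    # x1, x2: arrays of shape (length, channels);
    # y1, y2: one-hot label vectors.
    rng = rng or np.random.default_rng()
    m = rng.uniform(m_min, m_max)
    x = m * x1 + (1.0 - m) * x2   # applied to all channels
    y = 0.5 * (y1 + y2)           # labels are averaged
    return x, y
\end{verbatim}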
\subsection{CutMix}
Cutmix is an image augmentation technique that combines two examples of data, as described by \cite{yun2019cutmix}. Temporal cutmix selects a random time segment from the first MTS and a random time segment of a random second MTS of any label. Note that the two random segments start and end at the same time step for both sequences. It then selects a random set of channels and replaces the first MTS's segment's channel values with those of the second. The size of the random time segment is randomly determined and ranges between the max and min hyperparameter values. The chance of each channel being selected for this operation is a hyperparameter called \emph{channel probability}.
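And a matching NumPy sketch of temporal cutmix (ours; illustrative names and defaults):
\begin{verbatim}
import numpy as np

def cutmix(x1, x2, min_frac=0.5, max_frac=1.0,
           channel_prob=0.3, rng=None):
    # x1, x2: arrays of shape (length, channels); the donor
    # segment of x2 is aligned in time with the target in x1.
    rng = rng or np.random.default_rng()
    l, c = x1.shape
    w = int(rng.uniform(min_frac, max_frac) * l)
    start = int(rng.integers(0, max(1, l - w)))
    mask = rng.random(c) < channel_prob
    out = x1.copy()
    out[start:start + w, mask] = x2[start:start + w, mask]
    return out
\end{verbatim}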
\section{Datasets}
This paper evaluates the impact of the time series augmentations on the datasets in the UEA MTSC archive \cite{bagnall2018uea}. On release in 2018 it contained 30 multivariate datasets, four of which contain series of unequal length. We restrict our attention to the 26 equal-length datasets to avoid any complications resulting from unequal length. An overview of the different datasets is provided in Appendix \ref{app:dataset}. Due to the page limits of this paper, we direct readers to \cite{bagnall2018uea} and \cite{ruiz2021great} for extended descriptions of the datasets.
\section{Models}
This paper explores three deep learning architectures that are often used for time series classification. By evaluating three different architectures, this paper hopes to demonstrate that these time series augmentations can be generalized for most deep learning architectures.
\subsection{Convolutional Networks and InceptionTime}
Convolutional networks are very popular in the domain of computer vision, but can also perform well in time series classification. \cite{fawaz2020inceptiontime} proposed the InceptionTime model as an ensemble of five Inception models. Each Inception model is composed of two residual blocks, each containing three Inception modules, followed at the very end by a global average pooling layer and a dense classification head layer. Inception modules contain convolutions of various kernel sizes and a bottleneck layer. The residual blocks help mitigate the vanishing gradient issue by allowing for direct gradient flow \cite{he2016identity}. \cite{fawaz2020inceptiontime} noted that ensembling was necessary due to the high standard deviation in accuracy of single Inception models and the small size of time series datasets.
For this study, we evaluate the Inception model without ensembling, so references to InceptionTime refer to the single Inception model. There are two justifications for evaluating the model without ensembling. First, the compute cost is reduced five-fold, allowing for a more efficient use of compute resources. Second, because ensembling can be considered a regularization method on its own, it is better to analyze the regularization benefits provided by augmentations alone.
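As a minimal Keras sketch of one Inception module following the description above (ours; filter counts and kernel sizes follow the reference defaults as we understand them, but exact details may differ from \cite{fawaz2020inceptiontime}):
\begin{verbatim}
from tensorflow.keras import layers

def inception_module(x, n_filters=32, bottleneck=32,
                     kernel_sizes=(10, 20, 40)):
    # 1x1 bottleneck, three parallel convolutions of various
    # kernel sizes, plus a max-pool branch.
    b = layers.Conv1D(bottleneck, 1, padding="same",
                      use_bias=False)(x)
    branches = [layers.Conv1D(n_filters, k, padding="same",
                              use_bias=False)(b)
                for k in kernel_sizes]
    p = layers.MaxPooling1D(3, strides=1, padding="same")(x)
    branches.append(layers.Conv1D(n_filters, 1, padding="same",
                                  use_bias=False)(p))
    x = layers.Concatenate()(branches)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

# Usage: stack six such modules in two residual blocks, then
# GlobalAveragePooling1D and a softmax Dense head.
\end{verbatim}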
\subsection{Recurrent Neural Networks}
\cite{husken2003recurrent} used a recurrent neural network (RNN) to process time series data for classification. Researchers have developed improved versions of RNN based networks for a variety of tasks, such as the Convolutional LSTM for classification \cite{karim2017lstm}. However, RNN networks do not scale well with increasing sequence length, due to the vanishing gradient problem mentioned by \cite{le2016quantifying}. Additionally, due to their recurrent nature, RNNs must process a time series sequentially, whereas a convolutional network can apply its convolutions across the time series in parallel; this sequential processing does not lend itself to effective use of GPUs and TPUs and results in longer compute times during training.
This paper implements a simple RNN network, referred to as simpleRNN. It consists of 3 LSTM layers of 256, 512, and 512 units. The final LSTM layer returns only the units at the last time step, which are then used for classification in a dense layer with softmax. Due to the compute costs of long sequences, this paper evaluates simpleRNN on the 18 datasets in the UEA MTSC archive with length 512 or shorter.
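A minimal Keras sketch of this architecture (ours; the training configuration is described in the experimental setup below) could be:
\begin{verbatim}
from tensorflow.keras import layers, models

def build_simple_rnn(length, channels, n_classes):
    # 3 LSTM layers (256, 512, 512); only the final layer
    # returns just its last time step, which feeds the
    # softmax classification head.
    return models.Sequential([
        layers.LSTM(256, return_sequences=True,
                    input_shape=(length, channels)),
        layers.LSTM(512, return_sequences=True),
        layers.LSTM(512),
        layers.Dense(n_classes, activation="softmax"),
    ])
\end{verbatim}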
\subsection{Self Attention Networks}
Multi-Headed Self Attention (MHSA) modules were popularized by \cite{devlin2018bert} for usage in Natural Language Processing and by \cite{dosovitskiy2020image} for Computer Vision. Research into MHSA for MTS classification is growing, as \cite{song2018attend} and \cite{russwurm2020self} use MHSA based networks for classification. While attention models do not suffer from vanishing gradients caused by long sequence lengths, the memory requirement grows with the square of the sequence length. This is because the scaled dot product attention at the core of the MHSA architecture requires a dot product of two matrices along the length of the time series, resulting in a matrix of shape \((batch size, length, length)\).
This paper implements a simple MHSA network, referred to as simpleMHSA. It consists of a 1D convolutional layer with a kernel size of 1 and 512 filters, followed by a positional encoding layer. It then applies two encoder layers, with each encoder layer consisting of a single MHSA layer with 8 heads and 512 units and a feed forward layer. The output of the last time step of the second encoder layer is sent to a dense softmax classification layer. Due to the compute costs of long sequences, we evaluate simpleMHSA on the 18 datasets in the UEA MTSC archive with length 512 or shorter.
\subsection{Statistical Methods}
\cite{ruiz2021great} evaluates a wide variety of non-deep learning methods on the UEA MTSC archive. For this paper, we compare our results from the augmented deep learning networks with their results for both deep learning and non-deep learning methods. Notable statistical methods that performed well on the UEA MTSC archive include ROCKET \cite{dempster2020rocket}, HIVE-COTE \cite{lines2018time}, and CIF \cite{middlehurst2020canonical}.
\section{Experiments}
\subsection{Augmentation Setup}
We evaluate different augmentation combinations, each with its own augmentation code, in order to measure the impact of individual augmentations both separately and when combined with other augmentations. Since there is an infinite number of possible combinations to evaluate (e.g., every value of P, the probability of applying an augmentation, between 0 and 1), we limit our search to eight augmentation pipelines: four that evaluate the four augmentations in isolation, three that evaluate augmentations together, and one control pipeline, which does not modify the data in any way. Apart from the isolated pipelines, which were designed to evaluate each augmentation on its own, the specific combinations of augmentations are arbitrary.
Across all augmentation combinations, certain input parameters are dynamically defined based on the dataset. For mixup, there are no input shape dependent parameters. For cutout and cutmix, given $l_{max}$ as the maximum length of the input time sequences, the segment to be cutmixed or cut out has a length $s_{length}$ selected uniformly at random, $s_{length} \sim U(\frac{1}{2}l_{max},l_{max})$, starting at a position, $p$, selected uniformly at random, $p \sim U(0, l_{max} - s_{length} - 1)$, where 0 is the starting index of the time series. For window warping, the segment to be lengthened or shortened has a length $s_{length}$ selected uniformly at random, $s_{length} \sim U(\frac{1}{8}l_{max}, \frac{1}{3}l_{max})$, with a starting position $p$ selected similarly, $p \sim U(0, l_{max} - s_{length} - 1)$. The new length of the segment is the old length multiplied by a static scaling hyperparameter $S$. See Table \ref{table:augs} for an explanation of the static hyperparameters of each augmentation.
\begin{table}[t!]
\centering
\begin{tabular}{|l|l|}
\toprule
Aug Code & Definition \\
\midrule
A & Cutmix(P = 0.8, CP = 0.5)\\
& Cutout(P = 0.8, CP = 0.5) \\
& Mixup(P = 0.8) \\ \hline
B & Cutmix(P = 0.8, CP = 0.2) \\
& Cutmix(P = 0.8, CP = 0.2) \\
& Cutout(P = 0.8, CP = 0.2) \\
& Mixup(P = 0.8) \\ \hline
C & Cutout(P = 0.5, CP = 0.3) \\
& Cutout(P = 0.5, CP = 0.3) \\ \hline
D & Mixup(P = 0.5) \\
& Mixup(P = 0.5) \\ \hline
E & Cutmix(P = 0.5, CP = 0.3) \\
& Cutmix(P = 0.5, CP = 0.3) \\ \hline
F & Windowwarp(P = 0.5, S = 0.5) \\
& Windowwarp(P = 0.5, S = 2.0) \\ \hline
G & Cutmix(P = 0.8, CP = 0.2) \\
& Cutout(P = 0.8, CP = 0.2) \\
& Cutout(P = 0.8, CP = 0.2) \\
& Mixup(P = 0.8) \\
& Windowwarp(P = 0.5, S = 0.5) \\
& Windowwarp(P = 0.5, S = 2.0) \\
\bottomrule
\end{tabular}
\caption{Augmentation combinations, with their associated code and static hyperparameter values. For each augmentation code, augmentations are applied from top to bottom in order. P represents the probability of applying the augmentation. CP represents the probability of each channel being selected for the augmentation; if not present, the augmentation applies to all channels. S represents the scale factor for window warping.}
\label{table:augs}
\end{table}
\subsection{Experimental Setup}
All experimental results were generated using Google Colab instances with a v2-8 TPU. In each training run, models were trained for up to 1500 epochs using a batch size of 64. If a model did not improve its validation loss for 100 epochs, the training process was stopped. Models used an Adam optimizer, with an initial learning rate of 1e-4, which was cut in half if the model's loss did not improve for 10 epochs. Each epoch was 100 batches of data, or enough batches for one full pass through the training set, whichever was greater. This was necessary to prevent epochs where the model trains for only one batch for certain datasets. All models were trained with categorical crossentropy loss.
We define one experiment as a single training run for a single combination of neural network model, dataset, and augmentation pipeline. We repeat the experiment for each combination using 29 different splits of the dataset and once with the original train/test split. The 29 splits were generated randomly, but with a preset seed value, which ensures that each split is the same across models and augmentation pipelines. When generating splits, we combined the train and test sets and resampled new ones, such that the number of examples for each class in each new split matches the original train/test split.
To compute statistical significance, we use the two sample T test, with the assumption of unequal variances (Welch's test), to compare the best test accuracies. We consider an augmentation to be statistically better/worse than no augmentations if it passes a two sided T test with $\alpha = 0.05$ and its mean accuracy is greater/lesser than without augmentations.
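For illustration only (our sketch, not the evaluation script; the accuracy arrays below are synthetic), such a test can be carried out with SciPy's \texttt{ttest\_ind}:
\begin{verbatim}
import numpy as np
from scipy.stats import ttest_ind

# Synthetic per-split best test accuracies for the
# no-augmentation baseline and one augmentation pipeline.
rng = np.random.default_rng(0)
acc_none = rng.normal(0.70, 0.03, size=30)
acc_aug = rng.normal(0.73, 0.03, size=30)

# Two-sided two-sample T test with unequal variances.
stat, p = ttest_ind(acc_aug, acc_none, equal_var=False)
better = p < 0.05 and acc_aug.mean() > acc_none.mean()
print(f"t={stat:.2f}, p={p:.4f}, better={better}")
\end{verbatim}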
Due to memory constraints, we only evaluated the simpleRNN and simpleMHSA networks on datasets with 512 or fewer time steps. Due to limited resources, simpleRNN and simpleMHSA experiments are repeated 10 times in total: once with the original train/test split and 9 times with generated splits.
\begin{table*}[t]
\centering
\begin{tabular}{llrrrrrrrr}
\toprule
& Aug Code & None & A & B & C & D & E & F & G \\
Model & Dataset & & & & & & & & \\
\midrule
IT & ArticularyWordRecognition & 99.52 & \textit{-1.82} & -0.18 & -0.07 & \textbf{0.19} & \textit{-0.89} & 0.02 & \textbf{0.22} \\
& AtrialFibrillation & 39.33 & \textbf{7.11} & \textbf{7.33} & \textbf{5.56} & 0.67 & \textbf{5.33} & 0.00 & \textbf{6.44} \\
& BasicMotions & 100.00 & \textit{-0.33} & -0.17 & 0.00 & 0.00 & -0.08 & 0.00 & 0.00 \\
& Cricket & 99.95 & -0.09 & -0.09 & \textit{-0.32} & 0.05 & -0.23 & 0.00 & -0.09 \\
& DuckDuckGeese & 69.53 & -1.33 & -0.60 & \textbf{7.20} & \textit{-3.87} & -0.13 & 1.40 & \textbf{5.13} \\
& ERing & 95.02 & \textit{-3.96} & \textit{-1.40} & \textbf{2.14} & \textbf{1.07} & \textit{-3.52} & \textit{-1.37} & \textbf{0.96} \\
& EigenWorms & 76.72 & \textbf{14.83} & \textbf{13.97} & 0.66 & \textbf{10.53} & \textit{-8.24} & 2.60 & \textbf{14.73} \\
& Epilepsy & 99.69 & \textit{-1.47} & \textit{-0.63} & 0.14 & -0.12 & \textit{-1.52} & \textbf{0.24} & \textbf{0.24} \\
& EthanolConcentration & 33.52 & \textbf{47.96} & \textbf{43.27} & \textbf{14.75} & \textbf{4.71} & \textbf{38.75} & -0.32 & \textbf{10.56} \\
& FaceDetection & 71.08 & \textbf{1.20} & \textbf{2.19} & 0.23 & -0.50 & 0.66 & \textit{-5.07} & -0.30 \\
& FingerMovements & 63.03 & -1.50 & -1.70 & \textit{-3.13} & \textbf{2.73} & \textit{-2.80} & 0.50 & \textit{-2.73} \\
& HandMovementDirection & 49.73 & -1.44 & 0.41 & -0.59 & 1.62 & -0.50 & -1.53 & \textit{-2.21} \\
& Handwriting & 63.36 & \textit{-15.60} & \textit{-8.48} & \textit{-6.68} & \textit{-3.05} & \textit{-10.10} & \textbf{3.82} & \textit{-3.76} \\
& Heartbeat & 77.64 & 0.13 & \textbf{0.94} & 0.62 & -0.60 & 0.08 & -0.02 & \textbf{1.40} \\
& LSST & 49.88 & 0.11 & \textit{-6.10} & -1.53 & 2.01 & \textit{-8.82} & -1.11 & -0.82 \\
& Libras & 92.13 & \textit{-1.70} & \textit{-0.85} & -0.41 & 0.50 & \textit{-4.20} & \textbf{4.04} & \textbf{4.76} \\
& MotorImagery & 58.80 & 1.10 & 1.50 & 1.13 & 1.27 & 1.03 & 0.90 & -0.17 \\
& NATOPS & 97.83 & \textit{-8.50} & \textit{-7.54} & \textit{-2.41} & 0.20 & \textit{-6.37} & \textbf{0.96} & \textit{-1.11} \\
& PEMS-SF & 85.51 & -0.73 & \textbf{2.76} & \textbf{3.28} & 0.92 & \textbf{3.85} & -1.48 & \textbf{1.97} \\
& PenDigits & 99.75 & \textit{-0.96} & \textit{-0.51} & -0.04 & -0.02 & \textit{-0.60} & \textit{-0.11} & \textit{-1.04} \\
& PhonemeSpectra & 30.71 & \textit{-2.41} & -0.17 & \textbf{1.56} & -0.10 & \textit{-0.50} & \textbf{1.35} & \textbf{2.84} \\
& RacketSports & 93.77 & \textit{-4.52} & \textit{-1.54} & -0.09 & 0.07 & \textit{-1.75} & \textbf{1.56} & \textbf{1.78} \\
& SelfRegulationSCP1 & 85.93 & \textit{-2.54} & 0.55 & \textit{-1.67} & \textbf{1.89} & \textit{-1.31} & -0.56 & \textbf{1.99} \\
& SelfRegulationSCP2 & 57.04 & \textbf{2.06} & \textbf{1.22} & \textbf{1.89} & 0.76 & \textbf{1.78} & -0.30 & \textbf{1.44} \\
& StandWalkJump & 52.22 & 4.22 & 4.22 & \textbf{6.67} & -0.67 & \textbf{6.44} & -1.11 & \textbf{4.89} \\
& UWaveGestureLibrary & 93.61 & \textit{-3.66} & \textit{-1.10} & -0.55 & \textbf{1.59} & \textit{-4.40} & -0.23 & \textbf{1.33} \\
\bottomrule
\end{tabular}
\caption{Difference in accuracy between no augmentation and each augmentation combination, for the InceptionTime model. Entries are bold where the difference is statistically significant and the augmentation combination's accuracy is greater. Entries are italicized where the difference is statistically significant and the augmentation combination's accuracy is lower.}
\label{table:ITIME}
\end{table*}
\section{Discussion}
See Table \ref{table:ITIME} for experimental results for InceptionTime and Appendix \ref{app:rnnmhsa} for RNN and Self Attention.
\subsection{Overfitting}
When training deep neural networks with small datasets, the biggest challenge is often generalization from the training set to the testing set. When there is no augmentation, we observe this problem with the InceptionTime model in almost all datasets. For example, when training on the EthanolConcentration dataset without augmentation, the best test accuracy is achieved in the first epoch, at approximately 40\% train accuracy and 30\% test accuracy. In future epochs, the train accuracy increases and the test accuracy decreases, resulting in approximately 95\% train accuracy and 25\% test accuracy by the 50th epoch and an early stopping of the training process at the 100th epoch.
This is not the case when we use augmentations. With Augmentation Combination B, we will often observe a train accuracy of around 65\% and a test accuracy of 70\% at the 300th epoch. It's quite possible that with very aggressive augmentations, the model may suffer from underfitting, where the model has difficulty learning the training data. Regardless, the use of augmentations is one method for reducing overfitting and improving the model's ability to generalize.
\subsection{Analysis of Augmentations and Datasets}
\subsubsection{Cutout}
Augmentation combination C isolates the cutout augmentation. Compared to the other isolated augmentation combinations, cutout appears to perform well on the spectrogram datasets, which are DuckDuckGeese, Heartbeat, PhonemeSpectra, and StandWalkJump. Because a spectrogram is a visual representation of the spectrum of frequencies of a signal as it varies with time, it is realistic for certain frequencies to be absent at certain time steps. The cutout augmentation does exactly that, setting random channels (frequencies) for a random segment of the time series to zero.
We would expect cutout to perform poorly in datasets where zero values are unrealistic, which is what we observe for the Handwriting dataset. The Handwriting dataset records a subject's writing hand's accelerometer data as they write letters of the alphabet. It is possible that zero values in this case only confuse the model during training. However, we observe significant improvement for SelfRegulationSCP2, which is an Electroencephalogram (EEG) dataset. Because EEG data normally oscillates as a wave, it is unusual that randomly setting long segments of EEG data to zero would benefit the model.
Based on the above analysis, further research can be done to improve or modify cutout for non-spectrogram data. For example, rather than setting the values of a time segment to zero, one could set the values to the mean of that channel. This would result in a version of cutout that sets all frequencies in the segment to zero while preserving the channel's mean level. Alternatively, one could transform a time series into a spectrogram, perform frequency cutout, and then transform the result back. This could be used to cut specific frequencies.
\subsubsection{Mixup}
The EigenWorms dataset measures the movement of various roundworms on an agar plate according to 6 eigenvectors, with the goal of classifying the individual worm as either a wild type or one of 4 mutant types. The mixup augmentation combination D performs best out of the 4 isolated augmentations. It is plausible that mixup generates more realistic time sequences than cutmix. One would expect mixup to generate intermediary movement patterns, where the time sequence adopts some characteristics of another one. This should result in a more diverse training set.
\subsubsection{Cutmix}
Augmentation combination E isolates the cutmix augmentation, which performs extremely well on the EthanolConcentration dataset. \cite{ruiz2021great} performed a case study on why deep learning methods struggle so much with this dataset. In summary, the discriminatory features lie in a small region, and the variation in this small region is significantly smaller than in the rest of the sequence. When these factors are combined with a relatively small training set size (261 examples), they observe that non augmented deep learning models tend to focus on irrelevant regions of the sequence. Cutmix directly addresses this issue by swapping regions between examples, regardless of class. Given 4 equally distributed classes, and assuming that 10\% of the sequence is relevant for prediction, there is a 90\% chance of swapping noise between two examples and a 67\% chance of swapping noise between examples of different classes (the second example belongs to a different class with probability 75\%, and $0.9 \times 0.75 \approx 0.67$). This will strongly penalize any model that learns spurious features from the noise in the sequence. While the percentage chance estimates are not completely accurate, it should be clear that one should expect cutmix to perform quite well when the discriminatory features lie in a small region.
Another case where cutmix can perform well is when cutmix results in plausible transformations of the time sequence. In the PEMS-SF dataset, each example tracks traffic data in 10 minute intervals for 1 day, with the goal of classifying the day of the week. Swaps between two examples can generate plausible traffic patterns and help reduce overfitting on the training data.
Overall, we tend to observe that cutmix performs well on the same datasets where cutout performs well. This is likely because both augmentations remove a section of the original time series, with cutmix replacing that section. However, cutmix is the augmentation most likely to reduce accuracy by more than 3\%, which it does for 6 datasets.
\subsubsection{Window Warp}
Augmentation combination F isolates the window warp augmentation, which performs much better than other augmentations on the Handwriting and Libras dataset. The Handwriting dataset measures accelerometer data from the writing hand as an individual writes letters of the alphabet, while Libras measures the 2D coordinate of gestures from Brazilian sign language. We also observe noticeable improvements in the RacketSports dataset, which measures gyroscope and accelerometer data. These 3 datasets measure movements executed by people, in which the execution speed can vary significantly between individuals. In comparison with other augmentations, window warp rarely harms classification performance by a significant amount.
\subsubsection{Combining Augmentations Together}
When we compare augmentation combinations solely on the basis of whether they improve performance, combination G improves performance on 16 datasets out of 26. However, it does not always improve performance by the largest amount. For the Handwriting dataset, using all 4 augmentations together performs much worse than using just the window warping augmentation. Because the nature of time series data can vary significantly, different augmentations can have drastically different effects. It is important to evaluate a wide variety of augmentations both individually and together.
Furthermore, it is important to evaluate different hyperparameter values for the augmentations. Augmentation combinations A and B are almost identical, but B applies cutmix twice and applies all of its augmentations with a lower channel probability. Some datasets, such as the PEMS-SF dataset, benefit more from combination B than from combination A.
Unfortunately, finding the best combination of augmentations may require a very thorough cross validation process. The authors would recommend beginning such a process with isolated augmentations, with the parameters specified in this paper. Doing so may reveal which type of augmentations perform best and provide more insight on what may work for that particular dataset.
\subsubsection{Effects on RNN and Self Attention Networks}
Results on the simpleRNN and simpleMHSA networks are limited to shorter datasets and lower statistical power due to a smaller sample size, with results presented in Appendix \ref{app:rnnmhsa}. However, the results show quite a different trend compared to the InceptionTime results. In particular, the window warp and mixup augmentations provide a statistically significant improvement far more often in these two networks than in InceptionTime. It is likely that architectural differences create different biases in networks, resulting in networks learning features in different ways and interacting with augmentations differently.
One potential source of difference is the use of the Global Average Pooling layer in the InceptionTime model, which is not present in either the simpleRNN or the simpleMHSA model. Without Global Average Pooling, only the output of the last time step is used for predicting the class. Global Average Pooling can also ignore empty or swapped sections of the time series. This could explain why the simpleMHSA does not benefit much from cutout and cutmix.
Regardless, the results show that most augmentations provide a benefit for the simpleRNN model, which suggests that more complex RNN based models may also benefit from these augmentations. The simpleMHSA model appears to benefit only from mixup and window warping. Further research into the impact of augmentations on these model architectures is needed.
\subsection{Comparison with Other Methods}
For each dataset, we compare the best augmentation combination for the InceptionTime model with the best model found in the work by \cite{ruiz2021great}, see Table \ref{table:compare}. If we exclude datasets where our accuracy and the state-of-the-art best results from \cite{ruiz2021great} are both 100\%, we show that at least one augmentation combination for InceptionTime performs better in 13 of 24 datasets compared with their best model. The augmentations allow for the InceptionTime network to outperform non deep learning methods, as well as the 5 network ensembled version of InceptionTime.
The work by \cite{ruiz2021great} suggests that non deep learning methods, such as ROCKET, CIF, and HIVE-COTE, are more effective at MTS classification than deep learning methods. However, deep learning methods are prone to overfitting and the UEA MTSC Archive contains a large number of datasets with fewer than 300 training examples. We believe that future comparisons between methods should include datasets with more training examples and evaluations of models with and without augmentations.
\begin{table}[t!]
\centering
\begin{tabular}{lrrl}
\toprule
{} & Our Acc & Ruiz Acc & Algorithm \\
Dataset Name & & & \\
\midrule
\textbf{AWR} & \textbf{99.74} & \textbf{99.56} & \textbf{ROCKET} \\
AtrialFibrillation & 46.67 & 74.00 & MUSE \\
\textbf{BasicMotions} & \textbf{100.00} & \textbf{100.00} & \textbf{Multiple} \\
\textbf{Cricket} & \textbf{100.00} & \textbf{100.00} & \textbf{Multiple} \\
\textbf{DuckDuckGeese} & \textbf{76.73} & \textbf{63.47} & \textbf{IT} \\
ERing & 97.16 & 98.05 & ROCKET \\
\textbf{EigenWorms} & \textbf{91.55} & \textbf{90.33} & \textbf{CIF} \\
Epilepsy & 99.93 & 100.00 & HC \\
EC & 81.48 & 82.36 & STC \\
FaceDetection & 73.27 & 77.24 & IT \\
\textbf{FingerMovements} & \textbf{65.77} & \textbf{56.13} & \textbf{IT} \\
HMD & 51.35 & 52.21 & CIF \\
\textbf{Handwriting} & \textbf{67.19} & \textbf{65.74} & \textbf{IT} \\
\textbf{Heartbeat} & \textbf{79.04} & \textbf{76.52} & \textbf{CIF} \\
LSST & 51.88 & 63.62 & MUSE \\
\textbf{Libras} & \textbf{96.89} & \textbf{94.11} & \textbf{ResNet} \\
\textbf{MotorImagery} & \textbf{60.30} & \textbf{53.80} & \textbf{TSF} \\
\textbf{NATOPS} & \textbf{98.80} & \textbf{97.11} & \textbf{ResNet} \\
PEMS-SF & 89.36 & 99.68 & CIF \\
PenDigits & 99.73 & 99.85 & IT \\
PhonemeSpectra & 33.55 & 36.74 & IT \\
\textbf{RacketSports} & \textbf{95.55} & \textbf{92.79} & \textbf{ROCKET} \\
SCP1 & 87.92 & 95.68 & TapNet \\
\textbf{SCP2} & \textbf{59.09} & \textbf{53.69} & \textbf{DTW} \\
\textbf{StandWalkJump} & \textbf{58.89} & \textbf{45.56} & \textbf{ROCKET} \\
\textbf{UW} & \textbf{95.21} & \textbf{94.43} & \textbf{ROCKET} \\
\bottomrule
\end{tabular}
\caption{The mean of the accuracies across folds of the best augmentation combination for each dataset compared with the best results reported by \cite{ruiz2021great} (see Table 10 of their work). Bold rows indicate better or equal accuracy. Dataset names were shortened for formatting. ArticularyWordRecognition(AWR), EthanolConcentration(EC), HandMovementDirection(HMD), SelfRegulationSCP1(SCP1), SelfRegulationSCP2(SCP2), UWaveGestureLibrary(UW) }
\label{table:compare}
\end{table}
\subsection{Future Work}
Augmentations are normally applied to deep learning networks, but there may be a benefit to applying augmentations to non-deep methods, such as ROCKET, CIF, and HIVE-COTE. This is an area of research that could be further explored, as the work by \cite{ruiz2021great} shows how these methods can outperform deep learning methods.
Time series forecasting can also benefit from augmentation, as shown by \cite{bandara2021improving}. They evaluate 3 advanced augmentation methods and do not evaluate any of the methods in this paper. Further work can be done to evaluate the basic methods in this paper for the time series forecasting task.
Further development of deep learning methods for time series may be limited by the overfitting issue, due to the small size of available datasets. With augmentations, we can develop complex models that would otherwise overfit.
Follow up work should evaluate the augmentations in this paper on more state of the art deep neural networks. Due to the resource limits, the authors of this paper could not evaluate the ensembled InceptionTime, but such an evaluation could prove quite helpful in understanding how ensembling and augmentations interact.
\section{Conclusion}
The experimental results above demonstrate that the InceptionTime network, with augmentations and without ensembling, is the best classifier for the UEA MTSC Archive; it matches or exceeds classification accuracy on 15 of the 26 datasets. The improvement brought about by augmentations can be as drastic as a 48\% increase in accuracy from 33\% to 81\% on the Ethanol Concentration dataset. These improvements suggest that deep neural networks are strongly overfitting on MTS datasets.
We recommend that future researchers and engineers fully leverage the benefits of augmentation to improve performance on classification tasks. While it does require more time, finding the right augmentation combination can easily and significantly improve performance. With augmentations, we can design and train deeper models with a lowered risk of overfitting.
However, the use of augmentations can complicate the comparison between models. Could model A perform better than model B using different hyperparameters for the augmentations? By recommending augmentations for comparison studies, are we opening a Pandora's box of infinite combinations for evaluation? The authors of this paper note that the box was already open; there is already an infinite number of combinations of learning rates, optimizers, schedulers, and many more hyperparameters. The key contribution of this work is to add impactful augmentations for designing, evaluating, and applying deep MTS classification models when overfitting is a problem, which it always is.
\section{Introduction}
\label{s:Introduction}
This paper is a continuation of the work~\cite{Jarosz2011-01}, whose focus was on the simplest characteristics (level density) of a weighted sum of unitary random matrices---a non-Hermitian random matrix model encountered e.g. in quantum information theory or the theory of random walks on trees. Inspired by the first of these areas of physics (Sec.~\ref{ss:Motivation}), I introduce more general models (Sec.~\ref{ss:Models}) and calculate---using the tools of free probability (Sec.~\ref{ss:Tools})---their level densities (Sec.~\ref{s:GeneralizedBuresProducts}).
\subsection{Models}
\label{ss:Models}
\subsubsection{Definitions of the models}
\label{sss:DefinitionsOfTheModels}
The goal of this article is a basic study of the following non-Hermitian random matrix models $\mathbf{T}$, $\mathbf{W}$, $\mathbf{V}$, which I will call ``generalized Bures products'' for the reason explained at the end of Sec.~\ref{sss:ApplicationsToQuantumInformationTheory}. In order to introduce them, I recall two other matrix models, $\mathbf{S}$ and $\mathbf{P}$, investigated recently in the literature:
(i) Let $L \geq 2$ be an integer; let \smash{$\mathbf{U}_{l}$}, for $l = 1 , 2 , \ldots , L$, be independent $N \times N$ unitary random matrices belonging to the Wigner-Dyson ``circular unitary ensemble'' (CUE or ``Haar measure''; i.e., their eigenvalues are on average uniformly distributed on the unit circle); let \smash{$w_{l}$} be arbitrary complex parameters. Then define a weighted sum,
\begin{equation}\label{eq:SDefinition}
\mathbf{S} \equiv w_{1} \mathbf{U}_{1} + w_{2} \mathbf{U}_{2} + \ldots + w_{L} \mathbf{U}_{L} .
\end{equation}
This non-Hermitian model may be referred to as the ``generalized Kesten model''~\cite{Kesten1959} (cf. also~\cite{HaagerupLarsen2000,GorlichJarosz2004}), and has been considered in some detail in~\cite{Jarosz2011-01}.
(ii) Let $K \geq 1$ be an integer; let \smash{$\mathbf{A}_{k}$}, for $k = 1 , 2 , \ldots , K$, be rectangular (of some dimensions \smash{$N_{k} \times N_{k + 1}$}) complex random matrices such that all the real and imaginary parts of the entries of each matrix are independent random numbers with the Gaussian distribution of zero mean and variance denoted by \smash{$\sigma_{k}^{2} / ( 2 ( N_{k} N_{k + 1} )^{1 / 2} )$}, i.e., the joint probability density function (JPDF),
\begin{equation}\label{eq:RectangularGinUEJPDF}
P \left( \{ \mathbf{A}_{k} \} \right) \propto \prod_{k = 1}^{K} \exp \left( - \frac{\sqrt{N_{k} N_{k + 1}}}{\sigma_{k}^{2}} \Tr \left( \mathbf{A}_{k}^{\dagger} \mathbf{A}_{k} \right) \right) ;
\end{equation}
these are rectangular ``Ginibre unitary ensembles'' (GinUE)~\cite{Ginibre1965,Girko19841985}. Then define their product, an \smash{$N_{1} \times N_{K + 1}$} random matrix,
\begin{equation}\label{eq:PDefinition}
\mathbf{P} \equiv \mathbf{A}_{1} \mathbf{A}_{2} \ldots \mathbf{A}_{K} .
\end{equation}
This model has been thoroughly investigated (cf.~e.g.~\cite{BanicaBelinschiCapitaineCollins2007,PensonZyczkowski2011,KanzieperSingh2010,BurdaJanikWaclaw2009,BurdaJaroszLivanNowakSwiech20102011}).
Now, having a number of copies of $\mathbf{S}$ and $\mathbf{P}$ (all of them assumed statistically independent from each other), one may use them as building blocks of various products:
(i) A product of $J \geq 1$ weighted sums (\ref{eq:SDefinition}),
\begin{equation}\label{eq:TDefinition}
\mathbf{T} \equiv \mathbf{S}_{1} \mathbf{S}_{2} \ldots \mathbf{S}_{J} ,
\end{equation}
where each sum \smash{$\mathbf{S}_{j}$} has arbitrary length \smash{$L_{j}$} and complex weights \smash{$w_{j l}$}.
(ii) A product of (\ref{eq:TDefinition}) and (\ref{eq:PDefinition}),
\begin{equation}\label{eq:WDefinition}
\mathbf{W} \equiv \mathbf{T} \mathbf{P} ,
\end{equation}
i.e., this is a string of $J$ generalized Kesten ensembles and $K$ Ginibre unitary ensembles. Note that one has to set \smash{$N = N_{1}$}, and then $\mathbf{W}$ is rectangular of dimensions \smash{$N_{1} \times N_{K + 1}$}.
(iii) Finally, a most general string of generalized Kesten and Ginibre unitary ensembles is obtained by multiplying a number $I \geq 1$ of random matrices (\ref{eq:WDefinition}),
\begin{equation}\label{eq:VDefinition}
\mathbf{V} \equiv \mathbf{W}_{1} \mathbf{W}_{2} \ldots \mathbf{W}_{I} .
\end{equation}
The dimensions of the terms (\smash{$N_{i , 1} \times N_{i , K_{i} + 1}$}, for $i = 1 , 2 , \ldots , I$) must obey \smash{$N_{i , K_{i} + 1} = N_{i + 1 , 1}$}, and then $\mathbf{V}$ is rectangular of dimensions \smash{$N_{1 , 1} \times N_{I , K_{I} + 1}$}.
\subsubsection{Thermodynamic limit}
\label{sss:ThermodynamicLimit}
The techniques I apply to the above models are valid only in the ``thermodynamic limit,'' i.e., with all the matrix dimensions tending to infinity while their ``rectangularity ratios'' remain finite,
\begin{equation}
\begin{split}\label{eq:ThermodynamicLimit}
&N = N_{1} , N_{2} , \ldots , N_{K + 1} \to \infty ,\\
&r_{k} \equiv \frac{N_{k}}{N_{K + 1}} \textrm{ = finite.}
\end{split}
\end{equation}
[I use here the notation for the model $\mathbf{W}$ (\ref{eq:WDefinition}); for $\mathbf{V}$ (\ref{eq:VDefinition}), one should add an index $i$.]
\subsubsection{Mean densities of the eigenvalues and singular values}
\label{sss:MeanDensitiesOfTheEigenvaluesAndSingularValues}
\emph{Level densities.} This paper presents only a basic study of the models $\mathbf{X} \in \{ \mathbf{T} , \mathbf{W} , \mathbf{V} \}$; namely, I will be calculating only:
(i) The mean density of the eigenvalues (``mean spectral density'' or ``level density''),
\begin{equation}\label{eq:NonHermitianMeanSpectralDensityDefinition}
\rho_{\mathbf{X}} ( z , \overline{z} ) \equiv \frac{1}{N} \sum_{i = 1}^{N} \la \delta^{( 2 )} \left( z - \lambda_{i} \right) \ra ,
\end{equation}
where the mean values are with respect to the JPDF of $\mathbf{X}$ and are evaluated at the complex Dirac delta functions at the (generically complex) eigenvalues \smash{$\lambda_{i}$} of $\mathbf{X}$. Note that one must set \smash{$N_{K + 1} = N$} (i.e., \smash{$r_{1} = 1$}) for $\mathbf{W}$ to be a square matrix.
(ii) The mean density of the ``singular values,'' which are the (real and non-negative) eigenvalues \smash{$\mu_{i}$} of the \smash{$N_{K + 1} \times N_{K + 1}$} Hermitian random matrix \smash{$\mathbf{H} \equiv \mathbf{X}^{\dagger} \mathbf{X}$},
\begin{equation}\label{eq:HermitianMeanSpectralDensityDefinition}
\rho_{\mathbf{H}} ( x ) \equiv \frac{1}{N_{K + 1}} \sum_{i = 1}^{N_{K + 1}} \la \delta \left( x - \mu_{i} \right) \ra ,
\end{equation}
where the Dirac delta is real. This time, \smash{$r_{1}$} may acquire any positive value.
\emph{Future work---universal quantities.} Certainly, this is but the first step in understanding the considered models. The level density of a random matrix is known not to be universal, i.e., it depends on the precise probability distribution of the matrix. (However, cf.~the end of Sec.~\ref{sss:FreeProbabilityAndModelP} for a hint in favor of a certain universality of the level densities of our models.) As a next step, it would be desirable to investigate some ``universal'' properties of our models, i.e., depending solely on their symmetries but not the specific probability distributions. Some basic universal observables would be a ``two-point connected correlation function,''
\begin{subequations}
\begin{align}
\rho^{\textrm{connected}}_{\mathbf{H}} ( x , y ) &\equiv \rho_{\mathbf{H}} ( x , y ) - \rho_{\mathbf{H}} ( x ) \rho_{\mathbf{H}} ( y ) ,\label{eq:HermitianTwoPointConnectedCorrelationFunctionDefinition1}\\
\rho_{\mathbf{H}} ( x , y ) &\equiv \frac{1}{N_{K + 1}^{2}} \sum_{i , j = 1}^{N_{K + 1}} \la \delta \left( x - \mu_{i} \right) \delta \left( y - \mu_{j} \right) \ra\label{eq:HermitianTwoPointConnectedCorrelationFunctionDefinition2}
\end{align}
\end{subequations}
(here written in the Hermitian case), or the behavior of the level density/correlation function close to the borderline (for non-Hermitian models) or the edges (for Hermitian models) of the spectrum.
\emph{Universal erfc scaling.} Even though I focus on the level densities in the bulk of the spectrum and not close to its borders, the models $\mathbf{T}$, $\mathbf{W}$, $\mathbf{V}$ share a certain property (their mean spectrum is rotationally-symmetric around zero; cf.~Sec.~\ref{sss:RotationalSymmetryAndNTransformConjecture}) which permits an application of the so-called ``erfc conjecture'' (cf.~Sec.~\ref{sss:ErfcConjecture}), which describes the universal way in which the mean spectral density (\ref{eq:NonHermitianMeanSpectralDensityDefinition}) is modified close to the borderline of the ``mean spectral domain'' $\mathcal{D}$ (i.e., the subset of the complex plane where the eigenvalues of an infinite random matrix fall). The pertinent form-factor (\ref{eq:FiniteSizeBorderlineFactor}) depends on one or two parameters, and I positively test this hypothesis on a number of examples for each class of the models $\mathbf{T}$, $\mathbf{W}$, $\mathbf{V}$ by fitting these parameters to match the Monte Carlo data. However, it still remains to prove this hypothesis, as well as analytically determine the form-factor parameters.
\emph{Future work---microscopic quantities.} Another step in the research on our models could be to consider finite matrix dimensions, and to calculate the complete JPDF of the eigenvalues and singular values of $\mathbf{S}$, $\mathbf{P}$, $\mathbf{T}$, $\mathbf{W}$, $\mathbf{V}$. This however is a much more involved task.
\subsection{Tools}
\label{ss:Tools}
In this Section, I sketch the means by which the level densities [(\ref{eq:NonHermitianMeanSpectralDensityDefinition}), (\ref{eq:HermitianMeanSpectralDensityDefinition})] of the models [(\ref{eq:TDefinition}), (\ref{eq:WDefinition}), (\ref{eq:VDefinition})] will be evaluated in the thermodynamic limit (\ref{eq:ThermodynamicLimit}). I do not delve into details, as they have been addressed e.g.~in~\cite{Jarosz2011-01,Jarosz2010-01}.
\subsubsection{Basic language of random matrix theory}
\label{sss:BasicLanguageOfRandomMatrixTheory}
\emph{Mean spectral densities and Green functions.} First of all, it is convenient to replace the mean spectral density of any ($N \times N$, $N \to \infty$) Hermitian random matrix $\mathbf{H}$ or non-Hermitian random matrix $\mathbf{X}$ with an equivalent but more tractable object in the following way: Since the definitions [(\ref{eq:HermitianMeanSpectralDensityDefinition}), (\ref{eq:NonHermitianMeanSpectralDensityDefinition})] exploit the real (Hermitian case) or complex (non-Hermitian case) Dirac delta function, one uses their respective representations,
\begin{subequations}
\begin{align}
\delta ( x ) &= - \frac{1}{2 \pi \ii} \lim_{\epsilon \to 0^{+}} \left( \frac{1}{x + \ii \epsilon} - \frac{1}{x - \ii \epsilon} \right) ,\label{eq:RealDiracDeltaDefinition}\\
\delta^{( 2 )} ( z ) &= \frac{1}{\pi} \partial_{\overline{z}} \lim_{\epsilon \to 0} \frac{\overline{z}}{| z |^{2} + \epsilon^{2}} ,\label{eq:ComplexDiracDeltaDefinition}
\end{align}
\end{subequations}
which prompt one to introduce the following ``holomorphic'' or ``nonholomorphic Green functions (resolvents)''~\cite{SommersCrisantiSompolinskyStein1988,HaakeIzrailevLehmannSaherSommers1992,LehmannSaherSokolovSommers1995,FyodorovSommers1997,FyodorovKhoruzhenkoSommers1997},
\begin{subequations}
\begin{align}
G_{\mathbf{H}} ( z ) &\equiv \frac{1}{N} \sum_{i = 1}^{N} \la \frac{1}{z - \mu_{i}} \ra = \nonumber\\
&= \frac{1}{N} \Tr \la \frac{1}{z \Id_{N} - \mathbf{H}} \ra ,\label{eq:HolomorphicGreenFunctionDefinition}\\
G_{\mathbf{X}} ( z , \overline{z} ) &\equiv \lim_{\epsilon \to 0} \lim_{N \to \infty} \frac{1}{N} \sum_{i = 1}^{N} \la \frac{\overline{z} - \overline{\lambda_{i}}}{\left| z - \lambda_{i} \right|^{2} + \epsilon^{2}} \ra =\nonumber\\
&= \lim_{\epsilon \to 0} \lim_{N \to \infty} \frac{1}{N} \Tr \cdot \nonumber\\
&\cdot \la \frac{\overline{z} \Id_{N} - \mathbf{X}^{\dagger}}{\left( z \Id_{N} - \mathbf{X} \right) \left( \overline{z} \Id_{N} - \mathbf{X}^{\dagger} \right) + \epsilon^{2} \Id_{N}} \ra .\label{eq:NonHolomorphicGreenFunctionDefinition}
\end{align}
\end{subequations}
Thanks to (\ref{eq:RealDiracDeltaDefinition})-(\ref{eq:ComplexDiracDeltaDefinition}), the densities are straightforward to reproduce from them,
\begin{subequations}
\begin{align}
\rho_{\mathbf{H}} ( x ) &= - \frac{1}{2 \pi \ii} \lim_{\epsilon \to 0^{+}} \left( G_{\mathbf{H}} ( x + \ii \epsilon ) - G_{\mathbf{H}} ( x - \ii \epsilon ) \right) ,\label{eq:MeanSpectralDensityFromHolomorphicGreenFunction}\\
\rho_{\mathbf{X}} ( z , \overline{z} ) &= \frac{1}{\pi} \partial_{\overline{z}} G_{\mathbf{X}} ( z , \overline{z} ) ,\label{eq:MeanSpectralDensityFromNonHolomorphicGreenFunction}
\end{align}
\end{subequations}
while the resolvents prove to be handier from the computational point of view.
Remark: Eqs.~[(\ref{eq:NonHolomorphicGreenFunctionDefinition}), (\ref{eq:MeanSpectralDensityFromNonHolomorphicGreenFunction})] are valid for $z$ inside the mean spectral domain $\mathcal{D}$. Outside it, the regulator $\epsilon$ may be safely set to zero, and the nonholomorphic Green function reduces to its holomorphic counterpart (\ref{eq:HolomorphicGreenFunctionDefinition}),
\begin{equation}\label{eq:NonHolomorphicGreenFunctionOutsideD}
G_{\mathbf{X}} ( z , \overline{z} ) = G_{\mathbf{X}} ( z ) , \quad \textrm{for } z \notin \mathcal{D} .
\end{equation}
An implication is that by calculating the Green function both inside $\mathcal{D}$ and outside $\mathcal{D}$, and then equating them according to (\ref{eq:NonHolomorphicGreenFunctionOutsideD}), one arrives at an equation of the borderline $\partial \mathcal{D}$.
\emph{$M$-transforms.} In addition to the resolvents, one often finds it even more convenient to work with their simple modification, the ``holomorphic'' or ``nonholomorphic $M$-transforms,''
\begin{subequations}
\begin{align}
M_{\mathbf{H}} ( z ) &\equiv z G_{\mathbf{H}} ( z ) - 1 ,\label{eq:HolomorphicMTransformDefinition}\\
M_{\mathbf{X}} ( z , \overline{z} ) &\equiv z G_{\mathbf{X}} ( z , \overline{z} ) - 1 .\label{eq:NonHolomorphicMTransformDefinition}
\end{align}
\end{subequations}
The Hermitian $M$-transform (\ref{eq:HolomorphicMTransformDefinition}) has the interpretation of the generating function of the ``moments'' \smash{$m_{\mathbf{H} , n}$} (if they exist), by expanding around $z = \infty$,
\begin{equation}\label{eq:HolomorphicMTransformMomentExpansion}
M_{\mathbf{H}} ( z ) = \sum_{n \geq 1} \frac{m_{\mathbf{H} , n}}{z^{n}} , \quad m_{\mathbf{H} , n} \equiv \frac{1}{N} \Tr \la \mathbf{H}^{n} \ra .
\end{equation}
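As a trivial illustration (mine), take $\mathbf{H} = \Id_{N}$: all the moments equal unity, \smash{$m_{\mathbf{H} , n} = 1$}, and indeed
\begin{equation*}
M_{\mathbf{H}} ( z ) = \frac{z}{z - 1} - 1 = \frac{1}{z - 1} = \sum_{n \geq 1} \frac{1}{z^{n}} .
\end{equation*}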
\subsubsection{Rotational symmetry and $N$-transform conjecture}
\label{sss:RotationalSymmetryAndNTransformConjecture}
\emph{Rotational symmetry of the mean spectrum.} In this paper, I investigate only non-Hermitian random matrix models $\mathbf{X}$ with the feature that their mean spectrum is rotationally-symmetric around zero, i.e., that their density (\ref{eq:NonHermitianMeanSpectralDensityDefinition}) depends only on
\begin{equation}\label{eq:RDefinition}
R \equiv | z | .
\end{equation}
Equivalently, this means that their nonholomorphic $M$-transform (\ref{eq:NonHolomorphicMTransformDefinition}) also depends only on the radius,
\begin{equation}\label{eq:RotationallySymmetricNonHolomorphicMTransformDefinition}
M_{\mathbf{X}} ( z , \overline{z} ) = \mathfrak{M}_{\mathbf{X}} \left( R^{2} \right) ,
\end{equation}
giving the relevant radial part of the density through
\begin{equation}\label{eq:RadialMeanSpectralDensityFromRotationallySymmetricNonHolomorphicMTransform}
\rho^{\textrm{rad.}}_{\mathbf{X}} ( R ) \equiv 2 \pi R \left. \rho_{\mathbf{X}} ( z , \overline{z} ) \right|_{| z | = R} = \frac{\dd}{\dd R} \mathfrak{M}_{\mathbf{X}} \left( R^{2} \right) .
\end{equation}
\emph{$N$-transform conjecture.} For such ensembles, it has been suggested and numerically confirmed on a number of examples~\cite{BurdaJaroszLivanNowakSwiech20102011,Jarosz2011-01,Jarosz2010-01} that there exists a simple relationship between the mean densities of their eigenvalues and singular values (i.e., eigenvalues of \smash{$\mathbf{X}^{\dagger} \mathbf{X}$})---the ``$N$-transform conjecture.'' In order to express it, one needs to define the functional inverse of the $M$-transform. In the Hermitian case (\ref{eq:HolomorphicMTransformDefinition}), it can be directly done,
\begin{equation}\label{eq:HolomorphicNTransformDefinition}
M_{\mathbf{H}} \left( N_{\mathbf{H}} ( z ) \right) = N_{\mathbf{H}} \left( M_{\mathbf{H}} ( z ) \right) = z ,
\end{equation}
obtaining the ``holomorphic $N$-transform.'' In a generic non-Hermitian case (\ref{eq:NonHolomorphicMTransformDefinition}), however, such an inversion is obviously impossible. But with the rotational symmetry present (\ref{eq:RotationallySymmetricNonHolomorphicMTransformDefinition}), the $M$-transform again depends on one argument, and the inversion becomes generically doable,
\begin{equation}\label{eq:RotationallySymmetricNonHolomorphicNTransformDefinition}
\mathfrak{M}_{\mathbf{X}} \left( \mathfrak{N}_{\mathbf{X}} ( z ) \right) = z , \quad \mathfrak{N}_{\mathbf{X}} \left( \mathfrak{M}_{\mathbf{X}} \left( R^{2} \right) \right) = R^{2} ,
\end{equation}
which is called the ``rotationally-symmetric nonholomorphic $N$-transform.'' [To be more precise, \smash{$\mathfrak{N}_{\mathbf{X}} ( z )$} defined by the left equation in (\ref{eq:RotationallySymmetricNonHolomorphicNTransformDefinition}) is a holomorphic continuation of the one defined by the right equation.] Now the hypothesis states that these $N$-transforms of $\mathbf{X}$ and \smash{$\mathbf{H} \equiv \mathbf{X}^{\dagger} \mathbf{X}$} remain in a simple relation,
\begin{equation}\label{eq:NTransformConjecture}
N_{\mathbf{X}^{\dagger} \mathbf{X}} ( z ) = \frac{z + 1}{z} \mathfrak{N}_{\mathbf{X}} ( z ) .
\end{equation}
Typically, one of these random matrices will be much more accessible by analytical methods than the other, and therefore, Eq.~(\ref{eq:NTransformConjecture}) will provide a way to access the more complicated model.
I will henceforth make extensive use of this hypothesis; numerical tests of the results thus obtained will further support it.
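As a simple sanity check (my own illustration), consider a single Haar unitary matrix $\mathbf{U}$. Its eigenvalues lie uniformly on the unit circle, so \smash{$\mathfrak{M}_{\mathbf{U}} ( R^{2} ) = - 1$} for $R < 1$ and $0$ for $R > 1$, and formally \smash{$\mathfrak{N}_{\mathbf{U}} ( z ) = 1$} for $z \in ( - 1 , 0 )$. On the other hand, \smash{$\mathbf{U}^{\dagger} \mathbf{U} = \Id_{N}$}, whence \smash{$M_{\mathbf{U}^{\dagger} \mathbf{U}} ( z ) = 1 / ( z - 1 )$} and \smash{$N_{\mathbf{U}^{\dagger} \mathbf{U}} ( z ) = ( z + 1 ) / z$}, in agreement with (\ref{eq:NTransformConjecture}).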
\subsubsection{Free probability and model $\mathbf{P}$}
\label{sss:FreeProbabilityAndModelP}
\emph{Algorithm for computing the densities of $\mathbf{P}$ and \smash{$\mathbf{P}^{\dagger} \mathbf{P}$}.} The model $\mathbf{P}$ (\ref{eq:PDefinition}) is a product of independent (rectangular) random matrices \smash{$\mathbf{A}_{k}$}. If \smash{$r_{1} \neq 1$} (i.e., $\mathbf{P}$ is rectangular), then one is interested in the eigenvalues of \smash{$\mathbf{P}^{\dagger} \mathbf{P}$} only, while if \smash{$r_{1} = 1$} (i.e., $\mathbf{P}$ is square), then its mean spectrum has rotational symmetry around zero (\ref{eq:RotationallySymmetricNonHolomorphicMTransformDefinition})~\cite{BurdaJanikWaclaw2009,BurdaJaroszLivanNowakSwiech20102011}, and can therefore be related (\ref{eq:NTransformConjecture}) to the spectrum of \smash{$\mathbf{P}^{\dagger} \mathbf{P}$}. The level density of this latter matrix can now be expressed (by cyclic shifts of the terms)~\cite{BurdaJaroszLivanNowakSwiech20102011} through the density of the product of the Hermitian (``Wishart''~\cite{Wishart1928}) matrices \smash{$\mathbf{A}_{k}^{\dagger} \mathbf{A}_{k}$}. Hence, the situation is suitable for the employment of the ``multiplication law'' of free probability calculus of Voiculescu and Speicher~\cite{VoiculescuDykemaNica1992,Speicher1994}.
\emph{Multiplication law in free probability.} This theory is essentially a probability theory of noncommuting objects, such as large random matrices. Its foundational notion is that of ``freeness,'' being a proper generalization of statistical independence. Qualitatively speaking, freeness requires the matrices \smash{$\mathbf{H}_{1}$}, \smash{$\mathbf{H}_{2}$} to not only be independent statistically, but also ``rotationally'' (i.e., no distinguished direction in their probability measures, i.e., dependence only on the eigenvalues)---independent random rotations from the CUE, \smash{$\mathbf{U}_{1} \mathbf{H}_{1} \mathbf{U}_{1}^{\dagger}$} and \smash{$\mathbf{U}_{2} \mathbf{H}_{2} \mathbf{U}_{2}^{\dagger}$}, would ensure that condition if necessary.
Assuming freeness, there is a very useful prescription for computing the mean spectral density of the product \smash{$\mathbf{H}_{1} \mathbf{H}_{2}$} (provided it is still Hermitian, which obviously is not always the case; the same procedure applies, however, when all the matrices are unitary), once the densities of the factors are known: (i) Encode the densities by the holomorphic $M$-transforms (\ref{eq:HolomorphicMTransformDefinition}), \smash{$M_{\mathbf{H}_{1}} ( z )$} and \smash{$M_{\mathbf{H}_{2}} ( z )$}, as outlined in Sec.~\ref{sss:BasicLanguageOfRandomMatrixTheory}. (ii) Invert them functionally to find the respective $N$-transforms (\ref{eq:HolomorphicNTransformDefinition}), \smash{$N_{\mathbf{H}_{1}} ( z )$} and \smash{$N_{\mathbf{H}_{2}} ( z )$}. (iii) The $N$-transform of the product follows from the ``multiplication law,''
\begin{equation}\label{eq:MultiplicationLaw}
N_{\mathbf{H}_{1} \mathbf{H}_{2}} ( z ) = \frac{z}{z + 1} N_{\mathbf{H}_{1}} ( z ) N_{\mathbf{H}_{2}} ( z ) .
\end{equation}
(iv) Invert the result functionally to find the $M$-transform, and thus the mean spectral density of the product.
\emph{Eigenvalues and singular values of $\mathbf{P}$.} Implementing the above steps to $\mathbf{P}$, one readily discovers~\cite{BurdaJaroszLivanNowakSwiech20102011},
\begin{equation}\label{eq:HolomorphicNTransformOfPDaggerP}
N_{\mathbf{P}^{\dagger} \mathbf{P}} ( z ) = \sigma^{2} \sqrt{r_{1}} \frac{z + 1}{z} \prod_{k = 1}^{K} \left( \frac{z}{r_{k}} + 1 \right) ,
\end{equation}
where for short, \smash{$\sigma \equiv \prod_{k = 1}^{K} \sigma_{k}$}, from which stems both the mean density of the singular values [(\ref{eq:HolomorphicNTransformDefinition}), (\ref{eq:HolomorphicMTransformDefinition}), (\ref{eq:MeanSpectralDensityFromHolomorphicGreenFunction})] and (provided that \smash{$r_{1} = 1$}) the eigenvalues [(\ref{eq:NTransformConjecture}), (\ref{eq:RotationallySymmetricNonHolomorphicNTransformDefinition}), (\ref{eq:RadialMeanSpectralDensityFromRotationallySymmetricNonHolomorphicMTransform})] of $\mathbf{P}$ [in the latter case, the mean spectral domain $\mathcal{D}$ is a centered disk of radius \smash{$R_{\textrm{ext.}} = \sigma$}, cf.~(\ref{eq:XRExt})-(\ref{eq:XRInt})].
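To make the four-step algorithm concrete, here is a minimal numerical sketch of mine (Python with NumPy/Matplotlib assumed; matrix size, seed, and sample count are arbitrary choices) which checks (\ref{eq:HolomorphicNTransformOfPDaggerP}) for $K = 2$ square factors with \smash{$\sigma_{k} = 1$}: the $M$-transform is recovered by inverting \smash{$N_{\mathbf{P}^{\dagger} \mathbf{P}} ( M ) = z$}, i.e., solving the cubic \smash{$( M + 1 )^{3} = z M$} and picking the branch with \smash{$\operatorname{Im} M < 0$} just above the real axis:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

N, K = 400, 2                    # matrix size, number of factors
rng = np.random.default_rng(0)

def ginue(n):                    # square GinUE, <|A_ij|^2> = 1/n
    return (rng.standard_normal((n, n))
            + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)

sv2 = []                         # Monte Carlo eigenvalues of P^dagger P
for _ in range(20):
    P = ginue(N) @ ginue(N)
    sv2.extend(np.linalg.eigvalsh(P.conj().T @ P))

# Theory: solve (M + 1)^3 - z M = 0, physical branch has Im M < 0;
# then rho(x) = -Im[(M + 1)/z]/pi, since M = z G - 1.
xs = np.linspace(1e-3, 7.0, 300)
rho = []
for x in xs:
    z = x + 1e-9j
    M = min(np.roots([1.0, 3.0, 3.0 - z, 1.0]), key=lambda m: m.imag)
    rho.append(max(0.0, -((M + 1) / z).imag / np.pi))

plt.hist(sv2, bins=80, density=True, alpha=0.4, label="Monte Carlo")
plt.plot(xs, rho, "r-", label="free multiplication law")
plt.xlabel("x"); plt.legend(); plt.show()
\end{verbatim}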
Furthermore, the authors of~\cite{BurdaJanikWaclaw2009} conjecture and numerically confirm that (\ref{eq:HolomorphicNTransformOfPDaggerP}) is to some degree universal: it holds not only for IID Gaussian matrix elements of the \smash{$\mathbf{A}_{k}$}, but for arbitrary IID elements obeying the Pastur-Lindeberg condition. Depending on a possible analogous universality of the level density of $\mathbf{S}$, also the densities of $\mathbf{T}$, $\mathbf{W}$, $\mathbf{V}$ may exhibit a certain universality.
Let me close by saying that an algorithm parallel to the one described above will also be applied to derive the main results of this paper (cf.~Sec.~\ref{ss:GeneralizedMultiplicationLaws}).
\subsubsection{Quaternion free probability and model $\mathbf{S}$}
\label{sss:QuaternionFreeProbabilityAndModelS}
Although the topic of this work is multiplication rather than summation of random matrices, I should mention another [in addition to the technique outlined in Sec.~\ref{sss:FreeProbabilityAndModelP} and Eq.~(\ref{eq:HolomorphicNTransformOfPDaggerP})] pillar of this article---the results for the model $\mathbf{S}$ (\ref{eq:SDefinition}) derived in~\cite{Jarosz2011-01}.
\emph{Addition law in free probability.} In the realm of Hermitian random matrices, free probability provides---besides the multiplication law (\ref{eq:MultiplicationLaw})---a rule for summing free matrices \smash{$\mathbf{H}_{1}$}, \smash{$\mathbf{H}_{2}$}. This time, one should invert functionally not the holomorphic $M$-transforms, but the Green functions (\ref{eq:HolomorphicGreenFunctionDefinition}),
\begin{equation}\label{eq:HolomorphicBlueFunctionDefinition}
G_{\mathbf{H}} \left( B_{\mathbf{H}} ( z ) \right) = B_{\mathbf{H}} \left( G_{\mathbf{H}} ( z ) \right) = z ,
\end{equation}
which is known as the ``holomorphic Blue function''~\cite{Zee1996}. This object then satisfies the ``addition law''~\cite{VoiculescuDykemaNica1992,Speicher1994},
\begin{equation}\label{eq:HermitianAdditionLaw}
B_{\mathbf{H}_{1} + \mathbf{H}_{2}} ( z ) = B_{\mathbf{H}_{1}} ( z ) + B_{\mathbf{H}_{2}} ( z ) - \frac{1}{z} ,
\end{equation}
after which it remains to invert the left-hand side functionally, leading to the holomorphic Green function of the sum and, consequently, its mean spectral density. (Recall that in classical probability theory, an analogous algorithm makes use of the logarithms of the characteristic functions of independent random variables.)
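For illustration (my own minimal sketch, not part of the original argument; sizes and the seed are arbitrary): a semicircular density of variance $\sigma^{2}$ has \smash{$B_{\mathbf{H}} ( z ) = \sigma^{2} z + 1 / z$}, so (\ref{eq:HermitianAdditionLaw}) predicts that the sum of two free GUE matrices is again semicircular, with the variances added:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

N = 1000
rng = np.random.default_rng(1)

def gue(n, sigma):               # GUE, semicircle of variance sigma^2
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return sigma * (A + A.conj().T) / (2 * np.sqrt(n))

s1, s2 = 1.0, 0.5
ev = np.linalg.eigvalsh(gue(N, s1) + gue(N, s2))

# Addition law: B(z) = (s1^2 + s2^2) z + 1/z, a semicircle again.
s = np.sqrt(s1**2 + s2**2)
x = np.linspace(-2 * s, 2 * s, 200)
plt.hist(ev, bins=60, density=True, alpha=0.4, label="Monte Carlo")
plt.plot(x, np.sqrt(4 * s**2 - x**2) / (2 * np.pi * s**2),
         "r-", label="semicircle")
plt.legend(); plt.show()
\end{verbatim}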
\emph{Addition law in quaternion free probability.} For non-Hermitian random matrices, a construction parallel to (\ref{eq:HolomorphicBlueFunctionDefinition})-(\ref{eq:HermitianAdditionLaw}) has been worked out in~\cite{JaroszNowak2004,JaroszNowak2006}. Basically, it consists of three steps:
(i) ``Duplication''~\cite{JanikNowakPappWambachZahed1997,JanikNowakPappZahed1997-01} (cf.~also~\cite{FeinbergZee1997-01,FeinbergZee1997-02})---the nonholomorphic Green function (\ref{eq:NonHolomorphicGreenFunctionDefinition}) has a denominator quadratic in $\mathbf{X}$, which makes its evaluation hard, and one needs to linearize it by introducing the ($2 \times 2$) ``matrix-valued Green function,''
\begin{equation}\label{eq:MatrixValuedGreenFunctionDefinition1}
\mathcal{G}_{\mathbf{X}} ( z , \overline{z} ) \equiv \lim_{\epsilon \to 0} \lim_{N \to \infty} \frac{1}{N} \bTr \la \frac{1}{\mathcal{Z}_{\epsilon} \otimes \Id_{N} - \mathbf{X}^{\dupl}} \ra ,
\end{equation}
where for short,
\begin{equation}\label{eq:MatrixValuedGreenFunctionDefinition2}
\mathcal{Z}_{\epsilon} \equiv \left( \begin{array}{cc} z & \ii \epsilon \\ \ii \epsilon & \overline{z} \end{array} \right) , \quad \mathbf{X}^{\dupl} \equiv \left( \begin{array}{cc} \mathbf{X} & \Zero_{N} \\ \Zero_{N} & \mathbf{X}^{\dagger} \end{array} \right) ,
\end{equation}
and the ``block--trace,''
\begin{equation}\label{eq:BlockTraceDefinition}
\bTr \left( \begin{array}{cc} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{array} \right) \equiv \left( \begin{array}{cc} \Tr \mathbf{A} & \Tr \mathbf{B} \\ \Tr \mathbf{C} & \Tr \mathbf{D} \end{array} \right) .
\end{equation}
In other words, this object resembles the holomorphic Green function (\ref{eq:HolomorphicGreenFunctionDefinition}), and thus can be approached by methods designed for Hermitian random matrices; the cost is the need to work with $2 \times 2$ matrices rather than complex numbers. The nonholomorphic Green function is precisely the upper left entry, \smash{$[ \mathcal{G}_{\mathbf{X}} ( z , \overline{z} ) ]_{1 1} = G_{\mathbf{X}} ( z , \overline{z} )$}.
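The duplication trick is easy to set up numerically; below is a small finite-$N$, finite-$\epsilon$ illustration of mine (hence only an approximation to the limits in (\ref{eq:MatrixValuedGreenFunctionDefinition1})), which for the square GinUE should give \smash{$[ \mathcal{G} ]_{1 1} \approx \overline{z}$} inside the unit disk (circular law) and $\approx 1 / z$ outside:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N, eps = 300, 0.05               # finite-size stand-ins for the limits

X = (rng.standard_normal((N, N))
     + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

def btr(A):                      # block-trace: 2N x 2N -> 2 x 2
    n = A.shape[0] // 2
    return np.array([[np.trace(A[:n, :n]), np.trace(A[:n, n:])],
                     [np.trace(A[n:, :n]), np.trace(A[n:, n:])]])

def calG(z):                     # matrix-valued Green function
    Z = np.array([[z, 1j * eps], [1j * eps, np.conj(z)]])
    Xdup = np.block([[X, np.zeros((N, N))],
                     [np.zeros((N, N)), X.conj().T]])
    return btr(np.linalg.inv(np.kron(Z, np.eye(N)) - Xdup)) / N

for z in (0.3 + 0.2j, 1.5 + 0.5j):
    print(z, calG(z)[0, 0])      # ~ conj(z) inside, ~ 1/z outside
\end{verbatim}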
(ii) The extension to the complete quaternion space---by replacing in (\ref{eq:MatrixValuedGreenFunctionDefinition1}) the infinitesimal $\epsilon$ with a finite complex number,
\begin{equation}\label{eq:QuaternionDefinition}
\mathcal{Q} \equiv \left( \begin{array}{cc} c & \ii \overline{d} \\ \ii d & \overline{c} \end{array} \right) , \quad c , d \in \mathbb{C} ,
\end{equation}
one obtains a quaternion argument of the ``quaternion Green function,''
\begin{equation}\label{eq:QuaternionGreenFunctionDefinition}
\mathcal{G}_{\mathbf{X}} ( \mathcal{Q} ) \equiv \frac{1}{N} \bTr \la \frac{1}{\mathcal{Q} \otimes \Id_{N} - \mathbf{X}^{\dupl}} \ra .
\end{equation}
The mean spectral density is reproduced from this quaternion function by approaching the complex plane ($c = z$, $d = \epsilon$), just as in the Hermitian case, it is obtained from the complex Green function by approaching the real line ($z = x \pm \ii \epsilon$) (\ref{eq:MeanSpectralDensityFromHolomorphicGreenFunction}).
(iii) The functional inversion can now be performed in the quaternion space,
\begin{equation}\label{eq:QuaternionBlueFunctionDefinition}
\mathcal{G}_{\mathbf{X}} \left( \mathcal{B}_{\mathbf{X}} ( \mathcal{Q} ) \right) = \mathcal{B}_{\mathbf{X}} \left( \mathcal{G}_{\mathbf{X}} ( \mathcal{Q} ) \right) = \mathcal{Q}
\end{equation}
(the ``quaternion Blue function''), and the ``quaternion addition law'' for free non-Hermitian random matrices can be proven,
\begin{equation}\label{eq:QuaternionAdditionLaw}
\mathcal{B}_{\mathbf{X}_{1} + \mathbf{X}_{2}} ( \mathcal{Q} ) = \mathcal{B}_{\mathbf{X}_{1}} ( \mathcal{Q} ) + \mathcal{B}_{\mathbf{X}_{2}} ( \mathcal{Q} ) - \mathcal{Q}^{- 1} .
\end{equation}
\emph{Eigenvalues and singular values of $\mathbf{S}$.} Formula~(\ref{eq:QuaternionAdditionLaw}) massively simplifies calculations of the mean spectral density of sums of free non-Hermitian ensembles. In particular, applying it to the weighted sum (\ref{eq:SDefinition}) implies that: (i) Its mean spectrum exhibits the rotational symmetry around zero (\ref{eq:RotationallySymmetricNonHolomorphicMTransformDefinition}). (ii) Its rotationally-symmetric nonholomorphic $N$-transform (\ref{eq:RotationallySymmetricNonHolomorphicNTransformDefinition}) is a solution to the following set of $( L + 2 )$ polynomial equations,
\begin{subequations}
\begin{align}
z &= \sum_{l = 1}^{L} M_{l} ,\label{eq:SMasterEquation1}\\
- C &= \frac{z ( z + 1 )}{\mathfrak{N}_{\mathbf{S}} ( z )} ,\label{eq:SMasterEquation2}\\
- C &= \frac{M_{l} \left( M_{l} + 1 \right)}{\left| w_{l} \right|^{2}} , \quad l = 1 , 2 , \ldots , L ,\label{eq:SMasterEquation3}
\end{align}
\end{subequations}
where $C \geq 0$ and \smash{$M_{l}$} are auxiliary unknowns. (iii) Its mean spectral domain $\mathcal{D}$ is either a disk or an annulus, whose external and internal radii are found from
\begin{subequations}
\begin{align}
R_{\textrm{ext.}}^{2} &= \mathfrak{N}_{\mathbf{S}} ( 0 ) ,\label{eq:SRExt}\\
R_{\textrm{int.}}^{2} &= \mathfrak{N}_{\mathbf{S}} ( - 1 ) .\label{eq:SRInt}
\end{align}
\end{subequations}
Furthermore, Eq.~(\ref{eq:NTransformConjecture}) means that the singular values are derived from an identical set of equations except (\ref{eq:SMasterEquation2}) which turns into
\begin{equation}\label{eq:SdSMasterEquation2}
- C = \frac{( z + 1 )^{2}}{N_{\mathbf{S}^{\dagger} \mathbf{S}} ( z )} .
\end{equation}
\subsubsection{Single ring conjecture}
\label{sss:SingleRingConjecture}
\emph{Single ring conjecture.} Another hypothesis~\cite{Jarosz2011-01}, generalizing the ``Feinberg-Zee single ring theorem''~\cite{FeinbergZee1997-02,FeinbergScalettarZee2001,Feinberg2006,GuionnetKrishnapurZeitouni2009}, states that for non-Hermitian random matrix models with the rotational symmetry (\ref{eq:RotationallySymmetricNonHolomorphicMTransformDefinition}), the mean spectral domain $\mathcal{D}$ is always a disk or an annulus. It has been proven for the models $\mathbf{S}$ and $\mathbf{P}$; for $\mathbf{T}$, $\mathbf{W}$, $\mathbf{V}$, I will assume it holds, and support it a posteriori by numerical simulations.
\emph{Radii of the disk/annulus.} If this conjecture is true, then the external and internal radii of the annulus (becoming a disk if the internal radius shrinks to zero) are still given by (\ref{eq:SRExt})-(\ref{eq:SRInt}), provided that there are no zero modes. Indeed, on the borderline of the domain $\mathcal{D}$, the nonholomorphic Green function (from the inside of $\mathcal{D}$) and the holomorphic one (from the outside of $\mathcal{D}$) must match (\ref{eq:NonHolomorphicGreenFunctionOutsideD}). But the holomorphic Green function compatible with the rotational symmetry (\ref{eq:RotationallySymmetricNonHolomorphicMTransformDefinition}) has the form \smash{$G_{\mathbf{X}} ( z ) = ( 1 + \mathfrak{M}_{\mathbf{X}} ( R^{2} ) ) / z$}; hence, holomorphicity leaves only one possibility, namely that \smash{$\mathfrak{M}_{\mathbf{X}} ( R^{2} )$} is a constant, \smash{$G_{\mathbf{X}} ( z ) = ( 1 + \mathfrak{M} ) / z$}. Outside the external borderline (including $z = \infty$), the standard behavior \smash{$G_{\mathbf{X}} ( z ) \sim 1 / z$} as $z \to \infty$ means that $\mathfrak{M} = 0$. Inside the internal borderline (including $z = 0$), supposing there are no zero modes, the Green function must vanish identically, i.e., $\mathfrak{M} = - 1$. If there are zero modes, \smash{$\rho^{\textrm{zero modes}}_{\mathbf{X}} ( z , \overline{z} ) = \alpha \delta^{( 2 )} ( z , \overline{z} )$}, they are obtained by applying (\ref{eq:MeanSpectralDensityFromNonHolomorphicGreenFunction}) to the Green function \smash{$G^{\textrm{zero modes}}_{\mathbf{X}} ( z ) = \alpha / z$} regularized according to Eq.~(\ref{eq:ComplexDiracDeltaDefinition}); therefore, $\mathfrak{M} = \alpha - 1$. In other words,
\begin{subequations}
\begin{align}
R_{\textrm{ext.}}^{2} &= \mathfrak{N}_{\mathbf{X}} ( 0 ) ,\label{eq:XRExt}\\
R_{\textrm{int.}}^{2} &= \mathfrak{N}_{\mathbf{X}} ( \alpha - 1 ) .\label{eq:XRInt}
\end{align}
\end{subequations}
\subsubsection{erfc conjecture}
\label{sss:ErfcConjecture}
Calculations in the thermodynamic limit (\ref{eq:ThermodynamicLimit}) are capable of reproducing the mean spectral density only in the bulk of the domain $\mathcal{D}$, but not close to its borderline. However, based on earlier works~\cite{ForresterHonner1999,Kanzieper2005,KhoruzhenkoSommers2009,KanzieperSingh2010}, it has been suggested~\cite{BurdaJaroszLivanNowakSwiech20102011,Jarosz2011-01,Jarosz2010-01} how to extend the radial mean spectral density (\ref{eq:RadialMeanSpectralDensityFromRotationallySymmetricNonHolomorphicMTransform}) to the vicinity of the borderline, provided that the rotational symmetry (\ref{eq:RotationallySymmetricNonHolomorphicMTransformDefinition}) holds.
For each circle \smash{$R = R_{\textrm{b}}$} bounding $\mathcal{D}$ (one or two; cf.~Sec.~\ref{sss:SingleRingConjecture}), one should simply multiply the radial density \smash{$\rho^{\textrm{rad.}}_{\mathbf{X}} ( R )$} by the universal form-factor,
\begin{equation}\label{eq:FiniteSizeBorderlineFactor}
f_{N , q_{\textrm{b}} , R_{\textrm{b}} , s_{\textrm{b}}} ( R ) \equiv \frac{1}{2} \erfc \left( q_{\textrm{b}} s_{\textrm{b}} \left( R - R_{\textrm{b}} \right) \sqrt{N} \right) ,
\end{equation}
where \smash{$\erfc ( x ) \equiv \frac{2}{\sqrt{\pi}} \int_{x}^{\infty} \dd t \exp ( - t^{2} )$} is the complementary error function, while the sign \smash{$s_{\textrm{b}}$} is $+ 1$ for the external borderline and $- 1$ for the internal borderline, and \smash{$q_{\textrm{b}}$} is a parameter dependent on the particular model, whose evaluation requires truly finite-size methods, but whose value I will adjust by fitting to the Monte Carlo data.
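In code, the prescription is a one-liner; the sketch below (mine; the value \smash{$q_{\textrm{b}} = 1$} is a placeholder to be fitted, and SciPy is assumed for the error function) applies it to the radial density of the circular law, \smash{$\rho^{\textrm{rad.}} ( R ) = 2 R$} on the unit disk, which has a single external borderline:
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def borderline_factor(R, N, q_b, R_b, s_b):
    # universal erfc form-factor smoothing the density near R = R_b
    return 0.5 * erfc(q_b * s_b * (R - R_b) * np.sqrt(N))

N, q_b = 1000, 1.0               # q_b: model-dependent, fitted in practice
R = np.linspace(0.0, 1.3, 300)
rho_bulk = 2 * R * (R <= 1)      # thermodynamic limit: sharp edge
rho_edge = 2 * R * borderline_factor(R, N, q_b, 1.0, +1)  # smoothed edge
\end{verbatim}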
\subsection{Motivation}
\label{ss:Motivation}
\subsubsection{Interesting mathematical properties}
\label{sss:InterestingMathematicalProperties}
I find the models $\mathbf{T}$, $\mathbf{W}$, $\mathbf{V}$ mathematically interesting for the following reasons: (i) They are non-Hermitian, and the theory of such random matrices is richer and much less developed than for the Hermitian ones (cf.~e.g.~\cite{KhoruzhenkoSommers2009}). (ii) They belong to a special class of non-Hermitian matrices, namely, with rotationally-symmetric mean spectrum (\ref{eq:RotationallySymmetricNonHolomorphicMTransformDefinition}). As such, they conjecturally exhibit certain features, which demand testing (and eventually proofs): the $N$-transform conjecture (Sec.~\ref{sss:RotationalSymmetryAndNTransformConjecture}), which makes them reducible (at least concerning the level density) to Hermitian models; the single ring conjecture (Sec.~\ref{sss:SingleRingConjecture}); the erfc conjecture (Sec.~\ref{sss:ErfcConjecture}). (iii) They contain two operations widely investigated in the literature on random matrices, albeit rather separately: summation (cf.~e.g.~\cite{Stephanov1996,FeinbergZee1997-01,FeinbergZee1997-02,JanikNowakPappWambachZahed1997,JanikNowakPappZahed1997-01,JanikNowakPappZahed1997-02,HaagerupLarsen2000,GorlichJarosz2004,Rogers2010,JaroszNowak2004,JaroszNowak2006}) and multiplication (cf.~e.g.~\cite{GredeskulFreilikher1990,CrisantiPaladinVulpiani1993,Beenakker1997,Caswell2000,JacksonLautrupJohansenNielsen2002,JanikWieczorek2003,GudowskaNowakJanikJurkiewiczNowak20032005,TulinoVerdu2004,NarayananNeuberger2007,BanicaBelinschiCapitaineCollins2007,BlaizotNowak2008,LohmayerNeubergerWettig2008,BenaychGeorges2008,KanzieperSingh2010,BurdaJanikWaclaw2009,BurdaJaroszLivanNowakSwiech20102011,Jarosz2010-01,PensonZyczkowski2011,Rogers2010}).
\subsubsection{Applications to quantum information theory}
\label{sss:ApplicationsToQuantumInformationTheory}
The models $\mathbf{T}$ and $\mathbf{W}$ are encountered in the theory of quantum information. This application has been described in~\cite{ZyczkowskiPensonNechitaCollins2010,SEMZyczkowski2010}, as well as in Sec.~I B 2 of~\cite{Jarosz2011-01}, and I refer the reader to these sources for a more detailed exposition (the textbook~\cite{BengtssonZyczkowski2006} is an introduction to the subject). It is debatable whether one can find the model $\mathbf{V}$ in this theory; if not, the reader may treat it just as a mathematically natural generalization of $\mathbf{W}$.
\emph{Density matrix.} Fundamental objects in quantum information theory are ``mixed states,'' i.e., statistical ensembles of quantum states. Such random states arise in a number of important settings: (i) If a quantum system interacts with its environment in a complicated (noisy) way, which may be regarded as random. (ii) If a quantum system is in thermal equilibrium. (iii) If the preparation history of a quantum system is unknown or uncertain (such as for quantum analogues of classically chaotic systems). (iv) If one investigates generic properties of an unknown complicated quantum system, one may assume it is random. (v) If a system consists of subsystems which are entangled, each of them must be described by a mixed state (and quantum entanglement is a central feature in the theory of quantum computers). A classic example is light polarization: a polarized photon can be written as a superposition of two helicities, right and left circular polarizations, $( a | R \rangle + b | L \rangle )$ (a ``pure state''); whereas unpolarized light may be described as being $| R \rangle$ or $| L \rangle$, each with probability $1 / 2$ (a mixed state).
A mixed state cannot be represented by a single state vector---the proper formalism is that of a ``density matrix.'' If one considers a statistical mixture of $N$ pure states \smash{$| \psi_{i} \rangle$}, each with probability \smash{$p_{i} \in [ 0 , 1 ]$} (\smash{$\sum_{i = 1}^{N} p_{i} = 1$}), the density matrix is defined as \smash{$\boldsymbol{\rho} \equiv \sum_{i = 1}^{N} p_{i} | \psi_{i} \rangle \langle \psi_{i} |$} (a convex combination of pure states, sometimes called an ``incoherent superposition''), and it has the defining property that the expectation value of any observable $\mathbf{A}$ is given by $\Tr ( \boldsymbol{\rho} \mathbf{A} )$. More generally, any operator which is Hermitian, positive-semidefinite (its eigenvalues are nonnegative) and has trace one (its eigenvalues sum to one) may be considered a density matrix.
I will be interested in complicated composite quantum systems. For example, for a bi-partite system consisting of a subsystem $\mathcal{A}$ of size \smash{$N_{1}$} and $\mathcal{B}$ of size \smash{$N_{2}$} (with orthonormal bases, \smash{$\{ | i \rangle_{\mathcal{A}} \}$} and \smash{$\{ | j \rangle_{\mathcal{B}} \}$}), a general pure state of the full system is ``entangled,'' i.e., it cannot be written as a tensor product of pure states of $\mathcal{A}$ and $\mathcal{B}$,
\begin{equation}\label{eq:EntangledStateOfABipartiteSystem}
| \psi \rangle \equiv \sum_{i = 1}^{N_{1}} \sum_{j = 1}^{N_{2}} X_{i j} | i \rangle_{\mathcal{A}} \otimes | j \rangle_{\mathcal{B}} .
\end{equation}
In other words, $\mathcal{A}$ or $\mathcal{B}$ cannot be said to be in any definite pure state. But they can be characterized by a density matrix---consider any observable $\mathbf{A}$ on $\mathcal{A}$; its expectation value in the state $| \psi \rangle$ reads \smash{$\Tr ( \boldsymbol{\rho}_{\mathcal{A}} \mathbf{A} )$}, where the ``reduced density matrix'' of $\mathcal{A}$,
\begin{equation}\label{eq:ReducedDensityMatrix}
\boldsymbol{\rho}_{\mathcal{A}} \equiv \frac{\mathbf{X} \mathbf{X}^{\dagger}}{\Tr \left( \mathbf{X} \mathbf{X}^{\dagger} \right)} ,
\end{equation}
and $\mathbf{X}$ is a rectangular \smash{$N_{1} \times N_{2}$} complex matrix. (This may equivalently be obtained by performing the ``partial trace'' \smash{$\Tr_{\mathcal{B}}$} of the density matrix of the full system, $| \psi \rangle \langle \psi |$, i.e., \smash{$\sum_{j} {}_{\mathcal{B}} \langle j | \psi \rangle \langle \psi | j \rangle_{\mathcal{B}}$}, and normalizing it.) Therefore, the eigenvalues of \smash{$\boldsymbol{\rho}_{\mathcal{A}}$} (the squared singular values of $\mathbf{X}$, normalized to sum to one) are precisely the above probabilities \smash{$p_{i}$}. This picture should be supplemented by the basic notion discussed above, of replacing the complicatedness of the full system by randomness---considering $\mathbf{X}$ to be some random matrix, and calculating the mean density of its singular values.
An important measure for a mixed state is its ``von Neumann entropy,''
\begin{equation}\label{eq:VonNeumannEntropy}
\mathcal{S} ( \boldsymbol{\rho} ) \equiv - \Tr ( \boldsymbol{\rho} \log \boldsymbol{\rho} ) = - \sum_{i = 1}^{N} p_{i} \log p_{i} .
\end{equation}
It represents the degree of randomness (mixture) in the mixed state (thus, quantum information; thus, it also measures entanglement)---it is larger the more dispersed the probabilities \smash{$p_{i}$} are; it is zero for a pure state, and reaches its maximum $\log N$ when all the probabilities are equal [the full system is then in the ``maximally entangled (Bell) state'']. Moreover, a measurement can never decrease this entropy; consequently, a measurement may take a pure state to a mixed one, but not conversely (unless there is a compensating growth of entropy in the environment).
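As a small numerical aside of mine (size and seed arbitrary): for $\mathbf{X}$ a square GinUE matrix in (\ref{eq:ReducedDensityMatrix}) with \smash{$N_{1} = N_{2} = N$}, the average entropy of the resulting random mixed states is known to approach $\log N - 1 / 2$ for large $N$ (Page's formula), which is easy to check:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
N = 64

X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
rho = X @ X.conj().T
rho /= np.trace(rho).real        # reduced density matrix

p = np.linalg.eigvalsh(rho)
p = p[p > 1e-14]                 # drop numerical zeros before the log
print("entropy        :", -np.sum(p * np.log(p)))
print("maximum log N  :", np.log(N))
print("Page, log N-1/2:", np.log(N) - 0.5)
\end{verbatim}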
I will focus on some physically motivated ``structured ensembles'' of random states in which an important role is played by the tensor product structure of the Hilbert spaces of the subsystems, i.e., which are invariant under local unitary transformations in the respective Hilbert spaces.
\emph{Model $\mathbf{S}$.} The following construction leads to the appearance of the model $\mathbf{S}$ in the above setting:
(i) Consider a bi-partite system $( \mathcal{A} , \mathcal{B} )$ of size $N \times N$ in the Bell state,
\begin{equation}\label{eq:BellStatePsiPlus}
| \Psi^{+}_{\mathcal{A} \mathcal{B}} \rangle \equiv \frac{1}{\sqrt{N}} \sum_{i = 1}^{N} | i \rangle_{\mathcal{A}} \otimes | i \rangle_{\mathcal{B}} .
\end{equation}
(ii) Consider the following type of randomness---apply to this Bell state $L$ independent random local unitary transformations (belonging to the CUE) \smash{$\mathbf{U}_{l}$} in the principal system $\mathcal{A}$, and form a coherent superposition of the resulting (maximally entangled) states with some weights \smash{$w_{l}$},
\begin{equation}
\begin{split}\label{eq:ModelSInQuantumInformationTheory}
| \psi \rangle &\equiv \sum_{l = 1}^{L} w_{l} \left( \mathbf{U}_{l} \otimes \mathbf{1}_{N} \right) | \Psi^{+}_{\mathcal{A} \mathcal{B}} \rangle =\\
&= \left( \mathbf{S} \otimes \mathbf{1}_{N} \right) | \Psi^{+}_{\mathcal{A} \mathcal{B}} \rangle =\\
&= \frac{1}{\sqrt{N}} \sum_{i , j = 1}^{N} S_{i j} | i \rangle_{\mathcal{A}} \otimes | j \rangle_{\mathcal{B}} .
\end{split}
\end{equation}
(iii) Hence, the reduced density matrix for $\mathcal{A}$ is given by (\ref{eq:ReducedDensityMatrix}) with $\mathbf{X} = \mathbf{S}$. [The normalization constant is \smash{$\Tr ( \mathbf{S} \mathbf{S}^{\dagger} ) = N \sum_{l = 1}^{L} | w_{l} |^{2} + \ldots$}, where the dots are much smaller than $N$ in the large-$N$ limit (\ref{eq:ThermodynamicLimit}). So one may set for convenience, \smash{$\sum_{l = 1}^{L} | w_{l} |^{2} = 1$}, and investigate the singular values of $\mathbf{S}$ rescaled by $N$.]
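The partial-trace bookkeeping behind (\ref{eq:ReducedDensityMatrix}) can be verified directly for this construction; a small sketch of mine (Python; $N$, $L$, and the weights are arbitrary choices) builds the state (\ref{eq:ModelSInQuantumInformationTheory}) as an $N^{2}$-component vector, traces out $\mathcal{B}$, and compares with \smash{$\mathbf{S} \mathbf{S}^{\dagger} / \Tr ( \mathbf{S} \mathbf{S}^{\dagger} )$}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
N, w = 30, np.array([0.6, 0.5, 0.4])

def haar_unitary(n):             # CUE via QR of a Ginibre matrix
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

S = sum(wl * haar_unitary(N) for wl in w)

psi = S.flatten() / np.sqrt(N)   # psi_{ij} = S_{ij}/sqrt(N)
psi /= np.linalg.norm(psi)

# partial trace over B of |psi><psi|, via index reshuffling
rho_full = np.outer(psi, psi.conj()).reshape(N, N, N, N)
rho_A = np.einsum("ijkj->ik", rho_full)

print(np.allclose(rho_A,
                  S @ S.conj().T / np.trace(S @ S.conj().T)))
\end{verbatim}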
\emph{Model $\mathbf{T}$.} It is a matter of a nested repetition of the above procedure to obtain the model $\mathbf{T}$. For instance, for $J = 2$:
(i) Consider the above pure state (\ref{eq:ModelSInQuantumInformationTheory}), constructed according to a matrix \smash{$\mathbf{S}_{2}$} of length \smash{$L_{2}$}, and take its \smash{$L_{1}$} copies.
(ii) Perform a random independent CUE rotation \smash{$\mathbf{U}_{1 l}$} of each of these copies, and form their coherent superposition with some weights \smash{$w_{1 l}$}.
(iii) The reduced density matrix for $\mathcal{A}$ (\ref{eq:ReducedDensityMatrix}) will have \smash{$\mathbf{X} = \mathbf{T} = \mathbf{S}_{1} \mathbf{S}_{2}$}. One may continue this process an arbitrary number $J$ of times.
\emph{Model $\mathbf{P}$.} This ensemble originates from a measurement in the product basis of Bell states, in the following way:
(i) Consider a composite system with $2 K$ subsystems of sizes
\begin{equation}\label{eq:ModelPInQuantumInformationTheoryDerivation01}
\underbrace{\mathcal{A}_{1} ,}_{\textrm{size } N_{1}} \underbrace{\mathcal{A}_{2} , \mathcal{A}_{3}}_{\textrm{size } N_{2}} , \ldots , \underbrace{\mathcal{A}_{2 K - 2} , \mathcal{A}_{2 K - 1}}_{\textrm{size } N_{K}} , \underbrace{\mathcal{A}_{2 K} .}_{\textrm{size } N_{K + 1}}
\end{equation}
(ii) Take an arbitrary product state \smash{$| \psi_{0} \rangle \equiv | 0 \rangle_{\mathcal{A}_{1}} \otimes | 0 \rangle_{\mathcal{A}_{2}} \otimes \ldots \otimes | 0 \rangle_{\mathcal{A}_{2 K}}$}, and apply to it random unitary local transformations acting on the pairs of subsystems,
\begin{equation}\label{eq:ModelPInQuantumInformationTheoryDerivation02}
| \psi \rangle \equiv \left( \mathcal{U}_{\mathcal{A}_{1} \mathcal{A}_{2}} \otimes \mathcal{U}_{\mathcal{A}_{3} \mathcal{A}_{4}} \otimes \ldots \otimes \mathcal{U}_{\mathcal{A}_{2 K - 1} \mathcal{A}_{2 K}} \right) | \psi_{0} \rangle .
\end{equation}
The result is separable with respect to the above pairing, i.e., it can be expanded in the product basis as
\begin{equation}
\begin{split}\label{eq:ModelPInQuantumInformationTheoryDerivation03}
| \psi \rangle = &\sum_{i_{1} = 1}^{N_{1}} \sum_{i_{2} , i_{2}^{\prime} = 1}^{N_{2}} \ldots \sum_{i_{K} , i_{K}^{\prime} = 1}^{N_{K}} \sum_{i_{K + 1} = 1}^{N_{K + 1}}\\
&[ \mathbf{A}_{1} ]_{i_{1} i_{2}} [ \mathbf{A}_{2} ]_{i_{2}^{\prime} i_{3}} \ldots [ \mathbf{A}_{K - 1} ]_{i_{K - 1}^{\prime} i_{K}} [ \mathbf{A}_{K} ]_{i_{K} i_{K + 1}} \cdot\\
&\cdot | i_{1} \rangle_{\mathcal{A}_{1}} \otimes | i_{2} \rangle_{\mathcal{A}_{2}} \otimes | i_{2}^{\prime} \rangle_{\mathcal{A}_{3}} \otimes \ldots \otimes | i_{K + 1} \rangle_{\mathcal{A}_{2 K}} ,
\end{split}
\end{equation}
where the coefficients, gathered into $K$ rectangular (\smash{$N_{k} \times N_{k + 1}$}) matrices \smash{$\mathbf{A}_{k}$}, may be assumed independent Gaussian (\ref{eq:RectangularGinUEJPDF}).
(iii) Consider the Bell states (\ref{eq:BellStatePsiPlus}) on the pairs \smash{$( \mathcal{A}_{2} , \mathcal{A}_{3} )$}, \ldots, \smash{$( \mathcal{A}_{2 K - 2} , \mathcal{A}_{2 K - 1} )$}, and project $| \psi \rangle$ onto their product,
\begin{equation}\label{eq:ModelPInQuantumInformationTheoryDerivation04}
\mathcal{P} \equiv \mathbf{1}_{\mathcal{A}_{1}} \otimes \left( \bigotimes_{k = 2}^{K} | \Psi^{+}_{\mathcal{A}_{2 k - 2} \mathcal{A}_{2 k - 1}} \rangle \langle \Psi^{+}_{\mathcal{A}_{2 k - 2} \mathcal{A}_{2 k - 1}} | \right) \otimes \mathbf{1}_{\mathcal{A}_{2 K}} ,
\end{equation}
which leads to a random pure state describing the remaining two subsystems, \smash{$\mathcal{A}_{1}$} and \smash{$\mathcal{A}_{2 K}$}, through the matrix $\mathbf{P}$,
\begin{equation}
\begin{split}\label{eq:ModelPInQuantumInformationTheoryDerivation05}
| \phi \rangle &\equiv \mathcal{P} | \psi \rangle \propto\\
&\propto \sum_{i_{1} = 1}^{N_{1}} \sum_{i_{K + 1} = 1}^{N_{K + 1}} [ \mathbf{P} ]_{i_{1} i_{K + 1}} | i_{1} \rangle_{\mathcal{A}_{1}} \otimes | i_{K + 1} \rangle_{\mathcal{A}_{2 K}} .
\end{split}
\end{equation}
(iv) The reduced density matrix for \smash{$\mathcal{A}_{1}$} (i.e., the normalized partial trace over \smash{$\mathcal{A}_{2 K}$}) is therefore (\ref{eq:ReducedDensityMatrix}) with $\mathbf{X} = \mathbf{P}$.
\emph{Model $\mathbf{W}$.} A direct combination of the above two algorithms leads to the model $\mathbf{W}$---one should consider the random pure states corresponding to the model $\mathbf{T}$ on the $2 K$ subsystems (\ref{eq:ModelPInQuantumInformationTheoryDerivation01}), and proceed as above.
In the literature, one may find an expression for the mean density of the singular values of $\mathbf{W}$ only in the case of $J = 1$, \smash{$L_{1} = 2$}, \smash{$w_{1 l} = 1 / \sqrt{2}$} (for $l = 1 , 2$), $K = 1$, \smash{$\sigma_{1} = 1$}, \smash{$r_{1} = 1$}---the ``Bures distribution''~\cite{Bures1969,SommersZyczkowski2004},
\begin{equation}
\begin{split}\label{eq:BuresDistribution}
\rho_{\mathbf{W}^{\dagger} \mathbf{W}} ( x ) = &\frac{1}{4 \sqrt{3} \pi} \Bigg( \left( \frac{\beta}{x} + \sqrt{\frac{\beta^{2}}{x^{2}} - 1} \right)^{2 / 3} -\\
&-\left( \frac{\beta}{x} - \sqrt{\frac{\beta^{2}}{x^{2}} - 1} \right)^{2 / 3} \Bigg) ,
\end{split}
\end{equation}
for $x \in [ 0 , \beta ]$ and zero otherwise, where for short, \smash{$\beta \equiv 3 \sqrt{3}$}. The chief purpose of this paper is to extend this result to arbitrary values of the parameters, even to a product of a number of matrices $\mathbf{W}$, as well as to the mean density of the eigenvalues.
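For later reference, the following Monte Carlo sketch of mine (sizes, seed, and sample count arbitrary) reproduces the Bures distribution (\ref{eq:BuresDistribution}) from its defining ensemble, \smash{$\mathbf{W} = \frac{1}{\sqrt{2}} ( \mathbf{U}_{1} + \mathbf{U}_{2} ) \mathbf{A}$} with a square GinUE matrix $\mathbf{A}$:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
N, beta = 300, 3 * np.sqrt(3)

def haar_unitary(n):
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

sv2 = []                 # J=1, L=2, w=1/sqrt(2), K=1, r=1, sigma=1
for _ in range(20):
    S = (haar_unitary(N) + haar_unitary(N)) / np.sqrt(2)
    A = (rng.standard_normal((N, N))
         + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
    W = S @ A
    sv2.extend(np.linalg.eigvalsh(W.conj().T @ W))

x = np.linspace(1e-3, beta, 400)
a = beta / x
rho = ((a + np.sqrt(a**2 - 1))**(2 / 3)
       - (a - np.sqrt(a**2 - 1))**(2 / 3)) / (4 * np.sqrt(3) * np.pi)

plt.hist(sv2, bins=80, density=True, alpha=0.4, label="Monte Carlo")
plt.plot(x, rho, "r-", label="Bures")
plt.xlabel("x"); plt.legend(); plt.show()
\end{verbatim}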
\section{Generalized Bures products}
\label{s:GeneralizedBuresProducts}
\subsection{Generalized multiplication laws}
\label{ss:GeneralizedMultiplicationLaws}
This paper deals with products of random rectangular or non-Hermitian matrices; hence, I will start by adjusting the free probability multiplication law (\ref{eq:MultiplicationLaw}) to such a situation. Consider a product of arbitrary independent random matrices,
\begin{equation}\label{eq:XDefinition}
\mathbf{X} \equiv \mathbf{X}_{1} \mathbf{X}_{2} \ldots \mathbf{X}_{I} ,
\end{equation}
where \smash{$\mathbf{X}_{i}$}, $i = 1 , 2 , \ldots , I$, is rectangular of dimensions \smash{$T_{i} \times T_{i + 1}$}, which tend to infinity in such a way that the rectangularity ratios
\begin{equation}\label{eq:sDefinition}
s_{i} \equiv \frac{T_{i}}{T_{I + 1}} ,
\end{equation}
remain finite. I will follow the steps sketched in~Sec.~\ref{sss:FreeProbabilityAndModelP} (cf.~\cite{BurdaJaroszLivanNowakSwiech20102011}) to derive the mean densities of its singular values and, if \smash{$s_{1} = 1$}, also its eigenvalues.
\subsubsection{Singular values of the product $\mathbf{X}$ via cyclic shifts and multiplication law}
\label{sss:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLaw}
Let me for simplicity set $I = 2$ here. I begin with the eigenvalues of the Hermitian \smash{$T_{3} \times T_{3}$} random matrix,
\begin{equation}\label{eq:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLawDerivation01}
\mathbf{X}^{\dagger} \mathbf{X} = \mathbf{X}_{2}^{\dagger} \mathbf{X}_{1}^{\dagger} \mathbf{X}_{1} \mathbf{X}_{2} .
\end{equation}
However, as a first step, consider instead the \smash{$T_{2} \times T_{2}$} matrix,
\begin{equation}\label{eq:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLawDerivation02}
\mathbf{Y} \equiv \left( \mathbf{X}_{1}^{\dagger} \mathbf{X}_{1} \right) \left( \mathbf{X}_{2} \mathbf{X}_{2}^{\dagger} \right) ,
\end{equation}
which differs from the previous one only by a cyclic shift of the terms. Therefore, as follows from (\ref{eq:HolomorphicMTransformMomentExpansion}), their $N$-transforms are related by
\begin{equation}\label{eq:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLawDerivation03}
N_{\mathbf{X}^{\dagger} \mathbf{X}} ( z ) = N_{\mathbf{Y}} \left( \frac{T_{3}}{T_{2}} z \right) .
\end{equation}
Now, $\mathbf{Y}$ is a product of two free Hermitian random matrices (their freeness is what I actually mean by the assumed ``independence'' of \smash{$\mathbf{X}_{1}$} and \smash{$\mathbf{X}_{2}$}), and thus the multiplication law (\ref{eq:MultiplicationLaw}) can be applied,
\begin{equation}
\begin{split}\label{eq:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLawDerivation04}
N_{\mathbf{Y}} ( z ) &= \frac{z}{z + 1} N_{\mathbf{X}_{1}^{\dagger} \mathbf{X}_{1}} ( z ) N_{\mathbf{X}_{2} \mathbf{X}_{2}^{\dagger}} ( z ) =\\
&= \frac{z}{z + 1} N_{\mathbf{X}_{1}^{\dagger} \mathbf{X}_{1}} ( z ) N_{\mathbf{X}_{2}^{\dagger} \mathbf{X}_{2}} \left( \frac{T_{2}}{T_{3}} z \right) ,
\end{split}
\end{equation}
where in the last step I used an analogue of (\ref{eq:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLawDerivation03}). Finally, inserting (\ref{eq:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLawDerivation04}) into (\ref{eq:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLawDerivation03}), one obtains a multiplication law which allows one to calculate the singular values of $\mathbf{X}$ from the singular values of \smash{$\mathbf{X}_{1}$} and \smash{$\mathbf{X}_{2}$},
\begin{equation}\label{eq:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLawDerivation05}
N_{\mathbf{X}^{\dagger} \mathbf{X}} ( z ) = \frac{z}{z + \frac{T_{2}}{T_{3}}} N_{\mathbf{X}_{1}^{\dagger} \mathbf{X}_{1}} \left( \frac{T_{3}}{T_{2}} z \right) N_{\mathbf{X}_{2}^{\dagger} \mathbf{X}_{2}} ( z ) .
\end{equation}
This formula may be generalized to any $I$,
\begin{equation}\label{eq:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLawDerivation06}
N_{\mathbf{X}^{\dagger} \mathbf{X}} ( z ) = \frac{z^{I - 1}}{\prod_{i = 2}^{I} \left( z + s_{i} \right)} \prod_{i = 1}^{I} N_{\mathbf{X}_{i}^{\dagger} \mathbf{X}_{i}} \left( \frac{z}{s_{i + 1}} \right)
\end{equation}
[cf.~Eq.~(58) of the first position in~\cite{BurdaJaroszLivanNowakSwiech20102011}].
\subsubsection{Eigenvalues of the product $\mathbf{X}$ assuming it is square and has rotationally-symmetric mean spectrum}
\label{sss:EigenvaluesOfTheProductXAssumingItIsSquareAndHasRotationallySymmetricMeanSpectrum}
If \smash{$s_{1} = 1$}, i.e., $\mathbf{X}$ is square, one may also ask about its eigenvalues. Assume that its mean spectrum has the rotational symmetry around zero (\ref{eq:RotationallySymmetricNonHolomorphicMTransformDefinition}). Combining the $N$-transform conjecture (\ref{eq:NTransformConjecture}) with the multiplication law (\ref{eq:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLawDerivation06}), one finds the appropriate formula,
\begin{equation}\label{eq:EigenvaluesOfTheProductXAssumingItIsSquareAndHasRotationallySymmetricMeanSpectrumDerivation01}
\mathfrak{N}_{\mathbf{X}} ( z ) = \prod_{i = 1}^{I} \frac{z}{z + s_{i}} N_{\mathbf{X}_{i}^{\dagger} \mathbf{X}_{i}} \left( \frac{z}{s_{i + 1}} \right) .
\end{equation}
\subsubsection{Eigenvalues of the product $\mathbf{X}$ assuming all \smash{$\mathbf{X}_{i}$} are square and have rotationally-symmetric mean spectra}
\label{sss:EigenvaluesOfTheProductXAssumingAllXiAreSquareAndHaveRotationallySymmetricMeanSpectra}
If moreover all \smash{$s_{i} = 1$}, i.e., all \smash{$\mathbf{X}_{i}$} are square, and also the mean spectra of all \smash{$\mathbf{X}_{i}$} are rotationally symmetric around zero, then the right-hand side of the multiplication law (\ref{eq:EigenvaluesOfTheProductXAssumingItIsSquareAndHasRotationallySymmetricMeanSpectrumDerivation01}) may be expressed through the eigenvalues of \smash{$\mathbf{X}_{i}$} by virtue of the $N$-transform conjecture (\ref{eq:NTransformConjecture}), which yields simply
\begin{equation}\label{eq:EigenvaluesOfTheProductXAssumingAllXiAreSquareAndHaveRotationallySymmetricMeanSpectraDerivation01}
\mathfrak{N}_{\mathbf{X}} ( z ) = \prod_{i = 1}^{I} \mathfrak{N}_{\mathbf{X}_{i}} ( z ) .
\end{equation}
The models $\mathbf{T}$, $\mathbf{W}$, $\mathbf{V}$ all possess the property (\ref{eq:RotationallySymmetricNonHolomorphicMTransformDefinition}), since they are products of factors which separately exhibit this symmetry. Hence, one is allowed to use the multiplication law (\ref{eq:EigenvaluesOfTheProductXAssumingAllXiAreSquareAndHaveRotationallySymmetricMeanSpectraDerivation01}) for them. The rotational symmetry will moreover be confirmed by the Monte Carlo simulations.
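As a simple check of mine of the law (\ref{eq:EigenvaluesOfTheProductXAssumingAllXiAreSquareAndHaveRotationallySymmetricMeanSpectraDerivation01}): for a product of $K$ square GinUE matrices with \smash{$\sigma_{k} = 1$}, each factor has \smash{$\mathfrak{N} ( z ) = z + 1$} (cf. the circular-law example in Sec.~\ref{sss:RotationalSymmetryAndNTransformConjecture}), so \smash{$\mathfrak{N}_{\mathbf{X}} ( z ) = ( z + 1 )^{K}$}, i.e., \smash{$\mathfrak{M} = R^{2 / K} - 1$} and \smash{$\rho^{\textrm{rad.}}_{\mathbf{X}} ( R ) = ( 2 / K ) R^{2 / K - 1}$} on the unit disk [cf.~(\ref{eq:WExample1Derivation07}) below]. A minimal Monte Carlo comparison (Python; sizes and seed arbitrary):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(8)
N, K = 500, 3

def ginue(n):                    # square GinUE, <|A_ij|^2> = 1/n
    return (rng.standard_normal((n, n))
            + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)

radii = []
for _ in range(20):
    X = np.eye(N, dtype=complex)
    for _ in range(K):
        X = X @ ginue(N)
    radii.extend(np.abs(np.linalg.eigvals(X)))

R = np.linspace(1e-3, 1.0, 200)
plt.hist(radii, bins=80, density=True, alpha=0.4, label="Monte Carlo")
plt.plot(R, (2 / K) * R**(2 / K - 1), "r-", label="(2/K) R^(2/K-1)")
plt.xlabel("R"); plt.legend(); plt.show()
\end{verbatim}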
\subsection{Product $\mathbf{T}$}
\label{ss:ProductT}
\subsubsection{Master equations for $\mathbf{T}$}
\label{sss:MasterEquationsForT}
\emph{Eigenvalues.} It is now straightforward to write down the master equations for the nonholomorphic $M$-transform \smash{$\mathfrak{M} \equiv \mathfrak{M}_{\mathbf{T}} ( R^{2} )$} of the product $\mathbf{T}$ (\ref{eq:TDefinition})---they consist of the multiplication law (\ref{eq:EigenvaluesOfTheProductXAssumingAllXiAreSquareAndHaveRotationallySymmetricMeanSpectraDerivation01}),
\begin{equation}\label{eq:TMasterEquation0}
\mathfrak{N}_{\mathbf{T}} ( z ) = \prod_{j = 1}^{J} \mathfrak{N}_{\mathbf{S}_{j}} ( z ) ,
\end{equation}
which links the $J$ sets (\ref{eq:SMasterEquation1})-(\ref{eq:SMasterEquation3}),
\begin{subequations}
\begin{align}
z &= \sum_{l = 1}^{L_{j}} M_{j l} ,\label{eq:TMasterEquation1}\\
- C_{j} &= \frac{z ( z + 1 )}{\mathfrak{N}_{\mathbf{S}_{j}} ( z )} ,\label{eq:TMasterEquation2}\\
- C_{j} &= \frac{M_{j l} \left( M_{j l} + 1 \right)}{\left| w_{j l} \right|^{2}} , \quad l = 1 , 2 , \ldots , L_{j} ,\label{eq:TMasterEquation3}
\end{align}
\end{subequations}
for $j = 1 , 2 , \ldots , J$; these are \smash{$( 1 + 2 J + \sum_{j = 1}^{J} L_{j} )$} polynomial equations. They are valid inside the mean spectral domain $\mathcal{D}$, whose borderline (one or two centered circles) is given by (\ref{eq:SRExt})-(\ref{eq:SRInt}).
\emph{Singular values.} These are obtained from the same master equations, albeit with \smash{$\mathfrak{N}_{\mathbf{T}} ( z )$} in (\ref{eq:TMasterEquation0}) replaced by \smash{$\frac{z}{z + 1} N_{\mathbf{T}^{\dagger} \mathbf{T}} ( z )$} (\ref{eq:NTransformConjecture}).
I will now simplify the above equations and solve them either analytically or numerically in a number of special cases, comparing the findings with Monte Carlo simulations.
\subsubsection{Example 0}
\label{sss:TExample0}
\emph{Eigenvalues.} Before that, however, let us remark that if all the terms \smash{$\mathbf{S}_{j}$} have identical lengths, \smash{$L_{j} = L$}, and sequences of weights, \smash{$w_{j l} = w_{l}$}, then (\ref{eq:TMasterEquation1})-(\ref{eq:TMasterEquation3}) imply that all the \smash{$\mathfrak{N}_{\mathbf{S}_{j}} ( z )$} are equal to each other. This, along with (\ref{eq:TMasterEquation0}), in turn implies that the nonholomorphic $M$-transforms of $\mathbf{T}$ and any \smash{$\mathbf{S}_{j}$} are related by a simple rescaling of the argument,
\begin{equation}\label{eq:TEqualTermsScalingRelation}
\mathfrak{M}_{\mathbf{T}} \left( R^{2} \right) = \mathfrak{M}_{\mathbf{S}} \left( R^{2 / J} \right) .
\end{equation}
\emph{Singular values.} Unfortunately, there is no such scaling relation here; this becomes clear upon trying to repeat the above argument in conjunction with (\ref{eq:NTransformConjecture}).
\subsubsection{Example 1}
\label{sss:TExample1}
\begin{figure*}[t]
\includegraphics[width=\columnwidth]{Figures/1a.eps}
\includegraphics[width=\columnwidth]{Figures/1d.eps}
\includegraphics[width=\columnwidth]{Figures/1b.eps}
\includegraphics[width=\columnwidth]{Figures/1e.eps}
\includegraphics[width=\columnwidth]{Figures/1c.eps}
\includegraphics[width=\columnwidth]{Figures/1f.eps}
\caption{Theoretical level densities versus Monte Carlo data for the model $\mathbf{T}$. Top row concerns the eigenvalues, middle row the eigenvalues plus the erfc form-factor, bottom row the singular values. Left column illustrates Example 1, right column Example 2.}
\label{fig:ModelT}
\end{figure*}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{Figures/2.eps}
\caption{Numerical verification of the property described in \emph{Remark 2} in Sec.~\ref{sss:TExample1}, that even though the model $\mathbf{T}$ may be rewritten as a sum of CUE matrices, it is very different from the model $\mathbf{S}$ due to the correlations between the thus obtained CUE terms.}
\label{fig:ModelTCorrelatedU}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=\columnwidth]{Figures/3a.eps}
\includegraphics[width=\columnwidth]{Figures/3d.eps}
\includegraphics[width=\columnwidth]{Figures/3b.eps}
\includegraphics[width=\columnwidth]{Figures/3e.eps}
\includegraphics[width=\columnwidth]{Figures/3c.eps}
\includegraphics[width=\columnwidth]{Figures/3f.eps}
\caption{Theoretical level densities versus Monte Carlo data for the model $\mathbf{W}$, with $K = 2$. Top row concerns the eigenvalues, middle row the eigenvalues plus the erfc form-factor, bottom row the singular values. Left column illustrates Example 1, right column Example 2.}
\label{fig:ModelWK2}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=\columnwidth]{Figures/4a.eps}
\includegraphics[width=\columnwidth]{Figures/4d.eps}
\includegraphics[width=\columnwidth]{Figures/4b.eps}
\includegraphics[width=\columnwidth]{Figures/4e.eps}
\includegraphics[width=\columnwidth]{Figures/4c.eps}
\includegraphics[width=\columnwidth]{Figures/4f.eps}
\caption{Theoretical level densities versus Monte Carlo data for the model $\mathbf{W}$, with $K = 5$. Top row concerns the eigenvalues, middle row the eigenvalues plus the erfc form-factor, bottom row the singular values. Left column illustrates Example 1, right column Example 2. Figure (f) does not include theoretical graphs due to numerical complications encountered in solving the master equations.}
\label{fig:ModelWK5}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=\columnwidth]{Figures/5a.eps}
\includegraphics[width=\columnwidth]{Figures/5b.eps}
\caption{Numerical verification of the property described in \emph{Remark 2} in Sec.~\ref{sss:WExample1}. The theoretical graphs are the level densities of the model \smash{$\mathbf{P}_{1}$}, obtained from (\ref{eq:HolomorphicNTransformOfPDaggerP}) with all \smash{$r_{k} = 1$}.}
\label{fig:ModelWInv}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=\columnwidth]{Figures/6a.eps}
\includegraphics[width=\columnwidth]{Figures/6b.eps}
\includegraphics[width=\columnwidth]{Figures/6c.eps}
\caption{Theoretical level densities [the eigenvalues (a), the eigenvalues plus the erfc form-factor (b), the singular values (c)] versus Monte Carlo data for the model $\mathbf{V}$.}
\label{fig:ModelV}
\end{figure*}
As the first example, take arbitrary $J$ and \smash{$L_{j}$}, but suppose that the weights for each \smash{$\mathbf{S}_{j}$} are equal to each other and denoted by
\begin{equation}\label{eq:TExample1Derivation01}
w_{j l} = \frac{w_{j}}{\sqrt{L_{j}}} , \quad l = 1 , 2 , \ldots , L_{j} ,
\end{equation}
for some $J$ constants \smash{$w_{j}$}.
\emph{Eigenvalues.} Then, the master equations (\ref{eq:TMasterEquation1})-(\ref{eq:TMasterEquation3}) give explicitly
\begin{equation}\label{eq:TExample1Derivation02}
\mathfrak{N}_{\mathbf{S}_{j}} ( z ) = \left| w_{j} \right|^{2} \frac{z + 1}{\frac{z}{L_{j}} + 1} , \quad j = 1 , 2 , \ldots , J ,
\end{equation}
which, inserted into (\ref{eq:TMasterEquation0}), after the functional inversion (\ref{eq:RotationallySymmetricNonHolomorphicNTransformDefinition}) leads to a polynomial equation of order $J$ for $\mathfrak{M}$,
\begin{equation}\label{eq:TExample1Derivation03}
\frac{R^{2}}{| w |^{2}} = \frac{( \mathfrak{M} + 1 )^{J}}{\prod_{j = 1}^{J} \left( \frac{\mathfrak{M}}{L_{j}} + 1 \right)} ,
\end{equation}
where for short, \smash{$w \equiv \prod_{j = 1}^{J} w_{j}$}. [Notice a certain similarity to the counterpart Eq.~(\ref{eq:HolomorphicNTransformOfPDaggerP}) for $\mathbf{P}$; indeed, I will show (Sec.~\ref{sss:WExample1}) that $\mathbf{T}$ of Example 1 is in some sense ``inverse'' to a certain class of models $\mathbf{P}$.]
Inserting $\mathfrak{M} = 0$ or $\mathfrak{M} = - 1$ (\ref{eq:SRExt})-(\ref{eq:SRInt}) (no zero modes here, $\alpha = 0$) into (\ref{eq:TExample1Derivation03}), one finds that the mean spectral domain is a centered disk of radius
\begin{equation}\label{eq:TExample1Derivation04}
R_{\textrm{ext.}} = | w | .
\end{equation}
In particular, if all \smash{$L_{j} = L$}, Eq.~(\ref{eq:TExample1Derivation03}) can be solved explicitly, yielding (\ref{eq:RadialMeanSpectralDensityFromRotationallySymmetricNonHolomorphicMTransform}),
\begin{equation}\label{eq:TExample1Derivation05}
\rho^{\textrm{rad.}}_{\mathbf{T}} ( R ) = 2 | w |^{2 / J} \frac{1}{J} \left( 1 - \frac{1}{L} \right) \frac{R^{2 / J - 1}}{\left( | w |^{2 / J} - \frac{R^{2 / J}}{L} \right)^{2}} ,
\end{equation}
for \smash{$R \leq | w |$}, and zero otherwise. This could also be obtained by using the scaling relation (\ref{eq:TEqualTermsScalingRelation}) and the corresponding result for $\mathbf{S}$, i.e., Eq.~(58) of~\cite{Jarosz2011-01}.
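A quick numerical check of mine of (\ref{eq:TExample1Derivation05}) (Python; sizes and seed arbitrary): for $J = L = 2$ and $w = 1$ the formula reduces to \smash{$\rho^{\textrm{rad.}}_{\mathbf{T}} ( R ) = \frac{1}{2} ( 1 - R / 2 )^{- 2}$} on the unit disk:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(9)
N, J, L = 600, 2, 2

def haar_unitary(n):
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

radii = []
for _ in range(20):
    T = np.eye(N, dtype=complex)
    for _ in range(J):      # T = S_1 S_2, S_j = (U_1 + U_2)/sqrt(2)
        T = T @ ((haar_unitary(N) + haar_unitary(N)) / np.sqrt(L))
    radii.extend(np.abs(np.linalg.eigvals(T)))

R = np.linspace(1e-3, 1.0, 200)
plt.hist(radii, bins=80, density=True, alpha=0.4, label="Monte Carlo")
plt.plot(R, 0.5 / (1 - R / 2)**2, "r-", label="theory")
plt.xlabel("R"); plt.legend(); plt.show()
\end{verbatim}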
\emph{Singular values.} The master equation for the holomorphic $M$-transform \smash{$M \equiv M_{\mathbf{T}^{\dagger} \mathbf{T}} ( z )$} is a small modification of (\ref{eq:TExample1Derivation03}) according to (\ref{eq:NTransformConjecture}),
\begin{equation}\label{eq:TExample1Derivation06}
\frac{z}{| w |^{2}} = \frac{( M + 1 )^{J + 1}}{M \prod_{j = 1}^{J} \left( \frac{M}{L_{j}} + 1 \right)} ,
\end{equation}
which is polynomial of order $( J + 1 )$.
The above findings, along with the erfc conjecture (\ref{eq:FiniteSizeBorderlineFactor}), are tested against Monte Carlo data in Fig.~\ref{fig:ModelT} [(a), (b), (c)].
\emph{Remark 1: Divergence at zero.} Let me note that from the above equations one may easily derive an interesting feature of the level densities---their divergence close to zero. I will apply the logic presented in~\cite{BurdaJaroszLivanNowakSwiech20102011} to this and the following models. Assume that for $z \to 0$, also $z \mathfrak{G} \to 0$ and $z G \to 0$ (recall, $\mathfrak{M} = z \mathfrak{G} - 1$, $M = z G - 1$), which will be verified a posteriori. Then the denominators of the right hand sides of [(\ref{eq:TExample1Derivation03}), (\ref{eq:TExample1Derivation06})] tend to nonzero constants, and consequently, \smash{$\mathfrak{G} \sim z^{1 / J - 1} \overline{z}^{1 / J}$}, \smash{$G \sim z^{- J / ( J + 1 )}$}, i.e.,
\begin{subequations}
\begin{align}
\rho^{\textrm{rad.}}_{\mathbf{T}} ( R ) &\sim R^{- \frac{d - 2}{d}} , \quad R \to 0 ,\label{eq:TExample1Derivation07a}\\
\rho_{\mathbf{T}^{\dagger} \mathbf{T}} ( x ) &\sim x^{- \frac{d}{d + 1}} , \quad x \to 0 ,\label{eq:TExample1Derivation07b}
\end{align}
\end{subequations}
where
\begin{equation}\label{eq:TExample1Derivation08}
d = J .
\end{equation}
[The initial suppositions thus hold true: \smash{$z \mathfrak{G} \sim R^{2 / J} \to 0$} and \smash{$z G \sim z^{1 / ( J + 1 )} \to 0$}.] Note that the mean density of the singular values diverges for any $J$, while for the eigenvalues, the radial mean spectral density is finite for $J = 1$ (i.e., the model $\mathbf{S}$, for which it vanishes at zero) or $J = 2$, in which case, however, the proper density (\ref{eq:NonHermitianMeanSpectralDensityDefinition}) diverges.
\emph{Remark 2: Is $\mathbf{T}$ really different from $\mathbf{S}$?} Notice that if one opens up the brackets in the definition of the model $\mathbf{T}$ [(\ref{eq:TDefinition}), (\ref{eq:SDefinition})] (let me focus on this Example 1 and all \smash{$w_{j} = 1$}), \smash{$\mathbf{T} = \prod_{j = 1}^{J} ( L_{j}^{- 1 / 2} \sum_{l = 1}^{L_{j}} \mathbf{U}_{j l} )$}, one obtains a sum of \smash{$L \equiv \prod_{j = 1}^{J} L_{j}$} terms (multiplied by \smash{$L^{- 1 / 2}$}), each of which is a product of some $J$ CUE matrices. Now, it is known that a product of CUE matrices still belongs to the CUE; hence, $\mathbf{T}$ looks like the model $\mathbf{S}$ of length $L$---except for the fact that the various terms are now not statistically independent (free). So one might ask how relevant these correlations between the $L$ terms are, i.e., how much the level densities of $\mathbf{T}$ and of $\mathbf{S}$ of length $L$ differ. Figure~\ref{fig:ModelTCorrelatedU} illustrates the theoretical mean spectral densities of these two random matrix models---they look completely different, showing that even though $\mathbf{T}$ may be recast as a sum of CUE matrices, the correlations between them are important, and it is not our model $\mathbf{S}$ at all. In particular, the behavior at zero of the level densities in these two cases is entirely distinct, cf.~\emph{Remark 1} above. Also, with growing $L$, the density of $\mathbf{S}$ tends to that of the square GinUE distribution (cf.~\cite{Jarosz2011-01}), $\rho^{\textrm{rad.}}_{\mathbf{S}} ( R ) \to 2 R$ inside the unit circle and zero otherwise, while the limit of $\mathbf{T}$ is very different.
\subsubsection{Example 2}
\label{sss:TExample2}
As the second example, take arbitrary $J$, and let all the lengths \smash{$L_{j} = 2$}, but with arbitrary weights \smash{$w_{j 1}$}, \smash{$w_{j 2}$}.
\emph{Eigenvalues.} In this case, the master equations (\ref{eq:TMasterEquation1})-(\ref{eq:TMasterEquation3}) can be transformed into
\begin{equation}\label{eq:TExample2Derivation01}
z = \frac{1}{\frac{\left( \left| w_{j 1} \right| + \left| w_{j 2} \right| \right)^{2}}{\mathfrak{N}_{\mathbf{S}_{j}} ( z )} - 1} + \frac{1}{\frac{\left( \left| w_{j 1} \right| - \left| w_{j 2} \right| \right)^{2}}{\mathfrak{N}_{\mathbf{S}_{j}} ( z )} - 1} ,
\end{equation}
which are quadratic in \smash{$\mathfrak{N}_{\mathbf{S}_{j}} ( z )$}; they are supplemented by the multiplication law (\ref{eq:TMasterEquation0}).
The borderline of the mean spectral domain is in general a centered annulus of radii (\ref{eq:SRExt})-(\ref{eq:SRInt}),
\begin{subequations}
\begin{align}
R_{\textrm{ext.}}^{2} &= \prod_{j = 1}^{J} \left( \left| w_{j 1} \right|^{2} + \left| w_{j 2} \right|^{2} \right) ,\label{eq:TExample2Derivation02a}\\
R_{\textrm{int.}}^{2} &= \prod_{j = 1}^{J} \left| \left| w_{j 1} \right|^{2} - \left| w_{j 2} \right|^{2} \right| ,\label{eq:TExample2Derivation02b}
\end{align}
\end{subequations}
which reduces to a disk only if the (absolute values of the) two weights in at least one term are equal to each other.
\emph{Singular values.} As above, with the left-hand side of (\ref{eq:TMasterEquation0}) changed according to (\ref{eq:NTransformConjecture}).
These master equations are solved numerically and compared to the Monte Carlo eigenvalues in Fig.~\ref{fig:ModelT} [(d), (e), (f)].
\emph{Remark: Divergence at zero.} Assume that there is a certain number $1 \leq \mathcal{W} \leq J$ of terms \smash{$\mathbf{S}_{j}$} whose two weights are equal to each other in absolute value, \smash{$| w_{j 1} | = | w_{j 2} |$}; then one may ask about the behavior of the above level densities close to zero. For each of these $\mathcal{W}$ terms, Eq.~(\ref{eq:TExample2Derivation01}) becomes linear and yields (I replace $z$ at once by $\mathfrak{M} = z \mathfrak{G} - 1$ or $M = z G - 1$, respectively, and assume $z \mathfrak{G} \to 0$ and $z G \to 0$ for $z \to 0$) \smash{$\mathfrak{N}_{\mathbf{S}_{j}} ( \mathfrak{M} ) = 4 | w_{j} |^{2} z \mathfrak{G} / ( z \mathfrak{G} + 1 )$}, which tends to zero as $z \mathfrak{G}$ for $z \to 0$. For the remaining $( J - \mathcal{W} )$ terms in $\mathbf{T}$, Eq.~(\ref{eq:TExample2Derivation01}) implies that any \smash{$\mathfrak{N}_{\mathbf{S}_{j}} ( \mathfrak{M} )$} tends to a nonzero constant \smash{$| | w_{j 1} |^{2} - | w_{j 2} |^{2} |$} for $z \to 0$. Substituting these results into (\ref{eq:TMasterEquation0}), one discovers that the densities diverge at zero again according to (\ref{eq:TExample1Derivation07a})-(\ref{eq:TExample1Derivation07b}), but with
\begin{equation}\label{eq:TExample2Derivation03}
d = \mathcal{W} .
\end{equation}
(This finding is consistent with $z \mathfrak{G} \to 0$ and $z G \to 0$ for $z \to 0$.)
\subsection{Product $\mathbf{W}$}
\label{ss:ProductW}
\subsubsection{Master equations for $\mathbf{W}$}
\label{sss:MasterEquationsForW}
\emph{Eigenvalues.} The master equations for the mean spectral density of $\mathbf{W}$ (\ref{eq:WDefinition}) are easily obtained through the multiplication law (\ref{eq:EigenvaluesOfTheProductXAssumingAllXiAreSquareAndHaveRotationallySymmetricMeanSpectraDerivation01}), \smash{$\mathfrak{N}_{\mathbf{W}} ( z ) = \mathfrak{N}_{\mathbf{T}} ( z ) \mathfrak{N}_{\mathbf{P}} ( z )$}, from formula (\ref{eq:HolomorphicNTransformOfPDaggerP}) (obviously with \smash{$r_{1} = 1$})---they read
\begin{equation}\label{eq:WMasterEquation0}
\mathfrak{N}_{\mathbf{W}} ( z ) = \sigma^{2} ( z + 1 ) \prod_{k = 2}^{K} \left( \frac{z}{r_{k}} + 1 \right) \mathfrak{N}_{\mathbf{T}} ( z ) ,
\end{equation}
where the rotationally-symmetric nonholomorphic $N$-transform of $\mathbf{T}$ is given by (\ref{eq:TMasterEquation0})-(\ref{eq:TMasterEquation3}); one should substitute here \smash{$\mathfrak{M} \equiv \mathfrak{M}_{\mathbf{W}} ( R^{2} )$} in the place of $z$.
\emph{Singular values.} Choose arbitrary \smash{$r_{1}$} and use the multiplication law (\ref{eq:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLawDerivation05}) along with (\ref{eq:HolomorphicNTransformOfPDaggerP}), obtaining
\begin{equation}\label{eq:WdWMasterEquation0}
N_{\mathbf{W}^{\dagger} \mathbf{W}} ( z ) = \frac{\sigma^{2}}{\sqrt{r_{1}}} ( z + 1 ) \prod_{k = 2}^{K} \left( \frac{z}{r_{k}} + 1 \right) N_{\mathbf{T}^{\dagger} \mathbf{T}} \left( \frac{z}{r_{1}} \right) ,
\end{equation}
where the holomorphic $N$-transform of \smash{$\mathbf{T}^{\dagger} \mathbf{T}$} is given by [(\ref{eq:NTransformConjecture}), (\ref{eq:TMasterEquation0})-(\ref{eq:TMasterEquation3})]; one should replace here $z$ by \smash{$M \equiv M_{\mathbf{W}^{\dagger} \mathbf{W}} ( z )$}.
\subsubsection{Example 1}
\label{sss:WExample1}
Consider the situation described in Sec.~\ref{sss:TExample1} [i.e., arbitrary $J$ and \smash{$L_{j}$}, weights for each \smash{$\mathbf{S}_{j}$} equal to each other (\ref{eq:TExample1Derivation01})] plus arbitrary rectangularity ratios \smash{$r_{k}$}.
\emph{Eigenvalues.} The master equation is polynomial of order $( J + K )$,
\begin{equation}\label{eq:WExample1Derivation01}
\frac{R^{2}}{| w |^{2} \sigma^{2}} = ( \mathfrak{M} + 1 )^{J + 1} \frac{\prod_{k = 2}^{K} \left( \frac{\mathfrak{M}}{r_{k}} + 1 \right)}{\prod_{j = 1}^{J} \left( \frac{\mathfrak{M}}{L_{j}} + 1 \right)} ,
\end{equation}
valid inside the disk of radius (\ref{eq:XRExt})-(\ref{eq:XRInt}),
\begin{equation}\label{eq:WExample1Derivation02}
R_{\textrm{ext.}} = | w | \sigma .
\end{equation}
[The internal radius vanishes because the density of the zero modes is \smash{$\alpha = 1 - \min \{ r_{k} \}$}; hence, at $\mathfrak{M} = \alpha - 1$, either $\mathfrak{M} = - 1$ or at least one factor in the product over $k$ in (\ref{eq:WExample1Derivation01}) is zero. The same argument will be valid in all the models investigated henceforth.]
\emph{Singular values.} The master equation is polynomial of order $( J + K + 1 )$,
\begin{equation}\label{eq:WExample1Derivation03}
\frac{z}{| w |^{2} \sigma^{2}} = \sqrt{r_{1}} \frac{( M + 1 ) \left( \frac{M}{r_{1}} + 1 \right)^{J + 1}}{M} \frac{\prod_{k = 2}^{K} \left( \frac{M}{r_{k}} + 1 \right)}{\prod_{j = 1}^{J} \left( \frac{M}{r_{1} L_{j}} + 1 \right)} .
\end{equation}
Figures~\ref{fig:ModelWK2} [(a), (b), (c)] and~\ref{fig:ModelWK5} [(a), (b), (c)] present numerical solutions to these equations and the corresponding Monte Carlo results, both in perfect agreement.
\emph{Remark 1: Divergence at zero.} Following the same line of reasoning as in \emph{Remark 1} in Sec.~\ref{sss:TExample1}, one recognizes that the divergences close to zero of the level densities stemming from [(\ref{eq:WExample1Derivation01}), (\ref{eq:WExample1Derivation03})] (disregarding their possible zero-mode part, which amounts to considering \smash{$r_{1} \geq 1$}; then \smash{$L_{j} = 1 / r_{1}$} cannot occur, and the products over $j$ in the above formulae may be disregarded in this analysis) are governed by the number \smash{$\mathcal{R} \equiv \# \left\{ 2 \leq k \leq K : r_{k} = 1 \right\}$}, and given again by (\ref{eq:TExample1Derivation07a})-(\ref{eq:TExample1Derivation07b}), but with
\begin{equation}\label{eq:WExample1Derivation04}
d = ( J + 1 ) \delta_{r_{1} , 1} + \mathcal{R} .
\end{equation}
\emph{Remark 2: $\mathbf{T}$ as an ``inverse'' of $\mathbf{P}$ with integer rectangularity ratios.} The above master equations of $\mathbf{W}$ imply the following peculiar joint property of the models $\mathbf{T}$ (of Example 1) and $\mathbf{P}$: Assume that $\mathbf{P}$ (of length $K$; set for simplicity $\sigma = 1$) is such that a certain number $1 \leq J \leq K$ of the rectangularity ratios \smash{$r_{k} / r_{1}$} are integers greater than $1$. If one multiplies this $\mathbf{P}$ from the left by a product $\mathbf{T}$ of length $J$ such that the lengths \smash{$L_{j}$} of its terms are equal to these integer rectangularity ratios (and \smash{$w_{j} = 1$}), then the master equations [(\ref{eq:WExample1Derivation01}), (\ref{eq:WExample1Derivation03})] turn into
\begin{equation}\label{eq:WExample1Derivation05}
R^{2} = ( \mathfrak{M} + 1 )^{J + 1} \prod_{\substack{2 \leq k \leq K :\\\textrm{$r_{k}$ is not integer $> 1$}}} \left( \frac{\mathfrak{M}}{r_{k}} + 1 \right) ,
\end{equation}
\begin{equation}
\begin{split}\label{eq:WExample1Derivation06}
z &= \sqrt{r_{1}} \frac{( M + 1 ) \left( \frac{M}{r_{1}} + 1 \right)^{J + 1}}{M} \cdot\\
&\cdot \prod_{\substack{2 \leq k \leq K :\\\textrm{$r_{k} / r_{1}$ is not integer $> 1$}}} \left( \frac{M}{r_{k}} + 1 \right) .
\end{split}
\end{equation}
Comparing [(\ref{eq:WExample1Derivation05}), (\ref{eq:WExample1Derivation06})] with (\ref{eq:HolomorphicNTransformOfPDaggerP}), one recognizes that they are the master equations for a model \smash{$\widetilde{\mathbf{P}} \equiv \mathbf{P}_{1} \mathbf{P}_{2}$}, where \smash{$\mathbf{P}_{1} \equiv \mathbf{B}_{1} \ldots \mathbf{B}_{J}$}, with \smash{$\mathbf{B}_{j}$} being the square GinUE random matrices (\ref{eq:RectangularGinUEJPDF}), while \smash{$\mathbf{P}_{2}$} is the model $\mathbf{P}$ with the terms \smash{$\mathbf{A}_{k}$} for which \smash{$r_{k} / r_{1} = L_{j}$} removed from it.
In particular, if the model $\mathbf{P}$ is such that all of its rectangularity ratios \smash{$r_{k} / r_{1}$}, $k \geq 2$, are integers greater than $1$, then multiplying it from the left by the appropriate matrix $\mathbf{T}$ [of length $( K - 1 )$ and the lengths of its terms \smash{$L_{k} = r_{k} / r_{1}$}] yields
\begin{equation}\label{eq:WExample1Derivation07}
\mathfrak{M} = R^{2 / K} - 1 , \quad \textrm{i.e.,} \quad \rho^{\textrm{rad.}}_{\mathbf{W}} ( R ) = \frac{2}{K} R^{2 / K - 1} ,
\end{equation}
\begin{equation}\label{eq:WExample1Derivation08}
z = \sqrt{r_{1}} \frac{( M + 1 ) \left( \frac{M}{r_{1}} + 1 \right)^{K}}{M} .
\end{equation}
In other words, the model $\mathbf{T}$ (of Example 1) is ``inverse'' (at least on the level of the mean densities of eigenvalues and singular values) to the model $\mathbf{P}$ with rectangularity ratios \smash{$r_{k} / r_{1}$}, $k \geq 2$, equal to the lengths \smash{$L_{j}$}, with the ``unity'' being \smash{$\mathbf{P}_{1}$} [(\ref{eq:WExample1Derivation07}), (\ref{eq:WExample1Derivation08})]. These results are verified numerically in Figs.~\ref{fig:ModelWInv} [(a), (b)].
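The limiting density (\ref{eq:WExample1Derivation07}) is also easy to test directly. The short Monte Carlo sketch below (matrix size, sample count, and normalization are illustrative choices) generates a product of $K$ square GinUE matrices and compares the empirical radial eigenvalue density with $\frac{2}{K} R^{2 / K - 1}$.
\begin{verbatim}
import numpy as np

N, K, samples = 200, 3, 20
rng = np.random.default_rng(0)
radii = []
for _ in range(samples):
    W = np.eye(N, dtype=complex)
    for _ in range(K):
        # GinUE entries with variance 1/N, so the spectrum of a single
        # factor fills the unit disk as N grows.
        W = W @ ((rng.standard_normal((N, N))
                  + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N))
    radii.extend(np.abs(np.linalg.eigvals(W)))

hist, edges = np.histogram(radii, bins=50, range=(0.0, 1.0), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
theory = (2.0 / K) * mid**(2.0 / K - 1.0)   # rho_rad from the text
\end{verbatim}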
\subsubsection{Example 2}
\label{sss:WExample2}
In the setting of Sec.~\ref{sss:TExample2} [arbitrary $J$, all \smash{$L_{j} = 2$}, arbitrary weights \smash{$w_{j 1}$}, \smash{$w_{j 2}$}] with arbitrary \smash{$r_{k}$}, the master equations (\ref{eq:WMasterEquation0}) and (\ref{eq:WdWMasterEquation0}) should be supplemented with the result (\ref{eq:TExample2Derivation01}). These can be solved numerically and are compared to the numerical simulations in Figs.~\ref{fig:ModelWK2} [(d), (e), (f)] and~\ref{fig:ModelWK5} [(d), (e), (f)].
An analytical result can be obtained for the mean spectral domain---it is a disk of radius (\ref{eq:XRExt})-(\ref{eq:XRInt}),
\begin{equation}\label{eq:WExample2Derivation01}
R_{\textrm{ext.}}^{2} = \sigma^{2} \prod_{j = 1}^{J} \left( \left| w_{j 1} \right|^{2} + \left| w_{j 2} \right|^{2} \right) .
\end{equation}
This should be contrasted with the analogous situation for the model $\mathbf{T}$, where the domain is in general an annulus (\ref{eq:TExample2Derivation02a})-(\ref{eq:TExample2Derivation02b}).
\emph{Remark: Divergence at zero.} Combining the reasoning from Secs.~\ref{sss:TExample2} and~\ref{sss:WExample1}, one finds that for the mean spectral domains to touch zero, there must be a number $\mathcal{W}$ of weights obeying \smash{$| w_{j 1} | = | w_{j 2} |$}, in which case the divergences at zero of the level densities (without taking into account possible zero modes) are still described by (\ref{eq:TExample1Derivation07a})-(\ref{eq:TExample1Derivation07b}) but with
\begin{equation}\label{eq:WExample2Derivation02}
d = ( \mathcal{W} + 1 ) \delta_{r_{1} , 1} + \mathcal{R} .
\end{equation}
Notice that this is consistent with (\ref{eq:WExample1Derivation04}), and even suggests how an expression for $d$ might look for a fully general model $\mathbf{W}$.
\subsection{Product $\mathbf{V}$}
\label{ss:ProductV}
\subsubsection{Master equations for $\mathbf{V}$}
\label{sss:MasterEquationsForV}
The mean densities of the eigenvalues or singular values of any product $\mathbf{V}$ (\ref{eq:VDefinition}) can be directly derived from (\ref{eq:WMasterEquation0}) or (\ref{eq:WdWMasterEquation0}) using the multiplication laws (\ref{eq:EigenvaluesOfTheProductXAssumingItIsSquareAndHasRotationallySymmetricMeanSpectrumDerivation01}) [or (\ref{eq:EigenvaluesOfTheProductXAssumingAllXiAreSquareAndHaveRotationallySymmetricMeanSpectraDerivation01})] or (\ref{eq:SingularValuesOfTheProductXViaCyclicShiftsAndMultiplicationLawDerivation06}).
\subsubsection{Example}
\label{sss:VExample1}
I will consider in this paper just one instance of the model $\mathbf{V}$, namely, any length $I$ but all the terms \smash{$\mathbf{W}_{i} = \mathbf{T}_{i} \mathbf{P}_{i}$} of the form of Example 1 above (Sec.~\ref{sss:WExample1}) [i.e., the lengths \smash{$J_{i}$} of \smash{$\mathbf{T}_{i}$} arbitrary, with arbitrary lengths \smash{$L_{i j}$} of \smash{$\mathbf{S}_{i j}$} but with the weights \smash{$w_{i j l} = w_{i j} / \sqrt{L_{i j}}$} independent of $l$; the lengths \smash{$K_{i}$} of \smash{$\mathbf{P}_{i}$}, variances \smash{$\sigma_{i k}^{2}$} and rectangularity ratios \smash{$r_{i k}$} of \smash{$\mathbf{A}_{i k}$} also arbitrary], having moreover arbitrary rectangularity ratios (\ref{eq:sDefinition}), \smash{$s_{i} = N_{i 1} / N_{I , K_{I} + 1} = \prod_{i^{\prime} = i}^{I} r_{i^{\prime} 1}$} (of course, \smash{$s_{1} = 1$} if one investigates the eigenvalues of $\mathbf{V}$).
\emph{Eigenvalues.} The master equation is polynomial of order \smash{$\sum_{i = 1}^{I} ( J_{i} + K_{i} )$},
\begin{equation}\label{eq:VExample1Derivation01}
\frac{R^{2}}{| w |^{2} \sigma^{2}} = \prod_{i = 1}^{I} \left( \frac{\mathfrak{M}}{s_{i}} + 1 \right)^{J_{i} + 1} \frac{\prod_{k = 2}^{K_{i}} \left( \frac{\mathfrak{M}}{s_{i + 1} r_{i k}} + 1 \right)}{\prod_{j = 1}^{J_{i}} \left( \frac{\mathfrak{M}}{s_{i} L_{i j}} + 1 \right)} ,
\end{equation}
where \smash{$\sigma \equiv \prod_{i = 1}^{I} \prod_{k = 2}^{K_{i}} \sigma_{i k}$} and \smash{$w \equiv \prod_{i = 1}^{I} \prod_{j = 1}^{J_{i}} w_{i j}$}, and it is valid inside the disk of radius (\ref{eq:XRExt})-(\ref{eq:XRInt}),
\begin{equation}\label{eq:VExample1Derivation02}
R_{\textrm{ext.}} = | w | \sigma .
\end{equation}
\emph{Singular values.} The master equation is polynomial of order \smash{$\sum_{i = 1}^{I} ( J_{i} + K_{i} ) + 1$},
\begin{equation}
\begin{split}\label{eq:VExample1Derivation03}
\frac{z}{| w |^{2} \sigma^{2}} &= \sqrt{s_{1}} \frac{M + 1}{M} \cdot\\
&\cdot \prod_{i = 1}^{I} \left( \frac{M}{s_{i}} + 1 \right)^{J_{i} + 1} \frac{\prod_{k = 2}^{K_{i}} \left( \frac{M}{s_{i + 1} r_{i k}} + 1 \right)}{\prod_{j = 1}^{J_{i}} \left( \frac{M}{s_{i} L_{i j}} + 1 \right)} .
\end{split}
\end{equation}
Figure~\ref{fig:ModelV} [(a), (b), (c)] illustrates numerical solutions to these polynomial equations and tests them against Monte Carlo simulations, finding perfect agreement.
\emph{Remark 1: Generalized Bures products.} Notice that for $I = 1$, Eqs.~[(\ref{eq:VExample1Derivation01}), (\ref{eq:VExample1Derivation03})] reduce to [(\ref{eq:WExample1Derivation01}), (\ref{eq:WExample1Derivation03})]. Also, setting $I = 1$, \smash{$J_{1} = 1$}, \smash{$K_{1} = 1$}, $w = 1$, $\sigma = 1$, \smash{$s_{1} = r_{1} = 1$}, \smash{$L_{1 1} = 2$} in (\ref{eq:VExample1Derivation03}) yields a cubic equation, whose solution is the original Bures distribution (\ref{eq:BuresDistribution}). Of course, an even broader generalization of the Bures model would be to include distinct weights \smash{$w_{i j l}$} in the \smash{$\mathbf{S}_{i j}$}; I refrain from explicitly writing down the pertinent master equations, even though it is a straightforward step from the above results.
\emph{Remark 2: Divergence at zero.} The level densities stemming from [(\ref{eq:VExample1Derivation01}), (\ref{eq:VExample1Derivation03})] diverge at zero (beyond possible zero modes) again according to (\ref{eq:TExample1Derivation07a})-(\ref{eq:TExample1Derivation07b}) but with
\begin{equation}\label{eq:VExample1Derivation04}
d = \sum_{i = 1}^{I} \left( \left( J_{i} + 1 \right) \delta_{s_{i} , 1} + \mathcal{R}_{i} \right) ,
\end{equation}
where, for short, \smash{$\mathcal{R}_{i} \equiv \# \left\{ 2 \leq k \leq K_{i} : r_{i k} = 1 / s_{i + 1} \right\}$}. Formula (\ref{eq:VExample1Derivation04}) reduces to (\ref{eq:WExample1Derivation04}) for $I = 1$, as it should.
\section{Conclusions}
\label{s:Conclusions}
\subsection{Summary}
\label{ss:Summary}
In this paper, I attempted to show how free probability theory (the multiplication law in the realm of Hermitian random matrices and the addition law for non-Hermitian ones) greatly simplifies otherwise difficult calculations of the mean densities of eigenvalues and singular values of the generalized Bures products $\mathbf{T}$, $\mathbf{W}$, $\mathbf{V}$, which are random matrix models of relevance in quantum information theory.
\subsection{Open problems}
\label{ss:OpenProblems}
As already mentioned, this article is certainly only the initial step (nevertheless important, especially from the point of view of quantum information theory) in learning about the models $\mathbf{T}$, $\mathbf{W}$, $\mathbf{V}$. A major endeavor would be to consider finite matrix dimensions and attempt a computation of the complete JPDF of the eigenvalues of these models. One could also investigate some of their universal properties (cf.~Sec.~\ref{sss:MeanDensitiesOfTheEigenvaluesAndSingularValues}). Actually, inspired by~\cite{BurdaJanikWaclaw2009}, one could check whether the level densities themselves show any sign of universality, just like for $\mathbf{P}$.
A pressing challenge is to prove the three hypotheses which my derivation is founded upon: the $N$-transform conjecture (cf.~Sec.~\ref{sss:RotationalSymmetryAndNTransformConjecture}), the single ring conjecture (cf.~Sec.~\ref{sss:SingleRingConjecture}), and the erfc conjecture (cf.~Sec.~\ref{sss:ErfcConjecture}).
It would also be desirable to understand more about the features of our models important for quantum entanglement theory, e.g., analyzing their von Neumann entropy (\ref{eq:VonNeumannEntropy}).
\begin{acknowledgments}
My work has been partially supported by the Polish Ministry of Science and Higher Education Grant ``Iuventus Plus'' No.~0148/H03/2010/70. I acknowledge the financial support of Clico Ltd., Oleandry 2, 30-063 Krak\'{o}w, Poland, while completing parts of this paper. I am grateful to Karol \.{Z}yczkowski for valuable discussions.
\end{acknowledgments}
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
Distributed machine learning~(ML) is extensively used to solve real-world problems in high performance computing~(HPC) environments. Typically, training data is first collected at a central location like a data center or HPC cluster. Afterwards, the data is carefully distributed across the cluster nodes based on the availability of resources, and training is conducted in a distributed, resource- and data-aware fashion. However, new legislation such as the General Data Protection Regulation~(GDPR)~\cite{gdpr} and the Health Insurance Portability and Accountability Act~(HIPAA)~\cite{hipaa} restricts user data collection. In response to user privacy concerns, federated learning~(FL)~\cite{mcmahan2017fedavg} was proposed to train ML models while maintaining local data privacy~(restricting direct access to private user data).
FL trains a shared model on edge devices (e.g., mobile phones) by aggregating locally trained models on a cloud/central server.
This setting, however, presents three key challenges:
First, the imbalance/non-independent identically distributed~(non-IID) local data easily causes training failure in the decentralized environment.
Second, frequent sharing of model weights between edge devices and the server incurs excessive communication overhead.
Lastly, the increasing demand for computing, memory, and storage of AI models~(e.g., deep neural networks -- DNNs) makes them hard to deploy on resource-limited edge devices.
These challenges suggest that designing efficient FL models and deploying them effectively will be critical in achieving higher performance on future systems.
Recent works in FL and its variants~\cite{karimireddy2020scaffold,wang2020fednova,li2020fedprox} predominantly focus on learning efficiency, i.e., improving training stability and using the minimum number of training rounds to reach the target accuracy. However, these solutions induce extra communication costs, so no single solution addresses the three key issues jointly. Furthermore, the above methods aim to learn a uniform shared model for all the heterogeneous clients, which provides no guarantee of model performance on every client's non-IID local data.
Deep learning models are generally over-parameterized and can easily overfit during local FL updates, in which case only a subset of salient parameters decides the final prediction outputs. It is therefore unnecessary to aggregate all the parameters of the model.
Additionally, existing works~\cite{torrey2010translearn,weiss2016survey_transfer} demonstrate that a well-trained deep learning model can be easily transferred to non-IID datasets.
Therefore, we propose to use transfer learning to address the data heterogeneity issue of Federated Learning.
As such, we train a shared model and transfer its knowledge to heterogeneous clients by keeping its output layers customized on each client.
For instance, computer vision models~(e.g., CNNs) usually consist of an encoder part~(embed the input instance) and a predictor head~(output layers).
In this case, we only share the encoder part in the FL communication process and transfer the encoder's knowledge to local Non-IID data using a customized local predictor.
Although we use the encoder-predictor based model as an example, our idea can be extended to any AI model whose knowledge is transferable~(i.e., we can transfer a deep learning model by keeping its output layers heterogeneous on local clients).
Based on these observations, we propose an efficient FL method through \textbf{S}alient \textbf{P}arameter \textbf{A}ggregation and \textbf{T}ransfer \textbf{L}earning~({\textsc{SPATL}\xspace}). Specifically, we train the model's encoder in a distributed manner through federated learning and transfer its knowledge to each heterogeneous client via locally deployed predictor heads. Additionally, we deploy a pre-trained local salient parameter selection agent to select the encoder's salient parameters based on its topology. Then, we customize the pre-trained agent on each local client by slightly fine-tuning its weights through online reinforcement learning.
We reduce communication overhead by only uploading the selected salient parameters to the aggregating server.
Finally, we leverage a gradient control mechanism to correct the encoder's gradient heterogeneity and guide the gradient towards a generic global direction that suits all clients. This further stabilizes the training process and speeds up model convergence.
In summary, the contributions of {\textsc{SPATL}\xspace} are:
\begin{itemize}
\item {\textsc{SPATL}\xspace} reduces communication overhead in federated learning by introducing salient parameter selection and aggregation for over-parameterized models. This also results in accelerating the model's local inference.
\item \textsc{SPATL}\xspace addresses data heterogeneity in federated learning via knowledge transfer of the trained model to heterogeneous clients.
\item \textsc{SPATL}\xspace utilizes a salient parameter selection agent by leveraging online reinforcement learning for fine-tuning.
\item \textsc{SPATL}\xspace enables scalable federated learning to allow large-scale decentralized training.
\item We evaluate \textsc{SPATL}\xspace on a medium-scale, 100-client setup using the Non-IID Benchmark~\cite{li2022NonIIDBench}. Our results show that, compared to state-of-the-art FL solutions, when optimizing a model, \textsc{SPATL}\xspace reduces communication cost by up to $7.4 \times$, improves model performance by up to 19.86\%, and reduces inference FLOPs by up to 39.7\%.
\end{itemize}
\section{Related work}
\label{sec:relat}
\subsection{Federated Learning}
With increasing concerns over user data privacy, federated learning was proposed in~\cite{mcmahan2017fedavg}, to train a shared model in a distributed manner without direct access to private data. The algorithm FedAvg~\cite{mcmahan2017fedavg} is simple and quite robust in many practical settings. However, the local updates may lead to divergence due to heterogeneity in the network, as demonstrated in previous works~\cite{hsu2019measuring,karimireddy2020scaffold,li2020convergence_noniid}. To tackle these issues, numerous variants have been proposed~\cite{li2020fedprox,wang2020fednova,karimireddy2020scaffold}. For example, FedProx~\cite{li2020fedprox} adds a proximal term to the local loss, which helps restrict deviations between the current local model and the global model. FedNova~\cite{wang2020fednova} introduces weight modification to avoid gradient biases by normalizing and scaling the local updates. SCAFFOLD~\cite{karimireddy2020scaffold} corrects update direction by maintaining drift variates, which are used to estimate the overall update direction of the server model.
Nevertheless, these variants incur extra communication overhead to maintain stable training. Notably, in FedNova and SCAFFOLD, the average communication cost in each communication round is approximately $2 \times$ compared to FedAvg.
Numerous research papers have addressed data heterogeneity~(i.e. non-IID data among local clients) in FL ~\cite{zhao2018federated,hsieh2020quagmire,lim2020federated,zhang_fedufo,gong_ensemblefl,caldarola_graphfl,sun_soteria,horvath2021fjord}, such as adjusting classifier~\cite{luo2021no}, improving client sampling fairness~\cite{nishio2019clientselection}, adapting optimization~\cite{zhang2021adaptive_noniid,han2020adaptive_FLquantization,reddi2021fedopt,yu2021adaptive}, correcting local updates~\cite{karimireddy2020scaffold,li_model-contrastive,wang2020fednova}, using a tiering mechanism to synchronously update local model parameters within tiers and asynchronously update the global model~\cite{chai2021fedat}, generating client models from a central hypernetwork model~\cite{Shamsian2021pFedHN}, dynamically measuring local model divergence and adaptively adjusting to optimize hyper-parameters~\cite{zhuang2022fedema}, and using data-free knowledge distillation approach to address heterogeneous FL~\cite{zhu2021fedgen}.
Furthermore, federated learning has been extended to real-life applications~\cite{liu_feddg,guo_multi-institutional}. One promising solution is personalized federated
learning~\cite{dinh2021personalized_moreau,huang2021personalized_crosssilo,zhang2021personalized_modelopt,fallah2020personalized_meta,hanzely2020lowerbound_personazlied,ozkara2021quped}, which tries to learn personalized local models among clients to address data heterogeneity. These works, however, fail to address the extra communication overhead.
However, very few works, such as~\cite{guo2020VeriFL_communication, wu2022knowledgeD_communication}, focus on addressing communication overhead in FL. They use either knowledge distillation or an aggregation protocol, and the resulting reduction in communication overhead is not significant.
Additionally, benchmark federated learning settings have been introduced to better evaluate the FL algorithms. FL benchmark LEAF~\cite{caldas2019leaf} provides benchmark settings for learning in FL, with applications including federated learning, multi-task learning, meta-learning, and on-device learning.
The Non-IID benchmark~\cite{li2022NonIIDBench} is an experimental benchmark that provides Non-IID splittings of CIFAR-10 and standard implementations of SOTA methods.
Framework Flower~\cite{beutel2020flower} provides FL SOTA baselines and is a collection of organized scripts used to reproduce results from well-known publications or benchmarks.
IBM Federated Learning~\cite{ibmfl2020ibm} provides a basic fabric for FL on which advanced features can be added. It is not dependent on any specific machine learning framework and supports different learning topologies, e.g., a shared aggregator and protocols. It is meant to provide a solid basis for federated learning that enables a large variety of federated learning models, topologies, learning models, etc., particularly in enterprise and hybrid-Cloud settings.
\subsection{Salient Parameter Selection}
Since modern AI models are typically over-parameterized, only a subset of parameters determine practical performance.
Several network pruning methods have been proposed to address this issue. These methods have achieved outstanding results and are proven techniques to drastically shrink model sizes. However, traditional pruning methods~\cite{gao_network_2021,liu_learnable_2021,wang_convolutional_2021} require time-consuming re-training and re-evaluating to produce a potential salient parameter selection policy. Recently, AutoML pruning algorithms~\cite{li2020eagleeye,chin2020legr} offered state-of-the-art~(SoTA) results with higher versatility. In particular, reinforcement learning~(RL)-based methods~\cite{yu2021gnnrl,yu2021gnnrl1,he2018amc,yu2021agmc}, which model the neural network as graphs and use GNN-based RL agent to search for pruning policy present impressive results.
However, AutoML methods need costly computation to train a smart agent, which is impractical to deploy on resource-limited edge FL devices.
The enormous computational cost and effort of network pruning makes it difficult to directly apply in federated learning.
To overcome the challenges of previous salient parameter selection methods, and inspired by RL-based AutoML pruning methods, we utilize a salient parameter selection RL agent pre-trained on the network pruning task. Then, with minimal fine-tuning, we obtain an efficient salient parameter selector with negligible computational burden.
\input{figure/f_overview}
\section{Methodology}
\label{sec:metho}
\textsc{SPATL}\xspace consists of three main components: knowledge transfer learning, salient parameter selection agent, and gradient control federated learning. Figure~\ref{fig:overview} shows the \textsc{SPATL}\xspace overview.
Unlike mainstream FL solutions, which attempt to train the entire deep learning model, \textsc{SPATL}\xspace only trains the encoder part of the model in a distributed manner and transfers the knowledge to heterogeneous clients.
In each round of federated learning, the client first downloads the encoder from the cloud aggregator~(\ding{202} in Figure~\ref{fig:overview}) and transfers its knowledge using a local predictor through local updates~(\ding{203} in Figure~\ref{fig:overview}). After local updates, the salient parameter selection agent will evaluate the training results of the current model based on the model performance~(\ding{204} in Figure~\ref{fig:overview}), and finally selected clients send the salient parameters to the server~(\ding{205} in Figure~\ref{fig:overview}). Additionally, both clients and the server maintain a gradient control variate to correct the heterogeneous gradients, in order to stabilize and smoothen the training process.
\subsection{Heterogeneous Knowledge Transfer Learning}
Inspired by transfer learning~\cite{torrey2010translearn}, \textsc{SPATL}\xspace aims to train an encoder in FL setting and address the heterogeneity issue through transferring the encoder's knowledge to heterogeneous clients.
Formally, we formulate our deep learning model as an encoder $E(w_e, x)$ and a predictor $P(w_p, e)$, where $w_e$ and $w_p$ are encoder and predictor parameters respectively, $x$ is an input instance to the encoder and $e$ is an input instance to the predictor (or embedding).
\textsc{SPATL}\xspace shares the encoder $E(w_e, x)$ with the cloud aggregator, while the predictor $P^k(w^k_p, e)$ for the $k^{th}$ client is kept private on the client.
The forward propagation of the model in the local client $k$ is formulated as follows:
\begin{equation}
e = E(w_e, x) ,
\end{equation}
\begin{equation}
\hat{y} = P^k(w_p^k, e)
\end{equation}
\noindent During local updates, the selected $k^{th}$ client first downloads the shared encoder parameter, $w_e$, from the cloud server and optimizes it with the local predictor head, $w^k_p$, through back propagation. Equation~\ref{eq:loss1} shows the optimization function.
\begin{equation}
\label{eq:loss1}
\min_{w_e, w^k_p} \mathcal{L}(w_e,w^k_p) = \frac{1}{n_i}\sum l(w_e,w^k_p,x_i,y_i)
\end{equation}
Here, $l$ refers to the loss when fitting the label $y_i$ for data $x_i$, and $n_i$ is a normalization constant (typically the number of local training samples).
In federated learning, not all clients are involved in communication during each round. In fact, there is a possibility a client might never be selected for any communication round. Before deploying the trained encoder on such a client, the client will download the encoder from the aggregator and apply local updates to its local predictor only. After that, both encoder and predictor can be used for that client. Equation~\ref{eq:loss2} shows the optimization function.
\begin{equation}
\label{eq:loss2}
\min_{w^k_p} \mathcal{L}(w^k_p) = \frac{1}{n_i}\sum l(w_e,w^k_p,x_i,y_i)
\end{equation}
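To make the split between the shared encoder and the private predictor concrete, a minimal PyTorch sketch of the two local-update modes, (\ref{eq:loss1}) and (\ref{eq:loss2}), is given below; the module architectures, optimizer, and loss are illustrative assumptions, not the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Shared part E(w_e, x); its parameters are aggregated by the server.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    # Private head P^k(w_p^k, e); it never leaves client k.
    def __init__(self, dim=16, classes=10):
        super().__init__()
        self.fc = nn.Linear(dim, classes)
    def forward(self, e):
        return self.fc(e)

def local_update(encoder, predictor, loader, train_encoder=True, lr=0.01):
    # train_encoder=True optimizes encoder and predictor jointly;
    # train_encoder=False steps the predictor only, e.g. for clients
    # never sampled during FL.
    params = list(predictor.parameters())
    if train_encoder:
        params += list(encoder.parameters())
    opt = torch.optim.SGD(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in loader:
        opt.zero_grad()
        loss_fn(predictor(encoder(x)), y).backward()
        opt.step()
\end{verbatim}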
\subsection{RL-based Topology-Aware Salient Parameter Selection}
One key issue of FL is the high communication overhead caused by the frequent sharing of parameters between clients and the cloud aggregator server. Additionally, we observed that deep learning models~(e.g., VGG~\cite{simonyan2015vgg} and ResNet~\cite{he2016resnet}) are usually bulky and over-parameterized. As such, only a subset of salient parameters decide the final output.
Therefore, in order to reduce the communication cost, we implemented a local salient parameter selection agent for selecting salient parameters for communication.
Figure~\ref{fig:pram_selction} shows the idea of a salient parameter agent.
Specifically, inspired by topology-aware network pruning task~\cite{yu2021agmc,yu2021gnnrl}, we model the neural network~(NN) as a simplified computational graph and use it to represent the NN's states. Since NNs are essentially computational graphs, their parameters and operations correspond to nodes and edges of the computational graph.
We then introduced the graph neural network~ (GNN)-based reinforcement learning~(RL) agent, which takes the graph as input~(RL's environment states) and produces a parameter selection policy from the topology through GNN embedding. Additionally, the RL agent uses the selected sub-model's accuracy as reward to guide its search for the optimal pruning policy.
Training a smart agent directly through RL, however, is costly and impractical to deploy on the edge.
To address this issue, we first pre-train the salient parameter agent in the network pruning task, and then customize the pre-trained agent on each local client by slightly fine-tuning its weights through online reinforcement learning~(detailed hyper-parameter setting in section~\ref{sec:eval}).
\input{figure/f_pram_selct_agent}
\input{ alg/a_ppo}
\input{ figure/f_aggregate}
\subsubsection{Reinforcement Learning Task Definition}
Defining environment states, action space, reward function, and RL policy are essential for specifying an RL task. In this section, we will discuss these components in more detail.
Algorithm~\ref{alg:ppo} shows the RL search process.
At each search step, we first initialize the target encoder $\hat{E}(\hat{w_e})$ with the input encoder ${E}({w_e})$ and convert it to a graph. If the size of $\hat{E}$ does not satisfy the constraints, the proximal policy optimization~(PPO)~\cite{schulman2017ppo} RL agent produces a parameter selection policy $a$~(i.e., the RL action) to update $\hat{E}$. If $\hat{E}$ satisfies the size constraint, the RL agent uses its accuracy as the reward to update the policy. Finally, the parameters $w$ and the corresponding parameter index $idx$ of the target encoder $\hat{E}$ with the best reward are uploaded to the cloud server.
\noindent\textbf{Environment States.}
We use a simplified computational graph $G(v,e)$ to represent the NN model~\cite{yu2021agmc}. In a computational graph, nodes represent hidden features~(feature maps), and edges represent primitive operations~(such as `add', `minus', and `product'). Since the NN model involves billions of operations, it's unrealistic to use primitive operations. Instead, we simplified the computational graph by replacing the primitive operations with machine learning operations~(e.g., conv 3x3, Relu, etc.).
\noindent\textbf{Action Space.} The actions are the sparsity ratios for encoder's hidden layers. The action space is defined as $a\in [0,1]^{N }$, where $N$ is the number of encoder's hidden layers.
The actor network in the RL agent projects the NN's computational graph to an action vector, as shown in equations~\ref{eq:graphencoder} and~\ref{eq:mlp}.
\begin{equation}
g = GraphEncoder(G) ,
\label{eq:graphencoder}
\end{equation}
\begin{equation}
a = mlp(g)
\label{eq:mlp}
\end{equation}
Here, $G$ is the environment state, $g$ is the graph representation, and MLP is a multi-layer perceptron neural network. The graph encoder learns the topology embedding, and the MLP projects the embedding into hidden layers' sparsity ratios.
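As a rough illustration of equations (\ref{eq:graphencoder}) and (\ref{eq:mlp}), the sketch below replaces the full GNN with two rounds of mean-neighbor message passing over the simplified computational graph; the layer sizes and the pooling choice are assumptions, not the exact architecture.
\begin{verbatim}
import torch
import torch.nn as nn

class TinyGraphActor(nn.Module):
    # Simplified stand-in for g = GraphEncoder(G), a = mlp(g): message
    # passing, mean pooling to a graph vector g, then an MLP head that
    # emits one sparsity ratio in [0, 1] per hidden layer.
    def __init__(self, feat_dim, n_layers, hidden=64):
        super().__init__()
        self.embed = nn.Linear(feat_dim, hidden)
        self.update = nn.Linear(hidden, hidden)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_layers), nn.Sigmoid())

    def forward(self, node_feats, adj):
        # node_feats: (nodes, feat_dim); adj: row-normalized adjacency
        # of the simplified computational graph.
        h = torch.relu(self.embed(node_feats))
        for _ in range(2):
            h = torch.relu(self.update(adj @ h))  # mean over neighbors
        return self.head(h.mean(dim=0))           # action a in [0, 1]^N
\end{verbatim}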
\noindent\textbf{Reward Function.} The reward is the accuracy~($\times 100$) of the selected sub-network on the validation dataset.
\begin{equation}
Reward = Accuracy\times 100
\label{eq:reward}
\end{equation}
\subsubsection{Policy Updating}
The RL agent is updated end-to-end through the PPO algorithm. The RL agent trains on the local clients through continual online-learning over each FL round.
Equation~\ref{eq:1} shows the objective function we used for the PPO update policy.
\begin{equation}
\label{eq:1}
L(\theta) = \hat{\mathbb{E}}_t \left[ \min\left( r_t(\theta)\hat{A}_t,\; \mathrm{clip}\left( r_t(\theta), 1-\epsilon, 1+\epsilon \right) \hat{A}_t \right) \right]
\end{equation}
Here, $\theta$ is the policy parameter~(the actor-critic network's parameter), $\hat{\mathbb{E}}_t$ denotes the empirical expectation over time steps, $r_t(\theta)$ is the ratio of the action probability under the new and old policies, $\hat{A}_t$ is the estimated advantage at time $t$, and $\epsilon$ is a clip hyper-parameter, usually 0.1 or 0.2.
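In code, the clipped surrogate of (\ref{eq:1}) amounts to a few lines; the sketch below (assuming stored old log-probabilities and advantage estimates) returns the quantity to minimize, i.e., $-L(\theta)$.
\begin{verbatim}
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    # r_t(theta) = pi_new(a_t | s_t) / pi_old(a_t | s_t)
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
\end{verbatim}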
\subsection{Generic Parameter Gradient Controlled Federated Learning}
Inspired by stochastic controlled averaging federated learning~\cite{karimireddy2020scaffold}, we propose a generic parameter gradient controlled federated learning to correct the heterogeneous gradient.
Due to client heterogeneity, local gradient update directions will move towards local optima and may diverge across all clients. To correct overall gradient divergence by estimating gradient update directions, we maintain control variates both on clients and the cloud aggregator.
However, controlling the entire model's gradients will hurt the local model's performance on non-IID data. In order to compensate for performance loss, \textsc{SPATL}\xspace only corrects the generic parameter's gradients~(i.e., the encoder's gradients) while maintaining a heterogeneous predictor. Specifically in equation~\ref{eq:control_variates}, during local updates of the encoder, we correct gradient drift by adding the estimate gradient difference $(c_g - c_l)$.
\begin{equation}
\label{eq:control_variates}
w_e \leftarrow w_e - \eta \left( \nabla \mathcal{L}(w_e,w_p;b) + c_g - c_l \right)
\end{equation}
Here, control variate $c_g$ is the estimate of the global gradient direction maintained on the server side, and $c_l$ is the estimate of the update direction for local heterogeneous data maintained on each client.
In each round of communication, the $c_l$ is updated as equation~\ref{eq:cl_update}:
\begin{equation}
\label{eq:cl_update}
c_l^* \leftarrow c_l - c_g + \frac{1}{E\eta}(w_g - w_e)
\end{equation}
Here, $E$ is the number of local epochs, and $\eta$ is the local learning rate, while $c_g$ is updated by equation~\ref{eq:cg_update}:
\begin{equation}
\label{eq:cg_update}
c_g \leftarrow c_g + \frac{1}{|N|}\sum_{k \in K} \Delta c^k
\end{equation}
Here, $\Delta c^k$ is the difference between new and old local control variates $c_l$ of client $k$, $N$ is the set of clients, and $K$ is the set of selected clients.
Algorithm~\ref{alg:aggregate} shows \textsc{SPATL}\xspace with gradient controlled FL.
In each update round, the client downloads the global encoder's parameter $w_g$ and update direction $c_g$ from server, and performs local updates. When updating the local encoder parameter $w_e$, $(c_g - c_l)$ is applied to correct the gradient drift. The predictor head's gradient remains heterogeneous. Before uploading, the local control variate $c_l$ is updated by estimating the gradient drift.
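The following sketch summarizes the client-side arithmetic of this procedure, i.e., the corrected step (\ref{eq:control_variates}) and the control-variate refresh (\ref{eq:cl_update}); the flattened-parameter view and the \texttt{grad\_fn} helper are assumptions made for brevity.
\begin{verbatim}
def controlled_local_update(w_e, w_g, c_g, c_l, eta, E, loader, grad_fn):
    # w_e, w_g, c_g, c_l: flattened encoder parameters and control
    # variates; grad_fn(w, batch) is an assumed helper returning the
    # encoder gradient of the local loss on one batch.
    for _ in range(E):                      # E local epochs
        for batch in loader:
            w_e = w_e - eta * (grad_fn(w_e, batch) + c_g - c_l)
    c_l_new = c_l - c_g + (w_g - w_e) / (E * eta)
    return w_e, c_l_new, c_l_new - c_l      # last item is Delta c^k
\end{verbatim}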
\input{ alg/a_agg}
\input{ figure/f_train_effi}
\subsubsection{Aggregation with Salient Parameters}
Due to the non-IID local training data in heterogeneous clients, salient parameter selection policy varies among the heterogeneous clients after local updates. Since the selected salient parameters have different matrix sizes and/or dimensions, directly aggregating them will cause a matrix dimension mismatch. To prevent this, as Figure~\ref{fig:pram_aggre} shows, we only aggregate partial parameters according to the current client's salient parameter index on the server side.
Equation \ref{eq:aggre} shows the mathematical representation of this process.
\begin{equation}
\label{eq:aggre}
w_g \leftarrow w_g + \eta \left( \frac{1}{|K|} \sum_{k \in K} \left( w^{k} - w_g[i^{k},:,:] \right) \right)
\end{equation}
Here, $w_g$ is the global parameter, $w^k$ is the $k^{th}$ client's salient parameters, $i^k$ is the index of $w^k$ within the original weights, and $\eta$ is the update step size.
By only aggregating the salient parameter $w^k$ and its corresponding index $i^k$~(negligible burdens), we can significantly reduce the communication overhead and avoid matrix dimension mismatches.
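A minimal sketch of this index-guided aggregation, (\ref{eq:aggre}), follows; the flattened per-row indexing is an illustrative simplification of how the salient-parameter indices $i^k$ select the rows of $w_g$ to update.
\begin{verbatim}
import torch

def aggregate_salient(w_g, client_updates, eta=1.0):
    # client_updates: list of (idx, w_k) pairs, where idx is a 1-D index
    # tensor i^k and w_k holds client k's salient parameters, so that
    # w_k.shape == w_g[idx].shape and no dimension mismatch can occur.
    delta = torch.zeros_like(w_g)
    for idx, w_k in client_updates:
        delta[idx] += w_k - w_g[idx]
    return w_g + eta * delta / len(client_updates)
\end{verbatim}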
\section{Experiment}
\label{sec:eval}
We conducted extensive experiments to examine \textsc{SPATL}\xspace's performance. Overall, we divided our experiments into three categories: learning efficiency, communication cost, and inference acceleration. We also performed an ablation study and compared \textsc{SPATL}\xspace with state-of-the-art FL algorithms.
\input{figure/f_converge_acc}
\subsection{Implementation and Hyper-parameter Setting}
\textbf{Datasets and Models.}
The experiments are conducted with FEMNIST~\cite{caldas2019leaf} and CIFAR-10~\cite{krizhevsky2009cifar}. In FEMNIST, we follow the LEAF benchmark federated learning setting~\cite{caldas2019leaf}. In CIFAR-10, we use the Non-IID benchmark federated learning setting~\cite{li2022NonIIDBench}. Each client is allocated a proportion of the samples of each label according to a Dirichlet distribution~(with concentration $\alpha$). Specifically, we sample $p_k \sim Dir_N(\alpha)$ and allocate a $p_{k,j}$ proportion of the instances of label $k$ to client $j$. Here we choose $\alpha = 0.1$.
The deep learning models we use in the experiment are VGG-11~\cite{simonyan2015vgg} and ResNet-20/32~\cite{he2016resnet}.
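For reference, a short sketch of this Dirichlet label-skew partition (the function and variable names are ours, not the benchmark's API) is:
\begin{verbatim}
import numpy as np

def dirichlet_split(labels, n_clients, alpha=0.1, seed=0):
    # For each label k, draw p_k ~ Dir_N(alpha) and assign a p_{k,j}
    # share of that label's samples to client j.
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for k in np.unique(labels):
        idx = rng.permutation(np.where(labels == k)[0])
        p = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for j, part in enumerate(np.split(idx, cuts)):
            client_idx[j].extend(part.tolist())
    return client_idx
\end{verbatim}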
\textbf{Federated Learning Setting.}
We follow the Non-IID benchmark federated learning setting and implementation ~\cite{li2022NonIIDBench}.
In \textsc{SPATL}\xspace, the models in each client are different. Thus, we evaluate the average performance of models in heterogeneous clients.
We experiment with different numbers of clients and sample ratios~(the percentage of participating clients in each round), ranging from 10 to 100 clients and sample ratios from 0.4 to 1.
During local updates, each client updates 10 rounds locally. The detailed setting can be found in supplementary materials.
\textbf{RL Agent Settings.}
The RL agent is pre-trained on ResNet-56 via a network pruning task. It is fine-tuned during the first 10 communication rounds, with 20 epochs in each updating round. We only update the MLP's~(i.e., the output layers of the RL policy network) parameters when fine-tuning. We use the PPO~\cite{schulman2017ppo} RL policy; the discount factor is $\gamma = 0.99$, the clip parameter is $0.2$, and the standard deviation of actions is $0.5$. The Adam optimizer is applied to update the RL agent, with learning rate $3 \times 10^{-4}$ and $\beta = (0.9, 0.999)$.
\textbf{Baseline.}
We compare \textsc{SPATL}\xspace with the state-of-the-art FL algorithms, such as FedNova~\cite{wang2020fednova}, FedAvg~\cite{mcmahan2017fedavg}, FedProx~\cite{li2020fedprox}, and SCAFFOLD~\cite{karimireddy2020scaffold}.
\textbf{Experimental Setup.}
Our experimental setup ranges from 10 clients to 100 clients, a scale on par with existing state-of-the-art works.
To enable better comparison and to fully investigate optimization ability, some of our experiments~(e.g., communication efficiency) are set at a larger scale than the experiments of many SOTA methods~(such as FedNova~\cite{wang2020fednova}, FedAvg~\cite{mcmahan2017fedavg}, FedProx~\cite{li2020fedprox}, and SCAFFOLD~\cite{karimireddy2020scaffold}).
Recent FL works, such as FedAT~\cite{chai2021fedat}, pFedHN~\cite{Shamsian2021pFedHN}, QuPeD~\cite{ozkara2021quped}, FedEMA~\cite{zhuang2022fedema}, and FedGen~\cite{zhu2021fedgen}, evaluate at the same scale as our experiments.
Since FL is an optimization algorithm, we mainly investigate training stability and robustness; larger experiment scales show a similar trend.
\textbf{FL Benchmark.}
We use two standard FL benchmark settings: LEAF~\cite{caldas2019leaf} and Non-IID benchmark~\cite{li2022NonIIDBench}.
LEAF~\cite{caldas2019leaf} provides benchmark settings for learning in FL, with applications including federated learning, multi-task learning, meta-learning, and on-device learning. We use the LEAF to split the FEMNIST into Non-IID distributions.
The Non-IID benchmark~\cite{li2022NonIIDBench} is an experimental benchmark that provides Non-IID splittings of CIFAR-10 and standard implementations of SOTA methods. Our implementations of FedAvg, FedProx, SCAFFOLD, and FedNova are based on the Non-IID benchmark.
\subsection{Learning Efficiency}
In this section, we evaluate the learning efficiency of \textsc{SPATL}\xspace by investigating the relationship between communication rounds and the average accuracy of the model.
Since \textsc{SPATL}\xspace learns a shared encoder and each local client has a heterogeneous predictor, the model's performance differs among clients. Instead of evaluating a global test accuracy on the server side, we allocate each client a local non-IID training dataset and a validation dataset, and evaluate the model's top-1 accuracy~(i.e., the highest-probability prediction must exactly match the expected answer) among heterogeneous clients.
We train VGG-11~\cite{simonyan2015vgg} and ResNet-20/32~\cite{he2016resnet} on CIFAR-10~\cite{krizhevsky2009cifar}, and 2-layer CNN on FEMNIST~\cite{caldas2019leaf} separately until the models converge. We then compare model performance results of \textsc{SPATL}\xspace with state-of-the-arts~(SoTAs)~(i.e., FedNova~\cite{wang2020fednova}, FedAvg~\cite{mcmahan2017fedavg}, FedProx~\cite{li2020fedprox}, and SCAFFOLD~\cite{karimireddy2020scaffold}).
Figure~\ref{fig:vgg_cifar} shows experiments in the 10 clients setting, where we sample all 10 clients for aggregation. The effect of heterogeneity is not significant compared to a real-world scale, and \textsc{SPATL}\xspace moderately outperforms the SoTAs on CIFAR-10. Results for the 2-layer CNN model trained on FEMNIST, however, are an exception; in this case the model trained by \textsc{SPATL}\xspace has slightly lower accuracy than SoTAs. We suspect this has to do with the small size of the 2-layer CNN and the large data quantity. In this case, our ``model over-parameterization'' assumption no longer holds, making it hard for the salient parameter selection to fit the training data. To verify our analysis, we increase the complexity of our experiments and conduct further experiments on larger-scale FL settings, increasing the number of clients to 30, 50, and 100 with different sample ratios.
As heterogeneity rises with the increase in number of clients, \textsc{SPATL}\xspace demonstrates superiority in coping with data heterogeneity.
Experiment results in Figure~\ref{fig:vgg_cifar} show that for more complex FL settings, \textsc{SPATL}\xspace outperforms SoTAs with larger margins.
In the 30 clients FL setting\footnote{In Fig.~\ref{fig:vgg_cifar} SCAFFOLD\cite{karimireddy2020scaffold} diverges with gradient explosion in Non-IID benchmark settings~\cite{li2022NonIIDBench} when there are more than 10 clients.},
for ResNet-20, ResNet-32, and VGG-11, \textsc{SPATL}\xspace outperforms the SoTA FL methods. Notably, \textsc{SPATL}\xspace yields a better convergence accuracy and a substantially more stable training process.
In the 50 clients and 100 clients settings, the experiment improvements become more significant, as \textsc{SPATL}\xspace outperforms the SoTAs by a larger margin.
Moreover, we noticed that the gradient control based method SCAFFOLD~\cite{karimireddy2020scaffold} suffers from gradient explosion issues when the number of clients increases. Even though we set a tiny learning rate, the explosion problem persists.
Other researchers have faced the same issue when reproducing SCAFFOLD, and our results are consistent with finding 6 in~\cite{li2022NonIIDBench}.
We next investigate the models' convergence accuracy. Figure~\ref{fig:conv_acc} shows the convergence accuracy comparison with SoTAs. \textsc{SPATL}\xspace surpasses SoTAs in all the FL settings and achieves higher convergence accuracy. Again, the superiority of \textsc{SPATL}\xspace grows progressively with the heterogeneity of FL settings. For instance, in ResNet-20 with 30 clients, \textsc{SPATL}\xspace outperforms SoTAs only in terms of final convergence accuracy. However, when we increase to 50 heterogeneous clients, \textsc{SPATL}\xspace achieves 42.54\% final accuracy, which is around 10\% higher than FedAvg and FedProx~(32.71\% and 32.43\% accuracy, respectively). Additionally, it is worth mentioning that, in the 100 clients experiment setting, we compare the accuracy within 200 rounds, since all the baselines except \textsc{SPATL}\xspace diverge within 200 rounds. This further demonstrates that \textsc{SPATL}\xspace optimizes and improves the quality of the model progressively and stably.
\input{ figure/f_local_acc}
Trained model performance on heterogeneous local clients is also an essential indicator when evaluating FL algorithms with regard to deploying AI models on the edge.
Since edge devices have various application scenarios and heterogeneous input data, models will likely exhibit divergence on such devices.
We further evaluate the robustness and feasibility of FL methods on distributed AI by testing local model accuracies on all clients.
Figure~\ref{fig:local_acc} shows each client's ResNet-20 accuracy on CIFAR-10 after training is complete~(10 clients in total, trained by \textsc{SPATL}\xspace and SCAFFOLD for 100 rounds). The model trained by \textsc{SPATL}\xspace produces better performance across all clients. In particular, the edge model trained by \textsc{SPATL}\xspace performs more consistently across clients, whereas models trained by baselines exhibit more variance. For instance, all the edge models trained by \textsc{SPATL}\xspace have similar accuracy.
Since \textsc{SPATL}\xspace uses heterogeneous predictors to transfer the encoder's knowledge, the model is more robust when dealing with non-IID data. However, our baseline methods~(such as SCAFFOLD~\cite{karimireddy2020scaffold}) share the entire model when training on non-IID clients, leading to a variance in model performance on non-IID clients and causing poor performance on some clients.
\subsection{Communication Efficiency}
\input{table/t_training_cost}
A key contribution of \textsc{SPATL}\xspace that makes it stand out among SoTAs is its significant reduction of communication overhead due to salient parameter selection. Although
SoTAs, like FedNova~\cite{wang2020fednova} and SCAFFOLD~\cite{karimireddy2020scaffold}, achieve stable training via gradient control or gradient normalization variates, their average communication cost doubles compared to FedAvg~\cite{mcmahan2017fedavg} as a result of sharing the extra gradient information.
We present two experiment settings to evaluate model communication efficiencies. First, we trained all models to a target accuracy and calculated communication cost. Second, we trained all models to converge and calculated the communication cost of each FL algorithm.
The communication cost is calculated as:
\begin{equation}
\# \text{Rounds} \times \text{Client's round cost} \times \# \text{Sampled Clients}
\end{equation}
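As a concrete (purely illustrative) instance of this bookkeeping, with hypothetical per-round payloads:
\begin{verbatim}
# Hypothetical numbers, for illustration only.
rounds, sampled_clients = 100, 10
full_model_mb = 4.0           # e.g., a ResNet-20-sized payload
salient_fraction = 0.5        # share of parameters actually uploaded
index_overhead_mb = 0.01      # cost of shipping the indices i^k

cost_full = rounds * full_model_mb * sampled_clients
cost_salient = rounds * sampled_clients * (
    full_model_mb * salient_fraction + index_overhead_mb)
\end{verbatim}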
\input{ figure/f_train_rounds}
Table~\ref{tab:com_cost} details the communication cost~(with FedAvg~\cite{mcmahan2017fedavg} as the benchmark) when training models to a target accuracy.
\textsc{SPATL}\xspace remarkably outperforms SoTAs. In ResNet-20, \textsc{SPATL}\xspace reduces communication cost by up to $7.4\times$~(FedNova costs 8.12GB while \textsc{SPATL}\xspace only costs 1.1GB). Moreover, \textsc{SPATL}\xspace reduces communication by up to 102GB when training VGG-11 compared to FedNova.
There are two main benefits of \textsc{SPATL}\xspace in reducing communication overhead.
First, training in \textsc{SPATL}\xspace is much more efficient and stable. Our experiments show that \textsc{SPATL}\xspace requires fewer communication rounds to achieve target accuracy.
For example, in VGG-11, FedProx uses 296 training rounds while \textsc{SPATL}\xspace uses 250 fewer rounds to achieve the same target accuracy. This significantly reduces the communication cost.
We provide a more comprehensive comparison of the number of rounds different models take to achieve target accuracy. As Figure~\ref{fig:train_rounds} shows, we try different FL settings, and \textsc{SPATL}\xspace consistently requires fewer rounds than SoTAs in most of them~(the exception is ResNet-20 with 10 clients and an 80\% target accuracy, where \textsc{SPATL}\xspace requires 3 rounds more than SCAFFOLD). However, as shown in Table~\ref{tab:com_cost}, the total communication cost of \textsc{SPATL}\xspace is significantly less than all others.
Second, since the salient parameter selection agent selectively uploads partial parameters, \textsc{SPATL}\xspace significantly reduces communication cost. As shown in Table~\ref{tab:com_cost}, compared to the gradient control based methods, such as SCAFFOLD~\cite{karimireddy2020scaffold} and FedNova~\cite{wang2020fednova}, \textsc{SPATL}\xspace remarkably reduces the per-round communication cost. For instance, \textsc{SPATL}\xspace incurs a $2\times$ lower round cost in ResNet-20 compared to traditional FL, such as FedAvg~\cite{mcmahan2017fedavg}. Even when factoring in the gradient control information, salient parameter selection enables \textsc{SPATL}\xspace to drop unnecessary communication burdens, keeping its round costs comparable to FedAvg.
Furthermore, we investigated the convergence accuracy of models using SoTAs and \textsc{SPATL}\xspace optimizations.
We consider a performance upper bound by creating a hypothetical centralized case where images are heterogeneously distributed across 30, 50, and 100 clients.
Table~\ref{tab:com_cost2} shows the results of training the models to convergence. Compared to FedAvg~\cite{mcmahan2017fedavg}, gradient control based FL algorithms have higher accuracy at the expense of communication efficiency. For instance, FedNova~\cite{wang2020fednova} achieves slightly higher accuracy. However, their communication budget increases by more than $2 \times$. Models optimized by \textsc{SPATL}\xspace achieve significantly higher accuracy than all other baselines. Especially on VGG-11 with 50 clients, \textsc{SPATL}\xspace achieves 17.8\%, 19.86\%, and 17.62\% higher accuracy than FedAvg, FedNova, and FedProx respectively.
Particularly, unlike FedNova, which sacrifices communication cost for higher accuracy, \textsc{SPATL}\xspace takes advantage of salient parameter selection to achieve higher accuracy with relatively negligible communication overhead.
For instance, when training the ResNet-20 for 30 and 50 clients, \textsc{SPATL}\xspace achieves the best accuracy with the lowest communication cost. The results further show that \textsc{SPATL}\xspace remarkably reduces the communication cost and has a higher capacity to train models for better performance.
\input{table/t_converge}
\subsection{Inference Acceleration}
\input{ table/t_inference}
In this section, we evaluate the inference acceleration of \textsc{SPATL}\xspace.
Local inference is a crucial measure for deploying AI models on the edge since edge devices have limited computing power, and edge applications~(e.g., self-driving car) are inference sensitive.
In \textsc{SPATL}\xspace, when the salient parameter selection agent selects salient parameters, it prunes the model as well.
For a fair evaluation of inference, instead of recording the actual run time~(run times may vary across platforms) of pruned models, we calculated the FLOPs~(the number of floating-point operations).
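For intuition, FLOPs can be estimated per layer; the sketch below uses the standard multiply-accumulate count for a convolution and hypothetical layer sizes.
\begin{verbatim}
def conv_flops(c_in, c_out, k, h_out, w_out):
    # 2 x multiply-accumulate count of one k x k convolution layer.
    return 2 * c_in * c_out * k * k * h_out * w_out

# Hypothetical example: keeping 70% of the channels on both sides of a
# conv layer shrinks its FLOPs roughly quadratically.
base = conv_flops(64, 64, 3, 32, 32)
pruned = conv_flops(int(64 * 0.7), int(64 * 0.7), 3, 32, 32)
reduction = 1.0 - pruned / base    # about 0.53 for these numbers
\end{verbatim}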
Table~\ref{tab:inference} shows the inference acceleration status after training is complete. \textsc{SPATL}\xspace notably reduced the FLOPs in all the evaluated models. For instance, in ResNet-32, the average FLOPs reduction among 10 clients is $29.6\%$, and the client with the highest FLOPs reduction achieves $38.4\%$ fewer FLOPs than the original model, while the client models have a relatively low sparsity ratio~(the sparsity ratio is the fraction of salient parameters relative to the entire model's parameters). The low sparsity ratio can further help accelerate the models on parallel platforms, such as GPUs.
Additionally, we evaluate the salient parameter selection agents' pruning ability and compare it with SoTA pruning methods. As shown in Table~\ref{tab:res_cifar}, our agent achieves outstanding results in pruning task and outperforms popular AutoML pruning baselines. The results indicate that \textsc{SPATL}\xspace can significantly accelerate model inference with acceptably small accuracy loss.
\input{table/t_transfer}
\subsection{Transferability of the Learned Model}
In \textsc{SPATL}\xspace, since only the partial model~(i.e., knowledge encoder) is trained in a distributed manner, we conducted a transferability comparison experiment to test for successful transfer of knowledge among heterogeneous edge clients.
Specifically, we transfer the neural network trained by \textsc{SPATL}\xspace and SoTAs (e.g., FedAvg, FedNova, SCAFFOLD, etc.) separately to a new portion of data and compare the performance of transferred models.
The experimental settings are as follows:
we split CIFAR-10~\cite{krizhevsky2009cifar} into two separate datasets, one with 50K images~(for federated learning) and another with 10K images~(for transfer learning after FL is finished). We use ResNet-20 and set 10 clients for federated learning, where each client has a 4k-image local training set and a 1k-image validation set. Transfer learning was conducted in a regular~(non-distributed) manner.
Table~\ref{tab:trans} shows the results.
The model trained by \textsc{SPATL}\xspace achieves comparable transfer learning results with the SoTAs.
This further shows that \textsc{SPATL}\xspace, which only trains a shared encoder in a distributed manner~(i.e., as opposed to training the entire model), can successfully learn and transfer the knowledge of a heterogeneous dataset.
\input{figure/f_pram_select_ablation}
\input{figure/fsub_ablation}
\subsection{Ablation Study}
\subsubsection{Salient Parameter Selection vs. No Parameter Selection}
Modern AI models are huge (involving billions of parameters) and often over-parameterized.
Thus, only a subset of salient parameters can significantly affect the model's final performance. As such, a reasonable pruning of redundant parameters might not negatively impact model training. This section investigates the impact of salient parameter selection on federated learning. Specifically, we compare \textsc{SPATL}\xspace with and without salient parameter selection.
\input{table/t_prune}
Figure~\ref{fig:pram_ablation} shows the results.
We conducted the experiment on ResNet-20 with various FL settings. All of the results indicate that
properly pruning some unimportant weights of over-parameterized networks will not harm training stability in federated learning. Instead, it might produce better results in some cases. Especially in the 10 clients setting, \textsc{SPATL}\xspace optimizes a higher-quality model after applying the parameter selection.
We further evaluate the salient parameter agent by applying it to network pruning task and comparing it with popular pruning methods.
Table~\ref{tab:res_cifar} shows the pruning results. \textsc{SPATL}\xspace's parameter selection agent achieves competitive results with SoTA pruning methods, which means that it can prune redundant parameters and significantly reduce the FLOPs or pruned model with negligible accuracy loss.
Moreover, state-of-the-art salient parameter selection methods, such as SFP~\cite{he2018sfp}, DSA~\cite{ning2020dsa}, and FPGM~\cite{he2019FPGM}, are usually non-transferable for a given model. They require time-consuming search and re-training to find a target model architecture and salient parameters. For instance, as shown in~\cite{wang2020apq}~(table 2), a combined model compression method needs 85.08 lbs of $CO_{2}$ emissions to find a target model architecture. This makes it expensive to deploy on edge devices.
In \textsc{SPATL}\xspace, the RL agent is a tiny GNN followed by an MLP. It computes the target salient parameters in a single one-shot inference~(0.36 ms on an NVIDIA V100), and its memory consumption is 26 KB, which is acceptable on edge devices.
\subsubsection{Transfer Learning vs. No Transfer Learning}
\textsc{SPATL}\xspace transfers the shared encoder to local non-IID datasets and addresses the heterogeneity issue of FL. To investigate the effects of transfer learning on \textsc{SPATL}\xspace, in this section, we disable \textsc{SPATL}\xspace's transfer learning. Figure~\ref{fig:ablation}~(a) shows the results. We train ResNet-20~\cite{he2016resnet} on CIFAR-10~\cite{krizhevsky2009cifar} with 10 clients and sample all the clients in each communication round. \textsc{SPATL}\xspace without transfer learning performs poorly when optimizing the model. Combining the results presented in Figure~\ref{fig:local_acc}, we can infer that a uniform model deployed on heterogeneous clients causes performance diversity~(i.e., the model performs well on some clients but poorly on others). Intuitively, clients whose data distribution is similar to the global data distribution usually perform better, whereas clients far from the global data distribution struggle to converge.
This adequately shows that, by introducing transfer learning, \textsc{SPATL}\xspace can better deal with heterogeneity issues in FL.
Transfer learning enables every client to customize the model on its non-IID data and produces significantly better performance than training without it.
\subsubsection{Impact of Gradient Control}
\textsc{SPATL}\xspace maintains control variates both in the local and cloud environment to help correct the local update directions and guide the encoder's local gradient towards the global gradient direction.
Figure~\ref{fig:ablation} (b) shows the results of \textsc{SPATL}\xspace with and without gradient control. We train VGG-11~\cite{simonyan2015vgg} on CIFAR-10~\cite{krizhevsky2009cifar} with 10 clients.
Training the model in the heterogeneous non-IID local dataset typically causes high variants of local gradients leading to poor convergence. The gradient control variates in \textsc{SPATL}\xspace maintain the global gradient direction and correct the gradient drift, thus producing a stable training process.
The results are in line with our expectations that gradient control remarkably improves the training performance of \textsc{SPATL}\xspace.
\input{figure/f_rl_effic}
\subsubsection{Fine-tuning Reinforcement Learning Agent}
This section discusses the cost of pre-training a reinforcement learning agent and the cost of customization by slightly fine-tuning the agent's weights through online reinforcement learning.
We pre-train the RL agent on a network pruning task on ResNet-56, then transfer the agent to ResNet-18 by fine-tuning, updating only the predictor of the RL agent's policy network.
Figure~\ref{fig:rl_ef} shows the average reward the RL agent obtains on the network pruning task and the agent's corresponding update round. For both ResNet-18 and ResNet-56, the RL agent converges rapidly, within about 40 rounds of policy updates. Notably, on ResNet-18, slightly fine-tuning the RL agent achieves rewards comparable to those on ResNet-56. This means the agent can be successfully transferred to a newly deployed model, and further shows the feasibility of fine-tuning the pre-trained salient parameter selection agent.
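As a rough sketch of this transfer step (the module structure below is our own assumption, not \textsc{SPATL}\xspace's actual code), one can freeze the GNN encoder and fine-tune only the predictor head:
\begin{verbatim}
# Hypothetical sketch: freeze the agent's GNN encoder and fine-tune only
# the MLP predictor head on the newly deployed model's pruning task.
import torch
import torch.nn as nn

class PruningAgent(nn.Module):          # stand-in for the tiny GNN + MLP agent
    def __init__(self, d=64):
        super().__init__()
        self.gnn = nn.Linear(d, d)      # placeholder for the GNN encoder
        self.predictor = nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                                       nn.Linear(d, 1))
    def forward(self, x):
        return self.predictor(self.gnn(x))

agent = PruningAgent()
for p in agent.gnn.parameters():        # transferred encoder stays frozen
    p.requires_grad = False
optimizer = torch.optim.Adam(agent.predictor.parameters(), lr=1e-3)
\end{verbatim}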
\section{Conclusion and Discussion}
\label{sec:con}
In this paper, we presented \textsc{SPATL}\xspace, a method for efficient federated learning using salient parameter aggregation and transfer learning. To address data heterogeneity in federated learning, we introduced a knowledge transfer local predictor that transfers the shared encoder to each client. We proposed a salient parameter selection agent to filter salient parameters of the over-parameterized model before communicating it with the server. As a result, the proposed method significantly decreases the communication overhead. We further leveraged a gradient control mechanism to stabilize the training process and make it more robust. Our experiments show that \textsc{SPATL}\xspace has a stable training process and achieves promising results. Moreover, \textsc{SPATL}\xspace significantly reduces the communication cost and accelerates the model inference time.
The proposed approach has limitations, which we discuss in Section~\ref{sec:lim}; in particular, it may perform poorly on simple, less-parameterized models, and not all AI models are transferable.
In our future work, we will continue to improve the universality of our method.
\subsection{Limitations}
\label{sec:lim}
The proposed approach may perform poorly on simple models. As Figure~\ref{fig:vgg_cifar} shows, our approach works well on over-parameterized neural networks, such as ResNet~\cite{he2016resnet} and VGG~\cite{simonyan2015vgg} net. However, when it comes to less-parameterized models, such as a 2-layer CNN, the salient parameter selection may degrade in performance, making the model converge more slowly than the baselines. In practice, less-parameterized models are rarely used in real-world applications.
\section{Motivation – Use Cases}
\label{sec:motivation}
With advancements in the performance of mobile and embedded devices, more and more applications are moving to decentralized learning on the edge.
Improved ML models and advanced weight pruning techniques mean a significant amount of future ML workload will come from decentralized training and inference on edge devices~\cite{park2018dl_inference_fb}.
Edge devices operate under strict performance, power, and privacy constraints, which are affected by factors such as model size and accuracy, training and inference time, and privacy requirements.
Many edge applications, such as self-driving cars, could not be developed and validated without HPC simulations: HPC accelerates the data analysis and design process of these systems to ensure safety and efficiency.
Therefore, the prevailing edge computing trend alongside FL requirements and edge constraints motivate \textsc{SPATL}\xspace to address challenges in HPC.
Firstly, frequent sharing of model weights between edge devices and the central server incurs a hefty communication cost~\cite{problemsfl,Sheller2018medical}. Thus, reducing communication overhead is imperative.
Secondly, the increasing demand for computing, memory, and storage for AI models (e.g., deep neural networks -- DNNs) makes it hard to deploy them on resource-constrained Internet-of-Things~(IoT) and edge devices~\cite{imteaj2022iot,nguyen2021iot}. Transfer learning can be a viable solution to address this problem.
Thirdly, latency-sensitive applications with privacy constraints (e.g., self-driving cars~\cite{kato2018auto}, augmented reality~\cite{mangiante2017vr}) in particular, are better suited for fast edge computing~\cite{wu2019ml_fb}. Hence, cutting back on inference time is quite important.
Tech giants like Google, Apple, and NVIDIA are already using FL for their applications (e.g., Google Keyboard~\cite{yang2018keyboard,chen2019keyboard}, Apple Siri~\cite{apple2019siri,paulik2021siri}, NVIDIA medical imaging~\cite{li2019nvidia}) thanks to their large number of edge devices. Hence, scalability is important in FL and HPC settings.
Lastly, training data on client edge devices depends on the user's unique usage, causing an overall non-IID~\cite{mcmahan2017fedavg,zhao2018federated} user dataset. Data heterogeneity is a major problem in decentralized model training~\cite{problemsfl,nishio2019clientselection,zhao2018federated,hsieh2020quagmire,li2020convergence_noniid,lim2020federated,li2020fedprox,reddi2021fedopt,karimireddy2020scaffold,wang2020fednova,zhang2021adaptive_noniid}.
Thus, designing efficient decentralized learning models and deploying them effectively will be crucial to improving the performance of future edge computing and HPC.
\section{Introduction}
Recently the LHCb collaboration reported $3.5\sigma$ evidence for a non-zero
value of the difference between the time-integrated CP asymmetries in the
decays $D^0 \to K^+K^-$ and $D^0 \to \pi^+\pi^-$~\cite{lhcb},
\begin{equation}\label{LHCbnews}
\Delta a_{CP} \equiv a_{K^+ K^-} - a_{\pi^+ \pi^-}
= -(0.82\pm 0.21\pm0.11)\%\,.
\end{equation}
The time-integrated CP asymmetry for a final CP eigenstate, $f$, is
defined as
\begin{equation}
a_f \equiv \frac{\Gamma(D^0\to f)-\Gamma(\bar D^0\to f)}
{\Gamma(D^0\to f)+\Gamma(\bar D^0\to f)}\,.
\end{equation}
Combined with previous measurements of these CP
asymmetries~\cite{Aaltonen:2011se, Staric:2008rx, Aubert:2007if, HFAG}, the
world average is
\begin{equation}
\Delta a_{CP} = -(0.65\pm 0.18)\%\,.
\label{eq:acpExp}
\end{equation}
Following~\cite{Grossman:2006jg} we write the singly-Cabibbo-suppressed $D^0\
(\bar D^0)$ decay amplitudes $A_f\ (\bar A_f)$ to CP eigenstates, $f$, as
\begin{subequations}
\begin{eqnarray}
A_f &=& A^T_f\, e^{i \phi_f^T} \big[1+r_f\, e^{i(\delta_f + \phi_f)}\big]\,, \\
\bar A_f &=& \eta_{CP}\, A^T_f\, e^{-i \phi_f^T}
\big[1+r_f\, e^{i(\delta_f - \phi_f)}\big]\,,
\end{eqnarray}
\end{subequations}
where $ \eta_{CP}=\pm1$ is the CP eigenvalue of $f$, the dominant
singly-Cabibbo-suppressed ``tree" amplitude is denoted $A^T_f\, e^{\pm i
\phi^T_f}$, and $r_f$ parameterizes the relative magnitude of all the subleading
amplitudes (often called ``penguin" amplitudes), which have different strong
($\delta_f$) and weak ($\phi_f$) phases.
In the following we focus on the $\pi^+ \pi^-$ and $K^+ K^-$ final states. In
general, $a_f$ can be written as a sum of CP asymmetries in decay, mixing, and
interference between decay with and without mixing. Mixing effects are
suppressed by the $D^0-\bar D^0$ mixing parameters, and, being universal, tend
to cancel in the difference between $K^+ K^-$ and $\pi^+ \pi^-$ final
states~\cite{Grossman:2006jg}. Taking into account the different time-dependence
of the acceptances in the two modes, LHCb quotes~\cite{lhcb} for the
interpretation of Eq.~(\ref{LHCbnews}),
\begin{equation}
a_{K^+ K^-} - a_{\pi^+\pi^-} \approx a^{\rm dir}_K - a^{\rm dir}_\pi
+ (0.10\pm0.01)\,a_{\rm ind}\,.
\end{equation}
Thus, {because of the experimental constraints on the mixing parameters [see
Eq.~(\ref{eq:expth})]}, a large $\Delta a_{CP}$ can be generated only by the
direct CP violating terms,
\begin{equation}
a_f^{\rm dir} = -\frac{2 {r_f} \sin \delta_f \sin \phi_f}
{1+2 r_f \cos \delta_f \cos \phi_f + r_f^2}\,,
\label{eq:adf}
\end{equation}
and we use the $f=K,\pi$ shorthand for $K^+K^-$ and $\pi^+\pi^-$.
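As an illustrative numerical check (ours, not part of the original analysis), the exact expression in Eq.~(\ref{eq:adf}) is well approximated by its linearization $-2 r_f \sin\delta_f \sin\phi_f$ for small $r_f$:
\begin{verbatim}
# Illustrative check of the direct CP asymmetry formula above:
# for r_f << 1 it reduces to -2 r_f sin(delta_f) sin(phi_f).
# The numerical values below are toy assumptions.
import math

def a_dir(r, delta, phi):
    num = -2 * r * math.sin(delta) * math.sin(phi)
    den = 1 + 2 * r * math.cos(delta) * math.cos(phi) + r**2
    return num / den

r, delta, phi = 1e-3, math.pi / 2, math.radians(70)
print(a_dir(r, delta, phi))                      # -1.879e-03
print(-2 * r * math.sin(delta) * math.sin(phi))  # -1.879e-03 (linearized)
\end{verbatim}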
\section{General considerations and SM prediction}
Independent of the underlying physics, a necessary condition for non-vanishing
$a_f^{\rm dir}$ is to have at least two amplitudes with different strong and
weak phases contribute to the final state $f$. In the isospin symmetry limit,
the condition on the strong phases implies that different isospin amplitudes
have to contribute. Since the leading (singly-Cabibbo-suppressed) terms in the
standard model (SM) effective Hamiltonian, defined below, have both $\Delta
I=1/2$ and $\Delta I=3/2$ components, the subleading operators with a different
weak phase may have a single isospin component. As far as amplitudes with a
different weak phase are concerned, in the SM, as well as within its MFV
expansions~\cite{MFV, GMFV}, they are suppressed by $\xi \equiv |V_{cb}V_{ub}| /
|V_{cs}V_{us}| \approx 0.0007$.
The SM effective weak Hamiltonian relevant for hadronic
singly-Cabibbo-suppressed $D$ decays, renormalized at a scale $m_c < \mu< m_b$
can be decomposed as
\begin{equation}\label{SMHeff}
\mathcal H^{\rm eff}_{|\Delta c| = 1} =
\lambda_d\, \mathcal H^{d}_{|\Delta c| = 1}
+ \lambda_s\, \mathcal H^{s}_{|\Delta c| = 1}
+ \lambda_b\, \mathcal H^{{\rm peng}}_{|\Delta c| = 1}\,,
\end{equation}
where $\lambda_q = V_{cq}^*V_{uq}$, and
\begin{eqnarray}\label{eq:Q12}
\mathcal H^{q}_{|\Delta c| = 1} &=& \frac{G_F}{\sqrt 2}
\sum_{i=1,2} C^q_i Q_i^q + {\rm H.c.}\,, \qquad q=s,d, \nonumber \\
Q^q_1 &=& (\bar u q)_{V-A}\, (\bar q c)_{V-A}\,, \nonumber \\
Q^q_2 &=& (\bar u_\alpha q_\beta)_{V-A}\, (\bar q_\beta c_\alpha)_{V-A}\,,
\end{eqnarray}
and $\alpha,\beta$ are color indices.
The first two terms in Eq.~(\ref{SMHeff}) have ${\cal O}(1)$ Wilson coefficients
in the SM. On the contrary, the so-called penguin operators in
$\mathcal H^{{\rm peng}}_{|\Delta c| = 1}$ have tiny Wilson coefficients at scales $m_c <
\mu< m_b$ (see Refs.~[\onlinecite{Golden:1989qx}, \onlinecite{Grossman:2006jg}]
for the list of relevant operators and Wilson coefficients).
Let us first consider the $D \to K^+K^-$ amplitude. In the SM, it is convenient
to use CKM unitarity, $\lambda_d + \lambda_s + \lambda_b=0$, to eliminate the
$\lambda_d$ term, and obtain $A_{K} = \lambda_s (A^s_{K} - A^d_{K}) + \lambda_b
(A_{K}^b - A_{K}^d)$. For $D \to \pi^+\pi^-$, it is convenient to eliminate
$\lambda_s$ to obtain $A_\pi = \lambda_d (A^d_\pi - A^s_\pi) + \lambda_b
(A_\pi^b - A_\pi^s)$. This way, the first terms are singly-Cabibbo-suppressed,
while the second terms are both CKM suppressed and have either vanishing
tree-level matrix elements or tiny Wilson coefficients. The magnitudes of these
subleading amplitudes are controlled by the CKM ratio $\xi=
|\lambda_b/\lambda_s| \simeq |\lambda_b/\lambda_d| \approx 0.0007$ and the
ratio of hadronic amplitudes. We define
\begin{equation}\label{RKpidef}
R^{{\rm SM}}_K = \frac{ A^b_K - A^d_K }{ A_K^s - A_K^d }\,, \qquad
R^{{\rm SM}}_\pi = \frac{ A^b_\pi - A^s_\pi }{ A_\pi^d - A_\pi^s }\,.
\end{equation}
Since $\mathrm{arg}(\lambda_b/\lambda_s) \approx
-\mathrm{arg}(\lambda_b/\lambda_d) \approx 70^\circ$, we can set
$|\sin(\phi^{{\rm SM}}_f)| \approx 1$ in both channels, and neglect the interference
term in the denominator of Eq.~(\ref{eq:adf}).
In the $m_c \gg \Lambda_{\rm QCD}$ limit, one could analyze these decay
amplitudes model independently. Given the valence-quark structure of the
$K^+K^-$ final state, a penguin contraction is required for operators of the
type $c \to ud\bar d$ or $ub\bar b$ to yield a non-vanishing $D \to K^+K^-$
matrix element. This is why $R^{{\rm SM}}_K$ is expected to be substantially smaller
than one. A na\"ive estimate in perturbation theory yields $|A^d_K/A^s_K| \sim
\alpha_s(m_c)/\pi\sim 0.1$ (and $|A^b | \lesssim |A^d |$). However, since the charm
scale is not far from $\Lambda_{\rm QCD}$, non-perturbative enhancements leading
to substantially larger values cannot be excluded~\cite{Golden:1989qx}. The
same holds for the ratio $R^{{\rm SM}}_\pi$ defined in Eq.~(\ref{RKpidef}).
To provide a semi-quantitative estimate of $R^{{\rm SM}}_{K,\pi}$ beyond perturbation
theory, we note that penguin-type contractions are absent in the
Cabibbo-allowed $c\to u s \bar d$ Hamiltonian, contributing to $D\to K^+ \pi^-$.
In the absence of penguin contractions, $D\to K^+K^-$ and $D\to \pi^-\pi^+$
amplitudes have identical topologies to $D\to K^+ \pi^-$, but for appropriate
$s\leftrightarrow d$ exchanges of the valence quarks. The data imply $|A_{KK}|
\approx 1.3\, |\lambda_s A_{K\pi}|$ and $|A_{\pi\pi}| \approx 0.7\, |\lambda_s
A_{K\pi}|$. These results are compatible with the amount of $SU(3)$ breaking
expected in the tree-level amplitudes and show no evidence for anomalously
large penguin-type contractions competing with the tree-level amplitudes.
Further evidence that tree-level topologies dominate the decay rates is
obtained from the smallness of $\Gamma(D\to K^0\bar K^0)/\Gamma(D\to K^+K^-)$,
which is consistent with the vanishing $D\to K^0\bar K^0$ tree-level matrix
element of $\mathcal H^{(s -d)}$ in the $SU(3)$ limit. However, it must be stressed
that data on the decay rates do not allow us to exclude a substantial
enhancement of the CKM suppressed amplitudes. The latter do not have an $s-d$
structure as the leading Hamiltonian, and, if enhanced over na\"ive estimates as
in the case of the $\Delta I =1/2$ rule in $K\to \pi\pi$ amplitudes, may
account for $|R^{{\rm SM}}_{K,\pi}|>1$~\cite{Golden:1989qx}.
In the following we assume that $r_f \ll 1$ even in the presence of new physics
(NP), and we can expand Eq.~(\ref{eq:adf}) to first order in this parameter. We
can thus write
\begin{equation}
a_K^{\rm dir} \approx 2 \bigg[ \xi\, \mathrm{Im} (R_K^{{\rm SM}})
+ \frac1{\sin\theta_C} \sum_i \mathrm{Im}(C_i^{{\rm NP}})\, \mathrm{Im}
(R_{K,i}^{{\rm NP}}) \bigg] ,
\end{equation}
and similarly in the $\pi^+\pi^-$ mode. Here $R_{K,i}^{{\rm NP}}$ denote the ratio
of the subleading amplitudes generated by the operators $Q_i$ in the NP
Hamiltonian defined below in Eq.~(\ref{eq:HNP}), normalized to the dominant SM amplitude,
after factoring out the leading CKM dependence, $\sin\theta_C \approx
|\lambda_{s,d}|\approx 0.225$, and the NP Wilson coefficients,\footnote{Contrary to the SM case,
where the CKM factors are explicitly factorized, in the NP case
we include flavor mixing terms in the $C_i^{\rm NP}$ --- see Eq.~\eqref{eq:HNP}.} $C^{\rm NP}_i$. This
implies
\begin{equation}
\Delta a_{CP} \approx (0.13\%) \mathrm{Im} (\Delta R^{{\rm SM}})
+ 8.9 \sum_i \mathrm{Im} (C_i^{{\rm NP}})\, \mathrm{Im} (\Delta R^{{\rm NP}}_{i} ),
\label{eq:acpNP}
\end{equation}
where we defined
\begin{equation}\label{DeltaRdef}
\Delta R^{{\rm SM},{\rm NP}} = R_K^{{\rm SM},{\rm NP}} + R^{{\rm SM},{\rm NP}} _\pi\,.
\end{equation}
In the $SU(3)$ limit, $R_K^{{\rm SM}} = R_\pi^{{\rm SM}}$, and therefore $a_K^{\rm dir}
\approx - a_\pi^{\rm dir}$, which add constructively in $\Delta
a_{CP}$~\cite{Golden:1989qx, Quigg:1979ic}.
Assuming the SM, the central value of the experimental result is recovered if
${\rm Im}(\Delta R^{{\rm SM}}) \approx 5$, as illustrated in Fig.~\ref{fig:SM}. Such
an enhancement of the CKM-suppressed amplitude cannot be excluded from first
principles, but it is certainly beyond its na\"ive
expectation~\cite{Grossman:2006jg}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.47\textwidth]{SM}
\caption{Comparison of the experimental $\Delta a_{CP}$ values with the SM
reach as a function of $|\Delta R^{{\rm SM}}|$. } \label{fig:SM}
\end{figure}
Note that the applicability of $SU(3)$ flavor symmetry should be questioned,
because the $D\to K^+K^-$ and $D\to \pi^+\pi^-$ decay rates imply a large
breaking of the symmetry. Without $SU(3)$ as a guidance, one can no longer
expect $a_K^{\rm dir} \approx - a_\pi^{\rm dir}$; in particular, the strong
phases relevant for direct CP violation in these two channels are no longer
related. One might then expect $|a_\pi^{\rm dir}| < |a_K^{\rm dir}|$, if the
deviation from factorization is smaller in the $\pi^+\pi^-$ than in the $K^+K^-$
mode. Therefore, it will be very interesting for the interpretation of the
results when the CP asymmetries are measured separately with increased
precision. Recent measurements by CDF~\cite{Aaltonen:2011se},
Belle~\cite{Staric:2008rx} and BaBar~\cite{Aubert:2007if} yield for the average
of the individual CP asymmetries (without LHCb, and dominated by
CDF~\cite{Aaltonen:2011se}) in the $\pi^+\pi^-$ and $K^+K^-$ modes
$(2.0\pm2.2)\times 10^{-3}$ and $(-2.3\pm1.7)\times 10^{-3}$~\cite{HFAG},
respectively, which does not yet allow us to draw definite conclusions [and are
included in Eq.~(\ref{eq:acpExp})].
Another important experimental handle to decide whether the
observed signal can or cannot be accommodated in the SM would be observing or
constraining CP violation in other decay modes, corresponding to the same
quark-level transitions. These include pseudoscalar-vector or vector-vector
final states, three-body decays, $D_s$ and $\Lambda_c$ decays. More precise
measurements in such decays will help to decide whether the measured CP
asymmetry in Eq.~(\ref{LHCbnews}) is due to new short distance physics, or to a
large enhancement of a hadronic matrix element in one particular channel.
\section{New Physics contributions}
The size of NP effects allowed in $\Delta a_{CP}$ depends on $\mathrm{Im} (\Delta R^{{\rm SM}})$. In order to understand the scale probed by the
measurement, we parametrize the NP contributions in terms of an effective NP
scale $\Lambda_{\rm NDA}$, normalized to the Fermi scale: $
\mathrm{Im}(C_i^{{\rm NP}}) = \sqrt 2\, \mathrm{Im} (C_{\rm NDA}) / (\Lambda_{\rm
NDA}^2 G_F)$. The resulting sensitivity for $\Lambda_{\rm NDA}$ ($C_{\rm
NDA}$) can be written as
\begin{equation}\label{eq:NP}
\mathrm{Im}(C_{\rm NDA})\,\frac{(10~\rm TeV)^2}{\Lambda_{\rm NDA}^2}
= \frac{ {(0.61\pm 0.17)} - 0.12\, \mathrm{Im}(\Delta R^{\rm SM})}{\mathrm{Im}(\Delta R^{\rm NP})}.
\end{equation}
In other words, assuming ${\mathrm{Im}(\Delta R^{\rm NP})}\sim1$, $|\Delta R^{\rm SM}| \ll
5 $ and $C_{\rm NDA}=1$ implies that a NP scale of $\mathcal
O(13{\rm\,TeV})$ will saturate the observed CP violation; alternatively,
setting $\Lambda_{\rm NDA}\to 2^{1/4}/\sqrt{G_F}$ implies that $C_{\rm
NDA} \sim 7 \times10^{-4}$ is required. As we discuss below, despite
the large scale involved, after taking into account the bounds from CP violation
in $|\Delta c|=2$ and $|\Delta s|=1$ processes, only a few NP operators may
saturate the value in Eq.~(\ref{eq:NP}) in the limit $|\Delta R^{\rm SM}| \ll 5 $.
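The two benchmark numbers quoted above can be reproduced directly (an illustrative cross-check of ours, with $G_F$ inserted by hand):
\begin{verbatim}
# Numerical check of the NP scale relation above, taking the central
# value 0.61 with Im(Delta R_SM) = 0 and Im(Delta R_NP) = 1.
import math

GF = 1.1664e-5                       # Fermi constant [GeV^-2]
rhs = 0.61

# C_NDA = 1: solve Im(C)*(10 TeV)^2 / Lambda^2 = rhs for Lambda.
print(10.0 / math.sqrt(rhs))         # ~12.8 TeV, i.e. O(13 TeV)

# Lambda_NDA = 2^(1/4)/sqrt(GF): solve for C_NDA instead.
lam_tev = 2**0.25 / math.sqrt(GF) / 1000.0   # ~0.35 TeV
print(rhs * lam_tev**2 / 10.0**2)            # ~7e-4
\end{verbatim}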
To discuss possible NP effects, we consider the following effective Hamiltonian
\begin{eqnarray}
\mathcal H^{\rm eff-\rm NP}_{|\Delta c| = 1} &=& \frac{G_F}{\sqrt 2}
\sum_{i=1,2,5,6} \sum_q (C^q_i Q_i^q + C^{q\prime}_i Q^{q\prime}_i) \nonumber \\
&+& \frac{G_F}{\sqrt 2} \sum_{i=7,8} (C_i Q_i + C'_i Q^{\prime}_i)+ {\rm H.c.}\,,
\label{eq:HNP}
\end{eqnarray}
where $q=\{d,s,b,u,c\}$, and the list of operators includes, in addition to
$Q^q_{1,2}$ given in Eq.~(\ref{eq:Q12}),
\begin{eqnarray}
Q^q_5 &=& (\bar u c)_{V-A}\, (\bar q q)_{V+A}\,,\nonumber\\
Q^q_6 &=& (\bar u_\alpha c_\beta)_{V-A}\, (\bar q_\beta q_\alpha)_{V+A}\,, \nonumber\\
Q_{7} &=& - \frac{e}{8\pi^2}\, m_c\, \bar u \sigma_{\mu\nu}
(1+\gamma_5) F^{\mu\nu}\, c\,, \nonumber\\
Q_{8} &=& - \frac{g_s}{8\pi^2}\, m_c\, \bar u \sigma_{\mu\nu}
(1+\gamma_5) T^a G_a^{\mu\nu} c\,,
\end{eqnarray}
and another set, $Q^{(q)\prime}_i$, obtained from $Q^{(q)}_i$ via the
replacements $A \leftrightarrow -A$ and $\gamma_5 \leftrightarrow -\gamma_5$.
This is the most general dimension-six effective Hamiltonian relevant for $D\to
K^+K^-,\, \pi^+\pi^-$ decays, after integrating out heavy degrees of freedom
around or above the electroweak scale.
\subsection{\boldmath Bounds on NP effects from $D^0 -\bar D^0$ mixing}
Charm mixing arises from $|\Delta c| = 2$ interactions that generate
off-diagonal terms in the mass matrix for $D^0$ and $\bar D^0$ mesons. The $D^0
-\bar D^0$ transition amplitudes are defined as
\begin{equation}
\bra{D^0} \mathcal H \ket{\bar D^0} = M_{12} - \frac{i}{2} \Gamma_{12}\,.
\end{equation}
The three physical quantities related to the mixing can be defined as
\begin{equation}
y_{12} \equiv \frac{|\Gamma_{12}|}{\Gamma}\,, \quad
x_{12} \equiv 2 \frac{|M_{12}|}{\Gamma}\,, \quad
\phi_{12} \equiv \mathrm{arg} \bigg(\frac{M_{12}}{\Gamma_{12}}\bigg)\,.
\end{equation}
HFAG has performed a fit to these theoretical quantities, even allowing for CP
violation in decays, and obtained the following 95\% C.L.\ regions~\cite{HFAG}
\begin{align}
x_{12} &\in [0.25,\, 0.99]\,\%\,, \nonumber\\
y_{12} &\in [0.59,\, 0.99]\,\%\,, \nonumber\\
\phi_{12} &\in [-7.1^\circ,\, 15.8^\circ]\,.
\label{eq:expth}
\end{align}
We cannot reliably estimate the SM contributions to these quantities from first
principles, and thus simply require the NP contributions to at most saturate
the above experimental bounds on $x_{12},\ y_{12}$, and $\phi_{12}$.
The NP operators present in $\mathcal H^{\rm eff-NP}_{|\Delta c| = 1}$ may
affect $D^0$--$\bar D^0$ mixing parameters at the second order in the NP coupling,
$T\big\{\mathcal H^{\rm eff-NP}_{|\Delta c| = 1}(x)\, \mathcal H^{\rm
eff-NP}_{|\Delta c| = 1} (0)\big\}$. {Such a contribution, which formally
corresponds to a quadratically divergent one loop diagram, is highly UV
sensitive. If we assume a fully general structure for our effective theory,
where operators are of NDA strength, then the scaling in Eq.~(\ref{eq:NP}) would
imply much too large contributions to $D-\bar D$ mixing and CP violation (see,
e.g.,~\cite{Isidori:2010kg}). This could be a major constraint for many
SM extensions. However, being a genuine UV effect, it is also
highly model dependent. On the other hand, assuming that $\mathcal H^{\rm
eff-NP}_{|\Delta c| = 1}$ is generated above the electroweak scale and the UV
completion of the theory cures the above mentioned problem, we can derive
(model-independent) bounds on the coefficients of $\mathcal H^{\rm
eff-NP}_{|\Delta c| = 1}$ from the effective $|\Delta c|=2$ operators generated at low energies, by considering its time ordered product with the SM charged-current interactions, $T\big\{\mathcal H^{\rm eff-NP}_{|\Delta c| = 1}(x)\, \mathcal
H^{\rm SM}_{|\Delta c| = 1} (0)\big\}$ (schematically depicted in Fig.~\ref{fig:dressing}). }
\begin{figure}[t]
\centering
\includegraphics[width=0.2\textwidth]{dressing}
\caption{Contribution of $\mathcal H^{\rm
eff-NP}_{|\Delta c| = 1}$ (red square) to $|\Delta c|=2$ and $|\Delta s|=1$ operators via a $W$ (blue wavy line) loop.
\label{fig:dressing}}
\end{figure}
The effective Hamiltonian thus obtained integrating out all the heavy fields is
\begin{equation}
\mathcal H^{\rm eff}_{|\Delta c|=2} = \frac{G_F}{\sqrt 2}
\left( \sum_{i=1}^5 C^{cu}_i Q_i^{cu} + \sum_{i=1}^3 C^{cu\prime}_i
Q_i^{cu\prime} \right) ,
\label{eq:HdC=2}
\end{equation}
where
\begin{eqnarray}
Q_1^{cu} &=& (\bar u c)_{V-A} \, (\bar u c)_{V-A}\,, \nonumber\\
Q_2^{cu} &=& (\bar u c)_{S-P} \, (\bar u c)_{S-P}\,, \nonumber\\
Q_3^{cu} &=& (\bar u_\alpha c_\beta)_{S-P} \, (\bar u_\beta c_\alpha)_{S-P}\,, \nonumber\\
Q_4^{cu} &=& (\bar u c)_{S-P} \, (\bar u c)_{S+P}\,,\nonumber\\
Q_5^{cu} &=& (\bar u_\alpha c_\beta)_{S-P} \, (\bar u_\beta c_\alpha)_{S+P}\,,
\end{eqnarray}
and, as before, the $Q_{1,2,3}^{cu\prime}$ operators are obtained from
$Q_{1,2,3}^{cu}$ by the replacements $A \leftrightarrow -A$ and $P
\leftrightarrow -P$.
We perform the matching at one loop at a scale $\mu\gtrsim m_W$. Some
of the contributions generate logarithmic divergences, which are canceled by the
appropriate counterterms, genuine short-distance contributions to the $|\Delta
c| =2$ Hamiltonian in Eq.~(\ref{eq:HdC=2}). We denote the corresponding
contributions to the $|\Delta c| =2$ Wilson coefficients $\delta
C^{cu(\prime)}_i$. Using dimensional regularization with the $\overline{\rm
MS}$ prescription we obtain for the renormalized $|\Delta c| =2$ Wilson
coefficients
\begin{eqnarray}
C_1^{cu} &=& \delta C_1^{cu} + \frac{g^2}{32\pi^2}
\sum_q \lambda_q\, (C^q_2-C^q_1)\, \ln \frac{\mu^2}{m_W^2}\,, \nonumber\\
C_4^{cu} &=& \delta C_4^{cu} - \frac{g^2}{16\pi^2}\sum_q
\lambda_q\, C^{q\prime}_{6}\, \ln \frac{\mu^2}{m_W^2} \,, \nonumber\\
C_5^{cu} &=& \delta C_5^{cu} - \frac{g^2}{16\pi^2}\sum_{q}
\lambda_q\, C^{q\prime}_{5}\, \ln \frac{\mu^2}{m_W^2} \,,
\label{eq:dC2}
\end{eqnarray}
where here and below we neglect contributions proportional to $r_q=m_q^2/m_W^2$.
In particular, the leading order contributions to $C'_{1,2}$ and $C_{5,6}$ which
are proportional to $r_q \ln r_q$ were set to zero. Similarly, contributions of
the gluonic and electromagnetic dipole operators, $Q_{7,8}$, both at tree-level
via two insertions, as well as at one loop, are parametrically suppressed by
$r_c\, \alpha/\sin^2\theta_W$.\footnote{We have verified that due to similar
chiral suppression the contribution of $Q_{7(8)}$ to the down quark
(chromo)electric dipole moment via weak charged current ``dressing" remains well
below present bounds, even for order one Wilson coefficient $C_{7(8)}$.}
Numerically this leads to bounds of order unity on the corresponding Wilson
coefficients, well above the values obtained in Eq.~(\ref{eq:NP}), and thus no
useful constraint is obtained from $D-\bar D$ mixing.
To compute the contributions of $\mathcal H_{|\Delta c|=2}^{\rm eff}$ to
$M_{12}$, we take into account the running and mixing of the operators between
the matching scale $\mu$ and the scale $m_D$. This is performed using the
formula~\cite{Bona:2007vi}
\begin{eqnarray}
\bra{\bar D^0} \mathcal H_{|\Delta c|=2}^{\rm eff} \ket{D^0}_i
&=& \frac{G_F}{\sqrt 2} \sum_{j=1}^5 \sum_{r=1}^5 \left( b_j^{(r,i)}
+ \eta\, c_{j}^{(r,i)}\right) \eta^{a_j} \nonumber\\
&&{}\times C^{cu}_i(\mu) \bra{\bar D^0} Q_r^{cu} \ket{D^0}\,,
\end{eqnarray}
where all the relevant parameters are defined in Ref.~\cite{Bona:2007vi},
including the relevant hadronic operator matrix elements. Requiring that such
contributions do not exceed the bounds on $x_{12}$ and $x_{12}\sin\phi_{12}$ in
Eq.~(\ref{eq:expth}), we obtain the bounds on $C^{cu}_i$ at the matching scale
$\mu\sim1$\,TeV
\begin{align}
|C^{cu}_1| &\lesssim 5.7 \times 10^{-8}\,, & {\rm Im} (C^{cu}_1) \lesssim 1.6\times 10^{-8}\,, \nonumber\\
|C^{cu}_2| &\lesssim 1.6 \times 10^{-8}\,, & {\rm Im} (C^{cu}_2) \lesssim 4.3\times 10^{-9}\,, \nonumber\\
|C^{cu}_3| &\lesssim 5.8 \times 10^{-8}\,, & {\rm Im} (C^{cu}_3) \lesssim 1.6\times 10^{-8}\,, \nonumber\\
|C^{cu}_4| &\lesssim 5.6 \times 10^{-9}\,, & {\rm Im} (C^{cu}_4) \lesssim 1.6\times 10^{-9}\,, \nonumber\\
|C^{cu}_5| &\lesssim 1.6 \times 10^{-8}\,, & {\rm Im} (C^{cu}_5) \lesssim 4.5\times 10^{-9}\,.
\end{align}
Inserting expressions (\ref{eq:dC2}) into the above constraints we can obtain
bounds on the combinations of $\delta C^{cu}_i$ and $C_i^q$ at the high scale.
In the following we set all counterterm contributions to zero and consider only
a single chirality operator structure at a time.
In order to control the QCD induced RGE evolution of the $|\Delta c|=1$
operators between the matching scale and the hadronic charm scale $\mu_D\sim
2$\,GeV, it is convenient to change flavor basis and consider the following set
of operators, for both the $|\Delta c|=1$ and the $|\Delta s|=1$ (see below) NP
Hamiltonians ($i=1,2,5,6$):
\begin{eqnarray}
Q^{(s-d)}_i &=& Q^{s}_i - Q^{d}_i\,, \nonumber \\
Q^{(c-u)}_i &=& Q^{c}_i - Q^{u}_i\,, \nonumber \\
Q^{(8d)}_i &=& Q^{s}_i + Q^{d}_i - 2 Q^{b}_i \,, \nonumber \\
Q^{(b)}_i &=& Q^{s}_i + Q^{d}_i + Q^{b}_i - (3/2)\big(Q^{c}_i + Q^{u}_i\big)\,, \nonumber \\
Q^{(0)}_i &=& Q^{s}_i + Q^{d}_i + Q^{b}_i + Q^{c}_i + Q^{u}_i~, \label{eq:newbasis}
\end{eqnarray}
and similarly for the primed operators.
With this choice, the $Q^{(0)(\prime)}_i$ are the standard QCD penguin operators, whose
RGE evolution can be found, for instance, in~\cite{Golden:1989qx}. Moreover,
penguin contractions are completely absent in the RGE evolution at $\mu\gtrsim
m_c$ of the first two sets of terms in (\ref{eq:newbasis}) and, to a good
approximation (i.e., for $\mu\gtrsim m_b$), are safely negligible also in the
case of $Q^{(b,8d)(\prime)}_i$. For these operators we can thus consider, to
lowest order, a simplified RGE evolution in terms of $2\times 2$ blocks of same
flavor and chirality:
\begin{eqnarray}
\frac{d C^{(f)}_i }{d\ln\mu} &=& \gamma_{LL}^T\, C^{(f)}_i , \quad
\gamma^{(0)}_{LL} = \left( \begin{array}{cc} -\frac{6}{N_c} & 6 \\
6 & -\frac{6}{N_c} \end{array} \right) , \quad{i=1,2}\,, \nonumber \\
\frac{d C^{(f)}_i }{d\ln\mu} &=& \gamma_{LR}^T\, C^{(f)}_i , \quad
\gamma^{(0)}_{LR} = \left( \begin{array}{cc} \frac{6}{N_c} & -6 \\
0 & 6\frac{N_c^2-1}{N_c} \end{array} \right) , \quad{i=5,6}\,, \nonumber \\
\label{eq:rge}
\end{eqnarray}
where $f=\{s-d,\, c-u,\, 8d,\, b\}$, $N_c=3$ is the number of colors, and the
same equations hold for primed operators.
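As an illustrative cross-check (ours), diagonalizing the $2\times2$ blocks in Eq.~(\ref{eq:rge}) shows that the combinations $C_1 \pm C_2$ renormalize multiplicatively:
\begin{verbatim}
# Eigenvalues of the 2x2 anomalous-dimension blocks above, for Nc = 3.
import numpy as np

Nc = 3
gamma_LL = np.array([[-6 / Nc, 6], [6, -6 / Nc]])
gamma_LR = np.array([[6 / Nc, -6], [0, 6 * (Nc**2 - 1) / Nc]])

print(np.linalg.eigvals(gamma_LL))  # [ 4. -8.], eigenvectors C1 +/- C2
print(np.linalg.eigvals(gamma_LR))  # [ 2. 16.]
\end{verbatim}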
This basis also has the benefit of clearly distinguishing between various
contributions to $D-\bar D$ mixing observables suppressed by different CKM
prefactors. Most severe constraints are expected for the flavor combination
$Q_i^{(s-d)(\prime)}$ proportional to $\lambda_s-\lambda_d \approx 2
\lambda_s$. On the other hand, $Q_i^{(8d)(\prime)}$ contributions are
suppressed by $\lambda_s + \lambda_d - 2\lambda_b \approx - 3 \lambda_b$. An
even stronger suppression of $r_b \lambda_b$ is expected for the flavor
combinations $Q_i^{(b,0)(\prime)}$, while
$Q_i^{(c-u)(\prime)}$ do not contribute to $|\Delta c|=2$ observables at one
electroweak loop order.
Considering thus only the cases $Q_i^{(s-d)(\prime)}$ and
$Q_i^{(8d)(\prime)}$, we obtain the bounds on $C^q_i$ in
Table~\ref{tab:uvbounds}. We also verified that due to $r_q$ suppression,
$C'_{1,2}$, $C_{5,6}$, and $C_{7,8}$, as well as the contributions of
$C^{(b,0)}_{1,2}$ and $C^{(b,0)\prime}_{5,6}$ are presently allowed by $D -\bar
D$ data to be $\mathcal O(1)$. We observe that $Q^{(s-d)}_{1,2}$ and
$Q^{(s-d)\prime}_{5,6}$ are excluded from explaining the central value of
$\Delta a_{CP}$ in Eq.~(\ref{eq:acpExp}) for $|\Delta R^{{\rm SM}}|\ll5$ and
reasonable values of $|\Delta R^{{\rm NP}}|$. On the other hand,
$Q^{(8d)}_i$ can satisfy all present experimental constraints in the charm
sector given significant values of $|\Delta R^{{\rm NP}}|$ as also shown in
Fig.~\ref{fig:corr}.
\begin{table}[bt]
\tabcolsep 8pt
\begin{tabular}{c|cc}
\hline
$f$ & $s-d$ & $8d$ \\
\hline\hline
$\mathrm{Im}\big(C_{1,2}^{(f)}\big)$ & $5.4\times 10^{-6}$ & $4.5\times 10^{-3}$ \\
$\mathrm{Im}\big(C_5^{(f)\prime}\big)$ & $7.3\times 10^{-7}$ & $6.1\times 10^{-4}$ \\
$\mathrm{Im}\big(C_6^{(f)\prime}\big)$ & $2.7\times 10^{-7}$ & $2.2\times 10^{-4}$ \\
\hline
\end{tabular}
\caption{\label{tab:uvbounds} Bounds on the imaginary parts of $|\Delta c|=1$ Wilson
coefficients at the scale $\mu=1$~TeV, derived from searches for CP violation in
$D- \bar D$ mixing.}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{corr}
\caption{NP contributions of the form $Q^{(c-u,8d,b,0)}_{1,2}$,
$Q^{(0)}_{5,6}$, and $Q_{5,6}^{(8d)\prime}$, reproducing the measured central
value of $\Delta a_{CP}$ and consistent with searches for CP violation in
$D-\bar D$ mixing (in the electronic version shaded in red) and the measured
value of $\epsilon'/\epsilon$ (in the electronic version shaded in green).
Both are estimated via one weak loop matching at $\mu\simeq 1$~TeV, as a
function of the unknown amplitude ratios, $\Delta R^{\rm SM}$ and $\Delta
R^{\rm NP}$ defined in Eq.~(\ref{eq:acpNP}). Assuming reasonable ranges for the
hadronic matrix elements, contributions of individual operators can be consistent with all experimental
results in the regions above the contours drawn below the respective operator labels.
\label{fig:corr}}
\end{figure}
\subsection{\boldmath Bounds on NP effects from $\epsilon'/\epsilon$}
As before, we can derive bounds from $T\{\mathcal H^{\rm
eff-NP}_{|\Delta c| = 1}(x)\, H^{\rm SM}_{c.c}(0)\}$ generating an effective
$|\Delta s| = 1$ interaction. We first project the $|\Delta s| = 1$ effective
operators onto the following basis:
\begin{equation}
\mathcal H^{\rm eff-{\rm NP}}_{|\Delta s| = 1} = \frac{G_F}{\sqrt 2} \sum_{i,q} C^{q(ds)}_i Q_i^{q(ds)} + {\rm H.c.}\,,
\end{equation}
where
\begin{eqnarray}
Q^{q(ds)}_1 &=& (\bar d s)_{V-A}\, (\bar q q)_{V-A}\,, \nonumber\\
Q^{q(ds)}_2 &=& (\bar d_\alpha s_\beta)_{V-A}\, (\bar q_\beta q_\alpha)_{V-A}\,, \nonumber\\
Q^{q(ds)}_5 &=& (\bar d s)_{V-A}\, (\bar q q)_{V+A}\,, \nonumber\\
Q^{q(ds)}_6 &=& (\bar d_\alpha s_\beta)_{V-A}\, (\bar q_\beta q_\alpha)_{V+A}\,.
\end{eqnarray}
These are the only effective operators generated at
the one-loop level from $T\{\mathcal H^{\rm eff-NP}_{|\Delta c| = 1}(x)\, H^{\rm
SM}_{c.c}(0)\}$ in the limit where we neglect light quark masses. It is also
clear that these receive non-suppressed contributions only from the $Q_i^{q}$ in
$H^{\rm eff-NP}_{|\Delta c| = 1}$: the contributions of $Q^{q\prime}_{i}$ and
dipole operators are doubly Yukawa suppressed (in addition to the loop
suppression), and thus can be safely neglected.
\begin{table}[t]
\tabcolsep 8pt
\begin{tabular}{c|ccccc}
\hline
$f$ & $s-d$ & $c-u$ & $8d$ & $b$ & $0$ \\
\hline\hline
${\rm Im}\big(C_{1}^{(f)}\big)\times10^{3}$ & $2.0$ & $2.0$ & $2.0$ & $0.79$ & $2.2$ \\
${\rm Im}\big(C_{2}^{(f)}\big)\times10^{3}$ & $2.0$ & $2.3$ & $2.0$ & $0.88$ & $6.6$ \\
${\rm Im}\big(C_{5}^{(f)}\big)\times10^{5}$ & $2.7$ & $2.8$ & $2.7$ & $1.1$ & $142$ \\
${\rm Im}\big(C_{6}^{(f)}\big)\times10^{5}$ & $0.90$ & $0.94$ & $0.90$ & $0.37$ & $28$ \\
\hline
\end{tabular}
\caption{\label{tab:ds} Bounds on the imaginary parts of $|\Delta c|=1$ Wilson coefficients at the scale $\mu =1$~TeV, from the contributions to $|\epsilon'/\epsilon|$.}
\end{table}
Proceeding as before, we get
\begin{equation}
C^{q(ds)}_i = \delta C^{q(ds)}_i + C^q_{i} \frac{g^2}{32\pi^2}\,
\ln \frac{\mu^2}{m_W^2}\,,
\label{eq:dsmatch}
\end{equation}
for all the relevant four-quark operators.
To compute the contributions of $\mathcal H^{\rm eff}_{|\Delta s| = 1}$ to
$K\to\pi\pi$ amplitudes we need to take into account the running and mixing of
the operators between the matching scale and a scale $\mu\sim 1$~GeV. Again it is
done in the flavor basis (\ref{eq:newbasis}), and using Eq.~(\ref{eq:rge})
analogous to the $|\Delta c|=1$ sector.
The master formula for $\epsilon^\prime/\epsilon$ is
\begin{eqnarray}
\left| \frac{\epsilon^\prime}{\epsilon} \right|
&=& \frac{\omega}{\sqrt{2}\, |\epsilon|\, {\rm Re}A_0}
\left| {\rm Im} A_0 -\frac{1}{\omega}\, {\rm Im} A_2 \right| , \\[4pt]
{\rm Im} A_I &=& \frac{G_F}{\sqrt{2}}\, \sum_{i,f} C^{(f)(ds)}_i\,
\langle (2\pi)_I | Q_i^{(f)(ds)} | K \rangle\,, \nonumber
\end{eqnarray}
where $\omega= {\rm Re}A_2 / {\rm Re}A_0 \approx 0.045$ (from now on we omit the superscript
$(ds)$ on the coefficients and operators of the $|\Delta s|=1$ Hamiltonian).
Evaluating the matrix elements of $\mathcal H^{\rm eff-NP}_{|\Delta s| = 1}$
in the large $N_c$ limit leads to
\begin{eqnarray}
\left| \frac{\epsilon^\prime}{\epsilon} \right|_{\rm NP} &\approx&
10^2\, \bigg| {\rm Im} \Big[
3.5 C_1^{(3/2)} +3.4 C^{(3/2)}_2 -1.7\rho^2 C_5^{(3/2)} \nonumber \\
&& -5.2 \rho^2 C_6^{(3/2)} - 0.04 C_1^{(1/2)} - 0.12 C_2^{(1/2)}
\nonumber \\
&& - 0.04\rho^2 C_5^{(1/2)} + 0.11\rho^2 C_6^{(1/2)} \Big] \bigg|\,,
\end{eqnarray}
in terms of the $|\Delta s|=1$ Wilson coefficients at the low scale $(\mu= 1.4\,\mathrm{GeV})$, where $C_i^{(3/2)}=
[-C_i^{(s-d)}+C_i^{(c-u)}+C_i^{(8d)}]/2+(5/4)C_i^{(b)}$,
$C_i^{(1/2)}=[C_i^{(s-d)}+C_i^{(c-u)}-C_i^{(8d)}]/2+(1/4)C_i^{(b)}-C_i^{(0)}$,
and $\rho=m_K/m_s$. Imposing the conservative bound $|
\epsilon^\prime/ \epsilon |_{\rm NP} < | \epsilon^\prime/\epsilon |_{\rm exp}
\approx 1.7 \times 10^{-3}$, leads to severe constraints on all the
coefficients. In terms of $|\Delta s| =1$ Wilson coefficients at the high scale
($\mu=1$~TeV) the constraints read
\begin{align}
\mathrm{Im} (C^{(s-d)}_{1}) &\lesssim 1.4 \times 10^{-5}\,, & \mathrm{Im} (C^{(s-d)}_{2}) &\lesssim 1.4 \times 10^{-5}\,, \nonumber\\
\mathrm{Im} (C^{(s-d)}_{5}) &\lesssim 1.9 \times 10^{-7}\,, & \mathrm{Im} (C^{(s-d)}_{6}) &\lesssim 6.1 \times 10^{-8}\,, \nonumber\\
\mathrm{Im} (C^{(c-u)}_{1}) &\lesssim 1.3 \times 10^{-5}\,, & \mathrm{Im} (C^{(c-u)}_{2}) &\lesssim 1.6 \times 10^{-5}\,, \nonumber\\
\mathrm{Im} (C^{(c-u)}_{5}) &\lesssim 1.9 \times 10^{-7}\,, & \mathrm{Im} (C^{(c-u)}_{6}) &\lesssim 6.4 \times 10^{-8}\,, \nonumber\\
\mathrm{Im} (C^{(8d)}_{1}) &\lesssim 1.4 \times 10^{-5}\,, & \mathrm{Im} (C^{(8d)}_{2}) &\lesssim 1.4 \times 10^{-5}\,, \nonumber\\
\mathrm{Im} (C^{(8d)}_{5}) &\lesssim 1.9 \times 10^{-7}\,, & \mathrm{Im} (C^{(8d)}_{6}) &\lesssim 6.1 \times 10^{-8}\,, \nonumber\\
\mathrm{Im} (C^{(b)}_{1}) &\lesssim 5.4 \times 10^{-6} \,,& \mathrm{Im} (C^{(b)}_{2}) &\lesssim 5.9 \times 10^{-6}\,,\nonumber\\
\mathrm{Im} (C^{(b)}_{5}) &\lesssim 7.5 \times 10^{-8} \,,& \mathrm{Im} (C^{(b)}_{6}) &\lesssim 2.5 \times 10^{-8}\,,\nonumber\\
\mathrm{Im} (C^{(0)}_{1}) &\lesssim 1.5 \times 10^{-5} \,,& \mathrm{Im} (C^{(0)}_{2}) &\lesssim 4.5 \times 10^{-5}\,,\nonumber\\
\mathrm{Im} (C^{(0)}_{5}) &\lesssim 9.6 \times 10^{-6} \,,& \mathrm{Im} (C^{(0)}_{6}) &\lesssim 1.9 \times 10^{-6}\,.
\end{align}
Inserting the matching conditions (\ref{eq:dsmatch}), we obtain bounds on the
$|\Delta c|=1$ Wilson coefficients in Table~\ref{tab:ds}. We observe that all
$Q_{5,6}^{(f)}$ except $Q_{5,6}^{(0)}$ are excluded from contributing
significantly to $\Delta a_{CP}$. The remaining operators are only marginally
constrained and can give observable effects in the charm sector provided
$|\Delta R^{{\rm NP}}|$ have significant values as also shown in Fig.~\ref{fig:corr}.
\begin{table}[t]
\tabcolsep 8pt
\begin{tabular}{ccc}
\hline
Allowed & Ajar & Disfavored \\
\hline\hline
$Q_{7,8}\,,\ Q'_{7,8}\,, $
& $ Q_{1,2}^{(c-u,8d,b,0)},$
& $Q_{1,2}^{s-d}\,, \ Q_{5,6}^{(s-d)\prime},$\\
$ {\forall f}\ Q_{1,2}^{f\prime}\,,\ Q_{5,6}^{(c-u,b,0)\prime} $
& $Q_{5,6}^{(0)}\,,\ Q_{5,6}^{(8d)\prime}$
& $ Q_{5,6}^{s-d,c-u,8d,b}$\\
\hline
\end{tabular}
\caption{\label{tab:summary} List of $|\Delta c|=1$ operators grouped according to whether they
can contribute to $\Delta a_{CP}$ at a level comparable to the central value of
the measurement, given the constraints from $D-\bar D$ mixing and
$\epsilon'/\epsilon$.}
\end{table}
\section{Conclusions}
We explored the implications of the recent LHCb measurement of a $3.5\sigma$
deviation from no CP violation in $D$ decays. Clearly, it will require more
data to establish whether the measurement is or is not consistent with the SM.
While a sufficient QCD enhancement of the penguin matrix element cannot be
excluded at the present time, if similar CP violation is observed in other
channels as well (e.g., pseudoscalar-vector final states, three-body decays,
$D_s$ or $\Lambda_c$ decays), then it would suggest that the measurement is due
to new short distance physics, rather than the enhancement of a hadronic matrix
element in one particular channel.
Our analysis implies that operators where the charm
bilinear current is of $V-A$ structure are constrained by $D-\bar D$
mixing or by $\epsilon'/\epsilon$, especially the ones which violate $U$-spin. A complete list of the operators grouped
according to whether they can contribute to $\Delta a_{CP}$ at a level
comparable to the central value of the measurement, given the constraints from
$D-\bar D$ mixing and $\epsilon'/\epsilon$, is shown in
Table~\ref{tab:summary}. It is also worth noting that in cases where the new
physics contributions are large, we generically expect sizable contributions to
CP violation in $D-\bar D$ mixing (and in $\epsilon'/\epsilon$) to arise. This
will be tested when the constraints on CP violation in $D-\bar D$ mixing
improve substantially with more LHCb and future super-$B$-factory data.
\begin{acknowledgments}
We thank Marco Gersabeck, Vladimir Gligorov, and Alex Kagan for
helpful discussions.
GI acknowledges the support of the TU~M\"unchen -- Institute for Advanced
Study, funded by the German Excellence Initiative, and the EU ERC Advanced
Grant FLAVOUR (267104).
The work of JFK was supported in part by the Slovenian Research Agency.
The work of ZL was supported in part by the Director, Office of Science, Office
of High Energy Physics of the U.S.\ Department of Energy under contract
DE-AC02-05CH11231. GP is supported by the GIF, Gruber foundation, IRG, ISF and Minerva.
\end{acknowledgments}
\section{Introduction}
\label{intro}
Quantitative evaluation of generative models is challenging \cite{theis2015note};
evaluating purely by visual inspection can introduce biases, particularly towards ``precision'' at the cost of ``recall'' \cite{sajjadi2018assessing}.
The traditional metric, likelihood,
is also not only difficult to evaluate for implicit generative models on complex, highly structured datasets,
but can also be a poor fit to users' goals with generative modeling
\cite{theis2015note, sajjadi2018assessing}.
Many quantitative proxy measures for the discrepancy between generator and real distributions have thus been used in recent work on generative modeling of images,
some of the most important including the
Inception Score (IS) \cite{salimans2016improved}, Fréchet
Inception Distance (FID) \cite{heusel2017gans}, Precision/Recall (PR) \cite{sajjadi2018assessing}, and Density/Coverage \citep{naeem2020reliable}.
FID and PR measures in particular have been shown to correlate with human
judgments in some practical settings.
All of these measures use representations extracted from CNNs pretrained on ImageNet classification \citep{ILSVRC15}. Nevertheless, it has recently been shown that self-supervised representations achieve more reasonable rankings in terms of FID/Precision/Recall, while rankings with ImageNet-pretrained embeddings can often be misleading \cite{morozov2021on}.
These problems are exacerbated for the evaluation of generative models of graph data.
Graphs are often used to represent concepts in a broad range of specialized domains, including various fields of engineering, bioinformatics, and so on,
so that it is more difficult or perhaps impossible to find ``generally good'' features like those available for natural images via ImageNet models.
Many methods for evaluating graph generative models are thus based on measuring discrepancies between statistics of generic low-level graph features,
including local measurements like
degree distributions,
clustering coefficient distributions, and four-node orbit counts \cite{li2018learning, you2018graphrnn}, or simple global properties such as
spectra \cite{liao2019efficient}.
Differences between distributions of these features are generally measured via the maximum mean discrepancy (MMD) \cite{gretton:mmd}
or total variation (TV) distance.
More recent work \cite{thompson2022evaluation} proposes instead extracting features via randomly initialized Graph Isomorphism Networks (GINs) \cite{xu2018powerful}.
These features provide a representation for each graph, so that metrics like FID, Precision/Recall, and Density/Coverage can be estimated using these representations.
For datasets with natural labels, we can alternatively find representations based on training a classifier;
such features, however, are optimized for a very different task.
One significant challenge of graph data is that it is computationally difficult to even tell whether two graphs are the same (are isomorphic to one another).
GINs are known to be as powerful at detecting isomorphism as the standard Weisfeiler-Lehman (WL) test \cite{xu2018powerful}, but it is not known whether \emph{random} GIN representations have this ability. Experimental evidence \cite{xu2018powerful} shows that trained GIN networks are as powerful as the WL test, but have substantially worse power at random initialization.
Unlike for distributions of natural images, however, the relevant graph features, and particularly the semantics of node or edge features, vary widely between datasets.
On molecular graphs, adding any single edge will almost always violate physical constraints about stable atomic bonds;
for house layouts, the feasibility of adding an edge depends significantly on the rest of the layout and the type of rooms (few front doors open into a bathroom, and direct connections between rooms on the first and third floors are unlikely);
Traditional metrics and random graph neural networks are both unlikely to be able to capture these complex inter-dependencies functioning very differently across domains,
and it similarly seems quite unlikely that pretraining on some ``graph ImageNet'' would be able to find useful generic graph features.
We therefore propose to train graph encoders on the same data used to train the generative model, using self-supervised contrastive learning to find meaningful feature extractors \cite{velickovic_2019_iclr,Sun2020InfoGraph,pmlr-v119-hassani20a,qiu2020gcc,you2020graph,you2021graph,hassani2022learning,zhu2020deep,zhu2021graph, thakoor2021bootstrapped}, which we then use to compare the generated graphs to a test set.
The set of perturbations introduced in contrastive learning teaches the model which kinds of graphs should be considered similar to one another.
In this work we use types of perturbations traditionally used for training contrastive learning methods on graphs, but point out that future work focusing on domain-specific modeling can directly incorporate knowledge about which graphs should be considered similar by choosing different perturbation sets.
Inspired by the theoretical results for contrastive learning of \cite{wang2022chaos},
we also propose two variations to our representation learning procedures.
Their upper bound shows that contrastively learned representations work well for downstream tasks as long as the probability of data points yielding overlapping augmentations is relatively large for \emph{within-class} dataset pairs and relatively small for \emph{cross-class} pairs.
Edit distances between graphs on typical training sets are large, however;
we propose to use subgraphs (``crops'') in our set of data augmentations for contrastive learning,
which is more likely to yield overlaps.
We also suggest enforcing a layer-wise Lipschitz constraint on feature extractors,
which encourages similar graphs to have similar learned representations.
We show experimentally that both changes improve learning.
With all of these improvements, we further ask: can we theoretically guarantee that our learned GNN representations outperform traditional local metrics?
We prove that we cannot:
we give examples of graphs easily distinguishable by local metrics that first-order GNNs cannot distinguish.
Yet the converse is also true: we show graphs easily distinguishable by GNNs that appear equivalent to local metrics.
We thus propose to use models based on Graph Substructure Networks (GSNs) \citep{bouritsas2022improving}, using node degrees and node clustering coefficients as substructure features. This explicitly incorporates local metrics into our models, surpasses the power of the WL test, and yields further improvements.
\section{Related Work}
\paragraph{Graph generative models.}
Our work is not particular to any type of graph generative model, as it focuses on simply evaluating samples;
nonetheless, it is worth briefly reviewing some methods.
Sequential generation models, such as GraphRNN \cite{you2018graphrnn} and GRAN \cite{liao2019efficient}, generate nodes and edges in an auto-regressive manner. These models are efficient, but can struggle to capture global properties of the graphs, and require a predefined ordering. One-shot models such as MolGAN \cite{de2018molgan}, GraphDeconv \cite{flam2020graph}, GraphVAE \cite{kipf2016semi}, and GraphNVP \cite{madhawa2019graphnvp}, on the other hand, generate all nodes and edges in one shot; they can model long-range dependencies, but generally are not efficient and cannot scale to large graphs.
For a detailed overview of graph generative models, see \cite{guo2020systematic}.
\paragraph{Evaluating graph generative models.}
Graph generative models can be even more challenging to evaluate than visual or textual models,
because it is generally more difficult for humans to judge the quality of a generated graph.
The classic measure of the quality of a generative model, likelihood,
also has significant issues with graphs:
in addition to the kinds of issues that appear in generative models of images \citep{theis2015note},
the likelihood is particularly hard to evaluate on graphs where even checking equality is quite difficult \citep{o2021evaluation}. The most common method is to compute the Maximum Mean Discrepancy (MMD) \citep{gretton:mmd} between distributions of local graph statistics,
including node degrees, cluster coefficients, and counts of orbits up to a particular size \citep{you2018graphrnn,liao2019efficient}.
Global statistics, such as eigenvalues of the normalized graph Laplacian, are also used \citep{you2018graphrnn}.
These metrics, however, focus only on low-level structure of the graph, and ignore any features that might be present on nodes or edges \citep{thompson2022evaluation}.
Choosing an appropriate kernel is also very important to consistency of these metrics \citep{o2021evaluation}.
These types of graph statistics might also encourage models to overfit to the training data, rather than truly learning the target distribution \citep{shirzad2021td}.
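To make the standard recipe concrete, the following is a minimal sketch (ours; the binning, kernel, and bandwidth choices are assumptions, and kernel choice matters as discussed above) of an MMD estimate between degree distributions of two graph sets:
\begin{verbatim}
# Minimal sketch: (biased) MMD^2 between degree histograms with a
# Gaussian kernel.  Binning and bandwidth are illustrative assumptions.
import numpy as np
import networkx as nx

def degree_hist(G, max_deg=20):
    h = np.bincount([d for _, d in G.degree()], minlength=max_deg + 1)
    return h[: max_deg + 1] / max(h.sum(), 1)

def mmd2(X, Y, sigma=1.0):
    k = lambda A, B: np.exp(-np.sum((A[:, None] - B[None]) ** 2, -1)
                            / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

real = np.stack([degree_hist(nx.gnp_random_graph(30, 0.2)) for _ in range(64)])
gen  = np.stack([degree_hist(nx.gnp_random_graph(30, 0.4)) for _ in range(64)])
print(mmd2(real, gen))    # grows as the degree statistics diverge
\end{verbatim}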
For unlabeled image-based generative models, most work focuses on metrics including the Inception Score (IS) \cite{salimans2016improved}, Fréchet
Inception Distance (FID) \cite{heusel2017gans}, Precision/Recall (PR) \cite{sajjadi2018assessing}, and Density/Coverage \citep{naeem2020reliable},
all of which compare distributions in a fixed latent space (typically activations of a late layer in an InceptionV3 \citep{inception} model trained on ImageNet).
These methods are rarely adopted in graph generative models,
due to challenges with ``general-purpose'' graph models discussed in \cref{intro}.
\citet{thompson2022evaluation} thus used random (untrained) graph encoders in these metrics.
We discuss these methods in more detail in the appendix.
Our work, inspired by \citeauthor{morozov2020self}'s similar proposal in image domains \cite{morozov2020self}, explores the use of self-supervised contrastive training to find representations that work better than random initializations.
\paragraph{Graph Contrastive Learning.}
A popular method for self-supervised learning,
contrastive learning generally aims to find a representation
roughly invariant to various operations (e.g.\ for images, taking random crops, horizontal flipping, shifting colors)
but able to identify different source data points.
Ideally, such a representation will be useful for downstream tasks not known when learning the representation.
In graph settings,
learned representations may be at the node, edge, or graph levels.
DGI \cite{velickovic_2019_iclr}
and InfoGraph \cite{Sun2020InfoGraph} adopt DeepInfoMax \cite{hjelm_2019_iclr}, and enforce consistency between local (node) and
global (graph) representations.
MVGRL \cite{pmlr-v119-hassani20a} augments a graph via graph diffusion, and constructs two views by
randomly sampling sub-graphs from the adjacency and diffusion matrices. GCC \cite{qiu2020gcc} uses sub-graph instance discrimination
and contrasts sub-graphs from a multi-hop ego network. GraphCL \cite{you2020graph} uses trial-and-error to hand-pick graph augmentations
and the corresponding parameters of each augmentation. JOAO \cite{you2021graph} extends the GraphCL using a bi-level min-max optimization
that learns to select the augmentations. LG2AR \cite{hassani2022learning} learns a policy to sample augmentations. GRACE \cite{zhu2020deep} uses a
similar approach to GraphCL to learn node representations. GCA \cite{zhu2021graph} uses a set of heuristics to adaptively pick the augmentation
parameters. BGRL \cite{thakoor2021bootstrapped} adopts BYOL \cite{NEURIPS2020_f3ada80d} and uses random augmentations to learn node representations.
\section{GNNs Versus Local Metrics} \label{sec:gnn-v-local}
\begin{figure}
\centering
\includegraphics[width=110mm]{figures/graphs.pdf}
\caption{(a) An example of two graphs that can be differentiated by GNNs but not local metrics. Both graphs have same degree distribution, clustering coefficient and 4 node orbits. (b) An example of two graphs that can be differentiated by local metrics (clustering coefficient and smallest cycle) but not GNNs. (c) Adding a single edge can drastically change cluster coefficient and 4-node orbits.}
\label{fig:local_iso}
\end{figure}
Different methods for understanding graphs can ``understand'' the difference between graphs in very different ways:
a particular change to a graph might barely affect some features,
while drastically changing others.
One extreme case is when a given metric cannot detect that two distinct (non-isomorphic) graphs are different.
Since graph isomorphism is a computationally difficult problem,
we expect that all efficiently computable graph representations ``collapse'' some pairs of input graphs.\footnote{Alternatively, an efficient graph representation might be able to distinguish non-isomorphic graphs, if it also sometimes distinguishes isomorphic graphs. We do not consider such representations in this paper.}
It is conceivable, however, that one method could be strictly more powerful than another.
For instance,
since recent GNN models have outperformed traditional models based on local metric representations in a variety of problems \citep{kipf2016semi, velivckovic2017graph, xu2018powerful},
is it the case that GNNs are strictly more powerful than local metrics?
We show, constructively, that the answer is no:
there are indeed graphs which GNNs can distinguish and local metrics cannot,
but there are also graphs which local metrics can distinguish but first-order GNNs cannot.
\Cref{fig:local_iso}(a) shows a pair of graphs with the same degree distribution, clustering coefficient, and four-node orbits,
which can nonetheless be distinguished by GNNs
(proof in \cref{app:proofs}).
On the other hand, the graphs in \cref{fig:local_iso}(b) have different clustering coefficient and smallest cycle,
but first-order GNNs cannot tell them apart.
Thus, neither method strictly outperforms the other on all problems,
and so there are theoretical generative models which perfectly match in GNN-based representations but differ in local metrics,
and vice versa.
This motivates our addition of local features to our graph representation models (\cref{sec:add-local-feats}).
It is much easier to incorporate such hard-coded structures into GNNs than it would be to add learning to local metrics; in particular, counting higher-order local patterns quickly becomes prohibitively expensive, with super-exponential time complexity. GNNs can also easily handle node and/or edge features on the underlying graphs, which is far more difficult to add to local metrics.
Another quality we would like in our graph representations, in addition to the ability to distinguish distinct graphs, is some form of stability: if a distribution of graphs only changes slightly, we would like our evaluation methods to give ``partial credit'' as opposed to a distribution where all graphs are dramatically different.
(This is closely related to issues in training and evaluating image-based generative models \citep{arjovsky:towards-principled-gans,wgan,smmdgan,theis2015note}.)
As previously discussed,
the notion of ``similar graphs'' is very domain-dependent,
but traditional local metrics can be highly sensitive to changes like adding a single edge:
\cref{fig:local_iso}(c) shows an example where two graphs differing only by a single edge can have drastically different statistics.
By learning GNN representations,
we can have some control over these types of smoothness properties;
we exploit this explicitly in our methodology in \cref{sec:adding-smoothness}.
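For instance (an example of our own construction, not the specific graphs of \cref{fig:local_iso}(c)), adding one chord to a $6$-cycle moves the average clustering coefficient from $0$ to $5/18 \approx 0.28$:
\begin{verbatim}
# One extra edge can sharply change the clustering coefficient.
import networkx as nx

G = nx.cycle_graph(6)
print(nx.average_clustering(G))   # 0.0 (no triangles)
G.add_edge(0, 2)                  # a single chord creates a triangle
print(nx.average_clustering(G))   # 5/18 ~ 0.278
\end{verbatim}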
\begin{figure}
\centering
\includegraphics[width=130mm]{figures/genarc.pdf}
\caption{During the \textbf{training} phase, we use the same training data ($\mathbb{G}^{train}$) to train both the generator and the encoder networks. The encoder is trained using a contrastive loss where
two augmentations $\tau_{i}$ and $\tau_{j}$ are randomly sampled
from a set of rational augmentations $\mathcal{T}$ to construct two views of a sampled graph $G_k$. During \textbf{evaluation} phase, we sample the generator to form a generated set
of graphs $\mathbb{G}^{gen}$ and feed it along with a held-out set of real graphs $\mathbb{G}^{test}$ to the encoder to compute the graph representations. The representations are then used to compute robust metrics that quantify the discrepancy between real and generated graphs.
}
\label{fig:gen-arc}
\end{figure}
\section{Self-supervised Training of Graph Representations for Evaluation}
Suppose we have two sets of graphs $\mathbb{G}^{train} = \{G_1,\dots,G_S\}$ and $\mathbb{G}^{test} = \{G_1,\dots,G_M\}$, each sampled from the same data distribution $p\left(G\right)$. Also suppose that we have access to an unconditional graph generative model $g_\phi(.)$, which is trained on $\mathbb{G}^{train}$ to learn the distribution of the observed set of graphs. We sample a set of generated graphs $\mathbb{G}^{gen} = \{G_1,\dots,G_N\} \sim p_{g_\phi}(G)$ from this model. In order to evaluate the quality of the sampled graphs (i.e., to decide whether the model $g_\phi(.)$ has successfully recovered the underlying distribution $p\left(G\right)$), we can define a measure of divergence $\mathcal{D}\left(\mathbb{G}^{test}, \mathbb{G}^{gen} \right)$ to quantify the discrepancy
between the distributions of real and generated graphs. One robust way to achieve this is
to define the metric on latent vector representation spaces, i.e., to compare representations of graphs rather than the original objects. Thus, to use these metrics, we need to train a shared encoder $f_{\theta}(.)$ and then compute the discrepancy as $\mathcal{D}\left(f_{\theta}(\mathbb{G}^{test}), f_{\theta}(\mathbb{G}^{gen})\right)$.
There are a few such metrics well-studied in visual domains that can differentiate the fidelity and diversity of the model, and which we can adopt in graph domains.
For evaluating image generative models, due to the similar feature space across image datasets and the availability of large-scale data, the trunk of a model trained on ImageNet with explicit supervisory signals is usually chosen as the encoder.
However, it is not straightforward to adopt the same trick to graph-structured data: there are no ImageNet-scale graph datasets available, and more importantly,
the semantics of graphs and their features vary wildly across commonly used graph datasets,
far more than occurs across distributions of natural images.
For instance, even datasets of molecular graphs may use different feature sets to represent the molecules \cite{hassani2022cross},
in addition to the many cross-domain challenges discussed in \cref{intro}.
Thus, it is not feasible to imagine a single ``universal'' graph representation;
we would like a general-purpose method for finding representations useful for a new graph dataset.
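For concreteness, a minimal sketch of this evaluation step, assuming the encoder outputs are stacked as NumPy arrays and taking a (biased) RBF-kernel MMD estimate as one possible choice of $\mathcal{D}$ (names such as \texttt{f\_theta} are illustrative):
\begin{verbatim}
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared MMD between embedding sets X (n, d) and Y (m, d)
    under a Gaussian RBF kernel (biased estimate)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

# score = rbf_mmd2(f_theta(G_test), f_theta(G_gen))
\end{verbatim}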
\paragraph{Training with Graph Contrastive Learning}
\label{sec:using-gcl}
To find expressive representations of real and generated graphs, we train the encoder using a contrastive objective. Assuming a set of rational augmentations $\mathcal{T}$ over $\mathbb{G}$, where each augmentation
$\tau_{i} \in\mathcal{T}$ is defined as a function over a graph $G_k$ that generates an identity-preserving view of the graph, $G^+_k=\tau_{i}(G_k)$, a
contrastive framework with a negative sampling strategy uses $\mathcal{T}$ to draw positive samples from the joint distribution $p\left(\tau_{i}(G_k), \tau_{j}(G_k)\right)$
in order to maximize the agreement between different views of the same graph $G_k$ and to draw negative samples from the product of
marginals $p\left(\tau_{i}(G_k)\right) \times p\left(\tau_{j}(G_{k'})\right)$ to minimize it for views from two distinct graphs $G_k$ and $G_{k'}, k\neq k'$.
As mentioned previously, we use a GIN architecture for our feature extractor.
Other than the training process, the rest of our evaluation pipeline is similar to that of \citet{thompson2022evaluation},
who use random GIN weights. We consider two methods for training our GIN's parameters:
GraphCL \citep{you2020graph} and InfoGraph \citep{sun2019infograph}. GraphCL randomly samples augmentations from four possibilities, namely \textit{node dropping}, \textit{edge perturbation}, \textit{attribute masking}, and \textit{sub-graph inducing} based on a random walk, transformations to which
we would like the representation to be roughly invariant. GraphCL uses the normalized temperature-scaled
cross-entropy (NT-Xent) objective \cite{chen_2020_arxiv} to maximize the agreement between positive graph-level representations. InfoGraph works differently: it contrasts the graph-level representation with the representations of individual nodes, which encode neighborhood structure information. InfoGraph uses
the DeepInfoMax objective \cite{hjelm_2019_iclr} to maximize the mutual information between graph-level and node-level representations of each individual graph.
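For illustration, a minimal PyTorch sketch of the NT-Xent objective on graph-level embeddings (names are ours, not from any particular GraphCL implementation):
\begin{verbatim}
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss for two views z1, z2 (each of shape (B, d))."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, d)
    sim = z @ z.t() / tau                               # scaled cosine sims
    B = z1.size(0)
    eye = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))           # drop self-pairs
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)]).to(z.device)
    return F.cross_entropy(sim, targets)                # positive = other view
\end{verbatim}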
\paragraph{Using Local Subgraph Counts as Input Features for GINs} \label{sec:add-local-feats}
To build on the insight of \cref{sec:gnn-v-local},
we also consider various methods for adding information about the local graph structure as node features,
similarly to Graph Substructure Networks \citep{bouritsas2022improving}. The simplest such method is to add the degree of a node as an explicit feature for the GIN to consider.
We do this, but also add, by concatenation on node features,
higher-order local information as well: three-node and four-node clustering features for each node. The four-node clustering coefficient is calculated as:
\begin{equation}
C_{4}(v)=\frac{\sum_{(u,w) \in Nei(v)} q_{v}(u, w)}{\sum_{(u,w) \in Nei(v)}\left[deg(u) + deg(w) - q_v(u,w) - 2\mathbb{I}(u \in Nei(w))\right]}
\end{equation}
where $q_v(u,w)$ is defined as the number of common neighbors of $u$ and $w$, not counting $v$.
Aggregating these features across the whole graph would give information on the distribution of four-node orbits of the graph,
but this provides more localized information across the graph that is nonetheless difficult or impossible for a GIN to examine otherwise \citep{chen2020can}.
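A direct transcription of this formula (assuming an undirected \texttt{networkx} graph, with the sums running over unordered pairs of neighbors of $v$) might look like:
\begin{verbatim}
from itertools import combinations

def four_node_clustering(G, v):
    num = den = 0
    for u, w in combinations(G.neighbors(v), 2):
        q = len((set(G[u]) & set(G[w])) - {v})  # common neighbors, minus v
        num += q
        den += G.degree(u) + G.degree(w) - q - 2 * int(G.has_edge(u, w))
    return num / den if den else 0.0
\end{verbatim}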
\paragraph{Choice of Augmentations}
\label{sec:choosing-augmentations}
\citet{wang2022chaos}, building off work of \citet{wang2020understanding}, study contrastive learning in general settings (with an eye towards vision applications),
and provide an intriguing bound for the performance of downstream classifiers.
To explain it,
consider the ``augmentation graph'' of the training samples:
if $G_i$ is the $i$th training example
and $G_i^+$ is a random augmentation of that example,
we connect the nodes $G_i$ and $G_j$ in the augmentation graph
if there are feasible augmentations $G_i^+$ and $G_j^+$ such that $G_i^+ = G_j^+$.\footnote{%
Technically, the augmentations we use (described in \cref{sec:using-gcl}) would result in a densely-connected augmentation graph:
for instance, each graph has some vanishingly small probability of being reduced to an empty graph.
We can instead think about the augmentation graph based on the augmentations from a high-probability set for each training example.
}
\citeauthor{wang2022chaos} argue that downstream linear classifiers are likely to succeed if this augmentation graph has stronger intra-class connectivity than it does cross-class connectivity,
proving a tight connection between the two under a particular setting with strong assumptions about this and related aspects of the setup.
Given the connection between the distribution metrics discussed in \cref{sec:eval-methods-on-latents} and classifier performance \citep[e.g.][Section 4]{liu:deep-testing},
if we accept the argument of \citet{wang2022chaos},
evaluation metrics based on a contrastively trained graph representation will give poor values (good classifier performance) when generated samples are not well-connected to real samples in the augmentation graph,
and vice versa.
If we choose augmentations appropriately, this is sensible behavior.
\paragraph{Enforcing Lipschitz Layers in Representation Networks}
\label{sec:adding-smoothness}
The prior line of reasoning also suggests that we should choose augmentations that are capable of making real graphs look like one another.
Edit distances between graphs, however, are typically large on the datasets we consider,
and so augmentations based on adding or deleting individual nodes and/or edges will struggle to do this.
The same is true for many of the augmentations used on images,
except -- as \citet{wang2022chaos} note -- for crop-type augmentations,
where e.g.\ two different car pictures might become quite similar if we crop to just a wheel.
On graphs, an analogous operation is subgraph sampling,
which we include in our GraphCL setup;
InfoGraph already naturally looks at subgraph features as a core component.
Taking this line of reasoning as well as the general motivations of contrastive learning further,
it is also natural to think that if we can inherently enforce ``similar graphs'' to have similar representations,
this could improve the process of contrastive learning:
we would save on needing to train the model to learn these similarities,
and it could help decrease the classifier performance for good generative models whose output graphs are legitimately near the distribution of target graphs.
A line of work on GANs in visual settings \citep{arjovsky:towards-principled-gans,wgan,smmdgan,roth:stabilizing,mescheder:converge,spectral-norm}
has made clear the importance of this type of reasoning in losses for training generative models:
the loss should smoothly improve as generator samples approach the target distribution, even if the supports differ.
Viewing model evaluation metrics as a kind of ``out-of-the-loop'' loss function for training generative models -- hyperparameter selection and model development focusing on variants with better evaluation metrics -- suggests that these kinds of properties may be important for the problem of evaluation as well.
We thus explicitly enforce the layers of our GIN to have a controlled Lipschitz constant, similarly to e.g.\ spectral normalization in GAN discriminators \citep{spectral-norm}. To this end, we fix the Lipschitz constant $\lambda$ to $1.0$ in the experiments. For each linear layer with weights $\mathbf{W}_\ell$, we use projected gradient descent: after each update of the weights, if $\|\mathbf{W}_\ell\| > 1.0$, we rescale the weights to $\mathbf{W}_\ell = \frac{\mathbf{W}_\ell}{\|\mathbf{W}_\ell\|}$.
This guarantees that small changes in the graphs, such as adding/removing edges or changes in the input features, will not drastically change the final representation.
If we assume the node representations input to a GIN layer have a maximum norm of $\lVert B \rVert$, changing one connection changes two input messages in the graph by at most $\lVert B \rVert$. Each MLP of the GIN has a Lipschitz constant of $1$, between the normalization above and the use of ReLU activations. As a result, each node's representation will change by at most $\lVert B \rVert$.
As the GIN concatenates the representations from all layers, this difference propagates through the layers and grows as the number of layers increases. For fixed-depth networks, however, such as our depth-3 networks here, controlling this Lipschitz constant still gives reasonable control of the changes in the final representation.
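A minimal sketch of this projection step in PyTorch (the text above does not fix the norm; we take the spectral norm here, in line with the spectral-normalization analogy); the function would be called after each optimizer step:
\begin{verbatim}
import torch

@torch.no_grad()
def project_lipschitz(model, lam=1.0):
    """Rescale each linear layer's weights back onto the norm ball."""
    for m in model.modules():
        if isinstance(m, torch.nn.Linear):
            norm = torch.linalg.matrix_norm(m.weight, ord=2)  # spectral norm
            if norm > lam:
                m.weight.mul_(lam / norm)
\end{verbatim}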
\section{Experimental Results}
In all of the experiments we train the model on the full dataset in a self-supervised manner. Following \citep{thompson2022evaluation}, we take the dataset, apply perturbations to it, and observe the trend in the measurements as the perturbation degree increases. We denote the perturbation degree by $r$, and define it for each type of perturbation. We use the following types of perturbations:
\begin{figure*}[ht!]
\captionsetup[subfloat]{farskip=-2pt,captionskip=-8pt}
\centering
\subfloat[][]{\includegraphics[width = 2.55in]{figures/mixing-random_pretrained_vs_random_no_feats.pdf}}
\subfloat[][]{\includegraphics[width = 2.55in]{figures/rewiring-edges_pretrained_vs_random_no_feats.pdf}}
\\
\subfloat[][]{\includegraphics[width = 2.55in]{figures/mode-collapse_pretrained_vs_random_no_feats.pdf}}
\subfloat[][]{\includegraphics[width = 2.55in]{figures/mode-dropping_pretrained_vs_random_no_feats.pdf}}
\caption{Experimental results for the pretrained models versus random GIN, without structural features.}
\label{fig:no_feat}
\end{figure*}
\begin{itemize}
\item \textbf{Mixing-Random:} In a perturbation with ratio $r$, we remove a fraction $r$ of the reference samples and replace them with Erdős–Rényi (ER) graphs with the same ratio of edges (a sketch of this perturbation appears after this list).
\item \textbf{Rewiring Edges:} This perturbation rewires each edge of the graph with probability $r$. Each rewired edge changes one of its endpoints, chosen with equal probability, to another node that is not already connected to the remaining endpoint.
\item \textbf{Mode Collapse and Mode Dropping:} For these perturbations, we first cluster the graphs using the WL kernel and choose a fraction $r$ of the clusters. For mode collapse, we replace each graph in a selected cluster with the center of that cluster. For mode dropping, we remove the selected clusters and, to keep the size of the dataset fixed, randomly repeat samples from the other clusters.
\end{itemize}
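A minimal sketch of the mixing-random perturbation (node/edge feature handling omitted; names are illustrative):
\begin{verbatim}
import random
import networkx as nx

def mix_random(reference, r, seed=0):
    """Replace a fraction r of the reference graphs by Erdos-Renyi
    graphs with matching node count and edge density."""
    rng = random.Random(seed)
    out = list(reference)
    for i in rng.sample(range(len(out)), int(r * len(out))):
        n = out[i].number_of_nodes()
        p = 2 * out[i].number_of_edges() / (n * (n - 1)) if n > 1 else 0.0
        out[i] = nx.gnp_random_graph(n, p, seed=rng.randrange(2**31))
    return out
\end{verbatim}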
For each experiment, we measure the Spearman rank correlation between the perturbation ratio $r$ and the value of the measurement. For measurements that are supposed to decrease as the ratio increases, we flip the values; as a result, in all experiments higher is better. We gather the results across the different datasets and several random seeds and plot the distribution of the correlations. For detailed experiments on the individual datasets see Appendix \ref{app:further_experiments}.
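The scoring step then reduces to a few lines (variable names are illustrative):
\begin{verbatim}
from scipy.stats import spearmanr

rho, _ = spearmanr(ratios, values)   # perturbation ratios r vs. metric values
score = -rho if metric_should_decrease else rho  # flip so higher is better
\end{verbatim}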
\paragraph{Datasets:} Following \cite{thompson2022evaluation}, we use six diverse datasets (three synthetic and three real-world) that are frequently used in the literature:
(1) \textbf{Lobster} \cite{dai2020scalable}, consisting of stochastic graphs with each node at most 2 hops away from a backbone path; (2) 2D \textbf{Grid}
graphs \cite{dai2020scalable,you2018graphrnn,liao2019efficient}; (3) \textbf{Proteins} \cite{dobson2003distinguishing}, whose graphs represent connectivity among amino acids
whose physical distance is less than a threshold; (4) 3-hop \textbf{Ego} networks \cite{you2018graphrnn} extracted from the CiteSeer network \cite{sen2008collective}, representing citation-based connectivity among documents; (5) \textbf{Community} \cite{you2018graphrnn} graphs generated by the Erdős–Rényi model \cite{erdos1960evolution}; and (6) \textbf{Zinc} \cite{irwin2012zinc}, a dataset of attributed molecular graphs, which allows us to study the sensitivity of metrics to changes in node/edge feature distributions. We follow the exact protocols used in \cite{thompson2022evaluation, dai2020scalable,you2018graphrnn,liao2019efficient}: we randomly sample 1000 graphs from the Zinc dataset and use one-hot encodings for edge/node features; for the Community dataset, we set $n=|\mathcal{V}|/2$, $p=0.3$, and add $0.05|\mathcal{V}|$ inter-community edges with uniform probability. For all datasets, we use the node size range indicated in Table \ref{table:dataset} in the Appendix.
\paragraph{Contrastive Training Versus Random GIN without Structural Features:} In the first experiment, we compare the methods without adding structural features. Results are shown in Figure \ref{fig:no_feat}. In general, for most measurements pretraining shows superior performance compared to the random network. In this experiment we use $10$ random seeds and gather data on all datasets; Appendix \ref{app:further_experiments} shows the same results separated by dataset. In our experience, pretraining gives near-perfect results on the larger datasets, but for the Lobster and Grid datasets the correlations are not close to $1$. We observe that these measurements follow the correct trend up to some perturbation ratio, but, for example, precision/recall drop to zero beyond some perturbation and then stay flat. Our intuition is that, because of the highly regular structures in these datasets, the model instantly learns to discriminate the real graphs from the fake ones very easily, and after a small amount of perturbation the perturbed graphs are all very far from the reference graphs.
\paragraph{Contrastive Training Versus Random GIN with Node Degree Features:} In this experiment, similar to the first one, we compare random and pretrained models. The difference is that we use only node degree features for the nodes, the easiest and fastest structural information that can be added to the model. Results are provided in Figure \ref{fig:deg_feat}.
\begin{figure*}[ht!]
\captionsetup[subfloat]{farskip=-2pt,captionskip=-8pt}
\centering
\subfloat[][]{\includegraphics[width = 2.55in]{figures/mixing-random_pretrained_vs_random_deg_feats.pdf}}
\subfloat[][]{\includegraphics[width = 2.55in]{figures/rewiring-edges_pretrained_vs_random_deg_feats.pdf}}
\\
\subfloat[][]{\includegraphics[width = 2.55in]{figures/mode-collapse_pretrained_vs_random_deg_feats.pdf}}
\subfloat[][]{\includegraphics[width = 2.55in]{figures/mode-dropping_pretrained_vs_random_deg_feats.pdf}}
\caption{Experimental results for the pretrained models versus random GIN, with node degree features.}
\label{fig:deg_feat}
\end{figure*}
\begin{figure*}[ht!]
\captionsetup[subfloat]{farskip=-2pt,captionskip=-8pt}
\centering
\subfloat[][]{\includegraphics[width = 2.55in]{figures/mixing-random_pretrained_vs_random_clustering_feats.pdf}}
\subfloat[][]{\includegraphics[width = 2.55in]{figures/rewiring-edges_pretrained_vs_random_clustering_feats.pdf}}
\\
\subfloat[][]{\includegraphics[width = 2.55in]{figures/mode-collapse_pretrained_vs_random_clustering_feats.pdf}}
\subfloat[][]{\includegraphics[width = 2.55in]{figures/mode-dropping_pretrained_vs_random_clustering_feats.pdf}}
\caption{Experimental results for the pretrained models versus random GIN, with structural features.}
\label{fig:clustering_feat}
\end{figure*}
\begin{figure*}[ht!]
\captionsetup[subfloat]{farskip=-2pt,captionskip=-8pt}
\centering
\subfloat[][]{\includegraphics[width = 2.55in]{figures/mixing-random_pretrained_vs_random_ablation.pdf}}
\subfloat[][]{\includegraphics[width = 2.55in]{figures/mode-collapse_pretrained_vs_random_ablation.pdf}}
\caption{Comparison, on data without structural features, of the full method versus removing the per-layer Lipschitz normalization versus removing subgraph data augmentations during pretraining. GraphCL is normal training; in GraphCL2 we do not enforce Lipschitzness; in GraphCL3 we remove the subgraph augmentation and reduce the probability of node and edge dropping in the augmentations.}
\label{fig:ablation}
\end{figure*}
\paragraph{Contrastive Training with Structural Information of Node Clustering Coefficients:} In this experiment we concatenate node degrees and node clustering coefficients into each node's features. We see that adding these features improves the model's power for some measurements. However, on the small Grid and Lobster datasets the results become poorer. Again, the intuition is that a more powerful model can more easily distinguish the real graphs from the fake ones. The breakdown of results by dataset in Appendix \ref{app:further_experiments} supports this point.
\paragraph{Ablation Study:} We analyze how the per-layer Lipschitz normalization and the use of subgraph augmentations in GraphCL affect the final results. We use the setup without structural features and conduct the experiment for the mixing-random and mode-collapse perturbations. Figure \ref{fig:ablation} shows the results. They indicate that both of these ingredients are essential for getting better results.
\section{Conclusion}
We have demonstrated that self-supervised pre-training of representations can yield significantly better metrics for graph evaluation than random ones,
particularly when incorporating local graph features with Lipschitz control,
as inspired by theory.
We suggest graph generative modeling papers should consider evaluating with these metrics in addition to or instead of their existing ones.
\begin{ack}
This research was enabled in part by support, computational resources, and services provided by the Canada CIFAR AI Chairs program, the Natural Sciences and Engineering Research Council of Canada, WestGrid, and the Digital Research Alliance of Canada.
\end{ack}
\printbibliography
\newpage
\iffalse
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{}
\item Did you discuss any potential negative societal impacts of your work?
\answerNA{}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerNA{}
\item Did you include complete proofs of all theoretical results?
\answerNA{}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerYes{}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{}
\item Did you mention the license of the assets?
\answerNA{}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerNo{}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNA{}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA{}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\fi
\clearpage
The parameter $\epsilon_K$ quantifies indirect CP violation in the
neutral kaon system.
At present, the tension between the Standard Model (SM) and
experimental values of $|\epsilon_{K}|$ is $3.4\sigma$~\cite{Wlee} with the value of
$|V_{cb}|$ from the exclusive decay $B\to D^*\ell\nu$~\cite{JWL}.
This value of $|V_{cb}|$, the most precise from exclusive decays to date, is
$3\sigma$ away from the value from inclusive decays~\cite{GS}.
The largest error in the $\epsilon_K$ determination in the SM comes
from $|V_{cb}|$, so it is crucial to improve the precision of
exclusive $|V_{cb}|$.
The dominant error of exclusive $|V_{cb}|$ comes from the heavy-quark
discretization error in the form-factor calculation of the
semi-leptonic decay $B\to D^*\ell\nu$ \cite{JWL}.
Hence, the SWME Collaboration plans to use the Oktay--Kronfeld (OK) action~\cite{OK} in the
upcoming calculation in order to reduce it efficiently.
This action is an improved version of the Fermilab action~\cite{EKM},
which incorporates the dimension-6 and -7 bilinear operators needed
for tree-level matching to QCD through order $\mathrm{O}(\Lambda^3/m_Q^3)$
for heavy-light mesons and $\mathrm{O}(v^6)$ for quarkonium.
We expect that the bottom- and charm-quark discretization errors could
be reduced below the current $1\%$ level.
A~similar error for the
charm-quark could also be achieved with other highly-improved
actions, such as HISQ~\cite{Follana:2006rc}.
For the heavy-meson spectrum, we present results for the inconsistency
parameter \cite{Collins,Kronfeld} and hyperfine splittings, all of which
test how well the Fermilab and OK actions perform in practice.
For this purpose, we follow the strategy of our previous
work~\cite{MBO:LAT2010}, in which the $c_5$ term was not
completely tadpole-improved.
In this work, we now implement the tadpole improvement for $c_5$
completely.
We also extend the data analysis to data sets produced on a finer ($a \approx
0.12\;$fm) MILC asqtad lattice.
\section{Meson Correlators}
\input{tables/ensemble}
We use a subset of the MILC $N_f=2+1$ asqtad ensembles at $a=0.12\;$fm and
$0.15\;$fm~\cite{Bazavov:RevModPhys.82.1349}, summarized in
Table~\ref{tbl:ensemble}.
We compute meson correlators $C(t,\bm{p})$
\begin{equation}
\label{eq:corr}
C(t,\bm{p})
= \sum_{\bm{x}} e^{\mathrm{i}\bm{p} \cdot \bm{x}}
\expv{\mathcal{O}^{\dagger}(t,\bm{x}) \mathcal{O}(0,\bm{0})} \,.
\end{equation}
The interpolating operators $\mathcal{O}(x)$ are
\begin{align}
\mathcal{O}_\mathsf{t}(x)
&= \bar{\psi}_\alpha(x) \Gamma_{\alpha\beta}
\Omega_{\beta\mathsf{t}}(x) \chi(x)
& \text{(heavy-light meson)} \,,&
\\
\mathcal{O}(x)
&= \bar{\psi}_\alpha(x) \Gamma_{\alpha\beta}
\psi_{\beta}(x) & \text{(quarkonium)}\,, &
\end{align}
where the heavy-quark field $\psi$ is that of the OK action, while
the light-quark field $\chi$ is that of the asqtad action.
The spin structure is $\Gamma = \gamma_5$ for the pseudoscalar and
$\Gamma = \gamma_i$ for the vector meson.
The taste degree of freedom for the staggered fermion is obtained from
the 1-component field $\chi$ with $\Omega(x) = \gamma_1^{x_1}
\gamma_2^{x_2} \gamma_3^{x_3} \gamma_4^{x_4}$ \cite{Wingate,Bernard}.
We compute 2-point correlators with 4 different values of hopping
parameter: $\kappa =$ 0.038, 0.039, 0.040, 0.041.
We fix the valence light-quark mass to $am_s$ in Table~\ref{tbl:ensemble}.
We choose 11 meson momenta, $a\bv{p} = (2\pi/N_L) \bv{n}$:
$\bm{n} =$ (0,0,0), (1,0,0), (1,1,0), (1,1,1), (2,0,0), (2,1,0),
(2,1,1), (2,2,0), (2,2,1), (3,0,0), (4,0,0).
To increase statistics, correlators are computed at 6 different
source time slices on each gauge configuration.
Each correlator is folded in half, and then fit to the function
\begin{equation}
f(t) = A \{e^{-E t} + e^{-E (T-t)}\}
+(-1)^t A^p \{e^{-E^p t} + e^{-E^p (T-t)}\} \,,
\end{equation}
where $A$, $A^p$, $E$, and $E^p$ are determined by the fitting.
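As an illustration, a minimal fit of this form with SciPy (the temporal extent $T$ and the initial guesses below are placeholders, not values from this analysis):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

T = 48  # temporal lattice extent (placeholder)

def model(t, A, E, Ap, Ep):
    # t must be an integer array so that (-1)**t alternates in sign
    return (A * (np.exp(-E * t) + np.exp(-E * (T - t)))
            + (-1.0) ** t * Ap * (np.exp(-Ep * t) + np.exp(-Ep * (T - t))))

# popt, pcov = curve_fit(model, t, C, sigma=dC, p0=(1.0, 0.8, 0.1, 1.2))
\end{verbatim}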
Figure \ref{fig:corr-fit} shows the correlator fit results with fit
residual $r(t)$ and effective mass $m_\text{eff}(t)$ for a
pseudoscalar heavy-light meson data:
\begin{equation}
r(t) = \frac{C(t)-f(t)}{\abs{C(t)}} \,,\qquad
%
m_\text{eff}(t) = \frac{1}{2} \ln \Bigg(\frac{C(t)}{C(t+2)}\Bigg)
\,.
\end{equation}
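In code, these two diagnostics are one-liners (a sketch, for a folded correlator stored as a NumPy array):
\begin{verbatim}
import numpy as np

def effective_mass(C):
    # m_eff(t) = (1/2) ln[ C(t) / C(t+2) ]
    C = np.asarray(C, dtype=float)
    return 0.5 * np.log(C[:-2] / C[2:])

def residual(C, f):
    # r(t) = (C(t) - f(t)) / |C(t)|
    return (C - f) / np.abs(C)
\end{verbatim}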
We exclude the largest momentum, $\bm{n}=(4,0,0)$, from the dispersion
relation fits, because these data are very noisy, and the correlator
fits are poor.
\begin{figure}[t]
\centering
\subfloat[Residual]{%
\includegraphics[width=.45\textwidth]
{{resd_pi_d_d_m0.05_k0.038_p000}.pdf}
}
\hfill
\subfloat[Effective Mass]{%
\includegraphics[width=.45\textwidth]
{{meff_pi_d_d_m0.05_k0.038_p000}.pdf}
}
\caption{ $r(t)$ and $m_\text{eff}(t)$ for a pseudoscalar
heavy-light meson correlator at $\kappa=0.038$ and
$\bv{p}=\bv{0}$, obtained using the uncorrelated fit. }
\label{fig:corr-fit}
\end{figure}
\section{Dispersion Relation}
\label{sec:disp-rel}
Once we obtain the ground state energy at each momentum, we fit them
to the non-relativistic dispersion relation \cite{EKM},
\begin{equation}
\label{eq:disp}
E = M_1 + \frac{\bm{p}^2}{2M_2} - \frac{(\bm{p}^2)^2}{8M_4^3}
- \frac{a^3 W_4}{6} \sum_i p_i^4 \,,
\end{equation}
where $M_1$ ($M_2$) is the rest (kinetic) mass.
In the Fermilab formulation, the kinetic mass is chosen to be the
physically relevant mass \cite{EKM}, because that choice minimizes discretization errors in matrix elements
and in mass splittings.
When fitting the data to the dispersion relation, we use the full
covariance matrix and no Bayesian prior information.
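For illustration, the fit function in lattice units ($a=1$), with $a p_i = 2\pi n_i/N_L$ as above (variable names are ours):
\begin{verbatim}
import numpy as np

def disp(n_vec, M1, M2, M4, W4, NL):
    p = 2 * np.pi * np.asarray(n_vec, dtype=float) / NL
    p2 = (p ** 2).sum()
    return M1 + p2 / (2 * M2) - p2 ** 2 / (8 * M4 ** 3) - W4 * (p ** 4).sum() / 6
\end{verbatim}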
In Fig.~\ref{fig:disp-ps}, we plot results after subtracting from the
data the $W_4$ term, which parametrizes the breaking of O(3) rotational symmetry.
Here, $\widetilde{E}$ is defined to be
\begin{equation}
\label{eq:disp-noart}
\widetilde{E} = E + \frac{a^3 W_4}{6} \sum_i p_i^4 \,.
\end{equation}
Note that the two data points at momenta $\bv{n} = (2,2,1)$ and
$(3,0,0)$ lie on top of each other due to the removal of the $W_4$
term.
As one can see from the plots, fits to Eq.~\eqref{eq:disp} are good enough to
determine the kinetic mass reliably.
\begin{figure}[t]
\centering
\subfloat[Heavy-light]{%
\includegraphics[width=.45\textwidth]
{{disp_jk_fit_pi_d_d_m0.05_k0.038.m1m2m4w4.poly.n00n09.full.10}.pdf}
}
\hfill
\subfloat[Quarkonium]{%
\includegraphics[width=.45\textwidth]
{{disp_jk_fit_pi_d_d_k0.038_k0.038.m1m2m4w4.poly.n00n09.full.10}.pdf}
}
\caption{ Fit results of pseudoscalar meson spectrum to dispersion
relation at $\kappa=0.038$. }
\label{fig:disp-ps}
\end{figure}
\section{Inconsistency Parameter}
The inconsistency parameter $I$, Eq.~\eqref{eq:iparam}, is designed to
examine the improvements by $\mathrm{O}(\bm{p}^4)$ terms in the
action~\cite{Collins,Kronfeld}.
This is, in particular, good for probing the improvement by the OK action,
because it isolates those improvement terms by construction.
\begin{equation}
\label{eq:iparam}
I \equiv \frac{2{\delta}M_{\wbar{Q}q}
- ({\delta}M_{\wbar{Q}Q} + {\delta}M_{\wbar{q}q})}
{2M_{2\wbar{Q}q}}
= \frac{2{\delta}B_{\wbar{Q}q}
- ({\delta}B_{\wbar{Q}Q} + {\delta}B_{\wbar{q}q})}
{2M_{2\wbar{Q}q}} \,,
\end{equation}
where
\begin{align}
\label{eq:deltaM}
\delta M_{\wbar{Q}q} \equiv& M_{2\wbar{Q}q} - M_{1\wbar{Q}q}
\end{align}
is the difference between the kinetic ($M_2$) and rest ($M_1$) masses.
By construction, $I$ vanishes in the continuum limit, and it should be
closer to 0 for more improved actions.
The meson masses $M$ can be written as a sum of the perturbative quark
masses $m_1$ or $m_2$ and the binding energy $B$ as follows:
\begin{align}
\label{eq:M1M2}
M_{1\wbar{Q}q} = m_{1\wbar{Q}} + m_{1q} + B_{1\wbar{Q}q} \,,
\qquad \qquad
M_{2\wbar{Q}q} = m_{2\wbar{Q}} + m_{2q} + B_{2\wbar{Q}q} \,.
\end{align}
These formulas define $B_1$ and $B_2$.
Then, substituting them into Eq.~\eqref{eq:iparam}, the quark masses
cancel out, and the inconsistency parameter becomes a relation
among the binding energies
\begin{align}
\label{eq:deltaB}
\delta B_{\wbar{Q}q} =& B_{2\wbar{Q}q} - B_{1\wbar{Q}q} \,.
\end{align}
The corresponding quantities for Eqs.~\eqref{eq:deltaM},
\eqref{eq:M1M2}, and \eqref{eq:deltaB} for heavy ($\wbar{Q}Q$) and
light ($\wbar{q}q$) quarkonium are defined similarly.
Because light quarks always have $ma\ll1$, the $\mathrm{O}((ma)^2)$ distinction between rest
and kinetic mass is negligible.
We therefore omit $\delta M_{\bar{q}q}$ (or $\delta B_{\bar{q}q}$) when forming~$I$.
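With this simplification, $I$ follows directly from the four fitted masses; schematically:
\begin{verbatim}
def inconsistency(M1_Qq, M2_Qq, M1_QQ, M2_QQ):
    # I = (2 dM_{Qq} - dM_{QQ}) / (2 M2_{Qq}), light quarkonium term dropped
    return (2 * (M2_Qq - M1_Qq) - (M2_QQ - M1_QQ)) / (2 * M2_Qq)
\end{verbatim}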
\begin{figure}[t!]
\centering
\vspace*{-5mm}
\includegraphics[width=0.7\textwidth]{I_PS}
\caption{ Inconsistency Parameter $I$. Data labels denote $\kappa$
values. The square (green and orange) represents old data sets at
$a=0.15\;$fm and circle (magenta) represents new data sets at
$a=0.12\;$fm. Vertical lines represent physical masses of $B^0_s$
(dotted) and $D_s^+$ (dash-dotted) mesons. Near the $B^0_s$ meson
mass, $I$ almost vanishes for the OK action, but for the Fermilab
action it does not. This behavior suggests the OK action is
significantly closer to the continuum limit.}
\label{fig:iparam}
\end{figure}
Considering the non-relativistic limit of a quark-antiquark system, in the
$S$-wave case the spin-independent binding-energy difference can be expressed as
follows~\cite{Kronfeld,Bernard}:
\begin{align}
\label{eq:NRdeltaB}
\delta B_{\wbar{Q}q}
&= \frac{5}{3} \frac{\expv{\bm{p}^2}}{2\mu_2} \Big[ \mu_2
\Big(\frac{m_{2\wbar{Q}}^2}{m_{4\wbar{Q}}^3}
+ \frac{m_{2q}^2}{m_{4q}^3} \Big) -1 \Big]
+ \frac{4}{3} a^3 \frac{\expv{\bm{p}^2}}{2\mu_2}
\mu_2 \left[w_{4\wbar{Q}} m_{2\wbar{Q}}^2 + w_{4q} m_{2q}^2 \right]
+ \mathrm{O}(\bm{p}^4) \,,
\end{align}
where $\mu_2^{-1} = m_{2\wbar{Q}}^{-1} + m_{2q}^{-1}$, and $m_2$, $m_4$,
and $w_4$ are defined through the quark dispersion relation analog of
Eq.~\eqref{eq:disp}.
Equation~\eqref{eq:NRdeltaB} holds for the quarkonium $\delta
B_{\wbar{Q}Q}$ too.
The leading contribution of $\mathrm{O}(\bm{p}^2)$ in $\delta{B}$
vanishes when $m_4=m_2$ and $w_4=0$, also for orbital angular momenta beyond the $S$~wave~\cite{Bernard}.
The OK action matches $m_4=m_2$ and $w_4=0$, so the two expressions in
square brackets vanish (at the tree level), leaving $I \sim \bm{p}^4 \approx 0$.
The result for the pseudoscalar channel is shown in
Fig.~\ref{fig:iparam}.
We find that $I$ is close to 0 for the OK action even in the mass
region where the Fermilab action produces very large $|I| \approx 1$.
This outcome provides good numerical evidence for the improvement expected
with the OK action.
It also shows that the new data set on the coarse ($a =
0.12\;$fm) ensemble covers the $B_s^{0}$~mass.
\section{Hyperfine Splittings}
\begin{figure}[t]
\centering
\subfloat[Quarkonium]{
\includegraphics[width=.45\textwidth]{hfs_ok_ok}}
\hfill
\subfloat[Heavy-light meson]{
\includegraphics[width=.45\textwidth]{hfs_ks_ok}}
\caption{Hyperfine splitting obtained from the kinetic mass vs.\ that obtained from the rest mass.}
\label{fig:hfs}
\end{figure}
The hyperfine splitting $\Delta$ is defined to be the difference in mass
between the vector ($M^\ast$) and pseudoscalar ($M$) mesons:
\begin{equation}
\Delta_1 = M_1^{\ast} - M_1 \,,
\qquad
\Delta_2 = M_2^{\ast} - M_2 \,.
\end{equation}
The hyperfine splitting of the kinetic mass ($\Delta_2$) has a larger
error than that of the rest mass ($\Delta_1$), mainly because the
kinetic mass requires correlators with $\bm{p}\neq\bm{0}$, which are noisier
than $\bm{p}=\bm{0}$.
Interestingly, with the OK action the statistical error is about $1/6$ of that with the Fermilab action,
as one can see in Fig.~\ref{fig:hfs}.
Thus, the OK action is not only more accurate, in the sense of an improved action, but also statistically more
precise.
From Eq.~\eqref{eq:deltaB}, we have
\begin{align}
\Delta_2 &= \Delta_1 + \delta{B^{\ast}} - \delta{B} \,.
\end{align}
Spin-independent contributions to the binding energies cancel,
so the difference in hyperfine splittings $\Delta_2 - \Delta_1$
diagnoses the improvement of spin-dependent $\mathrm{O}(\bm{p}^4)$
terms.
As one can see in Fig.~\ref{fig:hfs}, the OK action shows clear
improvement for quarkonium: the OK results lie close to the continuum limit
$\Delta_2 = \Delta_1$ (the red line).
The heavy-light results do not deviate much from the line
$\Delta_2=\Delta_1$ even with the clover action, and remain in good shape
with the OK action.
\section{Conclusion and Outlook}
The results for the inconsistency parameter show that the OK action
improves the $\mathrm{O}(\bm{p}^4)$ effects, in practice as well as in theory.
The hyperfine splitting shows that the OK action significantly
improves the higher-dimension chromomagnetic effects on the quarkonium
spectrum.
For the heavy-light system, the data for the hyperfine splittings at
$0.15\;$fm suffer from statistical errors that are too large to draw
any firm conclusion.
The SWME Collaboration plans to determine $|V_{cb}|$ by calculating
$B\to D^{(*)}\ell\nu$ semi-leptonic form factors with the OK action
and commensurately improved currents.
For this purpose, a project to obtain the improved current relevant to
the decay $B\to D^*\ell\nu$ at zero recoil is
underway~\cite{JAB:LAT2014}.
Another component of this plan is to calculate the 1-loop coefficients
for $c_B$ and $c_E$ in the OK action.
A highly optimized conjugate gradient inverter using QUDA is under
development~\cite{JANG:LAT2013}.
\section{Acknowledgments}
C.D. is supported in part by the U.S.\ Department of Energy under
grant No.\ DE-FC02-12ER-41879 and the U.S.\ National Science
Foundation under grant PHY10-034278.
A.S.K. is supported in part by the German Excellence Initiative and
the European Union Seventh Framework Programme under grant agreement
No.~291763 as well as the European Union's Marie Curie COFUND program.
Fermilab is operated by Fermi Research Alliance, LLC, under Contract
No.\ DE-AC02-07CH11359 with the United States Department of Energy.
The research of W.~Lee is supported by the Creative Research
Initiatives Program (No.~2014001852) of the NRF grant funded by the
Korean government (MEST). W.~Lee would like to acknowledge the
support from KISTI supercomputing center through the strategic support
program for the supercomputing application research
[No.~KSC-2013-G2-005].
Part of computations were carried out on the DAVID GPU clusters at
Seoul National University. J.A.B. is supported by the Basic Science
Research Program of the National Research Foundation of Korea (NRF)
funded by the Ministry of Education (2014027937).
\setcounter{equation}{0}
\ \indent
In this paper we examine some aspects of charge two BPS monopoles in a
gauge theory SU(3)
spontaneously broken by an adjoint Higgs to U(2). Their moduli space
was found by Dancer in \cite{3}, and subsequently studied in
\cite{4,5,13}.
Magnetic monopoles in SU(2)
gauge theory have been the focus of
considerable interest. Recently, there has been renewed
interest in monopoles of higher rank gauge groups mainly in relation
to electric-magnetic duality \cite{GL, LWY1, Connell, LWY3, GW}.
Substantial progress has been made in theories where the gauge symmetry
is broken to the maximal torus. However if the unbroken symmetry group
contains a non-Abelian component various complications arise and much
less is known.
For SU(3) monopoles with minimal symmetry breaking the
topological charge is specified by a single integer $k$.
But as pointed out in
\cite{GNO} there is a further gauge invariant classification of
monopoles, with the result that the charge $k$ moduli space is
stratified into different connected components. Charge one monopoles
are given by embeddings of
the charge one SU(2) monopole. Their moduli space is given by
$\hbox{\upright\rlap{I}\kern 1.7pt R}^3\times S^3$, which is fibred over $S^2$ with fibre
$\hbox{\upright\rlap{I}\kern 1.7pt R}^3 \times U(1)$.
Points in $S^2$ label different possible embeddings of the SU(2) solution,
however it is well known that it is not possible to move along the
$S^2$ factor \cite{19, 20, 21} due to the non-Abelian nature of the long-range
fields (the long-range magnetic field does not commute with all the
generators of the unbroken gauge group).
Charge two monopoles appear in two
distinct categories depending on whether or not the long-range fields
are Abelian.
Monopoles with non-Abelian long-range magnetic fields are given by
embeddings of charge two SU(2) solutions, the moduli space is
of dimension ten (which is fibred over $S^2$ with fibre the charge
two SU(2) moduli space). However if the long-range magnetic fields
are Abelian the
moduli space is more
complicated and has dimension twelve. This phenomenon where the charge
$k$ moduli space is stratified, with different components having different
dimensions occurs for any gauge theory in which the unbroken subgroup
is non-Abelian \cite{Bow, Murray}.
In the presence of a
monopole whose long-range fields are non-Abelian
it is not
possible in general to define a global color gauge transform which is
orthogonal to the orbits of gauge transforms which are 1 at
spatial infinity (i.e. little gauge
transforms) \cite{19}. As Abouelsaood showed in \cite{20}, this is a
consequence of the result \cite{21}, which showed the impossibility of globally
defining (in any regular gauge) a full set of generators that commute
with the Higgs field on a
two sphere at infinity when the monopole has long-range non-Abelian fields.
In the SU(3)
theory this is so for embedded SU(2) monopoles. However, when
the long-range fields are Abelian such
transformations are possible and the moduli space gains a
normalisable action of SU(2). This
suggests that the moduli space for the larger strata would be of
dimension eleven rather than eight as for embedded SU(2) monopoles (ignoring
the $S^2$ fibre). However,
it is known that the moduli space is of dimension twelve \cite{Bow, Wein1},
and so an extra parameter appears whose physical interpretation is
somewhat unclear. We shall see that in certain regions the monopoles
look like embedded SU(2) monopoles surrounded by a non-Abelian cloud
whose approximate radius can be attributed to this extra parameter.
It has been proved that the moduli space of monopoles for many gauge
groups is in
1-1 correspondence with the moduli space of solutions to Nahm's
equations on a given interval with certain boundary
conditions \cite{Nahm}. This correspondence is known to be an isometry
for SU(2)
monopoles and SU(n+1) monopoles broken to U(n), which includes the
present case \cite{NAK1, NAK2}. In \cite{3}, Dancer has constructed
the metric on the moduli space of solutions to Nahm's equations which
is a twelve dimensional
hyperk\"{a}hler manifold with free, isometric actions of $R^3$, U(2),
and Spin(3). By \cite{NAK2} this gives the metric on the two
monopole moduli space.
The Dancer manifold can be thought of as having a boundary which corresponds
to the space of embedded SU(2) monopoles (however the manifold with
the induced $L^2$ metric is
complete and the boundary is infinitely far away in metric distance).
As in the SU(2) case, the monopoles have a
well-defined center of mass and total U(1) phase. In \cite{3}
an implicit expression was given for the relative moduli space metric
which has isometry group SO(3)$\times$SO(3). Below, this
metric is expressed in terms of invariant one-forms
corresponding to the actions of each of the SO(3)'s and two other
parameters which roughly measure the monopoles' separation and how
``close'' the monopole configuration is to an embedded SU(2) monopole
configuration. The expressions for the metric are very complicated and
are relegated to the Appendix. However for regions of the moduli space
which approach the boundary the metric simplifies into a direct
product of ${\cal AH}\times \hbox{\upright\rlap{I}\kern 1.7pt R}^4$. An interpretation
is that the configuration looks like an embedded
charge two SU(2) monopole surrounded by a ``cloud'' \cite{LWY1},
which is parametrised by its physical radius (which is related to the
inverse of the coordinate distance from
the boundary of the Dancer manifold) and its SO(3) orientation
(residual gauge group
action). Evidence for this is given by solving for the long-range
Higgs and gauge fields in this region of the moduli space.
Here, the long-range fields do indeed have this cloud where
the fields change from those of a non-Abelian
embedded SU(2) monopole to the Abelian fall-off of these
monopoles.
The long-range fields are obtained from a spherical
symmetry ansatz and so our results are only valid at a distance much larger
than the separation of the two monopoles. The kinetic energy obtained by
varying the size of the cloud may be calculated and is shown to agree with the
kinetic energy deduced from the metric. The moments of inertia,
corresponding to the SO(3) gauge action, can be read off
from the metric and they diverge as the cloud
spreads out to infinity.
The presence of the cloud can alter the possible types of
monopole scattering from the SU(2) case.
For SU(2) monopoles if two monopoles collide head on, they
form a torus and then scatter at right angles. Here a
different outcome is possible \cite{5}. Incoming
monopoles can collide, instantaneously forming a spherically
symmetric monopole, but now
instead of scattering outwards they continue
to approach the embedded SU(2) torus. Due to the
conservation of kinetic energy of the incoming monopoles (and angular
momentum conservation) the SU(2) monopoles must scatter but in the
SU(3) case the kinetic energy is carried off by the cloud while the
monopoles' core is static in the limit of large time.
An alternative description of this cloud was given by Lee, Weinberg
and Yi \cite{LWY1}. They
proposed that the moduli space of monopoles whose magnetic charge is Abelian
can be obtained as a limit of a monopole moduli space in a theory where the
gauge group is broken to its maximal torus. Our case, where SU(3) is
broken to U(2),
can be viewed as a limit of SU(3) broken to
U(1)$\times$U(1) as the Higgs expectation value is varied.
Then the Dancer moduli space would arise from a space of
three monopoles : two of the same type, and the third distinct and
becoming massless in the limit. As this monopole becomes massless its
core radius expands and eventually loses its identity as a monopole
and is manifested as a cloud surrounding the remaining two monopoles.
Section 2 is a review of SU(3) monopoles. For completeness we
discuss both cases, SU(3) broken to U(1)$\times$U(1) and
SU(3) broken to U(2). In section 3 we consider the charge two
monopoles where SU(3) is broken to U(2). In particular we
solve for the long-range gauge
and Higgs fields in regions of the moduli space which are close to
the boundary of embedded SU(2) monopoles. In the Appendix the metric
given in \cite{3} is reexpressed in an explicit form.
\section{Review of SU(3) Monopoles}
\setcounter{equation}{0}
\ \indent
We assume throughout that the Higgs
field is in the adjoint representation and we are in the BPS limit in
which the scalar potential is zero but a nonzero Higgs expectation
value is imposed
at spatial infinity as a boundary condition.
An SU(3) gauge theory can be broken by an adjoint Higgs mechanism
to either U(1)$\times$U(1) or U(2). Monopole solutions will occur
in either theory. The generators of SU(3) may be chosen to be
two commuting operators $H_1$, $H_2$, together with ladder operators,
associated with the roots $\pm\mbox{\boldmath$\alpha$}$, $\pm\mbox{\boldmath$\beta$}$, $\pm(\mbox{\boldmath$\alpha$}+\mbox{\boldmath$\beta$})$
(see figure 1), that obey
\begin{equation}
[H_i,E_{\mbox{\boldmath$\gamma$}}]=\gamma^iE_{\mbox{\boldmath$\gamma$}},\;\;\;\;\; [E_{\mbox{\boldmath$\gamma$}},
E_{-\mbox{\boldmath$\gamma$}}]=\gamma^iH_i\;,
\end{equation}
for $\mbox{\boldmath$\gamma$}$ any root.
Following \cite{Wein1} we let $\Phi_{\infty}$ be the asymptotic
value of the Higgs field along the positive $x^3$-axis. Choose this to lie
in the Cartan subalgebra and this defines a vector $\bf{h}$ by
$\Phi_{\infty}=\bf{h}.\bf{H}$.
If SU(3) is broken to U(1)$\times$U(1), all roots have nonzero inner
product with $\bf{h}$ and there is a unique set of simple roots with
positive inner product with $\bf{h}$. If SU(3) is broken to U(2)
then one of the roots, say $\mbox{\boldmath$\beta$}$, is perpendicular to $\bf{h}$. Now
there are two choices of simple roots with non-negative inner product
with $\bf{h}$; namely $(\mbox{\boldmath$\alpha$},\mbox{\boldmath$\beta$})$ and
$(\mbox{\boldmath$\alpha$}+\mbox{\boldmath$\beta$},-\mbox{\boldmath$\beta$})$. Figure 1 illustrates the two different
types of symmetry breaking.
\begin{figure}[ht]
\begin{center}
\leavevmode
\epsfxsize=13cm\epsffile{SU3monfig1.eps}
\caption{(a) SU(3)$\rightarrow U(1)\times U(1)\;\;\;\;$(b) SU(3)$\rightarrow$ U(2)}
\end{center}
\end{figure}
For any finite energy solution, asymptotically
\begin{equation}
B_i=\frac{r_i}{4\pi r^3}G(\Omega)
\end{equation}
$G(\Omega)$ is a covariantly constant element of the Lie algebra of
SU(3), and takes the value $G_0$ along the
positive $x^3$-axis. The Cartan subalgebra may be chosen so that
$G_0=\bf{g}.\bf{H}$.
The quantisation of magnetic charge \cite{GNO,EW}, ensures that
$\bf{g}$ is of the form
\begin{equation}
{\bf{g}}=\frac{4\pi}{e}\left\{n\mbox{\boldmath$\alpha$}^*+m\mbox{\boldmath$\beta$}^*\right\}\;,\;\;\;\;\;\;
\mbox{\boldmath$\alpha$}^*=\frac{\mbox{\boldmath$\alpha$}}{\mbox{\boldmath$\alpha$}\cdot\mbox{\boldmath$\alpha$}}
\end{equation}
where $e$ is the gauge coupling, and $n$, $m$ are non-negative integers.
We denote such a charge by $(n,m)$.
When SU(3) is broken to U(1)$\times$U(1) the topological
charges of the monopoles are determined by a pair of integers,
ie. the monopoles
can be charged with respect to either of the unbroken U(1)'s.
All BPS monopoles may
be thought of as superpositions of two fundamental monopoles given by
embeddings of the charge one SU(2) monopole \cite{Wein1}.
To embed an SU(2) solution note that any root $\mbox{\boldmath$\gamma$}$ defines an
SU(2) subalgebra by
\begin{eqnarray}
t^1(\mbox{\boldmath$\gamma$})&=&(2{\mbox{\boldmath$\gamma$}\cdot\mbox{\boldmath$\gamma$}})^{-1/2}(E_{\mbox{\boldmath$\gamma$}}+E_{-\mbox{\boldmath$\gamma$}})\\
t^2(\mbox{\boldmath$\gamma$})&=&-i(2{\mbox{\boldmath$\gamma$}\cdot\mbox{\boldmath$\gamma$}})^{-1/2}(E_{\mbox{\boldmath$\gamma$}}-E_{-\mbox{\boldmath$\gamma$}})\nonumber\\
t^3(\mbox{\boldmath$\gamma$})&=&\mbox{\boldmath$\gamma$}^*\cdot{\bf{H}}\;.\nonumber
\end{eqnarray}
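For example, in a standard Gell-Mann basis (a choice made here purely for illustration), taking $H_1=\frac{1}{2}\lambda_3$, $H_2=\frac{1}{2}\lambda_8$ and $E_{\pm\mbox{\boldmath$\alpha$}}=\frac{1}{2\sqrt{2}}(\lambda_1\pm i\lambda_2)$, so that $\mbox{\boldmath$\alpha$}=(1,0)$, one finds
\begin{equation}
t^a(\mbox{\boldmath$\alpha$})=\tfrac{1}{2}\lambda^a\,,\qquad a=1,2,3\,,
\end{equation}
so the su(2) subalgebra defined by $\mbox{\boldmath$\alpha$}$ is simply the upper-left $2\times 2$ block of su(3).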
If $A_i^a$ and ${\Phi}^a$ is an SU(2) charge $n$ BPS solution with
Higgs expectation value $v$, then a monopole with magnetic charge
\begin{equation}
{\bf{g}}=\frac{4\pi n}{e}\mbox{\boldmath$\gamma$}^*
\end{equation}
is given by \cite{Bais}
\begin{eqnarray}
\Phi &=& \Phi^at^a +({\bf h}-\frac{{\bf h}\cdot
\mbox{\boldmath$\gamma$}}{\mbox{\boldmath$\gamma$}\cdot\mbox{\boldmath$\gamma$}}\mbox{\boldmath$\gamma$})\cdot{\bf H}\\
{\bf A}_i &=& A_i^a t^a \nonumber\\
v &=& {\bf h}\cdot\mbox{\boldmath$\gamma$}\;\;.\nonumber
\end{eqnarray}
The two fundamental monopoles are obtained by embedding charge one
solutions along the simple
roots, $\mbox{\boldmath$\alpha$}$ and $\mbox{\boldmath$\beta$}$. Each fundamental monopole has four
zero modes, corresponding to its position and a U(1) phase.
Embedding along the root $\mbox{\boldmath$\alpha$}$ gives the (1,0) monopole
charged with respect to one of the U(1)'s. Similarly, one can embed
along the root $\mbox{\boldmath$\beta$}$ to give the monopole (0,1)
charged with respect to the other U(1). Any BPS solution of
topological charge ($n$,$\,m$) can be viewed as a collection
of $n$ $\,\mbox{\boldmath$\alpha$}$-monopoles and $m$
$\,\mbox{\boldmath$\beta$}$-monopoles. The dimension of the $(n,\,m)$ moduli space is
$4(n+m)$. The ($n$,$\,0$) and ($0$,$\,n$)
moduli spaces will be copies of the charge $n$ SU(2) moduli space.
The ($1$,$\,1$) moduli space was studied in \cite{Connell}, see also \cite{GL}.
It contains the spherically symmetric
solution obtained by embedding a single SU(2) monopole along the
root $\mbox{\boldmath$\alpha$} +\mbox{\boldmath$\beta$}$. The relative moduli space is
Taub-NUT (Newman-Unti-Tamburino) with positive mass
parameter. Because the monopoles are charged with respect to different
U(1)'s their interaction is simpler than for two SU(2) monopoles.
The other moduli spaces are known as spaces of holomorphic rational
maps from the two sphere into a flag manifold \cite{Hurt}. But their
moduli space metrics are as yet unknown, although a conjectured
form for the (2,1) moduli space metric was given in \cite{Chalmers}.
For SU(3) broken to U(2) the situation is quite different. Now the
vacuum expectation value of the Higgs field along the $x^3$-axis, ${\bf h}$,
is perpendicular to one
of the roots, $\mbox{\boldmath$\beta$}$. Embedding an SU(2) solution using the SU(2)
subalgebra defined by the root $\mbox{\boldmath$\beta$}$, (2.4), now gives the
trivial zero solution so (0,1) solutions do not exist here. In
general,
the quantization of magnetic charge is again
given by (2.3) but now $n$ is the only topological charge. Solutions
for a given value of $n$ exist only if $m\leq n$, and the $(n,m)$
moduli space is
identical to the $(n,n-m)$ moduli space.
Two distinct charge one solutions are
given by embeddings along the roots $\mbox{\boldmath$\alpha$}$ and
$\mbox{\boldmath$\alpha$}+\mbox{\boldmath$\beta$}$, they are (1,0) and (1,1) respectively. The
long-range magnetic field has a non-Abelian component ie., $G_0$ does not
commute with the generator of the unbroken SU(2)
(${\bf{g}}\cdot\mbox{\boldmath$\beta$}\neq 0$) so it is not possible in general to
perform a global gauge transform $g=e^{-\Gamma}$, taking
values in the unbroken U(2) at infinity which
is orthogonal to little gauge
transforms \cite{19}, i.e. $D_iD_i \Gamma+[\Phi,[\Phi,\Gamma]]=0$.
This is so only for gauge transforms generated by
$E_{\mbox{\boldmath$\beta$}},\,E_{-\mbox{\boldmath$\beta$}}$. Gauge transforms in the Cartan subalgebra
pose no such difficulty. However a linear combination of the Cartan
generators leaves the monopole invariant so
only a global U(1) charge rotation remains
($g=e^{-\chi\Phi}$) exactly as for SU(2) monopoles.
However there are more charge one solutions than this, because one may
act with the global SU(2) in the singular gauge,
but one cannot move between these solutions dynamically
(the embedded
solutions based on the roots $\mbox{\boldmath$\alpha$}$ and $\mbox{\boldmath$\alpha$}+\mbox{\boldmath$\beta$}$ are
related in this fashion). This gives a
three dimensional family of solutions parametrised by $S^3$, and
with translational
invariance the space of solutions is of the form $\hbox{\upright\rlap{I}\kern 1.7pt R}^3\times S^3$
which may be viewed as a fibre bundle over $S^2$ with fibre
$\hbox{\upright\rlap{I}\kern 1.7pt R}^3\times$U(1).
However, the zero modes corresponding to the action of the global
unbroken SU(2) gauge group on the monopole (corresponding to the
$S^2$ factor in the moduli space) cannot satisfy $both$
the linearized BPS equation and orthogonality to little gauge
orbits. Only the U(1) factor corresponding to electric charge
rotations can do this. So, although the space of solutions is
$\hbox{\upright\rlap{I}\kern 1.7pt R}^3\times S^3$, it is clear that the usual procedure of
finding the metric by calculating the $L^2$
norms of the zero modes satisfying $D_i\delta {\bf A}_i+[\Phi,\delta
\Phi]=0$ is not possible here.
For topological charge two solutions, either one can embed charge two SU(2)
solutions along the root $\mbox{\boldmath$\alpha$}$ or $\mbox{\boldmath$\alpha$}+\mbox{\boldmath$\beta$}$, giving
(2,0) and (2,2) respectively; alternatively,
one may combine the two charge one solutions based on the
roots $\mbox{\boldmath$\alpha$}$ and $\mbox{\boldmath$\alpha$}+\mbox{\boldmath$\beta$}$ giving (2,1). In
the former case, again there is a long-range non-Abelian magnetic
field and the same problems as for charge one solutions are
present. The moduli space
will be the charge two SU(2) monopole space fibred over $S^2$. In the
latter case, the magnetic charge is Abelian, (${\bf{g}}\cdot\mbox{\boldmath$\beta$}=0$),
so global color (SU(2)) gauge transforms are possible with finite
$L^2$ norm for the zero modes \cite{19}.
The moduli space acquires an action of SU(2), but as stated in the
introduction another parameter is present on the moduli space whose
appearance is somewhat surprising.
As noticed in \cite{Bow} the parameter counting means these monopoles
cannot be interpreted as a superposition of any fundamental monopoles,
unlike the SU(3)$\rightarrow$U(1)$\times$U(1) case.
In the following we shall be studying SU(3) broken to U(2)
charge (2,1) monopoles which have an Abelian magnetic charge.
\section{The Asymptotic Fields}
\setcounter{equation}{0}
\ \indent
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=3in\epsffile{SU3monfig2.eps}
\caption{The geodesic submanifold $Y$.}
\end{center}
\end{figure}
Nakajima and Takahasi have proved that the metric on the moduli space
of SU(2) monopoles and SU(n+1) monopoles broken to U(n)
is equivalent to the metric on a corresponding moduli space of
solutions of Nahm's equations \cite{NAK1,NAK2}.
This equivalence provides a direct method of finding metrics on
monopole moduli spaces, at least for low charges where Nahm's
equations can be solved. Using the
Hyperk\"{a}hler quotient construction, Dancer \cite{3}, has
constructed the metric on the
moduli space of Nahm data corresponding to SU(3) broken to U(2)
charge (2,1) monopoles denoted $M^{12}$
and by Takahasi's proof on the equivalence of the the two metrics this
gives the monopole metric.
$M^{12}$ is a twelve dimensional manifold with commuting actions of
Spin(3) (rotations), U(2) (unbroken gauge group), and $\hbox{\upright\rlap{I}\kern 1.7pt R}^3$
(translations). Let $M^8$ be the quotient of $M^{12}$ by
$\hbox{\upright\rlap{I}\kern 1.7pt R}^3\times U(1)$ where U(1) is the center of U(2). $M^8$ has free
commuting actions of SO(3) and SU(2)$/\hbox{\upright\rlap{\ssf Z}\kern 2.7pt {\ssf Z}}_2$. The
metric on $M^{12}$ is just the Riemannian product of the metric on
$M^8$ with the flat metric on $\hbox{\upright\rlap{I}\kern 1.7pt R}^3\times U(1)$ so the manifold $M^8$
describes the relative motion of the monopoles.
$M^8$ may be
quotiented by the SU(2)$/\hbox{\upright\rlap{\ssf Z}\kern 2.7pt {\ssf Z}}_2$ action to give a manifold denoted by
$N^5$ which has a non-free action of SO(3), corresponding to rotations.
In \cite{3}, an explicit expression was given for the metric on $N^5$
and an implicit expression for the metric on $M^8$. In the Appendix
the metrics on $N^5$ and $M^8$ are reexpressed in terms of coordinates
$D$, $k$, on the quotient space $N^5/$SO(3) and left-invariant 1-forms on
SO(3) corresponding to the action of SO(3) on $N^5$ and
SO(3)$\times$SU(2)$/\hbox{\upright\rlap{\ssf Z}\kern 2.7pt {\ssf Z}}_2$ on $M^8$.
Here we are interested in the interpretation of the moduli space
coordinates in terms of the monopole fields. After the removal
of gauge freedom, spatial translations and rotations, we are left
with the two dimensional space $N^5/$SO(3). $N^5/$SO(3) may be
parametrised by $D$, $k$ with $0\leq k\leq 1$ and $0\leq D
<\frac{2}{3}K(k)$, where $K(k)$ denotes the complete elliptic
integral of the first kind, $K(k)=
\int^{\frac{\pi}{2}}_0(1-k^2\sin^2\theta\,)^{-1/2}d\,\theta$ \cite{3}.
Figure 2 depicts a totally geodesic submanifold of $M^8$, $\,Y$,
corresponding to monopoles which are
reflection symmetric (up to a gauge transform) in each of the three
Cartesian axes. $Y$ is composed of six regions each isomorphic
to $N^5/$SO(3). The
metric and geodesic flow on $Y$ were studied in \cite{5}. In the shaded
region $x,\,y$ are given in terms of $D,\,k$ by
\begin{equation}
x=(2-k^2)D^2\,,\qquad
y=-\sqrt{3}k^2D^2\;.
\end{equation}
Similar formulae for $x,\,y$ in terms of $D,\,k$ exist in the other
regions of $Y$.
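As a quick check on this parametrisation: at $k=0$ one has
$K(0)=\frac{\pi}{2}$, so $D$ ranges over $0\leq D<\frac{\pi}{3}$, and in
the shaded region $y=0$ while $x=2D^2$ ranges over
$0\leq x<\frac{2}{9}\pi^2$.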
The origin, $x=y=0$ corresponds to the spherically symmetric monopole
for which the fields are known, \cite{4, BW}. The line segments
$-\infty<x\leq 0$,$\,\,y=0$ and
$0\leq x<\frac{2}{9}\pi^2$,$\,\,y=0$
correspond to axially symmetric monopoles, denoted in \cite{4} as
hyperbolic and trigonometric monopoles respectively, because the Nahm
data involves hyperbolic and trigonometric functions. In terms of $D$
and $k$ these are lines $k=1$ and $k=0$. These line segments together
correspond to a geodesic on $Y$. As $x$ varies from $-\infty$ to
$\frac{2}{9}\pi^2$, two infinitely separated hyperbolic monopoles
approach each other, collide to form the spherically symmetric monopole,
which then deforms to trigonometric monopoles and approaches the
axially symmetric
embedded SU(2) monopole. In \cite{4}, for these cases,
the Higgs field was determined along the axis of axial symmetry.
The dotted boundaries of $Y$ [given by $D=\frac{2}{3}K(k)$ in
$N^5/$SO(3)] correspond to embedded SU(2) monopoles. $Y$ is geodesically
complete and the boundary is infinitely far away in metric distance.
We want to understand the nature of the metric and the fields in
different regions of $Y$. Numerical evidence in \cite{4,5} indicates that, for
large $D$, $D$ represents the separation of the monopoles. The
asymptotic regions of Figure 2 (the legs of $Y$) correspond to
well separated monopoles
along each of the three axes. The central region corresponds to
configurations of coincident or closely separated monopoles.
It is interesting to try to understand
the effect on the fields as one moves towards the boundary of $Y$.
The coefficient of $1/r$, $(r=|{\bf x}|),$ in
the asymptotic
expansion of the Higgs field, $\Phi({\bf x})$, of an SU(3) monopole is
$-\frac{i}{2}\mbox{diag}(-1,-1,2)$ whereas the
$1/r$ coefficient of an embedded SU(2) monopole is
$-\frac{i}{2}\mbox{diag}(-2,0,2)$. If one is close
to the boundary of $Y$ then the fields should look like embedded SU(2)
fields (since the boundary corresponds to embedded SU(2) monopoles),
but with a cloud where the $1/r$ coefficient of the Higgs field changes
from -$\frac{i}{2}\mbox{diag}(-2,0,2)$ to
$-\frac{i}{2}\mbox{diag}(-1,-1,2)$ \cite{4}.
A useful parametrization near the boundary of $Y$ which measures
approximately the radius of the cloud is given by
$z=2\sqrt{\frac{D}{\rho}}$ with $\rho=2K(k)-3D$: $z$ is related to the
inverse of the coordinate distance (not metric) from the boundary
of $Y$.
Near the boundary the expressions for the metric (A.14) may be
simplified to
\begin{eqnarray}
ds^2 = dz^2+\frac{z^2}{4}(\hat{\sigma}_1^2
+\hat{\sigma}_2^2+\hat{\sigma}_3^2)
+ \frac{b^2}{D^2}dD^2+a^2\sigma_1^2+b^2\sigma_2^2+c^2\sigma_3^2
\;\;+\;\;O(z^{-4})\;\;\;.
\end{eqnarray}
$\sigma_i,\,\hat{\sigma}_i$ are left-invariant 1-forms (defined in the
appendix (A.11)), corresponding to
rotational and SU(2) degrees of freedom;
$a^2,\,b^2,\,c^2$ are evaluated at the boundary, $D=\frac{2}{3}K(k)$, giving
\begin{equation}
a^2=\frac{2K(K-E)(E-k'^2K)}{3E}\,,\qquad
b^2=\frac{2EK(K-E)}{3(E-k'^2K)}\,,\qquad
c^2=\frac{2EK(E-k'^2K)}{3(K-E)}\,.
\end{equation}
Here $k'^2=1-k^2$ and
$E(k)=\int_0^{\frac{\pi}{2}}(1-k^2\sin^2\theta)^{1/2}\,d\theta$.
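As a check on these expressions, the small-$k$ expansions
$K\approx\frac{\pi}{2}(1+\frac{k^2}{4})$ and
$E\approx\frac{\pi}{2}(1-\frac{k^2}{4})$ give
$K-E\approx E-k'^2K\approx\frac{\pi k^2}{4}$, so as $k\rightarrow0$
\begin{equation*}
a^2\approx\frac{\pi^2k^4}{24}\rightarrow0\;,\qquad
b^2\rightarrow\frac{\pi^2}{6}\;,\qquad
c^2\rightarrow\frac{\pi^2}{6}\;;
\end{equation*}
at the trigonometric edge ($k=0$) one rotational coefficient collapses
while the other two become equal.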
Now, noting that (3.2) describes the flat space metric for $\hbox{\upright\rlap{I}\kern 1.7pt R}^4$ and
the Atiyah-Hitchin (${\cal AH}$) metric for
SU(2) monopoles (with a scale factor of $1/3$) \cite{AH}, we
see that in the limit the metric decouples into the direct product $\hbox{\upright\rlap{I}\kern 1.7pt R}^4\times
{\cal AH}$. We can interpret this by saying the monopole configuration
looks like an embedded charge two SU(2) monopole surrounded by a
cloud, parametrized by $\hbox{\upright\rlap{I}\kern 1.7pt R}^4$. In this limit the geodesics on
$M^8$ are easy to analyse. They are given by a straight line in the
$\hbox{\upright\rlap{I}\kern 1.7pt R}^4$ factor and the usual geodesics on the Atiyah-Hitchin manifold.
Numerical evidence in \cite{5} showed that all geodesics
(except for the axially symmetric monopole collision)
approach the asymptotic
regions of $Y$. In fact, we were able to check (using MATLAB) that
generic geodesics approach the boundary of the asymptotic region,
i.e. $D\rightarrow \infty$ and $D\rightarrow
\frac{2}{3}K(k)$. It is interesting to ask whether similar behaviour
occurs for generic geodesics on $M^8$ not restricted to the
submanifold $Y$. From
(3.2) it can be seen that if a configuration is close to the boundary
and heading towards the boundary ($\dot{z}>0$) then the monopoles
will continue to approach $z=\infty$.
Also (3.2) suggests that the
cloud will be spherically symmetric in the
limit $z\rightarrow \infty$ because the coefficients of $\hat{\sigma}_i^2$
are all equal.
With the assumption that near the boundary the long-range fields are
spherically symmetric we may
now solve for these fields and independently verify that the $\hbox{\upright\rlap{I}\kern 1.7pt R}^4$
part of the metric in (3.2) is correct. This is simplest for the
axially symmetric
trigonometric monopoles, ($k=0$).
For such monopoles in \cite{4}, the Higgs field was given along
the axis of axial symmetry.
The long-range part of the Higgs field on the axis of symmetry may be
found by
dropping all terms exponentially
small in $r$. Spherical symmetry determines
the long-range
Higgs field in all directions. On the axis of symmetry,
$\Phi=i$diag$(\Phi_{11},\,-\Phi_{11}-\Phi_{33},\,\Phi_{33})\,+O(e^{-6r})$ with
\begin{equation}
\Phi_{11}
=-1+\frac{1}{D^2+4r^2}\left\{ \frac{(4r^2-D^2)\sin3D-4Dr\cos3D}
{2r\sin3D-D\cos3D}
\right\}\,,\;\;\Phi_{33}=2-\frac{4r}{4r^2+D^2}\;.
\end{equation}
To solve the spherically symmetric BPS equations
the Higgs field needs to be further truncated to
\begin{equation}
\Phi_{11}
=-1+\frac{1}{r}\left\{ \frac{r\sin3D-D\cos3D}{2r\sin3D-D\cos3D}
\right\}\;,\qquad\Phi_{33}=2-\frac{1}{r}\;.
\end{equation}
Because $D<\pi/3$ and $r$ is large this is a valid
approximation. The equations may then be solved
for all values of
$D$. However we are only interested in configurations near the
boundary $(D\rightarrow \pi/3)$ where the $\hbox{\upright\rlap{I}\kern 1.7pt R}^4$ part of the metric decouples.
We expand $\sin3D,\,\cos3D$ to order $\rho$, (here $\rho$ is $\pi-3D$
because $k=0$)
and express
the fields in terms of $z=2\sqrt{\frac{D}{\rho}}$.
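Explicitly, at $k=0$ we may write $\sin3D=\sin\rho\approx\rho$ and
$\cos3D=-\cos\rho\approx-1$, so that, using $D/\rho=z^2/4$, the truncated
field (3.5) becomes
\begin{equation*}
\Phi_{11}\approx-1+\frac{1}{r}\,\frac{r\rho+D}{2r\rho+D}
=-1+\frac{1}{r}\,\frac{z^2+4r}{z^2+8r}\;,
\end{equation*}
which is the form reappearing in the $(1,1)$ entry of (3.6) below.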
Then using \cite{6} the spherically symmetric
BPS equations may be solved to give the long-range part of the gauge field
$A_i^a$ (in a gauge with a Dirac string). The singularity at the origin
is not relevant here as we are only interested in the long-range
fields. We find
\begin{eqnarray}
\Phi(r)&=& i\left ( \begin{array}{ccc} -1 & 0 & 0 \\ 0 & -1 & 0 \\
0 & 0 & 2 \end{array} \right ) +
\frac{i}{r}\left ( \begin{array}{ccc} \frac{z^2+4r}{z^2+8r} &
0 & 0 \\ 0 & \frac{4r}{z^2+8r} & 0 \\ 0 & 0 & -1 \end{array} \right ) \\
{\bf A}(r,\theta)&=&\frac{i}{2r}\,\frac{8r}{z^2+8r}\left(\begin{array}{ccc}
0 & -i & 0\\ i & 0 & 0\\ 0 & 0 & 0 \end{array}\right)\hat{\theta}
\\ \nonumber
&-&\left\{\frac{i}{2r}\,
\frac{8r}{z^2+8r}\left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0
& 0 & 0 \end{array} \right)
-i\frac{\cos\theta}{r\sin\theta}
\left( \begin{array}{ccc} 1 & 0
& 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1
\end{array} \right)\right\} \hat{\phi}
\end{eqnarray}
where $r$, $\theta$, $\phi$ are spherical polar coordinates.
If $z$ is large enough so that $z^2\gg r$
then the $1/r$ coefficient in $\Phi$ is approximately $-\frac{i}{2}\mbox{diag}
(-2,0,2)$.
However as $r\rightarrow\infty$ the $1/r$ coefficient in
$\Phi$ is $-\frac{i}{2}\mbox{diag}(-1,-1,2)$ as required.
Near the boundary of $Y$ ($z\rightarrow \infty$), the change in
fall-off in the
$1/r$ term is extremely slow.
As $r\rightarrow \infty$ with $z^2\ll r,\;\Phi_{11}$ may be expanded
to give
\begin{equation}
\Phi_{11}=-i+\frac{i}{2r}+\frac{iz^2}{16r^2}\;,
\end{equation}
so $z$ may be seen as changing the dipole term in $\Phi$. The
foregoing analysis leads us to interpret $z^2$ as the cloud radius.
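Indeed, expanding the $(1,1)$ entry of (3.6) for $z^2\ll r$,
\begin{equation*}
\frac{1}{r}\,\frac{z^2+4r}{z^2+8r}
=\frac{z^2+4r}{8r^2}\Big(1+\frac{z^2}{8r}\Big)^{-1}
=\frac{1}{2r}+\frac{z^2}{8r^2}-\frac{z^2}{16r^2}+O(r^{-3})
=\frac{1}{2r}+\frac{z^2}{16r^2}+O(r^{-3})\;,
\end{equation*}
which is precisely the $1/r$ and dipole structure of (3.8).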
From ${\bf A}$, the magnetic field ${\bf B}$ may be found, giving
\begin{eqnarray}
{\bf B}&=&\frac{i}{2r^2}\left\{\left (
\begin{array}{ccc}
-1 & 0 & 0 \\ 0 & -1 & 0 \\0 & 0 & 2 \end{array} \right )-
\frac{z^2(z^2+16r)}{(z^2+8r)^2}\left( \begin{array}{ccc} 1 & 0
& 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0
\end{array} \right)
\right\}\hat{r} \\
&+&
\frac{4iz^2}{r(z^2+8r)^2}\left\{\left(\begin{array}{ccc}
0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{array}\right)\hat{\theta}+
\left(\begin{array}{ccc}
0 & -i & 0\\ i & 0 & 0\\ 0 & 0 & 0
\end{array}\right)\hat{\phi}\right\}\;\;.\nonumber
\end{eqnarray}
The magnetic field is seen to be non-Abelian in the cloud region
$r\approx z^2$. As $r\rightarrow \infty$ the non-Abelian components
decay like $1/r^3$ leaving an Abelian magnetic field at infinity,
\begin{equation}
{\bf B}=\frac{i}{2}\left
( \begin{array}{ccc} -1 & 0 & 0 \\ 0 & -1 & 0 \\
0 & 0 & 2 \end{array} \right )\,\frac{\hat{r}}{r^2}\;.
\end{equation}
The cloud may be
viewed as some form of shield for the non-Abelian magnetic field. The
fields described here exhibit very similar behaviour to the fields
given in \cite{LWY1} for the SO(5) monopole where again there is a
cloud parameter describing the extent to which the long-range
non-Abelian fields penetrate beyond the monopoles' core.
Now $0\leq z<\infty$ and $k=0$ is part of a geodesic on $Y$ \cite{3}.
The kinetic energy, $T$, from the asymptotic
fields obtained by varying $z$ may be calculated and compared to
that from the metric (the
constant of proportionality between the metric and the Lagrangian for
the two monopole system is one sixth the reduced mass of a pair of
monopoles, which is $\pi$). Normally one needs to add a
little gauge transform to ensure that the variation of the fields is
orthogonal to gauge orbits, but in this case, $D_i\delta A_i+[\Phi,
\delta \Phi]=0$ is satisfied anyway with
$\delta A_i=\dot{z}\,\partial A_i/\partial z\,\,$,
$\delta \Phi=\dot{z}\,\partial \Phi/\partial z\;$. Thus, the kinetic
energy $T$ is given by
\begin{equation}
T=\frac{1}{2}\int d^3x\left\{<\frac{\partial \Phi}{\partial z},
\frac{\partial \Phi}{\partial z}> +
<\frac{\partial A_i}{\partial z},\frac{\partial A_i}{\partial z}>
\right\}\;\dot{z}^2\;,
\end{equation}
where $<X,Y>=-2\mbox{Tr}XY\;$.
One finds
\begin{equation}
T=\pi\dot{z}^2\;+\;O(z^{-4})\,\dot{z}^2\;.
\end{equation}
In the
limit $z\rightarrow \infty$, this formula agrees with that found
from the metric (3.2), however not to $O(z^{-4})$.
This means that as $z\rightarrow \infty$ all the kinetic energy is
outside the core (and indirectly verifies the assumption of asymptotic
spherical symmetry).
The geodesic equation may be solved near the boundary
to give
\begin{equation}
z(t)=\beta t\;,
\end{equation}
with $\beta$ constant and total kinetic energy,
$T=\pi\beta^2$.
For the line of hyperbolic monopoles $(k=1)$, the Higgs field
was given on the axis of axial symmetry in \cite{4}. It was seen
that there is no cloud in this case. The line $k=1$ (or $z=0$) is
infinitely far
from the boundary and the cloud should not be expected to exist here.
The long-range fields for the 1-parameter family of trigonometric
monopoles are given in (3.6), (3.7). For all points in $Y$ near the
boundary the long-range fields must behave in a similar fashion, with
a cloud where the $1/r$ part of the Higgs field changes from
$-\frac{i}{2}$diag$(-2,0,2)$ to $-\frac{i}{2}$diag$(-1,-1,2)$. This cloud
will necessarily be larger than the monopoles' separation which is
$D$. We may view
the long-range fields as functions of $D$, $z$.
In analogy with
the trigonometric case we expect that as $z\rightarrow\infty$ the
kinetic energy of the long-range fields obtained by varying the cloud
parameter $z$ should equal the kinetic
energy derived from the corresponding term in the metric, i.e. $\pi
\dot{z}^2$.
This is indeed the case if the long-range fields are gauge equivalent
to (3.6), (3.7) for all $k$.
It is worth noting that this cloud cannot be interpreted as a kink, i.e.
some region where the fields rapidly change their $1/r$ fall-off from
$-\frac{i}{2}\mbox{diag}(-2,0,2)$ to $-\frac{i}{2}\mbox{diag}(-1,-1,2)$.
As seen from (3.6) the
change in $1/r$ fall-off in $\Phi$ is very slow as
$z\rightarrow \infty$. Also the static energy density ${\cal{E}}$, $
({\cal{E}}=<B^{i},\,B^{i}>)$, in this region may be easily
found from (3.9) to be
\begin{equation}
{\cal E}=\frac{4}{r^4}-\frac{z^2+2r}{2r(r+z^2/8)^4}\;.
\end{equation}
We see that for large $z$, $\cal{E}$ is of order $1/r^4$ varying from
$4/r^4$ to $3/r^4$
as $r$ changes from $r\ll z^2$ to $r\gg z^2$. The energy density outside the
monopole cores is small irrespective of $z$. For large
$z$ the long-range fields and energy density
change only slightly with $z$.
At an arbitrary point in $M^8$ represented by $(\Phi,\,{\bf A})$ a
tangent vector may be written as $(\delta\Phi,\,\delta{\bf A})$ with
$D_i\delta A_i+[\Phi,\delta \Phi]=0$, and $(\delta\Phi,\,\delta{\bf
A})$ also satisfy the linearized Bogomol'nyi equations. From one such
tangent vector other tangent vectors may be generated by
\begin{eqnarray}
\delta'\Phi&=&-\hat{\bf n}\cdot\delta{\bf A}\\
\delta'{\bf A}&=&\hat{\bf n}\delta \Phi+\hat{\bf n}\times\delta{\bf A}\nonumber
\end{eqnarray}
where $\hat{\bf n}$ is any constant unit vector. It is easy to see
that these also
satisfy the above conditions for a tangent vector. By choosing an
orthonormal triplet $\hat{\bf n}_1,\,\hat{\bf n}_2,\,\hat{\bf n}_3$
we obtain four zero modes which are mutually orthogonal. Denoting by $J_i$
the action that takes $(\delta\Phi,\,\delta{\bf A})$ to
$(-\hat{\bf n}_i\cdot\delta{\bf A},\,\hat{\bf n}_i\delta \Phi+\hat{\bf n}_i
\times\delta{\bf A})$
it is obvious that $J_iJ_j=-\delta_{ij}+\epsilon_{ijk}J_k$ thus
giving a realisation of the hyperk\"{a}hler structure on $M^8$.
For large $z$,$\;M^8$ splits as a product of hyperk\"{a}hler manifolds,
$\cal {AH}$ and $\hbox{\upright\rlap{I}\kern 1.7pt R}^4$. Thus one would expect that from the zero mode
$\delta A_i=\delta z\,\partial A_i/\partial z$,
$\,\delta \Phi=\delta z\,\partial \Phi/\partial z$
one may generate three other zero modes which correspond to the
unbroken SU(2) action. In fact, it is not difficult to check that
these zero modes may be written as
\begin{eqnarray}
\delta' A_i&=&D_i\,\Delta({\bf r})\\
\delta' \Phi&=&[\Delta({\bf r}),\,\Phi]\nonumber
\end{eqnarray}
with $\Delta({\bf r})$ in the unbroken SU(2) whose explicit form is
\begin{eqnarray}
\Delta({\bf r})=\frac{8ir\delta z}{z(z^2+8r)}\left\{\hat{{\bf n}}
\cdot\hat{\theta}
\left(\begin{array}{ccc}
0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{array}\right)+\hat{{\bf n}}
\cdot\hat{\phi}
\left(\begin{array}{ccc}
0 & -i & 0\\ i & 0 & 0\\ 0 & 0 & 0 \end{array}\right)+\hat{{\bf n}}
\cdot\hat{r}\left(\begin{array}{ccc}
1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 0 \end{array}\right)\right\}\,.
\nonumber
\end{eqnarray}
The gauge rotation angle is given by
$\lim_{r\rightarrow \infty}\,\parallel\Delta({\bf r})\parallel$
which is $2\delta z/z$,
independent of
$\hat{{\bf n}}$, where $\parallel X\parallel=\sqrt{<X,X>}\;$.
This implies that the metric coefficients for the
SU(2) action are isotropic for large $z$ and their norm is $z^2/4$
times that of the $dz^2$ coefficient, in agreement with (3.2).
\\[20pt]
\noindent{\bf Acknowledgments}
\setcounter{equation}{0}
\ \indent
Thanks to Conor Houghton, Bernd Schroers and especially
Nick Manton for many helpful discussions. I also acknowledge the
financial support of the PPARC.
\\[20pt]
\section{Introduction}
Generators of Markov diffusions -- i.e. Markov processes with continuous sample paths -- are second-order differential operators $\LL$ or suitable generalizations of them.
These generators give rise to intrinsically defined geometries on the underlying spaces $X$. Equilibration and regularization properties of such stochastic processes are intimately linked to curvature bounds for the induced geometries.
We regard $\Gamma(u,v)(x)=\frac12[{\LL}(uv)-u{\LL}v-v{\LL}u](x)$ as the 'metric tensor' at $x\in X$ and
\begin{eqnarray}
\RR(u,u)(x)=\inf\big\{ \Gamma_2(\tilde u,\tilde u)(x): \ \Gamma(\tilde u- u)(x)=0\big\}
\end{eqnarray}
as the 'Ricci tensor' at $x$ where
$\Gamma_2(u,v)=\frac12[{\LL}\Gamma(u,v)-\Gamma(u,{\LL}v)-\Gamma(v,{\LL}u)]$.
More generally, for $N\in[1,\infty]$ -- which will play the role of an upper bound for the dimension -- we consider the $N$-Ricci tensor
\begin{eqnarray*}
\RR_N(u,u)(x)=\inf\big\{ \Gamma_2(\tilde u,\tilde u)(x)-\frac1N \big(\LL \tilde u\big)^2(x): \ \Gamma(\tilde u- u)(x)=0\big\}.
\end{eqnarray*}
If $\LL$ is the Laplace-Beltrami operator on a complete $n$-dimensional Riemannian manifold then
\begin{eqnarray*}
\RR_N(u,v)=\Ric(\nabla u,\nabla v)
\end{eqnarray*}
for all $N\in[n,\infty]$ and all $u,v$.
One of the key results of this paper is \emph{Bochner's formula} in this general setting.
\begin{theorem}
Under mild regularity assumptions, for each $u$
\begin{eqnarray}\Gamma_2(u,u)=\RR(u,u)+\big\|\HH_u(.)\big\|_{HS}^2\end{eqnarray}
where $\HH_u(.)$ denotes the {Hessian} of $u$ and $\big\|\,.\,\big\|_{HS}$ its {Hilbert-Schmidt norm}.
\end{theorem}
A refined assertion states that
\begin{eqnarray*}
\Gamma_2(u,u)
&=&\RR_N(u,u)+\big\| \HH_u(.) \big\|^2_{HS}+
\frac1{N-n}\Big( \tr\, \HH_u(.) - \, \LL u\Big)^2
\end{eqnarray*}
whenever $N$ is larger than the vector space dimension of the 'tangent space' defined in terms of $\Gamma$.
In particular, this equality implies \emph{Bochner's inequality}
$\Gamma_2(u,u)
\ge\RR_N(u,u)+\frac1N \big(\LL u\big)^2$.
The latter in turn implies
the \emph{energetic curvature-dimension condition} or \emph{Bakry-Emery condition}
\begin{equation}\label{BE}
\Gamma_2(u,u)\ge \K\cdot\Gamma(u,u)+\frac1N ({\LL}u)^2
\end{equation}
for every function $\K: X\to\R$ which is a pointwise lower bound for the $N$-Ricci tensor in the sense that
$\RR_N\ge \K\cdot\Gamma$.
\bigskip
The second major topic of this paper is to study the transformation of the $N$-Ricci tensor and of
the Bakry-Emery condition
under each of the basic transformations of stochastic processes.
The transformations which we have in mind are:
\begin{itemize}
\item time change ${\LL}' u=f^2\,{\LL}u$
\item drift transformation ${\LL}' u={\LL}u+ \Gamma(h,u)$
\item metric transformation ${\LL}' u=f^2\,{\LL}u+ \Gamma(f^2,u)$
\item conformal transformation ${\LL}' u=f^2\,{\LL}u- \frac{N-2}2\Gamma(f^2,u)$
\item Doob transformation ${\LL}' u=\frac1{\rho}{\LL}(u\rho)$ provided ${\LL}\rho=0$ and $\rho>0$.
\end{itemize}
Indeed, we will study more general transformations of the form
\begin{equation}\label{Lsharp}
{\LL}' u=f^2\, {\LL}u+f\,\sum_{i=1}^r g_i \, \Gamma(h_i,u)
\end{equation}
which will cover all the previous examples and, beyond that, provides various new examples (including non-reversible ones).
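Indeed, by the chain rule $\Gamma(f^2,u)=2f\,\Gamma(f,u)$ and, in the case ${\LL}\rho=0$,
$\frac1{\rho}{\LL}(u\rho)={\LL}u+\frac2{\rho}\,\Gamma(\rho,u)={\LL}u+2\,\Gamma(\log\rho,u)$.
Hence each of the five examples above is of the form \eqref{Lsharp} with $r\le1$:
the time change with vanishing drift part; the drift transformation with $f=1$, $g_1=1$, $h_1=h$;
the metric transformation with $g_1=2$, $h_1=f$; the conformal transformation with
$g_1=-(N-2)$, $h_1=f$; and the Doob transformation with $f=1$, $g_1=2$, $h_1=\log\rho$
(provided $\log\rho$ is admissible in $\Gamma$).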
Our main result is
\begin{theorem}
For every $N'>N$ the $N'$-Ricci tensor for the operator $\LL'$ from \eqref{Lsharp}
can be pointwise estimated from below in terms of the $N$-Ricci tensor for $\LL$ and
the functions $f,g_i,h_i$.
For instance, in the case of the time change
\begin{eqnarray}
\RR'_{N'}\ge f^2\,\RR_N +
\Big(\frac12{\LL}f^2-2\,\Gamma(f)\Big)\cdot\Gamma- \frac{(N-2)(N'-2)}{N'-N}\,\Gamma(f,.)^2.
\end{eqnarray}
In the particular case of the conformal transformation, such an estimate is also available for $N'=N$ and in this case
it indeed is an \emph{equality}.
\end{theorem}
\begin{corollary}
Assume that ${\LL}$ satisfies the condition $\BE(\K,N)$ from \eqref{BE}. Then for every $N'>N$ the transformed operator ${\LL}'$ from \eqref{Lsharp}
satisfies the condition $\BE(\K',N')$ with a function $\K'$ explicitly given in terms of the functions $\K$ and $f,g_i,h_i$.
For instance, in the case of the time change
\begin{eqnarray}
\K'= f^2\,\K +
\frac12{\LL}f^2-N^*\,\Gamma(f,f).
\end{eqnarray}
with $N^*=2+\frac{[(N-2)(N'-2)]_+}{N'-N}$.
In the particular case of the conformal transformation, such a Bakry-Emery condition is also available for $N'=N$.
\end{corollary}
The Bakry-Emery condition is particularly useful and mostly applied in cases where the Markov diffusion is reversible w.r.t. some measure $m$ on the state space $X$ and where
${\LL}$ is the generator of the Dirichlet form
$\E(u)=\int \Gamma(u,u)\,dm$ on $L^2(X,m)$.
In this framework, most of the previous examples for transformations of generators can be obtained as generators ${\LL}'$ of
Dirichlet forms
\begin{equation}\label{dir-form}
\E'(u)=\int \Gamma(u,u)\,\phi^2\,dm\quad\mbox{on }L^2(X,\rho^2\,m)
\end{equation}
for suitable choices of the weights $\phi$ and $\rho$:
time change ($\phi=1$, $\rho=1/f$), drift transformation/Doob transformation ($\phi=\rho=e^{h/2}$), metric transformation ($\rho=1$, $\phi=f$), conformal transformation ($\rho=f^{-N/2}$, $\phi=f^{-N/2+1}$).
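Indeed, assuming sufficient regularity to integrate by parts, for $u,v\in\A$
\begin{eqnarray*}
\E'(u,v)&=&\int \Gamma(u,v)\,\phi^2\,dm
\ =\ -\int \frac1{\rho^2}\Big[\phi^2\,{\LL}u+\Gamma(\phi^2,u)\Big]\,v\,\rho^2\,dm\;,
\end{eqnarray*}
so that the generator of \eqref{dir-form} is
${\LL}'u=\frac1{\rho^2}\big[\phi^2\,{\LL}u+\Gamma(\phi^2,u)\big]$.
For the time change, for instance, $\phi=1$ and $\rho=1/f$ give ${\LL}'u=f^2\,{\LL}u$.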
The study of
Bakry-Emery estimates for such Dirichlet forms is closely related to the analysis of curvature bounds in the sense of Lott-Sturm-Villani for metric measure spaces.
We say that a metric measure space (mms for short) $(X,d,m)$ satisfies the \emph{entropic curvature-dimension condition} $\CD^e(\KK,N)$ for (extended) real parameters $\KK$ and $N$ if the Boltzmann entropy $S$ (with $S(\mu)=\int\rho\log\rho\,dm$ if $\mu=\rho\,m$) satisfies
\begin{equation}
D^2 S-\frac1N D S\otimes D S\ge \KK
\end{equation}
in a well-defined, weak sense on the $L^2$-Wasserstein space ${\mathcal P}_2(X)$.
We will analyze the behavior of the entropic curvature-dimension condition $\CD^e(\KK,N)$ under transformation of the data: changing the measure $m$ into the weighted measure $m'=e^v\,m$ and changing the length metric $d$ into
the weighted length metric $d'$ given by
\begin{equation}
d'(x,y)
=\inf\Big\{ \int_0^1 |\dot\gamma_t|\cdot e^{w(\gamma_t)}\,dt: \ \gamma:[0,1]\to X \mbox{ rectifiable, } \gamma_0=x, \gamma_1=y\Big\}.
\end{equation}
We will not treat this problem in full generality but assume that the mms $(X,d,m)$ is {'smooth'} and also that the weights $v$ and $w$ are 'smooth'.
\begin{theorem}
If $(X,d,m)$ satisfies the $\CD^e(\KK,N)$-condition then for each $N'>N$ the metric measure space $(X,d',m')$ satisfies the $\CD^e(\KK',N')$-condition
with a number $\KK'$ explicitly given in terms of $\KK$, $v$ and $w$.
For instance, if $v=0$
\begin{eqnarray*}
\KK'= \inf_X \, e^{-2w}\Big[\KK-
{\LL}w+2\Gamma(w)-
\sup_{u\in\A}
\frac1{\Gamma(u)}\Big( \big(\frac{N'\,N}{N'-N}+2\big)\Gamma(w,u)^2-2 {\HH}_{w}(u,u)\Big)\Big].
\end{eqnarray*}
If $w=0$ also $N=N'=\infty$ is admissible; if $w=\frac1N v$ also $N'=N$ is admissible.
\end{theorem}
From the very beginning, in theoretical studies and applications of the Bakry-Emery condition and of the Lott-Sturm-Villani condition, their transformation behavior under drift transformation was well studied and widely used \cite{Bak-Eme}, \cite{Stu1,Stu2,LV}.
As far as we know, however, until now no transformation result for the
Bakry-Emery condition or for the Lott-Sturm-Villani condition existed for any of the other transformations of diffusion operators.
\tableofcontents
\section{Diffusion Operators and Ricci Tensors}
\subsection{The $\Gamma$-Operator and the Hessian}
Our setting will be the following (cf. \cite{Bak94, Led2}): ${\LL}$ will be a linear operator, defined on an algebra $\A$ of functions on a set $X$ such that ${\LL}(\A)\subset\A$.
(No topological or measurability assumptions on $X$ are requested, no measure is involved.)
In terms of these data we define the square field operator
$\Gamma(f,g)=\frac12[{\LL}(fg)-f{\LL}g-g{\LL}f]$.
We assume that ${\LL}$ is a \emph{diffusion operator} in the sense that
\begin{itemize}
\item $\Gamma(f,f)\ge0$ for all $f\in\A$
\item $\psi(f_1,\ldots,f_r)\in\A$ for every $r$-tuple of functions $f_1,\ldots,f_r$ in $\A$ and every $C^\infty$-function $\psi:\R^r\to\R$ vanishing at the origin and
\begin{equation}\label{diff}
{\LL}\psi(f_1,\ldots,f_r)=\sum_{i=1}^r \psi_i(f_1,\ldots,f_r)\cdot {\LL}f_i + \sum_{i,j=1}^r \psi_{ij}(f_1,\ldots,f_r)\cdot \Gamma(f_i,f_j)
\end{equation}
where $\psi_i:=\frac{\partial}{\partial y_i}\psi$ and $\psi_{ij}:=\frac{\partial^2}{\partial y_i\,\partial y_j}\psi$.
\end{itemize}
We
define the Hessian of $f$ at a point $x\in X$ as a bilinear form on $\A$ by
$${\HH}_f(g,h)(x)=\frac12\Big[\Gamma\big(g,\Gamma(f,h)\big)+\Gamma\big(h,\Gamma(f,g)\big)-\Gamma\big(f,\Gamma(g,h)\big)\Big](x)$$
for $f,g,h\in\A$.
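To illustrate the definition: for ${\LL}=\Delta$ on $\R^n$, where $\Gamma(g,h)=\nabla g\cdot\nabla h$, all second derivatives of $g$ and $h$ cancel in the above combination and one is left with
\begin{equation*}
{\HH}_f(g,h)=\sum_{i,j=1}^n \frac{\partial^2 f}{\partial x_i\partial x_j}\,
\frac{\partial g}{\partial x_i}\,\frac{\partial h}{\partial x_j}
\ =\ D^2f(\nabla g,\nabla h)\;,
\end{equation*}
the classical Hessian of $f$ evaluated on the gradients of $g$ and $h$.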
We can always extend the definition of ${\LL}$ and $\Gamma$ to the algebra generated by the elements in $\A$ and the constant functions which leads to ${\LL}1=0$ and $\Gamma(1,f)=0$ for all $f$.
For later use, let us state the chain rule for ${\LL}f$ and ${\HH}_f$:
\begin{eqnarray*}\frac1p f^{-p}{\LL}f^p={\LL} \log f+p \Gamma(\log f),\qquad
\frac1p f^{-p}{\HH}_{f^p}(u,u)= {\HH}_{\log f}(u,u)+p \Gamma(\log f,u)^2
\end{eqnarray*}
for all $p\not=0$ and all $f\in\A$ with $\log f$ and $f^p\in\A$.
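Both identities follow from the diffusion property \eqref{diff}: with $u=\log f$ and $\psi(y)={\rm e}^{py}$ one obtains ${\LL}f^p={\LL}\psi(u)=p\,f^p\,{\LL}u+p^2f^p\,\Gamma(u)$, and similarly ${\HH}_{\psi(u)}(v,v)=\psi'(u)\,{\HH}_u(v,v)+\psi''(u)\,\Gamma(u,v)^2$ directly from the definition of the Hessian; dividing by $p\,f^p$ yields the stated formulas.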
\subsection{The $\Gamma_2$-Operator}
Of particular importance is the $\Gamma_2$-operator defined via iteration of the $\Gamma$-operator
$$\Gamma_2(f,g)=\frac12[{\LL}\Gamma(f,g)-\Gamma(f,{\LL}g)-\Gamma(g,{\LL}f)].$$
We put $\Gamma(f)=\Gamma(f,f), \Gamma_2(f)=\Gamma_2(f,f)$.
\begin{lemma}[\cite{Bak94},\cite{Led2}]\label{chain-rule} Given $f_1,\ldots,f_r$ in $\A$ and a $C^\infty$-function $\psi:\R^r\to\R$.
Then
$$\Gamma(\psi(f_1,\ldots,f_r))=\sum_{i,j=1}^r [\psi_i\cdot\psi_j](f_1,\ldots,f_r)\cdot \Gamma(f_i,f_j)$$ and
\begin{eqnarray*}
\Gamma_2(\psi(f_1,\ldots,f_r))&=&\sum_{i,j}[\psi_i\cdot \psi_j](f_1,\ldots,f_r)\cdot \Gamma_2(f_i,f_j)\\
&+& 2\sum_{i,j,k}[\psi_i\cdot \psi_{jk}](f_1,\ldots,f_r)\cdot {\HH}_{f_i}(f_j,f_k)\\
&
+&\sum_{i,j,k,l}[\psi_{ij}\cdot \psi_{kl}](f_1,\ldots,f_r)\cdot \Gamma(f_i,f_k)\cdot\Gamma(f_j,f_l).
\end{eqnarray*}
\end{lemma}
\begin{remark}
We say that the family $\{f_1,\ldots,f_n\}$ is an $n$-dimensional \emph{normal coordinate system} at a given point $x\in X$ if
for all $i,j,k\in \{1,\ldots,n\}$
$$\Gamma(f_i,f_j)(x)=\delta_{ij}, \quad {\HH}_{f_i}(f_j,f_k)(x)=0, \quad {\LL}f_i(x)=0.$$
Given such a system at the point $x\in X$, for each
$C^\infty$-function $\psi:\R^n\to\R$
we have
\begin{eqnarray}\label{bochner-formula}
\Gamma_2(\psi\circ f)(x) =
\Ric^\sharp(D\psi,D\psi)(f(x)) + \| D^2{\psi}\|^2_{HS}(f(x))
\end{eqnarray}
where
$\Ric^\sharp(D\psi,D\psi)(f):=\sum_{i,j=1}^n \psi_i(f)\, \psi_j(f)\, \Gamma_2(f_i,f_j)$
and $\|D^2{\psi}\|^2_{HS}:=\sum_{i,j=1}^n |\frac{\partial^2}{\partial y_i\,\partial y_j}\psi|^2$.
Note that
\begin{eqnarray*}\| D^2{\psi}\|^2_{HS}(f(x))&\ge&\frac1n \Big(\sum_{i=1}^n\frac{\partial^2}{\partial y_i^2}\psi\Big)^2\big(f(x)\big)\ =\ \frac1n\Big({\LL}\big(\psi\circ f\big)\Big)^2(x).
\end{eqnarray*}
\end{remark}
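For instance, if ${\LL}=\Delta$ on $X=\R^n$ then (suitably localized versions of) the coordinate functions $f_i(x)=x_i$ form an $n$-dimensional normal coordinate system at any given point. In this case $\Gamma_2(f_i,f_j)=0$, so \eqref{bochner-formula} reduces to
\begin{equation*}
\Gamma_2(\psi\circ f)(x)=\big\|D^2\psi\big\|^2_{HS}(f(x))\;,
\end{equation*}
i.e. there is no Ricci contribution -- in accordance with Proposition \ref{mani-ric} below.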
\subsection{The Ricci Tensor}
In terms of the $\Gamma_2$-operator we define the \emph{Ricci tensor} at the point $x\in X$ by
\begin{eqnarray}
\RR(f)(x)=\inf\big\{ \Gamma_2(\tilde f)(x): \ \tilde f\in\A, \ \Gamma(\tilde f-f)(x)=0\big\}
\end{eqnarray}
for $f\in\A$.
More generally, given any extended number $N\in[1,\infty]$ we define the \emph{$N$-Ricci tensor} at $x$ by
\begin{eqnarray}
\RR_N(f)(x)=\inf\big\{ \Gamma_2(\tilde f)(x)-\frac1N(\LL\tilde f)^2(x): \ \tilde f\in\A, \ \Gamma(\tilde f-f)(x)=0\big\}.
\end{eqnarray}
Obviously, for $N=\infty$ this yields the previously defined Ricci tensor.
Moreover, $\Gamma_2(f)\ge \RR_N(f)+\frac1N (\LL f)^2$ and $\RR(f)\ge \RR_N(f)$
for all $N$.
One might be tempted to believe that $\RR_N(f)\ge\RR(f)-\frac1N (\LL f)^2$ but this is not true in general, see
Proposition \ref{mani-ric} below.
\begin{lemma}
For every $N\in[1,\infty]$ and every $x\in X$ the $N$-Ricci tensor is a quadratic form on $\A$.
Thus by polarization it extends to a bilinear form $\RR_N(.,.)(x)$ on its \emph{domain}
$\Dom\big(\RR_N(x)\big)=\{f\in\A: \, \RR_Nf(x)>-\infty\}$.
\end{lemma}
\begin{proof}
It suffices to prove the parallelogram inequality. Given $x\in X$, $f,g\in\Dom\big(\RR_N(x)\big)$ and $\epsilon>0$, choose
$\tilde f, \tilde g\in\A$ with $\Gamma(f-\tilde f)(x)=\Gamma(g-\tilde g)(x)=0$ such that
$\RR_N(f)(x)\ge \RR_N^0(\tilde f)(x)-\epsilon$ and $\RR_N(g)(x)\ge \RR_N^0(\tilde g)(x)-\epsilon$. Here
we put
$$\RR_N^0(h)(x)=\Gamma_2(h)(x)-\frac1N(\LL h)^2(x)$$
which obviously defines a quadratic form in $h\in\A$. Thus
\begin{eqnarray*}
\RR_N(f+g)(x)+\RR_N(f-g)(x)&\le&
\RR_N^0(\tilde f+\tilde g)(x)+\RR_N^0(\tilde f-\tilde g)(x)\\
& =&
2\RR_N^0(\tilde f)(x)+2\RR_N^0(\tilde g)(x)\\
&\le&
2\RR_N( f)(x)+2\RR_N( g)(x)+4\epsilon.
\end{eqnarray*}
Since $\epsilon>0$ was arbitrary, this proves the claim.
\end{proof}
Let us illustrate the concept of the $N$-Ricci tensor with two basic examples.
\begin{proposition}\label{mani-ric} Let $L$ be the Laplace-Beltrami operator on a complete $n$-dimensional Riemannian manifold (and let $\A$ be the set of ${\mathcal C}^\infty$-functions vanishing at infinity or the set of ${\mathcal C}^\infty$-functions with compact supports -- this makes no difference as long as no measure is involved). Then for all $f\in\A$
\begin{equation} \RR_N(f)=\left\{
\begin{array}{ll}
\Ric(\nabla f,\nabla f)& \mbox{if }N\ge n,\\
-\infty &\mbox{if }N<n.
\end{array}\right.
\end{equation}
\end{proposition}
\begin{proof}
Bochner's well-known formula states that
\begin{equation}\label{boch-form}\Gamma_2(f)=\Ric(\nabla f,\nabla f)+\big\| D^2 f\big\|^2_{HS}
\end{equation}
(cf. formula \eqref{bochner-formula}) and
Bochner's inequality states that
\begin{equation*}\Gamma_2(f)\ge\Ric(\nabla f,\nabla f)+\frac1n (\Delta f)^2.
\end{equation*}
Given $f$ and $x$, applying the latter to $\tilde f\in\A$ with $\nabla f(x)=\nabla\tilde f(x)$ yields
$$\Gamma_2(\tilde f)(x)-\frac1N (\Delta \tilde f)^2(x)\ge \Ric(\nabla \tilde f,\nabla \tilde f)(x)=\Ric(\nabla f,\nabla f)(x)$$
and thus
$$\RR_Nf(x)\ge \Ric(\nabla f,\nabla f)(x).$$
Conversely, given $f$ and $x$ choose $f_0$ with $\nabla f_0(x)=\nabla f(x)$ and $D^2{f_0}(x)=0$.
Then \eqref{boch-form} implies
$$\Gamma_2(f_0)(x)-\frac1n (\Delta f_0)^2(x) =\Ric(\nabla f_0,\nabla f_0)(x)=\Ric(\nabla f,\nabla f)(x)$$
and therefore $\RR_N(f)(x)\le \Ric(\nabla f,\nabla f)(x)$.
To verify the second claim, for given $x\in X$ and $f\in\A$, consider the functions
$f_j=f_0+j\cdot v$ with $v=\frac12 d(x,.)^2$. (More precisely, choose $v\in\A$ with $v=\frac12 d(x,.)^2$ in a neighborhood of $x$.)
Note that $\nabla v(x)=0$, $\Delta v(x)=n$ and $\big\| D^2 v\big\|^2_{HS}(x)=n$.
Thus $\nabla f_j(x)=\nabla f(x)$ and
$$\Gamma_2(f_j)(x)-\frac1N(\Delta f_j)^2(x)=\Ric(\nabla f,\nabla f)(x)+j^2\,n-\frac{j^2\, n^2}N$$
for every $j\in\N$.
As $j\to\infty$ this proves $\RR_N(f)(x)=-\infty$ whenever $N<n$.
\end{proof}
\begin{proposition}\label{drift-ric} Let $\LL=\Delta+Z$ be the Laplace-Beltrami operator with drift on a complete $n$-dimensional Riemannian manifold where $Z$ is any smooth vector field. Then
\begin{equation}\RR_\infty(f)=\Ric(\nabla f,\nabla f)-(D Z)(\nabla f,\nabla f)
\end{equation}
for all $f\in\A$ and
\begin{equation} \RR_N(f)(x)=\left\{
\begin{array}{ll}
\RR_\infty(f)(x)
-\frac1{N-n}\big|Z\, f\big|^2(x)\quad&\mbox{if } N>n,\\
\RR_\infty(f)(x)&
\mbox{if }N=n \mbox{ and }(Z\, f)(x)=0,\\
-\infty&\mbox{if } N=n\mbox{ and }(Z\, f)(x)\not=0,\\
-\infty& \mbox{if }N<n.
\end{array}\right.
\end{equation}
\end{proposition}
\begin{proof}
The lower estimates for $\RR_N$ will follow from our general estimates in Theorem \ref{g2-est} (cf. also Corollary \ref{drift-be}).
For the upper estimate in the case $N<n$, for given $f$ and $x$ choose $f_j=f_0+j\cdot v$ as in the proof of the previous proposition.
Then
$$\big\| D^2 f_j\big\|^2_{HS}-\frac1N(\LL f_j)^2(x)=j^2\,n-\frac{1}N\big(j\,n+(Z\,f)(x)\big)^2\to-\infty$$
as $j\to\infty$. Similarly, if $N=n$ and $(Zf)(x)\not=0$ we choose $f_j=f_0+j\cdot(Zf)(x)\cdot v$ to obtain
$$\big\| D^2 f_j\big\|^2_{HS}-\frac1N(\LL f_j)^2(x)=j^2\,n(Zf)^2(x)-\frac{1}N(jn+1)^2(Z\,f)^2(x)\to-\infty$$
as $j\to\infty$.
If $N>n$ choose $\tilde f=f_0+\frac1{N-n}(Zf)(x)\cdot v$. Then $\nabla\tilde f(x)=\nabla f(x)$, $\Delta \tilde f(x)=\frac{n}{N-n}(Zf)(x)$ and
$\LL \tilde f(x)=\frac N{N-n}(Zf)(x)$. Moreover, $\big\| D^2 \tilde f\big\|^2_{HS}(x)=\frac n{(N-n)^2}(Zf)^2(x)$.
Thus
\begin{eqnarray*}
\RR_N(f)(x)&\le& \Gamma_2(\tilde f)-\frac1N (\LL \tilde f)^2(x)\\
&=&\Ric(\nabla f,\nabla f)(x)-(D Z)(\nabla f,\nabla f)(x)+\big\| D^2 \tilde f\big\|^2_{HS}(x)-\frac1N (\LL \tilde f)^2(x)\\
&=&\Ric(\nabla f,\nabla f)(x)-(D Z)(\nabla f,\nabla f)(x)-\frac1{N-n}(Zf)^2(x).
\end{eqnarray*}
Essentially the same argument works in the case $N=n$ and $(Zf)(x)=0$ if we put $\tilde f=f_0$.
\end{proof}
\begin{definition}
Given a function $\K: X\to\R$ and an extended number $N\in[1,\infty]$
we say that the operator $({\LL},\A)$ satisfies the
\emph{Bakry-Emery condition} $\BE(\K,N)$ if
\begin{eqnarray}
\Gamma_2(f)(x)\ge \K(x)\cdot\Gamma(f)(x)+\frac1N (\LL f)^2(x)
\end{eqnarray}
for all $f\in\A$ and all $x\in X$.
\end{definition}
The Bakry-Emery condition obviously is equivalent to the condition
\begin{eqnarray}\RR_N(f)(x)\ge \K(x)\cdot\Gamma(f)(x) \end{eqnarray}
for all $f\in\A$ and all $x\in X$.
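For instance, by Proposition \ref{mani-ric} the Laplace-Beltrami operator on a complete $n$-dimensional Riemannian manifold satisfies $\BE(\K,N)$ for every $N\ge n$ and every function $\K$ with $\Ric(\xi,\xi)\ge \K(x)\,|\xi|^2$ for all $x\in X$ and $\xi\in T_xX$. Similarly, by Proposition \ref{drift-ric} the operator $\Delta+Z$ satisfies $\BE(\K,\infty)$ for every pointwise lower bound $\K$ of $\Ric-DZ$.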
\section{Fundamental Estimates for $N$-Ricci Tensors and Hessians}
In the classical Riemannian setting
$\Gamma_2(f)=\Ric(\nabla f,\nabla f)+ \| D^2 f\|^2_{HS}$.
The Bakry-Emery condition $\BE(\K,n)$ relies on the bound for the Ricci tensor $\Ric(\nabla f,\nabla f)\ge k\cdot|\nabla f|^2$ and on the estimate for the Hilbert-Schmidt norm of the Hessian
$\| D^2 f\|^2_{HS}\ge\frac1n (\Delta f)^2$. A more refined analysis might be based on the identity
$$\| D^2 f\|^2_{HS}=\frac 1n (\Delta f)^2+\| D^2 f-\frac1n \Delta f\cdot \mathbb{I} \|^2_{HS}$$
and on estimates for the bilinear form ('traceless Hessian')
$$\Big( D^2 f-\frac1n \Delta f\cdot \mathbb{I}\Big)(g,h)=D^2 f (\nabla g, \nabla h)- \frac1n \Delta f \cdot \nabla g\,\nabla h.$$
For instance, one might use the estimate
$$\|a\|_{HS}^2\ge \frac n{n-1} \, \|a\|_{2,2}^2$$
valid for any traceless, symmetric $(n\times n)$-matrix $a=(a_{ij})_{1\le i,j\le n}$.
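To verify this estimate, diagonalize: if $\lambda_1,\ldots,\lambda_n$ denote the eigenvalues of $a$, ordered such that $|\lambda_1|=\|a\|_{2,2}$, then $\sum_{i\ge2}\lambda_i=-\lambda_1$ by tracelessness, and hence by Cauchy-Schwarz
\begin{equation*}
\|a\|_{HS}^2=\lambda_1^2+\sum_{i\ge2}\lambda_i^2
\ \ge\ \lambda_1^2+\frac1{n-1}\Big(\sum_{i\ge2}\lambda_i\Big)^2
\ =\ \frac n{n-1}\,\|a\|_{2,2}^2\;.
\end{equation*}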
In the abstract setting,
we are now going to prove a fundamental estimate for the
$N$-Ricci tensor in terms of
the bilinear form
$(g,h)\mapsto {\HH}_f(g,h)-\frac1N \Gamma(g,h)\cdot {\LL}f$.
Recall that by definition $\Gamma_2(f)\ge\RR_N(f)+\frac1N (\LL f)^2$.
\begin{theorem}\label{sharpGamma2} For all $N\in[1,\infty]$ and all $f,g,h\in\A$
\begin{equation}
\Gamma_2(f)\ge\RR_N(f)+\frac1N (\LL f)^2+2
\frac{\Big[ {\HH}_f(g,h)-\frac1N \Gamma(g,h)\cdot {\LL}f\Big]^2}{ \frac{N-2}N\Gamma(g,h)^2+\Gamma(g)\cdot \Gamma(h)}.
\end{equation}
\end{theorem}
Note that $\frac{N-2}N\Gamma(g,h)^2+ \Gamma(g)\cdot\Gamma(h)\ge0$ for any choice of $g,h$ and $N\ge1$
(and that $\ldots=0$ is excluded if $N>1$ and $\Gamma(g)\cdot \Gamma(h)\ne 0$).
The previous estimate allows for two interpretations or applications. Firstly, it provides a very precise and powerful estimate for the Hessian.
We will use it in the following form.
\begin{corollary}\label{Hess-est} For all $N\in[1,\infty]$ and for all $f,g,h\in\A$
\begin{equation*}
2\Big[ {\HH}_f(g,h)-\frac1N \Gamma(g,h)\cdot {\LL}f\Big]^2\le \Big[\Gamma_2(f)-\frac1N(\LL f)^2-\RR_N(f)\Big]\cdot \Big[ \frac{N-2}N\Gamma(g,h)^2+\Gamma(g)\cdot \Gamma(h)\Big].
\end{equation*}
and thus for each function $\rho: X\to\R$
\begin{eqnarray*}2\rho\Big|{\HH}_f(g,h)-\frac1N \Gamma(g,h)\cdot {\LL}f\Big|&\le& \rho^2\cdot \Big[\Gamma_2(f)-\frac1N(\LL f)^2-\RR_N(f)\Big] \\ &&+\frac12 \Big[ \frac{N-2}N\Gamma(g,h)^2+\Gamma(g)\cdot \Gamma(h)\Big].
\end{eqnarray*}
\end{corollary}
This will be the key ingredient for the estimate of the Ricci tensor for a transformed operator.
Moreover, it implies that the Hessian ${\HH}_f(.)(x)$ is well-defined on equivalence classes of functions w.r.t. vanishing $\Gamma(.)(x)$.
\begin{corollary}\label{abs-cont-hess} For all $x\in X$, all $f\in\Dom\big(\RR(x)\big)$ and all $g,h\in\A$
$$\Gamma(g)(x)=0\quad\Rightarrow\quad{\HH}_f(g,h)(x)=0.$$
\end{corollary}
Secondly, the estimate of the theorem leads to the following refined estimate for the Ricci tensor as a consequence of which one obtains a \emph{self-improvement property} of the Bakry-Emery condition $\BE(\K,N)$.
\begin{corollary} For all $N\in[1,\infty]$ and all $f\in\A$
\begin{eqnarray}
\Gamma_2(f)&\ge&\RR_N(f)+\frac1N (\LL f)^2+
\frac N{N-1}\, \big\| {\HH}_f(.)-\frac1N {\LL}f\cdot\Gamma(.)\big\|_{\Gamma}^2\\
&\ge&
\RR_N(f)+\frac1N (\LL f)^2+
\frac N{N-1}\Big[ \frac1{\Gamma(f)} {\HH}_f(f,f)-\frac1N {\LL}f\Big]^2
\end{eqnarray}
where $\|B\|_{\Gamma}(x)=\sup\{|B(g,g)|(x):\ g\in\A, \Gamma(g)(x)\le 1\}$ denotes the norm of a bilinear form $B(.)(x)$ on $\A$ w.r.t. the seminorm $\Gamma(.)(x)$.
In particular, for $N=\infty$
\begin{eqnarray}
\Gamma_2(f)&\ge&\RR(f)+ \big\| {\HH}_f(.)\big\|_{\Gamma}^2.
\end{eqnarray}
\end{corollary}
This second aspect will be taken up (and further developed) in the subsequent Sections 4 and 5. The first aspect will be the key ingredient for the results on Ricci tensors for transformed operators in Sections 6-10. These results will only rely on Theorem \ref{sharpGamma2}
and not on the more sophisticated results of the next two sections.
\begin{proof}[Proof of the theorem]
We will consider functions of the form ${\tilde f}=\psi(f,g,h)\in\A$ for smooth $\psi:\R^3\to\R$ and use the fact that
$\Gamma_2(\tilde f)\ge\RR_N(\tilde f)+\frac1N (\LL \tilde f)^2$.
At each point $x\in X$ we choose the optimal $\psi$.
Indeed, it suffices to consider
${\tilde f}=f+t\,\big[g-g(x)\big]\cdot\big[h-h(x)\big]$ and to optimize in $t\in\R$ (for each given $x\in X$).
More precisely, we use equation \eqref{diff} and the assertions of Lemma \ref{chain-rule} with
$r=3$, $f_1=f$, $f_2=g$, $f_3=h$. For given $x\in X$ and $t\in\R$ we choose $\psi$ such that $\psi_1=1$, $\psi_{23}=t$ and
$\psi_2=\psi_3=\psi_{11}=\psi_{22}=\psi_{33}=\psi_{12}=\psi_{13}=0$ at the particular point $(f(x),g(x),h(x))\in\R^3$.
Then at the point $x\in X$
\begin{eqnarray*}
&&\Gamma(\tilde f-f)=0,\quad {\LL}\tilde f={\LL}f+2t\Gamma(g,h),\\
&&\Gamma_2(\tilde f)=\Gamma_2(f)+4t {\HH}_f(g,h)+2t^2\Big[\Gamma(g,h)^2+\Gamma(g)\cdot\Gamma(h)\Big].
\end{eqnarray*}
Thus $\RR_N(\tilde f)=\RR_N(f)$ and
\begin{eqnarray*}
0&\le&
\Gamma_2(\tilde f)- \RR_N(\tilde f)-\frac1N ({\LL}\tilde f)^2\\
&=& \Gamma_2(f)- \RR_N(f)-\frac1N ({\LL}f)^2\\
&&+ 4t \Big[ {\HH}_f(g,h)-\frac1N {\LL}f\cdot \Gamma(g,h)\Big]\\
&&+2t^2\Big[ \frac{N-2}N\Gamma(g,h)^2+ \Gamma(g)\cdot\Gamma(h)\Big]\\
&=&\Gamma_2(f)- \RR_N(f)-\frac1N ({\LL}f)^2+4tb+2t^2a.
\end{eqnarray*}
Choosing $t=-\frac b{a}$
yields
$$0\le \Gamma_2(f)- \RR_N(f)-\frac1N ({\LL}f)^2-2\frac{b^2}{a}$$
(at the given point $x\in X$).
This is the claim.
\end{proof}
\section{Self-Improvement Property of $\Gamma_2$-Estimates}
Given $x\in X$, we denote by $\A_x^1$ the space of equivalence classes in $\A$ w.r.t. the seminorm $\Gamma(.)(x)$ and by
$\dim\, (\A,\Gamma)(x)$ the dimension of the inner product space $(\A_x^1,\Gamma(.)(x))$.
For a bilinear form $B$ on $\A_x^1$ we define its
\emph{operator norm} by $\|B\|_\Gamma(x)=\sup\{|B(v,v)|:\ v\in\A_x^1, \Gamma(v)(x)\le1\}$ and its
\emph{Hilbert-Schmidt norm} by
\begin{equation*}\big\|B\big\|_{HS}(x)=\sup\Big\{
\Big(\sum_{i,j=1}^r B(e_i,e_j)^2\Big)^{1/2}:\, r\in\N, e_1,\ldots,e_r\in\A_x^1, \Gamma(e_i,e_j)(x)=\delta_{ij}\Big\}
\end{equation*}
provided $\dim(\A,\Gamma)(x)>0$.
If $\dim(\A,\Gamma)(x)=0$ we put $\big\|B\big\|_{HS}(x)=0$. Obviously, in any case
$\|B\|_\Gamma(x)\le \big\|B\big\|_{HS}(x)$.
For any $(n\times n)$-matrix $B$ we put $\tr B=\sum_{i=1}^n B_{ii}$ and $B^o_{ij}= B_{ij}-\frac1n\tr B\,\delta_{ij}$.
Note that
$\sum_{i,j} (B_{ij})^2=\sum_{i,j} (B_{ij}^o)^2+ \frac1n (\tr B)^2$.
Similar definitions and results apply for any bilinear form on $\A_x^1$ provided $\dim\, (\A,\Gamma)(x)=n$.
\begin{theorem}\label{super-self}
\begin{itemize}
\item[(i)]
For all $f\in\A$
\begin{equation}\label{super-eins}
\Gamma_2(f)\ge \RR(f)+\big\| \HH_f \big\|^2_{HS}.
\end{equation}
In particular, $\RR(f)(x)=-\infty$ if $\big\| \HH_f \big\|_{HS}(x)=+\infty$.
\item[(ii)]
Moreover, for $x\in X$ and $N\in[1,\infty)$ with $N\ge n(x):=\dim\, (\A,\Gamma)(x)$
\begin{eqnarray}
\Gamma_2(f)(x)&\ge&
\RR_N(f)(x)+\big\| \HH_f(.) \big\|^2_{HS}(x)+
\frac1{N-n(x)}\Big( \tr\, \HH_f(.) - \, \LL f\Big)^2(x)\label{super-zwei0}\\
&=&
\RR_N(f)(x)+\frac1N \big(\LL f\big)^2(x)+
\big\| \HH_f(.)-\frac1N\, \LL f\cdot\Gamma(.) \big\|^2_{HS}(x)\nonumber\\
&&\qquad\qquad\qquad+
\frac1{N-n(x)}\Big( \tr\, \HH_f(.) - \frac nN\, \LL f\Big)^2(x).\label{super-zwei}
\end{eqnarray}
In the case $N=n(x)$, the respective last terms on the RHS here should be understood as the limit $N\searrow n(x)$ (which is either $0$ or $+\infty$).
\item[(iii)]
Finally,
\begin{equation*}
\RR_N(f)(x)=-\infty
\end{equation*}
whenever $N<\dim\, (\A,\Gamma)(x)$ or if $N=\dim\, (\A,\Gamma)(x)$ and $\tr \HH_f(.)(x)\not= \LL f(x)$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) Given $f$ and $x$, let us first consider the case $\big\| \HH_f \big\|_{HS}(x)<\infty$. Here for any $\epsilon>0$, choose $r\in\N$ and $e_1,\ldots,e_r\in\A$ with $\Gamma(e_i,e_j)(x)=\delta_{ij}$ and
$$\big\|\HH_f \big\|^2_{HS}(x)\le \sum_{i,j=1}^r \HH_f^2(e_i,e_j)(x)+ \epsilon.$$
Consider $\tilde f\in\A$ of the form
$\tilde f=f+\psi\circ e$ for smooth $\psi:\R^r\to\R$ with $\psi_i(e(x))=0$ for all $i$ where $e(x)=(e_1,\ldots,e_r)(x)$.
Then by the chain rule (see Lemma \ref{chain-rule}, applied to $\tilde\psi(y_0,y_1,\ldots,y_r)=y_0+\psi(y_1,\ldots,y_r)$ and $\tilde e=(f,e_1,\ldots,e_r)$), $\Gamma(\tilde f,.)(x)=\Gamma(f,.)(x)$ and
\begin{eqnarray}\label{g2-hf}
\Gamma_2\big(\tilde f\big)(x)&=&
\Gamma_2\big(f\big)(x)+ 2\sum_{i,j=1}^r \psi_{ij}\big(e(x)\big)\cdot \HH_f\big(e_i,e_j\big)(x)+ \sum_{i,j=1}^r\psi_{ij}^2\big(e(x)\big)
\nonumber\\
&=&
\Gamma_2\big(f\big)(x)+ \sum_{i,j=1}^r\Big(\HH_f\big(e_i,e_j\big)(x)+ \psi_{ij}\big(e(x)\big)\Big)^2-
\sum_{i,j=1}^r \HH_f^2\big(e_i,e_j\big)(x).
\end{eqnarray}
Choosing $\psi$ such that $\psi_{ij}\big(e(x)\big)=-\HH_f\big(e_i,e_j\big)(x)$ for all $i,j$ leads to
\begin{eqnarray*}
\Gamma_2\big(\tilde f\big)(x)&=&
\Gamma_2\big(f\big)(x)- \sum_{i,j=1}^r \HH_f^2\big(e_i,e_j\big)(x).
\end{eqnarray*}
(For instance, the choice
$\psi(y_1,\ldots,y_n)=-\frac12 \sum_{i,j=1}^n \HH_f\big(e_i,e_j\big)(x)\cdot
\big(y_i-e_i(x)\big)\cdot\big(y_j-e_j(x)\big)$
will do the job.) Thus
\begin{eqnarray*}
\RR\big( f\big)(x)&\le&
\Gamma_2\big(f\big)(x)- \sum_{i,j=1}^r \HH_f^2\big(e_i,e_j\big)(x)\\
&\le&
\Gamma_2\big(f\big)(x)- \big\|\HH_f \big\|^2_{HS}(x)+\epsilon.
\end{eqnarray*}
Since $\epsilon>0$ was arbitrary this proves the claim.
Now let us consider the case $\big\| \HH_f \big\|_{HS}(x)=+\infty$. Then for any $C>0$ there exist
$r\in\N$ and $e_1,\ldots,e_r\in\A$ with $\Gamma(e_i,e_j)(x)=\delta_{ij}$ and
$\sum_{i,j=1}^r \HH_f^2(e_i,e_j)(x)\ge C$.
With the same argument as before
$\RR\big( f\big)(x)\le
\Gamma_2\big(f\big)(x)- \sum_{i,j=1}^r \HH_f^2\big(e_i,e_j\big)(x)
\le
\Gamma_2\big(f\big)(x)- C$.
Since $C>0$ was arbitrary this proves the claim.
(ii) Let $x\in X$, $f\in\A$ and $N> n(x)=\dim\, (\A,\Gamma)(x)$ be given.
Choose $e_1,\ldots,e_n\in\A$ with $\Gamma(e_i,e_j)(x)=\delta_{ij}$.
Again we will consider $\tilde f\in\A$ of the form
$\tilde f=f+\psi\circ e$ for smooth $\psi:\R^n\to\R$ with $\psi_i(e(x))=0$ for all $i$.
According to the chain rule for $\LL$ (i.e. the property of being a 'diffusion operator')
$$\LL \tilde f(x)=\LL f(x)+(\Delta \psi)\big(e(x)\big)\quad\mbox{with}\quad\Delta=\sum_{i=1}^n \frac{\partial^2}{\partial y_i^2}.$$
Thus
\begin{eqnarray}\label{eins}
\lefteqn{\Big[
\Gamma_2\big(\tilde f\big)(x)-\frac1N \big(\LL \tilde f\big)^2(x)\Big]
-
\Gamma_2\big( f\big)(x)}\nonumber\\
&=&
2\sum_{i,j} \psi_{ij}\big(e(x)\big)\cdot \HH_f\big(e_i,e_j\big)(x)+ \sum_{i,j}\psi_{ij}^2\big(e(x)\big)
-\frac1N \Big(\LL f(x)+(\Delta\psi)(e(x))\Big)^2 \nonumber
\\
&=&
\sum_{i,j}\Big(\HH_f\big(e_i,e_j\big)(x)+ \psi_{ij}\big(e(x)\big)\Big)^2\nonumber\\
&&-
\sum_{i,j} \HH_f^2\big(e_i,e_j\big)(x)- \frac1N \Big(\LL f(x)+(\Delta\psi)(e(x))\Big)^2 \nonumber
\\
&=&
\sum_{i,j}\Big(\HH_f^o\big(e_i,e_j\big)(x)+ \psi_{ij}^o\big(e(x)\big)\Big)^2
+\frac1n\Big( \tr \HH_f(.)(x)+(\Delta \psi)(e(x))\Big)^2\nonumber\\
&&
-
\sum_{i,j} \HH_f^2\big(e_i,e_j\big)(x)- \frac1N \Big(\LL f(x)+(\Delta\psi)(e(x))\Big)^2.
\end{eqnarray}
Now let us choose $\psi$ such that
\begin{equation}\psi_{ij}^o\big(e(x)\big)=-\HH_f^o\big(e_i,e_j\big)(x)\label{zwei}\end{equation}
for all $i,j=1\ldots,n$. Moreover, we may require that
\begin{equation}(\Delta\psi)\big(e(x)\big)=-\frac{N}{N-n}\big(\tr \HH_f-\frac nN\LL f\big)(x).\label{drei}\end{equation}
For instance, the choice
\begin{eqnarray*}\psi(y_1,\ldots,y_n)&=&-\frac12 \sum_{i,j=1}^n \HH_f^o\big(e_i,e_j\big)(x)\cdot
\big(y_i-e_i(x)\big)\cdot\big(y_j-e_j(x)\big)\\
&&
-\frac{N}{N-n}\big(\tr \HH_f-\frac nN\LL f\big)(x)\cdot
\frac1{2n}\sum_{i=1}^n
\big(y_i-e_i(x)\big)^2
\end{eqnarray*}
will do the job.
Combining \eqref{eins}, \eqref{zwei} and \eqref{drei} yields
\begin{eqnarray*}
\lefteqn{\Big[
\Gamma_2\big(\tilde f\big)(x)-\frac1N \big(\LL \tilde f\big)^2(x)\Big]
-
\Gamma_2\big( f\big)(x)}\nonumber\\
&=&
- \sum_{i,j} \HH_f^2\big(e_i,e_j\big)(x)
-\frac1{N-n}\big(\tr \HH_f-\LL f\big)^2(x).
\end{eqnarray*}
Thus
\begin{eqnarray*}
\RR_N(f)(x)&\le&
\Gamma_2\big( f\big)(x)
-
\sum_{i,j} \HH_f^2\big(e_i,e_j\big)(x)
-\frac1{N-n}\big(\tr \HH_f-\LL f\big)^2(x).
\end{eqnarray*}
This is the first claim.
(A similar argument proves the assertion in the case $N=n$.)
To see the equality \eqref{super-zwei} note that for each real symmetric $(n\times n)$-matrix $B$ and each scalar $a$
$$\big\|B\big\|^2_{HS}=\big\|B-\frac aN {\mathbb I} \big\|^2_{HS}+\frac1n \big( \tr\, B\big)^2-\frac1n \big( \tr\, B-\frac nNa\big)^2$$
(with ${\mathbb I}$ being the unit matrix, i.e. ${\mathbb I}_{jk}=\delta_{jk}$) and that for all $a,b\in\R$
$$\frac1nb^2-\frac1n(b-\frac nNa)^2+\frac1{N-n}(b-a)^2=\frac1N a^2+\frac1{N-n}(b-\frac nN a)^2.$$
(iii)
In the case $N<n$ we consider the sequence of functions $\psi^{(k)}$ given by
\begin{eqnarray*}\psi^{(k)}(y_1,\ldots,y_n)=-\frac12 \sum_{i,j=1}^n \HH_f^o\big(e_i,e_j\big)(x)\cdot
\big(y_i-e_i(x)\big)\cdot\big(y_j-e_j(x)\big)
+
\frac k{2n}\sum_{i=1}^n
\big(y_i-e_i(x)\big)^2.
\end{eqnarray*}
Then for each $k\in\N$, equations \eqref{eins} and \eqref{zwei} hold true as before with $f^{(k)}:=f+\psi^{(k)}\circ e$ in the place of $\tilde f$ and $\psi^{(k)}$ in the place of $\psi$
whereas instead of \eqref{drei} we obtain
\begin{equation*}(\Delta\psi^{(k)})\big(e(x)\big)=k.\label{drei'}\end{equation*}
Thus
\begin{eqnarray*}
\RR_N(f)(x)\le \inf_{k\in\N}
\Big[
\Gamma_2\big( f^{(k)}\big)(x)-\frac1N \big(\LL f^{(k)}\big)^2(x)\Big]=-\infty.
\end{eqnarray*}
\end{proof}
Theorem \ref{super-self} implies a \emph{strong self-improvement property} of the Bakry-Emery condition.
\begin{corollary}\label{be-N-imp} The Bakry-Emery condition $\BE(\K,N)$ -- that is, the condition $\Gamma_2(f)\ge\K\,\Gamma(f)+\frac1N (\LL f)^2$ for all $f\in \A$ -- implies that $N\ge\dim(\A,\Gamma)(.)$ everywhere on $X$ and that the following scale of 'improved $\BE(\K,N)$-inequalities' hold true for all $f\in\A$
\begin{eqnarray}
\Gamma_2(f)
&\ge&
\K\,\Gamma(f)+\frac1N (\LL f)^2+
\big\| \HH_f(.)-\frac1N \LL f\cdot\Gamma(.)\big\|_{HS}^2+\frac1{N-n}\big(\tr\,\HH_f-\frac nN\LL f\big)^2\label{first}\\
&\ge&
\K\,\Gamma(f)+\frac1N (\LL f)^2 +\frac N{N-1} \, \big\| {\HH}_f(.)-\frac1N {\LL}f\cdot\Gamma(.)\big\|_{\Gamma}^2\label{second}
\\
&\ge&
\label{Ba-Qi}
\K\,\Gamma(f)+\frac1N (\LL f)^2 +\frac N{N-1}\Big[ \frac1{\Gamma(f)} {\HH}_f(f,f)-\frac1N {\LL}f\Big]^2
\end{eqnarray}
pointwise on $X$ where $n(x)=\dim(\A,\Gamma)(x)$.
\end{corollary}
\begin{proof}
It only remains to prove the step from \eqref{first} to \eqref{second}.
This follows from
\begin{eqnarray*}
\lefteqn{\big\| \HH_f(.)-\frac1N \LL f\cdot\Gamma(.)\big\|_{HS}^2}\\
&=&
\big\| \HH_f^o(.)\big\|_{HS}^2+\frac1n \big(\tr\,\HH_f-\frac nN\LL f\big)^2\\
&\ge&\frac n{n-1}\big\| \HH_f^o(.)\big\|_{\Gamma}^2+\frac1n \big(\tr\,\HH_f-\frac nN\LL f\big)^2\\
&\ge&\frac n{n-1}\Big[\big\| \HH_f(.)-\frac1N \LL f\cdot\Gamma(.)\big\|_{\Gamma} - \frac1n \big(\tr\,\HH_f-\frac nN\LL f\big)\Big]^2+\frac1n \big(\tr\,\HH_f-\frac nN\LL f\big)^2\\
&\ge&\frac N{N-1}\big\| \HH_f(.)-\frac1N \LL f\cdot\Gamma(.)\big\|_{\Gamma}^2 - \frac1{N-n} \big(\tr\,\HH_f-\frac nN\LL f\big)^2.
\end{eqnarray*}
Here we used the fact that
$\|B\|_{HS}^2\ge \frac n{n-1} \, \|B\|_{2,2}^2$
for any traceless, symmetric $(n\times n)$-matrix $B$ and that $\frac n{n-1}(a-\frac1n b)^2+(\frac 1n+\frac1{N-n})b^2\ge\frac N{N-1}a^2$ for any pair of numbers $a,b$.
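The second elementary inequality follows by minimizing its left hand side in $b$: since $\frac1n+\frac1{N-n}=\frac N{n(N-n)}$, the minimum is attained at $b=\frac{N-n}{N-1}\,a$ and equals $\frac N{N-1}\,a^2$.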
\end{proof}
The last version \eqref{Ba-Qi} is the 'extended $\BE(\K,N)$-inequality' derived in \cite{Bak-Qia}. See also \cite{Sav} for recent generalizations (with $N=\infty$) to metric measure spaces.
Let us reformulate the previous result for the case $N=\infty$.
\begin{corollary}\label{be-8-imp}
The Bakry-Emery condition $\BE(\K,\infty)$ -- that is, the condition $\Gamma_2(f)\ge\K\,\Gamma(f)$ for all $f\in \A$ -- implies the following scale of 'improved $\BE(\K,\infty)$-inequalities' for all $f\in\A$
\begin{eqnarray}
\Gamma_2(f)
&\ge&
\K\,\Gamma(f)+
\big\| \HH_f(.)\big\|_{HS}^2\\
&\ge&
\K\,\Gamma(f)+ \big\| {\HH}_f(.)\big\|_{\Gamma}^2\\
&\ge&
\K\,\Gamma(f)+\Big[ \frac1{\Gamma(f)} {\HH}_f(f,f)\Big]^2.
\end{eqnarray}
\end{corollary}
The importance of the
inequalities \eqref{super-eins} and \eqref{super-zwei0}
-- which lead to the
very first inequality in Corollaries \ref{be-N-imp} and \ref{be-8-imp} --
will become evident in the next section where under mild assumptions we prove that these are indeed \emph{equalities}.
\section{The Bochner Formula}
For the sequel, fix $x\in X$ and a family $\{e_i:\, i\in I\}\subset\A$ which is orthonormal w.r.t. $\Gamma(.)(x)$, i.e.
$\Gamma(e_i,e_j)(x)=\delta_{ij}$. Let $I$ be either $\N$ or $\{1,\ldots,n\}$ for some $n\in\N$.
We say that the system $\{e_i:\, i\in I\}\subset\A$ is \emph{regular} if
for all $f,g\in\A$ with $\Gamma(g)(x)=0$ there exist a sequence of smooth $\psi^r:\R^r\to\R$ with
$\frac{\partial}{\partial y_i}\psi^r(e_1,\ldots,e_r)(x)=0$ for all $i\in I$ such that
\begin{equation}\label{reg1}
\Gamma_2(f+g_r)(x)\to\Gamma_2(f+g)(x)\qquad\mbox{as }r\to\infty
\end{equation}
where $g_r=\psi^r\circ(e_1,\ldots,e_r)$ and -- if $\dim(\A,\Gamma)(x)<\infty$ -- in addition
\begin{equation}\label{reg2}
\LL g_r(x)\to\LL g(x)\qquad\mbox{as }r\to\infty.
\end{equation}
\begin{theorem} Let $x\in X$ be given as well as a regular orthonormal system $\{e_i:\, i\in I\}$.
\begin{itemize}
\item[(i)] Then for all $f\in\A$
\begin{equation}
\Gamma_2(f)(x)= \RR(f)(x)+\big\| \HH_f \big\|^2_{HS}(x).
\end{equation}
\item[(ii)] Moreover, for $N\ge n(x):=\dim\, (\A,\Gamma)(x)$
\begin{eqnarray}
\Gamma_2(f)(x)
&=&\RR_N(f)(x)+\big\| \HH_f(.) \big\|^2_{HS}(x)+
\frac1{N-n(x)}\Big( \tr\, \HH_f(.) - \, \LL f\Big)^2(x)\\
&=&\RR_N(f)(x)+\frac1N \big(\LL f\big)^2(x)+
\big\| \HH_f(.)-\frac1N\, \LL f\cdot\Gamma(.) \big\|^2_{HS}(x)\nonumber
\\
&&\qquad\qquad\qquad+
\frac1{N-n(x)}\Big( \tr\, \HH_f(.) - \frac nN\, \LL f\Big)^2(x).
\end{eqnarray}
In the case $N=n(x)$, the last term on the RHS here should be understood as the limit $N\searrow n(x)$ (which is either $0$ or $+\infty$).
\item[(iii)] $\RR_N(f)(x)>-\infty$ if and only if $N>\dim\, (\A,\Gamma)(x)$ or if $N=\dim\, (\A,\Gamma)(x)$ and $\tr\,\HH_f(x)=\LL f(x)$.
\end{itemize}
\end{theorem}
\begin{proof}
(i)
Let $f,g\in\A$ be given with $\Gamma(g)(x)=0$.
Since $\{e_i:\, i\in I\}$ is regular at $x$ we may approximate $g$ by $g_r\in\A$ of the form
$ g_r=\psi^r\circ (e_1,\ldots,e_r)$ for smooth $\psi^r:\R^r\to\R$ with $\psi^r_i(e(x))=0$ for all $i$.
(For $r>n(x)=\dim(\A,\Gamma)(x)$ the function $\psi^r$ should be a function of the first $n$ coordinates.)
Recall from \eqref{g2-hf}
\begin{eqnarray*}
\Gamma_2\big(f+g_r\big)(x)
&=&
\Gamma_2\big(f\big)(x)+ \sum_{i,j=1}^r\Big(\HH_f\big(e_i,e_j\big)(x)+ \psi_{ij}\big(e(x)\big)\Big)^2-
\sum_{i,j=1}^r \HH_f^2\big(e_i,e_j\big)(x)\\
&\ge&
\Gamma_2\big(f\big)(x)-
\sum_{i,j=1}^r \HH_f^2\big(e_i,e_j\big)(x)\\
&\ge&
\Gamma_2\big(f\big)(x)-
\big\| \HH_f \big\|^2_{HS}(x).
\end{eqnarray*}
The regularity assumption thus implies
$\Gamma_2\big(f+g\big)(x)\ge\Gamma_2\big(f\big)(x)-
\big\| \HH_f \big\|^2_{HS}(x)$ for all $g\in\A$ with $\Gamma(g)(x)=0$.
Therefore, $\RR(f)(x)\ge \Gamma_2\big(f\big)(x)-\big\| \HH_f \big\|^2_{HS}(x)$.
Together with the upper estimate from Theorem \ref{super-self} this proves the claim.
(ii) Now let us assume $I=\{1,\ldots,n\}$ with $n=\dim (\A,\Gamma)(x)<\infty$
and let us approximate $g$ by $g_r\in\A$ of the form
$ g_r=\psi^r\circ (e_1,\ldots,e_n)$ for smooth $\psi^r:\R^n\to\R$ with $\psi^r_i(e(x))=0$ for all $i$.
Recall from \eqref{eins}
\begin{eqnarray*}
\lefteqn{\Big[
\Gamma_2\big(f+g_r\big)(x)-\frac1N \big(\LL (f+g_r)\big)^2(x)\Big]
-
\Gamma_2\big( f\big)(x)}\nonumber\\
&=&
\sum_{i,j=1}^n\Big(\HH_f^o\big(e_i,e_j\big)(x)+ (\psi_{ij}^r)^o\big(e(x)\big)\Big)^2
+\frac1n\Big( \tr \HH_f(.)(x)+(\Delta \psi^r)(e(x))\Big)^2\nonumber\\
&&
-
\sum_{i,j=1}^n \HH_f^2\big(e_i,e_j\big)(x)- \frac1N \Big(\LL f(x)+(\Delta\psi^r)(e(x))\Big)^2\\
&\ge&
- \ \sum_{j,k=1}^n \HH_{f}^2(e_j,e_k)(x)
-\frac1{N-n} \Big( \tr \HH_f(.)(x)- \LL f(x)\Big)^2.
\end{eqnarray*}
Passing to the limit $r\to\infty$ this yields
$$\Big[
\Gamma_2\big(f+g\big)(x)-\frac1N \big(\LL (f+g)\big)^2(x)\Big]
-
\Gamma_2\big( f\big)(x)\ge
- \big\| \HH_f \big\|^2_{HS}(x)
-\frac1{N-n} \Big( \tr \HH_f(.)(x)- \LL f(x)\Big)^2$$
for every $g\in\A$ with $\Gamma(g)(x)=0$. In other words,
$$\RR_N(f)(x)\ge \Gamma_2\big(f\big)(x)-\big\| \HH_f \big\|^2_{HS}(x)-\frac1{N-n} \Big( \tr \HH_f(.)(x)- \LL f(x)\Big)^2.$$
Together with the upper estimate from Theorem \ref{super-self} this proves the claim.
\end{proof}
Let us add some brief discussion on the regularity assumption in the previous theorem. (This assumption will not be used at any other place in this paper.)
\begin{lemma}
Assume that for given $x\in X$ an orthonormal system $\{e_i:\, i\in I\}$ satisfies $\big\| \HH_f \big\|^2_{HS}(x)<\infty$ for all $f\in\A$ and
\begin{equation}\label{g2=hess}
\Gamma_2(f,g)(x)=\big\langle \HH_f,\HH_g\big\rangle_{HS}(x)
\end{equation}
for all $g\in\A$ with $\Gamma(g)(x)=0$.
Then $\{e_i:\, i\in I\}$ satisfies condition \eqref{reg1} in the definition of 'regularity'.
If in addition
\begin{equation}\label{L=tr}
\LL g(x)=\tr\, \HH_g(x)
\end{equation}
for all $g\in\A$ with $\Gamma(g)(x)=0$
then $\{e_i:\, i\in I\}$ satisfies condition \eqref{reg2} in the definition of 'regularity'.
\end{lemma}
Note that \eqref{g2=hess} and \eqref{L=tr} are always satisfied if $f=\phi\circ (e_1,\ldots,e_r)$ and $g=\psi\circ (e_1,\ldots,e_r)$ for some smooth
$\phi, \psi:\R^r\to\R$. Indeed,
Lemma \ref{chain-rule} and Lemma \ref{hess-calc} from below imply
\begin{eqnarray*}
\Gamma_2(f,g)(x)&=&\sum_{i,j,k}(\phi_i\,\psi_{jk})(e(x))\cdot \HH_{e_i}(e_j,e_k)(x)+\sum_{j,k}(\phi_{jk}\,\psi_{jk})(e(x))\\
&=& \sum_{j,k}\HH_f(e_j,e_k)(x)\cdot \HH_g(e_j,e_k)(x)
\end{eqnarray*}
and
$$\LL g(x)=\sum_{i}\psi_{ii}(e(x))=\tr\,\HH_g(x).$$
\begin{proof}[Proof of the Lemma]
First, observe that \eqref{g2=hess} implies
$\Gamma_2(g)(x)=\big\| \HH_g\big\|^2_{HS}(x)$ and thus
\begin{equation}
\Gamma_2(f+g)(x)=\Gamma_2(f)(x)+2\big\langle \HH_f,\HH_g\big\rangle_{HS}(x)+\big\| \HH_g\big\|^2_{HS}(x).
\end{equation}
Put $\psi^r(y_1,\ldots,y_r)=\frac12\sum_{j,k=1}^r \HH_g(e_j,e_k)(x)\cdot\big[y_j-e_j(x)\big]\cdot\big[y_k-e_k(x)\big]$ and $g_r=\psi^r\circ(e_1,\ldots,e_r)$.
Then according to the chain rule
\begin{eqnarray*}
\Gamma_2(f+g_r)(x)&=&
\Gamma_2(f)(x)+2\sum_{j,k=1}^r \HH_f(e_j,e_k)(x)\cdot\HH_g(e_j,e_k)(x)
+\sum_{j,k=1}^r \HH_g(e_j,e_k)^2(x)\\
&\to
&\Gamma_2(f)(x)+2\big\langle \HH_f,\HH_g\big\rangle_{HS}(x)+\big\| \HH_g\big\|^2_{HS}(x)\ =\ \Gamma_2(f+g)(x)
\end{eqnarray*}
as $r\to\infty$.
In the case $n=\dim(\A,\Gamma)(x)<\infty$ put $\psi^r$ and $g_r=\psi^r\circ (e_1,\ldots,e_n)$ as above with $r:=n$.
Then $\LL g_r(x)=(\Delta\psi^r)(e(x))$ and $(\Delta\psi^r)(.)\equiv\sum_{i=1}^r\HH_g(e_i,e_i)(x)$ on $\R^r$. Thus
$\LL g_r(x)=\tr\HH_g(x)$.
\end{proof}
\begin{remark}
Given any family $\{e_i:\, i\in I\}\subset\A$ which is orthonormal w.r.t. $\Gamma(.)(x)$ we put
$$\eta_{jk}(.)=\frac12\big[e_j-e_j(x)\big]\cdot \big[e_k-e_k(x)\big]\in\A$$
and $I_2=\{(j,k)\in I^2:\, j\le k\}$.
Then the family $\{\hat\eta_{jk}\}_{(j,k)\in I_2}$ with $\hat\eta_{jj}=\eta_{jj}$ and $\hat\eta_{jk}=\sqrt2\, \eta_{jk}$ if $j<k$ is orthonormal w.r.t. $\Gamma_2(.)(x)$.
Indeed, Lemma \ref{chain-rule} implies via polarization that
$$\Gamma_2\big(\phi\circ e,\psi\circ e\big)(x)=\sum_{j,k}\big(\phi_{jk}\cdot\psi_{jk}\big)\big(e(x)\big)$$
for all smooth $\phi,\psi:\R^n\to\R$ with $\phi_i(e(x))=\psi_i(e(x))=0$ for all $i$.
\end{remark}
\begin{proposition}
Given $x\in X$ and an orthonormal system $\{e_i:\, i\in I\}$ w.r.t. $\Gamma(.)(x)$.
Condition \eqref{g2=hess} implies that the family $\{\hat\eta_{jk}\}_{(j,k)\in I_2}$ is a \emph{complete} orthonormal system
for the pre-Hilbert space $(\A_x^2,\Gamma_2(.)(x))$
where $\A_x^2$ denotes the set of equivalence classes in $\{f\in\A: \ \Gamma(f)(x)=0\}$ w.r.t. the relation
$f\approx g\Leftrightarrow \Gamma_2(f-g)(x)=0$.
More precisely, for any orthonormal system $\{e_i:\, i\in I\}$
the following assertions are equivalent:
\begin{itemize}
\item[(i)] For all $f,g\in\A^2_x$
\begin{equation}\label{hess=g2}
\Gamma_2(f,g)(x)=\big\langle\HH_f, \HH_g\big\rangle_{HS}(x).
\end{equation}
\item[(ii)]
For all $f\in\A^2_x$
\begin{equation}\label{bi-cpl}
\Gamma_2(f)(x)=\sum_{j,k\in I}\Gamma_2(\eta_{jk},f)^2(x)
\end{equation}
and for all $j,k\in I$
\begin{equation}\label{hessjk}
\Gamma_2(f,\eta_{jk})(x)=\HH_f(e_j,e_k)(x).
\end{equation}
\end{itemize}
\end{proposition}
\begin{proof} For the function $g=\eta_{jk}$, the subsequent lemma implies $\HH_g(e_i,e_l)(x)=\frac12\delta_{ij}\cdot\delta_{lk}+\frac12\delta_{ik}\cdot\delta_{lj}$.
Assumption \eqref{hess=g2} thus implies \eqref{hessjk}.
On the other hand, assuming \eqref{hessjk} obviously yields the equivalence of \eqref{hess=g2} and \eqref{bi-cpl}.
\end{proof}
\begin{lemma}\label{hess-calc}
Assume that $f=\psi\circ (e_1,\ldots,e_r)$ for some smooth $\psi:\R^r\to\R$ and an orthonormal system $\{e_1,\ldots,e_r\}$. Then
\begin{eqnarray}\label{hess-chain}
\HH_f(e_j,e_k)(x)=\sum_i\psi_i(e(x))\, \HH_{e_i}(e_j,e_k)(x)+\psi_{jk}(e(x)).
\end{eqnarray}
\end{lemma}
\begin{proof}
Note that e.g.
$\Gamma(e_j,\Gamma(f,e_k))=\sum_i \psi_i(e)\,\Gamma(e_j,\Gamma(e_i,e_k))+\sum_{i,l}\psi_{il}(e)\,\Gamma(e_l,e_k)\,\Gamma(e_i,e_j)$
and thus
\begin{eqnarray*}
2 \HH_f(e_j,e_k)&=&\Gamma(e_j,\Gamma(f,e_k))+\Gamma(e_k,\Gamma(f,e_j))-\Gamma(f,\Gamma(e_j,e_k))\\
&=&2\sum_i\psi_i(e) \, \HH_{e_i}(e_j,e_k)+ 2\sum_{i,l}\psi_{il}(e)\,\Gamma(e_l,e_k)\,\Gamma(e_i,e_j)
\end{eqnarray*}
everywhere on $X$. Using the orthonormality of the $e_j$ at $x$ yields \eqref{hess-chain}.
\end{proof}
Let us conclude this chapter with an example illustrating that the dimension
$\dim\, (\A,\Gamma)(.)$
might be non-constant on $X$.
\begin{example}
Let $X=\R^2$, $\A={\mathcal C}_c^\infty(X)$ and
$$\LL f(x)=\phi(x_1)\,\frac{\partial^2}{\partial x_1^2}f(x)
+\frac12\phi'(x_1)\,\frac{\partial}{\partial x_1}f(x)
+\frac{\partial^2}{\partial x_2^2}f(x)$$
for some ${\mathcal C}^\infty$-function $\phi:\R\to\R$ with $\phi>0$ on $(0,\infty)$ and $\phi=0$ on $(-\infty,0]$. Then
$$ \dim\, (\A,\Gamma)(x)=\left\{
\begin{array}{ll}
1& \quad \mbox{if }x_1\le0,\\
2& \quad \mbox{if }x_1>0.
\end{array}
\right.$$
Moreover,
$\RR_N(f)(x)=0$ for all $f\in\A$ and all $N\ge2$ and
$$ \RR_N(f)(x)=\left\{
\begin{array}{ll}
0& \quad \mbox{if }x_1\le0,\\
-\infty& \quad \mbox{if }x_1>0
\end{array}
\right.$$
for $N\in[1,2)$.
Indeed, by construction
$$\Gamma(f)(x)= \phi(x_1)\,\Big|\frac{\partial}{\partial x_1}f\Big|^2(x)+\Big|\frac{\partial}{\partial x_2}f\Big|^2(x).$$
The assertion on $\dim\, (\A,\Gamma)(.)$ thus is obvious.
By Theorem \ref{super-self}(iii) it implies the assertion on $\RR_N(f)(x)$ in the case $x_1>0$ and $N<2$. In the cases $x_1\le0$ or $N\ge2$, the assertion follows from the analogous assertion for the 1-dimensional diffusion in $x_2$-direction (which is a conformal transformation of the standard diffusion in $x_2$-direction), cf. Theorem \ref{conf-ricc}.
\end{example}
\section{Ricci Tensor for Transformed Operators}
In the sequel we will study the operator
\begin{equation}\label{Lsharp2}
{\LL}' u=f^2\, {\LL}u+f\,\sum_{i=1}^r g_i \, \Gamma(h_i,u)
\end{equation}
for given $r\in \N$ and functions $f, g_i,h_i\in\A$ (for $i=1,\ldots,r$).
Obviously, the associated $\Gamma$-operator is given by $\Gamma'(u)=f^2\, \Gamma(u)$.
Our main result is the following estimate for the $N'$-Ricci tensor for $\LL'$ in terms of the $N$-Ricci tensor for $\LL$.
\begin{theorem}\label{g2-est} For every $N'>N$
\begin{eqnarray*}
\RR'_{N'}(u)
&\ge&
f^4\, \RR_N(u)-\frac1{N'-N}\Big(\frac{N-2}2\Gamma(f^2,u)+\sum_i f g_i\Gamma(h_i,u)\Big)^2\\
&&+
\frac12\big(f^2{\LL}f^2-\Gamma(f^2)\big)\Gamma(u)-\frac{N-2}4\Gamma(f^2,u)^2-\sum_i f^3g_i {\HH}_{h_i}(u,u)\\
&&+\frac12\sum_i f g_i\Gamma(h_i,f^2)\Gamma(u)-\sum_i f^2\Gamma(f g_i,u)\Gamma(h_i,u).
\end{eqnarray*}
In the particular case $r=1$, $g_1=-(N-2)$, $h_1=f$ one may also choose $N'=N$.
\end{theorem}
\begin{corollary}\label{g2-est-cor}
Assume that the operator ${\LL}$ satisfies the $\BE(\K,N)$-condition. Then for every $N'>N$
the operator ${\LL}'$ satisfies the $\BE(\K',N')$-condition with
\begin{eqnarray*}
\K'&:=& f^2\,\K+
\frac12{\LL}f^2-2\Gamma(f)+\sum_i g_i\Gamma(h_i,f)\\
&&+
\inf_{u\in\A}
\frac1{\Gamma(u)}\Big[-\frac1{N'-N}\Big((N-2)\Gamma(f,u)+\sum_i g_i\Gamma(h_i,u)\Big)^2\\
&&\qquad\qquad -(N-2)\Gamma(f,u)^2-\sum_i f g_i {\HH}_{h_i}(u,u)-\sum_i \Gamma(f g_i,u)\Gamma(h_i,u)\Big].
\end{eqnarray*}
\end{corollary}
\begin{proof} All the subsequent statements will be pointwise statements for a given $x\in X$. It suffices to consider $u\in\Dom(\RR_N(x))$.
By the very definition of ${\LL}'$, $\Gamma'$ and $\Gamma_2'$ we obtain for all $u\in\A$
\begin{eqnarray*}
\Gamma_2'(u)&=&
\frac12 f^2 {\LL}(f^2\Gamma(u))+\frac12\sum_if g_i\Gamma(h_i,f^2\Gamma(u))-f^2\Gamma(u,f^2 {\LL}u)-f^2\Gamma(u,\sum_i f g_i\Gamma(h_i,u))\\
&=&
f^4 \Gamma_2(u)+\frac12 f^2 {\LL}f^2\cdot \Gamma(u)+2f^2\Big({\HH}_u(f^2,u)-\frac1N {\LL}u\cdot\Gamma(f^2,u)\Big)-\frac{N-2}N f^2\Gamma(f^2,u){\LL}u\\
&&+\sum_i\Big(-f^3 g_i {\HH}_{h_i}(u,u)+\frac12 f g_i\Gamma(h_i,f^2)\Gamma(u)-f^2\Gamma(f g_i,u)\Gamma(h_i,u)\Big)\\
&=&
f^4 \Gamma_2(u)+2f^2\Big({\HH}_u(f^2,u)-\frac1N {\LL}u\cdot\Gamma(f^2,u)\Big) -\frac{N-2}N f^2\Gamma(f^2,u){\LL}u+A(f,g,h,u)\\
&=&
f^4\,\RR_N(u)+f^4 \big(\Gamma_2(u)-\RR_N(u)-\frac1N(\LL u)^2 \big)\\
&&+2f^2\Big[{\HH}_u(f^2,u)-\frac1N {\LL}u\cdot\Gamma(f^2,u)\Big]+\frac{N-2}{2N} \Gamma(f^2,u)^2\\
&&+\frac12\Gamma(f^2)\Gamma(u)+\frac1N\Big(f^2{\LL}u-\frac{N-2}2\Gamma(f^2,u)\Big)^2+A'(f,g,h,u)
\end{eqnarray*}
with \begin{eqnarray*}
A(f,g,h,u):=\frac12 f^2 {\LL}f^2\,\Gamma(u)
+\sum_i\Big(-f^3 g_i {\HH}_{h_i}(u,u)+\frac12 f g_i\Gamma(h_i,f^2)\Gamma(u)-f^2\Gamma(f g_i,u)\Gamma(h_i,u)\Big)
\end{eqnarray*}
and
\begin{eqnarray*}
A'(f,g,h,u):=A(f,g,h,u)-\frac12\Gamma(f^2)\Gamma(u)-\frac{N-2}4\Gamma(f^2,u)^2.
\end{eqnarray*}
Corollary \ref{Hess-est} provides a sharp lower estimate for the above $[\ .\ ]$-term (involving the 'traceless Hessian' of $u$) which leads to
\begin{eqnarray*}
\Gamma_2'(u)&\ge&
f^4\,\RR_N(u)+\frac1N\Big(f^2{\LL}u-\frac{N-2}2\Gamma(f^2,u)\Big)^2+A'(f,g,h,u).
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
\Gamma_2'(u)-\frac1{N'}({\LL}'u)^2&\ge&
f^4\,\RR_N(u)-\frac1{N'}\Big(f^2{\LL}u+\sum_if g_i\Gamma(h_i,u)\Big)^2\\
&&\qquad\qquad +\frac1N\Big(f^2{\LL}u-\frac{N-2}2\Gamma(f^2,u)\Big)^2
+A'(f,g,h,u)\\
&\ge&
f^4\,\RR_N(u)-\frac1{N'-N}\Big(\frac{N-2}2 \Gamma(f^2,u)+\sum_if g_i\Gamma(h_i,u)\Big)^2+A'(f,g,h,u).
\end{eqnarray*}
Given $u_0\in\A$, varying over all $u\in\A$ with $\Gamma(u-u_0)(x)=0$ (for the given $x\in X$) then yields
\begin{eqnarray*}
\RR'_{N'}(u_0)(x)&=&
\inf_{u\in \A,\, \Gamma(u-u_0)(x)=0}\Big[\Gamma_2'(u)(x)-\frac1{N'}\big(\LL' u\big)^2(x)\Big]\\
&\ge&
f^4\,\RR_N(u_0)-\frac1{N'-N}\Big(\frac{N-2}2 \Gamma(f^2,u_0)+\sum_if g_i\Gamma(h_i,u_0)\Big)^2\\
&&\qquad +\inf_{u\in \A,\, \Gamma(u-u_0)(x)=0}A'(f,g,h,u).
\end{eqnarray*}
According to Corollary \ref{abs-cont-hess}, $A'(f,g,h,u)=A'(f,g,h,u_0)$ for all $u$ under consideration. This proves the claim.
\end{proof}
\section{Conformal Transformation}
The previous results significantly simplify in the case
$r=1$, $g_1=-(N-2)$, $h_1=f$.
This is the only case
where we can estimate the $N$-Ricci tensor for the transformed operator in terms of the $N$-Ricci tensor of the original one.
It is also the only case where the Bakry-Emery condition for $\LL$ will imply a Bakry-Emery condition
for the transformed operator with the \emph{same} dimension bound $N$.
Put $$\tilde {\LL}u=f^2{\LL}u-\frac{N-2}2\Gamma(f^2,u)$$ and let $\tilde \Gamma, \tilde\Gamma_2, \tilde\RR_N$ be the associated square field operator, iterated square field operator, and $N$-Ricci tensor resp. Theorem \ref{g2-est} immediately yields
\begin{corollary} For all $u\in\A$
\begin{eqnarray*}
\tilde\RR_N(u)
&\ge& f^4\, \RR_N(u)+
\Big(
\frac12 f^2{\LL}f^2-\frac N4\Gamma(f^2)\Big)\Gamma(u)-\frac{N-2}4\Gamma(f^2,u)^2+\frac{N-2}2 f^2 {\HH}_{f^2}(u,u).
\end{eqnarray*}
In particular, if the operator ${\LL}$ satisfies the $\BE(\K,N)$-condition then
the operator $\tilde {\LL}=f^2{\LL}-\frac{N-2}2\Gamma(f^2,.)$ satisfies the $\BE(\tilde\K,N)$-condition with
\begin{eqnarray*}
\tilde\K&:=& f^2\,\K+
\frac12{\LL}f^2-N\Gamma(f)+
\inf_{u\in\A}
\frac1{\Gamma(u)}\Big[-(N-2)\Gamma(f,u)^2+\frac{N-2}2 {\HH}_{f^2}(u,u)\Big]\\
&=& f^2\,\K+
f{\LL}f-(N-1)\Gamma(f)+
\inf_{u\in\A}
\frac{N-2}{\Gamma(u)}f\, {\HH}_{f}(u,u).
\end{eqnarray*}
\end{corollary}
If $f>0$ with $\log f\in\A$ the above estimate for the $N$-Ricci tensor indeed becomes an equality.
\begin{theorem}\label{conf-ricc} Given any $w\in\A$ and $N\in[1,\infty]$ let $\tilde\RR_N$ denote the $N$-Ricci tensor associated to the operator
$\tilde\LL=e^{-2w}\big(\LL+(N-2)\Gamma(w,.)\big)$.
Then for all $u\in\A$
\begin{eqnarray}\label{conf-est-exp}
\tilde\RR_N(u)
&=&
e^{-4w}\,\Big( \RR_N(u)+\big[-{\LL} w -(N-2)\Gamma(w)\big]\, \Gamma(u)\nonumber\\
&&\qquad\qquad -(N-2){\HH}_{w}(u,u)+(N-2)\Gamma(w,u)^2\Big).
\end{eqnarray}
\end{theorem}
\begin{proof}
Firstly, we apply the previous corollary with
$f=e^{-w}$ to obtain a lower bound for the $N$-Ricci tensor $\tilde\RR_N$ associated to the operator $\tilde\LL=e^{-2w}(\LL +(N-2)\Gamma(w,.))$ in terms of the $N$-Ricci tensor $\RR_N$ associated to the operator $\LL$. This yields the ``$\ge$'' in \eqref{conf-est-exp}:
\begin{eqnarray}\label{conf-upp}
\tilde\RR_N(u)
&\ge&
e^{-4w}\,\Big( \RR_N(u)+\big[-{\LL} w -(N-2)\Gamma(w)\big]\, \Gamma(u)\nonumber\\
&&\qquad\qquad -(N-2){\HH}_{w}(u,u)+(N-2)\Gamma(w,u)^2\Big).
\end{eqnarray}
Secondly, we apply the previous corollary with $f=e^w$ to obtain a lower bound for the $N$-Ricci tensor associated to the operator $e^{2w}(\tilde\LL -(N-2)\tilde\Gamma(w,.))$ in terms of the $N$-Ricci tensor $\tilde\RR_N$ associated to the operator $\tilde\LL$.
Note that $e^{+2w}(\tilde\LL -(N-2)\tilde\Gamma(w,.))=\LL$. Thus this indeed provides us with a lower bound for $\RR_N$:
\begin{eqnarray}\label{conf-low}
\RR_N(u)
&\ge&
e^{+4w}\,\Big( \tilde\RR_N(u)+\big[+{\tilde\LL} w -(N-2)\tilde\Gamma(w)\big]\, \tilde\Gamma(u)\nonumber\\
&&\qquad\qquad +(N-2){\tilde\HH}_{w}(u,u)+(N-2)\tilde\Gamma(w,u)^2\Big).
\end{eqnarray}
Combining these two estimates and using the fact that
\begin{eqnarray*}
\tilde\HH_w(u,u)&=&\tilde\Gamma(u,\tilde\Gamma(u,w))-\frac12\tilde\Gamma(w,\tilde\Gamma(u))\\
&=& e^{-4w}\cdot\Big[\HH_w(u,u)-2\Gamma(w,u)^2+\Gamma(w)\cdot\Gamma(u)\Big]
\end{eqnarray*}
finally yields that \eqref{conf-upp} and \eqref{conf-low} are indeed equalities.
\end{proof}
\begin{remark}
If we re-formulate this result in terms of $v=(N-2)w$ then $\tilde {\LL}u=e^{-2v/(N-2)}\big({\LL}u+\Gamma(v,u)\big)$ which converges to ${\LL}u+\Gamma(v,u)$ as $N\to\infty$. That is, conformal transformations in the limit $N\to\infty$ lead to drift transformations. Note that as $N\to\infty$ the RHS of \eqref{conf-est-exp} tends to $\RR(u)-{\HH}_v(u,u)$ -- which is consistent with the well-known result for drift transformations.
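Indeed, with $w=v/(N-2)$ each term on the RHS of \eqref{conf-est-exp} can be followed through the limit: $e^{-4w}\to1$, ${\LL}w=\frac1{N-2}\,{\LL}v\to0$ and
$$(N-2)\,\Gamma(w)=\frac{\Gamma(v)}{N-2}\to0,\qquad (N-2)\,{\HH}_w(u,u)={\HH}_v(u,u),\qquad (N-2)\,\Gamma(w,u)^2=\frac{\Gamma(v,u)^2}{N-2}\to0.$$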
\end{remark}
\bigskip
Conformal transformations are of particular interest in a Riemannian setting: if ${\LL}$ is the Laplace-Beltrami operator for the Riemannian manifold $(M,g)$ and if $N$ coincides with the dimension of $M$
then $\tilde{\LL}$ is the Laplace-Beltrami operator for the Riemannian manifold $(M,f^{-2}\,g)$.
More precisely: Given an $n$-dimensional Riemannian manifold $(M,g)$ and a smooth function $w:M\to \R$, we can define a new Riemannian metric $\tilde g$ on $M$ by
$$\tilde g=e^{2w}\cdot g.$$
The induced ('new') Riemannian volume measure is given by
$d\tilde m=e^{nw} \, dm$, the associated ('new') Dirichlet form is
$$\tilde\E(u)=\int_M |\nabla u|^2 e^{(-2+n)w}dm \quad \mbox{on }L^2(M, e^{nw}m).$$
Here $\nabla$, $|.|$ and $\Delta$ are all defined w.r.t. the metric $g$. Note that
$\tilde\nabla u=e^{-2w}\nabla u$ and thus $|\tilde\nabla u|_{\tilde g}^2=e^{-2w}\, |\nabla u|_g^2$.
Therefore, the associated ('new') Laplace-Beltrami operator is
$$\tilde\Delta u=e^{-2w}\Delta u-\frac{n-2}2\nabla e^{-2w}\cdot\nabla u.$$
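With $f=e^{-w}$ this may be rewritten as
$$\tilde\Delta u=e^{-2w}\big(\Delta u+(n-2)\,\nabla w\cdot\nabla u\big),$$
that is, $\tilde\Delta$ is precisely the operator of Theorem \ref{conf-ricc} with $N=n$.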
The Ricci tensor for the metric $\tilde g$ is given by
\begin{eqnarray*}
\tilde\Ric(X,X)
&=& \Ric(X,X)-\big(\Delta w+(n-2)|\nabla w|^2\big)\cdot |X|^2-(n-2)\Big[ {\HH}_w(X,X)-(X\, w)^2\Big].
\end{eqnarray*}
(see e.g. \cite{Bes}, p 59). Applying this to the gradient of a (smooth) function $u$ on $M$ and taking into account that $\tilde\nabla u=e^{-2w}\nabla u$ yields
\begin{eqnarray}
\tilde\Ric(\tilde\nabla u,\tilde\nabla u)
&=& e^{-4w}\cdot \Big[\Ric(\nabla u,\nabla u)-\big(\Delta w+(n-2)|\nabla w|^2\big)\cdot |\nabla u|^2\nonumber\\
&&\qquad\qquad\qquad-(n-2)\big( {\HH}_w(\nabla u,\nabla u)-\langle \nabla w,\nabla u\rangle^2\big)\Big].
\end{eqnarray}
Thus, indeed, \eqref{conf-est-exp} provides the \emph{exact} formula for the Ricci tensor for $\tilde {\LL}$.
\begin{example} The \emph{Poincar\'e model of hyperbolic space} is of the above type $(M,\tilde g)$ with
$M=B_R(0)\subset \R^n$ and $\tilde g=f^{-2}\, g_{Euclid}$ for $$f(x)=\frac12\Big(1-\frac{|x|^2}{R^2}\Big).$$
Its sectional curvature is constant $-\frac1{R^2}$ and the Ricci curvature is $-\frac{n-1}{R^2}\cdot \tilde g$.
\end{example}
\begin{remark}
For conformal transformations of Laplacians with drift on smooth $N$-dimensional Riemannian manifolds, estimates of the type
\begin{eqnarray*}
\tilde\Gamma_2(u)-\frac1N (\tilde {\LL} u)^2
\ge
e^{-4w}\,\Big[\K\, \Gamma(u)
-{\LL} w \,\Gamma(u) +c_1\Gamma(w)\, \Gamma(u) -(N-2){\HH}_{w}(u,u)+c_2\Gamma(w,u)^2\Big].
\end{eqnarray*}
have been presented in \cite{ThaWan} and \cite{Wan}, however, with wrong constants.
The first claim was $c_1= -N, c_2=2(N-2)$.
The 'corrected' claim then was $c_1=-(N-4), c_2=N$.
Indeed, the correct choices are $c_1=-(N-2)$ and $c_2=N-2$.
\end{remark}
\section{Time Change and Drift Transformation}
This chapter will be devoted to
study the operator
\begin{equation}\label{time+drift}
{\LL}' u=f^2\, {\LL}u+f^2 \, \Gamma(h,u)
\end{equation}
for $f, h\in\A$. That is, in \eqref{Lsharp2} we specialize to $r=1$, $g_1=f$, and $h_1=h$.
The case $h=-(N-2)\log f$ ('conformal transformation') was already treated in the previous chapter,
the cases $h=0$ ('time change') and $f=1$ ('drift transformation') will be considered in more detail in subsequent paragraphs of this chapter.
Any operator ${\LL}'$ of the form \eqref{time+drift} can be obtained from ${\LL}$ by a combination
of
\begin{itemize}
\item a drift transformation with $h$ and
\item a time change with $f$.
\end{itemize}
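Indeed, ${\LL}'u=f^2\,\big({\LL}u+\Gamma(h,u)\big)$; that is, one first applies the drift transformation with $h$ and then the time change with $f$.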
Recall that $\Gamma'(u)=f^2\,\Gamma(u)$. Theorem \ref{g2-est} yields a precise estimate for the $N'$-Ricci tensor associated to
the operator $\LL'$.
\begin{proposition} For every $N'>N$ and $u\in\A$
\begin{eqnarray*}
\RR'_{N'}(u)
&\ge&
f^4\, \RR_N(u)-\frac1{N'-N}\Big(\frac{N-2}2\Gamma(f^2,u)+f^2\Gamma(h,u)\Big)^2\\
&&+
\frac12\big(f^2{\LL}f^2-\Gamma(f^2)\big)\Gamma(u)-\frac{N-2}4\Gamma(f^2,u)^2- f^4 {\HH}_{h}(u,u)\\
&&+\frac12 f^2\Gamma(h,f^2)\Gamma(u)-f^2\Gamma(f^2,u)\Gamma(h,u).
\end{eqnarray*}
\end{proposition}
\begin{remark}
Let us re-formulate this estimate in the case $N'=\infty$ and $f=e^{-w}$ for some $w\in\A$. It states that the Ricci tensor for the operator ${\LL}'=e^{-2w}\big[{\LL}+\Gamma(h,.)\big]$ satisfies
\begin{eqnarray}\label{dt-k-inf}
\RR'_\infty(u)&\ge& e^{-4w}\Big[\RR_N(u)-
\big({\LL}w+\Gamma(w,h)\big)\,\Gamma(u)\nonumber\\
&&-
\sup_{v\in\A}
\frac1{\Gamma(v)}\Big( (N-2)\Gamma(w,v)^2+ {\HH}_{h}(v,v)-2 \Gamma(w,v)\Gamma(h,v)\Big)\cdot\Gamma(u)\Big].
\end{eqnarray}
\end{remark}
\begin{corollary}\label{cor-g2-est}
Assume that the operator ${\LL}$ satisfies the $\BE(\K,N)$-condition. Then for every $N'>N$
the operator ${\LL}'$ satisfies the $\BE(\K'_{N'},N')$-condition with
\begin{eqnarray}\label{dt-k-N}
\K'_{N'}&:=& f^2\,\K+
\frac12{\LL}f^2-2\Gamma(f)+f\Gamma(h,f)\nonumber\\
&&-
\sup_{u\in\A}
\frac1{\Gamma(u)}\Big[\frac1{N'-N}\Big((N-2)\Gamma(f,u)+f\Gamma(h,u)\Big)^2\\
&&\qquad\qquad +(N-2)\Gamma(f,u)^2+ f^2 {\HH}_{h}(u,u)+ \Gamma(f^2,u)\Gamma(h,u)\Big].\nonumber
\end{eqnarray}
\end{corollary}
\subsection{Drift Transformation}
Let us have a closer look at the case $f=1$. This is the 'drift transformation' which leads to a particularly simple, well-known formula for the Ricci tensor associated to the operator $\LL'=\LL+\Gamma(h,.)$. Obviously, $\Gamma'=\Gamma$.
\begin{proposition}
\begin{eqnarray}
\RR'(u)
&=&
\RR(u)- {\HH}_{h}(u,u)
\end{eqnarray}
and for every $N'>N$
\begin{eqnarray*}
\RR'_{N'}(u)
&\ge&
\RR_N(u)- {\HH}_{h}(u,u)-\frac1{N'-N}\Gamma(h,u)^2.
\end{eqnarray*}
\end{proposition}
\begin{proof} The ``$\ge$''-inequality follows immediately from Theorem \ref{g2-est}. The ``$\le$''-inequality in the case $N=\infty$ follows from another application of this result to the transformation of the operator $\LL'$ by means of the drift $\Gamma(-h,.)$ or, in other words, by exchanging the roles of $\LL$ and $\LL'$.
\end{proof}
\begin{corollary}[\cite{Bak-Eme}]
Assume that the operator ${\LL}$ satisfies the $\BE(\K,N)$-condition. Then for every $N'>N$
the operator ${\LL}'={\LL}+\Gamma(h,.)$ satisfies the $\BE(\K',N')$-condition with
\begin{eqnarray*}
\K'&:=& \K-
\sup_{u\in\A}
\frac1{\Gamma(u)}\Big[ {\HH}_{h}(u,u)+\frac1{N'-N}\Gamma(h,u)^2\Big].
\end{eqnarray*}
In particular, if the operator
${\LL}$ satisfies the $\BE(\K,\infty)$-condition then
the operator ${\LL}'={\LL}+\Gamma(h,.)$ satisfies the $\BE(\K',\infty)$-condition with
$\K'= \K-
\sup_{u}
\frac1{\Gamma(u)} {\HH}_{h}(u,u)$.
\end{corollary}
Actually, the framework of Theorem \ref{g2-est} allows us to treat more general drift terms. Given $g_i,h_i\in\A$ for $i=1,\ldots,r$,
define $Z:\A\to\A$ (regarded as 'vector field') by
\begin{equation*}
Zu=\sum_{i=1}^r g_i\Gamma(h_i,u).
\end{equation*}
For instance, in the Riemannian case one might choose $r$ to be the dimension, $h_i=x^i$ the coordinate functions and $g_i=Z^i$ as the components of a given vector field $Z=\sum_i Z^i \frac{\partial}{\partial x^i}$.
According to Theorem \ref{g2-est},
\begin{eqnarray}
\RR'(u)
&=&
\RR(u)- \big(DZ\big)(u,u)
\end{eqnarray}
where $\big(DZ\big)(u,u):=\sum_{i=1}^r \Big[g_i\, {\HH}_{h_i}(u,u)+\Gamma(g_i,u)\,\Gamma(h_i,u)\Big]$
and for every $N'>N$
\begin{eqnarray*}
\RR'_{N'}(u)
&\ge&
\RR_N(u)- \big(DZ\big)(u,u)-\frac1{N'-N}(Z\,u)^2.
\end{eqnarray*}
\begin{corollary}[\cite{Bak94}]\label{drift-be}
Assume that the operator ${\LL}$ satisfies the $\BE(\K,N)$-condition. Then for every $N'>N$
the operator ${\LL}'={\LL}+Z$ satisfies the $\BE(\K',N')$-condition with
\begin{eqnarray*}
\K'&:=& \K-
\sup_{u\in\A}
\frac1{\Gamma(u)}\Big[ \big(DZ\big)(u,u)+\frac1{N'-N}(Zu)^2\Big].
\end{eqnarray*}
\end{corollary}
\begin{remark}
If ${\LL}$ is the generator of the diffusion process $(X_t,\PP_x)$ then under appropriate regularity assumptions (see e.g. \cite{RevYor}, \cite{FOT}) the transformed operator ${\LL}'={\LL}+Z$ will be the generator of the diffusion process $(X_t,\PP'_x)$ where $\PP'_x=M_t\cdot \PP_x$ on $\sigma\{X_s: s\le t\}$ for a suitable martingale $(M_t)_{t\ge0}$, e.g. in the Riemannian case
$$M_t=\exp\left(\int_0^t Z(X_s)dX_s-\frac12 \int_0^t |Z(X_s)|^2ds\right).$$
A particular case is the Doob transformation where $Z=2\nabla\log\phi$ for some $\phi>0$ satisfying ${\LL}\phi=0$. In this case, the martingale can alternatively be given as
$M_t=\phi(X_t)/\phi(X_0)$. The transition semigroup is given by $P'_tu=\frac1\phi\, P_t(\phi\, u)$.
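Indeed, since ${\LL}\phi=0$, the diffusion property yields
$$\frac1\phi\,{\LL}(\phi\,u)={\LL}u+\frac2\phi\,\Gamma(\phi,u)+\frac u\phi\,{\LL}\phi={\LL}u+2\,\Gamma(\log\phi,u)={\LL}'u.$$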
\end{remark}
\subsection{Time Change}
Next we will focus on the particular case $h=0$. That is, we will consider the operator
$${\LL}'=f^2\, {\LL}$$
('time change') for some $f\in\A$. Obviously, $\Gamma'(u)=f^2\,\Gamma(u)$.
Theorem \ref{g2-est} immediately yields the following sharp estimate for the Ricci tensor for $\LL'$.
\begin{corollary}\label{g2-est-time}
\begin{eqnarray*}
\RR'_{N'}(u)
&\ge&
f^4\, \RR_N(u)-\frac{(N-2)(N'-2)}{N'-N}f^2\,\Gamma(f,u)^2+
\frac12\big(f^2{\LL}f^2-\Gamma(f^2)\big)\Gamma(u).
\end{eqnarray*}
\end{corollary}
\begin{remark}
If $f=e^{-w}$ for some $w\in\A$ these results can be reformulated as
\begin{eqnarray}
\RR'_{N'}(u)&\ge& e^{-4w}\Big[\RR_N(u) - {\LL}w\,\Gamma(u)- \frac{(N-2)(N'-2)}{N'-N}\,\Gamma(w,u)^2\Big].
\end{eqnarray}
\end{remark}
\begin{corollary}\label{be-tc}
Assume that the operator ${\LL}$ satisfies the $\BE(\K,N)$-condition. Then for every $N'>N$
the operator ${\LL}'$ satisfies the $\BE(\K',N')$-condition with
\begin{eqnarray}\label{k-time}
\K'&:=& f^2\,\K
+
\frac12{\LL}f^2-N^*\,\Gamma(f).
\end{eqnarray}
where $N^*=2+
\frac{[(N-2)(N'-2)]_+}{N'-N}\ge\max\{N,2\}$.
In particular,
the operator ${\LL}'$ satisfies the $\BE(\K'_\infty,\infty)$-condition with
\begin{eqnarray*}
\K'_\infty= f^2\,\K
+
\frac12{\LL}f^2-\max\{N,2\}\,\Gamma(f).
\end{eqnarray*}
\end{corollary}
\begin{remark} Assume that ${\LL}$ is the generator of the diffusion process $(X_t,\PP_x)$
in the sense that ${\LL}u(x)=\lim_{t\to0}\frac1t \EE_x\big( u(X_t)-u(X_0)\big)$ for all $u\in\A$ and all $x\in X$ or -- if there exists an invariant measure $m$ -- in the sense of $L^p$-convergence. Then under appropriate regularity assumptions (see e.g. \cite{RevYor}, \cite{FOT}) the transformed operator ${\LL}'=f^2\,{\LL} $ will be the generator of the diffusion process $(X'_t,\PP_x)$
where $X'_t=X_{\tau(t)}$ with the 'time change' $t\mapsto\tau(t)$ being the inverse to $t\mapsto \int_0^t f^{-2}(X_s)ds$.
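For instance, if $f\equiv c$ for some constant $c>0$ then $\int_0^t f^{-2}(X_s)\,ds=t/c^2$ and thus $\tau(t)=c^2\,t$; that is, $X'_t=X_{c^2t}$ is indeed the process generated by ${\LL}'=c^2\,{\LL}$.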
\end{remark}
\section{Dirichlet Forms}\label{df}
Let us from now on assume that $X$ is a measurable space, that all $u\in\A$ are bounded and measurable, and that we are given a $\sigma$-finite measure $m$ on $X$ with full support such that $\A\subset L^2(X,m)$.
(For the latter property, it might be of advantage not to require that the constants belong to $\A$.)
We say that
\begin{itemize}
\item $m$ is ${\LL}$-\emph{invariant} if $\int {\LL}u\,dm=0$ for all $u\in\A$,
\item $m$ is ${\LL}$-\emph{reversible} if $\int v{\LL}u\,dm=\int u{\LL}v\,dm$ for all $u,v\in\A$.
\end{itemize}
Throughout this chapter, let $h,w\in\A$ be given and put ${\LL}'=e^{-2w}({\LL}+\Gamma(h,.))$ and $m'=e^{h+2w}m$. Then $m'$ is ${\LL}'$-invariant (or ${\LL}'$-reversible) if and only if
$m$ is ${\LL}$-invariant (or ${\LL}$-reversible, resp.).
Given a measure $m$ on $X$ which is invariant and reversible w.r.t. ${\LL}$, we define the Dirichlet form $(\E,\Dom(\E))$ on $L^2(X,m)$
as the closure of
\begin{equation*}
\E(u)=\int\Gamma(u)\,dm\quad\mbox{for }u\in\A\subset L^2(X,m).
\end{equation*}
Similarly, we define
the Dirichlet form $(\E',\Dom(\E'))$ on $L^2\big(X,m'\big)$
as the closure of
\begin{equation*}
\E'(u)=\int\Gamma(u)e^h\,dm\quad\mbox{for }u\in\A\subset L^2\big(X,e^{h+2w}m\big).
\end{equation*}
Then the generator $({\LL},\Dom(\LL))$ of $(\E,\Dom(\E))$ is the Friedrichs extension of $({\LL},\A)$ in $L^2(X,m)$ and
$({\LL}', \Dom(\LL'))$, the generator of $(\E',\Dom(\E'))$, is the Friedrichs extension of $({\LL}',\A)$ in $L^2(X,m')$.
\begin{definition}
We say that the triple $({\LL},\A,m)$ satisfies the $\BE(\K,N)$-condition if $\A$ is dense in $\Dom((-\LL)^{3/2})$ and if the operator $({\LL},\A)$ satisfies the $\BE(\K,N)$-condition.
\end{definition}
Density here is understood w.r.t.
the graph norm
$u\mapsto \Big[\| (-\LL)^{3/2}u \|_{L^2}^2+\|u\|_{L^2}^2\Big]^{1/2}\,=\, \Big[\E(\LL u)+\|u\|_{L^2}^2\Big]^{1/2}$.
\begin{lemma}\label{D3dense}
If $\A$ is dense in $\Dom((-\LL)^{3/2})$ then it is also dense in $\Dom((-\LL')^{3/2})$.
\end{lemma}
\begin{proof}
i) Firstly, the boundedness of $h$ and $w$ implies that $\Dom(\E)=\Dom(\E')$. In other words, $\Dom((-\LL)^{1/2})=\Dom((-\LL')^{1/2})$.
ii) Next, recall that $u\in \Dom(\LL')$ if (and only if) $u\in \Dom((-\LL')^{1/2})$ and $\exists C$ s.t.
$\E'(u,\phi)\le C\cdot \|\phi\|_{L^2(m')}$ for all $\phi\in\Dom((-\LL')^{1/2})$.
For $u,\phi\in\A$ we easily see
$$\E'(u,\phi)=-\int e^h (\LL u+ \Gamma(h,u))\,\phi\,dm\le C\cdot \|\phi\|_{L^2(m')}$$
for $C=\|e^h\|_{\infty} \cdot\big(\|\LL u\|_{L^2(m)}+\| \Gamma(h)\|_{\infty}\cdot \E(u)^{1/2}\big)$.
The estimate $\E'(u,\phi)\le C\cdot \|\phi\|_{L^2(m')}$ is preserved if we approximate $u\in\Dom(\LL)$ and $\phi\in\Dom(\E)$ by $u_n\in\A$ and $\phi_n\in\A$, resp.
This proves $\Dom(\LL)\subset\Dom(\LL')$. Exchanging the roles of $\LL$ and $\LL'$ yields the converse inclusion.
That is, $\Dom(\LL)=\Dom(\LL')$.
iii) Observe that $u\in\Dom((-\LL)^{1/2})\Leftrightarrow e^{h/2}u\in\Dom((-\LL)^{1/2})$ and that
$$u\in\Dom(-\LL)\Leftrightarrow e^{h/2}u\in\Dom(-\LL).$$
To see the latter, note that
$\E'(u,\phi)=\E(e^{h/2}u,e^{h/2}\phi)+\int u\phi e^{h/2}\LL e^{h/2}\,dm$.
iv) Our next claim is that
$$u\in\Dom((-\LL')^{3/2})\Leftrightarrow e^{h/2}u\in\Dom((-\LL)^{3/2}).$$
To prove this, recall that
$u\in\Dom((-\LL')^{3/2})$ if (and only if) $u\in \Dom(-\LL')$ and $\exists C$ s.t.
\begin{equation}\label{dom-l3}
\int \LL' u\cdot \LL' \phi\,dm'\le C\cdot \E'(\phi)^{1/2}
\end{equation} for all $\phi\in\Dom(-\LL')$.
For $u,\phi\in\A$ put $\tilde u=e^{h/2}u$, $\tilde\phi=e^{h/2}\phi$. Then
\begin{eqnarray*}
\int \LL' u\cdot \LL'\phi\,dm'&=&
\int\big(\LL u+\Gamma(h,u)\big)\cdot \big(\LL\phi+\Gamma(h,\phi)\big)\cdot e^{h-2w}\,dm\\
&=&
\int\big(\LL\tilde u - u\LL e^{h/2 }\big)\cdot \big(\LL\tilde\phi - \phi\LL e^{h/2 }\big)\cdot e^{-2w}\,dm\\
&=& \int \LL\tilde u \cdot\LL \tilde\phi\cdot e^{-2w}dm+ \mbox{LOT}_1\\
&=&-\int \Gamma (\LL\tilde u ,\tilde\phi)\, e^{-2w}dm+ \mbox{LOT}_2\\
&\le& C\cdot \|\tilde u\|_{\Dom((-\LL)^{3/2})}\cdot \E(\tilde \phi)^{1/2}
\end{eqnarray*}
where $\mbox{LOT}_1$ and $\mbox{LOT}_2$ denote 'low order terms' which can be estimated in terms of $\|u\|_{L^2(m)}$, $\E(u)$, $\|\LL u\| _{L^2(m)}$
and $\E(\phi)$. This proves \eqref{dom-l3} for all $u$ and $\phi\in\A$.
Due to the assumed density of $\A$ in all the $\Dom((-\LL)^{k/2})$ for $k=1,2,3$ and the previously proven equivalences
$\Dom((-\LL)^{1/2})=\Dom((-\LL')^{1/2})$ and
$\Dom(\LL)=\Dom(\LL')$, the estimate \eqref{dom-l3}
extends to all $u\in \Dom(-\LL')$ and all $\phi\in\Dom(-\LL')$.
This proves the implication ``$\Rightarrow$'' of the above claim. Again the converse implication follows by interchanging the roles of $\LL$ and $\LL'$.
v) Finally, we will prove that $\A$ is dense in $\Dom((-\LL')^{3/2})$. Let $u\in \Dom((-\LL')^{3/2})$ be given, put
$\tilde u=e^{h/2}u\in \Dom((-\LL)^{3/2})$ and choose an approximating sequence $\tilde u_n\in\A$ such that
$\E(\LL(\tilde u -\tilde u_n))+\int (\tilde u -\tilde u_n)^2\,dm \to0$ for $n\to\infty$.
But this already implies
$\E'(\LL'( u - u_n))+\int (u -u_n)^2\,dm \to0$ for $u_n=e^{-h/2}\tilde u_n\in\A$ since for every $\phi\in \A$
\begin{eqnarray*}
\E'(\LL'\phi)&=&
\int\Gamma\Big(e^{-2w}\big(\LL \phi+\Gamma(h,\phi)\big)\Big)e^h\,dm\\
&=&\int \Gamma\Big(e^{-2w}\big(e^{-h/2}\LL \tilde\phi-\tilde\phi \LL e^{h/2}\big)\Big)e^h\,dm\\
&\le& C_1\cdot \E(\LL \tilde \phi)+C_2 \cdot \int \tilde\phi^2\,dm
\end{eqnarray*}
where $\tilde\phi=e^{h/2}\phi$.
\end{proof}
Many spectral properties for $(\LL,\Dom(\LL))$ and functional inequalities involving it will follow -- typically with sharp constants -- from the Bakry-Emery estimate $\BE(\K,N)$ for $(\LL,\A,m)$, among them Poincar\'e inequalities, Sobolev and logarithmic Sobolev inequalities, concentration of measure estimates, isoperimetric inequalities, gradient estimates and heat kernel estimates.
We refer to the surveys \cite{Bak94} and \cite{Led2}. To mention at least one example, we state a fundamental estimate for the spectral gap.
\begin{proposition}
Assume that $(\LL,\A,m)$ satisfies the $\BE(\K,N)$-condition. Then the spectral gap $\lambda$ of the operator $({\LL}',\Dom(\LL'))$ on $L^2(X,m')$ satisfies
\begin{equation*}
\lambda\ge\inf_{x\in X}\K'_\infty(x)
\end{equation*}
with the function $\K'_\infty$ from \eqref{dt-k-N}
where $f=e^{-w}$.
A more refined estimate yields
$\lambda\ge\frac{N'}{N'-1}\cdot \inf_{x\in X}\K'_{N'}(x)$
for every $N'>N$
with the function $\K'_{N'}$ from \eqref{dt-k-N}.
\end{proposition}
\begin{proof}
According to Corollary \ref{cor-g2-est},
$({\LL}',\A)$ satisfies the $\BE(\K',N')$-condition with the function $\K'$ given by \eqref{dt-k-N}.
Finally, according to Lemma \ref{D3dense}, $\A$ is dense in $\Dom((-\LL')^{3/2})$.
Thus $(\LL',\Dom(\LL'))$ satisfies the
$\BE(\mathrm{K}',N')$-condition with $\mathrm{K}':=\inf_{x\in X}\K'(x)$.
According to \cite{Bak-Eme}, every $\BE(\mathrm{K}',N')$-condition with a constant $\mathrm{K}'>0$ implies a spectral gap estimate of Lichnerowicz type $\lambda\ge\frac{N'}{N'-1}\cdot \mathrm{K}'$.
\end{proof}
The assertion of Corollary \ref{g2-est-time} also allows for a subtle generalization of the well-known Bonnet-Myers Theorem.
\begin{proposition} Assume that $(\LL,\A,m)$ satisfies the $\BE(\K,N)$-condition and that there exist numbers $N^*>N$, $N^*\ge2$, $\mathrm{K}>0$ and a function $f\in\A$, $|f|\le 1$ such that
\begin{eqnarray}\label{mod-myers}
f^2\,\K
+
\frac12{\LL}f^2-N^*\,\Gamma(f)\ge \mathrm{K}.
\end{eqnarray}
Then
\begin{eqnarray}
\diam(X)\le \frac\pi{\sqrt{\mathrm{K}}}\cdot\sqrt{N-1+ \frac{(N-2)^2}{N^*-N}}
\end{eqnarray}
where $\diam(X)=\sup\{u(x)-u(y):\ u\in\A, \Gamma(u)\le 1\}$ denotes the diameter of $X$ w.r.t. the 'intrinsic metric' induced by $({\LL},\A)$.
\end{proposition}
The choice $f=1$ will lead (in the limit $N^*\to\infty$) to the classical Bonnet-Myers Theorem.
\begin{proof} Put $N'=N+\frac{(N-2)^2}{N^*-N}$ in the case $N\not=2$ and $N'=3$ in the case $N=2$.
According to Corollary \ref{be-tc}, the operator ${\LL}'=f^2 {\LL}$ satisfies $\BE(\K',N')$ with $\K'$ given by \eqref{k-time}.
Together with the assumption \eqref{mod-myers} this yields the $\BE(\mathrm{K},N')$-condition for ${\LL}'$.
According to \cite{BakLed96} this implies the diameter bound w.r.t. the intrinsic metric induced by ${\LL}'$.
Due to the assumption $|f|\le1$ the latter is bounded by the intrinsic metric induced by ${\LL}$.
\end{proof}
\section{Smooth Metric Measure Spaces}
Finally, we will study curvature bounds for metric measure spaces and their behavior under transformation of the data.
A triple $(X,d,m)$ is called \emph{metric measure space} if $(X,d)$ is a complete separable metric space and if $m$ is a locally finite measure on the Borel field of $X$. Without restriction we will always assume that $m$ has full topological support.
\begin{definition}
Given (extended) numbers $\KK\in\R$ and $N\in[1,\infty]$ we say that $(X,d,m)$ satisfies the \emph{entropic curvature-dimension condition} $\CD^e(\KK,N)$ if the Boltzmann entropy
\begin{eqnarray*}
S: \mu\mapsto\left\{
\begin{array}{ll}
\int\left(\frac{d\mu}{dm}\right)\log\left(\frac{d\mu}{dm}\right)\,dm&\quad \mbox{if }\mu\ll m,\\
+\infty&\quad \mbox{else.}
\end{array}\right.
\end{eqnarray*}
is $(\KK,N)$-convex on the $L^2$-Wasserstein space $({\mathcal P}_2(X), d_W)$.
\end{definition}
Here a function $S$ on a metric space $(Y,d_Y)$ is called $(\KK,N)$-convex if every pair of points $y_0,y_1\in Y$ can be joined by a (minimizing, constant speed) geodesic $\big(y(t)\big)_{0\le t\le1}$ in $Y$ such that the function $u(t)=e^{-\frac1N S(y(t))}$ is lower semicontinuous in $t\in[0,1]$, continuous in $(0,1)$ and satisfies
$$u''\le -\frac\KK N |\dot y|^2\cdot u$$
weakly in $(0,1)$.
In the limit $N\to\infty$ this leads to the usual $\KK$-convexity; if in addition $\KK=0$ it yields the classical convexity.
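Explicitly, $\KK$-convexity of $S$ means that along such geodesics
$$S(y_t)\le (1-t)\,S(y_0)+t\,S(y_1)-\frac\KK2\,t(1-t)\,d_Y^2(y_0,y_1).$$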
In the general case, $(\KK,N)$-convexity gives a precise meaning for weak solutions to the differential inequality
$$D^2 S-\frac1N \, DS\,\otimes\, DS\ge \KK$$
on geodesic spaces.
Note that the entropic curvature-dimension condition
implies that $(X,d,m)$ is a geodesic space. More precisely,
$d(x,y)
=\inf\Big\{ \int_0^1 |\dot\gamma_t|\,dt: \ \gamma:[0,1]\to X \mbox{ rectifiable, } \gamma_0=x, \gamma_1=y\Big\}$ for each $x,y\in X$.
We want to prove that the entropic curvature-dimension condition is preserved (with modified parameters) under the most natural transformations of the data $d$ and $m$. And we want to analyze how the parameters $\KK$ and $N$ will change.
The transformations which we have in mind are
\begin{itemize}
\item
given a measurable function $v$ on $X$, we replace the measure $m$ by the weighted measure with Radon-Nikodym derivative $e^v$:
\begin{equation*}
m'=e^v\,m;
\end{equation*}
\item
given a function $w$ on $X$, we replace the length metric $d$ by the weighted length metric with conformal factor $e^w$:
\begin{equation}\label{d-weighted}
d'(x,y)
=\inf\Big\{ \int_0^1 |\dot\gamma_t|\cdot e^{w(\gamma_t)}\,dt: \ \gamma:[0,1]\to X \mbox{ rectifiable, } \gamma_0=x, \gamma_1=y\Big\}.
\end{equation}
\end{itemize}
Treating these questions in full generality is beyond the scope of this paper. We will restrict ourselves here to \emph{smooth} metric measure spaces which allows us to benefit from the results of the previous chapters. The general case
requires to deal with subtle regularity issues. We refer to \cite{EKS} and \cite{Sav} for such approximation and smoothing procedures in the general case.
\begin{definition} A metric measure space $(X,d,m)$ is called \emph{smooth} if there exists a diffusion operator $\LL$ defined on an algebra $\A$ as above (chapter 2) such that $\A\subset L^2(X,m)$ and
\begin{itemize}
\item $m$ is a reversible invariant measure for $({\LL},\A)$
and $\A$ is dense in $\Dom((-\LL)^{3/2})$;
\item $d$ is the intrinsic metric for ${\LL}$, i.e. for all $x,y\in X$
$$d(x,y)=\sup\{ u(x)-u(y): \ u\in\Dom(\E)\cap {\mathcal C}_b(X), \hat\Gamma(u)\le m\}.$$
\end{itemize}
Here $\hat\Gamma(.)$ denotes the so-called energy measure, i.e. the measure-valued quadratic form on
$\Dom(\E)\cap L^\infty(X)$ extending the quadratic form
$\Gamma(.)$ on $\A$.
\end{definition}
In particular, each $u\in\A$ will be bounded and continuous.
\begin{theorem} Let $(X,d,m)$ be a smooth mms.
Given $v,w\in\A$ define $m'=e^v\,m$ and $d'$ as in
\eqref{d-weighted}.
If $(X,d,m)$ satisfies the entropic curvature-dimension condition $\CD^e(\KK,N)$ for constants $N\in[1,\infty)$ and $\KK\in\R$ then for each $N'\in(N,\infty]$ the metric measure space $(X,d',m')$ satisfies the entropic curvature-dimension condition $\CD^e(\KK',N')$ for
\begin{eqnarray}\label{mms-N}
\KK'&=& \inf_X \, e^{-2w}\Big[\KK-
{\LL}w+\Gamma(w,2w-v)\nonumber\\
&&\qquad\qquad-
\sup_{u\in\A}
\frac1{\Gamma(u)}\Big( \frac1{N'-N}\Gamma(v-Nw,u)^2+(N-2)\Gamma(w,u)^2\\
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad+ {\HH}_{v-2w}(u,u)-2 \Gamma(w,u)\Gamma(v-2w,u)\Big)\Big].\nonumber
\end{eqnarray}
If $w=0$ also $N=N'=\infty$ is admissible; if $w=\frac1N v$ also $N'=N$ is admissible.
\end{theorem}
\begin{remark} The case $w=0$ of a pure measure transformation (or 'drift transformation') is well studied, see \cite{Stu1,Stu2, LV}.
Thus let us briefly focus on the case $v=0$ of a pure metric transformation.
In this case, formula \eqref{mms-N} simplifies to
\begin{eqnarray*}
\KK'&=& \inf_X \, e^{-2w}\Big[\KK-
{\LL}w+2\Gamma(w)\nonumber\\
&&\qquad\qquad-
\sup_{u\in\A}
\frac1{\Gamma(u)}\Big( \big(\frac{N'\,N}{N'-N}+2\big)\Gamma(w,u)^2-2 {\HH}_{w}(u,u)\Big)\Big]
\end{eqnarray*}
or in terms of $f=e^{-w}$
\begin{eqnarray}\label{mms-N-no-measure}
\KK'&=& \inf_X \, \Big[\KK f^2+\frac12
{\LL}f^2-
\sup_{u\in\A}
\frac1{\Gamma(u)}\Big( \big(\frac{N'\,N}{N'-N}-
2\big)\Gamma(f,u)^2+ {\HH}_{f^2}(u,u)\Big)\Big].
\end{eqnarray}
In the case $N'=\infty$, the expression $\frac{N'\,N}{N'-N}$ simplifies to $N$.
\end{remark}
\begin{proof}
If $N=\infty$, the only admissible choice is $N'=\infty$ and $w=0$. This drift transformation is covered by \cite{Stu1, LV}.
Thus throughout the rest $N<\infty$.
Firstly, we then observe that the $\CD^e(\KK,N)$-condition implies that the underlying space is locally compact (\cite{EKS}, Prop. 3.6).
This guarantees that $(X,d,m)$ satisfies the criteria of \cite{AGS}, Def. 3.6, Def. 3.13.
Thus secondly, we conclude that the Dirichlet form $(\E,\Dom(\E))$ on $L^2(X,m)$ induced by the operator $(\LL,\A)$ (as considered in chapter \ref{df})
coincides with the Cheeger energy on $L^2(X,m)$ induced by the metric $d$ (\cite{AGS}, Thm 3.14).
In particular, $(X,d,m)$ is infinitesimally Hilbertian.
According to \cite{EKS}, Cor. 5.1, the condition $\CD^e(\KK,N)$ implies the Bakry-Ledoux gradient estimate from Lemma \ref{bochner-lemma} below
and thus $(\LL,\A)$ satisfies the Bakry-Emery condition $\BE(\KK,N)$.
According to Corollary \ref{cor-g2-est} (with $h=v-2w$), therefore, $(\LL',\A)$ satisfies
the Bakry-Emery condition $\BE(\KK',N')$ for any $N'>N$ and $\KK'$ given by
\eqref{mms-N}.
Obviously, the Dirichlet forms $\E'$ and $\E$ (as well as the measures $m$ and $m'$) are comparable and
$\hat\Gamma'(.)=e^{v-2w}\cdot\hat\Gamma(.)$.
Thus
\begin{eqnarray*}
d_{\E'}(x,y)&=&\sup\Big\{
u(x)-u(y): \ u\in\Dom(\E)\cap {\mathcal C}_b(X), \hat\Gamma(u)\le e^{2w}\, m\Big\}.
\end{eqnarray*}
Moreover, the intrinsic metrics for both Dirichlet forms are length metrics with the same set of rectifiable curves. For each rectifiable curve $\gamma:[0,1]\to X$ its length w.r.t. the metric $d_{\E'}$ therefore is
\begin{eqnarray*}
\mathrm{Length}_{\E'}(\gamma)&=& \int_0^1 |\dot\gamma_t|\cdot e^{w(\gamma_t)}\,dt.
\end{eqnarray*}
Thus $d_{\E'}$ coincides with the metric $d'$ as defined in \eqref{d-weighted}.
By assumption $\A$ is dense in $\Dom((-\LL)^{3/2})$. Thus according to Lemma \ref{D3dense}
it is also dense in $\Dom((-\LL')^{3/2})$.
Thus again by Lemma \ref{bochner-lemma}
the Bakry-Emery condition $\BE(\KK',N')$ is equivalent to the entropic curvature-dimension condition $\CD^e(\KK',N')$ for the smooth mms $(X,d',m')$.\end{proof}
To summarize
$$\CD^e(\KK,N)\mbox{ for }(X,d,m)\ \Leftrightarrow\ \BE(\KK,N)\mbox{ for }\LL\ \Rightarrow \ \BE(\KK',N')\mbox{ for }\LL'\ \Leftrightarrow \ \CD^e(\KK',N')\mbox{ for }(X,d',m').$$
\begin{lemma}\label{bochner-lemma} For any smooth mms $(X,d,m)$ and any $\KK\in\R$, $N\in[1,\infty)$ the following are equivalent
\begin{itemize}
\item[(i)] $({\LL},\A)$ satisfies the Bakry-Emery condition $\BE(\KK,N)$;
\item[(ii)] $\forall u\in\Dom((-\LL)^{3/2})$
and $\forall \phi\in\Dom({\LL})\cap L^\infty(X,m)$ with $\phi\ge0$ and ${\LL}\phi\in L^\infty(X,m)$:
\begin{equation}\label{bochner1}
\frac12\int {\LL}\phi\cdot \Gamma(u)\,dm +\int \phi \Gamma(u,{\LL}u)\,dm\ge \KK\int\phi \Gamma(u)\,dm+\frac1N \int \phi ({\LL}u)^2\,dm;
\end{equation}
\item[(iii)] $\forall u\in\Dom(\E)$ with bounded $\Gamma(u)$ and $\forall t>0$
\begin{equation*}
\Gamma(P_tu)+ \frac{4\KK t^2}{N(e^{2\KK t}-1)} ({\LL}P_tu)^2\le e^{-2\KK t}P_t\Gamma(u);
\end{equation*}
\item[(iv)] $(X,d,m)$ satisfies the entropic curvature-dimension condition $\CD^e(\KK,N)$.
\end{itemize}
\end{lemma}
If we assumed that the algebra $\A$ were invariant under the semigroup $P_t$ then the implication of (i) $\Rightarrow$ (iii) would be more or less standard.
Following \cite{Led2} we could conclude (iii) for all $f\in\A$ by a simple differentiation/integration argument. Then (iii) in full generality would follow by a straightforward density argument.
However, assuming that $\A$ is invariant under $P_t$ in general is too restrictive.
Our main challenge will be to verify the Bochner inequality with parameters $\KK$ and $N$ for a 'large' class of functions which contains $\A$ and which is invariant under $P_t$. This is property (ii).
\begin{proof}
The equivalence of (ii), (iii) and (iv) was proven in \cite{EKS}.
The implication (ii)$\Rightarrow$(i) is trivial.
To prove the converse, let us assume (i).
Multiplying this pointwise inequality for $u\in\A$ by a nonnegative $\phi$ and integrating w.r.t. $m$ yields
\begin{equation*}\label{bochner2}
\frac12\int \phi\cdot {\LL}\Gamma(u)\,dm +\int \phi \Gamma(u,{\LL}u)\,dm\ge \KK\int\phi \Gamma(u)\,dm+\frac1N \int \phi ({\LL}u)^2\,dm.
\end{equation*}
For $\phi\in\Dom(\LL)$, the symmetry of $\LL$ then yields
\eqref{bochner1} for all $u\in\A$.
By assumption, the algebra $\A$ is dense in
$\Dom((-\LL)^{3/2})$.
Any $u\in \Dom((-\LL)^{3/2})$ can therefore be approximated by $u_n\in \A$ such that
$\Gamma(u_n)\to \Gamma(u)$, $(\LL u_n)^2\to (\LL u)^2$ and $\Gamma(u_n,\LL u_n)\to \Gamma(u,\LL u)$ in $L^1(X,m)$.
Hence,
we may pass to the limit in inequality \eqref{bochner1}. This proves the claim.
\end{proof}
\medskip
{\it Note added in proof.} After finishing (the first version of) this paper, the new monograph \cite{BGL} appeared which contains at various places calculations (e.g. section 6.9.2 or C.6) similar to those in the current paper. However, none of our main results is obtained there.
\begin{abstract}
\section{}
In this community review report, we discuss applications and techniques for \textit{fast} machine learning (ML) in science---the concept of integrating powerful ML methods into the real-time experimental data processing loop to accelerate scientific discovery.
The material for the report builds on two workshops held by the Fast ML for Science community and covers three main areas: applications for fast ML across a number of scientific domains; techniques for training and implementing performant and resource-efficient ML algorithms; and computing architectures, platforms, and technologies for deploying these algorithms.
We also present overlapping challenges across the multiple scientific domains where common solutions can be found.
This community report is intended to give plenty of examples and inspiration for scientific discovery through integrated and accelerated ML solutions.
This is followed by a high-level overview and organization of technical advances, including an abundance of pointers to source material, which can enable these breakthroughs.
\tiny
\fontsize{8}{11}\helveticabold { \section{Keywords:} fast machine learning}
\end{abstract}
\clearpage
\begin{spacing}{1.3}
\makeatletter
\null\hfill\textbf{\Large\contentsname}\hfill\null\par
\@mkboth{\MakeUppercase\contentsname}{\MakeUppercase\contentsname}%
\@starttoc{toc}
\makeatother
\end{spacing}
\clearpage
\begin{spacing}{1.3}
\section*{Foreword}
\input{_s0_executivesummary}
\pagebreak
\section{Introduction}
\input{_s1_introduction}
\pagebreak
\section{Exemplars of domain applications}
\label{sec:apps}
\input{_s2_applications}
\pagebreak
\section{Key areas of overlap}
\label{sec:overlaps}
\input{_s3_overlaps}
\pagebreak
\addtocontents{toc}{\protect\setcounter{tocdepth}{5}}
\section{Technology State-of-the-Art}
\label{sec:technolog_sota}
\input{_s4_technology}
\pagebreak
\section{Outlook}
\label{sec:outlook}
\input{_s5_outlook}
\pagebreak
\section*{Acknowledgments}
We acknowledge the Fast Machine Learning collective as an open community of multi-domain experts and collaborators.
This community was important for the development of this project.
\clearpage
\bibliographystyle{frontiersFPHY}
\subsection{Large Hadron Collider}
The Large Hadron Collider (LHC) at CERN is the world's largest and highest-energy particle accelerator, where collisions between bunches of protons occur every 25 ns.
To study the products of these collisions, several detectors are located along the ring at interaction points.
The aim of these detectors is to measure the properties of the Higgs boson~\cite{Aad:2012tfa,Chatrchyan:2012ufa} with high precision and to search for new physics phenomena beyond the standard model of particle physics.
Due to the extremely high frequency of 40 MHz at which proton bunches collide, the high multiplicity of secondary particles, and the large number of sensors, the detectors have to process and store data at enormous rates. For the two multipurpose experiments, CMS~\cite{Collaboration_2008} and ATLAS~\cite{Collaboration_2008}, comprised of tens of millions of readout channels, these rates are of the order of 100 Tb/s.
Processing and storing this data presents severe challenges that are among the most critical for the execution of the LHC physics program.
The approach implemented by the detectors for data processing consists of an online processing stage, where the event is selected from a buffer and analyzed in real time, and an offline processing stage, in which data have been written to disk and are more thoroughly analyzed with sophisticated algorithms.
The online processing system, called the \emph{trigger}, reduces the data rate to a manageable level of 10\unit{Gb/s} to be recorded for offline processing.
The trigger is typically divided into multiple tiers.
Due to the limited size of the on-detector buffers, the first tier (Level-1 or L1) utilizes FPGAs and ASICs capable of executing the filtering process with a maximum latency of $\mathcal{O}(1)~\mu$s.
At the second stage, the high-level trigger (HLT), data are processed on a CPU-based computing farm located at the experimental site with a latency of up to 100 ms.
Finally, the complete offline event processing is performed on a globally distributed CPU-based computing grid.
Maintaining the capabilities of this system will become even more challenging in the near future.
In 2027, the LHC will be upgraded to the so-called High-Luminosity LHC (HL-LHC) where each collision will produce 5--7 times more particles, ultimately resulting in a total amount of accumulated data that will be one order of magnitude higher than achieved with the present accelerator.
At the same time, the particle detectors will be made larger, more granular, and capable of processing data at ever-increasing rates. Therefore, the physics that can be extracted from the experiments will be limited by the accuracy of algorithms and computational resources.
Machine learning technologies offer promising solutions and enhanced capabilities in both of these areas, thanks to their capacity for extracting the most relevant information from high-dimensional data and to their highly parallelizable implementation on suitable hardware.
It is expected that a new generation of algorithms, if deployed at all stages of data-processing systems at the LHC experiments, will play a crucial part in maintaining, and hopefully improving, the physics performance. In the following sections, a few examples of the application of machine learning models to physics tasks at the LHC are reviewed, together with novel methods for their efficient deployment in both the real-time and offline data processing stages.
\subsubsection{Event reconstruction}
\label{sec:lhceventreco}
The reconstruction of proton-proton collision events in the LHC detectors involves challenging pattern recognition tasks, given the large number ($\mathcal{O}(1000)$) of secondary particles produced and the high detector granularity. Specialized detector sub-systems and algorithms are used to reconstruct the different types and properties of particles produced in collisions.
For example, the trajectories of charged particles are reconstructed from space point measurements in the inner silicon detectors, and the showers arising from particles traversing the calorimeters are reconstructed from clusters of activated sensors.
Traditional algorithms are highly tuned for physics performance in the current LHC collision environment, but are inherently sequential and scale poorly to the expected HL-LHC conditions.
It is thus necessary to revisit existing reconstruction algorithms and ensure that both the physics and computational performance will be sufficient.
Deep learning solutions are currently being explored for pattern recognition tasks, as a significant speedup can be achieved when harnessing heterogeneous computing and parallelizable and efficient ML that exploits AI-dedicated hardware. In particular, modern architectures such as graph neural networks (GNNs) are being investigated for the reconstruction of particle trajectories and of showers in the calorimeter, as well as of the final individual particles in the event.
Much of the following work has been conducted using the TrackML dataset~\cite{trackml}, which simulates a generalized detector under HL-LHC-like pileup conditions.
Quantifying the performance of these GNNs in actual experimental data is an ongoing point of study.
For reconstructing showers in calorimeters, GNNs have been found to predict the properties of the original incident particle with high accuracy starting from individual energy deposits. The work in~\cite{Gray:2020mcm} proposes a graph formulation of pooling to dynamically learn the most important relationships between data via an intermediate clustering, and therefore removing the need for a predetermined graph structure. When applied to the CMS electromagnetic calorimeter, with single detector hits as inputs to predict the energy of the original incident particle, a 10\% improvement is found over the traditional boosted decision tree (BDT) based approach.
GNNs have been explored for a similar calorimeter reconstruction task for the high-granularity calorimeters that will replace the current design for HL-LHC.
The task will become even more challenging as such detectors will feature irregular sensor structure and shape (e.g. hexagonal sensor cells for CMS~\cite{collaboration:2017gbu}), high occupancy, and an unprecedented number of sensors.
For this application, architectures such as \textsc{EdgeConv}~\cite{DBLP:abs-1801-07829} and \textsc{GravNet/GarNet}~\cite{Qasim:2019otl} have shown promising performance in the determination of the properties of single showers, yielding excellent energy resolution and high noise rejection~\cite{Ju:2020xty}.
While these preliminary studies were focused on scenarios with low particle multiplicities, the scalability of the clustering performance to more realistic collision scenarios is still a subject of active development.
GNNs have also been extensively studied for charged particle tracking (the task of identifying and reconstructing the trajectories of individual particles in the detector)~\cite{exatrk_19,duarte_vlimant, heptrkx,dl_tracking}.
The first approaches to this problem typically utilized edge-classification GNNs in a three-step process: graphs are constructed by algorithmically constructing edges between tracker hits in a point cloud, the graphs are processed through a GNN to predict edge weights (edges belonging to true particle trajectories should receive high weights and false edges low weights), and finally, the selected edges are grouped together to generate high-weight sub-graphs which form full track candidates, as shown in Figure~\ref{fig:gnn_steps}.
\begin{figure}[ht!p]
\centering
\includegraphics[width=0.99\textwidth]{figures/GNN_steps.png}
\caption{High-level overview of the stages in a GNN-based tracking pipeline. Only a subset of the typical edge weights are shown for illustration purposes.}
\label{fig:gnn_steps}
\end{figure}
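To make the second step concrete, the following minimal sketch (in plain PyTorch; a simplified illustration, not code from any of the cited works) shows one possible structure for such an edge-classification network. All feature dimensions, layer sizes, and the toy inputs are placeholders.
\begin{verbatim}
import torch
import torch.nn as nn

class EdgeClassifier(nn.Module):
    """Toy GNN edge classifier: hit features -> edge weights in [0, 1]."""
    def __init__(self, node_dim=3, hidden_dim=64):
        super().__init__()
        self.node_net = nn.Sequential(nn.Linear(node_dim, hidden_dim),
                                      nn.ReLU())
        # The edge network scores each (source, target) pair of embeddings
        self.edge_net = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, x, edge_index):
        # x: [num_hits, node_dim], e.g. cylindrical coordinates (r, phi, z)
        # edge_index: [2, num_edges] candidate edges from graph construction
        h = self.node_net(x)
        src, dst = edge_index
        return self.edge_net(torch.cat([h[src], h[dst]], dim=1)).squeeze(-1)

# Toy usage: 100 hits, 300 candidate edges
x = torch.randn(100, 3)
edge_index = torch.randint(0, 100, (2, 300))
edge_weights = EdgeClassifier()(x, edge_index)  # input to track building
\end{verbatim}
The per-edge scores produced in this way then serve as input to the third, track-building step.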
There have been several studies building upon and optimizing this initial framework.
The ExaTrkX collaboration has demonstrated performance improvements by incorporating a recurrent GNN structure \cite{exatrk_19} and re-embedding graphs prior to training the GNNs~\cite{embedding}.
Other work has shown that using an Interaction Network architecture~\cite{battaglia2016interaction} can substantially reduce the number of learnable parameters in the GNN \cite{dezoort2021charged}; the authors also provide comprehensive comparisons between different graph construction and track building algorithms.
Recent work has also explored alternate approaches that combine graph building, GNN inference, and track construction into a single algorithm that is trainable end-to-end; in particular, instance segmentation architectures have generated promising results~\cite{thais2021instance}.
Finally, a novel approach based on GNNs~\cite{Pata:2021oez} has been proposed as an alternative solution to the so-called particle-flow algorithm that is used by LHC experiments to optimally reconstruct each individual particle produced in a collision by combining information from the calorimeters and the tracking detectors~\cite{Sirunyan:2017ulk}. The new GNN algorithm is found to offer comparable performance for charged and neutral hadrons to the existing reconstruction algorithm.
At the same time, the inference time is found to scale approximately linearly with the particle multiplicity, which is promising for its ability to maintain computing costs within budget for the HL-LHC.
Further improvements to this original approach are currently under study. First, an event-based loss, such as the object condensation approach, is being considered.
Second, a complete assessment of the physics performance remains to be performed, including the reconstruction of rare particles and other corners of the phase space.
Finally, it remains to be understood how to optimize and coherently interface this with the ML-based approach proposed for tasks downstream and upstream in the particle-level reconstruction.
\subsubsection{Event simulation}
\label{sec:lhceventsim}
The extraction of results from LHC data relies on a detailed and precise simulation of the physics of proton-proton collisions and of the response of the detector.
In fact, the collected data are typically compared to a reference model, representing the current knowledge, in order to either confirm or disprove it. Numerical models, based on Monte Carlo (MC) methods, are used to simulate the interaction between elementary particles and matter, while the Geant4 toolkit is employed to simulate the detectors. These simulations are generally very CPU intensive and require roughly half of the experiment's computing resources, with this fraction expected to increase significantly for the HL-LHC.
Novel computational methods based on ML are being explored so as to perform precise modeling from particle interactions to detector readouts and response while maintaining feasible computing budgets for HL-LHC.
In particular, numerous works have focused on the usage of generative adversarial networks or other state-of-the-art generative models to replace computationally intensive fragments of MC simulation, such as modeling of electromagnetic showers~\cite{Paganini:2017dwg,Paganini:2017hrr,deOliveira:2017pjk}, reconstruction of jet images~\cite{Musella:2018rdi} or matrix element calculations~\cite{Bendavid:2017zhk}.
In addition, the usage of ML generative models on end-to-end analysis-specific fast simulations have also been investigated in the context of Drell-Yan~\cite{Hashemi:2019fkn}, dijet~\cite{DiSipio:2019imz} and W+jets~\cite{Chen:2020uds} production.
These case-by-case proposals serve as proof-of-principle examples for complementary data augmentation strategy for LHC experiments.
\subsubsection{Heterogeneous computing}
State-of-the-art deep learning models are being explored for the compute-intensive reconstruction of each collision event at the LHC. However, their efficient deployment within the experiments' computing paradigms is still a challenge, despite the potential speed-up when the inference is executed on suitable AI-dedicated hardware. In order to gain from a parallelizable ML-based translation of traditional and mostly sequential algorithms, a heterogeneous computing architecture needs to be implemented in the experiment infrastructure.
For this reason, a comprehensive exploration of the use of CPU+GPU~\cite{Krupa:2020bwg} and CPU+FPGA~\cite{Duarte:2019fta,Rankin:2020usv} heterogeneous architectures has been carried out to achieve the desired acceleration of deep learning inference within the data processing workflow of LHC experiments. These works demonstrated that the acceleration of machine learning inference ``as a service'' represents a heterogeneous computing solution for LHC experiments that potentially requires minimal modification to the current computing model.
In this approach, the ML algorithms are transferred to a co-processor on an independent (local or remote) server by reconfiguring the CPU node to communicate with it through asynchronous and non-blocking inference requests.
With the inference task offloaded on demand to the server, the CPU can be dedicated to performing other necessary tasks within the event.
As one server can serve many CPUs, this approach has the advantage of better hardware cost-effectiveness for the same throughput compared to a direct-connection paradigm.
It also facilitates the integration and scalability of different types of co-processor devices, where the best one is chosen for each task.
Finally, existing open-source frameworks that have been optimized for fast DL on several different types of hardware can be exploited for a quick adaptation to LHC computing. In particular, one could use the Nvidia Triton Inference Server within a custom framework, so-called Services for Optimized Network Inference on Co-processors (SONIC), to enable remote gRPC calls to either GPUs or FPGAs within the experimental software, which then only has to handle the input and output conversion between event data format and inference server format.
The integration of this approach within the CMS reconstruction software has been shown to lead to a significant overall reduction in the computing demands both at the HLT and offline.
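As an illustration of the ``as a service'' pattern, the sketch below uses the NVIDIA Triton Python gRPC client to offload a single inference request; the server address, model name, and tensor names are placeholders standing in for an experiment's actual configuration.
\begin{verbatim}
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="triton-server:8001")

batch = np.random.rand(1, 20).astype(np.float32)   # stand-in for event features
inputs = [grpcclient.InferInput("input__0", batch.shape, "FP32")]
inputs[0].set_data_from_numpy(batch)
outputs = [grpcclient.InferRequestedOutput("output__0")]

def on_done(result, error):
    # called when the co-processor returns; the CPU thread was free meanwhile
    if error is None:
        print(result.as_numpy("output__0"))

# non-blocking request, matching the asynchronous pattern described above
client.async_infer("my_model", inputs, callback=on_done, outputs=outputs)
\end{verbatim}
In a SONIC-like framework this call sits behind the experiment's event data format conversion, so the client code above is the only piece that needs to know about the inference server.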
\subsubsection{Real-time analysis at 40 MHz}
Bringing deep learning algorithms to the Level-1 hardware trigger is an extremely challenging task due to the strict latency requirement and the resource constraints imposed by the system. Depending on which part of the system an algorithm is designed to run on, a latency down to $\mathcal{O}(10)~$ns might be required.
With $\mathcal{O}(100)$~processors running large-capacity FPGAs and processing thousands of algorithms in parallel, dedicated FPGA implementations are needed to make ML algorithms as resource-efficient and fast as possible.
To facilitate the design process and subsequent deployment of highly parallel, highly compressed ML algorithms on FPGAs, dedicated open-source libraries have been developed: \texttt{hls4ml}\xspace and \texttt{Conifer}\xspace.
The former, \texttt{hls4ml}\xspace, provides conversion tools for deep neural networks, while \texttt{Conifer}\xspace aids the deployment of Boosted Decision Trees (BDTs) on FPGAs. Both libraries, as well as example LHC applications, will be described in the following.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.89\textwidth]{figures/hls4ml_conifer.pdf}
\caption{Two dedicated libraries for the conversion of Machine Learning algorithms into FPGA or ASIC firmware: \texttt{hls4ml}\xspace for deep neural network architectures and \texttt{Conifer}\xspace for Boosted Decision Tree architectures. Models from a wide range of open-source ML libraries are supported and may be converted using three different high-level synthesis backends.}
\label{figs2:libraries}
\end{figure*}
The \texttt{hls4ml}\xspace library~\cite{Duarte:2018ite,aarrestad2021fast,DiGuglielmo:2020eqx,Coelho:2020zfu} converts pre-trained ML models into ultra low-latency FPGA or ASIC firmware with little overhead required.
Integration with the Google QKeras library~\cite{qkeras} allows users to design aggressively quantized deep neural networks and train them in a quantization-aware manner~\cite{Coelho:2020zfu}, down to 1 or 2 bits for weights and activations~\cite{DiGuglielmo:2020eqx}.
This step results in highly resource-efficient equivalents of the original model, sacrificing little to no accuracy in the process. The goal of this joint package is to provide a simple two-step approach going from a pre-trained floating point model to FPGA firmware.
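A minimal sketch of this two-step flow is shown below: a small quantized network is defined with QKeras and then converted with \texttt{hls4ml}\xspace. The layer sizes, bit widths, and FPGA part number are illustrative assumptions, not recommendations.
\begin{verbatim}
from tensorflow.keras import layers, Model
from qkeras import QDense, QActivation, quantized_bits, quantized_relu
import hls4ml

inp = layers.Input(shape=(16,))
x = QDense(32, kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1))(inp)
x = QActivation(quantized_relu(6))(x)
out = QDense(5, kernel_quantizer=quantized_bits(6, 0, alpha=1),
             bias_quantizer=quantized_bits(6, 0, alpha=1))(x)
model = Model(inp, out)
# ... quantization-aware training on labeled data goes here ...

config = hls4ml.utils.config_from_keras_model(model, granularity="name")
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir="hls_prj",
    part="xcvu9p-flga2104-2L-e")
hls_model.compile()  # bit-accurate C simulation; hls_model.build() synthesizes
\end{verbatim}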
The \texttt{hls4ml}\xspace library currently provides support for several commonly used neural network layers like fully connected, convolutional, batch normalization, pooling, as well as several activation functions.
These implementations are already sufficient to provide support for the most common architectures envisioned for deployment at L1.
Some first examples of machine learning models designed for the L1 trigger are based on fully connected layers, and they are proposed for tasks such as the reconstruction and calibration of final objects or lower-level inputs like trajectories, vertices, and calorimeter clusters~\cite{CERN-LHCC-2020-004}.
One example of a convolutional NN (CNN) architecture targeting the L1 trigger is a dedicated algorithm for the identification of long-lived particles~\cite{Alimena_2020}. Here, an attempt is made to efficiently identify showers from displaced particles in a high-granularity forward calorimeter. The algorithm is demonstrated to be highly efficient down to low energies while operating at a low trigger rate.
Traditionally, cut-based selection algorithms have been used for these purposes in order to meet the tight latency and resource budgets. However, with the advent of tools like \texttt{hls4ml}\xspace and QKeras, ML alternatives are being explored to improve the sensitivity to such physics processes while keeping latency and resource usage within the available budget.
More recently, (variational) auto-encoders (VAEs or AEs) are being considered for the detection of ``anomalous'' collision events, i.e. events that are not produced by standard physics processes but that could be due instead to unexpected processes not yet explored at colliders.
Such algorithms have been proposed both for the upcoming LHC run starting in 2022 and for the future high-luminosity runs, where more granular information will be available.
The common approach uses global information about the event, including a subset of individual produced particles or final objects such as jets as well as energy sums.
The algorithm trained on these inputs is then used to classify an event as anomalous if it surpasses a threshold on the degree of anomaly (typically the loss function), with the threshold ultimately set by the available bandwidth.
Deploying a typical variational autoencoder in the L1 trigger is impossible, since the bottleneck layer involves Gaussian random sampling.
The explored solution is therefore to deploy only the encoder part of the network and perform inference directly from the latent dimension. Another possibility is to deploy a simple auto-encoder with the same architecture and perform inference by computing the difference between output and input.
However, this would require buffering a copy of the input for the duration it takes the auto-encoder to process the input.
For this reason, the two methods are being considered and compared in terms of accuracy over a range of new physics processes, as well as latency and resources.
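The encoder-only variant can be made concrete with a short sketch: assuming a trained \texttt{encoder} that returns the latent mean and log-variance, the anomaly score is the KL distance of the encoded event from the unit-Gaussian prior, so no decoder (and no random sampling) is needed on the FPGA.
\begin{verbatim}
import numpy as np

def latent_anomaly_score(encoder, x):
    """KL divergence of N(mu, sigma^2) from N(0, 1), summed over latent dims."""
    z_mean, z_log_var = encoder.predict(x)
    return 0.5 * np.sum(np.square(z_mean) + np.exp(z_log_var)
                        - z_log_var - 1.0, axis=-1)

# events scoring above a bandwidth-driven threshold would be kept:
# keep = latent_anomaly_score(encoder, event_batch) > threshold
\end{verbatim}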
Finally, another interesting aspect of the \texttt{hls4ml}\xspace tool is the capability for users to easily add custom layers that might serve a specific task not captured by the most common layers supported in the library. One example of this is compressed distance-weighted graph networks~\cite{garnet}, where a graph network block called a \emph{GarNet layer} takes as input a set of $V$ vertices, each of which has $F_{in}$ features, and returns the same set of vertices with $F_{out}$ features.
To keep the dimensionality of the problem at a manageable level, the input features of each vertex are encoded and aggregated at $S$ aggregators. Message-passing is only performed between vertices and a limited set of aggregators, and not between all vertices, significantly reducing the network size.
In Ref.~\cite{garnet}, an example task of pion and electron identification and energy regression in a 3D calorimeter is studied.
A total inference latency of $\mathcal{O}(100)~$ns is reported, satisfying the L1 requirement of $\mathcal{O}(1)~\mu$s latency.
The critical resource is digital signal processing (DSP) units, with the algorithm using 29\% of the available DSPs.
This can be further reduced by taking advantage of quantization-aware training with QKeras.
Another example of a GNN architecture implemented on FPGA hardware using \texttt{hls4ml}\xspace is presented in Ref.~\cite{heintz2020accelerated}.
This work shows that a compressed GNN can be deployed on FPGA hardware within the latency and resources required by the L1 trigger system for the challenging task of reconstructing the trajectory of charged particles.
In many cases, the task to be performed is simple enough that a boosted decision tree (BDT) architecture suffices to solve the problem.
As of today, BDTs are still the most commonly used ML algorithm for LHC experiments.
To simplify the deployment of these, the library {\tt Conifer}~\cite{Summers:2020xiy} has been developed. In {\tt Conifer}, the BDT implementation targets extreme low latency inference by executing all trees, and all decisions within each tree, in parallel.
BDTs and random forests can be converted from scikit-learn~\cite{scikit-learn}, XGBoost~\cite{XGBoost}, and TMVA~\cite{TMVA}, with support for more BDT training libraries planned.
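A sketch of this conversion workflow on a toy XGBoost model is given below; the features, hyperparameters, and output directory are illustrative assumptions only.
\begin{verbatim}
import numpy as np
import xgboost as xgb
import conifer

# toy training data standing in for, e.g., track features
X = np.random.rand(1000, 10).astype(np.float32)
y = (X[:, 0] > 0.5).astype(int)
bst = xgb.train({"max_depth": 4, "objective": "binary:logistic"},
                xgb.DMatrix(X, label=y), num_boost_round=50)

cfg = conifer.backends.xilinxhls.auto_config()   # default HLS settings
cfg["OutputDir"] = "conifer_prj"
model = conifer.converters.convert_from_xgboost(bst, cfg)
model.compile()                      # bit-accurate C++ emulation
y_hls = model.decision_function(X)   # cross-check against bst.predict(...)
# model.build()                      # run the full HLS synthesis
\end{verbatim}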
There are several ongoing projects at LHC which plan to deploy BDTs in the Level-1 trigger using {\tt Conifer}. One example is a BDT designed to provide an estimate of the {\em track quality}, by learning to identify tracks that are reconstructed in error, and do not originate from a real particle~\cite{bdt_tq}.
While the accuracy and resource usage are similar between a BDT and a DNN, the latency is significantly reduced for a BDT architecture.
The algorithm is planned to be implemented in the CMS Experiment for the data-taking period beginning in 2022.
Rather than relying on open source libraries such as \texttt{hls4ml}\xspace or \texttt{Conifer}\xspace, which are based on high-level synthesis tools from FPGA vendors, other approaches are being considered based directly on hardware description languages, such as VHDL~\cite{Nottbeck:2019rqu,Fritzsche2020}.
One example is the application of ML for the real-time signal processing of the ATLAS Liquid Argon calorimeter~\cite{atlas1996atlas}.
It has been shown that, with the upgraded capabilities needed for the HL-LHC collision environment, the conventional signal processing, which applies an optimal filtering algorithm~\cite{Cleland:2002rya}, will lose performance due to the increase of overlapping signals.
More sophisticated DL methods have been found to be more suitable to cope with these challenges, as they are able to maintain high signal detection efficiency and accurate energy reconstruction.
More specifically, studies based on simulation~\cite{madysa-chep} of dilated convolutional neural networks showed promising results.
An implementation of this architecture for FPGA is designed using VHDL~\cite{Fritzsche2020} to meet the strict requirements on latency and resources required by the L1 trigger system.
The firmware runs at a multiple of the bunch crossing frequency, reusing hardware resources through time-division multiplexing; by adding pipeline stages, the maximum frequency can be increased further.
Furthermore, DSPs are chained to perform the multiply-accumulate (MAC) operations between two layers efficiently. In this way, a core frequency of more than 480\unit{MHz} could be reached, corresponding to twelve times the bunch crossing frequency.
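For orientation, the sketch below shows a dilated 1D CNN of the general kind studied for this signal-processing task; the number of layers, channels, and dilation rates are assumptions for illustration, not the architecture of Ref.~\cite{Fritzsche2020}.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(None, 1))        # digitized calorimeter samples
x = inp
for rate in (1, 2, 4, 8):                  # dilations grow the receptive field
    x = layers.Conv1D(8, kernel_size=3, dilation_rate=rate,
                      padding="causal", activation="relu")(x)
out = layers.Conv1D(1, kernel_size=1)(x)   # per-sample energy estimate
model = tf.keras.Model(inp, out)
\end{verbatim}
The causal, exponentially dilated stack lets each output sample see a long history of past samples at a cost that grows only linearly with depth, which is what makes such networks attractive for disentangling overlapping pulses.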
\subsubsection{Bringing ML to detector front-end}
While LHC detectors grow in complexity to meet the challenging conditions of higher-luminosity environments, growing data rates prohibit transmission of full event images off-detector for analysis by conventional FPGA-based trigger systems.
As a consequence, event data must be compressed on-detector in low-power, radiation-hard ASICs while sacrificing minimal physics information.
Traditionally this has been accomplished by simple algorithms, such as grouping nearby sensors together so that only these summed ``super-cells'' are transmitted, sacrificing the fine segmentation of the detector.
Recently, an autoencoder-based approach has been proposed, relying instead on a set of machine-learned radiation patterns to more efficiently encode the complete calorimeter image via a CNN.
Targeting the CMS high-granularity endcap calorimeter (HGCal)~\cite{collaboration:2017gbu} at the HL-LHC, the algorithm aims to achieve a higher-fidelity readout of electromagnetic and hadronic showers, critical for accurate particle identification.
The on-detector environment (the ECON-T concentrator ASIC~\cite{collaboration:2017gbu}) demands a highly-efficient CNN implementation; a compact design should be thoroughly optimized for limited-precision calculations via quantization-aware training tools~\cite{qkeraspaper}.
Further, to automate the design, optimization, and validation of the complex NN circuit, HLS-based tool flows~\cite{Duarte:2018ite} may be adapted to target the ASIC form factor.
Finally, as the front-end ASIC cannot be completely reprogrammed in the manner of an FPGA, a mature NN design is required from the time of initial fabrication.
However, adaptability to changing run conditions and experimental priorities over the lifetime of the experiment motivate the implementation of all NN weights as configurable registers accessible via the chip's slow-control interface.
\subsection{High intensity accelerator experiments}
\subsubsection{ML-based Trigger System at the Belle II Experiment}
\subparagraph*{Context:} The Belle II experiment in Japan is engaged in the search for physics phenomena that cannot be explained by the Standard Model. Electrons and positrons are accelerated at the SuperKEKB particle accelerator to collide at the interaction point located inside of the Belle II detector.
The resulting decay products are continually measured by the detector's heterogeneous sensor composition.
The resulting data is then stored offline for detailed analysis.
\subparagraph*{Challenges:} Due to the increasing luminosity (target luminosity is $8\times10^{35}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$), most of the recorded data is from unwanted but unavoidable background reactions, rather than electron-positron annihilation at the interaction point. Not only is storing all the data inefficient due to the high background rates, but it is also not feasible to build an infrastructure that stores all the generated data. A multilevel trigger system is used as a solution to decide online which recorded events are to be stored.
\subparagraph*{Existing and Planned Work:} The Neural Network z-Vertex Trigger (NNT) used at Belle II is a dead-time-free level 1 (L1) trigger that identifies particles by estimating their origin along the beampipe.
For the whole L1 trigger process, from data readout to the decision, a real-time 5\unit{$\mu$s} time budget is given to avoid dead-time \cite{Lai_2020}.
Due to the time cost of data pre-processing and transmission, the NNT needs to provide a decision within 300\unit{ns} processing time.
The task of the NNT is to estimate the origin of a particle track so that it can be decided whether it originates from the interaction point or not.
For this purpose, a multilayer perceptron (MLP) implemented on a Xilinx Virtex 6 XC6VHX380T FPGA is used.
The MLP consists of three layers with 27 input neurons, 81 hidden layer neurons and two output neurons.
Data from the Belle II's central drift chamber (CDC) is used for this task, since it is dedicated to the detection of particle tracks.
Before being processed by the network, the raw detector data is first combined into a 2D track based on so-called track segments, which are groupings of adjacent active sense wires.
The output of the NNT delivers the origin of the track in $z$, along the beampipe, as well as the polar angle $\theta$.
With the help of the z-vertex, the downstream global decision logic (GDL) can decide whether a track is from the interaction point or not.
In addition, the particle momentum can be detected using the polar angle $\theta$~\cite{baehr2019low}.
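For reference, the topology just described corresponds to the following Keras sketch; the activation functions and optimizer are placeholders (the NNT itself is trained offline with iRPROP, as described below), so this illustrates the network size rather than the deployed firmware.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(27,)),            # track-segment inputs from the CDC
    layers.Dense(81, activation="tanh"),  # single hidden layer
    layers.Dense(2),                      # outputs: z vertex and theta
])
model.compile(optimizer="adam", loss="mse")
\end{verbatim}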
The networks used in the NNT are trained offline.
The first networks were trained with plain simulated data because no experimental data were available.
For more recent networks, reconstructed tracks from the experimental data are used.
For training, the iRPROP algorithm is used, which is an extension of the RPROP backpropagation algorithm. Current results show a good correlation between the NNT tracks and reconstructed tracks.
Since the event rate and the background noise are currently still tolerable, the z-cut, i.e., the allowed estimated origin of a track along the beampipe for it to be kept, is chosen at $\pm 40$\,cm.
With increasing luminosity and the associated increasing background, this z-cut can be tightened.
Since the new Virtex UltraScale-based universal trigger board (UT4) became available for the NNT this year, an extension of the data preprocessing is planned.
This will be done using a 3D Hough transformation to further increase efficiency.
It has already been shown in simulation that a more accurate resolution and larger solid angle coverage can be achieved~\cite{Skambraks_2020}.
\subsubsection{Mu2e}
\subparagraph*{Context:} The Mu2e experiment at Fermilab will search for the charged lepton flavor violating process of neutrino-less $\mu \to e$ coherent conversion in the field of an aluminum nucleus.
About $7\cdot 10^{17}$ muons, provided by a dedicated muon beamline under construction at Fermilab, will be stopped in 3 years in the aluminum target.
The corresponding single event sensitivity will be $2.5\cdot 10^{-17}$.
To detect the signal $e^-$ ($p=105$\unit{MeV}), Mu2e uses a detector system made of a straw-tube tracker and a crystal electromagnetic calorimeter~\cite{MU2E}.
\subparagraph*{Challenges:} The trigger system is based on detector Read Out Controllers (ROCs), which continuously stream out zero-suppressed data to the Data Transfer Controller units (DTCs). The proton pulses are delivered at a rate of about 600 kHz and a duty cycle of about 30\% (0.4 s out of 1.4 s of the booster-ring delivery period). Each proton pulse is considered a single event, with the data from each event then grouped at a single server using a 10~Gbps Ethernet switch. Then, the online reconstruction of the events starts and makes a trigger decision. The trigger system needs to satisfy the following requirements: (1) provide efficiency better than 90\% for the signals; (2) keep the trigger rate below a few kHz -- equivalent to 7 Pb/year; (3) achieve a processing time $<5$~ms/event. Our main physics triggers use the information of the reconstructed tracks to make the final decision.
\subparagraph*{Existing and Planned Work:} The current strategy is to perform the helix pattern recognition and the track reconstruction with the CPUs of the DAQ servers, but so far this design has shown limitations in matching the required timing performance~\cite{pezzullo_gianantonio_2020_4088480}.
Another idea that the collaboration started exploring is to perform the early stage of the track reconstruction on the ROC and DTC FPGA using the High Level Synthesis tool (HLS) and the \texttt{hls4ml} package.
The Mu2e helix pattern-recognition algorithms~\cite{pezzullo_gianantonio_2020_4088480} are a natural fit for these tools for several reasons: they use neural-networks to clean up the recorded straw-hits from hits by low-momentum electrons ($p<10$\unit{MeV}) and they perform large combinatorics calculations when reconstructing the helicoidal electron trajectory.
This R\&D is particularly important for the design of the trigger system of the planned upgrade of Mu2e~\cite{abusalma2018expression}, where we expect to: (i) increase the beam intensity by at least a factor of 10, (ii) increase the duty cycle to at least 90\%, and (iii) increase the number of detector's channels to cope with the increased occupancy.
\subsection{Materials Discovery}
\subsubsection{Materials Synthesis}
\subparagraph*{Context:} Advances in electronics, transportation, healthcare, and buildings require the synthesis of materials with controlled synthesis-structure-property relationships.
To achieve application-specific performance metrics, it is common to design and engineer materials with highly ordered structures.
This directive has led to a boom in non-equilibrium materials synthesis techniques.
Most exciting are additive synthesis and manufacturing techniques, for example, 3D printing\cite{Wang2020-tv,Parekh2016-vj,Visser2015-hy,Ligon2017-dg,Zarek2016-dw} and thin film deposition\cite{Chrisey1994-gw,Richter1990-ml,Yoshino2000-oo,Kelly2000-xk,Marvel2013-cd,George2010-pb,Park2001-so}, where complex nanoscale architectures of materials can be fabricated.
To glean insight into synthesis dynamics, there has been a trend to include in situ diagnostics to observe synthesis dynamics\cite{Ojeda-G-P2017-la,Egelhoff1989-pr,Thomas1999-ij,Langereis2007-dj}.
There is less emphasis on automating the downstream analysis to turn data into actionable information that can detect anomalies in synthesis, guide experimentation, or enable closed-loop control. Part of the challenge with automating analysis pipelines for in situ diagnostics is the highly variable nature and multimodality of the measurements and the sensors.
A system might measure many time-resolved state variables (time-series) at various locations (e.g., temperature, pressure, energy, flow rate, etc.)\cite{Hansen1999-an}.
Additionally, it is common to measure time-resolved spectroscopic signals (spectrograms) that provide, for instance, information about the dynamics of the chemistry and energetic distributions of the materials being synthesized\cite{Cooks2018-jm,Termopoli2019-gb,Dauchot1995-cu,Aubriet2002-ln}.
Furthermore, there are a growing number of techniques that leverage high-speed temporally-resolved imaging to observe synthesis dynamics\cite{Trigub2017-xw,Ojeda-G-P2018-cv}.
\subparagraph*{Challenges:} Experimental synthesis tools and in situ diagnostic instrumentation are generally semi-custom instruments provided by commercial vendors.
Many of these vendors rely on proprietary software to differentiate their products from the competition. In turn, the closed nature of these tools, and even of their data schemas, makes it hard to utilize them fully. The varied nature of the sensors and of their suppliers compounds this challenge. Integration and synchronization of multiple sensing modalities require a custom software solution.
However, there is a catch-22 because the software does not yet exist.
Researchers cannot be assured that the development of analysis pipelines will contribute to their ultimate goal of discovering new materials or synthesizing materials with increased fecundity. Furthermore, there are significant workforce challenges, as most curricula emphasize Edisonian rather than computational methods in the design of synthesis. There is an urgent need for multilingual trainees fluent in typically disparate fields.
\subparagraph*{Existing and Planned Work:} Recently, the materials science community has started to embrace machine learning to accelerate scientific discovery\cite{Butler2018-qo,Schmidt2019-dz,Ramprasad2017-wp}.
However, there have been growing pains. The ability to create highly overparameterized models to solve problems with limited data provides a false sense of efficacy without the generalization required for science.
Machine learning model architectures designed for natural time-series and images are ill-suited for physical processes governed by equations.
In this regard, there is a growing body of work to embed physics in machine learning models, which serve as the ultimate regularizers.
For instance, rotational~\cite{Oxley2020-hg,Kalinin2020-xl} and Euclidean equivariance~\cite{Smidt_undated-oh,Smidt2020-sh} have been built into the model architectures, and methods to learn sparse representations of underlying governing equations have been developed\cite{Kaheman2020-zt,De_Silva2020-ef,Champion2019-kh}.
Another challenge is that real systems have system-specific discrepancies that need to be compensated for\cite{Kaheman2019-yu}. For example, a precursor from a different batch might have a slightly different viscosity that needs to be considered. There is an urgent need to develop these foundational methods for materials synthesis. Complementing these foundational studies, there has been a growing body of literature emphasizing post-mortem machine-learning-based analysis of in situ spectroscopies\cite{Provence2020-ro,Trejo2019-ph}.
As these concepts become more mature, there will be an increasing emphasis on codesign of synthesis systems, machine learning methods, and hardware for on-the-fly analysis and control.
This effort towards self-driving laboratories is already underway in wet-chemical synthesis where there are minimal dynamics, and thus, latencies are not a factor\cite{MacLeod2020-mv,Langner2020-ds}.
Future efforts will undoubtedly focus on controlling dynamic synthesis processes where millisecond-to-nanosecond latencies are required.
\subsubsection{Scanning Probe Microscopy}
\subparagraph*{Context:} Touch is the first sense humans develop. Since the atomic force microscope's (AFM) invention in 1985\cite{Binnig1986-ig}, humans have been able to ``feel'' surfaces with atomic-level resolution and pN sensitivity.
AFMs rely on bringing an atomically sharp tip mounted on a cantilever into contact with a surface. By scanning this tip, nanometer-to-atomically resolved images can be constructed by measuring the angular deflection of a laser bounced off the cantilever. This detection mechanism provides high-precision sub-angstrom measures of displacement.
By adding functionality to the probe (e.g., electrical conductivity\cite{Benstetter2009-oe}, resistive heaters\cite{King2005-sc}, single-molecule probes\cite{Oberhauser2002-cs}, and N-V centers\cite{Ariyaratne2018-hg}), scanning probe microscopy (SPM) can measure nanoscale functional properties, including electrical conductivity\cite{Seidel2010-uv,Gomez-Navarro2005-pu}, piezoresponse\cite{Jesse2011-tv}, electrochemical response\cite{Jesse2012-gh}, magnetic force\cite{Kazakova2019-dj}, magnetometry\cite{Casola2018-ms}, and much more.
These techniques have been expanded to include dynamics measurements during a tip-induced perturbation that drives a structural transformation. These methods have led to a boom in new AFM techniques, including fast-force microscopy\cite{Benaglia2018-js}, current-voltage spectroscopies\cite{Holstad2020-kq}, band-excitation-based spectroscopies\cite{Jesse2018-jw}, and full-acquisition mode spectroscopies\cite{Somnath2015-qk}.
What has emerged is a data deluge where these techniques are either underutilized or under-analyzed.
\subparagraph*{Challenges:} The key practical challenge is that it takes days to weeks to properly analyze data from a single measurement.
As a result, experimentalists have little information on how to design their experiments.
There is even minimal feedback on whether the experiments have artifacts (e.g., tip damage) that would render the results unusable. The number of costly failed experiments is a strong deterrent to conducting advanced scanning probe spectroscopies and developing even more sophisticated imaging techniques.
There is a significant challenge in both the acceleration and automation of analysis pipelines.
\subparagraph*{Existing and Planned Work:} In materials science, scanning probe microscopy has quickly adopted machine learning.
Techniques for linear and nonlinear spectral unmixing provide rapid visualization and extraction of information from these datasets to discover and unravel physical mechanisms~\cite{Ziatdinov2020-nt,Collins2020-na,Kalinin2021-gp,Collins2020-ml}.
The ease of applying these techniques has led to justified concerns about the overinterpretation of results and overextension of linear models~\cite{Griffin2020-mc} to highly nonlinear systems.
More recently, long short-term memory autoencoders have been constrained to have non-negative and sparse latent spaces for spectral unmixing.
By traversing the learned latent space, it has been possible to draw complex structure-property relationships~\cite{Agar2019-eq,Holstad2020-kq}.
There are significant opportunities to accelerate the computational pipeline such that information can be extracted on practically relevant time scales by the experimentalist on the microscope.
Due to the high velocity of data, up to GB/s, with sample rates of up to 100,000 spectra per second, extracting even cursory information will require the confluence of data-driven models, physics-informed machine learning, and AI hardware.
As a tangible example, in band-excitation piezoresponse force microscopy, the frequency-dependent cantilever response is measured at rates up to 2,000 spectra-per-second.
Extracting the parameters from these measurements requires fitting the response to an empirical model. Using least-squares fitting, throughput is limited to $\sim$50 fits per core-minute, but neural networks provide an opportunity to accelerate the analysis and better handle noisy data~\cite{Borodinov2019-pn}.
There is an opportunity to deploy neural networks on GPU or FPGA hardware accelerators to approximate and accelerate this pipeline by orders of magnitude.
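A hypothetical sketch of such a surrogate is shown below: a small dense network maps each measured frequency spectrum directly to the parameters of the empirical response model (here a simple-harmonic-oscillator-style parameterization), replacing the per-spectrum least-squares fit with a batched forward pass. The input length and layer sizes are assumptions.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

n_bins = 256   # frequency bins per measured spectrum (illustrative)
model = tf.keras.Sequential([
    layers.Input(shape=(n_bins,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(4),   # amplitude, resonance frequency, Q factor, phase
])
model.compile(optimizer="adam", loss="mse")
# trained on spectra with known (least-squares) parameters, the network can
# then evaluate large batches of spectra in a single accelerated forward pass
\end{verbatim}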
\subsection{Fermilab Accelerator Controls}
\subparagraph*{Context:}
The Fermi National Accelerator Laboratory (Fermilab) is dedicated to investigating matter, energy, space, and time \cite{fermilab_about}.
For over 50 years, Fermilab's primary tool for probing the most elementary nature of matter has been its vast accelerator complex. Spanning a number of miles of tunnels, the accelerator complex is actually multiple accelerators and beam transport lines, each representing different accelerator techniques and eras of accelerator technologies.
In its long history, Fermilab's accelerator complex has had to adapt to the mission, asking more of the accelerators than they were designed for and often for purposes they were never intended.
This often resulted in layering new controls on top of existing antiquated hardware.
Until recently, accelerator controls focused mainly on providing tools and data to the machine operators and experts for tuning and optimization.
Having recognized the future inadequacies of the current control system and the promise of new technologies such as ML, the Fermilab accelerator control system will be largely overhauled in the coming years as part of the Accelerator Controls Operations Research Network (ACORN) project~\cite{acorn_paper}.
\subparagraph*{Challenges:}
The accelerator complex brings unique challenges for machine learning. Particle accelerators are immensely complicated machines, each consisting of many thousands of variable components and an even larger number of data sources.
Their large size and differing types, resolution, and frequency of data mean collecting and synchronizing data is difficult.
Also, as one might imagine, control and regulation of beams that travel at near light speeds is always a challenge.
Maintaining and upgrading the accelerator complex controls is costly.
For this reason, much of the accelerator complex is a mixture of obsolete, new and cutting edge hardware.
\subparagraph*{Existing and Planned Work:}
Traditional accelerator controls have focused on grouping like elements so that particular aspects of the beam can be tuned independently. However, many elements are not always completely separable.
Magnets, for example, often have higher-order fields that affect the beam in different ways than is the primary intent.
Machine learning has finally made it possible to combine readings and beam-control elements previously believed to be unrelated into novel control and regulation schemes.
One such novel regulation project is underway for the Booster Gradient Magnet Power Supply (GMPS). GMPS controls the primary trajectory of the beam in the Booster~\cite{operations_booster_rookie_book}.
The project hopes to increase the regulation precision of GMPS ten-fold.
When complete, GMPS would be the first FPGA online ML-model-based regulation system in the Fermilab accelerator complex~\cite{john2021realtime}.
The promise of ML for accelerator controls is so apparent to the Department of Energy that a call for accelerator controls using ML was made to the national labs \cite{doe_foa_lab_20-2261}. One of the two proposals submitted by Fermilab and approved by the DOE is the Real-time Edge AI for Distributed Systems (READS) project, which actually comprises two projects.
The first READS project will create a complementary ML regulation system for slow extraction from the Delivery Ring to the future Mu2e experiment~\cite{bartoszek2015mu2e}.
The second READS project will tackle a long-standing problem with de-blending beam losses in the Main Injector (MI) enclosure. The MI enclosure houses two accelerators, the MI and the Recycler.
During normal operation, high intensity beams exist in both machines.
Both READS projects will make use of FPGA online ML models for inference and will collect data at low latencies from distributed systems around the accelerator complex~\cite{seiya2021accelerator}.
\subsection{Neutrino and direct dark matter experiments}
\subsubsection{Accelerator Neutrino Experiments}\label{sec:nuaccel}
\subparagraph*{Context:} Accelerator neutrino experiments detect neutrinos with energies ranging from a few tens of MeV up to about 20\unit{GeV}.
The detectors can be anywhere from tens of meters away from the neutrino production source to as far away as 1500\unit{km}.
For experiments with longer baselines it is common to operate both a near detector ($\sim$1\unit{km} baseline) and a more distant far detector (baseline of hundreds of km).
Accelerator neutrino experiments focused on long-baseline oscillations use highly pure muon neutrino beams, produced by pion decays in flight.
By using a system of magnetic horns it is possible to produce either a neutrino, or antineutrino beam. This ability is particularly useful for CP-violation measurements.
Other experiments use pions decaying at rest, which produce both muon and electron flavors.
The primary research goal of many accelerator neutrino experiments is to perform neutrino oscillation measurements; the process by which neutrinos created in one flavor state are observed interacting as different flavor states after traveling a given distance.
Often this takes the form of measuring electron neutrino appearance and muon neutrino disappearance. The rate of oscillation is energy-dependent, and so highly accurate energy estimation is essential.
Another key research goal for accelerator neutrinos is to measure neutrino cross-sections, which in addition to accurate energy estimation requires the identification of the particles produced by the neutrino interaction.
\subparagraph*{Challenges:} Accelerator neutrino experiments employ a variety of detector technologies.
These range from scintillator detectors such as NOvA (liquid), MINOS (solid), and MINERvA (solid), to water
Cherenkov detectors such as T2K, and finally liquid argon time projection chambers such as MicroBooNE, ICARUS, and DUNE.
Pion decay-at-rest experiments (COHERENT, JSNS$^2$) use yet different technologies (liquid and solid scintillators, as well as solid-state detectors). The individual challenges and solutions are unique to each experiment, though common themes do emerge.
Neutrino interactions are fairly uncommon due to their low cross-section.
Some experiments can see as few as one neutrino interaction per day. This, combined with many detectors being close to the surface, means that analyses have to be highly efficient whilst achieving excellent background rejection.
This is true both in online data taking and offline data analysis.
As experiments typically have very good temporal and/or spatial resolution it is often fairly trivial to isolate entire neutrino interactions.
This means that it is then possible to use image recognition tools such as CNNs to perform classification tasks.
As a result, many experiments initially utilized variants of GoogLeNet, though many are now transitioning to use GNNs and networks better able to identify sparse images.
\subparagraph*{Existing and Planned Work:} As discussed in Section~\ref{sec:nu_astro}, DUNE will use machine learning in its triggering framework to handle its immense data rates and to identify candidate interactions, for both traditional neutrino oscillation measurements and for candidate solar and supernova events.
Accelerator neutrino experiments have successfully implemented machine learning techniques for a number of years, the first such example being in 2017~\cite{Adamson_2017}, where the network increased the effective exposure of the analysis by 30\%. Networks aimed at performing event classification are common across many experiments, with DUNE having recently published a network capable of exceeding its design sensitivity on simulated data and which includes outputs that count the numbers of final state particles from the interaction~\cite{Abi_2020}.
Experiments are becoming increasingly cognizant of the dangers of networks learning features of the training data beyond what is intended. For this reason, it is essential to carefully construct training datasets such that this risk is reduced.
However, it is not possible to correct or quantify bias which is not yet known; therefore the MINERvA experiment has explored the use of a domain adversarial neural network~\cite{Perdue_2018} to reduce unknown biases from differences in simulated and real data.
The network features a gradient reversal layer in the domain network (trained on data), thus discouraging the classification network (trained on simulation) from learning from any features that behave differently between the two domains.
A more thorough exploration of machine learning applied to accelerator neutrino experiments can be found in Ref.~\cite{Psihas_2020}.
\subsubsection{Neutrino Astrophysics} \label{sec:nu_astro}
\subparagraph*{Context:} Neutrino astrophysics spans a wide range of energies, with neutrinos emitted from both steady-state and transient sources with energies from less than MeV to EeV scale.
Observations of astrophysical neutrinos are valuable both for the understanding of neutrino sources and for probing fundamental physics.
Neutrino detectors designed for observing these tend to be huge scale (kilotons to megatons).
Existing detectors involve a diverse range of materials and technologies for particle detection; they include Cherenkov radiation detectors in water and ice, liquid scintillator detectors, and liquid argon time projection chambers.
Astrophysical neutrinos are one kind of messenger contributing to the thriving field of \textit{multimessenger astronomy}, in which signals from neutrinos, charged particles, gravitational waves, and photons spanning the electromagnetic spectrum are observed in coincidence.
This field has had some recent spectacular successes~\cite{Abbott_2017,AaAc2018,GrFo2020}.
For multimessenger transient astronomy, time is of the essence for sharing data and locating sources.
\textit{Directional information} from the neutrinos is critically valuable, to allow prompt location of the source by other messengers.
Potential interesting transient astrophysical sources include sources of ultra-high energy neutrinos, as well as nearby stellar core collapses.
Neutrinos in the multi-GeV and higher range are emitted from distant cosmic sources, including kilonovae and blazars; cubic-km-scale water-based Cherenkov detectors such as IceCube at the South Pole can produce fast alerts from single neutrino observations.
Core-collapse supernovae are another promising use case for fast machine learning.
These are copious sources of few tens of MeV-scale neutrinos, which are emitted in a burst lasting a few tens of seconds~\cite{Scholberg:2012id,Mirizzi:2015eza}.
The neutrinos are prompt after core collapse (as will be gravitational waves) but observable electromagnetic radiation will not emerge for anywhere from tens to 10$^6$\unit{s}, depending on the nature of the progenitor and its envelope~\cite{Kistler:2012as}.
Low-latency information is therefore immensely valuable.
Core-collapse supernovae are rare events within the distance range observable by current and near-future neutrino detectors.
They occur only every several decades, which makes prompt and robust detection especially important.
The SuperNova Early Warning System~\cite{Antonioli:2004zb,Kharusi:2020ovw} aims to provide a prompt alert from a coincidence of burst detections.
However, pointing information from neutrinos is relatively difficult to extract promptly. Detectors with the capability for prompt pointing thanks to the anisotropy of neutrino interactions (i.e. the interaction products that remember where the neutrino came from) offer the best prospects, but these need to be able to select neutrino events from background and reconstruct their directions with very low latency.
Presupernova neutrinos are another interesting possibility. In the final stages of stellar burning, one expects a characteristic uptick in neutrino luminosity and average energy, producing observable events in detectors for nearby progenitors.
This could give a warning of hours or perhaps days before core collapse for the nearest progenitors. For this case, fast selection of neutrino-like events and reconstruction of their directional information for background reduction is needed.
\subparagraph*{Challenges:}
The challenges, in general, are fast selection and reconstruction of neutrino event (interaction) information.
The specifics of the problem depend on the particular detector technology, but in general, the charged particle products of a neutrino interaction will have a distinctive topology or other signature and must be selected from a background of cosmic rays, radiologicals, or detector noise.
Taking as an example a liquid argon time projection chamber like the Deep Underground Neutrino Experiment (DUNE), neutrino-induced charged particles produce charge and light signals in liquid argon.
Supernova neutrino interactions appear as small (tens of cm spatial scale) stubs and blips~\cite{Abi:2020lpk, Abi:2020evt}.
The recorded neutrino event information from the burst can be used to reconstruct the supernova direction to $\sim$5--10$^\circ$ for core collapse at 10\unit{kpc} distance~\cite{ajpointingtalk,Abi:2020evt}.
The neutrino events need to be selected from a background of radioactivity and cosmogenics, as well as detector noise, requiring background reduction of many orders of magnitude.
The total data rate amounts to $\sim$40\unit{Tb/s}.
The detector must take data for a decade or more at this rate, with near-continuous uptime.
For steady-state signals such as solar neutrinos, triggering on individual events in the presence of large backgrounds is a challenge that can be addressed with machine learning.
For burst signals, the triggering is a different problem: the general strategy is to read out all information on every channel within a tens-of-seconds time window, for the case of a triggered burst.
This leads to the subsequent problem of sifting the signal events and reconstructing sufficient information on a very short timescale to point back to the supernova.
The required timescale is minutes, or preferably seconds.
Both the event-by-event triggering and fast directional reconstruction can be addressed with fast machine learning.
\subparagraph*{Existing and Planned Work:}
There are a number of existing efforts towards the use of machine learning for particle reconstruction in neutrino detectors including water Cherenkov, scintillator, and liquid argon detectors.
These overlap to some extent with the efforts described in Sec.~\ref{sec:nuaccel}.
Efforts directed specifically towards real-time event selection and reconstruction are ramping up.
Some examples of ongoing efforts can be found in Refs.~\cite{Abi_2020,Drielsma:2021jdv,Qian:2021vnh,Acciarri:2020ond,Abratenko:2020pbp,Wang:2020fjr,Psihas_2020}.
\subsubsection{Direct Detection Dark Matter Experiments}
\subparagraph*{Context:} Direct dark matter (DM) search experiments take advantage of the vastly abundant DM in the universe and are searching for direct interactions of DM particles with the detector target material.
The various target materials can be separated into two main categories, crystals and liquid noble gases, though other material types are subject to ongoing detector R\&D efforts~\cite{alexander2016dark, Schumann_2019}.
One of the most prominent particle DM candidates is the WIMP (weakly interacting massive particle), a thermal, cold DM candidate with an expected mass and coupling to Standard Model particles at the weak scale~\cite{Jungman_1996}.
However, decades of intensive searches both at direct DM and at collider experiments have not yet been able to discover\footnote{The DAMA/NaI and subsequent DAMA/LIBRA experiment, claim the direct observation of DM particles in the galactic halo \cite{Bernabei_2013}, but the results are in tension with negative results from similar experiments~\cite{Schumann_2019}.} the vanilla WIMP while excluding most of the parameter space of the simplest WIMP hypothesis~\cite{Schumann_2019}.
This has led to a paradigm shift for thermal DM towards increasingly lower masses well below 1\unit{GeV} (and thus the weak scale) \cite{B_hm_2004} and as low as a few keV, i.e. the warm DM limit \cite{Weinberg_2015}.
Thermal sub-GeV DM is also referred to as light dark matter (LDM).
Other DM candidates that are being considered include non-thermal, bosonic candidates like dark photons, axions and axion-light particles (ALPs)~\cite{Peccei:2006as, Svrcek:2006yi, Holdom:1985ag}.
The most common interactions direct DM experiments are trying to observe are thermal DM scattering off either a nucleus or an electron, and the absorption of dark bosons with the emission of an electron.
The corresponding signatures are either nuclear recoil or electron recoil signatures.
\subparagraph*{Challenges:} In all mentioned interactions, and independent of the target material, a lower DM mass means a smaller energy deposition in the detector and thus a signal amplitude closer to the baseline noise.
Typically, the baseline noise has non-Gaussian contributions that can fire a simple amplitude-over-threshold trigger even if the duration of the amplitude above threshold is taken into account.
The closer the trigger threshold is to the baseline, the higher the rate of these spurious events.
In experiments which cannot read out raw data continuously and which have constraints on the data throughput, the hardware-level trigger threshold has thus to be high enough to significantly suppress accidental noise triggers.
In the hunt for increasingly lower DM masses, however, an as-low-as-possible trigger threshold is highly desirable, calling for a more sophisticated and extremely efficient event classification at the hardware trigger level.
Particle-induced events have a known, and generally constant, pulse-shape while non-physical noise ``events'' (e.g. induced by the electronics) generally have a varying pulse-shape which is not necessarily predictable.
A promising approach in such a scenario is the use of machine learning techniques for highly efficient noise-event rejection in real time, allowing the hardware-level trigger threshold, and thus the low-mass reach of most if not all direct DM searches, to be lowered while remaining within the raw-data read-out limitations imposed by the experimental set-up.
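As a hypothetical illustration of such a real-time classifier, the sketch below uses a small 1D CNN to separate particle-like pulses (fixed shape) from electronics noise (variable shape); the trace length and layer configuration are assumptions, and a quantized version of such a network could in principle be deployed at the FPGA trigger level.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

n_samples = 512   # digitized samples per trace (illustrative)
model = tf.keras.Sequential([
    layers.Input(shape=(n_samples, 1)),
    layers.Conv1D(8, 16, strides=4, activation="relu"),
    layers.Conv1D(16, 8, strides=4, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),   # P(particle-like pulse)
])
model.compile(optimizer="adam", loss="binary_crossentropy")
\end{verbatim}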
\subparagraph*{Existing and Planned Work:} Machine learning is already applied by various direct DM search experiments \cite{Khosa_2020, Szydagis:2021hfh, Simola_2019}, especially in the context of offline data analyses.
However, it is not yet used to its full potential within the direct DM search community.
Activities in this regard are still ramping up but with increasing interest, efforts, and commitment.
Typical offline applications to date are the reconstruction of the energy or position of an event and the classification of events (e.g. signal against noise or single-scattering against multiple-scattering).
In parallel, R\&D has started on real-time event classification within the FPGA-level trigger architecture of the SuperCDMS experiment \cite{Agnese_2017}, with the long-term goal of lowering the trigger threshold notably closer to the baseline noise without triggering on spurious events.
While these efforts are being conducted within the context of SuperCDMS, the goal is a modular trigger solution for easier adaptation to other experiments.
\subsection{Electron-Ion Collider}
\subparagraph*{Context:}
The Electron-Ion Collider (EIC) will support the exploration of nuclear physics over a wide range of center-of-mass energies and ion species, using highly-polarized electrons to probe highly-polarized light ions and unpolarized heavy ions.
The frontier accelerator facility will be designed and constructed in the U.S. over the next ten years. The requirements of the EIC are detailed in a white paper~\cite{Accardi:2012qut}, the 2015 Nuclear Physics Long Range Plan~\cite{Geesaman:2015fha}, and an assessment of the science by the National Academies of Science~\cite{NAS:2018eic}.
The EIC's high luminosity and highly polarized beams will push the frontiers of particle accelerator science and technology and will enable us to embark on a precision study of the nucleon and the nucleus at the scale of sea quarks and gluons, over all of the kinematic range that is relevant as described in the EIC Yellow Report~\cite{AbdulKhalek:2021gbh}.
\subparagraph*{Challenges:}
While event reconstruction at the EIC is likely easier than the same task at the present LHC or RHIC hadron machines, and much easier than at the High-Luminosity LHC, which will start operating two years earlier than the EIC, possible contributions from machine backgrounds pose a challenge.
The expected gain in CPU performance over the next ten years, as well as possible improvements in the reconstruction software from the use of AI and ML techniques, give a considerable margin to cope with the higher event complexity that may come with higher background rates.
Software design and development will constitute an important ingredient for the future success of the experimental program at the EIC.
Moreover, the cost of IT-related components, from software development to storage systems and distributed complex e-infrastructures, can rise considerably if proper understanding and planning are not incorporated from the beginning into the design of the EIC.
The planning must include AI and ML techniques, in particular for the compute-detector integration at the EIC, and training in these techniques.
\subparagraph*{Existing and Planned Work:}
Accessing the EIC physics of interest requires an unprecedented integration of the interaction region (IR) and detector designs.
The triggerless DAQ scheme that is foreseen for the EIC will extend the highly integrated IR-detector designs to analysis.
Seamless data processing from DAQ to analysis at the EIC would make it possible to streamline workflows, e.g., in a combined software effort for the DAQ, online, and offline analysis, as well as to utilize emerging software technologies, in particular fast ML algorithms, at all levels of data processing.
This will provide an opportunity to further optimize the physics reach of the EIC.
The status and prospects for ``AI for Nuclear Physics'' have been discussed in a workshop in 2020~\cite{Bedaque:2021bja}.
Topics related to fast ML are intelligent decisions about data storage and (near) real-time analysis.
Intelligent decisions about data storage are required to ensure the relevant physics is captured.
Fast ML algorithms can improve the data taken through data compactification, sophisticated triggers, and fast online analysis.
At the EIC, this could include automated alignment and calibration of the detectors as well as automated data-quality monitoring.
A (near) real-time analysis and feedback enables quick diagnostics and optimization of experimental setups as well as significantly faster access to physics results.
\subsection{Gravitational Waves}
\subparagraph*{Context:}
As predicted by Einstein in 1916, gravitational waves are fluctuations in the gravitational field which, within the theory of general relativity, manifest as a change in the spacetime metric.
These ripples in the fabric of spacetime travel at the speed of light and are generated by changes in the mass quadrupole moment, as, for example, in the case of two merging black holes~\cite{PhysRevLett.116.061102}.
To detect gravitational waves, the LIGO/Virgo/KAGRA collaborations employ a network of kilometer-scale laser interferometers~\cite{aLIGO, Acernese_2014, Affeldt_2014, PhysRevD.88.043007}.
An interferometer consists of two perpendicular arms; as the gravitational wave passes through the instrument, it stretches one arm while compressing the other in an alternating pattern dictated by the gravitational wave itself.
Such length difference is then measured from the laser interference pattern.
Gravitational waves are providing a unique way to study fundamental physics, including testing the theory of general relativity at the strong field regime, the speed of propagation and polarization of gravitational waves, the state of matter at nuclear densities, formation of black holes, effects of quantum gravity and more.
They have also opened up a completely new window for observing the Universe and in a complementary way to one enabled by electromagnetic and neutrino astronomy.
This includes studying populations of compact objects such as binary black holes and neutron stars, including their formation and evolution, establishing the origin of gamma-ray bursts (GRBs), measuring the expansion of the Universe independently of electromagnetic observations, and more~\cite{PhysRevLett.119.161101}.
\subparagraph*{Challenges:}
In the next observing run in 2022, LIGO, Virgo, and KAGRA will detect an increasing number of gravitational-wave candidates.
This poses a computational challenge to the current detection framework, which relies on matched-filtering techniques that match parameterized waveforms (templates) from simulations against the gravitational-wave time series data~\cite{PhysRevLett.116.061102, Vas2001, PhysRevD.44.3819}.
Matched filtering scales poorly as the low-frequency sensitivity of the instrument improves and the search parameter space of the gravitational wave expands to cover spin effects and low mass compact objects.
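For reference, the matched-filter signal-to-noise ratio evaluated for every template can be written schematically as
\[
\rho^2(t) \;\propto\; \left|\, 4 \int_0^{\infty} \frac{\tilde{s}(f)\,\tilde{h}^{*}(f)}{S_n(f)}\, e^{2\pi i f t}\, \mathrm{d}f \,\right|^2 ,
\]
where $\tilde{s}(f)$ is the Fourier transform of the data, $\tilde{h}(f)$ a template waveform, and $S_n(f)$ the one-sided noise power spectral density; the cost of evaluating this integral over ever larger template banks is what drives the poor scaling noted above.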
To estimate the physical properties of the gravitational wave, stochastic Bayesian posterior samplers, such as Markov-chain Monte Carlo and Nested Sampling, have been used until now.
Such analysis approaches can take hours to days to complete~\cite{Abbott_2016}.
The latency introduced by the current search and parameter estimation pipeline is non-negligible and can hinder electromagnetic follow-ups of time-sensitive sources like binary neutron stars, supernovae, and other, yet unknown, systems.
Observations of gravitational-wave transients are also susceptible to environmental and instrumental noise.
Transient noise artifacts can be misidentified as a potential source, especially when the gravitational-wave transients have an unknown morphology (e.g. supernovae, neutron star glitches).
Line noise in the noise spectrum of the instruments can affect the search for continuous gravitational waves (e.g., spinning neutron stars) and stochastic gravitational waves (e.g., an astrophysical background of gravitational waves from unresolved compact binary systems).
These noise sources are difficult to simulate, and current noise subtraction techniques are insufficient to remove the more complex noise sources, such as non-linear and non-stationary ones.
\subparagraph*{Existing and Planned Work:}
In recent years, machine learning algorithms have been explored in different areas of gravitational-wave physics~\cite{Cuoco_2020}.
CNNs have been applied to detect and categorize compact binary coalescence gravitational waves~\cite{PhysRevLett.120.141103, Kim_2015, PhysRevD.101.083006, George_2018, Gebhard_2019}, burst gravitational waves from core-collapse supernovae~\cite{Astone_2018, Chan_2020, Iess_2020}, and continuous gravitational waves~\cite{Dreissigacker_2019, Beheshtipour_2020}.
In addition, autoencoders based on recurrent neural networks (RNNs) have been explored to detect gravitational waves with an unsupervised strategy~\cite{moreno2021source}. FPGA-based RNNs are also being explored, showing potential for low-latency detection of gravitational waves~\cite{que2021accelerating}.
Applications of ML in searches of other types of gravitational waves, such as generic burst and stochastic background, are currently being explored.
Moreover, probabilistic and generative ML models can be used for posterior sampling in gravitational-wave parameter estimation; they achieve performance comparable to Bayesian samplers on mock data while taking significantly less time to complete~\cite{shen2019deterministic, gabbard2020bayesian, PhysRevLett.124.041102}.
ML algorithms are also being used to improve the gravitational-wave data quality and subtract noise.
Transient noise artifacts can be identified and categorized from their time-frequency transforms and constant-Q transforms~\cite{Zevin_2017, Razzano_2018} or through examining hundreds of thousands of LIGO's auxiliary channels~\cite{iDQ2013}.
These auxiliary channels can also be used to subtract quasi-periodic noise sources (e.g. spectral lines)~\cite{PhysRevD.101.042003, Ormiston_2020}.
Although ML algorithms have shown a lot of promise in gravitational-wave data analysis, many of these algorithms are still at the proof-of-concept stage and have not yet been successfully applied in real-time analysis.
Current efforts seek to create a computational infrastructure for low-latency analysis, improve the quality of the training data (e.g. expanding the parameter space, using a more realistic noise model), and better quantify the performance of these algorithms on longer stretches of data.
\subsection{Biomedical engineering}
\subparagraph*{Context:} We have seen an explosion of biomedical data, such as biomedical images, genomic sequences, and protein structures, due to the advances in high-resolution and high-throughput biomedical devices.
AI-augmented reality-based microscopy~\cite{Chen2019-ze} enables automatic analysis of cellular images and real-time characterization of cells.
Machine learning is used for \textit{in silico} prediction of fluorescent labels, label-free rare cell classification, morphology characterization, and RNA sequencing~\cite{Christiansen2018-eu,Wang2020-lr,Siu2020-kd,Tang2018-mj,Li2020-cx}.
For in-situ cell sorting, real-time therapy response prediction, and augmented reality microscope-assisted diagnosis~\cite{Chen2019-ze,Nitta2018-bc,Sakellaropoulos2019-tq}, it is important to standardize and optimize the data structures used in deep learning models to increase speed and efficiency.
Various machine-learning-based algorithms for detecting hemorrhage and lesions, accelerating diagnosis, and enhancing medical video and image quality have also been proposed for biopsy analysis and surgery assistance.
\subparagraph*{Challenges:} A major challenge for clinical application of ML is inadequate training and testing data. The medical data annotation process is both time-consuming and expensive for large image and video datasets which require expert knowledge.
The latency of trained models' inference also introduces computational difficulties in performing real-time diagnosis and surgical operation.
Quality-of-service requirements for time-critical health care demand end-to-end latencies below 300 milliseconds for real-time video communication~\cite{Shukla2019-bz}. To reach high-quality medical video at 60 frames per second (FPS), less than 17 milliseconds is available per frame, so the efficiency and performance of the deep learning model become crucial.
\subparagraph*{Existing and Planned Work:} Many changes in ML algorithms have involved improvements to performance both in accuracy and inference speed. Some state-of-art machine learning models can reach a high speed for inference.
For example, \textit{YOLOv3-tiny}~\cite{Adarsh2020-hq}, an object detection model commonly used for medical imaging, can process images at over 200 FPS on a standard dataset while producing reasonable accuracy.
Currently, GPU- and FPGA-based models~\cite{Satpathy2020-gs,Chang2020-ob,Zhang2020-bb}, distributed networks of wireless sensors connected to cloud ML (edge computing), and 5G high-speed-WiFi-based ML models are deployed in medical AI applications~\cite{Chen2018-qx,Zhang2020-ze,Morocho-Cayamcela2019-gt}.
ML models for fast diagnosis of stroke, thrombosis, colon polyps, cancer, and epilepsy have significantly reduced the time in lesion detection and clinical decision~\cite{Lee2020-oj,Nafee2020-yy,Nogueira-Rodriguez2020-zd,Bagheri2019-ee,Horie2019-hz}.
Real-time AI-assisted surgery can improve perioperative workflow and perform video segmentation~\cite{Volkov2017-oy}, detection of surgical instruments~\cite{Choi2017-iv}, and visualization of tissue deformation~\cite{Tonutti2017-vv}.
High-speed ML is playing a critical role in digital health, \textit{i.e.}, remote diagnosis, surgery, and monitoring \cite{Zhang2020-ze}.
\subsection{Health Monitoring}
\subparagraph*{Context:} Our habits and behaviors affect our health and wellness. Unhealthy behaviors such as smoking, consuming excessive alcohol, or medication non-adherence often have an adverse effect on our health~\cite{baker2000health,klesges1989smoking,sokol2005impact,white2013burden}.
Traditional behavior monitoring approaches relied on self-reports, which were often biased and required intense manual labor~\cite{althubaiti2016information}.
With the advent of mobile and wearable devices, it is gradually becoming possible to monitor various human behaviors automatically and unobtrusively.
Over the years, researchers have either developed custom wearable hardware or have used off-the-shelf commercial devices for mobile and wearable health (mHealth) monitoring~\cite{dong2012new,parate2014risq,ali2012mpuff,sen2020annapurna,bi2018auracle,zhang2020necksense,mishra2020continuous}. The automatic and unobtrusive monitoring capability of these devices makes it possible to detect, identify and monitor behaviors, including unhealthy behaviors in a free-living setting.
\subparagraph*{Challenges:} There are various challenges associated with monitoring habits and behaviors using wearable devices. Firstly, these devices should be capable of monitoring unhealthy behaviors accurately, and in real-time.
The occurrence of these unhealthy behaviors in a free-living setting is often sparse as compared to other behaviors and thus it is important to spot them accurately, whenever they occur.
Most existing systems take an offline ML approach of detecting these unhealthy behaviors, where the ML algorithm identifies these behaviors well after they have occurred. An offline approach prevents providing interventions that can minimize unhealthy behaviors.
Thus, it is necessary to develop ML approaches that can detect these behaviors online, and in real-time, so that interventions such as just-in-time adaptive interventions (JITAIs) can be delivered.
Secondly, since these devices capture sensitive information, it is necessary to ensure that an individual's privacy is preserved.
Privacy-preserving approaches such as locally processing the data on-device can be taken so that critical information does not leave the device.
Finally, these behaviors can occur in various heterogeneous environments and thus the health monitoring system should be agnostic to where the behavior occurs.
Such monitoring requires developing multiple machine learning models for diverse environments.
\subparagraph*{Existing and Planned Work:} While existing work has ventured in various directions, there is a growing need for sensing health biomarkers correctly and developing ML approaches that are fast and can accurately identify these biomarkers.
Researchers have focused on developing novel sensing systems that can sense various health behaviors and biomarkers~\cite{holz2017glabella,pham2020wake,chun2020intraoral,bui2019ebp,li2020noninvasive,echterhoff2020personal,bedri2020fitbyte}. Historically, most of these novel sensing techniques were tested in controlled settings, but more recently researchers are ensuring that these systems can work seamlessly in free-living settings as well.
This often requires developing multiple ML models, each catering to a specific context and environment.
A newer trend in this field is to develop models that can run on-device and are both fast and accurate in detecting these behaviors.
In addition to providing real-time interventions~\cite{nahum2018just,thomas2015behavioral}, on-device monitoring of these behaviors can reduce privacy concerns~\cite{sadek2019privacy}.
However, since wearable devices themselves might not be capable of processing the data, federated machine learning approaches are also being explored recently by several researchers~\cite{rieke2020future}.
\subsection{Cosmology}
\subparagraph*{Context:} Cosmology is the study of the Universe's origin (big bang), evolution, and future (ultimate fate).
The large-scale dynamics of the Universe are governed by gravity, in which dark matter plays an important role, and by the accelerating expansion of the Universe itself, attributed to the so-called dark energy.
A non-exhaustive list of cosmological probes includes type Ia supernovae~\cite{Riess_1998, Perlmutter_1999, Betoule_2014, Scolnic_2018, Abbott_2019}, cosmic microwave background~\cite{Fixsen_1996, Spergel_2003, Komatsu_2011, PC_2016, PC_2020}, large-scale structures (including baryon acoustic oscillation)~\cite{Eisenstein_2005, Percival_2010, Delubac_2015, Abbott_2019b}, gravitational lensing~\cite{Bacon_2000, Bacon_2003, Collett_2014, Suyu_2017, Heymans_2020} and 21\unit{cm} cosmology~\cite{McQuinn_2007, Pritchard_2012, Maartens_2015, Beardsley_2016}.
\subparagraph*{Challenges:} As astronomy is approaching the big data era with next-generation facilities, such as the Nancy Grace Roman Space telescope, Vera C. Rubin Observatory, and Euclid telescope, the uncertainty budget in the estimation of cosmological parameters is no longer expected to be dominated by statistical uncertainties, but rather by systematic ones; understanding such uncertainties can lead to attaining sub-percent precision.
On the other hand, the immense stream of astronomical images will be impossible to analyze in a standard fashion (by human interaction); new automated methods are needed to extract valuable pieces of cosmological data.
\subparagraph*{Existing and Planned Work:} Current efforts are focused on applying ML techniques to study the influence of systematic biases on available analysis methods (e.g., for purposes of fitting or modeling) or on developing new methods to overcome present limitations; for example, CNNs can be adapted to spherical surfaces to generate more accurate models when producing weak lensing maps~\cite{Perraudin_2019}, or to remove noise from cosmic microwave background maps~\cite{Petroff_2020}.
In addition, discovery and classification engines are being developed to extract useful cosmological data from next-generation facilities~\cite{Narayan_2018, Mahabal_2019, Forster_2020, Moller_2020}.
Furthermore, ML is also being used in cosmological simulations to test new analyses and methods and to set the foundations for the first operation of such new facilities~\cite{Kamdar16, Rodriguez18, Villaescusa-Navarro20}.
An extensive list of published ML applications in cosmology can be found in \url{https://github.com/georgestein/ml-in-cosmology}.
\subsection{Plasma Physics}
\subparagraph*{Context:} This description focuses on the plasma physics/fusion energy science domain, with regard to the major system constraints encountered by existing and expected algorithms and data representations in the challenge of delivering accelerated progress in AI-enabled deep learning prediction and control of magnetically confined thermonuclear plasmas.
Associated techniques have enabled new avenues of data-driven discovery in the quest to deliver fusion energy---identified by the 2015 CNN ``Moonshots for the 21st Century'' televised series as one of 5 prominent grand challenges for the world today.
\subparagraph*{Challenges:} An especially time-urgent and challenging problem is the need to reliably predict and avoid large-scale major disruptions in ``tokamak systems'' such as the EUROFUSION Joint European Torus (JET) today and the burning plasma ITER device in the near future---a ground-breaking \$25B international burning plasma experiment with the potential capability to exceed ``breakeven'' fusion power by a factor of 10 or more, with ``first plasma'' targeted for 2026 in France. The associated requirement is real-time plasma forecasting, with control capabilities operative during the temporal evolution of the plasma state, well before the arrival of damaging disruptive events.
High-level supervisory control of many lower-level control loops via actuators (analogous to advanced robotics operations) will be essential for ITER and future burning plasmas to protect the facility and to avoid operational limits (for magnets, walls, plasma position, stability, etc.) while optimizing performance.
\subparagraph*{Existing and Planned Work:} In short, an overarching goal here involves developing realistic \textit{predictive plasma models of disruptions integrated with a modern plasma control system to deliver the capability to design experiments before they are performed}.
The associated novel AI-enabled integrated modeling tool would clearly be of great value for the most efficient and safe planning of the expensive discharges in ITER and future burning plasmas.
Verification, validation, and uncertainty quantification of associated components would include: (1) development of predictive neural net models of the plasma and actuators that can be extrapolated to burning plasma scales via advanced Bayesian reinforcement learning methods that incorporate prior information into efficient inference algorithms; (2) systematic well-diagnosed experimental validation studies of components in the integrated plasma forecasting models involving massive amounts of data from major tokamak experiments worldwide (e.g., DIII-D in the US, KSTAR \& EAST in Asia, JET in Europe, followed by JT60 SA---the large superconducting device in Japan that will precede ITER).
This would ideally lead to a mature AI-enabled comprehensive control system for ITER and future reactors that feature integration with full pilot-plant system models.
At present, a key challenge is to deliver significantly improved methods of prediction, with better than 95\% predictive accuracy, to provide advanced warning so that disruption avoidance/mitigation strategies can be applied before critical damage is done to ITER. Significant advances in this direction are illustrated by Princeton's deep learning code ``FRNN'', built on recurrent and convolutional neural networks, which has enabled the rapid analysis of large complex datasets on supercomputing systems.
Associated acceleration of progress in predicting tokamak disruptions with unprecedented accuracy and speed is described in~\cite{plasmaref}.
Included in this paper (and extensive references cited therein) are descriptions of FES data representation for physics features (density, temperature, current, radiation, fluctuations, etc.) and the nature of key plasma experiments featuring detectors/diagnostics with frame (event-based) level of accuracy accounting for required ``zero-D'' (scalar) and higher-dimension signals and real-time resolution recorded at manageable data rates.
Rough future estimates indicate that ITER will likely require dealing with the challenge of processing and interpreting exabytes of complex spatial and temporal data.
Since simulation is another vital aspect of ITER data analysis, dealing with the associated major computational expenses will demand the introduction of advanced compressional methods.
More generally, real-time predictions based on actual first-principles simulations are important for providing insights into instability properties and particle-phase space dynamics.
This motivates the development of an AI-based ``surrogate model''---for example, of the well-established HPC ``gyrokinetic'' particle-in-cell simulation code GTC~\cite{plasmaref2} that would be capable of accurately simulating plasma instabilities in real-time.
Data preparation and training of a surrogate model, e.g., ``SGTC'', provide a clear example of the modern task of integrating High Performance Computing (HPC) predictive simulations with AI-enabled deep learning/machine learning campaigns.
These considerations also serve to further illustrate/motivate the need to integrate HPC \& Big Data ML approaches to expedite the delivery of scientific discovery.
As a final note, the cited paper~\cite{plasmaref} represents the first adaptable predictive DL software trained on leadership class supercomputing systems to deliver accurate predictions for disruptions across different tokamak devices (DIII-D in the US and JET in the UK).
It features the unique statistical capability to carry out efficient ``transfer learning'' via training on a large database from one experiment (i.e., DIII-D) and be able to accurately predict disruption onset on an unseen device (i.e., JET).
In more recent advances, the FRNN inference engine has been deployed in a real-time plasma control system on the DIII-D tokamak facility in San Diego, CA.
This opens up exciting avenues for moving from passive disruption prediction to active real-time control, with subsequent optimization for reactor scenarios.
\subsection{ML for Wireless Networking and Edge Computing}
\subparagraph*{Context:} Wireless devices and services have become a crucial tool for collecting and relaying big data in many scientific studies. Moreover, mobility information has proven to be extremely useful in understanding human activities and their impact on the environment and public health. The exponential growth of data traffic is placing significant pressure on the wireless infrastructure. In particular, inter-cell interference causes large variability in reliability and latency. To meet user demands for data communication and value-added AI/ML services, wireless providers must 1) develop more intelligent learning algorithms for radio resource management that adapt to complicated and ever-changing traffic and interference conditions; and 2) realize many ML/AI computations and functionalities in edge devices to achieve lower latency and higher communication efficiency.
\subparagraph*{Challenges:} Conventional implementations of ML models, especially deep learning algorithms, are far too slow to keep up with packet-level dynamics. Moreover, existing ML/AI services are often performed in the cloud for efficiency, at the expense of communication overhead and higher latency. A major challenge in the wireless networking and edge computing context is to build a computing platform that can execute complex ML models at relevant timescales ($< 10$ ms) within small cell access points.
\subparagraph*{Existing and Planned Work:} Researchers have proposed a variety of learning algorithms to perform specific radio resource management tasks using artificial neural networks~\cite{
calabrese2018learning,
challita2018proactive,
huang2020deep,
zhu2020toward}.
Some of the first proposals to train a NN to perform transmit power control adopt supervised learning~\cite{sun2018learning,liang2019towards}.
More recent proposals adopt deep reinforcement learning approaches that work better with channel and network uncertainties and require little training data {\em a priori}~\cite{nasir2020deep,zhao2019deep,liang2019spectrum,meng2020power}.
A number of works are focused on the convergence of edge computing and deep learning~\cite{chen2019deep,zhang2019deep,wang2020convergence}.
A specific set of work is on federated learning where participants jointly train their models in lieu of sending all their data to a central controller for training purposes~\cite{niknam2020federated,amiri2020federated,chen2021convergence,ren2020scheduling}.
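As a minimal illustration of the aggregation step at the core of such federated schemes, a FedAvg-style weighted parameter average can be sketched as follows (schematic only; function and variable names are illustrative):
\begin{verbatim}
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One round of federated averaging: combine per-client model
    parameters, weighted by each client's local dataset size.
    client_weights: list of per-client lists of numpy arrays."""
    total = float(sum(client_sizes))
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]
\end{verbatim}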
All of the preceding work largely ends at the simulation stage, for lack of practical ML/AI solutions that are fast and computationally efficient enough to execute complex ML models at a very fast timescale ($<$ 10\unit{ms}) within small cell access points.
One project with a potentially very high impact is to map intelligent radio resource management algorithms (such as that of~\cite{nasir2020deep}) onto an FPGA device suitable for deployment in a large network of connected and interfering access points.
Another interesting project is to build a federated learning system to conduct time-sensitive ML for Internet-of-Things (IoT) devices where transferring data to centralized computing facilities is latency-prohibitive.
This opens up entirely new possibilities for low-cost closed-loop IoT devices in healthcare, smart buildings, agriculture, and transportation.
\subsection{Data representations}
Data representation used in a particular domain influences both the computation system and data storage. Across domains, one global classification of data representations is into raw versus reconstructed data.
The data representation often varies depending on the stage of the reconstruction and the upstream steps in the data processing pipeline.
Existing applications include fully connected NNs, which often take pre-processed expert feature variables as inputs, and CNNs, used when the data are image-like.
Ongoing development of domain-knowledge-inspired NN algorithms could further exploit expert features to improve accuracy and efficiency, as detailed below.
To fully exploit the power of advanced NNs and bring it closer to data creation for minimum information loss, a more suitable representation of the raw data, e.g., as point clouds, needs to be employed.
Prominent representations for raw data from different experimental and measurement systems are listed below; a schematic sketch of typical array layouts follows the list.
\begin{itemize}
\item \textbf{Spatial Data}: Used for describing physical objects in geometric space.
There are two main types, called vector and raster data.
Vector data, in turn, can be comprised of points, lines, or polygons. Raster data refers to a grid of pixels, such as images, but pixels can also represent other measurements such as intensity, charge, field strength, etc.
\item \textbf{Point Clouds}: Can be considered a type of spatial data.
This data representation is created by collating a set of spatial data, i.e., points in a 3D space, that usually form an object in space collectively.
\item \textbf{Temporal Data}: Used to represent the state of a system/experiment at a particular time.
Data collected across time, in a specific order, is classified in this manner.
Time-series data is a subset of this representation, where data is sampled at regular time intervals.
An example of time-series data can be seen in Fig.~\ref{fig:supernova}, for the specific case of supernova classification.
\item \textbf{Spatio-Temporal Data}: Measurements and observations of a system can be collected across both the space and time dimensions.
In that case, the data can be considered spatio-temporal.
\item \textbf{Multispectral Data}: Used to represent outputs of multiple sensors that capture measurements from multiple bands of the electromagnetic spectrum.
Multispectral representation is commonly used in the context of imaging, involving sensors that are sensitive to different wavelengths of light.
This usually involves on the order of a few to 10s of spectra.
\item \textbf{Hyperspectral Data}: Used to represent measurements from a high number of spectra, e.g., in the order of 100s.
These images collected from different narrow-band spectra are combined into a so-called hyperspectral cube with three main dimensions.
The first two reference the 2D spatial placement (e.g., earth's surface) while the third dimension represents the complete spectrum content at each ``pixel'' location.
\end{itemize}
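To make these representations concrete, the following sketch shows typical array layouts for each of them (shapes are arbitrary examples, not tied to any particular experiment):
\begin{verbatim}
import numpy as np

raster_image    = np.zeros((512, 512))       # spatial: grid of pixels
point_cloud     = np.zeros((10000, 3))       # N points in 3D space (x, y, z)
time_series     = np.zeros((4096, 8))        # T regular samples x 8 channels
spatio_temporal = np.zeros((100, 512, 512))  # T frames of 2D spatial data
multispectral   = np.zeros((512, 512, 10))   # ~10 spectral bands per pixel
hyperspectral   = np.zeros((512, 512, 200))  # 100s of narrow bands ("cube")
\end{verbatim}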
\begin{figure*}[tbh!]
\centering
\includegraphics[width = 0.55\textwidth]{figures/supernova_classification.png}
\caption{Simulated type Ia supernova light-curve and classification.
Top: calibrated flux evolution in different DES band-passes as a function of normalized time (the first photometric measurement is set to time equal to zero).
Bottom: baseline RNN classification probability evolution with respect to time; no host-galaxy redshift information was provided.
A classification probability is obtained at each photometric measurement. The maximum light of the simulated supernova is indicated by the gray dashed line, and its simulated redshift, $z = 0.466$, is shown at the top.
We highlight that redshift is not used for this classification but can improve results.
Our baseline RNN classifies this light-curve as a type Ia SN with high accuracy before maximum light, requiring only a handful of photometric epochs~\cite{supernova_2019}.}
\label{fig:supernova}
\end{figure*}
In Table~\ref{tab:representations}, we match these data representations to scientific application domains and give a brief description.
We highlight the data representations which are particularly important for a specific domain.
We will give more detailed examples below.
Cost of data communication (in terms of latency) and data storage (in terms of the cost of acquiring and managing the physical storage resources) present important challenges. Particularly, application domains, which require real-time analysis and/or real-time feedback demand highly optimized data analytics solutions.
Applications that rely on hyper-spectral data are faced with an ever-increasing rate of data input across the electromagnetic spectrum.
High-speed data reduction is required in these domains.
Applications that generate large-scale point clouds similarly demand efficient compression on their spatial data.
Application domains that handle multi-spectral data with limited spatial resolution require ultra-fast reconstruction in order to enable real-time control feedback. Another challenge is posed by applications that rely on accurate analysis of streaming time-series data, yet they are forced to perform under highly limited storage and communication resources, either due to privacy and security concerns or limitations of the associated edge devices.
Some current efforts in developing ML solutions to data processing front-ends focus on developing autoencoder based compression engines~\cite{ieee_nss_talk_1_2020, DiGuglielmo:2020eqx}.
ML-based dimensionality reduction for hyper-spectral data is another direction which has drawn attention~\cite{Agar2019}.
Deep learning-based approaches are investigated for image reconstruction; the field of material sciences is one of the most active in that regard~\cite{Schmidt_nature2019}.
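To make the autoencoder-based compression idea concrete, a minimal sketch is shown below (the dimensions and names are illustrative and do not correspond to any cited design):
\begin{verbatim}
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Sketch of an autoencoder front-end compressor: encode a block of
    sensor channels into a short latent code that is transmitted
    off-detector; the decoder is only needed during training."""
    def __init__(self, n_in=48, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 24), nn.ReLU(),
                                     nn.Linear(24, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 24), nn.ReLU(),
                                     nn.Linear(24, n_in))

    def forward(self, x):
        z = self.encoder(x)      # compressed representation (sent off-chip)
        return self.decoder(z)   # reconstruction (training objective)
\end{verbatim}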
\begin{table*}[]
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Domain & Spatial & Point Cloud & Temporal & Spatio- & Multi/Hyper- & Examples\\
& & & & Temporal & spectral & \\
\hline
LHC & \checkmark \checkmark & \checkmark \checkmark & \checkmark & \checkmark & -- & detector reconstruction\\
Belle-II/Mu2e & \checkmark \checkmark & \checkmark \checkmark & -- & -- & -- & track reconstruction\\
Material Synthesis & \checkmark & -- & \checkmark & \checkmark \checkmark & \checkmark \checkmark & high-speed plasma imaging \\
Accelerator Controls & \checkmark & -- & \checkmark \checkmark & -- & -- & beam sensors\\
Accelerator neutrino & \checkmark \checkmark & \checkmark \checkmark & \checkmark & \checkmark & -- & detector reconstruction \\
Direct detection DM & \checkmark \checkmark & \checkmark \checkmark & \checkmark & \checkmark & -- & energy signatures \\
EIC & \checkmark \checkmark & \checkmark \checkmark & \checkmark & \checkmark & -- & detector reconstruction\\
Gravitational Waves & \checkmark & -- & \checkmark \checkmark & -- & -- & laser inference patterns\\
Biomedical engineering & \checkmark \checkmark & -- & -- & \checkmark \checkmark & -- & cell and tissue images \\
Health Monitoring & \checkmark & -- & \checkmark \checkmark & \checkmark & \checkmark & physiological sensor data\\
Cosmology & \checkmark \checkmark & \checkmark \checkmark & \checkmark \checkmark & \checkmark & \checkmark \checkmark & lensing/radiation maps\\
Plasma Physics & \checkmark & -- & \checkmark \checkmark & \checkmark & -- & detector actuator signals\\
Wireless networking & -- & -- & \checkmark \checkmark & -- & -- & electromagnetic spectrum \\
\hline
\end{tabular}
\caption{Types of data representations and their relevance for the scientific domains discussed in this paper; \checkmark \checkmark = Particularly important for domain, \checkmark = Relevant for domain}
\label{tab:representations}
\normalsize
\end{table*}
\subsubsection{Expert Feature DNNs}
One straightforward approach to building powerful domain-specific ML algorithms is to start with expert domain features and combine them in a neural network or other multivariate analysis technique.
This embedded expertise has inherent advantages because the input features are interpretable, and correlations between features can yield insight into a particular task while optimizing performance.
Furthermore, depending on the computational complexity of the domain features, the computational efficiency of such a machine learning approach can be greater than that of approaches operating directly on raw features.
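As a minimal sketch (the input dimension and architecture are hypothetical), such an expert-feature model can be as simple as a small fully connected network over a vector of precomputed domain features:
\begin{verbatim}
import torch.nn as nn

# Small fully connected network over precomputed expert features
# (e.g., invariant masses, angles); the 10-feature input is illustrative.
expert_feature_net = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),  # binary signal/background score
)
\end{verbatim}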
However, the downside is that, by using expert features, we rely entirely on the informativeness of such new features.
Therefore, there is a lot of interest in automating the process of building informative new features from raw features.
In image classification tasks, for example, a lot of progress has been made in extracting high-level data representations through deep neural networks (DNNs)~\cite{Goodfellow_2016}.
In DNNs, layers of neurons above the original input signal are built to ensure that each new layer captures a more abstract representation of the data.
Each layer constructs new features by forming nonlinear combinations of the features in the layer below.
This hierarchical approach to feature construction has been effective in disentangling factors of variation in the data~\cite{Hinton_2006,Bengio_2013,Goodfellow_2016}, and has been useful to construct informative and meaningful representations.
In astronomical images, for example, a DNN starts with low-level pixel information, gradually capturing at upper layers edges, motifs, and eventually entire objects (e.g., galaxies) to provide a broad view of the Universe~\cite{Sanchez_2018,Huertas_Company_2018}.
The same applies to other fields of science.
For example, detecting particles in large accelerators requires transforming low-level signals into dynamic patterns that can be ascribed to specific particles~\cite{Belayneh_2020}.
In medical imaging, there is a need to quickly identify abnormal tissue from low-level pixel information by gradually capturing global tissue patterns~\cite{Bychkov_2018}.
The importance of transforming the initial input data into meaningful abstract representations cannot be overstated: it remains one of the most powerful properties of modern neural network architectures.
Several challenges exist in the construction of increasingly abstract representations using DNNs.
One challenge is to incorporate domain knowledge (e.g., physical constraints) into the neural network model.
This is important to address the need for excessive amounts of data when training a DNN and to narrow the gap in representational bias between the model and the target concept. Under scarce data but abundant domain expertise, adding domain knowledge can expedite the training process~\cite{Xie_2021} and improve the model's generalization performance.
Another challenge is to develop tools for model interpretability by explaining the semantics of the representations embedded at each layer~\cite{Chakraborty_2017}. This is challenging due to the distributed representation of information in the network architecture.
Despite the lack of a formal mechanism to attain a seamless integration between a statistical model and domain knowledge, current approaches point to interesting directions, e.g., using knowledge to add training data or to change the loss function~\cite{Vo_2017}.
Model interpretability in DNNs has seen an upsurge in research over the past years~\cite{Chakraborty_2017}.
Commonly, studies look at individual units and their activation patterns to elucidate what is learned across layers of neurons.
\subsubsection{Frame-based images}
Frame-based images are a suitable representation of the experimental data in multiple domains such as neutrino detection with time projection chambers in particle physics.
An example of this data representation can be seen in Fig.~\ref{fig:repframe} for an electron deposition in the ProtoDUNE neutrino detector.
A spatial frame is shown by plotting the time coordinate ``Tick'' and wire position in space.
Recent developments in neural network architectures exploit the sparsity of the images to reduce the computation complexity for real-time/accelerated ML applications.
Other types of experimental data in HEP and many other domains can also be processed to be represented as frame-based images, although often not without information loss.
\begin{figure*}[tbh!]
\centering
\includegraphics[width = 0.85\textwidth]{figures/R5770_E59001_T1T5T9_w0_480_t3750_5250_sc15.pdf}
\caption{A 6\unit{GeV/c} electron event in the ProtoDUNE detector.
The x-axis shows the wire number.
The y-axis shows the time tick in units of 0.5~$\mu$s.
The color scale represents the charge deposition.}
\label{fig:repframe}
\end{figure*}
\subsubsection{Point clouds}
Point cloud data representation is often used in HEP, where multiple frames of event-based measurements collected by a large number of detectors are combined into a data set.
Across many HEP applications, point clouds commonly represent particle jets, with data rates exceeding Pb/s.
More broadly, point clouds can be used to capture any 3D space event and interactions of moving parts in space.
A point cloud visualization of the CMS detector at the LHC is shown in Fig.~\ref{fig:repcloud}.
Remnants of proton-proton collisions create sensor signals in a customized and optimized detector geometry, illustrated as points in space. Various types of scan-based imaging data can be represented as point clouds.
Other domains such as CT and PET scanning in biomedical engineering and virtual reality also utilize this representation for imaging. 3D scanners used for product design, solid object modeling, architecture, and infrastructure design leverage point clouds as well. Many of these imaging tasks generate point clouds of sizes in the order of several GB to TB.
Domains sharing point cloud representation (e.g., HEP and biomedical imaging) also commonly involve spatial characteristics.
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.85\textwidth]{figures/kaggle-tracking.pdf}
\caption{Visualization of particle tracking hits in 3D space from the TrackML Kaggle dataset~\cite{cmseventdisplay}}
\label{fig:repcloud}
\end{figure*}
\subsubsection{Multi-/Hyperspectral Data}
Multispectral data is common between wireless health monitoring and wireless communication systems.
A set of physiological sensors, often representing different modalities, are combined into a multispectral data set for health monitoring and intervention systems.
For wireless communication, signal interference and network traffic conditions are captured via multispectral data.
Both domains capture these data across the time domain, so they also exhibit temporal features.
Furthermore, in both domains generated data size can be considered relatively smaller (ranging from 100s of Mb/s to 10s of Gb/s), compared to the rest of the domains discussed in this article.
Hyperspectral data is used across many astronomy applications, medical imaging, and electron microscopy, which is used to drive many materials science design and discovery applications.
An example of hyperspectral data in electron microscopy is shown in Fig.~\ref{fig:rephyper}.
An electron probe is rastered over a sample under study and diffraction patterns are captured on a pixelated detector.
The pixelated detector captures many images as the electron probe is scanned across the sample.
Emerging multimessenger astronomy applications further emphasize the utility of hyperspectral data representations combining observations from a wide array of detectors and telescopes.
\begin{figure*}[tbh!]
\centering
\includegraphics[width = 0.55\textwidth]{figures/TEM.pdf}
\caption{Experimental 4D-STEM measurement of a dichalcogenide 2D material.
The atomic map is inferred from the data; each diffraction pattern represents an average of $7\times 7$ experimental images; green STEM probes are labeled for regions of the sample with one layer, vacuum, and two layers~\cite{ophus_2019}.}
\label{fig:rephyper}
\end{figure*}
\subsubsection{Time-series data}
Time-series data is common in experiments that observe dynamically evolving systems in processes such as synthesis for material discoveries or the temporal evolution of the plasma state in nuclear fusion experiments.
It can consist of high-speed temporally resolved imaging measurements in materials science, or of physics features (density, temperature, current, radiation, fluctuations, etc.) and spatial features of the evolving plasma state as a function of time.
In-situ diagnostics on the time-series data can provide early alerts to terminate an experiment whose outcome appears undesirable, without running the entire experiment and the time-consuming, computationally expensive offline analysis; this improves experimental operation efficiency and accelerates the discovery of materials with desired properties.
This is illustrated in Fig.~\ref{fig:reptime} for accelerator controls at the Fermilab Booster accelerator.
In this application, magnet voltages that steer proton beams around a synchrotron are sampled at 15\unit{Hz}.
This study builds a digital twin which is used to simulate the Booster data.
Furthermore, to reliably predict and avoid large-scale major disruptions in nuclear fusion experiments, real-time analysis of the time-series data is crucial in guiding the action needed in experimental prediction and control.
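A minimal sketch of such a recurrent surrogate model is shown below (illustrative only; this is neither the Booster surrogate nor the FRNN architecture, and the dimensions are arbitrary):
\begin{verbatim}
import torch.nn as nn

class SurrogateLSTM(nn.Module):
    """Sketch of an LSTM surrogate ("digital twin"): predict the next
    sample of a multichannel control/diagnostic signal from a window
    of its history."""
    def __init__(self, n_channels=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_channels)

    def forward(self, x):              # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # prediction for the next time step
\end{verbatim}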
\begin{figure*}[tbh!]
\centering
\includegraphics[width = 0.85\textwidth]{figures/Booster.pdf}
\caption{Selected test data (blue) versus prediction values (orange) from the Booster LSTM surrogate model for the Booster proton synchrotron complex~\cite{John:2020sak}.}
\label{fig:reptime}
\end{figure*}
\subsection{System constraints}
In this section, we present an overview of desired system properties and constraints that are prevalent across a number of application domains.
Unique challenges arise in each scientific application from the sensing technology, the physical processes involved, and the timescales, data rates, and bandwidths required.
These system constraints result in specific choices of data processing platforms, often with multiple compute architectures across the data continuum, such as the choice of FPGA-based systems versus embedded processors, GPUs, or custom ASICs.
Table~\ref{applications} summarizes several scientific application domains along with their event rates, system latency constraints and performance requirements, and deployment characteristics.
We broadly classify platforms for integrating fast machine learning techniques into ``soft'' (software-programmable coprocessors) and ``custom'' (custom embedded computing devices).
Software-programmable systems are often preferred because they are less complex to implement while custom embedded solutions are required when software programmable systems cannot satisfy experimental throughput, bandwidth, or latency constraints.
We will describe in further detail this distinction below.
Examples of these system design choices include the HEP trigger systems (LHC reconstruction of collision events, the Belle-II experiment, and the Mu2e experiment), which deploy custom embedded systems.
Meanwhile, experiments like the Electron-Ion Collider have data rates that may not require custom hardware solutions and could deploy only software-programmable solutions for event reconstruction and real-time processing.
One final distinction worth discussing concerns the nature of real-time processing and the in-situ versus post-mortem nature of the inference and analysis tasks.
Examples that we consider in classifying tasks that have different requirements are: data reduction which primarily focuses on limiting data collection rates of experiments for offline analysis; real-time processing and data analysis which is required to extract real-time domain features of the data for tasks like filtering/triggering; and closed-loop controls where data processing provides direct feedback to the operation and continuous control of an experiment.
These distinctions and their consequences for the computing systems are illustrated in Table~\ref{timing}.
\begin{table*}
\centering
\caption{\label{applications}
Domains and practical constraints: systems are broadly classified as soft (software-programmable computing devices: CPUs, GPUs, and TPUs) and custom (custom embedded computing devices: FPGAs and ASICs)}
\small
\begin{tabular}{ c c c c c}
\hline
Domain & Event Rate & Latency & Systems & Energy-constrained\\
\hline
\hline
\textbf{Detection and Event Reconstruction} & & & & \textbf{No} \\
LHC \& intensity frontier HEP & 10s MHz & ns--ms & Soft/custom & \\
Nuclear physics & 10s kHz & ms & Soft & \\
Dark matter \& neutrino physics & 10s MHz & $\mu$s & Soft/custom & \\
\hline
\textbf{Image Processing} & & & & \\
Material synthesis & 10s kHz & ms & Soft/custom & \\
Scanning probe microscopy & kHz & ms & Soft/custom & \\
Electron microscopy & MHz & $\mu$s & Soft/custom & \\
Biomedical engineering & kHz & ms & Soft/custom & Yes (mobile settings)\\
Cosmology & Hz & s & Soft & \\
Astrophysics & kHz--MHz & ms--$\mu$s & Soft & Yes (remote locations) \\
\hline
\textbf{Signal Processing} & & & & \\
Gravitational waves & kHz & ms & Soft & \\
Health monitoring & kHz & ms & Custom & Yes \\
Communications & kHz & ms & Soft & Yes (mobile settings)\\
\hline
\textbf{Control Systems} & & & & \\
Accelerator controls & kHz & ms--$\mu$s & Soft/custom & \\
Plasma physics & kHz & ms & Soft &
\end{tabular}
\normalsize
\end{table*}
\begin{table}
\centering
\caption{\label{timing} Classification of domains and their system requirements with respect to real-time needs.}
\small
\begin{tabular}{ c c c c}
\hline
Domain & Real-time data reduction & Real-time analysis & Closed-loop Control \\
\hline
\hline
\textbf{Detection/Event Reconstruction} & & & \\
LHC & Yes & Yes & No \\
Nuclear Physics & Yes & No & No \\
Dark Matter - Neutrino & Yes & No & No \\
\hline
\textbf{Image Processing} & & & \\
Material Synthesis & Yes & Yes & Yes \\
Scanning Probe Microscopy & Yes & & \\
Electron Microscopy & Yes & & \\
Biomedical Engineering & Yes & & \\
Cosmology & Yes & No & No \\
Astrophysics & Yes & No & No \\
\hline
\textbf{Signal Processing} & & & \\
Gravitational Waves & Yes & No & No \\
Health Monitoring & Yes & Yes & Yes \\
Communications & Yes & Yes & Yes \\
\hline
\textbf{Control Systems} & & & \\
Accelerator Controls & Yes & Yes & Yes \\
Plasma Physics & Yes & Yes & Yes
\end{tabular}
\normalsize
\end{table}
\subsubsection{Software programmable coprocessors}
Historically, the first attempts at addressing the computational needs of the problems reviewed in this article have been through software-programmable systems. CPU-based local clusters or cloud services as well as cloud computing resources utilizing GPU or TPU-based hardware accelerators are utilized in different applications. One particular concept explored by the HEP community is the GPU as a Service (GPUaaS) model~\cite{krupa2020gpu}.
This can further be expanded into the Machine Learning as a Service concept, similarly explored within HEP~\cite{kuznetsov2020mlaas4hep}.
These paradigms involve the implementation of machine learning modules to solve a set of physics problems, which are then transferred to GPU or TPU accelerators and accessed by the local CPU ``client'' of the native experimental system.
One of the major system constraints is the computational capacity, which can be defined in terms of a number of floating point operations as far as neural network implementations are concerned.
Real-time machine learning methods require an ever-increasing rate of computational capacity as it directly impacts the \textit{latency per task}.
The \textit{task} could be a trigger for LHC, reconstruction of an event in accelerator experiments or astrophysics, material synthesis, reconstruction of an image captured by an electron microscope, etc.
Extreme parallelism would be desired to provide the highest capacity possible to minimize latency and maximize throughput.
In a processor-based system, this can be addressed by increasing the size of the compute cluster.
Naturally, facility costs impose a limit on the scale of these clusters.
Another constraint is the available amount of storage coupled with the cost of data movement across the memory hierarchy.
In the majority of the use cases, the latency involved with moving data from the front-end (detectors, microscopes, sensors, etc.) dominates the total latency.
One of the prominent performance constraints is related to the utilization, and subsequent latency, of the network that links the front-end with the back-end. Current limitations on the speed of data movement render CPU/GPU cluster-based systems unable to meet the real-time requirements.
\subsubsection{Custom embedded computing devices}
As the latency and throughput constraints are coupled with challenging practical energy constraints, efforts have been directed towards specialized computing systems to address the hard real-time needs.
An increasingly attractive paradigm is to design components that are finely optimized for specific steps in the data capture workflow.
These components can be mapped onto FPGA devices or they can be designed and manufactured as an application-specific integrated circuit (ASIC).
In the LHC and accelerator domains, there is a rich set of FPGA-based demonstrations of front-end data processing systems, which meet microsecond latencies.
These systems are in charge of tasks such as triggering, event reconstruction, and anomaly detection. Direct and naive implementations of neural networks to perform inference for these tasks can fail to meet the latency requirements since they often incur significant resource utilization.
The highest achievable FPGA clock frequency and inference latency is correlated with the resource utilization and percentage occupancy of the device.
Co-design techniques developed for these applications particularly specialize in extreme quantization and pruning (with an awareness of accuracy) so that resource requirements can be controlled aggressively to ensure inference latency targets.
These optimizations push the resource usage envelope as far down as 10s of percent of the FPGA device in order to meet the system constraints and yet demonstrate implementations with high inference accuracy.
Some other applications (e.g., accelerator controls, biomedical and health applications) impose less stringent latency expectations, in the order of ms, where the urgency for resource minimization is alleviated.
Hence, the focus of the system design can shift from extreme resource economy to enhanced sophistication in the algorithms that are being mapped to the device.
Inference models can now include deep(er) learning models coupled with advanced video and signal processing engines, as well as local privacy-preserving processing tasks (applicable particularly to mobile health and networking and communication applications).
For mobile and IoT-based deployment of the edge devices, resource efficiency emerges as an important factor as it impacts energy consumption.
However, in these applications, energy efficiency can also be achieved by alternative means.
One option would be selective powering, i.e., creating a resource-rich full-featured baseline implementation, which still comfortably meets latency constraints if energy was not an issue, and introducing power gating or standby features to modulate energy consumption during periods of low/no activity.
There are system constraints, which point the designers to a custom ASIC solution in addition to or in place of FPGA devices. ASICs can address extreme form factor considerations, integration of computation with sensing (e.g., smart photon detectors) into compact front-end devices, tight integration with other mixed-signal or analog functionalities, radiation hardening requirements, and ultra-low energy budgets.
\subsection{Systematic Methods for the Efficient Deployment of ML Models}
\label{sec:deploy}
As discussed in Section~\ref{sec:apps}, many ML problems in science
require low latency, often with
constrained resources. However, most of the current state-of-the-art NN models have prohibitively high latency with a large
memory footprint and energy consumption.
For this reason, practitioners have been forced to use sub-optimal models (e.g., shallow NNs) with non-ideal accuracy to avoid this latency problem.
There is a large body of literature that has
focused on solving this problem by making NN models more efficient
(in terms of latency, memory footprint, and energy consumption). These efforts could be broadly categorized as follows:
(i) Designing new efficient NN architectures;
(ii) NN and hardware co-design;
(iii) Quantization (low precision inference);
(iv) Pruning and sparse inference; and
(v) Knowledge distillation.
Here we briefly discuss each of these approaches.
\paragraph*{Designing new efficient NN architectures}
One line of research has been focused on finding new NN models
that are efficient by design. A notable early work is SqueezeNet~\cite{iandola2016squeezenet},
a new NN model without any expensive Fully Connected layers,
along with a new lightweight \emph{Fire module}, that resulted
in a $50\times$ smaller model as compared to AlexNet, but with the
same accuracy. Later on, several new innovations were made in
efficient NN architecture design.
One focus has been to find efficient layers/operators.
Notable works are
group convolutions~\cite{ioannou2017deep},
depthwise convolutions~\cite{howard2017mobilenets},
spatial separable convolutions~\cite{mamalet2012simplifying},
shuffle layers~\cite{ma2018shufflenet},
and shift convolutions~\cite{wu2018shift}, to name a few.
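For example, a depthwise separable convolution, one of the operators listed above, factorizes a dense convolution into a per-channel spatial filter followed by a $1\times 1$ pointwise mix. A schematic PyTorch sketch:
\begin{verbatim}
import torch.nn as nn

def depthwise_separable(c_in, c_out):
    """Depthwise separable convolution (as popularized by MobileNets):
    a per-channel 3x3 spatial filter followed by a 1x1 pointwise mix,
    with far fewer multiply-accumulates than a dense 3x3 convolution."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in),  # depthwise
        nn.Conv2d(c_in, c_out, 1),                         # pointwise
    )
\end{verbatim}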
Another focus has been to find similar substitutes to \emph{Fire module} that are more efficient and result in better
accuracy/generalization. Notable works include
residual networks~\cite{he2016deep} (originally designed
to solve issues with vanishing gradients, but these structures are generally more efficient
than non-residual architectures), densely connected networks~\cite{huang2017densely},
squeeze-and-excite modules~\cite{hu2018squeeze},
and inverted residual blocks~\cite{sandler2018mobilenetv2}.
These classical techniques mostly found new architecture modules
through a manual design search.
This is not scalable, and as such recent approaches have proposed automated methods that use neural architecture search (NAS).
NAS methods automatically find the right NN architecture for a given constraint
of model size, depth/width, and/or latency.
The high-level approach here is to train a probabilistic \emph{SuperNet}
that includes all possible combinations of NN architectures within the prescribed
constraints, but with learnable probabilities.
After this SuperNet is trained, one can sample an architecture from its learned probability distribution.
Notable works include RL based methods~\cite{zoph2016neural}, efficient NAS~\cite{pham2018efficient}, MNasNet~\cite{tan2019mnasnet}, DARTS~\cite{liu2018darts}, and Differentiable NAS~\cite{wu2019fbnet}.
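The differentiable variant of this idea can be sketched as a ``mixed operation'' whose output is a softmax-weighted sum of candidate operators with learnable logits, in the style of DARTS (schematic only; the candidate set and names are illustrative):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One edge of a differentiable SuperNet: a weighted sum of
    candidate operations with learnable architecture logits."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),                       # "skip" candidate
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)   # relaxed architecture choice
        return sum(w * op(x) for w, op in zip(weights, self.ops))

    def sample(self):
        """After training, keep only the most probable operation."""
        return self.ops[int(self.alpha.argmax())]
\end{verbatim}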
\paragraph*{NN and hardware co-design}
Another promising line of work has been to tailor the NN architecture for a specific hardware
platform, and/or co-design them together. This is quite promising for
configurable hardware such as FPGAs. The importance of hardware-aware NN design is that
the cost of performing different types of operations varies for different hardware.
For example, hardware that has a dedicated cache hierarchy can execute bandwidth
bound operations much more efficiently than hardware without a cache hierarchy.
Notable works in this area include SqueezeNext~\cite{gholami2018squeezenext}, where both the NN and the hardware
accelerator were co-designed with a manual tuning approach.
More recent works have proposed to automate hardware-aware design
through NAS.
Notable works include ProxylessNAS~\cite{cai2018proxylessnas}, OnceForAll~\cite{cai2019once}, FBNet~\cite{wu2019fbnet},
and MobileNetV3~\cite{howard2019searching}.
\paragraph*{Quantization (low precision inference)}
A common solution is to compress NN models with quantization~\cite{asanovic1991experimental, hubara2016binarized, rastegari2016xnor, zhou2017incremental, zhou2016dorefa, cai2017deep, jacob2018quantization, zhang2018lq, choi2018pact, wang2019haq, dong2019hawq, chin2020one, cai2020zeroq, gholami2021survey},
where low bit-precision is used for weights/activations.
A notable work here is Deep Compression~\cite{han2015deep}, which used
quantization to compress the model footprint of the
SqueezeNet model discussed above, making it $500\times$ smaller than AlexNet.
In quantization, the model size is reduced without changing the original network architecture,
and it could potentially permit the use of low-precision matrix multiplication or convolution. Therefore, both the memory footprint
and the latency could be improved.
The quantization methods can be broadly classified into two categories of \emph{Post-Training Quantization} (PTQ),
and \emph{Quantization-Aware Training} (QAT). In PTQ, a pre-trained model in single precision is quantized to low precision
without any fine-tuning or re-training~\cite{banner2018post,meller2019same,choukroun2019low,zhao2019improving,fang2020post,fang2020near,lee2018quantization,nagel2019data,cai2020zeroq,hawks2021ps}. As such, these quantization methods are typically very fast, and, in some cases, do not even require any training data~\cite{cai2020zeroq, haroush2020knowledge,nagel2019data}.
However, PTQ often leads to high accuracy degradation, especially for low precision quantization.
To address this, some quantization methods adopt QAT to re-train the model after the quantization, so that the parameters can
get adjusted.
This approach often results in higher accuracy, but at the cost of longer
time associated with re-training the model~\cite{courbariaux2015binaryconnect,lin2015neural,hubara2016binarized,rastegari2016xnor,zhou2016dorefa,zhu2016trained,cai2017deep,hou2016loss,gysel2018ristretto,huang2021codenet,zhou2018explicit}.
Another differentiator is the use of \emph{simulated quantization} (aka fake quantization), versus \emph{integer-only} quantization~\cite{jacob2018quantization,yao2020hawqv3,kim2021bert,lin2016fixed}. In the former,
the weights/activations are stored in low precision, but they are cast to higher precision during
inference. In the latter, there is no casting involved, and the multiplication and accumulation also happen
in low precision. Using integer-only quantization has the advantage that one can speed up
inference by using low-precision logic for multiplication and addition, besides reducing the memory footprint of the model.
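The basic uniform quantizer underlying both PTQ and QAT can be sketched as follows (a schematic per-tensor, symmetric scheme; real toolchains add per-channel scales, zero points, and calibration):
\begin{verbatim}
import numpy as np

def quantize_symmetric(w, n_bits=8):
    """Uniform symmetric quantization of a weight tensor: map floats
    to signed integers with a single per-tensor scale (the int8
    storage below assumes n_bits <= 8)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = max(np.abs(w).max(), 1e-12) / qmax   # per-tensor scale
    w_int = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return w_int, scale          # integer weights + dequantization scale

def dequantize(w_int, scale):
    """Simulated (fake) quantization: cast back to float for
    inference, as opposed to integer-only inference."""
    return w_int.astype(np.float32) * scale
\end{verbatim}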
Another distinction is \emph{hardware-aware quantization}.
Similar to NN architecture design, quantization can also be tailored for specific hardware platforms.
This becomes important for mixed-precision quantization~\cite{zhou2018adaptive, wang2018haq, wu2018mixed, hawq, shen2020q, hawqv2, dong2021hao, yao2020hawqv3}.
The reason is that certain operations in the NN model may benefit more from low precision
quantization than others, based on whether they are bandwidth bound or compute-bound.
As such, as schematically
illustrated in Figure~\ref{fig:pruning_quantization},
one must determine the best precision setting based on the tradeoff between the potential footprint/latency gain and the sensitivity to accuracy degradation.
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{figures/pruning_quantization_illustration_nologo.png}
\caption{The illustration of hardware-aware quantization and pruning.
A given NN model can be compressed by using low precision quantization instead
of single precision. The extreme case is to use 0-bit quantization which is
equivalent to removing/pruning the corresponding neurons.
The goal of compression is to find the best bit-precision setting for
quantization/pruning to reduce model footprint/latency on a target hardware
with minimal generalization loss.
}
\label{fig:pruning_quantization}
\end{figure*}
\paragraph*{Pruning and sparse inference}
Another approach to reducing the memory footprint and computational cost of NNs is to apply
pruning, which can be thought of as quantization to 0-bits. In pruning, neurons with small \emph{saliency} (sensitivity) are removed, which results in a sparse computational graph~\cite{lecun1990optimal}.
Here, neurons with small saliency are those whose removal should minimally affect the model output/loss function.
Pruning methods can be broadly categorized into unstructured pruning~\cite{lecun1990optimal,hassibi1993second,dong2017learning, lee2018snip, xiao2019autoprune, park2020lookahead}, and structured pruning~\cite{luo2017thinet, he2018amc, yu2018nisp, lin2018accelerating, huang2018data, zhao2019variational}.
Unstructured pruning removes individual parameters without imposing any structure on the resulting sparsity pattern.
With this approach, one can remove most of the NN parameters with little impact on the
generalization performance of the model.
However, this approach leads to sparse matrix operations which are hard to accelerate
and are typically memory-bounded~\cite{buluc2008challenges,gale2019state,hoefler2021sparsity,blalock2020state}.
This can be addressed with structured pruning, where a
group of parameters (e.g., an output channel) is removed. However, the challenge here is that high degrees of structured
pruning often lead to significant accuracy degradation.
In both approaches, the key question is to find which parameters to prune.
A simple and popular approach is magnitude-based pruning~\cite{chauvin1989back,hanson1988comparing,mozer1988skeletonization,li2016pruning,he2017channel,liu2017learning,he2019filter,lin2020hrank}.
In this approach, the magnitude of parameters is used as the pruning metric.
The assumption here is that small parameters are not important and can be removed.
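As a minimal sketch of unstructured magnitude pruning (the function name and the global-threshold choice are illustrative; many variants prune per layer or iteratively):
\begin{verbatim}
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a binary mask zeroing the `sparsity` fraction of entries
    with the smallest magnitude."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

w = torch.randn(64, 64)
mask = magnitude_prune(w, sparsity=0.9)
w_pruned = w * mask   # in training, the mask is re-applied after each update
\end{verbatim}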
An important problem with magnitude-based pruning methods is that parameters with small magnitudes can actually be quite sensitive.
It is easy to see this through a second-order Taylor series expansion, where the change in loss depends not just on the weight magnitude but
also on the Hessian~\cite{lecun1990optimal}. As such, there are several works that use
second-order based pruning~\cite{lecun1990optimal,hassibi1993optimal,hassibi1993second,wang2019eigendamage,yu2021hessian}.
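Concretely, in the spirit of Optimal Brain Damage~\cite{lecun1990optimal}, the change in loss caused by a weight perturbation $\delta w$ is, to second order,
\[
\delta\mathcal{L} \;\approx\; g^{\top}\delta w \;+\; \tfrac{1}{2}\,\delta w^{\top} H\,\delta w ,
\]
where $g$ is the gradient and $H$ the Hessian of the loss. At a converged (local) minimum $g\approx 0$, so removing weight $w_i$ (i.e., setting $\delta w = -w_i e_i$, with $e_i$ the $i$-th standard basis vector) costs approximately $s_i = \tfrac{1}{2} H_{ii} w_i^2$ when cross terms are neglected. A small weight paired with a large $H_{ii}$ can therefore matter more than a large weight paired with a small one.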
Finally, we should mention that it is possible to
combine pruning and quantization to compress the NN model further. The recent work
of~\cite{hawks2021ps} proposes a quantization-aware pruning
method and applies it to high energy physics problems; it reports
better results than pruning or quantization alone.
\paragraph*{Knowledge distillation}
Model distillation~\cite{romero2014fitnets, hinton2015distilling, mishra2017apprentice, li2017learning, yim2017gift, polino2018model, ahn2019variational, yin2020dreaming} trains a large model and then uses it as a teacher to train a compact model. Instead of using only class labels during the training of the student model, the key idea of model distillation is to leverage the soft probabilities produced by the teacher,
which can guide the student's training.
Previous methods of knowledge distillation focus on exploring different knowledge sources. Refs.~\cite{hinton2015distilling, li2017learning, park2019relational} use logits (the soft probabilities) as the source of knowledge, while Refs.~\cite{romero2014fitnets, yim2017gift, ahn2019variational} try to leverage the knowledge from intermediate layers. The choices of teacher models are also well studied, where Refs.~\cite{you2017learning, tarvainen2017mean} use multiple teacher models to jointly supervise the student model, while Refs.~\cite{crowley2018moonshine, zhang2019your} apply self-distillation without an extra teacher model. Other previous efforts apply knowledge distillation with different settings on different applications. Refs.~\cite{lopes2017data, nayak2019zero, yin2020dreaming} study data-free knowledge distillation, and Refs.~\cite{wang2018kdgan, wang2020minegan} combine knowledge distillation with GANs.
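For reference, in the classic formulation of~\cite{hinton2015distilling} the student minimizes a weighted combination of the usual cross-entropy with the hard labels and a temperature-softened match to the teacher:
\[
\mathcal{L}_{\mathrm{KD}} \;=\; (1-\alpha)\,\mathcal{L}_{\mathrm{CE}}\big(y,\sigma(z_s)\big) \;+\; \alpha\, T^{2}\,\mathrm{KL}\big(\sigma(z_t/T)\,\big\|\,\sigma(z_s/T)\big),
\]
where $z_s$ and $z_t$ are the student and teacher logits, $\sigma$ is the softmax, $T$ is the temperature, and $\alpha$ balances the two terms; the $T^{2}$ factor keeps the gradient magnitudes of the two terms comparable.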
A major challenge of knowledge distillation methods is to achieve a high compression ratio.
Compared to quantization and pruning, which can usually maintain accuracy at $4\times$ compression, knowledge distillation methods tend to exhibit non-negligible accuracy degradation at those compression levels.
But these two approaches are orthogonal, and recent works have
shown that their combination can result in high accuracy/compression~\cite{polino2018model,mishra2017apprentice,yao2020hawqv3,mao2020ladabert}.
It should be mentioned that current distillation methods are mostly applied to classical ML problems, and few works have looked into their application in Science AI problems.
\subsection{Systematic Neural Network Design and Training}
\label{sec:train}
There is currently no analytical approach to find the right NN architecture for
a given task and training dataset.
Originally, designing the NN architecture was mostly a manual task with
intuitions that were often ad-hoc. However, in recent years there has been
a lot of innovations in automating the NN architecture design process, which is
referred to as Neural Architecture Search~\cite{zoph2016neural,pham2018efficient,tan2019mnasnet,liu2018darts,wu2019fbnet,cai2018proxylessnas,cai2019once}.
NAS could be viewed as a hyperparameter tuning problem, where the hyperparameters are
the design choices for a NN architecture. This could include
width, depth, types of operations, etc. The main challenge
is that the search space for the operation types scales exponentially with the number
of layers.
As such, one has to still include some high-level intuition about the NN architecture
to limit the search space.
After limiting the search space, the general NAS process is as follows:
A candidate architecture is sampled from the set of all
possible architectures and is then trained for a number of epochs on the training dataset.
The resulting accuracy is then used
as the reward to evaluate how good that candidate architecture is, and based
on this reward the probability distribution from which architectures are sampled is updated.
This process needs to be repeated for many different candidate architectures (sometimes exceeding
hundreds of thousands).
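The following is a deliberately minimal, runnable toy version of this loop, assuming a softmax ``controller'' over a few candidate hidden widths and a short proxy training as the reward; real NAS systems use far richer search spaces and update rules (e.g., policy gradients or differentiable relaxations), and every name below is an illustrative placeholder.
\begin{verbatim}
import numpy as np
import torch
import torch.nn as nn

# Toy NAS: a softmax "controller" over candidate hidden widths.
widths = [4, 16, 64]
logits = np.zeros(len(widths))              # controller's sampling logits
X = torch.randn(256, 8)
y = (X[:, 0] > 0).long()                    # a learnable synthetic task

def evaluate(width):
    """Proxy training for a few steps; returns validation accuracy."""
    model = nn.Sequential(nn.Linear(8, width), nn.ReLU(), nn.Linear(width, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(30):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(X[:200]), y[:200])
        loss.backward()
        opt.step()
    return (model(X[200:]).argmax(1) == y[200:]).float().mean().item()

for step in range(20):
    p = np.exp(logits) / np.exp(logits).sum()
    i = np.random.choice(len(widths), p=p)  # sample a candidate architecture
    reward = evaluate(widths[i])
    logits[i] += 0.5 * (reward - 0.5)       # reinforce architectures that do well

print("selected width:", widths[int(np.argmax(logits))])
\end{verbatim}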
Inherently, this leads to another problem related to tuning the optimization hyper-parameters
for each candidate architecture.
For example, if a good architecture is sampled from the NAS but
is trained with sub-optimal hyperparameters, then the error will be high and the NAS algorithm will reduce the likelihood
of sampling that architecture again, which is not the desired behavior.
As a result of these compounding costs, \emph{scalability} has become an integral concern for any such procedure in the presence of ``big data.''
One main class of procedures for which scalability has become indispensable is
numerical optimization algorithms, which are at the core of training methods.
There is a large body of literature on designing
efficient numerical optimization/training methods~\cite{reddi2018adaptive,shazeer2018adafactor,zhang2019lookahead,park2020lookahead,zhuang2020adabelief,liu2020variance,ginsburg2020stochastic,yao2020adahessian,ma2020apollo,gupta2018shampoo} as well as efficient NAS algorithms to
search for the right NN architecture~\cite{zoph2016neural,pham2018efficient,tan2019mnasnet,liu2018darts,wu2019fbnet}.
For the optimization, the goal is to design new methods that require fewer iterations to converge and are
more robust to hyper-parameter tuning.
One notable advancement here is the ability to apply second-order methods
without the need for forming the second-order operator~\cite{yao2020adahessian,yao2019pyhessian,gupta2018shampoo,reddi2018adaptive}.
It has been shown that the performance and robustness of these methods exceed those of
first-order optimization methods on classical ML problems (e.g., in computer vision or natural language processing).
Interestingly, some recent results for Physics Informed Neural Networks (PINNs)~\cite{raissi2019physics} have
found that first-order methods perform significantly worse than (quasi-)second-order methods.
This could potentially provide opportunities to adapt or redesign some of the second-order algorithms for Science problems.
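The key primitive behind such matrix-free second-order methods is the Hessian-vector product, which double backpropagation delivers at roughly the cost of two gradient evaluations, without ever materializing the Hessian. A minimal PyTorch sketch (the model and loss are illustrative placeholders):
\begin{verbatim}
import torch

model = torch.nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)

params = list(model.parameters())
grads = torch.autograd.grad(loss, params, create_graph=True)

# Hessian-vector product H v (Pearlmutter's trick): differentiate g^T v.
v = [torch.randn_like(p) for p in params]
gv = sum((g * vi).sum() for g, vi in zip(grads, v))
hvp = torch.autograd.grad(gv, params)   # equals H v; H is never formed
\end{verbatim}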
For the NAS algorithms, the goal is similar, which is to find methods
that require evaluating fewer candidate architectures, with less manual restriction or tuning
of the search space.
Another goal is to design transferable NAS algorithms that can be trained on a small problem
and then transferred to larger problems that are more expensive~\cite{cai2018proxylessnas,cai2019once}.
In summary, the core of designing NN architecture is to have a fast method of sampling architectures (through NAS),
and the fast training of the sampled architectures (through fast and robust optimization algorithms).
\subsection{Hardware Architectures: Conventional CMOS}
\label{sec:cmos}
As the prevalence and demands for machine learning rapidly continue to grow, it is increasingly important that we design machine learning algorithms efficiently and simultaneously deploy them on complementary and powerful hardware platforms. The compute and memory demands of NN deployments are huge and growing beyond the limits of what standard silicon-based semiconductor scaling can deliver. The reasons behind the scalability challenges in the semiconductor industry are as follows:
Firstly, as we approach the end of Moore's Law, transistor cost has been rising exponentially due to rising chip design costs with shrinking technology nodes (as reported by Xilinx and Gartner as early as 2011~\cite{trimberger2018three}). Furthermore, with the end of Dennard scaling, we have encountered considerable thermal challenges as power density no longer remains constant between node generations. To mitigate the challenges of increasing thermal density, chips are now designed to conditionally deliver power to groups of transistors, effectively throttling or ``turning off'' parts of a chip. This technique has come to be known as creating dark silicon~\cite{esmaeilzadeh2011dark}.
To overcome these challenges and provide sufficient compute capabilities, many disruptive approaches have been proposed. For example, Cerebras Systems~\cite{cerebras} has brought to market the first computer system which employs \textbf{wafer scale integration}, where chips are built from complete wafers rather than individual dies. Such a technique brought with it substantial engineering challenges in regards to power delivery, packaging, and cooling. Exploring another dimension, foundries are investigating true \textbf{3D chip stacking}, as was presented at HotChips'2019 by TSMC~\cite{TSMC}. \textbf{Analog computing}~\cite{aspinity,yuzuguler2019analog}, \textbf{quantum computing}~\cite{dwave} and \textbf{in-memory computing}~\cite{neuroblade,essera2016convolutional} are being investigated as well.
Less risky approaches focus on moving away from traditional von Neumann architectures, using specialization of compute architectures to provide the necessary performance scaling and energy efficiency.
Due to the specialization, the devices become increasingly heterogeneous.
A huge range of devices has emerged that all try to address this problem in different ways, whereby the key challenge is:
how do we best loop-transform and unfold the algorithms to maximize data reuse and compute efficiency, minimize memory bottlenecks, and limit power consumption, while meeting real-time requirements?
The choice of hardware type and quantity often boils down to a set of constraints imposed by the compute environment (datacenter, cloud, on-premise, edge, mobile), workload type (inference, training), data type (language, time series, vision, graph, etc.), ML model, usage model (online inference, batch jobs), and user-centric Service-Level Agreements (encryption level, request latency, etc.). For large datacenter deployments handling various types of workloads, it is often the case that several platforms must be combined to reduce the Total Cost of Ownership (TCO) across all their hardware platforms. It has therefore become increasingly necessary for owners of heterogeneous platforms to think of their systems as large-scale multi-processor computers, a trend sometimes termed Warehouse Scale Computing~\cite{wsc}. For Deep Learning hardware accelerators, these new computers generally take the form of CPU co-processors: a host CPU communicates with other entities in the datacenter, interfaces with disk memory, and formats input data which is then offloaded to the accelerator responsible for executing a user-defined compute graph, or Neural Network.
We begin with a taxonomy of these hardware architectures and discuss their relevant characteristics when it comes to the acceleration of machine learning workloads.
This is essential to understand how they will differ in their execution behavior, what it takes to leverage their unique features and how they can potentially benefit from previously introduced optimization techniques.
\subsubsection*{\textbf{Taxonomy of Compute Architectures for Deep Learning}}
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{figures/taxonomy.png}
\caption{Taxonomy of compute architectures, differentiating CPUs, GPUs and DPUs}
\label{fig:tax}
\end{figure*}
A broad range of hardware architectures to deploy machine learning algorithms exists today.
We can broadly classify them by the following criteria:
\begin{enumerate}[itemsep=-1pt]
\item Basic type of compute operation
\item Inherent support for specific numerical representations
\item External memory capacity (which is mostly relevant for training workloads)~\footnote{In these comparisons, we treat HBM and HBM2 as external memory as it is used in the same way as DDR4 or GDDR memory.}
\item External memory access bandwidth
\item Power consumption in the form of thermal design power (TDP)
\item Level of parallelism in the architecture and the degree of specialization
\end{enumerate}
As is shown in Figure~\ref{fig:tax}, we classify the compute architectures into scalar processors (\textbf{CPUs}), vector-based processors (\textbf{GPUs}), and so-called deep learning processing units (\textbf{DPUs}), although realistically these categories blend to some degree.
DPUs are specialized for this application domain, whereby we distinguish the more generic matrix- or tensor-based processors from the spatial processing approach. DPUs can be implemented with either ASICs or FPGAs. All of these architectures will be discussed individually below.
\paragraph*{CPUs} CPUs are widely used for ML applications and are viewed as largely serial or scalar compute engines (even though high-end variants for cloud deployment may have up to 10s of cores). They are optimized for single-thread performance, with implicitly managed memory hierarchies (with multiple levels of caches), and support floating point operations (FP64 and FP32) as well as 8bit and 16bit integer formats with dedicated vector units in most recent variants. Theoretical peak performance tops out at 6.8TOPs for FP64 assuming boost clock speed (Cascade Lake, 56 cores, 3.8GHz). External memory is currently primarily leveraging DDR4 memory banks with large capacities: Intel's Cascade Lake offers up to 4.5 TebiByte ($2^{40}$ Bytes) which is beyond what any of the other device categories can offer. Access is at maximum speed through high-end hardened memory controllers, offering 282~GBps bandwidth (for example Cascade Lake with 12 DDR4 channels).
Compared to GPUs and other HBM-enabled devices, the memory bandwidth of CPUs is lower. However, for many use cases, this can be compensated through their sophisticated cache hierarchies, combined with mature compiler tools.
Regarding power consumption, CPUs are at the upper end of the spectrum, with high-end devices ranging up to 400\,W~\cite{cascadelake}.
In the embedded space, ARM processors provide popular solutions, in particular when performance requirements are very low or when functionality is required that is not supported by the specialized device variants. In particular, the
Ethos~\cite{skillman2020technical} family of processing cores is specialized for CNN workloads and as such is considered under the DPU category below.
The advantages of CPUs are the generality of the hardware, as well as the ease of programming, with design environments that have matured over decades.
As expected, this comes at the cost of lower peak performance and efficiency compared to the more specialized device families.
In regards to quantization, CPUs can leverage this optimization technique only for INT8 and INT16, where supported.
\paragraph*{GPUs} GPUs are SIMD-based (Single Instruction, Multiple Data) vector processors that support smaller floating point formats (FP16) natively, as well as fixed point 8-bit and 4-bit integer formats more recently, and have a mix of implicitly and explicitly managed memory.
NVIDIA GPUs are some of the most popular hardware targets for machine learning, and newer families of chips have been introduced to specifically accelerate this workload, with AMD not far behind.
The latest devices in NVIDIA's Volta and Turing architecture families, introduced in 2018 and 2019 respectively, offer up to 130TOPs in FP16, which is beyond the capabilities of the latest CPU generations.
As such they are among the highest-performing devices on the market for the acceleration of DNNs, as they can exploit the high degree of parallelism inherent in this application via increasingly specialized architectural features.
For example, NVIDIA's Volta is the first generation to incorporate tensor cores as a new feature, as well as improved FP32 and FP64 support for training in a data center setting~\cite{NVIDIAv100}, and also introduced a deep learning accelerator (DLA) in their embedded devices to further reduce power consumption. This specialization brings additional challenges for their usage; there are up to three distinct execution units now, namely CUDA cores, tensor cores, and the DLA, which do not operate on the workload simultaneously (at least not easily or by default).
We therefore do not sum the peak performance of the different execution units, but quote only the maximum.
AMD announced the Vega GPU~\cite{RadeonInstinctGPU} with new deep learning instruction set operations, with the goal of obtaining parity with NVIDIA's high-end Tesla V100 datacenter GPUs.
Also, AMD's most recent EPYC family supports customized instructions for deep learning~\cite{epyc}.
Both companies also offer low-power GPUs for the embedded space, namely the AMD Vega mobile GPU~\cite{radeon-mobile} and the NVIDIA Jetson TX2~\cite{nvidia-jetson} and AGX family~\cite{agx}.
In regards to memory, GPUs leverage specialized and highly pipelined GDDR memory, which reduces capacity but offers much higher bandwidth (up to 732GBps). With NVIDIA's Turing family, the latest devices include HBM2 memory stacks~\cite{turing}, which scale the memory access bandwidth to 1TBps and beyond.
Again this is particularly important to address the needs of training workloads.
For the same reason, some of the DPUs introduce HBM2 as well, as discussed below.
In regards to power consumption, GPUs are at the high end, drawing up to 345\,W.
One general challenge for GPUs is that they need to leverage input parallelism to achieve high utilization of their large compute arrays. Therefore, inputs need to be grouped into batches before execution, which has adverse effects on latency.
Regarding quantization, support is limited to the inherent datatypes, which are INT4 at smallest in the context of NVIDIA's Turing family, and INT8 for many of the others.
Finally, the corresponding software environments for GPUs, while not on the same level as CPUs, have matured significantly and provide increased ease of use.
\paragraph*{FPGAs and ASICs}
FPGAs and ASICs customize the hardware architecture to the specifics of a given application.
They can be adapted in all aspects to suit a use case's requirements,
including IO capability, functionality, and specific performance or efficiency targets.
FPGAs can be reprogrammed, whereas ASICs are fully hardened.
This reprogrammability allows the design cost of the circuit to be amortized across many applications, but comes at the expense of hardware resource cost and performance.
FPGAs are a popular choice for the acceleration of CNNs.
Traditionally, an FPGA compute fabric consists of a sea of lookup tables (LUTs) which are interconnected through a programmable interconnect. The latest generations host millions of LUTs. Furthermore, the fabric is interspersed with specialized hardened compute blocks (DSPs) which accelerate n-bit multiply-accumulate operations (MACs), as well as SRAM blocks. The latter are referred to as block RAMs (BRAMs), which hold 36~kbits, and Ultra RAMs (URAMs), which store 288~kbits.
More recent FPGA generations combine multiple FPGA dies, referred to as super logic regions (SLRs), and leverage a silicon interposer to provide connectivity between SLRs.
This technology is referred to as stacked silicon interconnect (SSIT) and helps scale device capacity.
\paragraph*{DPUs} As mentioned at the beginning, the term DPU (short for deep learning processing unit) refers to a new type of compute architecture, specialized for the acceleration of CNNs.
DPUs are customized for these types of applications in a number of ways: types of operations supported, direct support of tensors or matrices, inherent data types and supported numerical representations, macro-architecture, explicitly managed and specialized memory hierarchies, and which levels of parallelism they exploit (input, output pixel, IFM, OFM, bit, and layer and branch parallelism) as was introduced in the first part of this chapter.
We differentiate two types of DPUs, which can be implemented with both ASIC technology and FPGAs.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.8\linewidth]{figures/DParchitecturesv2.png}
\caption{DPU architectures: Matrix of Processing Engines (MPE) on the left, and spatial architecture on the right}
\label{fig:dpu}
\end{figure*}
\paragraph*{Matrix of Processing Elements (MPE)} The first type, as shown on the left side of Figure~\ref{fig:dpu}, consists of an MPE that operates on matrices or higher dimensional tensors. The processing engines can be simple MACs, vector processors, or more complex VLIW (Very Long Instruction Word) cores that can support concurrent execution of different instructions.
A popular example in this category is Google's Tensor Processing Unit (TPU).
Introduced in 2016~\cite{tpu1}, it was originally designed to accelerate Google's TensorFlow framework.
The first generation supported integer arithmetic with a massively parallel INT8 matrix-multiply engine.
The second generation TPU was announced in May 2017~\cite{tpu}, and the third generation in May 2018~\cite{tpu3}.
These newer chips boast improved memory performance as well as support for floating point specifically aimed at training.
There are a number of startups introducing custom hardware that fall into this category.
Within the cloud, there are Graphcore, Groq, and Wave Computing.
Within the embedded space, where the design constraints are even more stringent, we find even more solutions.
Most are secretive about the details of their designs. Intel is investigating several custom accelerators and has for that purpose acquired a number of startups, namely Nervana, Habana, and Movidius. Fathom~\cite{movidius-tom} is Movidius' ultra-low-power Neural Compute Stick (NCS), which operates at about 1\,W.
Also, ARM offers specialized CNN processors in the form of their Ethos family, offering performance up to 4TOPs with support for INT8 and INT16 datatypes.
As mentioned above, DPUs provide specialized datatypes to execute heavily quantized, reduced precision CNN implementations.
At the extreme, binarized neural networks (which are very high throughput at extremely low power) are exploited in the following ASICs: BinarEye~\cite{binareye}, BNN Custom Fabric~\cite{ando2017brein}, and IBM AI Accelerator~\cite{IBMAI}. Also, Lattice has announced binarized neural network libraries targeting low power FPGA and achieving 1\,TOPs/W~\cite{lattice-bnn}.
Custom floating point representations are also considered. For example, Microsoft's Brainwave project~\cite{chung2018serving} uses this approach with the aim of applying FPGAs to CNNs at datacenter scale.
However, typically the hardened versions in ASICs only support INT8, as lower precisions could potentially limit their application scope. FPGA-based MPE implementations such as Xilinx's xDNN are less constrained and in principle can be customized as needed.
Similar to the GPU, but perhaps to a lesser degree, DPUs leverage input, IFM (input feature map) and OFM (output feature map) parallelism, which requires buffering of inputs and may have adverse effects on latency as well.
A particular challenge arises in the context of software environments, which differ for all vendors and are less mature than what we have observed for CPUs and GPUs. Typically, they are limited to supporting the execution of very specific layer types (sometimes even restricted in regards to parameter ranges) and neural networks, whereby the range of supported layer types and neural network models is continuously expanding.
In summary, through their specialization, these implementations minimize hardware cost, maximize performance and optimize efficiency by exploiting specific precision arithmetic with a specialized instruction set and customized memory system. However, in order to gain a performance advantage, the algorithms need to be adapted to leverage these features.
\paragraph*{Spatial DPUs.} The second type of DPU leverages spatial acceleration and exploits layer and branch parallelism. Popular examples are hls4ml~\cite{hls4mldata_150p} and FINN~\cite{umuroglu2017finn,blott2018finnr}.
To that extent, the hardware architecture is even further specialized to the specifics of a given deep learning topology.
This is visualized on the right side of Figure~\ref{fig:dpu}.
The hardware architecture actually mimics the given deep learning topology and the inputs are streamed through the architecture.
Every layer is instantiated with a dedicated compute datapath.
Each layer has a dedicated weight buffer, and activation buffers in-between layers are FIFOs of minimal size. They buffer just enough data to feed the next set of convolutions in the next layer.
This is substantially more efficient compared to the first type of DPUs or GPUs and yields reduced latency.
MPE-style DPUs and GPUs generally perform layer-by-layer computation, where a sequence of images has to be buffered in order to extract maximum compute out of the platform (input, IFM and OFM parallelism).
For this, the device buffers a batch of images before computing the first layer of all images.
Then all intermediate results are buffered, the next layer is computed, and so on. Hence the latency is heavily dependent on the size of the input batch.
As a result, spatial DPUs have an advantage in regard to latency.
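To first order (a simplified model that ignores memory effects), if layer $l$ takes time $t_l$ and a batch of $B$ inputs is processed, the two schedules finish after roughly
\[
T_{\text{layer-by-layer}} \;\approx\; B\sum_{l} t_l ,
\qquad
T_{\text{pipelined}} \;\approx\; \sum_{l} t_l \;+\; (B-1)\max_{l} t_l ,
\]
and, more importantly for latency, the first result of a pipelined spatial design is available after just the pipeline fill time $\sum_{l} t_l$, independent of the batch size $B$.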
This level of customization is only possible with programmable hardware architectures such as FPGAs, as they can adapt the hardware architecture for different use cases.
This generally wouldn't make sense in the context of an ASIC accelerator, as that would yield an ASIC only capable of accelerating one specific topology, which would be far too restrictive in scope.
The limitation in spatial architectures is the scalability in the numbers of layers.
Each layer comes at a resource cost overhead and there is a maximum number of layers that can be created within a single device. As a result, some extremely deep CNNs might not be able to fit into a single device. Microsoft's Brainwave project leverages spatial computing and overcomes this limitation with a distributed approach~\cite{chung2018serving}.
Once a spatial DPU has been leveraged and the architecture is specialized for a very specific CNN, the architecture can be further customized in regards to minimum precision. By supporting only the bits needed per layer of the CNN, spatial DPUs can achieve even higher performance and efficiency, whereas in an MPE the hardware must support the maximum precision required over the whole network. In regards to customized precisions and spatial architectures, FINN has pioneered the first binarized neural network accelerators~\cite{umuroglu2017finn,fraser2017scaling} and provided many proof points for customized reduced-precision implementations~\cite{blott2018finnr}.
This flexibility comes at a cost in the form of programming complexity, and spatial DPUs are extremely difficult to characterize in general, as their performance depends on the specifics of the hardware architecture that has been implemented.
\paragraph*{Further Variants of DPUs}
Beyond the previously discussed spatial DPUs and MPEs, there are many more variants.
Some exploit sparse computing engines for example, such as EIE and its successor ESE~\cite{han2017ese}, SCNN~\cite{parashar2017scnn}, Cnvlutin~\cite{cnvlutin}, Cambricon-S and Cambricon-X~\cite{zhang2016cambricon}.
These are the only architectures that can benefit from irregular sparsity.
Finally, another dimension for the customization of precision is to optimize it over the run-time of a CNN.
In other words, beyond using statically fixed reduced precision, where the hardware operates with a fixed precision for all variables, some approaches explore run-time configurable bit precision, which allows for the exploitation of bit-parallelism in the arithmetic.
On the hardware implementation side, this can be exploited with run-time programmable precision and is effective with \textbf{bit-serial} implementations.
For example Umuroglu et al.~\cite{umuroglu2018BISMO} demonstrate with BISMO that bit-serial can provide highly attractive performance with minimal overhead on FPGAs, while Judd et al. show the same is true for ASICs with their prototype ASIC called Stripes~\cite{judd2016stripes}.
While this concept can be applied to both MPE and spatial architectures, it makes the most sense for MPEs.
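The arithmetic behind bit-serial execution is simply the binary expansion of the dot product: writing an unsigned $P$-bit activation vector as $x=\sum_{b=0}^{P-1} 2^{b}\,x^{(b)}$, where $x^{(b)}$ is its $b$-th bit plane, gives $x\cdot w = \sum_{b} 2^{b}\,(x^{(b)}\cdot w)$, so the number of 1-bit passes, and hence the precision, becomes a run-time parameter. A small illustrative numpy sketch (unsigned activations only, for simplicity):
\begin{verbatim}
import numpy as np

def bit_serial_dot(x_uint, w, num_bits):
    """Dot product of unsigned `num_bits`-bit activations with integer
    weights, computed one activation bit plane at a time."""
    acc = 0
    for b in range(num_bits):
        bit_plane = (x_uint >> b) & 1      # 1-bit slice of every activation
        acc += int(bit_plane @ w) << b     # 1-bit dot product, weighted by 2^b
    return acc

x = np.random.randint(0, 16, size=8)       # 4-bit unsigned activations
w = np.random.randint(-8, 8, size=8)       # e.g. 4-bit signed weights
assert bit_serial_dot(x, w, num_bits=4) == int(x @ w)
\end{verbatim}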
\begin{table}[!ht]
\resizebox{\linewidth}{!}{
\begin{tabular}{|c||c|c|c|c|c|c|c|}
\hline
\textbf{Server-class} & \textbf{Throughput} & \textbf{Latency} & \textbf{Power} & \textbf{Ext. Mem. Bandwidth} & \textbf{HW specialization} & \textbf{Ease of Use} & \textbf{Training/Inference} \\
\hline \hline
\textbf{Conventional} & & & & & & & \\
CPU & Medium & High & High & Medium & Low & High & Both \\
DPU-MPE & High & Medium-High & Medium & High & Medium & Low-Medium & Inference \\
DPU-Spatial & High & Low & Medium & High & High & Low & Inference \\
GPU (NVIDIA A100) & High & High & High & High & Medium & High & Both \\
\hline \hline
\textbf{Speculative} & & & & & & & \\
Cerebras CS-1 & Very High & Medium & High & Very High & Medium & Medium & Both \\
\hline
\end{tabular}}
\caption{Characterization of types of hardware based on important metrics
\label{tab:hwcompare}}
\end{table}
\paragraph*{Summary of Conventional CMOS Hardware Architectures}
We analyzed three categories of hardware architectures that are leveraged for CNN inference, namely common CPUs, SIMD-based vector processors such as GPUs, and DPUs which are specialized architectures for the acceleration of deep learning workloads.
An overview of the architectures is visualized in Table~\ref{tab:hwcompare}.
Please note, "Ease of Use" includes compute kernel programmability as well as general ease of use.
The degree of specialization includes operators, precision support, and customization towards topologies.
In summary, for DPUs, we distinguish between tensor processors which leverage a matrix of processing engines and spatial architectures which can be further specialized for specific topologies using FPGAs.
CPUs are the most general solution but high in power. GPUs and DPUs offer the highest performance, though GPUs are more expensive in terms of energy cost. Spatial DPU architectures excel at low latency and provide the highest compute efficiency through maximized customization.
CPUs, GPUs, and DPUs (MPE) use a sequential layer-by-layer compute model whereas spatial DPUs execute all layers of the network concurrently.
Hardened architectures, in the form of ASICs, CPUs, and GPUs, offer a fixed set of native datatypes and must default to the next higher supported precision into which a reduced-precision variable can be embedded. FPGAs, by contrast, can adopt any precision and numerical representation, which provides the utmost flexibility and leverages optimization with quantization to the maximum. However, the programmability of the FPGA fabric also comes at a speed and energy cost.
All architectures can benefit from coarse-grained pruning optimization techniques. Only sparse execution engines can benefit from irregular pruning, such as synaptic pruning.
We also discussed the various deployment options.
Many devices offer different power and operating modes as different compromises between throughput and power consumption to adapt to the potentially very different optimization targets of different application settings. Similarly, batch sizes, thread counts and stream sizes offer another compromise in regards to throughput versus latency. Again this is to facilitate a spectrum of different use cases.
Finally, the table shows that speculative approaches such as Cerebras can bring fundamental performance scalability.
Overall, each approach comes with its own advantages and disadvantages and the best solution greatly depends on the specifics of a given use case.
\subsection{Hardware/Software Codesign Example: FPGA-based Systems}
\label{sec:codesign}
In the last decade, we have observed the rise of two significant paradigms that have come to scientific applications: heterogeneous-computing systems and machine learning. Heterogeneous computing can overcome the decline of Moore's Law and Dennard Scaling and achieve the desired computational cost and performance by executing portions of the applications on the best-matched hardware, e.g., CPU, GPU, ASIC, and FPGA. On the other hand, machine learning is an automatic process that creates programs that can solve classes of problems. As with traditional programming, machine learning can significantly benefit from heterogeneous computing; in addition, designers can tailor specialized but reprogrammable hardware to fit ever-changing machine learning requirements. This section examines tools and methodologies that can automatically deploy and orchestrate machine learning on FPGA systems in larger scientific applications. FPGAs are a particularly compelling example to explore because the efficiency of the hardware coupled with their programmability makes for an interesting case study in hardware/software codesign.
Traditional software programming is complicated, and parallel high-performance programming is even more challenging. Programming heterogeneous systems that integrate FPGAs brings the challenge to the next level: the programmer must deal with a multi-objective optimization problem that involves performance and costs, i.e., hardware resources.
For machine learning applications, a common practice is to profile the application on CPU (or GPU) to identify the bottlenecks to be offloaded onto the reprogrammable logic to improve latency, throughput, or energy efficiency of the application as a whole. Then, part of the application can remain on the CPUs to control the execution and interact with the rest of the scientific setup.
\paragraph*{FPGA Programming}
FPGAs are configurable integrated circuits that provide a good trade-off in
terms of performance, power consumption, and flexibility with
respect to other hardware paradigms. However, it is a challenging and lengthy task to program FPGAs. FPGA programming has traditionally been a job for hardware designers familiar with digital design and computer architecture. These requirements lead to a steep learning curve for software developers and other domain experts. In order to lower the entry barrier, there has been a growing focus on designing FPGA hardware at a higher level of abstraction. As a result, various approaches have brought FPGA development into the mainstream by allowing developers to design for FPGAs at a higher level using familiar languages such as C, C++, OpenCL, and in some cases, even C\# \cite{kiwiHLS}. Here an important question arises: what are the additional advantages of designing the hardware at a higher level of abstraction? High-level languages (HLLs) include various constructs and design patterns that are more functionally expressive. Furthermore, the amount of time spent in the verification of the design is also a crucial factor. Hardware-description languages such as Verilog or VHDL focus on the final implementation details and, because of that, are more verbose. Bigger code repositories are not easy to verify for functional correctness. On the other hand, HLLs are more compact and simulate faster. Thus, a designer can do more verification in the same span of time. Despite these advances, FPGA programming remains complex. This has compelled academia and industry to develop new compilers, frameworks, and libraries to facilitate hardware design.
\paragraph*{High-Level Synthesis and Languages}
High-level synthesis (HLS), also known as behavioral or algorithmic synthesis, is an automated design process that takes as input a functional description of a design and outputs an RTL implementation. It transforms an untimed (or partially timed) high-level specification into a fully timed implementation. The process of HLS starts by analyzing the data dependencies between the various operations in the functional description. This analysis leads to a Data Flow Graph (DFG) representation. After the DFG generation, during the allocation phase, HLS maps each operation onto a hardware resource with latency and area characteristics. Then, HLS adds the notion of time to the design during the scheduling phase. Scheduling takes the operations and resources of the DFG and decides in which clock cycle to execute them, given their latency information. This step infers sequential logic by adding registers between operations and creating finite state machines~\cite{hlsbluebook}.
Over the past three decades, many HLS tools have been proposed. The work in \cite{survey_evaluation} presents an evaluation of different academic and commercial HLS tools tested on the same set of benchmarks. These tools have different input languages, perform different internal optimizations, and produce results of different quality, even for the same input languages. The results show that each HLS tool can significantly improve performance once the designer has mastered benchmark-specific optimizations and constraints. However, academic HLS tools have a steeper learning curve because they place less focus on usability. Commercial HLS tools have an advantage because of their better documentation, robustness, and design verification integration.
In terms of input languages for HLS, most of the HLLs are variants of the C language. However, there are a few limitations to generating hardware from a pure C specification. First, C lacks the notion of timing and concurrency. The designer must rely on the HLS tool to create clock-based timing. Similarly, the designer must specify the concurrency model or rely on HLS to extract the parallelism among operations or processes. Second, C lacks bit-accurate data types. It only provides ``native'' data types such as \texttt{char}, \texttt{int}, and \texttt{long}, whose size is a multiple of a byte. Third, it lacks the concepts of hardware interfaces and communication channels. SystemC was adopted as an HLS language to address all of these limitations~\cite{6838614}. However, SystemC still has not entirely made inroads in the FPGA community. Another common problem with all C-based languages, including SystemC, is memory access and modeling. These languages have a flat memory model, and memory access is done through pointers. Either HLS has to decide how to implement the memories in hardware, or the designer must leverage additional HLS directives or libraries to model the memory sub-system properly. Finally, in the family of the C-based specification languages for HLS, the SYCL language is emerging. SYCL (pronounced sickle) is an industry-driven standard that adds parallelism to C++ to design heterogeneous systems. SYCL programs perform best when paired with SYCL-aware C++ compilers such as the open-source data-parallel C++ (DPC++) compiler \cite{dpcplus}.
Apart from the variations of C, Bluespec is an open-source language for the description and synthesis of hardware based on SystemVerilog. It provides levels of abstraction with clean semantics that highlight aspects of the architecture. It can be considered a high-level functional HDL, where modules are implemented as rules using SystemVerilog syntax. Those rules are called guarded atomic actions and express behaviors as concurrently cooperating finite state machines (FSMs). Another recent language among FPGA designers is Chisel. It is based on Scala and supports hardware definition using highly parameterized generators and object-oriented and functional programming. Similar to an HLS flow, it compiles into an RTL Verilog implementation.
\begin{table}[h]
\centering
\caption {A brief taxonomy of domain-specific languages and frameworks for FPGA applications~\label{DSLs}}
\begin{tabular}{|l|l|}
\hline
\textbf{Domain and Interfaces} & \textbf{DSLs and Frameworks} \\ \hline
Signal-Processing & HDLCoder \cite{dsl}, LabView \cite{dsl}, Spiral \cite{SPIRAL}, VSIPL \cite{VSIPL} \\ \hline
Networking & SNORT \cite{snort}, Click \cite{click}, P4 \cite{p4lang}, Floem \cite{floem}\\ \hline
Databases & Glacier \cite{glacier} \\ \hline
Machine Learning & OptiML \cite{optiML} \\ \hline
Numerics & Verilog AMS \cite{verilogAMS} \\ \hline
Streaming & Maxeler \cite{maxeler}, SCORE \cite{score}, Lime \cite{lime}, Aetherling \cite{aetherling} \\ \hline
Dataflow & OpenDF \cite{openDF}, OpenSpatial \cite{dsl} \\ \hline
Graphs & GraphStep \cite{graphstep}, GraphGen \cite{graphgen} \\ \hline
Data Parallel & MapReduce \cite{dsl}, Accelerator \cite{accelerator}, FCUDA \cite{fcuda}, SuSy \cite{susy} \\ \hline
Circuit Generators & Flopoco \cite{flopoco}, JHDL \cite{jhdl}, PAMDC \cite{pmdc} \\ \hline
Image processing & HIPACC \cite{hipacc}, FROST \cite{frost}, Darkroom \cite{darkroom}, RIPL \cite{ripl}, PolyMage \cite{polymage} \\ \hline
Static & JBits \cite{jbits}, TVM \cite{tvm}\\ \hline
Task based & TAPAS \cite{tapas} \\ \hline
Dynamic & PyRTL \cite{pyrtl}, APARAPI \cite{arapa}, TornadoVM\cite{tornado}, Caldeira et al. \cite{caldeira}, \\ & LINQits \cite{linqits}, DHDL \cite{dhdl}, Spatial \cite{spatial} \\ \hline
Type Systems & DAHLIA \cite{dahlia}\\ \hline
Verification & Kami \cite{kami} \\ \hline
Virtualization & Cascade \cite{cascade} \\ \hline
\end{tabular}
\end{table}
Although all these languages have helped create efficient hardware and significantly shorten the development time, specific coding techniques are still necessary.
Also, the growth and diversification of the application domains have shown the limitations of these programming languages. This has further pushed the level of abstraction to domain-specific languages (DSLs).
In recent years, we have observed the growth of a considerable corpus of DSLs and frameworks for FPGA designs~\cite{dsl, transparent}.
In a DSL-based approach, the users and the tools can use domain knowledge to apply static and dynamic optimizations.
However, a domain-specific HLS tool requires an appropriate compiler and a development environment that caters to the target domain.
Table~\ref{DSLs} shows some of the DSLs and frameworks developed over the years for FPGA computing organized by domains of application.
Although all the approaches in the table are diverse in terms of applications, an interesting question is: what are the common denominators?
To the best of our knowledge, most of them follow one of two approaches: either the DSL specification gets compiled directly into the RTL implementation, or the flow leverages source-to-source compilers.
In the latter case, the DSL compiler produces an equivalent source code in a different programming language, for example, C++, for a more standard HLS flow.
As a concluding remark, the efforts to design
better HLS compilers and languages are a significant part of present FPGA research.
Furthermore, Table~\ref{DSLs} is by no means an exhaustive list;
the body of work on DSLs for FPGAs easily outnumbers what is presented in the table.
\paragraph*{Software and Hardware Integration}
Running an application as software on a microprocessor is more accessible than designing and running specialized hardware, but it may result in poor performance and higher power costs. On the other hand, partitioning an application into software and hardware components is challenging. This process, also known as hardware/software codesign, divides an application between software running on the microprocessor and one or more custom hardware or co-processors components to achieve desired performance goals. Understandably there exists a plethora of research work in this area. The authors in \cite{rfu} have provided background information on notable aspects of older FPGA technologies and simultaneously explained the fundamental architectures and design methods for codesign. Furthermore, the work in \cite{microarchitecture} is another comprehensive study that aims to evaluate and analyze the microarchitectural characteristics of state-of-the-art CPU-FPGA platforms in depth. That paper covers most of the shared-memory platforms with detailed benchmarks.
The two leading FPGA vendors, Xilinx and Intel, have their own solutions. The Xilinx Runtime Library (XRT)~\cite{XRT} is implemented as a combination of userspace and kernel driver components. It supports both PCIe-based boards and MPSoC-based embedded platforms. Similarly, Xilinx SDSoC~\cite{sdsoc} and SDAccel~\cite{sdaccel} became publicly available in late 2015; the former works only on select boards of the Zynq family of FPGAs, the latter only on selected PCIe-based boards for OpenCL computing. Since 2020, Xilinx has offered Vitis~\cite{vitis} as a unified platform. The Vitis Unified Software Platform is a comprehensive development environment to build and seamlessly deploy accelerated applications on Xilinx platforms, including on-premises Alveo cards, FPGA instances in the cloud, and embedded platforms.
In addition, the recent efforts of Xilinx under the flagship Versal \cite{versal} is also a step towards codesign applications. Intel has the Open Programmable Acceleration Engine (OPAE) \cite{intelopae} which is the API library for programmers writing host applications that will leverage the FPGA acceleration. Likewise, Intel oneAPI \cite{inteloneapi} is an open, unified programming model built on standards to simplify the development and deployment of data-centric workloads across CPUs, GPUs, FPGAs, and other accelerators.
Apart from vendor solutions, academia and the open-source community have also attempted to simplify the integration of applications, operating systems, and hardware acceleration. For a comprehensive analysis, the reader is referred to the works in \cite{reconfig_os, connectal}, which give a historical review and summary on ideas and key concepts to include reconfigurable computing aspects in operating systems. They also present an overview of published and available operating systems of the last 30 years targeting reconfigurable computing. Similarly, the design exploration and engineering of FPGA drivers that are portable across multiple physical interfaces (PCIe, Ethernet, optical links) have remained a significant part of HW/SW codesign research. The challenges come from the variety of FPGA boards, the plethora of interfaces, and the diverse user requirements. Fundamentally, the FPGA drivers should allow the designer to load or reconfigure an application bitstream and support data transfers between the FPGA and host.
A significant engineering challenge is to consider how to partition driver functionality between the hardware and software components. One growing research focus is to exploit the spatial parallelism of FPGA technology through implementing multiple queues on FPGA drivers. A thorough analysis of system-level drivers for FPGA is out of the scope of our white paper. Readers interested in FPGA system-level drivers are referred to the work in \cite{fpga_drivers, fpgadrivers2}. The authors of those papers have provided benchmarks of various mainstream academic and vendor solutions regarding system-level drivers in the FPGA domain.
Despite the various existing OS and driver solutions, an open problem that remains is standardization. Industry-wide standardization would allow for faster development and better portability and (re)usability of FPGA applications. There is already ongoing work in this area. Standards like the CCIX consortium \cite{ccix} and the Heterogeneous System Architecture (HSA) foundation \cite{hsa} have already made good progress.
\paragraph*{The Case for ML Frameworks for FPGA Design}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/FINN_flow_for_paper.pdf}
\caption{FINN Compiler Flow}\label{figures:finnflow}
\end{figure}
Machine learning is one of the fastest-growing application domains, and over the years there has been increasing demand for FPGA-based implementations, as FPGAs can meet latency, throughput, and efficiency requirements through extreme customization of the hardware design, leveraging reduced-precision arithmetic, streaming dataflow implementations (introduced above as spatial architectures), and fine-granular sparsity. To make these customizations accessible to a broad spectrum of users and to reduce the significant engineering effort, compilers and tools are needed that cater to the needs of ML researchers and domain experts working with FPGAs. Two main ML frameworks are making the effort to fill this vacuum: hls4ml and FINN. Considering the aforementioned tools, compilers, programming languages, and codesign solutions, both hls4ml and FINN have the potential to reach a broader scientific community.
To get a better understanding of how such a tool flow works, we consider the FINN compiler in more detail in the following paragraphs.
The \textbf{FINN compiler}~\cite{umuroglu2017finn} is an open-source framework to generate spatial DPU or streaming dataflow accelerators on FPGAs.
The FINN compiler has a highly modular structure as shown in Figure~\ref{figures:finnflow}, which allows the user to interactively generate a specialized architecture for a specific DNN.
The framework provides a frontend, transformation and analysis passes, and multiple backends to explore the design space in terms of resource and throughput constraints.
Brevitas~\cite{brevitas}, a PyTorch library for quantization-aware training, is the \textbf{frontend} used in this flow.
It enables training DNNs with weights and activations quantized down to a few bits, then exports the trained network into the intermediate representation (IR) used by the FINN compiler.
The \textbf{transformation and analysis passes} help to generate an efficient representation of the DNN. Finally, the \textbf{backend} contains a code generator that creates synthesizable accelerator descriptions, which can be implemented as either a standalone Vivado IPI component or integrated into various shells, including Xilinx Alveo boards and PYNQ embedded platforms.
For further processing, the DNN model must first be converted into the IR of the FINN compiler. The frontend stage takes care of this by converting the PyTorch description into the IR, called FINN-ONNX. This IR is based on ONNX~\cite{onnx_github}, an open-source interchange format that uses a protobuf description to represent DNNs. It comes with several standard operators and allows the user to easily create their own operators to customize the model. The nodes represent layers and edges carry outputs from one layer to become inputs to another. The feature to customize the ONNX representation is used in the framework to add application-specific nodes and attributes. Each node is tagged with the quantization of its inputs, parameters (weights and activations), and outputs to enable quantization-aware optimizations and the mapping to backend primitives optimized for quantized computation. During the compiler flow, the nodes are transformed into backend-specific variants via a series of transformation passes.
The main principle of the \textbf{FINN compiler} is \textbf{graph transformation} and \textbf{analysis passes}, which change or analyze the IR of the model. A pass is a function that takes the IR graph as input and either (a) transforms the DNN by looking for a certain pattern, changing the graph in a specific manner and outputs the modified graph, or (b) \textbf{analyzes} the DNN to produce metadata about its properties. To bring the model into a representation from which code can be produced and finally the hardware accelerator can be generated, various transformations must be applied. The main transformations involved are summarized below.
Although the PyTorch description of the network is mostly quantized, it may still contain some floating-point operations from, e.g., preprocessing, channelwise scaling, or batchnorm layers. In order to generate a hardware accelerator from the model, these floating-point operations must be absorbed into multi-level thresholds, so that a functionally identical network of integer operations is created. The transformation to achieve this is called \textbf{streamlining}, as described by Umuroglu and Jahre~\cite{DBLP:journals/corr/abs-1709-04060}. During streamlining, floating-point operations are moved next to each other, collapsed into a single operation, and absorbed into succeeding multi-thresholding nodes.
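As a simplified one-threshold example of this idea (multi-bit activations use a vector of such thresholds), consider a channel whose floating-point affine transform feeds a comparison: for a positive scale $a$,
\[
a\,x + b \;>\; t \quad\Longleftrightarrow\quad x \;>\; \frac{t-b}{a} \qquad (a>0),
\]
so the scale and bias can be absorbed offline into a new (suitably rounded) threshold on the integer input $x$, leaving no floating-point arithmetic at inference time.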
Next, high-level operations in the graph are \textbf{lowered} to simpler implementations that exist in the FINN HLS-based hardware library. For instance, convolutions will be lowered to a sliding window node followed by a matrix-vector node, while pooling operations will be implemented by a sliding window followed by an aggregation operator. The resulting graph now consists of layers that can be converted to hardware building block equivalents. Each node corresponds to a Vivado HLS C++ function call, from which an IP block per layer can be generated using Vivado. The resources utilized by each hardware building block can be controlled through specific attributes passed from FINN to Vivado. For example, multiplications can be performed with LUTs or DSP blocks, and parameters can be stored in distributed, Block, or Ultra RAM.
Finally, the \textbf{folding} process assigns compute resources to each layer to obtain the desired throughput with a balanced pipeline by fine-tuning their degree of parallelism.
To enable per-layer specialization without reconfiguration and minimize latency, FINN creates dedicated per-layer hardware interconnected with FIFO channels, thus the outermost loop across $L$ layers is always fully pipelined.
Once the folding is specified, resource estimates can be produced for each node.
There are several ways to estimate the resources. Even before IP blocks are generated from the HLS layers, an estimate of the resources per layer can be made by using analytical models based on the concepts from the FINN-R paper~\cite{blott2018finnr}.
Estimations can also be extracted from Vivado HLS after IP generation, though these results are still estimations that may differ from the resource usage of the final implementation due to synthesis optimizations.
The \textbf{Backend} is responsible for consuming the IR graph and backend-specific information to create a deployment package, also implemented using the transformation concept.
To get the inference accelerator, between the layers FIFOs are inserted, which can be sized automatically by the FINN compiler. Afterwards, the single IP blocks are stitched together and synthesized. The stitched IP can be manually integrated into a system, or inserted into an appropriate shell for the target platform. If the target platform is an Alveo card, the design is exported as a Vivado Design Checkpoint (DCP), followed by generation of Xilinx Vitis~\cite{kathail2020xilinx} object files and linking.
\paragraph*{Summary of Hardware/Software Codesign and FPGA-based Systems}
In summary, CPUs are the most general solution for CNN inference but consume the most power. GPUs and DPUs offer the highest performance, with GPUs being more expensive in terms of energy cost. FPGAs offer several tradeoffs that may well fit rapidly moving application domains.
FPGAs can adopt any precision and numerical representation, which provides the utmost flexibility and exploits quantization-based optimization to the maximum, whereas hardened architectures must default to the next-higher supported precision in which the reduced-precision variable can be embedded.
Furthermore, through the spatial dataflow approach, much lower latency can be achieved.
However, the complexity of programming FPGAs limits their deployment.
Tools such as hls4ml and FINN are frameworks created specifically for the ML domain: they automate the process of hardware generation for the end user, thus hiding the design complexity associated with FPGAs and enabling their use in the previously discussed end applications.
\subsection{Beyond-CMOS neuromorphic hardware}
\label{sec:beyondcmos}
With rapidly growing machine learning applications comes the acute need for their efficient hardware implementations. Most of the efforts are focused on digital CMOS technology, such as implementations based on general-purpose TPUs/GPUs, FPGAs, and more specialized ML hardware accelerators. The steady improvements in such hardware platforms' performance and energy efficiency over the past decade are attributed to the use of very advanced, sub-10-nm CMOS processes and holistic optimization of circuits, architectures, and algorithms. It includes, for example, taking advantage of aggressive voltage supply scaling \cite{Moons2017}, very deep pipelines and extensive data reuse in architectures \cite{Chen2017}, and lowering the precision of weights and activations of the algorithms \cite{Simons2019}. As a result, very compact state-of-the-art neural networks, such as MobileNet based on 3.4M parameters and 300M multiply-and-add operations per inference \cite{Sandler2018}, can now be fitted entirely on a single chip. However, on all these fronts, advances are saturating and cannot rely on the faltering Moore's law.
On the other hand, further progress would be essential because ML algorithms are getting increasingly more complex. For example, transformer networks \cite{Vaswani2017}, the state-of-the-art approach for many ML tasks today \cite{Vaswani2017, Vinyals2019, Dosovitskiy2020}, could have hundreds of billions of parameters and perform hundreds of trillions of operations per inference. Moreover, the transformer's functional performance typically improves with the model size \cite{Rajbhandari2020,Brown2020}. Training such models requires enormous, data-center-scale (e.g., kiloTPU-year) resources while performing inference on resource-constrained edge devices would be extremely challenging.
The opportunities for building more efficient hardware may come from biological neural networks. Indeed, it is believed that the human brain---which has $>$1000$\times$ more synapses than there are weights in the largest transformer network---is extremely energy efficient \cite{Hasler2013}, which serves as a general motivation for developing neuromorphic hardware \cite{Mead1990}. There is a long history of CMOS neuromorphic circuits \cite{Indiveri2011}. However, unleashing the full potential of neuromorphic computing might require novel, beyond-CMOS device and circuit technologies \cite{Berggren2020} that allow for more efficient implementations of various functionalities of biological neural systems.
In this section, the most prominent emerging technology proposals, including those based on emerging dense analog memory device circuits, are grouped according to the targeted low-level neuromorphic functionality---see, e.g., reviews in \cite{Burr2017, Bavandpour2018, Yang2013NatureNano, Yu2018IEEE} and original work utilizing volatile \cite{Sheridan2017, Cai2019NatElec, Chu2014Neuro, Yeon2020, Ohno2011, Wang2017NatMat, Pickett2013, Wang2018NatElec, Zhang2018Small, Lashkare2018, Adda2018} and nonvolatile \cite{Alibart2012, Adam2017, Govoreanu2013, Prezioso2015, Prezioso2016, Prezioso2018, MerrikhBayat2018, Lin2020NatElec, Hu2018AdvMat, Yao2020Nature, Liu2020ISSCC, Kim2019XBAR, Cai2020NatElec, Mahmoodi2019IEDM, Mahmoodi2019NatComm, Li2016IEDM, Wang2018NatElec, Pedretti2017} memristors, phase change memories (PCM) \cite{Burr2015, Tuma2016, Ambrogio2018, Karunaratne2020, Joshi2020, Kuzum2011, Rios2019}, nonvolatile NOR \cite{Guo2017CICC, Guo2017IEDM, MerrikhBayat2015, Mahmoodi2019NatComm} and NAND \cite{Bavandpour2019NAND, Bavandpour2020, Lee2019NAND} flash memories, organic volatile floating gate memories \cite{Fuller2019}, as well as multiferroic and spintronic \cite{Grollier2020, Ostwal2018, Sengupta2016, Romera2018, Ni2018IEDM}, photonic \cite{Shasti2021, Goi2020, Rios2019, Lin2019Sciece, Hamerly2019PhysRevX, Hamley2019, Shen2017NatPhot, Tait2016, Feldmann2019, Buckley2017, Bruiner2013, Vandoorne2014}, and superconductor \cite{Segall2017, Buckley2017, Rowlands2021} circuits. More discussion is devoted to analog vector-by-matrix multiplication circuits in the following subsection because of their immediate value for today's state-of-the-art algorithms. More biologically-realistic proposals described in the subsequent sections are less emphasized because they target algorithms with inferior performance. The least mature though very intriguing quantum neuromorphic computing \cite{Yamamoto2020, Markovich2020} is not discussed in this brief review.
\paragraph*{Analog Vector-by-Matrix Multiplication}
The emergence of dense analog-grade nonvolatile memories in the past two decades renewed interest in analog-circuit implementations of vector-by-matrix multiplication (VMM) \cite{Mead1990, Holmes1993, Alibart2012, Widrow1962, Chawla2004, Guo2017CICC, MerrikhBayat2015}, which is the most common and frequently performed operation of any neural network in training or inference \cite{Hertz1991, Gerstner2002}. In the simplest case, such a circuit comprises a matrix of memory cells that serve as configurable resistors for encoding the matrix (synaptic) weights and peripheral sense amplifiers playing the role of neurons (Fig. \ref{fig:VMM}). The input vector is encoded as voltages applied to rows of the memory matrix, so that the currents flowing into virtually grounded columns correspond to the VMM results. Because addition and multiplication are performed on the physical level, via Kirchhoff's and Ohm's laws respectively, such an approach can be extremely fast and energy-efficient, provided that memory devices are dense and their conductances are adjustable (i.e., multi-state). The energy efficiency in part comes from performing ``in-memory'' computing, which reduces the amount of data (corresponding to the synaptic weights) moved across or in and out of the chip during computation. Such communication overhead could dominate the energy consumption in the most advanced digital CMOS implementations.
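The principle can be emulated in a few lines (an idealized sketch that ignores wire resistance, device nonlinearity, and noise; all values are hypothetical):

\begin{verbatim}
import numpy as np

# Idealized crossbar VMM: Ohm's law per device, Kirchhoff's law per column.
G = np.array([[1.0, 0.2],          # crosspoint conductances (weights)
              [0.5, 0.8],
              [0.1, 0.3]])
V = np.array([0.1, 0.2, 0.05])     # input voltages applied to the rows

I = V @ G                          # column currents: I_j = sum_i V_i G_ij
print(I)                           # VMM result seen by the sense amplifiers
\end{verbatim}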
\begin{figure}[!t]
\centering
\includegraphics[width=0.30\textwidth]{figures/vmm.png}
\caption{Analog vector-by-matrix multiplication (VMM) in a crossbar circuit with adjustable crosspoint devices. For clarity, the output signal is shown for just one column of the array, while the sense amplifier circuitry is not shown. Note that other VMM designs, e.g., utilizing the duration of applied voltage pulses rather than their amplitudes for encoding inputs/outputs, are now being actively explored---see, e.g., their brief review in Ref.~\cite{Bavandpour2018}.}
\label{fig:VMM}
\end{figure}
The general challenge towards practical adoption of such circuits, especially when using the most promising emerging memory technologies, is variations in $I$-$V$ characteristics, e.g., in the switching voltages applied to change the memory state. In light of this challenge, the most straightforward application is ex-situ trained inference accelerators for the earlier firing-rate neural networks \cite{Bavandpour2018}, i.e., the so-called second generation of artificial neural networks (ANNs) with graded-response neurons. In such applications, memory devices are updated infrequently, only when new inference functionality should be programmed. Thus, crosspoint devices' conductances can be tuned with slower write schemes that are more tolerant to device variations. For example, after the weights have been found in software, memory cells are programmed, one by one, using feedback write-verify algorithms that can adapt to the unique $I$-$V$ characteristics of each device \cite{Alibart2012}. For the same reason, the switching endurance, i.e., the number of times the memory devices can be reliably programmed, and the write speed/energy are less critical. Additionally, VMM operations in the inference of many neural networks can be performed with moderate, less than 8-bit precision without incurring accuracy loss \cite{Yang2019IEDM}, which further relaxes the requirements for analog properties and permits more $I$-$V$ non-idealities and noise.
The most advanced neuromorphic inference circuits have been demonstrated with more mature floating-gate transistor memory circuits. Up until recently, such circuits were implemented primarily with ``synaptic transistors''~\cite{Diorio1996}, which may be fabricated using the standard CMOS technology, and several sophisticated, efficient systems were demonstrated \cite{Hasler2013, Chawla2004, George2016}.
However, these devices have relatively large areas ($>$10$^3$ $F^2$, where $F$ is the minimum feature size), leading to higher interconnect capacitance and hence larger time delays.
More recent work focused on implementing mixed-signal networks with much denser ($\sim$40\unit{F$^2$}) commercial NOR-flash memory arrays redesigned for analog computing applications~\cite{MerrikhBayat2015, Guo2017CICC}.
For example, a prototype of a 100k+-cell two-layer perceptron network fabricated in a 180-nm process with modified NOR-flash memory technology was reported in Ref.~\cite{Guo2017IEDM}.
It performed reliably, with negligible long-term drift and temperature sensitivity, reproducibly classifying the MNIST benchmark set images with $\sim95\%$ fidelity, sub-1-$\mu$s time delay, and sub-20-nJ energy consumption per pattern.
The energy-delay product was six orders of magnitude better than the best (at that time) 28-nm digital implementation performing the same task with a similar fidelity~\cite{Guo2017IEDM}.
Recent theoretical studies showed that neuromorphic inference circuits could be also implemented with much denser 3D-NAND flash memories \cite{Bavandpour2019NAND, Bavandpour2020, Lee2019NAND}, projected to scale eventually to 10 terabits per square inch density.
In the long term, the most promising are perhaps circuits based on metal-oxide resistive switching random access memories (ReRAM for short, also called metal-oxide memristors)~\cite{Yang2013NatureNano, Yu2018IEEE}, especially their passively integrated (0T1R) technology variety \cite{Kim2019XBAR}. Indeed, due to the ionic switching mechanism, ReRAM devices with dimensions below 10 nm still retain excellent analog properties and year-scale retention \cite{Govoreanu2013}. Furthermore, a low-temperature fabrication budget allows monolithic vertical integration of multiple ReRAM crossbar circuits, further increasing the effective density \cite{Adam2017}. There has been rapid progress in scaling up the complexity of ReRAM-based neuromorphic circuit demonstrations over the past several years \cite{Prezioso2015, MerrikhBayat2018, Lin2020NatElec, Hu2018AdvMat, Yao2020Nature, Liu2020ISSCC, Kim2019XBAR}. However, the ReRAM technology is still in much need of improvement. In addition to high device variations, another remaining issue is high write currents and operating conductances, which must be decreased by at least one order of magnitude to reduce the significant overhead of peripheral circuits \cite{Kim2019XBAR}.
The device requirements for training hardware accelerators are different and much more stringent. For instance, long retention is not required because weights are frequently updated. That allows using volatile memories in analog VMM circuits, such as interfacial memristors based on electron trapping/detrapping switching~\cite{Sheridan2017, Cai2019NatElec, Chu2014Neuro} and solid-state-electrolyte memories~\cite{Fuller2019, Yeon2020, Berggren2020}, or even capacitor-based memories controlling current via crosspoint transistors~\cite{Ambrogio2018}.
However, the toughest challenge is the much higher computing and weight precision required for training, together with the need for efficient schemes for weight updates, which in turn necessitate drastically tighter device variations.
The additional related requirement is that the change in device conductance upon applying the write pulse should not depend on its current state (the so-called linearity of update property).
Otherwise, accurate conductance adjustment would require sending a unique write pulse based on the current device state, which would be hardly compatible with fast (in parallel) weight update.
Phase change memories have also been investigated as candidates for variable resistors in analog VMM circuits \cite{Burr2015, Joshi2020}, though their main drawback is significant drift in the conductive state over time.
High write endurance, high density (with a vertical, 3D-NAND-like integrated structure), and long retention have been demonstrated in 1T ferroelectric RAM devices.
There is much excitement about such devices' applications in training and inference accelerators \cite{Ni2018IEDM}, though their analog properties are probably inferior to ReRAM.
The significant drawbacks of magnetic devices, such as magnetic tunnel junction memories, are smaller on/off current ratios, insufficient for practical VMM circuits, and poor analog properties for scaled-down devices \cite{Grollier2020}.
The potential of using light to implement fast and large-fanout interconnects and linear computations, such as the multiply-and-add operation, has motivated photonic neuromorphic computing research~\cite{Berggren2020,Goi2020,Shasti2021,Hamley2019}.
Different implementation flavors, e.g., with fixed~\cite{Lin2019Sciece} and programmable~\cite{Rios2019, Hamerly2019PhysRevX, Shen2017NatPhot, Tait2016} functionalities, have been recently suggested in the context of modern neural networks.
Specifically, Ref.~\cite{Lin2019Sciece} reports a system of multiple 3D-printed optical layers, each being a mesh of regions (neurons) with specifically chosen transmission-reflection properties, which can perform pattern classification inference similar to the convolutional neural networks. By sending a coherent light with amplitude-encoded input, a useful computation is performed at the speed of light. Specifically, the light diffracts and interferes when passing through the optical system and is ultimately steered to the specific region at the output layer corresponding to the pattern class.
Refs.~\cite{Rios2019, Hamerly2019PhysRevX, Shen2017NatPhot, Tait2016} report optical neuromorphic systems with configurable weights.
In Ref.~\cite{Rios2019}, the inputs are encoded in the light's energy and the weights are encoded by optical attenuation in PCM devices, so that a product is computed by passing the light through a PCM device.
Ref.~\cite{Tait2016} proposes encoding inputs with light amplitude, using a distinct frequency for each VMM input.
The light from inputs is combined and passed to the frequency selective weight banks based on a microring resonator (MRR) that features metal heaters to perform multiplication. In particular, the MRR coupling (i.e., weight) is controlled via heating by adjusting currents supplied to each MRR. In these reconfigurable implementations, the product accumulation (i.e., the summation operations in the VMM) is performed by integrating the light-induced charges on the photodetector. A very aggressive time-division multiplexing scheme for calculating VMM in which both weights and inputs are encoded in the coherent light's amplitude is proposed in Ref. \cite{Hamerly2019PhysRevX}.
At each step of such a scheme, the input light is fanned out into $n$ channels, combined with the $n$ light-encoded weights using a beam splitter, and then sent to $n$ homodyne photodetectors to compute $n$ products in parallel.
All-optical feed-forward inference based on Mach-Zehnder interferometer meshes utilizes the singular value decomposition of the weight matrix \cite{Shen2017NatPhot}. Unitary matrix transformations are implemented with optical beam splitters and phase shifters, while the diagonal matrix is implemented with optical attenuators.
In principle, sub-aJ energy and sub-ps latency for a single multiply-and-add operation might be possible with optical computing~\cite{Hamley2019}.
However, the main challenge remains the much larger dimensions of optical components and the very high I/O overhead of converting to and from the optical domain~\cite{Berggren2020, Shasti2021, Hamley2019}. The designs that rely on conversion to the electrical domain would be especially affected by the poor integration density of optical devices due to larger electrical communication overheads, which were shown to overwhelm the system-level performance of (much denser) ReRAM-based circuits \cite{Bavandpour2018}.
Optical systems would ultimately benefit from very wide ($\gg$10,000) dot products and/or deep time-division multiplexing to amortize the I/O overhead. However, possible issues with nonlinearities in charge integration, as well as the utility of such wide dot-product computations, remain unclear \cite{Hamley2019}.
\paragraph*{Stochastic Vector-by-Matrix Multiplication}
Computations performed by the brain are inherently stochastic, in that, e.g., substantially different neural responses are observed upon repeated presentation of identical stimuli~\cite{Rolls2010}.
Such noisy operation is mimicked by probabilistic neural networks, such as Boltzmann machines~\cite{Hinton1983} and deep belief neural networks~\cite{Hinton2009}.
In the simplest case, such a network is comprised of binary neurons that compute stochastic dot products, i.e., probabilistically generate output according to their pre-activation (dot-product) values.
The stochastic functionality can be realized at either the synapse or the neuron side. In the latter, more straightforward scenario, the neuron first computes a dot-product of its inputs and corresponding weights deterministically.
The result is then passed to some ``probabilistic" activation function, e.g., used as an argument in the sigmoid probability function, to determine the probability of generating high output.
Because of the typically large ($>100$) ratio of synapses to neurons, the efficient deterministic dot-product implementation, e.g., with the already discussed analog VMM circuits, is of primary importance for realizing high-performance probabilistic neural network hardware. Still, earlier work showed that even the simplest, deterministic neurons may incur substantial overhead, e.g., occupying up to $30\%$ of the area and consuming up to $40\%$ of the energy for some neural network models~\cite{Bavandpour2018}. Hence, neuromorphic hardware would also benefit from the efficient realization of stochastic neurons.
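In code, such a binary stochastic neuron is simply a deterministic dot product followed by a Bernoulli draw (a minimal sketch; the effective temperature $T$ sets the slope of the sigmoid):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def stochastic_neuron(x, w, T=1.0):
    """Binary stochastic neuron: P(out = 1) = sigmoid(pre-activation / T)."""
    pre = x @ w                          # deterministic dot product
    p = 1.0 / (1.0 + np.exp(-pre / T))   # probabilistic activation
    return rng.random() < p              # Bernoulli sample

x = np.array([1.0, -1.0, 1.0])
w = np.array([0.5, 0.3, -0.2])
print(stochastic_neuron(x, w))  # True with probability sigmoid(0) = 0.5
\end{verbatim}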
Emerging devices can be broadly employed in two ways to achieve stochastic functionality, namely by using either dynamic or static $I$-$V$ characteristics of memory devices. Specifically, the former approach is to utilize intrinsically stochastic switching between memory states in emerging memory devices.
For example, in MTJ memories, thermal fluctuation causes stochastic transition between the low resistance parallel and high resistance antiparallel states so that the probability of the final memory state upon switching could be controlled by the spin-torque current~\cite{Grollier2020}. The melt-quench-induced reconfiguration of the atomic structure is intrinsically stochastic in phase-change memories (PCMs)~\cite{Tuma2016}.
These phenomena were suggested for implementing MTJ~\cite{Ostwal2018} and PCM~\cite{Tuma2016} stochastic neurons.
The second approach is to utilize intrinsic and extrinsic current fluctuations in memory devices, e.g., random telegraph~\cite{Cai2020NatElec} and thermal noise~\cite{Mahmoodi2019IEDM} in ReRAM devices, or shot-noise in nanoscale floating gate transistors~\cite{Mahmoodi2019IEDM, Mahmoodi2019NatComm}.
In such an approach, the noisy current flowing into the neuron is compared against a reference value, e.g. using a simple latch, to implement a probabilistic activation function~\cite{Mahmoodi2019NatComm}.
The primary concern for the former approach is the limited endurance of many memories and the drift in the stochastic switching properties upon repeated switching.
An additional drawback is the necessity of co-integrating multiple memory device technologies for scalable stochastic dot-product circuits, e.g., integrating ReRAM-based artificial synapses with MTJ-based neurons.
On the other hand, analog circuits based on ReRAM devices only (Fig.~\ref{fig:VMM}), though operating at a much lower signal-to-noise ratio (SNR), can be utilized to implement stochastic VMM of the second approach.
Furthermore, adjusting read voltages in such a circuit allows for controlling SNR.
Hence, control of the effective temperature, i.e., the slope of the sigmoid probability function, enables an efficient implementation of stochastic annealing in Boltzmann machines at runtime.
The second approach's possible downside is slower operation because of lower read currents (which can be potentially addressed by utilizing external noise instead~\cite{Mahmoodi2019NatComm}).
Finally, the impact of noise quality on functional performance is another common concern.
This issue has not been systematically studied yet, though Gaussian-like thermal or shot noise should be more advantageous for truly random operation.
\paragraph*{Spiking Neuron and Synaptic Plasticity}
Despite much recent progress in algorithms~\cite{Neftci2019, Tavanaei2019}, the most biologically plausible, spiking neural networks (SNNs)~\cite{Gerstner2002} are still inferior in functional performance to simpler ANNs.
Even if simpler ANNs remain superior, work on efficient SNN hardware could still be justified by the need to efficiently interface with and/or model the brain, which in turn could lead to the development of higher-cognition artificial intelligence algorithms.
An additional intriguing feature of SNNs is local weight update rules, requiring only information from pre- and post-synaptic neurons that could enable large-scale neuromorphic hardware with real-time training capabilities~\cite{Thakur2018}.
In the simplest SNN models, the information is encoded in spike-time correlations~\cite{Gerstner2002}, while the network function is defined by the synaptic weights, which are adjusted based on the relative timing of spikes that are passed via synapses.
In addition to VMM, the essential operations in SNNs are leaky-integrate-and-fire (LIF) functions performed by neurons and various types of synaptic plasticity, such as short-term plasticity (STP) and long-term potentiation (LTP), and spike-timing-dependent-plasticity (STDP)~\cite{Gerstner2002}.
LIF neurons mimic the dynamic processes in the neuronal membrane, while synaptic plasticities mimic learning and memory mechanisms in biological networks.
For example, STP is a temporary change in the synaptic strength implementing a short-term memory.
Without immediate reinforcement of synaptic weight adjustment, the memory would be lost, i.e., the synaptic weight would relax to the original equilibrium state. On the other hand, the frequently repeated spiking stimulus causes long-term memory, e.g., permanent potentiation via the LTP mechanism.
STDP is a time-dependent specialization of Hebbian learning.
Its specific goal is to strengthen the synaptic efficiency when pre- and post- synaptic spikes happen in the expected causal temporal order and weaken it otherwise.
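A minimal discrete-time sketch of both primitives---an LIF neuron and a pairwise STDP window---is given below (all parameter values are illustrative):

\begin{verbatim}
import numpy as np

def lif(I, tau=20.0, v_th=1.0, dt=1.0):
    """Leaky-integrate-and-fire: leak + integrate, fire and reset."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(I):
        v += dt * (-v / tau + i_t)       # membrane leak and integration
        if v >= v_th:
            spikes.append(t)             # fire ...
            v = 0.0                      # ... and reset
    return spikes

def stdp(dt_post_pre, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Weight change vs. t_post - t_pre: potentiate the causal order."""
    if dt_post_pre > 0:                  # pre fires before post: LTP
        return A_plus * np.exp(-dt_post_pre / tau)
    return -A_minus * np.exp(dt_post_pre / tau)   # otherwise: LTD

print(lif(np.full(100, 0.08)))           # spike times for constant input
print(stdp(5.0), stdp(-5.0))             # positive (LTP), negative (LTD)
\end{verbatim}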
A compact implementation of LIF neurons with biological, ms-scale integration times using conventional circuit technology is challenging because of the large capacitors that are required. Leaky integration circuits utilizing volatile memristors (e.g., based on filamentary~\cite{Zhang2018Small}, interfacial~\cite{Lashkare2018}, and Mott insulator \cite{Adda2018} switching mechanisms) have been suggested to address this problem.
In such implementations, the integrated current is encoded with a conductive state of the volatile memory device.
Neuron spiking functionality was demonstrated with threshold-switching (volatile) memory devices that feature S-type negative differential resistance (NDR) $I$-$V$ characteristics~\cite{Pickett2013}.
This approach's general idea is similar to that of oscillator circuits based on an S-type (NDR) device connected to a resistor-capacitor circuit \cite{Kesim2019}.
LIF neurons based on spin-torque magnetic memories were simulated in Ref.~\cite{Sengupta2016}.
In such a neuron, spin-torque oscillations are employed to generate spikes, while incremental magnetization and its relaxation mimic integration and leakage, respectively.
The STP-to-LTP transition has been emulated with solid-state-electrolyte devices---see, e.g., the original work in Ref.~\cite{Ohno2011} and more recent work on ``diffusive'' memristors~\cite{Wang2017NatMat}.
Specifically, short and infrequent write pulses result in the formation of thin filaments, which are unstable and quickly dissolve, representing short-term memory.
However, a thicker and more stable filament can be formed by applying repeated and/or longer write pulses, thus mimicking transition to the LTP.
Different STDP window implementations, e.g., using PCM~\cite{Kuzum2011} or metal-oxide ReRAM~\cite{Prezioso2016} devices, have been suggested by carefully selecting the shapes of the pre- and post-synaptic write voltage pulses---see a comprehensive review of emulated synaptic plasticity with memristive devices in Refs.~\cite{Serrano-Gotarredona2013, Saighi2015}.
Several small-scale spiking neuromorphic systems based on emerging device technologies were demonstrated, including coincidence detection via STDP mechanism based on metal-oxide memristors~\cite{Prezioso2018, Pedretti2017} and temporal data classification with diffusive memristors~\cite{Wang2018NatElec}.
However, the overall progress in such advanced hardware has been much slower compared to simpler ANNs inference accelerators.
The main reason is more demanding functionality from emerging devices in such applications and hence the more severe impact of device variations on the SNN operation and performance.
For example, SNNs rely on fixed-magnitude spikes to update the conductance of multiple devices in parallel.
Because of that, the change in the conductances could vary drastically even with minor variations in the $I$-$V$ switching voltages, which in turn leads to very significant variations in the STDP characteristics~\cite{Prezioso2018}.
On the other hand, as already mentioned above, the implementation of simpler ex-situ trained ANNs is much less challenging because the write amplitude voltages in such networks can be adjusted uniquely for each device based on the feedback information during conductance tuning~\cite{Alibart2012}.
Superconductor circuits, e.g., those based on the rapid single flux quantum (RSFQ) variety~\cite{Likharev1991}, are naturally suited for spiking circuits due to information encoding in SFQ voltage pulses.
For example, Josephson junction spiking neurons operating at rates up to 50\unit{GHz} have been demonstrated in Ref.~\cite{Segall2017}.
The historical challenges of such an approach include inferior fabrication technology (which may finally change given the enormous investments in superconductor quantum computing), the low-temperature operation that limits its applications, and the lack of efficient analog memory circuits \cite{Likharev2012}.
The photonic spiking neural networks (e.g., Ref.~\cite{Feldmann2019}) and hybrid superconductor / optoelectronic neuromorphic circuits~\cite{Buckley2017} share the same challenges of the already discussed photonic neuromorphic inference approaches.
\paragraph*{Reservoir Computing}
Due to their intrinsic memory properties, recurrent neural networks, such as the Google Neural Machine Translation model, are especially suitable for processing sequential or temporal data. Reservoir computing (RC) networks are a special type of recurrent network that can be trained efficiently~\cite{Lukosevicius2009}, motivated by cortical information processing~\cite{Maas2004}.
Among its variants are liquid state machines~\cite{Maass2002}, a spiking RC network, and echo state networks~\cite{Jaeger2001}, an RC variant based on a very sparse recurrent network.
The main component in RC networks is a reservoir, which is a nonlinear recurrent network that maps inputs into a higher-dimensional spatio-temporal representation and has the property of a fading memory of the previous inputs and network states.
Another component is a readout layer, which maps the intermediate state to the outputs.
All connections in the reservoir are fixed and only weights in the readout layer are trainable.
Because of that and sparse intermediate representation, faster and online algorithms can be employed for training such networks, which is a primary strength of this approach.
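A compact echo-state-network sketch illustrates this division of labor: the recurrent reservoir is fixed and random, and only the linear readout is trained, here by ridge regression (all sizes and scales are illustrative):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 500                          # reservoir size, sequence length
W_in = rng.uniform(-0.5, 0.5, (N, 1))    # fixed input weights
W = rng.uniform(-0.5, 0.5, (N, N))       # fixed recurrent weights ...
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # ... scaled for fading memory

u = rng.uniform(-1, 1, T)                # input sequence
y = np.roll(u, 3)                        # target: recall input 3 steps back

X, x = np.zeros((T, N)), np.zeros(N)
for t in range(T):
    x = np.tanh(W_in[:, 0] * u[t] + W @ x)   # nonlinear reservoir update
    X[t] = x

ridge = 1e-6                             # only the readout is trained
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
print(np.mean((X @ W_out - y) ** 2))     # training error of the recall task
\end{verbatim}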
Though both readout and the reservoir can also be realized with the discussed analog VMM circuits, intriguing opportunities for implementing the reservoir are presented by nonlinear physical phenomena in superconductor, magnetic, and photonic devices~\cite{Tanaka2019}.
For example, spoken vowel recognition was demonstrated with RC in which the reservoir was implemented with four coupled MTJ-based spin-torque oscillators (STO)~\cite{Romera2018}.
In such a demo, the temporal input corresponding to spoken vowels is first converted to the frequency domain, which is in turn mapped to the corresponding DC bias currents that are applied to the MTJ devices.
The induced voltage on the STO devices is used as an output of the reservoir.
The reservoir utilizes the nonlinear dependence of the STO frequency on the DC current and the history-dependent transient motion of the MTJ's free-layer spins.
Various photonic reservoirs have been suggested~\cite{Shasti2021}, e.g., utilizing transient properties of optical systems with time-delayed feedback~\cite{Bruiner2013}, or relying on superimposing light that passively circulates through waveguides, splitters, and combiners, with nonlinear conversion to the electronic domain~\cite{Vandoorne2014}, to achieve a high-dimensional response. The dynamics of superconductor circuits have recently been studied for efficient and extremely fast reservoir implementations~\cite{Rowlands2021}.
Specifically, the proposed reservoir is based on a Josephson transmission line (JTL) formed by a chain of biased JJs.
An input pulse from one end of the JTL causes a rapid cascade of junction phase slips that propagates an SFQ pulse to the other end. Because the JJs modulate each other's currents, a complex dynamical state is achieved.
There are several general concerns with RC approaches. On the algorithmic level, RC is inferior in performance to state-of-the-art approaches, and it is unclear whether, without further algorithmic improvements, such a handicap can be outweighed by the advantages of online training.
The main concern for various hardware implementations is again related to the device variations, e.g., whether the hardware would be able to produce repeatable results when applying the same input.
An additional concern for magnetic devices is the limited coupling between devices which could impact the effectiveness of the reservoir.
\paragraph*{Hyperdimensional Computing / Associative Memory}
Hyperdimensional computing~\cite{Kanerva2009} circuits have been recently demonstrated with ReRAM~\cite{Li2016IEDM} and PCM~\cite{Karunaratne2020} devices.
The low-level operation in hyperdimensional computing is closely related to that of associative or content addressable memory~\cite{Hertz1991}.
Specifically, at the core of such an approach is an associative memory array circuit that outputs the closest, in a Hamming distance sense, memory row entry to a binary input vector serving as a search key.
Assuming symmetric binary representation, with $-1$ and $+1$ encoding, the Hamming distance is linearly related to a dot product: for vectors of length $n$, it equals $(n-\mathbf{x}\cdot\mathbf{y})/2$, i.e., half the difference between the vector length and the dot product between the input vector and the stored memory row values. Therefore, the critical functionality in hyperdimensional computing is again a VMM operation. After the VMM operation has been completed, its results are passed to the winner-take-all circuit~\cite{Hertz1991} (which is a harder version of a softmax function~\cite{Bridle1989}) that determines the element with the smallest Hamming distance while discarding all other outputs.
The additional simplification is that both input and weights in VMM are binary.
In principle, binary VMM can be more efficiently implemented in hardware than its fully analog version. Similar to binary neural networks~\cite{Simons2019}, the apparent tradeoff is a worse functional performance of hyperdimensional computing.
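A small sketch makes the arithmetic concrete: for $\pm 1$ vectors of length $n$, the Hamming distance is $(n-\mathbf{x}\cdot\mathbf{y})/2$, so the winner-take-all stage can simply pick the row with the largest binary dot product (values are illustrative):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, rows = 64, 4
M = rng.choice([-1, 1], (rows, n))   # stored hypervectors (memory rows)
key = M[2].copy(); key[:8] *= -1     # noisy search key (8 bits flipped)

dots = M @ key                       # binary VMM
hamming = (n - dots) // 2            # d = (n - dot)/2 for +/-1 vectors
print(hamming)                       # row 2 has distance 8
print(np.argmax(dots))               # winner-take-all output: 2
\end{verbatim}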
Another essential feature of hyperdimensional computing is its suitability for fast ``one-shot'' or incremental learning~\cite{Kanerva2009}, though at the cost of a much more redundant memory array.
Note that fast ``one-shot'' learning is not unique to hyperdimensional computing. For example, Hebbian learning and its many variants used in training associative neural networks have a recursive form and are naturally incremental, in that the weights can be modified based only on the current weight values and the new pattern stored in the network~\cite{Hertz1991}.
\paragraph*{Concluding Remarks} Many emerging devices and circuit technologies are currently being explored for neuromorphic hardware implementations.
Neuromorphic inference accelerators utilizing analog in-memory computing based on floating gate memories are perhaps the closest to widespread adoption, given the maturity of such technology, the practicality of its applications, and competitive performance as compared to conventional (digital CMOS) circuit implementations.
Comparing the performance prospects of other neuromorphic approaches is not straightforward because many proposals target algorithms with inferior functional performance, especially those closely mimicking the brain's operation.
Barring a substantial breakthrough in ML algorithms or the emergence of new applications that could benefit from high-performance, low-accuracy neuromorphic hardware, the inferior functional performance may limit the practicality of other approaches.
The main challenge, much more so for advanced neuromorphic computing concepts, remains significant variations in the operation of emerging devices.
\clearpage
\section{Introduction}
\label{intro}
In the most studied lattice model for the collapse transition (also called coil-globule transition) of polymers \cite{f66}, the chains are represented by self-avoiding walks, so that the bonds of the chain are placed on lattice edges and the monomers are located on the sites. An attractive interaction between monomers on nearest-neighbor sites which are not linked by bonds is added. The competition between the repulsive excluded-volume interactions and the attractive interactions leads to a change in the polymerization transition of SASAW (self-attracting self-avoiding walks) in a grand-canonical formalism. Experimentally, as the temperature of a polymer solution is lowered, the chain changes its configuration from extended (coil) to collapsed (globule), as the temperature crosses a particular value, called the $\Theta$-point. For weak attraction, the transition between a non-polymerized and a polymerized phase in the monomer fugacity-temperature plane is continuous, becoming discontinuous as the attraction is increased, so that the collapse transition is a tricritical point in this model. The nature of this transition was studied through a mapping of the polymer model onto a ferromagnetic $O(n)$ model in the limit $n \to 0$, due to de Gennes \cite{dg75,dg79}. The contributions to the high-temperature series expansion of the magnetic model are represented by self-avoiding walks on the lattice. Mean field tricritical exponents are found in three dimensions, with logarithmic corrections.
In two dimensions, non-classical exponents are expected, and a major result is due to Duplantier and Saleur (DS), who managed to derive the exact tricritical exponents for the SASAW model on a honeycomb lattice \cite{ds87}. The proposal of this model, which requires some fine tuning to allow it to be solved, as a generic result for the collapse transition has been discussed in the literature soon after its proposal, and lead to numerical results which seem to support its robustness \cite{ss88,ds89,po94,gh95,nt14}. Another aspect of the problem are the phase diagrams of the variety of models related to the problem of the collapse transition. Even a slight change in the SASAW model, if we assume that the attractive interactions are between polymer bonds on opposite sides of elementary squares of the square lattice, leads to a phase diagram which is different from the one found when the interactions are between monomers on nearest neighbor (NN) sites. In this model, an additional polymerized phase appears, besides the non-polymerized and the regular polymerized phases, and the critical polymerization line ends at a critical endpoint \cite{jurgen,foster07}.
The self-avoidance constraint may also be changed, allowing for more than one monomer at the same site, but still restricting the number of polymer bonds on each lattice edge to at most one \cite{mm75}. This generalization of SAW's, usually called trails, has the distinctive feature that the interactions are now between monomers {\em at the same site}. On two-dimensional lattices, the trails may collide or cross at each site, and in the original model, which we will call the ISAT (interacting self-avoiding trails) model, the statistical weights of both configurations are the same. If the trails are not allowed to cross themselves, so that only collisions of the trails on sites exist, we have a model called VISAW (vertex interacting self-avoiding walks), which was exactly solved by Bl\"ote and Nienhuis (BN) \cite{bn89} and has critical exponents for the collapse transition distinct from the ones found in the DS model. There has been much discussion in the literature on which of the two distinct sets of exponents (DS or BN) is the generic result for the collapse transition of polymers. The BN exponents seem to be difficult to find in simulations, since numerical results for the exponents of the BN model seem to be closer to the DS values \cite{b13}. The inclusion of stiffness in the VISAW model, associating a bending energy to elementary bends of the chain, leads to an even richer phase diagram \cite{v15}, with the tricritical points from both integrable models (DS and BN) residing on a multicritical line. The robustness of the DS exponents has also been discussed in a recent paper \cite{n15}, where it has been shown that if crossings of the trails are included in the BN model, so that the lattice is no longer planar, the universality class is changed.
Although of course the analytic results mentioned above are of inestimable value, details of the phase diagrams of the different models are not always easily found with these techniques, and this is also true for simulations, in particular in the grand-canonical formalism, where the nature of the collapse transition was recognized to be tricritical by de Gennes. It is, therefore, interesting to study these phase diagrams with approximate analytic tools, among them solutions on hierarchical lattices such as the Bethe and the Husimi lattice. Indeed, the ISAT model was recently studied on a four-coordinated Husimi lattice built with squares and on a four and six-coordinated Bethe lattice \cite{tj16}. Rich phase diagrams were found in these studies, with the coil-globule transition for four-coordinated cases (which are mean-field approximations for the square lattice) being associated with a bicritical point. Such behavior was confirmed in a recent study by Pretti \cite{p16}, where the ISAT model was generalized by including an attractive interaction between NN monomers on single occupied sites not linked by a polymer bond and solved on Bethe and Husimi lattices with $q=4$. The VISAW model (when the crossings are forbidden and NN interactions vanish), the SASAW (when crossings and collisions are not allowed), the model by Wu and Bradley \cite{wb90} (when collisions and crossings have the same weight) and the simple ISAT model (when the NN interaction vanishes) are recovered as particular cases.
Given the relevance of the semi-flexible extension of the VISAW model \cite{v15} in the discussion about the differences between the DS and the BN critical behaviors for the collapse transition, we investigate another generalization of trail models by including an energy associated to elementary bends. This is done here for Bethe lattice, with different statistical weights associated to crossings and collisions, so that semi-flexible VISAW and semi-flexible ISAT are obtained as particular cases of our model. It is shown that the inclusion of semi-flexibility does not change the nature of the collapse transition when compared with the flexible ISAT model studied before \cite{tj16}, but an additional polymerized phase appears inside the regular polymerized phase, which is both dense and nematic, since all lattice sites are visited and all bonds are in the same direction. For SAW, the semi-flexible and the self-attracting cases were studied on the Bethe lattice \cite{bs93}. When both effects are present, studies on Bethe and Husimi lattices show the appearance of a second polymerized phase, which is dense and anisotropic in the sense that bonds in one particular direction are favored \cite{l98,p02}. The nature of the collapse transition is changed when the stiffness is sufficiently high, so that in this case also the appearance of a second polymerized phase signals the change of the nature of the collapse transition.
The model we study is defined in more detail in Section \ref{defmod} and its solution on a Bethe lattice is presented in Section \ref{sbl}. Final discussions and the conclusion may be found in Section \ref{conc}.
\section{Definition of the model}
\label{defmod}
We consider a semi-flexible generalized self-avoiding trail (sISAT) model. In this model, each lattice edge can be occupied by at most one polymer bond, which has an activity $z=\exp(\beta\mu)$, where $\beta=1/(k_BT)$ and $\mu$ is the chemical potential of a bond. The bonds connect monomers, which are placed on the sites of the lattice. For a lattice with coordination number $q$, each lattice site can be occupied by up to $q/2$ (for even $q$) and $(q-1)/2$ (for odd $q$) monomers. In two dimensions, walks meeting at a lattice site may either cross or collide, as is apparent in the generalized ISAT model on a square lattice depicted in Fig.~\ref{FigModRQ}. In higher dimensions, however, the distinction between collisions and crossings is no longer clear. We will restrict our attention to lattices with $q=4$ here, whose solutions may be compared with results for the square lattice. Statistical weights $\tau_x$ and $\tau_c$ will be associated for each crossing and collision of the chains, respectively. Note that when $\tau_x=\tau_c$ we recover the classical ISAT model, while in the case $\tau_x=0$ (crossings forbidden) the VISAW model is obtained. In order to analyze the effects of the polymer stiffness in such models, an additional weight $\omega=\exp(-\beta\epsilon_b)$ is introduced, associated to one polymer bend. If the energy $\epsilon_b$ associated to an elementary bend of the trails is positive ($\omega<1$), we say that the walks are semi-flexible. This is also illustrated in Fig.~\ref{FigModRQ}. The grand-canonical partition function of the model is given by
\begin{equation}
Y = \sum z^{N} \tau_c^{N_{c}} \tau_x^{N_{x}} \omega^{N_{b}},
\label{eqPartFunc}
\end{equation}
where $N$, $N_c$ and $N_x$ are the numbers of bonds, collisions and crossings in the system, respectively. $N_b$ is the number of bends in sites with a single monomer, since the bends in doubly visited sites are accounted for in the weight $\tau_c$, so that $\tau_c=\omega^2 \tau^*$, where $\tau^*$ is the weight of the collision itself, i.e., of the \textit{on-site} monomer-monomer interaction at colliding sites. We will mostly restrict ourselves in the discussions to $\tau_x=\tau^*=\exp(-\beta\epsilon)$, so that the monomer-monomer interaction energy $\epsilon$ is the same for crossings and collisions, but it is easy to consider $\epsilon_c \neq \epsilon_x$, and this will be done in part of the comments below. The sum in Eq.~(\ref{eqPartFunc}) is over all allowed configurations of the walks on the lattice we are considering, which will be the four-coordinated Bethe lattice here.
\begin{figure}[t]
\centering
\includegraphics[width=7.0cm]{Fig1.eps}
\caption{Illustration of a self-avoiding trail on a square lattice. Collisions and crossings are indicated by $\tau_c$ and $\tau_x$, respectively, while bends at sites with two incoming bonds are indicated by $\omega$.}
\label{FigModRQ}
\end{figure}
\section{Solution on the Bethe lattice}
\label{sbl}
We solve the model on a four-coordinated Bethe lattice, which corresponds to the core of a Cayley tree, as shown in Fig.~\ref{bl}. The extremal monomers of each chain are placed on the surface of the tree. One possible configuration of three chains on a Cayley tree with four generations of sites is shown in Fig.~\ref{bl}.
\begin{figure}[t]
\centering
\includegraphics[width=7.0cm]{Fig2.eps}
\caption{A configuration of three chains placed on a Cayley tree with four generations of sites. The statistical weight of this configuration is $z^{16}\omega^3\tau_c\tau_x$.}
\label{bl}
\end{figure}
To solve the model on the Bethe lattice we consider sub-trees, defining partial partition functions (ppf's) for them for a fixed configuration of the root edge \cite{b82}. For the Bethe lattice, usually only two root configurations are needed, corresponding to the possibilities of empty or occupied (by chain bonds) root edges. However, as discussed above, in the semi-flexible case one may expect the appearance of anisotropic phases, which display orientational (nematic) ordering, so that bonds in one or more directions are favored. The study of models which present nematic ordering on such hierarchical lattices presents some difficulties, particularly for $q>4$. One way to avoid them is to solve these models on other treelike lattices for which the exact solution is the Bethe approximation on regular lattices with the same coordination number. One such lattice is the random locally treelike layered (RLTL) lattice introduced by Dhar, Rajesh and Stilck \cite{drs11} to study nematic ordering of monodispersed rigid rods. Here we will follow a simpler option, assuming that at each site of the $q=4$ Bethe lattice two incoming bonds are in one direction and the two remaining ones are in a perpendicular direction. Actually, one should keep in mind that in the thermodynamic limit this lattice is effectively infinite dimensional, as was shown by Baxter \cite{b82}. Thus, for example, to correctly measure the Euclidean distance between two sites on an even-coordinated Bethe lattice, one may embed it in a hypercubic lattice whose dimension increases with the number of generations \cite{sca00}. Therefore, we will define partial partition functions for four root configurations: $g_{0,1}$ for a root edge in direction $1$ not occupied by a bond, $g_{0,2}$ for an empty root edge in direction $2$, and $g_{1,1}$ and $g_{1,2}$ for subtrees with occupied root edges in directions $1$ and $2$, respectively.
We may now obtain recursion relations for the ppf's of a sub-tree with an additional generation by considering the operation of attaching a new root site and edge to three sub-trees of the preceding generation. The results are:
\begin{subequations}
\begin{eqnarray}
g'_{0,1} &=& g_{0,1}g_{0,2}^2+g_{0,1}g_{1,2}^2+2\omega g_{1,1}g_{1,2}g_{0,2}, \\
g'_{0,2} &=& g_{0,1}^2g_{0,2}+g_{1,1}^2g_{0,2}+2\omega g_{0,1}g_{1,1}g_{1,2}, \\
g'_{1,1} &=& z_1[g_{1,1}g_{0,2}^2+2\omega g_{0,1}g_{0,2}g_{1,2}+\tau g_{1,1}g_{1,2}^2], \\
g'_{1,2} &=& z_2[g_{0,1}^2g_{1,2}+2\omega g_{0,1}g_{1,1}g_{0,2}+\tau g_{1,1}^2g_{1,2}],
\end{eqnarray}
\label{rrgb}
\end{subequations}
where $\tau \equiv \tau_x+2\tau_c$ is the only combination of the weights of doubly occupied sites that appears in the Bethe lattice solution; this will change if longer-range correlations are taken into account, such as on the Husimi lattice. We note that we include the possibility of bonds in the two directions having different activities, although we will discuss the thermodynamic behavior of the model only for $z_1=z_2=z$.
Here, $g_{i,j}$ and $g'_{i,j}$ are ppf's of sub-trees with $M$ and $M+1$ generations, respectively. Usually, the ppf's diverge in the thermodynamic limit (when $M \rightarrow \infty$). Thus, it is suitable to define the ratios:
\begin{subequations}
\begin{eqnarray}
R_1 &=& \frac{g_{1,1}}{g_{0,1}}, \\
R_2 &=& \frac{g_{1,2}}{g_{0,2}},
\end{eqnarray}
\end{subequations}
which should remain finite for non-dense phases, where a finite fraction of the lattice sites is empty. The recursion relations for these ratios are:
\begin{subequations}
\begin{eqnarray}
R'_1 &=& z_1\frac{R_1+2\omega R_2+\tau R_1R_2^2}{1+R_2^2+2\omega R_1R_2},\\
R'_2 &=& z_2\frac{R_2+2\omega R_1+\tau R_1^2R_2}{1+R_1^2+2\omega R_1R_2}.
\end{eqnarray}
\label{rrrb}
\end{subequations}
We find four distinct fixed points for these recursion relations when $z_1=z_2=z$, which are stable in distinct regions of the parameter space $(z,\omega,\tau)$. They are:
\begin{itemize}
\item
A non-polymerized ({\bf NP}) phase: $R_1=0$, $R_2=0$.
\item
A regular polymerized ({\bf P}) phase: $R_1=R_2 \ne 0$ and finite.
\item
A dense polymerized ({\bf DP}) phase: $R_1=R_2 \to \infty$.
\item
A dense anisotropic and nematic ({\bf AN}) phase: $R_1 \to \infty$ and $R_2 \to 0$ or $R_1 \to 0$ and $R_2 \to \infty$.
\end{itemize}
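These fixed points are easily located numerically by iterating Eqs.~(\ref{rrrb}) from a generic initial condition (a minimal sketch; in the dense phases, where the ratios diverge, one iterates the reciprocal ratios $S_i=1/R_i$ instead, as done below):

\begin{verbatim}
# Sketch: iterate the recursion relations for R1, R2 to the stable fixed point.
def iterate(z, omega, tau, R1=0.1, R2=0.2, n=2000):
    for _ in range(n):
        d1 = 1 + R2**2 + 2*omega*R1*R2
        d2 = 1 + R1**2 + 2*omega*R1*R2
        R1, R2 = (z*(R1 + 2*omega*R2 + tau*R1*R2**2) / d1,
                  z*(R2 + 2*omega*R1 + tau*R1**2*R2) / d2)
    return R1, R2

# For omega = 0.3, the NP stability limit is z = 1/(1 + 2*omega) = 0.625.
print(iterate(z=0.5, omega=0.3, tau=1.0))  # -> (0, 0): NP phase
print(iterate(z=0.8, omega=0.3, tau=1.0))  # -> R1 = R2 > 0: P phase
\end{verbatim}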
In the dense phases, the edges corresponding to the direction of the ratio which diverges are all occupied by bonds. It is useful to define the reciprocal ratios $S_i=1/R_i$ to study the fixed points associated with these phases. For the {\bf DP} phase we may rewrite the recursion relations, Eqs.~(\ref{rrrb}), as:
\begin{subequations}
\begin{eqnarray}
S'_1 &=& \frac{1}{z}\frac{S_1S_2^2+S_1+2\omega S_2}{S_2^2+2\omega S_1S_2+\tau},\\
S'_2 &=& \frac{1}{z}\frac{S_1^2S_2+S_2+2\omega S_1}{S_1^2+2\omega S_1S_2+\tau}.
\label{rrdp}
\end{eqnarray}
\end{subequations}
The {\bf DP} fixed point is thus located at the origin, $S_1=0$, $S_2=0$. For the anisotropic {\bf AN} phase, there are two equivalent fixed points, in which the chains occupy bonds in one of the two directions. If we consider the fixed point with bonds in direction 1, the recursion relations may be written in terms of the variables $S_1$ and $R_2$, so that the fixed point is again located at the origin. The result is:
\begin{subequations}
\begin{eqnarray}
S'_1 &=& \frac{1}{z}\frac{S_1+S_1R_2^2+2\omega R_2}{1+2\omega S_1R_2+\tau R_2^2},\\
R'_2 &=& z\frac{S_1^2R_2+2\omega S_1+\tau R_2}{1+S_1^2+2\omega S_1R_2}.
\label{rrdn}
\end{eqnarray}
\end{subequations}
We notice that the product $P=R_1R_2$ attains a finite value at the {\bf AN} fixed point, given by:
\begin{equation}
P=\frac{z^2\tau-1+\sqrt{(z^2\tau-1)^2+16 z^2\omega^2}}{4\omega}
\label{P}
\end{equation}
The region of the parameter space where each fixed point is stable may be found by studying the eigenvalues of the Jacobian of the recursion relations:
\begin{equation}
J_{i,j}=\frac{\partial Q'_i}{\partial Q_j},
\end{equation}
where the $Q$'s are the appropriate ratios in each case. At the {\bf NP} fixed point $R_1=R_2=0$, the secular equation of the recursion relations (\ref{rrrb}) is
\begin{equation}
(z-\lambda)^2-4z^2\omega^2=0,
\end{equation}
so that this fixed point is stable for:
\begin{equation}
z \le 1/(1+2\omega).
\label{slnp}
\end{equation}
The secular equation associated to the {\bf DP} fixed point $S_1=S_2=0$ is:
\begin{equation}
\left(\frac{1}{z\tau}-\lambda\right)^2-\left(\frac{2\omega}{z\tau}\right)^2=0,
\end{equation}
and thus the region where this phase is stable will be:
\begin{equation}
\tau \ge \frac{1+2\omega}{z}
\label{sldp}
\end{equation}
The secular equation for the {\bf AN} fixed point $S_1=0$, $R_1=0$ is:
\begin{equation}
\lambda^2-\left(\frac{1}{z}+z\tau\right)\lambda+\tau-4\omega^2=0.
\label{sedn}
\end{equation}
From which it follows that this fixed point is stable if
\begin{equation}
\tau \le \frac{1}{z}-\frac{4\omega^2}{z-1}.
\label{sldn}
\end{equation}
Since $\tau$ (as well as $z$ and $\omega$) is non-negative, the {\bf AN} phase can be stable for allowed values of $\tau$ only if $z \geq 1/(1-4 \omega^2)$, so that it exists only when $\omega < 1/2$. This is expected, since the chains should be sufficiently stiff to generate nematic order.
For the isotropic polymerized fixed point {\bf P}, where $R_1=R_2=R$, the elements of the Jacobian are $J_{1,1}=J_{2,2}=A$ and $J_{1,2}=J_{2,1}=B$, with:
\begin{subequations}
\begin{eqnarray}
A&=&\frac{z+(\tau z-2\omega)R^2}{1+(1+2\omega)R^2}, \\
B&=&2\frac{\omega z+(\tau z-1-\omega)R^2}{1+(1+2\omega)R^2},
\end{eqnarray}
\end{subequations}
where the squared ratio of ppf's is given by:
\begin{equation}
R^2=\frac{(1+2\omega)z-1}{1+2\omega-z\tau},
\label{rfp}
\end{equation}
and thus the stability condition for this fixed point will be
\begin{subequations}
\begin{eqnarray}
A+B &\le& 1 \\
A-B &\le& 1.
\end{eqnarray}
\label{slp}
\end{subequations}
We find that this condition is obeyed in the region of the parameter space between the surfaces $z=1/(1+2\omega)$, $z\tau=1+2\omega$, and $\tau=1/z-4\omega^2/(z-1)$, which correspond to the stability limits of the {\bf NP}, {\bf DP}, and {\bf AN} fixed points, respectively. Note that $\tau^{AN} \rightarrow \tau^{DP}$ when $\omega \rightarrow 0$, so that the {\bf P} phase disappears in this limit. As will be shown below, although the order parameter is discontinuous at the {\bf P-AN} transition, the two stability limits meet at the transition surface. For $\omega > 0$, the two critical surfaces {\bf NP-P} and {\bf DP-P} meet at the bicritical line, located at:
\begin{subequations}
\begin{eqnarray}
z&=&\frac{1}{1+2\omega} \\
\tau&=&(1+2\omega)^2.
\end{eqnarray}
\label{bcl}
\end{subequations}
Before proceeding, we will discuss the possibility of regular polymerized phases with nematic order, with distinct and finite ratios in both directions. Summing and subtracting the fixed point equations ($R'_i=R_i$ in Eqs. (\ref{rrrb})) we obtain:
\begin{subequations}
\begin{eqnarray}
(R_1+R_2)[(2\omega-z\tau+1)P-z(1+2\omega)+1]=0, \\
(R_1-R_2)[(2\omega+z\tau-1)P-z(1-2\omega)+1]=0,
\end{eqnarray}
\end{subequations}
where we recall that $P=R_1R_2$. If the phase is polymerized and nematic, the second factors of both equations have to vanish. This leads to:
\begin{subequations}
\begin{eqnarray}
P&=&\frac{z-1}{2\omega},\\
P&=&\frac{2\omega z}{1-z\tau}.
\label{prod}
\end{eqnarray}
\end{subequations}
We notice that the conditions $z>1$ and $\tau z <1$ must be satisfied, and also the three parameters of the model are not independent in the nematic phase with finite ratios, since they are related by the equation:
\begin{equation}
\frac{z-1}{2\omega}=\frac{2\omega z}{1-z\tau},
\label{nc}
\end{equation}
which happens to be equivalent to the stability limit of the {\bf AN} phase above, Eq.~(\ref{sldn}). Thus, on this surface of the parameter space, we have a continuous set of marginally stable (with $\lambda=1$) fixed points $P=const$, which includes the regular polymerized fixed point $R_1=R_2$ and the {\bf AN} fixed point $R_1 \to \infty$, $R_2 \to 0$. Therefore, we have a discontinuous transition between these two phases, but the transition surface is not between two spinodal surfaces of the two phases which coexist. Incidentally, we notice that the value of the product of ratios in the {\bf AN} phase [Eq.~(\ref{P})] reduces to Eq.~(\ref{prod}) on the stability limit of this phase. This rather unusual feature of the {\bf AN}-{\bf P} transition, which is critical but has a discontinuous order parameter, has been discussed in the literature before. One simple situation of this kind is the one-dimensional Ising model with interactions decaying with the distance between spins as $1/r^2$ \cite{t69}. At zero field, in this model a discontinuous magnetization is found at the critical point. The one-dimensional Ising model with nearest-neighbor interactions has also been studied via exact renormalization-group transformations by Nelson and Fisher \cite{nf75}, and, although the phase transition there is degenerate, since it happens at zero temperature, it may be interpreted also as a critical discontinuous transition. The possibility of such transitions in the framework of the renormalization group was discussed in general by Fisher and Berker \cite{fb82}. This unusual critical behavior was also found in the stationary behavior of non-equilibrium models associated to the Ising model and in the threshold contact process \cite{mjo93}.
The partition function of the model on the whole Cayley tree is obtained by considering the operation of connecting four subtrees to the central site. The result is:
\begin{eqnarray}
Y&=&(g_{0,1}g_{0,2})^2+(g_{1,1}g_{0,2})^2+(g_{0,1}g_{1,2})^2+ \nonumber \\
&&4\omega g_{1,1} g_{0,1} g_{1,2} g_{0,2}+\tau (g_{1,1}g_{1,2})^2,
\label{pfct}
\end{eqnarray}
where the first term corresponds to the configuration with no bond incident on the central site, the next three have two incident bonds, and in the last one all four edges are occupied. This expression may also be written as $Y = (g_{0,1}g_{0,2})^2 y$, with
\begin{equation}
y=1+R_1^2+R_2^2+4\omega R_1R_2+\tau (R_1R_2)^2.
\label{pfct1}
\end{equation}
The densities of bonds in the two directions, as well as of bends (at non-colliding sites), collisions, and crossings at the central site, are given respectively by:
\begin{eqnarray}
\rho_{z,1} &=& \frac{R_1^2+2\omega R_1R_2+\tau R_1^2R_2^2}{y}\\
\rho_{z,2} &=& \frac{R_2^2+2\omega R_1R_2+\tau R_1^2R_2^2}{y}\\
\rho_\omega&=&\frac{4\omega R_1R_2}{y} \\
\rho_c&=&\frac{2\tau_c R_1^2R_2^2}{y} \\
\rho_x&=&\frac{\tau_x R_1^2R_2^2}{y}
\end{eqnarray}
As discussed before, the total density of bends is $\rho_b=\rho_\omega+2\rho_c$. In the {\bf NP} phase all densities vanish. In the {\bf AN} phase with $R_1 \to \infty$ and $R_2 \to 0$, $\rho_{z,1}=1$ and all other densities vanish. In the {\bf DP} phase, $\rho_{z,1}=\rho_{z,2}=1$, $\rho_\omega=0$, $\rho_c=2\tau_c/(2\tau_c+\tau_x)$, and $\rho_x=\tau_x/(2\tau_c+\tau_x)$. Finally, in the {\bf P} phase, $\rho_{z,1}=\rho_{z,2}= [z(1+2\omega)-1]/[2z(1+2\omega)-1-\tau z^2]$, $\rho_\omega=4\omega R^2/y$, $\rho_c=2\tau_cR^4/y$, and $\rho_x=\tau_xR^4/y$, where the ratio $R$ is given by Eq.~(\ref{rfp}).
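For given ratios $R_1$ and $R_2$, the densities above reduce to a few lines of code. The Python sketch below is illustrative and assumes, as the expressions above imply, the combination $\tau=2\tau_c+\tau_x$:
\begin{verbatim}
def densities(R1, R2, omega, tau_c, tau_x):
    """Central-site densities for given ratios R1, R2."""
    tau = 2.0 * tau_c + tau_x
    p = (R1 * R2)**2
    y = 1.0 + R1**2 + R2**2 + 4.0 * omega * R1 * R2 + tau * p
    rho_z1 = (R1**2 + 2.0 * omega * R1 * R2 + tau * p) / y
    rho_z2 = (R2**2 + 2.0 * omega * R1 * R2 + tau * p) / y
    rho_w = 4.0 * omega * R1 * R2 / y     # bends at non-colliding sites
    rho_c = 2.0 * tau_c * p / y           # collisions
    rho_x = tau_x * p / y                 # crossings
    return rho_z1, rho_z2, rho_w, rho_c, rho_x
\end{verbatim}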
The bulk free energy per site on the BL, which differs from the free energy of the whole Cayley tree (whose partition function is given in Eq.~(\ref{pfct})) by the contribution of the surface of the tree, may be found following an Ansatz due to Gujrati \cite{g95}. The result is:
\begin{equation}
\frac{\phi_b}{k_BT}=-\lim_{M \to \infty}\frac{1}{2}\ln \frac{Y_{M+1}}{Y_M^3}.
\end{equation}
From the recursion relations (Eqs.~(\ref{rrgb})) we have that:
\begin{equation}
\frac{\phi_b}{k_BT}=-\frac{1}{2} \ln \frac{Y^\prime}{Y^3},
\label{phibi}
\end{equation}
where $Y^\prime$ is the partition function calculated with ppf's of subtrees with an additional generation with respect to the unprimed ppf's and the thermodynamic limit is implicit. In the {\bf NP} fixed point, $g_{0,1}$ and $g_{0,2}$ are dominant over the other terms in the partition function Eq.~(\ref{pfct}), so that we may rewrite Eq.~(\ref{phibi}) as:
\begin{equation}
\phi_b^{(NP)}=-\frac{k_BT}{2} \ln \frac{(g_{0,1}^\prime)^2(g_{0,2}^\prime)^2}{[g_{0,1}^2 g_{0,2}^2]^3}=0,
\label{phinp}
\end{equation}
where we have used the recursion relations in Eqs. (\ref{rrgb}). This result is consistent with the fact that this phase corresponds to an empty lattice. In the {\bf DP} phase, the last term of the partition function dominates over the others, so that:
\begin{eqnarray}
\phi_b^{(DP)}&=&-\frac{k_BT}{2}\ln \frac{\tau (g_{1,1}^\prime)^2 (g_{1,2}^\prime)^2}{(\tau g_{1,1}^2 g_{1,2}^2)^3}=-k_BT \ln(z^2\tau)= \nonumber \\
&&\epsilon-2\mu-k_BT\ln[1+\exp(-2\beta \epsilon_B)],
\label{phidp}
\end{eqnarray}
where we recall that in this phase four bonds are incident on each site. In the {\bf AN} phase, supposing that the bonds are in the 1 direction, the second term of the partition function dominates, so that:
\begin{equation}
\phi_b^{(AN)}=-\frac{k_BT}{2}\ln \frac{(g_{1,1}^\prime)^2(g_{0,2}^\prime)^2}{(g_{1,1}^2g_{0,2}^2)^3}=-k_BT \ln z=-\mu,
\label{phian}
\end{equation}
and again this confirms that in this phase each site has two bonds in direction 1 incident on it. Finally, in the regular polymerized phase {\bf P}, where $R_1=R_2=R$, we may rewrite Eq.~(\ref{phibi}) as:
\begin{eqnarray}
\phi_b^{(P)}&=&-\frac{k_BT}{2} \ln \frac{(g_{0,1}^\prime)^2(g_{0,2}^\prime)^2}{(g_{0,1} g_{0,2})^6 y^2}= \nonumber \\
&&-k_BT \ln \frac{[\tau-(1+2\omega)^2]z^2}{1-2(1+2\omega)z+\tau z^2}.
\end{eqnarray}
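For convenience, the four bulk free energies may be collected in a short numerical sketch (illustrative; energies in units of $k_BT$), which also makes the location of the coexistence surfaces used in the next subsection immediate:
\begin{verbatim}
import numpy as np

def phi_NP(z, tau, omega):
    return 0.0                           # empty lattice, Eq. (phinp)

def phi_DP(z, tau, omega):
    return -np.log(z**2 * tau)           # Eq. (phidp)

def phi_AN(z, tau, omega):
    return -np.log(z)                    # Eq. (phian)

def phi_P(z, tau, omega):
    w = 1.0 + 2.0 * omega
    return -np.log((tau - w**2) * z**2
                   / (1.0 - 2.0 * w * z + tau * z**2))

# NP-DP coexistence: phi_DP = phi_NP = 0 on tau = 1/z**2
print(phi_DP(0.5, 4.0, 0.25))            # -> 0.0
\end{verbatim}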
\subsection{Phase diagrams}
Besides the critical surfaces (continuous for the {\bf NP-P} and {\bf P-DP} transitions and discontinuous for the {\bf P-AN} transition), we note that the {\bf NP} and {\bf DP} phases coexist in the region $z<1/(1+2\omega)$ and $\tau>(1+2\omega)/z$. The discontinuous {\bf NP-DP} transition is located at the surface where the bulk free energies per site of the two phases are equal and, from Eqs.~(\ref{phinp}) and (\ref{phidp}), we find it at $\tau=1/z^2$. This surface ends at the bicritical line (Eqs.~(\ref{bcl})). As expected, at all other transition surfaces the respective bulk free energies of the phases involved are also equal.
As discussed above, for $\omega \geq 1/2$ the {\bf AN} phase is not present in the phase diagrams; see an example in Fig.~\ref{pd}(a) for $\omega=0.75$, which is qualitatively similar to the one obtained in the flexible case ($\omega=1$) \cite{tj16}. For $\omega<1/2$, the thermodynamic behavior is still the same, except for the presence of the {\bf AN} phase, as well as of the critical discontinuous {\bf P-AN} surface. Diagrams for $\omega=0.25$ and $\omega = 0.1$ are shown in Figs.~\ref{pd}(b) and (c), respectively, where one sees that, as $\omega$ decreases, the region occupied by the {\bf P} phase shrinks, while that occupied by the {\bf AN} phase grows.
\begin{figure}[!t]
\centering
\includegraphics[width=7.cm]{Fig3a.eps}
\includegraphics[width=7.cm]{Fig3b.eps}
\includegraphics[width=7.cm]{Fig3c.eps}
\includegraphics[width=7.cm]{Fig3d.eps}
\caption{Phase diagrams for (a) $\omega=0.75$, (b) $\omega=0.25$, (c) $\omega=0.1$ and (d) $\omega=0.0$. Regular continuous transitions are shown as solid lines, discontinuous transitions are represented as dashed lines, and the dash-dotted line corresponds to the critical discontinuous transition. The dot in (d) marks the endpoint of the bicritical line, where the three discontinuous transition lines meet.}
\label{pd}
\end{figure}
Indeed, in the limit of rigid trails $\omega \to 0$ ($\epsilon_b \to \infty$), only the two dense phases appear, besides the {\bf NP} phase, since the {\bf P} phase between them is absent. The roots of the secular equation related to the {\bf AN} phase [Eq.~(\ref{sedn})] are $1/z$ and $\tau_x z$ in this case, so that the stability limit of this phase for $z>1$ is $\tau_x=1/z$, which coincides with the stability limit of the {\bf DP} phase [Eq.~(\ref{sldp})] (since there are no bends, $\tau=\tau_x$). Also, the stability limits of the {\bf AN} and {\bf NP} phases meet at $z=1$. As always, the {\bf NP} and the {\bf DP} phases coexist on the line $\tau_x=1/z^2$. The three transition lines meet at $z=\tau_x=1$, as may be seen in Eq.~(\ref{bcl}). As $\omega \to 0$, at the discontinuous {\bf AN}-{\bf NP} transition line, located at $z=1$, the {\bf NP}-{\bf P} critical surface and the {\bf P}-{\bf AN} discontinuous critical surface meet, while the {\bf DP}-{\bf P} critical surface and the {\bf P}-{\bf AN} discontinuous critical surface meet at the {\bf DP}-{\bf AN} discontinuous transition line, located at $\tau_x=1/z$. The phase diagram in this limit is shown in Fig.~\ref{pd}(d). Actually, it is quite simple to obtain the free energy of the model on the square lattice in this limit of rods, and the same phase diagram is obtained. This calculation is presented in the appendix.
\subsection{Densities}
\begin{figure}[b]
\centering
\includegraphics[width=7.20cm]{Fig4a.eps}
\includegraphics[width=7.20cm]{Fig4b.eps}
\includegraphics[width=7.20cm]{Fig4c.eps}
\caption{Total densities of bonds as functions of $z$ for $\tau=0.5$ and (a) $\omega=0.25$, (b) $\omega=0.1$ and (c) $\omega=0.0$.}
\label{rhoz}
\end{figure}
In this subsection, we investigate the behavior of the densities at the different transitions. We start by noting that the total density of bonds $\rho=(\rho_{z,1}+\rho_{z,2})/2$ assumes the values $\rho=0$ in the {\bf NP} phase, $\rho=1$ in the {\bf DP} phase, $\rho=1/2$ in the {\bf AN} phase, and $0 < \rho < 1$ in the {\bf P} phase. It may be useful to recall that the order parameter of the polymerization transition is actually $m=\rho^{1/2}$, as is shown by the mapping of the problem onto the magnetic $n$-vector model in the limit $n \to 0$ \cite{wkp80}. Moreover, we may define a nematic order parameter as:
\begin{equation}
Q=\rho_{z,1}-\rho_{z,2}=\frac{R_1^2-R_2^2}{1+R_1^2+R_2^2+4\omega R_1R_2+\tau R_1^2R_2^2},
\label{eqnop}
\end{equation}
which equals $|Q|=1$ in the {\bf AN} phase and $Q=0$ otherwise, indicating that any transition to the {\bf AN} phase is discontinuous. Indeed, this is confirmed in Fig.~\ref{rhoz}, which shows the variation of $\rho$ with $z$, for $\tau=0.5$ and several values of $\omega$. Close to the {\bf NP}-{\bf P} transition we have $\rho \approx (z-z_c)$, consistent with a mean-field exponent $\beta=1/2$ for the order parameter $m=\rho^{1/2}$.
Along the {\bf P-AN} transition surface, we have $\rho^{(P)}=(z-1)/(z-1+2\omega z)$ for the {\bf P} phase, which increases with $z$, from $\rho=2\omega/(1+2\omega)$ (at $z=1/(1-4\omega^2)$) to $\rho=1/(1+2\omega)$ (for $z \rightarrow \infty$). Between these limits, there exists a line, given by $z=1/(1-2\omega)$, where $\rho$ is continuous (i.e., $\rho^{(P)}=\rho^{(AN)}=1/2$), but $Q$ (and so the transition) is still discontinuous. Still on the {\bf P-AN} critical surface, the infinite set of marginally stable solutions $R_1R_2=const.$ has densities $\rho_1=R_1^2/(z+R_1^2)$ and $\rho_2=(z-1)^2/((z-1)^2+4\omega^2 z R_1)$. Thus, for fixed $\omega$ and $z$, $\rho_1$ increases ($\rho_2$ decreases) from 0 to 1 (from 1 to 0) as $R_1$ changes from $0$ to $\infty$. In both limits, we recover the {\bf AN} result and, when $\rho_1=\rho_2$, the {\bf P} phase is obtained. We note that in this phase $\rho \neq 1/2$ for all values of $\omega$, $z$, and $R_1$, except on the line $z=1/(1-2\omega)$, where $\rho=1/2$ regardless of the value of $R_1$.
\subsection{Nematic susceptibility}
Let us discuss in some more detail the {\bf AN}-{\bf P} transition by studying the behavior of the appropriate susceptibility close to it. Given the activities $z_1$ and $z_2$ of bonds in each direction, we may define
\begin{equation}
z=\frac{z_1+z_2}{2}
\end{equation}
and
\begin{equation}
{\bar z}=\frac{z_1-z_2}{2}.
\end{equation}
The activity ${\bar z}$ is the appropriate field-like variable conjugate to the nematic order parameter, so that we define the nematic susceptibility as:
\begin{equation}
\chi_N=\left(\frac{\partial Q}{\partial {\bar z}}\right)_{z,\omega,\tau},
\end{equation}
where the nematic order parameter $Q$ is defined in Eq.~(\ref{eqnop}). To obtain an expression for $\chi_N$ in the {\bf P} phase, we start with the fixed point equations which follow from the recursion relations Eqs.~(\ref{rrrb}), remembering that $z_1=z+{\bar z}$ and $z_2=z-{\bar z}$. Differentiating these equations with respect to ${\bar z}$, we obtain a system of linear equations for the derivatives of the ratios, whose solution is (for ${\bar z}=0$):
\begin{subequations}
\begin{eqnarray}
\left(\frac{\partial R_1}{\partial {\bar z}}\right)_{z,\omega,\tau}&=&\frac{F(R_1,R_2;z,\omega,\tau)}{D(R_1,R_2;z,\omega,\tau)}, \\
\left(\frac{\partial R_2}{\partial {\bar z}}\right)_{z,\omega,\tau}&=&-\frac{F(R_2,R_1;z,\omega,\tau)}{D(R_2,R_1;z,\omega,\tau)},
\end{eqnarray}
\end{subequations}
where we have
\begin{widetext}
\begin{eqnarray}
F(R_1,R_2;z,\omega,\tau)&=&(1-z-4\omega^2z)R_1+2\omega(1-2z)R_2+
(1+4\omega^2-\tau z)R_1^3+4\omega(3-2\tau z)R_1^2R_2+ \nonumber \\
&&(2+\tau+8\omega^2-3\tau z)R_1R_2^2+3\tau(1-\tau z)R_1^3R_2^2+
4\tau\omega R_1^2R_2^3+2\tau\omega R_1^4R_2,\\
D(R_1,R_2;z,\omega,\tau)&=&1+z^2-2z(1+2\omega^2z)+
[4\omega^2z+(1-\tau z)(1-z)](R_1^2+R_2^2)+\nonumber \\
&&8\omega(1-\tau z^2)R_1R_2+
3[4\omega^2-(1-\tau z)^2]R_1^2R_2^2.
\label{eq_diffRzbar}
\end{eqnarray}
\end{widetext}
Now we may obtain an expression for the susceptibility as a function of the parameters $z$, $\tau$, and $\omega$, as well as of the ratios $R_1$ and $R_2$ and their derivatives with respect to ${\bar z}$. The expression is too long to be given here, but if we particularize it to the {\bf P} phase, where $R_1=R_2=R$, with $R$ given by Eq.~(\ref{rfp}), it simplifies to:
\begin{equation}
\chi_N=\frac{2(1+2\omega-\tau z)(1-z-2\omega z)}
{[1-2(1+2\omega)z+\tau z^2][1-(1+\tau-4\omega^2)z+\tau z^2]}.
\label{chi}
\end{equation}
At the {\bf P}-{\bf AN} transition, $\tau=\tau_{critical}=1/z-4\omega^2/(z-1)$, the denominator in Eq.~(\ref{chi}) vanishes, as expected, and thus we may rewrite this equation as:
\begin{equation}
\chi_N=\frac{2(1+2\omega-\tau z)(1-z-2\omega z)}
{z(z-1)[1-2(1+2\omega)z+\tau z^2](\tau-\tau_{critical})}.
\label{chi1}
\end{equation}
We thus conclude that, in agreement with the findings of Fisher and Berker \cite{fb82} for discontinuous critical transitions, the transition from the regular polymerized to the nematic phase is characterized by the critical exponents $\beta=0$, since the nematic order parameter is discontinuous, and $\gamma=1$, as is clear from Eq.~(\ref{chi1}). Of course, the divergence may also be seen if we cross the critical line in another direction, such as parallel to the axis of the bond activity $z$, as long as we actually cross the line and do not touch it tangentially. In Fig.~\ref{chiws} some of these curves are shown close to the transition activity.
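The curves in Fig.~\ref{chiws} are straightforward to reproduce from Eq.~(\ref{chi}); the following Python sketch (illustrative) exhibits the $\gamma=1$ divergence on approaching the {\bf P}-{\bf AN} surface from within the {\bf P} phase:
\begin{verbatim}
def chi_N(z, tau, omega):
    """Nematic susceptibility in the P phase, Eq. (chi)."""
    w = 1.0 + 2.0 * omega
    num = 2.0 * (w - tau * z) * (1.0 - z - 2.0 * omega * z)
    den = ((1.0 - 2.0 * w * z + tau * z**2)
           * (1.0 - (1.0 + tau - 4.0 * omega**2) * z + tau * z**2))
    return num / den

def tau_critical(z, omega):
    return 1.0 / z - 4.0 * omega**2 / (z - 1.0)

z, omega = 3.0, 0.15
for eps in (1e-2, 1e-3, 1e-4):
    # |chi_N| grows like 1/eps, i.e. gamma = 1
    print(eps, chi_N(z, tau_critical(z, omega) - eps, omega))
\end{verbatim}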
\begin{figure}[t]
\centering
\includegraphics[width=7.0cm]{Fig5.eps}
\caption{Susceptibility $\chi_N$ in the {\bf P} phase as a function of the bond activity $z$ for $\tau=0.1$, close to the {\bf P}-{\bf AN} transition. The curves, from left to right, correspond to $\omega=0.15$ (blue), $\omega=0.20$ (red) and $\omega=0.25$ (black). Dashed lines indicate the corresponding critical activities.}
\label{chiws}
\end{figure}
\section{Final discussion and conclusions}
\label{conc}
Semi-flexible trails on a Bethe lattice with coordination number equal to 4 show a very rich phase diagram in the parameter space defined by the activity of a bond ($z$), the statistical weights of a collision and a crossing ($\tau_c$ and $\tau_x$), and the statistical weight of an elementary bend in the trail ($\omega$). For sufficiently flexible chains ($\omega>1/2$) the phase diagrams are qualitatively similar to the one found in the flexible case ($\omega=1$), studied in \cite{tj16}, with non-polymerized ({\bf NP}), regular polymerized ({\bf P}) and dense polymerized ({\bf DP}) phases meeting at a bicritical point. When the Boltzmann factor of bends is smaller than 1/2, an additional polymerized phase appears inside the {\bf P} phase. In this phase all lattice sites are visited by the trails and all bonds are in one of the two possible directions, thus characterizing it as anisotropic and nematic ({\bf AN}). The nature of the {\bf P}-{\bf AN} transition is quite unusual: while the nematic order parameter is discontinuous, the transition also has a critical nature, characterized by the fact that the stability limits of both phases coincide with the transition surface. This type of criticality was studied in the framework of the renormalization group by Fisher and Berker \cite{fb82}, and for the present case we have verified their result that the susceptibility critical exponent $\gamma$ should be equal to 1. In the limit of rigid chains ($\omega=0$) the {\bf P} phase is no longer stable, and three coexistence lines ({\bf AN}-{\bf NP}, {\bf AN}-{\bf DP}, and {\bf NP}-{\bf DP}) meet at a triple point which is the endpoint of the bicritical line.
Some features of the results presented here may be due to the particular lattice on which the model is solved. On the Bethe lattice, since there are no closed paths, any collision may be replaced by a crossing and vice-versa, so that the two statistical weights associated with these configurations only appear in the combination $\tau_x+2\tau_c$; this should no longer be true on a lattice with closed paths. Also, the phases {\bf DP} and {\bf AN} are totally frozen in the Bethe lattice solution: all lattice edges are occupied by bonds in the former, and the same happens for all edges in one of the two directions in the latter. This should also change on lattices which are closer to regular ones. It is thus of interest to study this problem on the Husimi lattice, where small loops are present, and we are presently working on this.
\begin{figure}[t]
\centering
\includegraphics[width=7.0cm]{Fig6.eps}
\caption{Bicritical lines in terms of the (on-site) monomer-monomer interaction $\tau^*_{BC}$ against $\omega$ for the ISAT and VISAW models.}
\label{ISATxVISAW}
\end{figure}
It is interesting to compare the behavior of the ISAT model, where we choose $\tau_x=\tau^*$ and $\tau_c = \omega^2 \tau^*$, and the VISAW model, for which $\tau_x=0$ and $\tau_c = \omega^2 \tau^*$. In the former, the bicritical line is located at $\tau^*_{BC}=(1+2\omega)^2/(1+2\omega^2)$, while for the VISAW model it is at $\tau^*_{BC}=(1+2\omega)^2/(2\omega^2)$. These behaviors are compared in Fig.~\ref{ISATxVISAW}. In the ISAT model, as the stiffness of the chains is increased, the collapse transition becomes easier, since $\tau^*_{BC}$ decreases as $\omega$ decreases. For the VISAW model, we notice the opposite behavior, with $\tau^*_{BC} \rightarrow \infty$ when $\omega \rightarrow 0$, so that an infinite (on-site) monomer-monomer interaction is needed to collapse the chains. This is expected, since crossings are forbidden in the VISAW model and, thus, the stiffness makes the collapse more difficult. Moreover, this suggests that the globule phase for the VISAW model is similar to the one in the SASAW model. On the other hand, the results for ISATs show that its collapsed phase is quite different from the one found in previous models. Again, we could expect such behavior to change on lattices with closed paths, as suggested by the results for the semi-flexible VISAW on the square lattice \cite{v15}.
\acknowledgments
We acknowledge support of the Brazilian agencies CNPq and FAPEMIG, particularly from the former through project PVE 401228/2014-2, which made TP's stay in Brazil possible. We thank Profs.~M\'{a}rio J.~de Oliveira and Ron Dickman for helpful discussions regarding discontinuous critical transitions. TP acknowledges support by EPSRC grant EP/L026708/1. JFS thanks the Queen Mary University of London for its hospitality.
\section{Introduction}
The suggestion that the barrier distributions in subbarrier fusion
reactions could be determined directly from the cross section data
\cite{row91} has led to a renewed experimental activity in the field
\cite{wei91,lei93,lem93,mor94,ste95}. Since barrier distributions are
proportional to $d^2(E\sigma)/dE^2$, very accurate measurements of
excitation functions at closely-spaced energies are required. Even with
excellent data, smooth barrier distributions can only be obtained under
certain model dependent assumptions \cite{izu95} (i.e. - well-chosen energy
spacing for calculating the second derivatives). Recently, it has been
suggested that analyses using integrals over fusion data \cite{kra95}
provide model independent results, so these should be preferred over
analyses that rely on differentiation of data. Specifically, the
incomplete $n$-th moments of $E\sigma$,
\begin{equation}
\label{nthmoment}
f_n(E) = n(n-1) \int_0^E dE' (E-E')^{n-2} E' \sigma(E'), \quad n\ge 2,
\end{equation}
were proposed as an alternative for comparing model calculations with
data. Unfortunately, these moments are not directly related to observables
so it is difficult to give a physical interpretation for them. In
addition, one of the main advantages of using barrier distributions is
that they bring out important features in the data, but the the $n$-th moments
in Eq.~(\ref{nthmoment}) are even more featureless than the data from which
they are calculated.
The purpose of this paper is to point out how the average angular momentum,
which is a measurable quantity \cite{van92}, is related to moments of the
fusion cross section. In order to turn these
relations into a practical tool to extract average angular
momentum directly from fusion data, we study the angular momentum and
deformation dependence of the effective barrier radius used in the
formalism.
In Section \ref{sec-general}, we review the relation between
cross section and average angular momentum and introduce an expression for
calculating the effective barrier radius. We derive improved relations in
Section \ref{sec-ldep} by using a better approximation for the transmission
probability for the $\ell$-th partial wave. This provides a satisfactory
description of the relation between the cross section and average angular
momentum for a spherical target. In Section \ref{sec-def},
we present a qualitative discussion of deformation effects and propose
a simple way to include them in the effective radius.
As a practical application of the method, we obtain the average angular
momentum from the fusion cross section data in the $^{16}$O+$^{154}$Sm system
and compare the results with the experiment. Finally, we draw conclusions from
this work.
\section{The General Method}
\label{sec-general}
The idea of expressing average angular momenta in terms of integrals over
functions of cross sections dates back to Ref.~\cite{bal86}. Due to the absence
of sufficiently good subbarrier fusion data, the full potential of this
idea was not explored at that time.
To introduce the concepts involved, we first review the general relations
between moments of fusion cross sections and averages of powers of angular
momenta in a one-dimensional barrier penetration picture. Although this
is strictly valid only for spherical systems, it will provide
the basis for extension to deformed systems. In the usual partial wave
expansion, the total fusion cross section at the center-of-mass energy
$E$ is written as
\begin{equation}
\label{sigma}
\sigma(E) = \sum_{\ell=0}^\infty \sigma_\ell(E) \, ,
\end{equation}
where the cross section for the $\ell$-th partial wave is
\begin{equation}
\label{sigmal}
\sigma_\ell(E) = \frac{\pi \hbar^2}{2\mu E}\, \left(2\ell+1 \right) T_\ell(E)
\,,
\end{equation}
with $T_\ell(E)$ the transmission probability for that partial wave and $\mu$
the reduced mass.
For energies near the
Coulomb barrier, one can approximate the $\ell$-dependence in $T_\ell$ by using
the $s$-wave penetrability at a shifted energy \cite{bal83},
\begin{equation}
\label{tlappr}
T_\ell(E) = T_0 \left(E - \frac{\hbar^2 \ell(\ell+1)}{2\mu R^2(E)} \right) \, ,
\end{equation}
where $\mu R^2(E)$ is the effective moment of inertia of the system. The
energy shift simply accounts for the change in the height of the barrier
resulting from the centrifugal potential.
Note that the effective radius is allowed to vary as a function of energy.
In Section~\ref{sec-ldep}, we will demonstrate how this approximation for
the transmission probability can be improved. Substituting
Eqs. (\ref{sigmal}) and (\ref{tlappr}) into Eq.~(\ref{sigma}), converting the
sum over $\ell$ into an integral, and changing variables to
\begin{equation}
\label{eprime}
E' = E - \frac{\hbar^2 \ell(\ell+1)}{2\mu R^2(E)} \, ,
\end{equation}
we obtain the expression
\begin{equation}
\label{rinv}
\sigma(E) = \frac{\pi R^2(E)}{E} \, \int_0^E dE' \, T_0(E') \,.
\end{equation}
We will use Eq.~(\ref{rinv}) to study the energy dependence
of the effective radius from numerical calculations.
The average angular momentum after fusion is assumed to be
\begin{equation}
\langle \ell \rangle = \frac{1}{\sigma(E)} \, \sum_{\ell=0}^\infty
\ell\sigma_\ell(E) \,.
\end{equation}
Following a procedure similar to the one used to obtain Eq.~(\ref{rinv}),
the average angular momentum can be written as
\begin{eqnarray}
\langle \ell \rangle = \frac{\pi R^2(E)}{E\sigma(E)} \int_0^E dE' \,
T_0(E') \, \left\{\left[
\frac{2\mu R^2(E)}{\hbar^2} \, (E-E') + \frac{1}{4} \right]^{1/2}
- \frac{1}{2} \right\} \,.
\end{eqnarray}
Integrating by parts and using Eq.~(\ref{rinv}), we arrive at the desired
expression,
\begin{eqnarray}
\label{lint}
\langle \ell \rangle = \frac{\mu R^4(E)}{\hbar^2 E\sigma(E)} \int_0^E dE'
\, \frac{E' \sigma(E')}{R^2(E')} \,
\left[\frac{2\mu R^2(E)}{\hbar^2}\,(E-E')+ \frac{1}{4} \right]^{-1/2} ,
\end{eqnarray}
which relates the average angular momentum to a moment of the fusion cross
section. Note that the factor of 2 in Eq.~(\ref{lint}) is missing in
Ref.~\cite{bal86}. For practical applications of Eq.~(\ref{lint}),
it is important to note that, in order to obtain the detailed features of the
average angular momentum reliably, the cross section should be interpolated
rather than fit globally. We have found that a spline fit to the logarithm of
the cross section works quite well for this purpose.
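As an illustration of this recipe, a minimal Python sketch (ours; it assumes tabulated arrays of energies and cross sections, a constant radius $R$, and a user-supplied value of $2\mu/\hbar^2$ in units consistent with $E$ and $R$) interpolates $\ln\sigma$ with a cubic spline and evaluates Eq.~(\ref{lint}) by quadrature:
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def average_l(E, E_data, sigma_data, R, two_mu_over_hbar2):
    """<l> from Eq. (lint) with a constant radius R."""
    log_sigma = CubicSpline(E_data, np.log(sigma_data))
    sigma = lambda e: np.exp(log_sigma(e))
    # sigma is negligible below the lowest measured energy, so start there
    Ep = np.linspace(E_data[0], E, 2000)
    bracket = two_mu_over_hbar2 * R**2 * (E - Ep) + 0.25
    integral = np.trapz(Ep * sigma(Ep) / np.sqrt(bracket), Ep)
    return 0.5 * two_mu_over_hbar2 * R**2 * integral / (E * sigma(E))
\end{verbatim}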
This integral method, with the assumption that the effective radius is
constant, has been applied to some nearly spherical systems \cite{das91}.
The results are in agreement with the data within the experimental errors
which, however, are rather large for a definitive confirmation that Eq.
(\ref{lint}) with a constant radius works.
The existing data for a variety of systems were also examined assuming a
constant effective radius\cite{bab93},
but that analysis used a fit of the cross section to the exponential
of a polynomial, so the features of the angular momenta are lost.
Higher moments of the angular momentum can be found by following similar
steps. For example, the second moment of the angular momentum is given by
\begin{equation}
\label{l2int}
\langle \ell(\ell+1) \rangle =
\frac{2\mu R^4(E)}{\hbar^2 E\sigma(E)} \, \int_0^E dE' \,
\frac{E' \sigma(E')}{R^2(E')} \,.
\end{equation}
A similar expression for $\langle \ell^2 \rangle$ was given in
Ref.~\cite{das86}, but $\langle \ell \rangle$ was neglected
and $R$ was assumed to be a constant. Under these assumptions,
$f_2$ in Eq.~(\ref{nthmoment}) can be related to $\langle \ell^2 \rangle$
through
\begin{equation}
\label{l2f2}
\langle \ell^2 \rangle \sim \frac{\mu R^2}{\hbar^2 E\sigma} \, f_2(E).
\end{equation}
However, as has been pointed out previously \cite{bal86,row93}, taking $R$ to
be a constant is not a good approximation, especially for deformed nuclei.
Therefore, Eq.~(\ref{l2f2}) does not result in a reliable physical
interpretation for $f_2$. There is little experimental data for higher moments
of $\ell$ due to the difficulty of these measurements, so we do not pursue them
here. The procedure for calculating them should be clear from the
preceding discussion.
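With the same interpolation, Eq.~(\ref{l2int}) with a constant $R$ reduces to a single quadrature; a sketch with the same conventions as above is:
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def average_l_lplus1(E, E_data, sigma_data, R, two_mu_over_hbar2):
    """<l(l+1)> from Eq. (l2int) with a constant radius R."""
    log_sigma = CubicSpline(E_data, np.log(sigma_data))
    sigma = lambda e: np.exp(log_sigma(e))
    Ep = np.linspace(E_data[0], E, 2000)
    integral = np.trapz(Ep * sigma(Ep), Ep)
    return two_mu_over_hbar2 * R**2 * integral / (E * sigma(E))
\end{verbatim}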
In order to put Eq.~(\ref{lint}) into use, we would like to understand the
origin of the energy dependence in the effective radius better. For this
purpose, we use Eq.~(\ref{rinv}) as the defining relation for $R(E)$ and
study its deviation from a constant value. We use values of $\sigma$ and
$T_0$ generated by the computer code IBMFUS \cite{ben93} which uses the
interacting boson model (IBM) \cite{iac87} to account for nuclear structure
effects and evaluates transmission probabilities numerically in the WKB
approximation. For fusion reactions involving rare-earth nuclei, IBMFUS
has been shown to reproduce very well cross section and barrier
distribution systematics \cite{bal94a}, and average angular momentum data
\cite{bal94b}. It is important to note that the centrifugal energy is
treated exactly in the WKB calculations (i.e. approximations such as
Eq.~(\ref{tlappr}) are not used). In order to emphasize the effects of
nuclear structure, we kept the masses of the projectile ($^{16}$O) and
target ($^{154}$Sm) fixed and varied the quadrupole coupling parameter.
Fig.~\ref{fig1} shows the results obtained for $R(E)$ in three
cases corresponding to the target nucleus being spherical (no coupling),
vibrational (intermediate coupling) and deformed (strong coupling). These
results demonstrate that the effective radius is not constant even for the
spherical case and deviates more as the coupling to structure
increases. The sharp increase in radius below the barrier with increasing
deformation is obviously due to selective sampling of the longer nuclear
axis. The origin of the energy dependence of radius in the spherical
system is not that clear at this point, but the approximation used in
Eq.~(\ref{tlappr}) is an obvious suspect. In view of the demonstrated
energy dependence of the effective radius, extracting the average angular
momentum from Eq.~(\ref{lint}) by assuming that $R(E)$ is constant will not
accurately predict $\langle \ell \rangle$ across a wide range of energies.
Attempts have been made to parameterize $R(E)$, but they have not been very
successful. For example, a linear combination of the position of the barrier
peak $R_B$ and the Coulomb turning point
\begin{equation}
\label{rcoul}
R_C = Z_1 Z_2 e^2/E \,,
\end{equation}
has been suggested as a plausible choice \cite{bal83}
\begin{equation}
\label{rpar}
R(E) = \eta R_B + (1-\eta) R_C \,.
\end{equation}
In the spherical case, Eq.~(\ref{rpar}) provides a good description of
$R(E)$ in Fig.~\ref{fig1} with $\eta=0.78$.
However, it is not clear how to include deformation effects in
Eq.~(\ref{rpar}) in a physically meaningful way.
Another expression for the effective radius, derived by assuming that the nuclear
potential has an exponential tail in the region of the barrier, is given by
\cite{row93}
\begin{equation}
R(E) = {1 \over 2} R_C \left[ 1 + \left(1-4a/R_C\right)^{1/2} \right] \,,
\end{equation}
where $a$ is the nuclear surface diffuseness. This expression gives an
energy dependence which is too strong for values of $a$ in the range
0.6--1.2 fm. Moreover, it makes no allowance for the inclusion of
deformation effects, which are seen to have a significant influence on the
shape of $R(E)$. Clearly, a better understanding of $R(E)$ is needed to
make further progress.
\section{An Improved Expression for the Penetrability}
\label{sec-ldep}
The prescription given in Eq.~(\ref{tlappr}) for approximating the $\ell$-wave
penetrability by the $s$-wave penetrability at a shifted energy utilizes only
the leading term in what is actually an infinite series expansion in
$\Lambda = \ell(\ell+1)$. In this section, we derive the next term in this
expansion and show the resulting corrections to the calculations presented
in the Section \ref{sec-general}. We also demonstrate how this can
explain the part of the energy dependence of the effective radius which
arises from the centrifugal potential.
The effect of the angular momentum on the penetrability is usually taken
into account by the shift it makes in the height of the
potential barrier. The total potential for the $\ell$-th partial wave is
given by
\begin{equation}
\label{vsubl}
V_\ell(r) = V_N(r) + V_C(r) + \frac{\hbar^2 \ell(\ell+1)}{2\mu r^2} \,,
\end{equation}
where $V_N$ and $V_C$ are the nuclear and Coulomb potentials,
respectively. Let $r_\ell$ denote the position of the peak of the $\ell$-wave
barrier which satisfies
\begin{equation}
\label{rldef}
\left. {\partial V_\ell(r) \over \partial r} \right|_{r=r_\ell} =0 \,,
\end{equation}
and
\begin{equation}
\left. {\partial^2 V_\ell(r) \over \partial r^2} \right|_{r=r_\ell} <0 \,,
\end{equation}
then the height of the barrier is given by $V_{Bl} = V_\ell(r_\ell)$.
We make the ansatz that the barrier position can be written as an infinite
series,
\begin{equation}
\label{rlans}
r_\ell = r_0 + c_1 \Lambda + c_2 \Lambda^2 + \cdots,
\end{equation}
where the $c_i$ are constants. Expanding all functions in Eq.~(\ref{rldef})
consistently in powers of $\Lambda$, we find that the first
coefficient is
\begin{equation}
\label{c1}
c_1 = - \, {\hbar^2 \over \mu\alpha r_0^3} \,,
\end{equation}
where $\alpha$ is the curvature of the $s$-wave barrier
\begin{equation}
\alpha= \left. - \, {\partial^2 V_0(r) \over \partial r^2 } \right|_{r=r_0}
\,.
\end{equation}
Substituting the leading order correction in the barrier position $r_\ell$
into Eq.~(\ref{vsubl}), we
find that to second order in $\Lambda$ the $\ell$-wave barrier height
is given by
\begin{equation}
\label{vbl}
V_{Bl} = V_{B0}
+ \frac{\hbar^2 \Lambda}{2\mu r_0^2}
+ \frac{\hbar^4 \Lambda^2}{2\mu^2 \alpha r_0^6} \,.
\end{equation}
Therefore, an improved approximation for the $\ell$-dependence in the
penetrability is given by
\begin{equation}
\label{tlcor}
T_\ell(E) = T_0 \left(E - \frac{\hbar^2 \Lambda}{2\mu r_0^2}
- \frac{\hbar^4 \Lambda^2}{2\mu^2 \alpha r_0^6}\right).
\end{equation}
We give an alternative derivation of this expansion and discuss its
validity in the Appendix.
To examine the consequences of the improved expression for the penetrability,
we repeat the steps outlined in Section~\ref{sec-general} using
Eq.~(\ref{tlcor}) instead of Eq.~(\ref{tlappr}).
To the leading order in $1/\alpha$, this introduces the following correction
to Eq.~(\ref{rinv}),
\begin{equation}
\label{rcor}
\sigma(E) = \frac{\pi r_0^2}{E} \, \int_0^E dE' \, T_0(E')
\left[1-{4 \over \alpha r_0^2} (E-E')\right] \,.
\end{equation}
Comparing Eq.~(\ref{rinv}) with Eq.~(\ref{rcor}), we find that the
energy-dependent effective radius can be expressed as
\begin{equation}
\label{reff}
R^2(E) = r_0^2 \left[ 1 - {4 \over \alpha r_0^2} { \int_0^E dE' \, T_0(E')
(E-E') \over \int_0^E dE' \, T_0(E')} \right].
\end{equation}
This predicts the decrease in $R(E)$ as the energy increases that was shown
in Fig.~\ref{fig1}.
The calculation of the average angular momentum with
the modified equations introduces a similar leading order correction,
\begin{eqnarray}
\label{lcor}
\langle \ell \rangle = \frac{\mu r_0^2 }{\hbar^2 E\sigma(E)} \int_0^E dE'
\, E' \sigma(E') \,
\left[\frac{2\mu r_0^2}{\hbar^2}\,(E-E')+ \frac{1}{4} \right]^{-1/2}
\left[1-\frac{7}{\alpha r_0^2} (E-E')\right]\,.
\end{eqnarray}
Since we have included a correction to $r_\ell$, this expression takes into
account the shifts in both the height and position of the barrier for
different partial waves.
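In practice, Eq.~(\ref{lcor}) is again a single quadrature over interpolated data; an illustrative sketch (same conventions as the one given in the previous section) is:
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def average_l_corrected(E, E_data, sigma_data, r0, alpha,
                        two_mu_over_hbar2):
    """<l> from Eq. (lcor): constant r0 plus the 1/alpha correction."""
    log_sigma = CubicSpline(E_data, np.log(sigma_data))
    sigma = lambda e: np.exp(log_sigma(e))
    Ep = np.linspace(E_data[0], E, 2000)
    bracket = two_mu_over_hbar2 * r0**2 * (E - Ep) + 0.25
    corr = 1.0 - 7.0 / (alpha * r0**2) * (E - Ep)
    integral = np.trapz(Ep * sigma(Ep) * corr / np.sqrt(bracket), Ep)
    return 0.5 * two_mu_over_hbar2 * r0**2 * integral / (E * sigma(E))
\end{verbatim}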
It is easy to show that the expression for effective radius given in
Eq.~(\ref{reff}) is consistent with the result obtained by weighted averaging
over the barrier positions of all of the partial waves using
Eq.~(\ref{rlans}),
\begin{equation}
\label{rav1}
\langle r_\ell \rangle
= {\sum_\ell \sigma_\ell r_\ell \over \sum_\ell \sigma_\ell}
= r_0 - {\hbar^2 \over 2\mu \alpha r_0^3} \langle \Lambda \rangle + \cdots.
\end{equation}
To a first approximation, $\langle \Lambda \rangle$ is given by
Eq.~(\ref{l2int}) with $R(E)=r_0$. Substituting Eq.~(\ref{rinv}) for
$E'\sigma(E')$, Eq.~(\ref{rav1}) becomes
\begin{equation}
\label{rav2}
\langle r_\ell \rangle =
r_0 - {2\pi r_0 \over \alpha E\sigma} \int_0^E dE' \int_0^{E'}dE'' T_0(E'').
\end{equation}
After integration by parts, Eq.~(\ref{rav2}) becomes
\begin{equation}
\label{rav3}
\langle r_\ell \rangle =
r_0 - {2\pi r_0 \over \alpha E\sigma} \int_0^E dE' T_0(E') (E-E').
\end{equation}
Substituting Eq.~(\ref{rinv}) with $R(E)=r_0$ for $\sigma$ and squaring the
result, we recover an expression that is consistent with Eq.~(\ref{reff}) to
the first order in $1/\alpha$.
The effect of the correction to the barrier position on the distribution of
barriers is straightforward in the form given by Ackermann\cite{ack95},
which is in terms of first derivatives of the angular momentum
distribution. This expression can be written as
\begin{equation}
\label{bardist}
D(E') = \frac{4\mu^2 R^2 E}{(2\ell+1)^2 \pi\hbar^2} \,
\frac{d\sigma_\ell(E)}{d\ell} \,,
\end{equation}
where the shifted energy is
\begin{equation}
E' = E - \frac{\hbar^2 \ell(\ell+1)}{2\mu R^2} \,.
\end{equation}
Knowledge of the angular momentum distribution at an energy $E$ allows
the calculation of the barrier distribution between
$E- \hbar^2 \ell_{max}(\ell_{max}+1) / 2\mu R^2$ and $E$, where $\ell_{max}$ is
the largest angular momentum for which the partial cross section is measured.
The first order correction to Eq.~(\ref{bardist}) is obtained by
substituting the position of the $\ell$-wave barrier for the effective radius
of
the associated partial-wave cross section
\begin{equation}
R \rightarrow r_\ell \approx r_0 + c_1 \Lambda \,.
\end{equation}
When the angular momentum distribution is determined at 65 MeV for the
$^{16}$O+$^{154}$Sm system, the correction to $D(E')$ is about 12\% at 55
MeV and it will be less for higher energies. The other errors involved in
calculating the barrier distribution are typically larger than that, so
this correction can be safely ignored.
In order to check whether or not the correction in Eq.~(\ref{tlcor}) is
sufficient to account for the shift in the peak of the barrier due to the
angular momentum of the system, we calculated $r_0$ as a function of energy
for a spherical target using Eq.~(\ref{rcor}). The same numerical
values of $\sigma(E)$ and $T_0(E)$ obtained from IBMFUS as in
Fig.~\ref{fig1} and the known value of $\alpha$ for the potential
barrier were used in this calculation. The results for the $s$-wave barrier
radius $r_0$ are shown in Fig.~\ref{fig2} and the effective radius
$R(E)$, extracted using Eq.~(\ref{rinv}), is shown for comparison. The
results for $r_0$ are nearly independent of energy as we expect.
We would like to extract $r_0$ and $\alpha$ directly from cross section data.
Eq.~(\ref{rcor}) can be rewritten as
\begin{equation}
E \sigma(E) = \pi r_0^2 \, \int_0^E dE' \, T_0(E')
- {4 \pi \over \alpha}\,E \int_0^E dE' \, T_0(E')
+{4 \pi \over \alpha} \int_0^E E' dE' \, T_0(E') \,.
\end{equation}
Using partial integration, the two integrals can be expressed as
\begin{equation}
\label{tint1}
\int_0^E dE' \, T_0(E') = E T_0(E)
- \int_0^E E' dE' \, {dT_0 \over dE'} \,,
\end{equation}
and
\begin{equation}
\label{tint2}
\int_0^E E' dE' \, T_0(E') = {E^2 T_0(E) \over 2}
- \int_0^E {{E'}^2 \over 2} dE' \, {dT_0 \over dE'} \,.
\end{equation}
For $E \gg V_{B0}$, the right hand sides of
Eqs. (\ref{tint1}) and (\ref{tint2}) become $E-q_1$ and ${E^2 / 2} - q_2$,
respectively, where
\begin{equation}
\label{q1def}
q_1 = \int_0^E E' dE' \, {dT_0 \over dE'} \,,
\end{equation}
and
\begin{equation}
\label{q2def}
q_2 = {1\over 2}\int_0^E {E'^2} dE' \, {dT_0 \over dE'} \,.
\end{equation}
For energies above the barrier, $dT_0/dE'$ goes to 0, so $q_1$ and $q_2$
become constants.
Therefore, for high energies $E\sigma(E)$ is a quadratic in $E$,
\begin{equation}
E \sigma(E) = - \left({2 \pi \over \alpha} \right) E^2
+ \left(\pi r_0^2 + {4 \pi \over \alpha} \, q_1 \right) E
- \left(\pi r_0^2 q_1 + {4 \pi \over \alpha} \, q_2\right) \,.
\end{equation}
Classically,
\begin{equation}
{dT_0 \over dE} = \delta(E - V_{B0}) \,,
\end{equation}
where $V_{B0}$ is the barrier height, so $q_1 = V_{B0}$ and $q_2 =
V_{B0}^2/2$. Quantum mechanically, these expressions for $q_1$ and $q_2$
are also approximately true for energies above the barrier. Using the
values of $T_0$ generated from the code IBMFUS \cite{ben93} in
Eqs. (\ref{q1def}) and (\ref{q2def}), we have found that the errors in these
approximations at an energy of 70 MeV are both less than $0.05\%$ for an
O+Sm system with no coupling, where $V_{B0}\approx 59~{\rm MeV}$.
Therefore, the product of the cross section and the energy can be fit
at high energies with the expression
\begin{eqnarray}
\label{esfit}
E \sigma(E)
&=& - \left({2 \pi \over \alpha} \right) E^2
+ \left(\pi r_0^2 + {4 \pi V_{B0} \over \alpha} \right) E
- \left(\pi r_0^2 V_{B0} + {2 \pi V_{B0}^2 \over \alpha} \right) \nonumber\\
&=& \pi r_0^2 \left( E-V_{B0} \right) - {2 \pi \over \alpha} \left( E-V_{B0}
\right)^2
\end{eqnarray}
in order to determine $\alpha$ and $r_0$. Of course, this requires
high-precision fusion data for energies above the barrier, which may be difficult to
obtain due to the competing processes.
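When such data exist, the fit itself is elementary; the sketch below (illustrative) extracts $\alpha$, $r_0$ and $V_{B0}$ from the quadratic coefficients of Eq.~(\ref{esfit}):
\begin{verbatim}
import numpy as np

def fit_r0_alpha(E, sigma):
    """Quadratic fit of E*sigma(E) above the barrier, Eq. (esfit)."""
    a2, a1, a0 = np.polyfit(E, E * sigma, 2)
    alpha = -2.0 * np.pi / a2
    # E*sigma(E) vanishes at E = V_B0; the barrier height is the
    # smaller root of the fitted quadratic
    VB0 = np.min(np.roots([a2, a1, a0]).real)
    # the slope of the fit at E = V_B0 equals pi*r0**2
    r0 = np.sqrt((2.0 * a2 * VB0 + a1) / np.pi)
    return r0, alpha, VB0
\end{verbatim}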
As a test of the formalism, we apply the results derived in this section to
the fusion cross section ``data" generated by IBMFUS for a the system of
$^{16}$O + $^{154}$Sm, where both nuclei are taken to be spherical. First,
$\alpha$ and $r_0$ are determined from Eq.~(\ref{esfit}) by a fit to the
$\sigma$ ``data". Then these values are
employed in Eq.~(\ref{lcor}) and the average angular momenta are extracted
from the $\sigma$ ``data" through numerical integration.
Fig.~\ref{fig3} shows a comparison of the average angular momenta
calculated using Eq.~(\ref{lcor}) with those obtained from IBMFUS directly.
The agreement is very good at all energies.
For reference, results obtained from Eq.~(\ref{lint}) assuming a constant
radius are also shown (dashed line). As expected, the modified expression
(\ref{lcor}) leads to a clear improvement at high energies where the effective
radius varies most (cf. Fig.~\ref{fig2}). This example gives confidence
that one can use Eq.~(\ref{lcor}) in extracting average angular
momenta directly from the fusion data for spherical systems.
\section{Target Deformation Effects}
\label{sec-def}
Now that we have an improved description of the effects due to the
centrifugal barriers, we consider the effects of target deformation.
The shape of an axially symmetric deformed nucleus can be described by
\begin{equation}
\label{Rtdeform}
{\cal R}_t(\theta) = {\cal R}_{t0} \left( 1 + \beta Y_{20}(\cos \theta)\right)
\,,
\end{equation}
where ${\cal R}_{t0}$ is the radius for an undeformed target.
In order to qualitatively study deformation effects, we approximate the target
with a simple two-level system which displays the important features. As shown
in Ref.~\cite{nag86}, fusion of a deformed nucleus with a finite number of levels
($n$) can be described by sampling $n$ orientations of Eq.~(\ref{Rtdeform})
with their respective weights. For a two-level system, the orientations
$\theta_1=70.12^\circ$ and $\theta_2=30.55^\circ$ contribute with the weight
factors $w_1=0.652$ and $w_2=0.348$, respectively.
To proceed, we need to find out how the barrier position and height
change with orientation, which can be calculated from the total potential
in a straightforward manner.
For the nuclear potential, we use the usual Woods-Saxon form with the target
radius given by Eq.~(\ref{Rtdeform})
\begin{equation}
V_N(r,\theta)=-V_{N0} \left[ 1 + \exp\left( {r-{\cal R}_p-{\cal
R}_t(\theta)}\over a \right)
\right]^{-1} \,.
\end{equation}
The Coulomb potential can be calculated from a multipole expansion and,
to leading order in $\beta$, is given by
\begin{equation}
\label{vcoul}
V_C(r,\theta)={Z_1 Z_2 e^2 \over r}
\left(1 + {3\over 5} {\beta {\cal R}_{t0}^2 Y_{20}(\cos \theta) \over
r^2}\right)
= {A \over r} \left(1 + {3\over 5} {{\cal R}_{t0}({\cal R}_t(\theta) - {\cal
R}_{t0}) \over r^2}\right) \,,
\end{equation}
where $A=Z_1 Z_2 e^2$.
Using the notation of Section \ref{sec-ldep}, the equation
for finding the peak of the $s$-wave barrier is
\begin{equation}
\label{defr0}
\left.
- {A \over r^2}
- {9\over 5} {A {\cal R}_{t0}({\cal R}_t - {\cal R}_{t0}) \over r^4}
+ {V_{N0} \over a}
{\exp\left(\left(r-{\cal R}_p-{\cal R}_t\right)/ a \right) \over
\left[ 1+\exp\left(\left({r-{\cal R}_p-{\cal R}_t}\right) /a
\right)\right]^2} \,
\right|_{r=r_0}
= 0 \,,
\end{equation}
and the height of the $s$-wave barrier is
\begin{equation}
\label{defVB0}
V_{B0} = {A\over r_0}
\left(1 + {3\over 5} {{\cal R}_{t0}({\cal R}_t - {\cal R}_{t0}) \over
r_0^2}\right)
- V_{N0} \left[ 1+\exp\left({r_0 -{\cal R}_p - {\cal R}_t}\over a
\right)\right]^{-1} .
\end{equation}
The $\theta$-dependence is suppressed in the above equations for convenience,
but both $r_0$ and $V_{B0}$ depend on the target orientation.
The rate of change in the barrier height due to the deformation is given by
\begin{equation}
\label{dvdRt}
{dV_{B0} \over d{\cal R}_t} =
{dV_{B0} \over dr_0}{dr_0 \over d{\cal R}_t}
+ {3 A {\cal R}_{t0} \over 5 r_0^3}
-{V_{N0} \over a}
{\exp\left(\left({r_0-{\cal R}_p-{\cal R}_t}\right)/ a \right) \over
\left[ 1+\exp\left(\left(r_0-{\cal R}_p-{\cal R}_t\right) /a
\right)\right]^2} \,.
\end{equation}
In Eq.~(\ref{dvdRt}), $dV_{B0}/dr_0=0$ by definition and, using
Eq.~(\ref{defr0}), the last term can be simplified to give
\begin{equation}
\left. {dV_{B0} \over d{\cal R}_t} \right|_{{\cal R}_t={\cal R}_{t0}} = -\,{A
\over r_0^2}
+ {3 A {\cal R}_{t0} \over 5 r_0^3} \,.
\end{equation}
To find a similar expression for the barrier position, we differentiate
Eq.~(\ref{defr0}) with respect to ${\cal R}_t$
\begin{eqnarray}
\lefteqn{
\left[ {2A \over r_0^3}+{36 A\over 5} {{\cal R}_{t0}({\cal R}_t - {\cal
R}_{t0})\over r_0^5}\right]
{dr_0 \over d{\cal R}_t}
+ {9 A {\cal R}_{t0} \over 5 r_0^4}
+ \left({dr_0 \over d{\cal R}_t} -1 \right) } \nonumber\\[0.25cm]
& &\times {V_{N0} \over a^2} {\exp\left(\left({r_0-{\cal R}_p-{\cal
R}_t}\right)/a\right) \over
\left[ 1+ \exp\left(\left({r_0-{\cal R}_p-{\cal R}_t}\right)/ a
\right)\right]^2}
\left[ 1 - {2 \exp\left(\left({r_0-{\cal R}_p-{\cal R}_t}\right)/ a \right)
\over
\left[ 1+ \exp\left(\left({r_0-{\cal R}_p-{\cal R}_t}\right)/ a
\right)\right]} \right]
= 0 \,.
\end{eqnarray}
Using Eq.~(\ref{defr0}), we can solve for the rate of change in
the $s$-wave barrier position due to the change in the target radius
\begin{equation}
\left. {dr_0 \over d{\cal R}_t} \right|_{{\cal R}_t={\cal R}_{t0}} = \frac
{ Q - {9a {\cal R}_{t0} / 5r_0^2} }
{{Q + {2a / r_0} }} \,,
\end{equation}
where
\begin{equation}
Q = {2\over V_{N0}} \left({A\over r_0} - V_{B0} \right) -1 \,.
\end{equation}
For a given orientation $\theta$, the shift in the $s$-wave barrier height is
given approximately by
\begin{equation}
\label{deltav}
\delta V_{B0} = \left.{dV_{B0} \over d{\cal R}_t} \right|_{{\cal R}_t={\cal
R}_{t0}} \delta {\cal R}_t
= \left( -{A \over r_0^2} + {3 A {\cal R}_{t0} \over 5 r_0^3} \right)
\sqrt{5 \over 4\pi} \, \beta {\cal R}_{t0} P_2(\cos \theta) \,,
\end{equation}
and, similarly, the shift in the $s$-wave barrier position is approximately
\begin{equation}
\label{deltar}
\delta r_0 = \left. {dr_0 \over d{\cal R}_t} \right|_{{\cal R}_t={\cal R}_{t0}}
\delta {\cal R}_t
= \left. {dr_0 \over d{\cal R}_t}\right|_{{\cal R}_t={\cal R}_{t0}} \sqrt{5
\over 4\pi} \,
\beta {\cal R}_{t0} P_2(\cos\theta) \,.
\end{equation}
These expressions account for the changes due to deformation fairly accurately,
as can be seen in Fig.~\ref{fig4}.
The total fusion cross section for a two-level system is given by \cite{nag86}
\begin{equation}
\sigma_T(E) = w_1 \sigma(E,\lambda_1) + w_2 \sigma(E,\lambda_2) \,,
\end{equation}
where $\sigma(E,\lambda_i)$ is the cross section for the $i$-th
orientation and $\lambda_i=P_2(\cos \theta_i)$. To simplify the notation,
we introduce
\begin{equation}
F=\sqrt{5 \over 4\pi}\, \left. {dr_0 \over d{\cal R}_t} \right|_{{\cal R}_t
={\cal R}_{t0}} {\cal R}_{t0} \,,
\end{equation}
so that, for a given orientation, the peak of the $s$-wave barrier in
Eq.~(\ref{rcor}) is replaced by
\begin{equation}
r_0 \rightarrow r_0 + F \beta \lambda_i\,.
\end{equation}
After this substitution, the cross section for each level
becomes
\begin{eqnarray}
\sigma(E,\lambda_i) &=& \frac{\pi}{E} \, \int_0^E dE' \, \left\{
r_0^2 T_0(E',\lambda_i)
+ 2 F \beta r_0 \lambda_i T_0(E',\lambda_i)
+ F^2 \beta^2 \lambda_i^2 T_0(E',\lambda_i) \right\} \nonumber\\[0.25cm]
&&- \frac{4\pi}{\alpha E} \, \int_0^E dE' \, T_0(E',\lambda_i) (E-E') \,,
\end{eqnarray}
where $T_0(E,\lambda_i)$ is the transmission probability for the $i$-th
orientation.
The curvature of the barrier, $\alpha$, is expected to have a second-order
dependence on $\beta$; hence it is assumed to be constant in this leading-order
calculation.
Defining the coupled $s$-wave transmission probability as
\begin{equation}
T_0^C(E) = w_1 T_0(E,\lambda_1) + w_2 T_0(E,\lambda_2) \,,
\end{equation}
the total cross section becomes
\begin{eqnarray}
\label{sigmacoupl}
\sigma_T(E) &=& \frac{\pi}{E} \, \left\{ r_0^2
+ 2F \beta r_0 \, \frac
{\int_0^E dE' \, \left[ w_1 \lambda_1 T_0(E',\lambda_1)
+ w_2 \lambda_2 T_0(E',\lambda_2) \right]}
{\int_0^E dE' \, T_0^C(E')} \right. \nonumber\\[0.25cm]
& & \hspace{1cm} \left. + F^2 \beta^2 \, \frac
{\int_0^E dE' \,\left[w_1 \lambda_1^2 T_0(E',\lambda_1)
+ w_2 \lambda_2^2 T_0(E',\lambda_2) \right] }
{\int_0^E dE' \, T_0^C(E')} \right\}
{\int_0^E dE' \, T_0^C(E')} \nonumber\\[0.25cm]
& & -{4\pi \over \alpha E} \, \int_0^E dE' \, T_0^C(E') (E-E') \,.
\end{eqnarray}
The cross section can be written in the form of Eq.~(\ref{rinv}) as
\begin{equation}
\label{rcdef}
\sigma_T(E) = \frac{\pi}{E} \, R_C^2(E) \int_0^E dE' \, T_0^C(E') \,,
\end{equation}
so that the effective radius with coupling is
\begin{eqnarray}
\label{rcoupl}
R_C^2(E) &=& r_0^2
+ 2F \beta r_0 \, \frac
{\int_0^E dE' \, \left[ w_1 \lambda_1 T_0(E',\lambda_1)
+ w_2 \lambda_2 T_0(E',\lambda_2) \right]}
{\int_0^E dE' \, T_0^C(E')} \nonumber\\[0.25cm]
& & + F^2 \beta^2 \, \frac
{\int_0^E dE' \,\left[w_1 \lambda_1^2 T_0(E',\lambda_1)
+ w_2 \lambda_2^2 T_0(E',\lambda_2) \right] }
{\int_0^E dE' \, T_0^C(E')} \nonumber\\[0.25cm]
& & - {4 \over \alpha} \, \frac {\int_0^E dE' \, T_0^C(E') (E-E')}
{\int_0^E dE' \, T_0^C(E')} \,.
\end{eqnarray}
Since $w_1 \lambda_1 = -w_2 \lambda_2$ and both $T_0(E',\lambda_1)$ and
$T_0(E',\lambda_2)$ approach one very quickly for energies above the barriers,
the second term in Eq.~(\ref{rcoupl}) becomes zero for high energies. On
the other hand, the third term of that equation is always positive, so by
comparison with Eq.~(\ref{reff}) it is easy to see that the effective radius
is slightly higher at large energies for the deformed case than for the
uncoupled case. This argument also holds for multi-level systems, since
\begin{equation}
\int P_l(\cos \theta) \, d(\cos \theta) = 0 \,.
\end{equation}
The effective radius predicted by Eq.~(\ref{rcoupl}) (dashed curve) and the
result from the definition in Eq.~(\ref{rcdef}) (solid curve) are shown in
Fig.~\ref{fig5}. That the two curves are in good agreement is an
indication that the curvature is approximately unchanged as assumed in
Eq.~(\ref{rcoupl}). This simple model exhibits the main features seen in
Fig.~\ref{fig1}; at low energies the difference between the
deformed and spherical cases becomes larger and this effect increases with
the deformation.
The preceding argument provides an explanation of how the effective radius
varies with the energy. Unfortunately, it is not easy to incorporate the
deformation effects in the calculation of average angular momentum as was
done for the centrifugal barrier in Section \ref{sec-ldep}. To make
progress, we take
a phenomenological approach and introduce the quantity
\begin{equation}
\label{rhodef}
\rho_0^2(E) = \frac {E\sigma/\pi + (4/\alpha) \int_0^E dE' \, T_0(E') (E-E')}
{\int_0^E dE' \, T_0(E')}\,.
\label{rdef}
\end{equation}
For a single barrier, $\rho_0(E)$ is simply the location of the $s$-wave
barrier $r_0$ (cf. Eq.~(\ref{rcor})), so it is actually energy independent.
When couplings to target deformation are introduced, there is a
distribution of barriers. In this case, $\rho^2_0(E)$ is a suitable
average of the location of the barrier peaks. Using the
values of $\sigma$ and $T_0$ generated by IBMFUS in Eq.~(\ref{rdef}) again, we
calculate $\rho_0(E)$ for the three quadrupole coupling strengths used
in the previous section. In contrast to the effective
radii in Fig.~\ref{fig1}, the results shown in Fig.~\ref{fig6} level off at high
energies. They can be parametrized using
a simple Fermi function
\begin{equation}
\label{r0ofe}
\rho_0(E) = r_0 + \frac{\delta}{1+\exp((E-V_{B0})/W)}.
\end{equation}
Here $r_0$ is the asymptotic value at large $E$ and $V_{B0}$ is the barrier
height, which are determined from the fusion data using Eq.~(\ref{esfit}).
$r_0+\delta$ corresponds to the asymptotic value at low energies and can be
calculated from Eq.~(\ref{Rtdeform}) at $\theta=0$. The only quantity in
Eq.~(\ref{r0ofe}) that is not determined from data is the width $W$.
The fits to the curves in Fig.~\ref{fig6} result in values around
$2\pm0.3$ for $W$, with a mild dependence on $\beta$ ($W$ increases with $\beta$).
Since the precise value of $W$ makes no tangible difference, the slight
uncertainty in its value is not important for the purposes of this paper.
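In numerical work, Eq.~(\ref{r0ofe}) amounts to a one-line function; a sketch (with $W=2$ as the default suggested by the fits, and $r_0$, $\delta$, $V_{B0}$ the data-determined inputs) reads:
\begin{verbatim}
import numpy as np

def rho0(E, r0, delta, VB0, W=2.0):
    """Fermi-function parametrization of Eq. (r0ofe)."""
    return r0 + delta / (1.0 + np.exp((E - VB0) / W))
\end{verbatim}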
The averaged radius of Eq.~(\ref{r0ofe}) can now be used in Eq.~(\ref{lcor})
in the place of $r_0$ to obtain an improved expression for average angular
momentum,
\begin{eqnarray}
\label{lcordef}
\langle\ell\rangle &=& \frac{\mu \rho_0^4(E) }{\hbar^2 E\sigma(E)} \int_0^E dE'
\, \frac{E' \sigma(E')}{\rho_0^2(E')} \, \left\{
\left[\frac{2\mu \rho_0^2(E)}{\hbar^2}\,(E-E')+ \frac{1}{4} \right]^{-1/2}
\left[1-\frac{7}{\alpha \rho_0^2(E)} (E-E')\right] \right. \nonumber
\\[0.25cm]
& &\hspace{2.25cm} \left. +{4 \over \alpha} \left( {1 \over \rho_0^2(E')} - {1
\over \rho_0^2(E)} \right)
\left( \left[\frac{2\mu \rho_0^2(E)}{\hbar^2}\,(E-E')+
\frac{1}{4} \right]^{1/2} -{1\over 2} \right) \right\} \,.
\end{eqnarray}
Note that when $\rho_0(E)= r_0$, the last term vanishes and the above
expression reduces to Eq.~(\ref{lcor}).
The average angular momentum extracted from the fusion ``data" using
Eq.~(\ref{lcordef}) is compared to the $\langle \ell \rangle$ values
obtained from the same IBMFUS calculation in Fig.~\ref{fig7}.
The agreement is very good at all energies. In contrast, the constant radius
results underestimate $\langle \ell \rangle$ by about one $\hbar$.
To demonstrate the utility of this method in extracting $\langle \ell \rangle$,
we apply it to the $^{16}$O+$^{154}$Sm system for which quality cross section
data exist \cite{wei91}. In Fig.~\ref{fig8}, we compare the $\langle \ell
\rangle$ values obtained from Eq.~(\ref{lcordef}) and IBMFUS with the
experimental data \cite{bie93}. The two calculations are consistent with each
other but slightly underpredict the experimental values.
\section{Conclusions}
We have described two major reasons for the energy dependence of the
effective radius. The effects of the centrifugal barriers are described by
using an improved approximation for the penetration probability. This
also leads to a better relation between the cross section and the average
angular momentum. The effects of target deformation are described
with a simple model which reproduces the features of the effective radius.
Finally, we present a phenomenological expression for the position of the
$s$-wave barrier as a function of energy and show how it can be used
in extracting the average angular momentum from the fusion cross section.
Comparison of this method with numerical calculations shows that it predicts
the average angular momentum from the fusion data reliably, and hence it
can be used as a consistency check in cases where quality data are available
for both quantities.
\section*{Acknowledgments}
This research was supported in part by the National Science Foundation
Grants No. PHY-9314131 and INT-9315876, and in part by the Australian
Research Council and by an exchange grant from the Department of
Industry, Science and Technology of Australia.
\section{Introduction}\label{introduction}
The paper belongs to the line of research often referred to as Schr\"odinger operators with delta potentials
\footnote{In the following we use the terms delta interaction and delta
potential interchangeably.}.
The analysis of this type of potential is motivated by mesoscopic physics systems in which the semiconductor
structures are designed in such a way that they can be mathematically modelled by a Dirac delta supported on sets
of lower dimension. The support of the delta potential imitates the geometry of the semiconductor material; for example,
it can take the form of one-dimensional sets (wires) or of surfaces with specific geometrical properties. A particle is confined
to the semiconductor structure, but the model admits the possibility of tunnelling.
Therefore these types of systems are called \emph{leaky quantum graphs} or \emph{wires} in the literature. One of the most
appealing problems in this area is the question of how the geometry of a wire affects the spectrum;
cf.~\cite{AGHH}~and~\cite[Chap.~10]{EK-book}. The aim of the present paper is to discuss
how a surface perturbation leads to resonances.
We consider a non-relativistic three-dimensional model of a quantum particle
confined between two infinite impenetrable parallel walls which form
a straight quantum layer defined by $\Omega :=\{(\underline{x}, x_3)\in \mathbb{R}^2 \times [0, \pi ]\}$.
In the absence
of any additional potential the Hamiltonian of such a system is given by the negative Laplacian
$-\Delta\,:\, \mathrm{D}(\Delta )
\to L^2 (\Omega )$ with the domain $\mathrm{D}(\Delta )=W^{2,2}(\Omega )\cap W^{1,2}_0(\Omega )$, i.e.~with
Dirichlet boundary conditions on $\partial \Omega $.
The spectrum of $-\Delta $ is given by $\sigma_{\mathrm{ess}}(-\Delta )=[1, \infty )$; however, it is useful to keep
in mind that the energies in the $x_3$-direction are quantized and given by $\{k^2\}_{k=1}^\infty $.
In the first stage we introduce a straight wire $I$ which connects the walls $\partial \Omega $
and is perpendicular to them. We assume
the presence of an interaction localized on $I$ and characterized by a coupling constant $\alpha \in \mathbb{R}$. The Hamiltonian of
such a system can be formally written as
\begin{equation}\label{eq-formal}
-\Delta + \delta_{\alpha , I}\,,
\end{equation}
where $\delta_{\alpha ,I}$ represents a delta potential supported on $I$. Since the interaction support in this model has co-dimension
larger than one, it is called a \emph{strongly singular potential}.
The proper mathematical definition of the Hamiltonian
can be formulated in terms of boundary conditions. More precisely, we define $H_\alpha $ as a self-adjoint extension
of $-\Delta \left|_{C^{\infty}_0 (\Omega \setminus I)}\right.$, determined by means of the appropriate boundary conditions which functions
from the domain $\mathrm{D}(H_\alpha )$
satisfy on $I$.
The coupling constant $\alpha $ enters the mentioned boundary
conditions; however, it is worth noting at this point that
$\alpha $ does not contribute additively to the structure of the Hamiltonian.
\\ To describe the spectral properties of $H_\alpha $ we can rely on the radial symmetry of the system
and consider the two-dimensional system with a point interaction governed by the Hamiltonian $H^{(1)}_\alpha $.
The spectrum of $H^{(1)}_\alpha $ consists of the positive half line and one discrete negative eigenvalue
$$
\xi_\alpha = -4\mathrm{e}^{2 (-2\pi \alpha +\psi(1))}\,,
$$
cf.~\cite{AGHH}, where $-\psi(1)= 0.577\ldots$ is the Euler--Mascheroni constant.
This is reflected in the structure of the spectrum of $H_\alpha $; namely,
for each $l\in \mathbb{N}$ the number
$$
\epsilon_l = \xi_\alpha +l^2\,,
$$
gives rise to an eigenvalue of $H_\alpha $. Note that infinitely many of the $\epsilon_l$ lie above the threshold of the
essential spectrum and, consequently, the Hamiltonian $H_\alpha $ admits infinitely many \emph{embedded eigenvalues}.
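For orientation, the numbers $\xi_\alpha $ and $\epsilon_l$ are straightforward to evaluate numerically. The following minimal sketch (in Python with NumPy; the sample value of $\alpha $ is an arbitrary choice) lists the first few $\epsilon_l$ and indicates which of them lie above the threshold $1$ of the essential spectrum.
\begin{verbatim}
import numpy as np

PSI1 = -0.5772156649015329   # psi(1) = minus the Euler--Mascheroni constant

def xi(alpha):
    # discrete eigenvalue of the 2D point interaction:
    # xi_alpha = -4 exp(2(-2 pi alpha + psi(1)))
    return -4.0 * np.exp(2.0 * (-2.0 * np.pi * alpha + PSI1))

alpha = 0.1                  # illustrative coupling constant
for l in range(1, 6):
    eps = xi(alpha) + l**2
    kind = "discrete" if eps < 1 else "embedded"
    print(f"l = {l}: eps_l = {eps:.6f} ({kind})")
\end{verbatim}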
\\ In the second stage we introduce to the layer
an attractive interaction supported on a finite $C^2$ surface $\Sigma \subset \Omega $ separated
from the wire $I$ by some distance, cf.~Fig.~1. Suppose that $\beta \neq 0$ is a real number. The Hamiltonian $H_{\alpha , \beta }$ which governs this system can
be symbolically written as
$$
-\Delta +\delta_{\alpha, I} -\beta \delta_{\Sigma}\,,
$$
where $\delta_{\Sigma}$ stands for the Dirac delta supported on $\Sigma $; this term represents a \emph{weakly
singular potential}.
Again, a proper mathematical definition of
$H_{\alpha, \beta }$ can be formulated as a self-adjoint extension of $$H_{\alpha }\left|_{\{f\in \mathrm{D}(H_\alpha )
\cap C^\infty (\Omega \setminus I) \,:\, f =0 \,\,\, \mathrm{on}\,\,\,
\Sigma \}}\right. \,.$$
This extension is defined by means of the
appropriate boundary conditions on $\Sigma$ discussed in Section~\ref{sec-surface}.
\begin{figure}
\includegraphics[width=.40\textwidth]{Rysunek1.pdf}
\end{figure}
The aim of this paper is to analyse \emph{how the presence of the surface interaction supported on
$\Sigma $ affects the embedded eigenvalues}. The existence of embedded eigenvalues is a direct consequence
of the symmetry. By introducing an additional interaction on $\Sigma$ we break this symmetry; however, if the perturbation
is small then we may expect that the system preserves a ``spectral memory'' of the original eigenvalues.
In Section~\ref{sec-reson} we show, for example, that
if the area $|\Sigma| \to 0 $ then \emph{the embedded eigenvalues $\epsilon_l$
turn into complex poles of the resolvent
of $H_{\alpha, \beta }$.} These poles are given by $z_l = \epsilon_l +o(|\Sigma | ) $ with $\Im z_l <0$; the latter confirms that $z_l $ is localized on the second sheet continuation.
We derive the explicit formula for the lowest order of the imaginary component of $z_l$ and show that it admits the following asymptotics:
$$\Im z_l = \mathcal{O} (|\Sigma|^2)\,.$$ The poles of the resolvent correspond to resonances in the system governed by $H_{\alpha , \beta }$,
and $\Im z_l $ is related to the width of the resonance, given by $-2\Im z_l $.
\\ \\
Finally, let us mention that various types of resonators in waveguides and layers have already been analyzed.
For example, in \cite{KS} the authors study resonances induced
by the twisting of a waveguide, which is responsible for breaking the symmetry.
A planar waveguide with narrows playing the role of resonators
has been studied in \cite{BKNPS}. On the other hand, straight Dirichlet or Neumann waveguides with windows or barriers
inducing resonances have been analyzed in \cite{BEG, Popov2000, Popov2003}.
Furthermore, resonances in curved waveguides with finite branches
have been described in \cite{DNG}. It is also worth mentioning that quantum waveguides with electric and magnetic fields
have been considered, cf.~\cite{BPS, BG}.
Moreover, various types of resonators induced by delta potentials
in two- or three-dimensional systems
have been analyzed. Let us mention the results of \cite{EK3, Kondej2012, KK013},
which describe resonances in terms of symmetry-breaking parameters or by means of the tunnelling effect.
In \cite{KondejLeonski2014} the authors consider a straight two-dimensional waveguide with a semitransparent perpendicular
barrier modeled by a delta potential. It was shown that after slightly changing the slope of the barrier, the embedded eigenvalues turn into resonances;
the widths of these resonances can be expressed in terms of the barrier slope.
The present paper is, in a sense, an extension of \cite{KondejLeonski2014}. However, the strongly singular character of the delta
interaction supported on $I$ means that even an infinitesimal change of the slope of $I$ cannot be understood as a small perturbation.
Therefore the resulting resolvent poles are not interesting from the physical point of view,
since they rapidly escape far away
from the real line.
In the present model the role of the
small perturbation is played by the delta potential on $\Sigma$, which
leads to resonances.
Let us also mention that the spectral properties of quantum waveguides and layers with delta
interactions have been studied, for example, in \cite{EKrecirik99, EN}. The results of \cite{EKrecirik99}
concern weakly singular potentials, while in \cite{EN} the authors consider a strongly singular
interaction. In the present paper we combine both types of delta interaction and analyze how they affect
each other.
\bigskip
\noindent General notations:
\\ $\bullet$ $\mathbb{C}$ stands for the complex plane and $\mathbb{C}_\pm $ for the upper, respectively, lower half-plane.
\medskip
\noindent $\bullet$ $\|\cdot \|$, $(\cdot ,\cdot )$ denote the norm and the scalar product
in $L^2 (\Omega )$ and $(\cdot, \cdot )_{\Sigma}$ defines the scalar product in $L^2 (\Sigma )$.
\medskip
\noindent $\bullet$ Suppose that $A$ stands
for a self-adjoint operator. We denote by $\sigma _\mathrm{ess}(A)$,
$\sigma _\mathrm{p}(A)$ and $\rho (A)$, respectively, the essential spectrum, the point spectrum and the resolvent set of $A$.
\medskip
\noindent $\bullet$ The notation $C$ stands for a constant whose value can vary from line
to line.
\section{Parallel walls connected by a wire inducing embedded eigenvalues}
\subsection{Free particle in the layer.}
Let $\Omega \subset \mathbb{R}^3$ stand for a layer defined
by $\Omega := \{ x= ( \underline{x}, x_3)\,:\,\underline{x}
\in \mathbb{R}^2 \,,x_3 \in [0, \pi ]\}$; in the following we adopt the convention
$\underline{x} = (x_1, x_2)
\in \mathbb{R}^2 $. \\
The ``free'' Hamiltonian is determined by $$H= -\Delta \,:\,
\mathrm{D}(H)= W^ {2,2} (\Omega )\cap W_0 ^{1,2}(\Omega )\to L^2 (\Omega )$$ and it admits the following decomposition
\begin{equation}\label{eq-Hamilton}
H=(-\Delta ^{(2)}) \otimes I +I \otimes (-\Delta^{(1)})\,\quad \mathrm{on}\quad L^2 (\mathbb{R}^2) \otimes L^2 (0,\pi )\,,
\end{equation}
where $ \Delta^{(2)} \,:\, \mathrm{D}(\Delta^{(2)}) = W^{2,2}(\mathbb{R}^2)\to L^2 (\mathbb{R}^2)$ stands for the
two-dimensional Laplacian and $\Delta^{(1)} \,:\, \mathrm{D}(\Delta^{(1)}) = W^{2,2}(0,\pi)\cap W_0^{1,2}(0,\pi)\to
L^2 (0,\pi)$ determines the one-dimensional Laplacian with Dirichlet boundary conditions.
To define the resolvent of $H$ it is useful to note that
the sequence $\{\chi_n\}_{n=1}^\infty$ given by $$\chi_n (x_3) : = \sqrt{\frac{2}{\pi}}\sin ( n x_3)\,,\quad n\in \mathbb{N}$$
forms an
orthonormal basis in $L^2 (0, \pi)$. Suppose that $z\in \mathbb{C}\setminus [1,\infty )$. Then
$R(z):= (-\Delta -z)^{-1}$ defines an integral operator with the kernel
\begin{equation}\label{eq-integralH}
\mathcal{G}(z; \underline{x}, \underline{ x}',x_3,x'_3):= \frac{1}{2\pi} \sum_{n=1}^\infty K_0 (\kappa_n (z)
|\underline{x}- \underline{x}'|) \chi_n (x_3)\chi_n (x_3 ' )\,,
\end{equation}
where $K_0 (\cdot)$ denotes the Macdonald function, cf.~\cite{AS}, and
\begin{equation}\label{eq-defk}
\kappa _n (z):= -i\sqrt{z-n^2}\,, \quad \Im \sqrt{z-n^2} >0 \,.
\end{equation}
In the following we will also use the abbreviation $\mathcal{G} (z)$ for (\ref{eq-integralH}). The threshold of the spectrum of $H$ is determined by the lowest discrete transversal energy, i.e.~$1$.
Moreover, the spectrum is purely absolutely continuous and, consequently, takes the form
$$
\sigma (H) = [1,\infty )\,.
$$
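As an aside, for a real spectral parameter $z<1$ all $\kappa_n (z)=\sqrt{n^2-z}$ are positive and the series (\ref{eq-integralH}) converges rapidly, thanks to the exponential decay of $K_0$. A minimal numerical sketch (Python with SciPy; the truncation order and the sample arguments are arbitrary choices) evaluates the kernel as follows.
\begin{verbatim}
import numpy as np
from scipy.special import k0     # Macdonald function K_0, real arguments

def chi(n, x3):
    return np.sqrt(2.0 / np.pi) * np.sin(n * x3)

def green_kernel(z, rho, x3, y3, nmax=200):
    # truncated series (eq-integralH); for real z < 1 one has
    # kappa_n(z) = sqrt(n^2 - z) > 0 and K_0 decays exponentially
    total = sum(k0(np.sqrt(n**2 - z) * rho) * chi(n, x3) * chi(n, y3)
                for n in range(1, nmax + 1))
    return total / (2.0 * np.pi)

print(green_kernel(z=0.5, rho=1.0, x3=np.pi / 2, y3=np.pi / 2))
\end{verbatim}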
\subsection{Layer with a perpendicular wire: the embedded eigenvalue phenomenon.}
We introduce a wire defined by a straight segment of length $\pi$ perpendicular to the walls.
The presence of the wire will be modelled by a delta interaction supported on $I\subset \Omega $, where
$I:= \{(0,0)\}\times [0,\pi]$.
\\
In view of the radial symmetry, the operator with delta interaction on $I$ admits a natural
decomposition on
$L^2 (\Omega)= L^2 (\mathbb{R}^2)\otimes L^2 (0,\pi )$ and acts in the component $L^2 (\mathbb{R}^2)$ as the Schr\"odinger operator
with a one-point interaction. Therefore, the delta potential can be determined by appropriate boundary conditions, cf.~\cite[Chap.~1.5]{AGHH},
which can be implemented in each sector of the transversal
energy separately. For this purpose we decompose a function $\psi\in L^2 (\Omega )$ as
$\psi (x)=\sum_{n=1}^\infty \psi_n (\underline{x} )\chi_n (x_3)$, where
$\psi_n (\underline{x} ):= \int_{0}^\pi \psi (\underline{x}, x_3 ) \chi_n (x_3)\mathrm{d}x_3$.
\begin{description}
\item[$D_1$)] We say that a function $\psi $ belongs to the set $D'\subset
W^{2,2}_{\mathrm{loc}} (\Omega \setminus I) \cap L^2 (\Omega )$ if $\Delta \psi \in L^2 (\Omega )$, $\psi|_{\partial \Omega }=0$ and the following limits
$$
\Xi_n (\psi ):= - \lim_{|\underline{x}|\to 0 }\frac{1}{\ln |\underline{x}|} \psi_n (\underline{x})\,, \qquad \Omega_n (\psi):=
\lim_{|\underline{x}|\to 0 } \left( \psi_n (\underline{x} ) - \Xi_n (\psi )\ln |\underline{x} |\right)$$
are finite.
\item[$D_2$)] For $\alpha \in \mathbb{R}$, we define the set
\begin{equation}\label{eq-bcalpha}
\mathrm{D} (H_\alpha ):= \{ \psi \in D' \,:\, 2\pi \alpha \Xi_n (\psi ) = \Omega_n (\psi)\,\,\,\mathrm{for}\,\,\,\mathrm{any}\,\,\,n\in \mathbb{N} \}\,
\end{equation}
and the operator $H_\alpha \,:\, \mathrm{D} (H_\alpha ) \to L^2 (\Omega )$ which acts as
$$ H_\alpha \psi (x)= -\Delta \psi (x)\,,\quad \mathrm{for }\quad x\in \Omega \setminus I\,.$$
\end{description}
The resulting operator $H_\alpha \,:\,D(H_\alpha )\to L^2 (\Omega )$ coincides with
\begin{equation}\label{eq-Hamiltonalpha}
(-\Delta_\alpha ^{(2)}) \otimes I +I \otimes (-\Delta^{(1)})\,\quad \mathrm{on}\quad L^2 (\mathbb{R}^2) \otimes L^2 (0,\pi )\,,
\end{equation}
where $ \Delta_\alpha ^{(2)} \,:\, \mathrm{D}(\Delta_\alpha ^{(2)}) \to L^2 (\mathbb{R}^2)$ stands for the
two-dimensional Laplacian with a point interaction, cf.~\cite[Chap.~1.5]{AGHH}, with the domain $\mathrm{D}(\Delta_\alpha ^{(2)}) $. Consequently, $H_\alpha $
is self-adjoint and its spectral properties will be discussed in the next section.
\subsection{Resolvent of $H_\alpha $.}
Suppose that $z\in \mathbb{C}_+$.
We use the standard notation $R_\alpha (z)$ for the resolvent operator, i.e. $R_\alpha (z):= (H_\alpha - z)^{-1}$.
To derive the explicit resolvent formula we introduce
$$
\omega_n (z; x):= \frac{1}{2\pi } K_0 (\kappa_n (z)| \underline{x} | ) \chi_n (x_3)\,, \quad n\in \mathbb{N}\,;
$$
in the following we will also use the abbreviation $\omega_n (z)= \omega_n (z; \cdot )$.
The following theorem states the desired result.
\begin{theorem}\label{th-resl1} The essential spectrum of $H_\alpha $ is given by
\begin{equation}\label{eq-ess}
\sigma_{\mathrm{ess}} (H_\alpha )=[1, \infty )\,.
\end{equation}
Furthermore, let \footnote{Analogously to the previous discussion
we assume $\Im \sqrt{z-n^2} >0$. The logarithmic function $z\mapsto \ln z $ is defined in the cut
plane $-\pi <\arg z <\pi$
and admits a continuation to the entire logarithmic Riemann surface.}
\begin{equation}\label{eq-defGamma}
\Gamma_n (z):= \frac{1}{2\pi } \left( 2\pi \alpha +s_n (z)\right)\,,\quad
\mathrm{where }\quad s_n (z):=-\psi(1) +\ln \frac{\sqrt{z-n^2}}{2i}\,.
\end{equation}
Suppose that $z\in \mathbb{C}\setminus [1,\infty )$ and $\Gamma _n (z) \neq 0$ for all $n\in\mathbb{N}$. Then $z\in \rho (H_\alpha )$ and the operator $R_\alpha (z)$ admits the Krein-like form:
\begin{equation}\label{eq-resolalpha}
R_\alpha(z)= R(z) +\sum_{n=1}^\infty \Gamma_n (z)^{-1} (\omega_n (\bar{z} ), \cdot )\omega_n(z)\,.
\end{equation}
\end{theorem}
\begin{proof}
Our first aim is to show that (\ref{eq-resolalpha}) defines the resolvent of $H_\alpha $.
The operator $H_\alpha $ is defined as a self-adjoint extension of $-\Delta |_{C^\infty _0 (\Omega \setminus I)}$.
Suppose that $f\in C^\infty _0 (\Omega \setminus I)$. Then $g:=(-\Delta -z )f\in C^\infty _0 (\Omega \setminus I)$.
Employing the fact that $\omega_n (z)=\mathcal{G}(z)\ast (\delta \chi_n )$,
where $\mathcal{G}(z)$ is the kernel defined
by (\ref{eq-integralH}) and $\delta =\delta (\underline{x})$,
we conclude that $(\omega_n (\bar{z} ), g)= \langle \delta \chi_n, f\rangle_{-1,1} = 0$
where $\langle\cdot , \cdot \rangle_{-1,1}$ denotes the duality
between $W^{-1,2}(\Omega )$ and $W^{1,2}(\Omega )$. This, consequently,
implies
$R_\alpha (z) (-\Delta -z )f = R (z) (-\Delta -z )f= f$ in view of
(\ref{eq-resolalpha}), which means that $R_\alpha (z)$ defines the resolvent of a self-adjoint
extension of $-\Delta |_{C^\infty _0 (\Omega \setminus I)}$. To complete the proof we have to show that
any function $g=R_\alpha (z)f$ satisfies the boundary conditions (\ref{eq-bcalpha}).
In fact, $g$ admits the unique decomposition $g=g_1+g_2$, where $g_1:= R(z)f$ and
$g_2= \sum_{n=1}^\infty \Gamma_n (z)^{-1} (\omega_n (\bar{z} ), f )\omega_n(z)$.
Therefore, a nontrivial contribution to $\Xi_n (g)$ comes from $g_2$ since $g_1 \in W^{2,2}(\Omega )$.
Employing the asymptotic behaviour
of the Macdonald function, cf.~\cite{AS}
\begin{equation}\label{eq-Kexp1}
K_0 (\rho)= \ln \frac{2}{\rho} +\psi(1)+\mathcal{O}(\rho)\,,
\end{equation}
we get $\Xi_n (g)= \frac{1}{2\pi } \Gamma_n (z)^{-1}(\omega_n (\bar{z}), f)$ and
$$\Omega_n (g)= (1- \frac{1}{2\pi } \Gamma_n (z)^{-1}s_n (z) ) (\omega_n (\bar{z}), f) =
\alpha \Gamma_n (z)^{-1} (\omega_n (\bar{z}), f)\,.$$
Using (\ref{eq-defGamma}) one obtains (\ref{eq-bcalpha}). This completes the proof of (\ref{eq-resolalpha}).
The stability of the essential spectrum can be concluded in an analogous way as in \cite[Thm.~3.1]{BEKS}. The key step
is to show that $R(z)-R_\alpha (z)$ is compact. The statement can be proved by relying on the
compactness of the trace map $S\,:\,
W^{2,2}(\Omega )\to L^2 (I ) $, which follows from the boundedness of the trace map, cf.~\cite[Chap.~1,~Thm.~8.3]{LM}, and the compactness theorem, cf.~\cite[Chap.~1,~Thm.~16.1]{LM}.
This implies, in view of boundedness of $R(z)\,:\, L^2 (\Omega )\to W_0^{1,2}(\Omega )\cap W^{2,2}(\Omega )$, that
$SR(z)\,:\, L^2 (\Omega )\to L^2(I)$ is compact. Employing the resolvent formula, cf.~\cite{Po}, and the fact
that the remaining operators contributing to
$R(z)-R_\alpha (z)$ are bounded we conclude that $R(z)-R_\alpha (z)$ is compact.
\end{proof}
\begin{remark} {\rm The spectral analysis developed in this work is mainly based
on the resolvent properties. In the following we will use the results of
\cite{BEKS, BEHL, Po, Po2} where strongly as well as weakly singular potentials were considered.
}
\end{remark}
In the following theorem we state the existence of eigenvalues of $H_\alpha $.
\begin{theorem} \label{th-ev}
Let $\mathcal{A}_\alpha := \{n\in \mathbb{N}
\,:\, \xi_\alpha +n^2 <1 \}$. Each $\epsilon_n:= \xi_\alpha +n^2$ with $n\in \mathcal{A}_\alpha $ defines a discrete
eigenvalue of $H_\alpha $ with the corresponding eigenfunction $\omega_n:= \omega_n (\epsilon_n)$. In particular, this means that for any $\alpha$ the
operator $H_\alpha $ has at least one eigenvalue $\epsilon_1$ below the threshold of the essential spectrum.
\\ The operator $H_\alpha $ has infinitely many embedded eigenvalues. More precisely, for any $n\in \mathbb{N} \setminus \mathcal{A}_\alpha $
the number $\epsilon_n:= \xi_\alpha +n^2$ determines an embedded eigenvalue. In particular,
there exists $\tilde{n} \in \mathbb{N} \setminus \mathcal{A}_\alpha $
such that $\epsilon_n \in ((n-1)^2, n^2)$ for any $n> \tilde{n}$.
\end{theorem}
\begin{proof}
The proof is based on the Birman-Schwinger argument which, in view of (\ref{eq-resolalpha}), reads
$$z\in \sigma_\mathrm{p} (H_\alpha )\quad \Leftrightarrow \quad \exists \, n\in \mathbb{N}\,:\,
\Gamma _n(z)=0 \,,$$ cf.~\cite[Thm.~2.2]{Po2}. Note that, given $n\in \mathbb{N}$,
the function $z\mapsto \Gamma_n (z)$, $z \in \{ z\in \mathbb{C}\,:\, \Im \sqrt{z-n^2}> 0\} $, has a unique
zero at $z=\xi_\alpha +n^2$, i.e.
$$
\Gamma_n (\xi_\alpha +n^2) = 0\,.
$$
Finally, it follows, for example, from \cite[Thm.~3.4]{Po2} that the corresponding eigenfunction takes the form $\mathcal{G} (\epsilon_n)\ast \chi_n \delta $. This completes the proof.
\end{proof}
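The zero of $\Gamma_n$ at $\xi_\alpha +n^2$ can also be confirmed numerically. The sketch below (Python with NumPy; the branch convention $\Im \sqrt{z-n^2}>0$ is implemented by hand and the value of $\alpha $ is an arbitrary choice) evaluates $\Gamma_n (\xi_\alpha +n^2)$ for the first few $n$; note that only negative values of $z-n^2$ occur in this check.
\begin{verbatim}
import numpy as np

PSI1 = -0.5772156649015329   # psi(1)

def sqrt_up(w):
    # branch of the square root with positive imaginary part
    s = np.sqrt(complex(w))
    return s if s.imag > 0 else -s

def Gamma(n, z, alpha):
    # cf. (eq-defGamma): Gamma_n(z) = (2 pi alpha + s_n(z)) / (2 pi)
    s_n = -PSI1 + np.log(sqrt_up(z - n**2) / 2j)
    return (2.0 * np.pi * alpha + s_n) / (2.0 * np.pi)

alpha = 0.1
xi_alpha = -4.0 * np.exp(2.0 * (-2.0 * np.pi * alpha + PSI1))
for n in range(1, 4):
    print(n, Gamma(n, xi_alpha + n**2, alpha))   # ~ 0 up to rounding
\end{verbatim}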
\section{Surface impurity } \label{sec-surface}
We define a finite smooth parameterized surface $\Sigma \subset \Omega $ being a graph of the map $ U\ni q=(q_1, q_2) \mapsto x (q) \in \Omega $.
The surface element can be calculated by means of the standard formula
$\mathrm{d} \Sigma = |\partial_{q_1}x (q)\times \partial_{q_2}x (q) |\mathrm{d}q$.
Additionally we assume that $\Sigma \cap I = \emptyset$.
Furthermore, let $n\,:\, \Sigma \to \mathbb{R}^3 $ stand for the unit normal vector (with an arbitrary orientation)
and let $\partial_n $ denote the normal derivative defined by the vector $n$.
Relying on the Sobolev trace theorem we state that the trace map $W^{1,2}(\Omega ) \ni \psi \mapsto \psi |_{\Sigma } \in L^2 (\Sigma )$ constitutes
a bounded operator; we use the notation $(\cdot , \cdot )_{\Sigma }$ for the
scalar product in $L^2 (\Sigma )$.
Given $\beta \in \mathbb{R}\setminus \{0\}$ we define the following boundary conditions: suppose
that $\psi \in C (\Omega ) \cap C^1 (\Omega\setminus \Sigma ) $
satisfies
\begin{equation}\label{eq-bc2}
\partial_n ^+ \psi|_{\Sigma } - \partial_n ^- \psi|_{\Sigma }= -\beta \psi|_{\Sigma }\,,
\end{equation}
where the partial derivatives contributing to the above expression
are defined as the limits from the positive, resp.~negative, side of $\Sigma $,
and the signs are understood with respect
to the direction of $n$.
\begin{description}
\item[$D_3$)] We say that a function $\psi $ belongs to the set $\breve{D}\subset
W^{2,2}_{\mathrm{loc}} (\Omega \setminus (I\cup \Sigma ))$ if $\Delta \psi \in L^2 (\Omega )$, $\psi|_{\partial \Omega }=0$ and the limiting equations (\ref{eq-bcalpha}) and (\ref{eq-bc2}) are satisfied.
\item[$D_4$)] Define the operator which for $f\in \breve{D}$ acts as $-\Delta f (x)$ for $x\in \Omega \setminus (I\cup \Sigma)$,
and let $H_{\alpha, \beta }\,:\, \mathrm{D} (H_{\alpha, \beta }) \to L^2 (\Omega )$ stand for its closure.
\end{description}
To derive the resolvent of $H_{\alpha , \beta }$ we define the operator acting from
$L^{2} (\Omega )$ to $L^{2}(\Sigma )$ as $
R_{\alpha, \Sigma }(z)f= (R_{\alpha }(z)f )|_{L^2(\Sigma )}
$. Furthermore, we introduce the operator from $L^2 (\Sigma )$ to $L^2 (\Omega )$
defined by $\mathrm{R}_{\alpha, \Sigma } (z)f = \mathcal{G}_{\alpha } \ast f \delta $, where $\mathcal{G}_{\alpha }$ stands
for the kernel of (\ref{eq-resolalpha}). Finally, we define $\mathrm{R}_{\alpha, \Sigma \Sigma } (z)\,:\, L^2 (\Sigma ) \to L^2 (\Sigma )$
by $\mathrm{R}_{\alpha, \Sigma \Sigma } (z) f= (\mathrm{R}_{\alpha, \Sigma } (z)f)|_{\Sigma }$. In view of
(\ref{eq-resolalpha}) the latter takes the following form
\begin{equation}\label{eq-resolalphaembed}
\mathrm{R}_{\alpha, \Sigma \Sigma } (z)= \mathrm{R}_{\Sigma \Sigma }(z) +\sum_{n=1}^\infty \Gamma_n (z)^{-1} (w_n (\bar{z} ), \cdot )_{\Sigma }w_n(z)\,,
\end{equation}
where $w_n(z):= \omega_n(z)|_{\Sigma }$ and $ \mathrm{R}_{\Sigma \Sigma }(z)\,:\, L^2 (\Sigma )\to L^2 (\Sigma )$
stands for the bilateral embedding of $R(z)$.
Following the strategy developed in \cite{Po} we define the set $Z \subset \rho (H_\alpha )$ such that $z$ belongs to $Z$ if the operators
$$(I-\beta \mathrm{R}_{\alpha , \Sigma \Sigma}(z))^{-1}\,,\quad \mathrm{and} \quad (I-\beta \mathrm{R}_{\alpha , \Sigma \Sigma}(\bar{z}))^{-1}$$ acting from $L^2(\Sigma )$ to $L^2 (\Sigma)$
exist and are bounded.
Our aim is to show that
\begin{equation}\label{eq-Z}
Z\neq \emptyset\,.
\end{equation}
To this end we define the auxiliary quadratic form, bounded from below,
$$
\int_{\Omega } |\nabla \psi |^2\mathrm{d}x - \beta \int_{\Sigma} \left| \psi |_\Sigma \right|^2\mathrm{d}\Sigma\,,\quad \psi\in W^{1,2}_0 (\Omega )\,.
$$ Let $\breve{H}_\beta $ stand for the operator associated to the above form in the sense of the first representation theorem, cf.~\cite[Chap.VI]{Kato}. Following the arguments from~\cite{BEKS}
we conclude that $I-\beta \mathrm{R}_{\Sigma \Sigma}(z)\,:\,L^2 (\Sigma)\to L^2 (\Sigma)$ defines the Birman--Schwinger operator for $\breve{H}_\beta $. Using Thm.~2.2~of~\cite{Po2} one obtains
$$
z \in \rho (\breve{H}_\beta )\,\, \Leftrightarrow \,\, 0\in \rho (I-\beta \mathrm{R}_{\Sigma \Sigma}(z))\,.
$$
In the following we are interested in a negative spectral parameter and thus we
assume $z=-\lambda $, where $\lambda >0$.
Since the spectrum of $\breve{H}_\beta$ is bounded from below, we conclude that
\begin{equation}\label{eq-rhoRss}
0\in \rho (I-\beta \mathrm{R}_{\Sigma \Sigma }(-\lambda) )\,,
\end{equation}
for $\lambda$ large enough.
The next step is to find a bound for the second component contributing to (\ref{eq-resolalphaembed}). In fact, it can be majorized by
$$
\sum_{n=1}^\infty \left| \Gamma_n (-\lambda)^{-1} \right| \|w_n (-\lambda )\|_{\Sigma }^2 \leq C
\sum_{n=1}^\infty \|w_n (-\lambda )\|_{\Sigma }^2\,,
$$
where we applied the uniform bound $\left| \Gamma_n (-\lambda)^{-1} \right| \leq C$, cf.~(\ref{eq-defGamma}).
Using the large argument expansion, cf.~\cite{AS},
\begin{equation}\label{eq-K0}
K_0 (z)\sim \sqrt{\frac{\pi }{2z}}\mathrm{e} ^{-z}
\end{equation}
we get the estimate
$$
|w_n (-\lambda ,x)| \leq C \frac{1}{\lambda^{1/4}}\mathrm{e}^{-r_{\mathrm{min}}(n^2+\lambda )^{1/2}}\quad
\mathrm{for} \,\,\,\lambda \to \infty\,,
$$
where $r_{\mathrm{min}}= \min _{x\in \Sigma } |\underline{x}|$. This implies that the norm of
the second component of (\ref{eq-resolalphaembed}) behaves as $o(\lambda^{-1})$. Combining this result with
(\ref{eq-rhoRss}) we conclude that $0\in \rho (I-\beta \mathrm{R}_{\alpha, \Sigma\Sigma} (-\lambda ))$ for $\lambda $ sufficiently large, which shows that (\ref{eq-Z}) holds.
To realize the strategy of~\cite{Po} we observe that the embedding operator
$\tau ^\ast \,:\, L^2 (\Sigma )\to W^{-1,2}(\Omega )$ acting as $\tau ^\ast f= f\ast \delta $ is bounded and, moreover,
\begin{equation}\label{eq-tau}
\mathrm{Ran}\, \tau ^\ast \cap L^2 (\Omega ) = \{0\} \,.
\end{equation}
Suppose that $z\in Z$. Using (\ref{eq-tau}) together with Thm.~2.1~of~\cite{Po} we conclude that the expression
\begin{equation}\label{eq-resolbeta}
R_{\alpha, \beta }(z)= R_{\alpha } (z)+ \mathrm{R}_{\alpha, \Sigma } (z)(I-
\beta \mathrm{R}_{\alpha , \Sigma \Sigma}(z))^{-1} R_{\alpha, \Sigma } (z)
\end{equation}
defines the resolvent of a self-adjoint operator.
\begin{theorem} We have
$$
R_{\alpha, \beta }(z)= (H_{\alpha , \beta } - z)^{-1}\,.
$$
\end{theorem}
\begin{proof}
To show the statement we repeat the strategy applied in the proof of Theorem~\ref{th-resl1}.
Operator $H_{\alpha, \beta }$ is defined as the self adjoint extension of $-\Delta |_{C^\infty_0 (\Omega \setminus (I \cup \Sigma ))}$
determined by imposing the boundary conditions (\ref{eq-bcalpha}) and (\ref{eq-bc2}). The idea is to show that
any function from the domain $\mathrm{D} (H_{\alpha, \beta })$ satisfies (\ref{eq-bcalpha}) and (\ref{eq-bc2}). Since the proof can be carried out
by mimicking the arguments from the proof of Theorem~\ref{th-resl1}, we omit further details.
\end{proof}
Furthermore, repeating the arguments from the proof of Theorem~\ref{th-resl1} we state that
$$
\sigma_{\mathrm{ess}} (H_{\alpha , \beta }) = [1\,,\infty )\,.
$$
\bigskip
\noindent {\bf Notation.} In the following we will be interested in the spectral
asymptotics for small $|\Sigma |$. Therefore, we introduce an appropriate scaling with respect to a point $x_0 \in \Sigma$.
Namely, for a small positive parameter $\delta$ we define $\Sigma_\delta$ as the graph of $U \ni q \mapsto x_\delta (q) \in \Omega $, where
$$
x_\delta (q):= \delta x(q)-\delta x_0+x_0\,.
$$
For example, a sphere of radius $R$ centered at $x_0$ turns into the sphere of radius $\delta R$
after scaling. Note that the identity $|\partial_{q_1}x_\delta (q)\times \partial_{q_2}x_\delta (q) |=\delta^2
|\partial_{q_1}x(q)\times \partial_{q_2}x (q) |$ implies the scaling of the surface area $|\Sigma_\delta |= \delta^2 |\Sigma |$.
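The scaling relation $|\Sigma_\delta |=\delta^2 |\Sigma |$ can be checked directly from the surface-element formula. A small numerical sketch (Python with NumPy; the sample surface patch, the difference step and the quadrature order are arbitrary choices) is given below.
\begin{verbatim}
import numpy as np

x0 = np.array([0.0, 0.0, 1.0])      # scaling center x_0 on Sigma

def x_map(q1, q2):                  # a sample C^2 surface patch over U = [0,1]^2
    return np.array([q1, q2, 1.0 + 0.2 * np.sin(q1) * np.sin(q2)])

def x_delta(q1, q2, delta):         # x_delta(q) = delta x(q) - delta x_0 + x_0
    return delta * x_map(q1, q2) - delta * x0 + x0

def area(delta, n=100, h=1e-6):
    # midpoint quadrature of dSigma = |d_q1 x cross d_q2 x| dq
    q = (np.arange(n) + 0.5) / n
    A = 0.0
    for a in q:
        for b in q:
            d1 = (x_delta(a + h, b, delta) - x_delta(a - h, b, delta)) / (2 * h)
            d2 = (x_delta(a, b + h, delta) - x_delta(a, b - h, delta)) / (2 * h)
            A += np.linalg.norm(np.cross(d1, d2)) / n**2
    return A

print(area(1.0), area(0.5) / 0.5**2, area(0.1) / 0.1**2)  # all close to |Sigma|
\end{verbatim}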
\section{Preliminary results for the analysis of poles}
The Birman-Schwinger argument relates the eigenvalues of $H_{\alpha, \beta }$ to the zeros
of $I-\beta \mathrm{R}_{\alpha , \Sigma \Sigma }(z)$ determined by the condition
$\ker (I-\beta \mathrm{R}_{\alpha , \Sigma \Sigma }(z))\neq \{0\}$.
To recover resonances we show that $\mathrm{R}_{\alpha , \Sigma \Sigma } (z)$ has a second sheet continuation
$\mathrm{R}_{\alpha , \Sigma \Sigma }^{\mathit{II}} (z)$ and the statement
\begin{equation}\label{eq-resonance}
\ker (I-\beta \mathrm{R}^{\mathit{II}}_{\alpha , \Sigma \Sigma }(z))\neq \{0\}
\end{equation}
holds for certain $z\in \mathbb{C}_-$.
\subsection{Analytic continuation of $\mathrm{R}_{\alpha , \Sigma \Sigma }(z)$}
We start with the analysis of the first component of
$\mathrm{R}_{\alpha , \Sigma \Sigma }(z)$ determined by $\mathrm{R}_{\Sigma\Sigma }(z)$, cf.~(\ref{eq-resolalphaembed}).
Since $\mathrm{R}_{\Sigma \Sigma }(z)$ is defined by means of the embedding of
kernel $\mathcal{G} (z)$, see~(\ref{eq-integralH}),
the following lemma will be useful for further discussion.
\begin{lemma} \label{le-contR}
For any $k\in \mathbb{N}$ the function $\mathcal{G}(z)$ admits the second sheet continuation
$\mathcal{G}^{\mathit{II}}(z)$ through $J_k:= (k^2, (k+1)^2)$
to an open set $\Pi_k \subset \mathbb{C}_-$ with $\partial \Pi_k \cap \mathbb{R} =J_k$. Moreover,
$\mathcal{G}^{\mathit{II}}(z)$ takes the form
$$
\mathcal{G}^{\mathit{II}} (z; \underline{x},\underline{x}', x_3, x_3') = \frac{1}{2\pi} \sum_{n=1}^\infty Z_0 (i\sqrt{z-n^2} |\underline{x}-\underline{x}'|)\chi_n (x_3) \chi_n (x'_3)\,,
$$
where
\begin{equation}\label{eq-defZ}
Z_0 (i\sqrt{z-n^2} \rho ) =
\left\{\begin{array}{lr}
K_0 (-i\sqrt{z-n^2}
\rho ), & \mathrm{for } \quad n>k \\
K_0 (-i\sqrt{z-n^2}
\rho )+i\pi I_0 (i\sqrt{z-n^2}
\rho ), & \mathrm{for } \quad n\leq k \,,
\end{array}\right.
\end{equation}
and $I_0 (\cdot)$ denotes the modified Bessel function.
\end{lemma}
\begin{proof}
The proof is based on the edge-of-the-wedge theorem, i.e.~our aim is to establish the equality
$$
\mathcal{G}(\lambda+i 0) = \mathcal{G}^{\mathit{II}}(\lambda-i 0)\,,
$$
for $\lambda \in J_k $. In fact, it suffices to show that the analogous formula holds for $Z_0$ and $z=\lambda \pm i 0
$. \\
Assume first that $n>k$. Then $\sqrt{\lambda -n^2 \pm i0}= \sqrt{\lambda -n^2 }$ since $\Im \sqrt{\lambda -n^2 } >0$.
Furthermore, the function $K_0 (\cdot)$ is analytic in the upper half-plane, consequently,
we have $ K_0 (-i\sqrt{\lambda -n^2\pm i0} \rho )
= K_0 (\sqrt{n^2 -\lambda } \rho )$.\\
Assume now that $n \leq k$. Then $\sqrt{\lambda -n^2 \pm i0}= \pm \sqrt{\lambda -n^2 }\in \mathbb{R}$ which implies \begin{equation}\label{eq-Kext} K_0 (-i\sqrt{\lambda -n^2 + i0} \rho )=
K_0 (-i\sqrt{\lambda -n^2 } \rho )\,.\end{equation}
On the other hand, using the analytic continuation formulae
$$
K_0 (z \mathrm{e}^{m\pi i })=K_0 (z) - i\,m\pi I_0 (z)\, \quad \mathrm{and} \quad I_0 (z \mathrm{e}^{m\pi i }) = I_0 (z)\,,
$$ for $m\in \mathbb{N}$, we get \begin{eqnarray}
\nonumber
Z_0 (\sqrt{\lambda -n^2 - i0} \rho ) &=&
K_0 (i\sqrt{\lambda -n^2 } \rho )+ i\,\pi I_0 (-i\sqrt{\lambda -n^2 } \rho )
\\ \nonumber
&=& K_0 (-i\sqrt{\lambda -n^2 } \rho )\,.
\end{eqnarray} This completes the proof.
\end{proof}\\
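The continuation formula $K_0 (z \mathrm{e}^{\pi i})=K_0 (z) - i\pi I_0 (z)$ used above can be verified with arbitrary-precision arithmetic. A minimal sketch (Python with the mpmath library; the sample argument is an arbitrary choice, and the limit onto the negative real axis is taken from above via a tiny imaginary shift) reads:
\begin{verbatim}
import mpmath as mp

x = mp.mpf('0.7')
# K_0(x e^{i pi}): approach the negative real axis from above
lhs = mp.besselk(0, mp.mpc(-x, '1e-30'))
rhs = mp.besselk(0, x) - 1j * mp.pi * mp.besseli(0, x)
print(mp.nstr(lhs, 10))
print(mp.nstr(rhs, 10))   # the two values agree to working precision
\end{verbatim}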
\begin{figure}
\includegraphics[width=.60\textwidth]{rys3-correction.pdf}
\end{figure}
The above lemma provides the second sheet continuation of $R (z)$ as well as of $\mathrm{R}_{\Sigma\Sigma }(z)$;
the latter continuation is defined
as the bilateral embedding of $R^{\mathit{II}} (z)$ to $L^2 (\Sigma )$.
\begin{remark} \rm{
Note that for each $k\in \mathbb{N}$ the analytic continuation of $\mathcal{G} (\cdot)$ through $J_k $ leads to
different branches. Therefore, we have to keep in mind that the analytic continuation of $\mathcal{G} (\cdot)$
is $k$-dependent.}
\end{remark}
In the next lemma we show that the operator $\mathrm{R}^{\mathit{II}}_{\Sigma_\delta \Sigma_\delta }(z)$
is bounded and derive the asymptotics of its operator norm as $\delta \to 0 $.
\begin{lemma} \label{le-boundedR}
Assume that $k\in \mathbb{N}$ and $\lambda\in J_k$. Let $z=\lambda -i\varepsilon $, where $\varepsilon $ is a small positive number. The operator $\mathrm{R}^{\mathit{II}}_{\Sigma_\delta \Sigma_\delta} (z) $ is bounded and its norm admits the asymptotics
\begin{equation}\label{eq-boundedR}
\|
\mathrm{R}^{\mathit{II}}_{\Sigma_\delta \Sigma_\delta} (z) \| =o(1)\,,
\end{equation}
where the error term is understood with respect to $\delta$.
\end{lemma}
\begin{proof}
To estimate the kernel of $\mathrm{R}^{\mathit{II}}_{\Sigma_\delta \Sigma_\delta} (z)$ we use (\ref{eq-defZ}), i.e.
\begin{eqnarray} \label{eq-aux0}
\mathcal{G}^{\mathit{II}}_{\Sigma_\delta \Sigma_\delta }(z; \rho , x_3, x_3') = \frac{1}{2\pi } \left(\sum _{n=1}^\infty K_0 (\kappa_n (z)\rho )\chi_n (x_3)\chi_n (x_3')
\right. \\ \label{eq-aux0a} \left.+i\pi \sum _{n\leq k } I_0 (-\kappa_n (z)\rho )\chi_n (x_3)\chi_n (x_3') \right)\,,
\end{eqnarray}
where $$\rho =|\underline{x}-\underline{x}'|\,,\quad \mathrm{and } \quad x\,,x'\in \Sigma_\delta \,.$$
First, we consider (\ref{eq-aux0a}).
The expression $$\left|I_0 (-\kappa_n (z)\rho )\chi_n (\cdot )\chi_n (\cdot )\right|$$ is bounded.
Therefore the operator defined by the kernel (\ref{eq-aux0a}) is also bounded and the corresponding operator norm in
$L^2 (\Sigma_\delta )$ behaves as $|\Sigma_\delta |^2 =\mathcal{O}(\delta^4 )$. \\
The analysis of the term (\ref{eq-aux0}) is more involved because it consists of an infinite number of components.
The asymptotics
$$
K_0 (\kappa_n (z)\rho )-K_0 (n \rho ) = \ln \sqrt{1-\frac{z}{n^2}}\left( 1+\mathcal{O}(\rho )\right)\,
$$
implies
\begin{equation}\label{eq-aux1}
\sum_{n=1}^\infty \left| (K_0 (\kappa_n (z)\rho )-K_0 (n \rho ))\chi_n (x_3)\chi_n (x_3 ') \right|= C+ \mathcal{O}(\rho )\,;
\end{equation}
recall that $\kappa_n (z)$ is defined by (\ref{eq-defk}).
To estimate $\sum_{n=1}^\infty K_0 (n \rho )\chi_n (x_3)\chi_n (x_3 ') $ we borrow the idea
from~\cite{EN} and use~\cite[Chap. 10, II, 5.9.1.4.]{Prudnikov} to get
\begin{eqnarray}
\label{eq-aux2aa}
\sum_{n=1}^\infty K_0 (n\rho )\cos (na ) &=& \frac{\pi }{2\sqrt {\rho^2+a^2}}+\frac{1}{2} \left(\ln \frac{\rho }{4\pi } - \psi(1)\right) \\ \label{eq-aux2}&& +
\frac{\pi }{2}\sum _{n=1}^\infty \left( \frac{1}{\sqrt{(2n\pi +a)^2+\rho ^2}}-\frac{1}{2n \pi }\right)\\ \label{eq-aux2a}
&& + \frac{\pi }{2}\sum _{n=1}^\infty \left( \frac{1}{\sqrt{(2n\pi -a)^2+\rho^2}}-\frac{1}{2n \pi }\right)\,.
\end{eqnarray}
For $x,x'\in \Sigma_\delta $ the terms (\ref{eq-aux2}) and (\ref{eq-aux2a}) can be majorized by
$C\left(\sum_{n=1}^\infty \frac{1}{n^2}\right)$, i.e.~by a uniform constant.
Consequently, using the above estimates together with the identity
$\sin a \sin b =\frac{1}{2}\left( \cos(a-b)-\cos(a+b)\right)$ we get, after
straightforward calculations,
\begin{equation}\label{eq-13}
\left| \sum_{n=1}^\infty K_0 (\kappa_n (z)\rho )\chi_n (x_3)\chi_n (x_3 ') \right| \leq C\left(\frac{1}{|x-x'|}+
\ln |\underline{x}-\underline{x}'|\right)\,;
\end{equation}
the singular terms in the above estimates come from (\ref{eq-aux2aa}).
Let us analyze the right hand side of (\ref{eq-13}). First, we consider the component $\mathcal{P}(x,x'):=\frac{1}{|x-x'|} $, which
gives
$$
\int_{\Sigma_\delta }\mathcal{P} (x,x') \mathrm{d} \Sigma_\delta = (\mathcal{P} \ast \delta_{\Sigma_\delta} )(x) =
\int_{\Sigma_\delta } \frac{1}{|x -x' |} \mathrm{d}\Sigma_\delta \,.
$$
To conclude the desired convergence we employ the concept
of a generalized Kato measure. Namely, since the Dirac delta on $\Sigma_\delta $
defines a Kato measure, we obtain
$$
\sup _{x\in \Sigma_\delta } \int_{\Sigma_\delta } \mathcal{P} (x,x') \mathrm{d}\Sigma_\delta = o(1)\,,
$$
where the right hand side asymptotics is understood in the sense of convergence with respect to $\delta$.
Employing the Schur argument we conclude that the norm of the integral operator with the kernel
$\mathcal{P} (x,x') $ acting from $L^2 (\Sigma_\delta )$ to $L^2 (\Sigma_\delta )$ behaves as $o(1)$.
The term $\ln |\underline{x}-\underline{x}'|$ contributing to (\ref{eq-13}) can be estimated in an analogous way.
\end{proof}
To recover the second sheet continuation of $
\mathrm{R}_{\alpha, \Sigma \Sigma } (\cdot)$
it remains to construct the analytic extensions of $\omega_n (z)$ and $\Gamma _n (z)$, cf.~(\ref{eq-resolalphaembed}).
\begin{lemma} Given $n\in \mathbb{N}$, the functions
$\omega_n (z)$ and $\Gamma_n (z)$ admit the second sheet continuations $\omega^{\mathit{II}}_n (z)$ and $\Gamma^{\mathit{II}}_n (z)$
to $\Pi_k$ through $J_k= (k^2, (k+1)^2)$, $k\in \mathbb{N}$, defined by
\begin{equation}\label{eq-omega2sheet}
\omega^{\mathit{II}}_n (z; x ):= \frac{1}{2\pi }Z_0 (i\sqrt{z-n^2 }|\underline{x}|)\chi_n (x_3)\,,
\end{equation}
where $Z_0 $ is determined by (\ref{eq-defZ}), and
\begin{equation} \label{eq-Gamma2sheet}
\Gamma^{\mathit{II}}_n (z)=
\left\{\begin{array}{lr}
\frac{1}{2\pi }\left(2\pi \alpha -\psi (1)+\ln \frac{\sqrt{z-n^2}}{2i} \right), & \mathrm{for } \quad n>k \\
\frac{1}{2\pi }\left(2\pi \alpha -\psi (1)+\ln \frac{\sqrt{z-n^2}}{2i}- \pi i \right), & \mathrm{for } \quad n\leq k \,.
\end{array}\right.
\end{equation}
\end{lemma}
\begin{proof} The construction of $\omega_n^{\mathit{II}}(z)$ can be obtained by mimicking the arguments from the proof of Lemma~\ref{le-contR}.
\\ To get $\Gamma^{\mathit{II}}(z)$ we first assume $k<n $ and $z=\lambda \pm i\varepsilon$, $\lambda \in (k^2, (k+1)^2)$. Then
$\ln \frac{\sqrt{\lambda-n^2\pm i 0}}{i}= \ln \sqrt{n^2 -\lambda }$ and, consequently, $\Gamma_n(\lambda +i0)=\Gamma^{\mathit{II}}_n(\lambda -i0)$. \\
Assume now that $n\leq k$. Then we have $\lambda -n^2 >0$ and
$\ln \frac{\sqrt{\lambda -n^2 \pm i0 }}{i}= \ln \sqrt {\lambda -n^2 \pm i0 } \mp \frac{\pi}{2} i $
which implies
\begin{equation}\label{eq-Gammaasy}
\Gamma_n(\lambda +i0)=\Gamma^{\mathit{II}}_n(\lambda -i0) =
\frac{1}{2\pi }\left(2\pi \alpha -\psi (1)+\ln \frac{\sqrt{\lambda -n^2}}{2} - \frac{\pi }{2} i \right)\,.
\end{equation}
This, in view of the edge-of-the-wedge
theorem, completes the proof.
\end{proof}
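The edge-of-the-wedge matching $\Gamma_n(\lambda +i\varepsilon )\approx \Gamma_n^{\mathit{II}}(\lambda -i\varepsilon )$ can be illustrated numerically. In the sketch below (Python with NumPy; the sample values of $\alpha $, $n$, $k$ and $\lambda \in J_k$ are arbitrary choices) both functions are implemented with the branch $\Im \sqrt{z-n^2}>0$ and compared across the interval $J_k$.
\begin{verbatim}
import numpy as np

PSI1 = -0.5772156649015329

def sqrt_up(w):
    s = np.sqrt(complex(w))
    return s if s.imag >= 0 else -s

def Gamma(n, z, alpha):                 # first sheet, cf. (eq-defGamma)
    return (2*np.pi*alpha - PSI1 + np.log(sqrt_up(z - n**2)/2j)) / (2*np.pi)

def Gamma_II(n, z, alpha, k):           # continuation through J_k, (eq-Gamma2sheet)
    val = 2*np.pi*alpha - PSI1 + np.log(sqrt_up(z - n**2)/2j)
    if n <= k:
        val -= 1j*np.pi
    return val / (2*np.pi)

alpha, n, k, lam, eps = 0.1, 1, 2, 5.5, 1e-8    # lam in J_2 = (4, 9), n <= k
print(Gamma(n, lam + 1j*eps, alpha))
print(Gamma_II(n, lam - 1j*eps, alpha, k))      # the two values match
\end{verbatim}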
\\ \\
Henceforth, we assume that $\epsilon_n \neq k^2$ for any $k,n\in \mathbb{N}$. Suppose $z=\lambda-i\varepsilon$, where $\varepsilon$ is a small non-negative number and $\lambda \in J_k$.
At most one eigenvalue $\epsilon_l$ can exist in the interval $J_k$. Assuming that $z \in (\Pi_k \cup J_k )\setminus \{\epsilon_l \}$,
we define the analytic functions
$ z\mapsto \Gamma_n ^{\mathit{II}}(z)^{-1}$ for
$n\in \mathbb{N}$. Then the second sheet continuation of the resolvent
takes the form
\begin{equation}\label{eq-resolvent2}
\mathrm{R}^{\mathit{II}} _{\alpha ,\Sigma \Sigma } (z)= \mathrm{R}^{\mathit{II}} _{\Sigma \Sigma } (z)+ \sum_{n=1}^\infty \Gamma_n ^{\mathit{II}}(z)^{-1}(w^{\mathit{II}}_n (\bar{z} ), \cdot )_{\Sigma }
w^{\mathit{II}}_n (z )\,
\end{equation}
for $z \in (\Pi_k \cup J_k )\setminus \{\epsilon_l\} $.
\medskip
\noindent {\bf Notation.} In the following we will omit the superscript $\mathit{II}$, keeping in mind that
for $\Im z <0$ all quantities
depending on $z$ are defined by the second sheet continuation, which admits infinitely many
branches $\Pi_k$, $k\in \mathbb{N}$.
\medskip
Assume that $\epsilon_l \in J_k$. With later purposes in mind we define
\begin{equation}\label{eq-defA}
A_l (z):= \sum_{n\neq l} \Gamma_n (z)^{-1} (w_n (\bar{z}), \cdot )_{\Sigma_\delta }w_n (z)\,,
\end{equation}
for $z\in (\Pi_k \cup J_k)\setminus \{\epsilon_l\} $. The following lemma states the operator
norm asymptotics.
\begin{lemma} \label{le-boundedA}
The operator $A_l(z)\,:\,L^2 (\Sigma_\delta ) \to L^2 (\Sigma_\delta )$ is bounded and its operator
norm satisfies
\begin{equation}\label{eq-boundednormA}
\|A_l (z)\| \leq C |\Sigma_\delta |\,
\end{equation}
\end{lemma}
\begin{proof}
Suppose that $z=\lambda - i\varepsilon $. We derive the estimates
\begin{eqnarray}
\nonumber
|(A_l (z)f,f)_{\Sigma_\delta }| &\leq & \left( \sum_{n\neq l } |\Gamma_{n}(z)^{-1}| \|w_n (z)\|_{\Sigma_\delta }^2 \right) \|f\|_{\Sigma_\delta }^2
\leq \\ \label{eq-2}&& C \left( \sum_{n\neq l } \|w_n (z)\|_{\Sigma_\delta }^2 \right) \|f\|_{\Sigma_\delta }^2\,;
\end{eqnarray}
to obtain (\ref{eq-2})
we use (\ref{eq-Gamma2sheet}).
Now our aim is to show
\begin{equation}\label{eq-aux3}
\sum_{n\neq l } \|w_n (z)\|_{\Sigma_\delta }^2 \leq C |\Sigma_\delta |\,.
\end{equation}
To find a bound for the left hand side of (\ref{eq-aux3}) we first analyse
the behaviour of $w_n (z)$ for large $n$ and $z\in (\Pi_k \cup J_k)\setminus \{\epsilon_l\}$.
For this aim we employ (\ref{eq-omega2sheet}) and (\ref{eq-defZ}).
Note that for $n>k $ the
function $w_n (z)$ admits the representation:
$$
w_n (z,x)= \frac{1}{2\pi } K_0 (\kappa_n (z)|\underline{x}|) \chi_n (x_3)\,,
$$
where $x\in \Sigma_\delta $.
Using again the large argument expansion (\ref{eq-K0})
and the fact that
$\Re (-i \sqrt{z-n^2})\sim n$ we get the estimate
$$
|w_n (z,x)| \leq C\mathrm{e}^{-r_{\mathrm{min}}n}\,,
$$
where $r_{\mathrm{min}}= \min _{x\in \Sigma_\delta } |\underline{x}|$. This implies
\begin{equation}\label{eq-aux4}
\sum_{n > k\,, n\neq l } \|w_n (z)\|_{\Sigma_\delta }^2 \leq C|\Sigma_\delta |= \mathcal{O} (\delta^2 )\,.
\end{equation}
On the other hand, for $n\leq k $ the function $w_n (z)$ consists of $K_0 $ and $I_0$, see (\ref{eq-defZ}). Both functions are continuous
on $\Sigma_\delta $ and therefore $\|w_n\|^2_{\Sigma_\delta } \leq C|\Sigma_\delta |$. Since the number of such components is finite, in view of
(\ref{eq-aux4}),
we come to (\ref{eq-aux3}) which completes the proof.
\end{proof}
\section{Complex poles of the resolvent} \label{sec-reson}
Assume that $\epsilon_l \in J_k$ and suppose that $\delta $ is sufficiently small. It follows from Lemmas~\ref{le-boundedR}~and~\ref{le-boundedA} that the operators
$I-\beta \mathrm{R}_{\Sigma_\delta \Sigma_\delta } (z)$ and $I-\beta A_l (z)$ acting in $L^2 (\Sigma_\delta )$
are invertible for $z\in (\Pi_k \cup J_k)\setminus \{\epsilon_l\}$, and it makes sense to introduce the auxiliary notation
$$G_{\Sigma_\delta} (z):= (I-\beta \mathrm{R}_{\Sigma_\delta \Sigma_\delta } (z) )^{-1}\,.$$
Since the norm of $ G_{\Sigma_\delta }(z) A_l (z)\,:\,L^2 (\Sigma_\delta) \to L^2 (\Sigma_\delta) $ tends to $0 $ as $\delta\to 0 $,
the operator $I -\beta G_{\Sigma_\delta }(z) A_l (z)$ is invertible as well.
\\ The following theorem ``transfers'' the analysis of resonances from an operator equation
to a complex-valued function equation.
\begin{theorem} \label{th-ker}
Suppose $\epsilon_l \in J_k $ and assume that $z\in (\Pi_k \cup J_k )\setminus \{\epsilon_l\}$. Then the condition
\begin{equation}\label{eq-resoncondmain}
\ker (
I-\beta \mathrm{R}_{\alpha , \Sigma_\delta \Sigma_\delta } (z)) \neq \{0\}
\end{equation} is equivalent to
\begin{equation}\label{eq-reoncond}
\Gamma_l (z) +\beta ( w_l (\bar{z}), T_l (z)w_l (z) ) _{\Sigma_\delta }=0\,,
\end{equation}
where
$$
T_l (z) := (I -\beta G_{\Sigma_\delta }(z) A_l (z))^{-1}G_{\Sigma_\delta }(z)\,.
$$
\end{theorem}
\begin{proof}
The strategy of the proof is partially based on an idea borrowed from~\cite{Chu}.
The following identities
\begin{eqnarray} \nonumber
I- \beta \mathrm{R}_{\alpha , \Sigma_\delta \Sigma_\delta }(z) &=&\\ \nonumber
&=&(I- \beta \mathrm{R}_{ \Sigma_\delta \Sigma_\delta }(z) )
\left(I -\beta G_{\Sigma_\delta }(z) A_l (z)\right. \\ \nonumber && \left. -\beta \Gamma_l (z)^{-1}(w_l (\bar{z} ), \cdot )_{\Sigma_\delta } G_{\Sigma_\delta }(z)w_l (z) \right)\\ \nonumber
&=& (I- \beta \mathrm{R}_{ \Sigma_\delta \Sigma_\delta }(z) )(I - \beta G_{\Sigma_\delta }(z) A_l (z)) \times \\ \nonumber && \left[ I -\beta \Gamma_l (z)^{-1}(w_l (\bar{z} ), \cdot )_{\Sigma_\delta } T_l (z)w_l (z) \right]
\,,
\end{eqnarray}
show that (\ref{eq-resoncondmain}) is equivalent to
$$
\ker \left[ I -\beta \Gamma_l (z)^{-1}(w_l (\bar{z} ), \cdot )_{\Sigma_\delta } T_l (z)w_l (z) \right] \neq \{0 \}\,.
$$
The above condition is formulated for a rank one operator and, consequently, it is equivalent
to (\ref{eq-reoncond}).
\end{proof}
\\
Theorem~\ref{th-ker} shows that the problem of complex poles of the resolvent $R_{\alpha, \beta }(z)$ can be reduced
to the analysis of the roots of
\begin{equation}\label{eq-resonII}
\eta_l (z,\delta )=0 \,,\quad \mathrm{where }\quad \eta_l (z,\delta ):=\Gamma_l (z) - \beta \vartheta_l (z,\delta )\,,
\end{equation}
and
$$
\vartheta_l (z,\delta ):= ( w_l (\bar{z}), T_l (z)w_l (z) ) _{\Sigma_\delta }\,.
$$
The further discussion is devoted to finding the roots
of (\ref{eq-resonII}). In the following we apply the expansion $(I+A)^{-1}=I-A+A^2-A^3+\ldots$, valid if $\|A\|<1$.
Taking $-\beta \mathrm{R}_{\Sigma_\delta \Sigma_\delta } (z)$
as $A$ we get
\begin{equation} \label{eq-exp1aa}
G_{\Sigma_\delta } (z)=(I- \beta \mathrm{R}_{\Sigma_\delta \Sigma_\delta } (z))^{-1}=I+ \breve{\mathrm{R}}(z)\,,\quad
\breve{\mathrm{R}}(z):=\sum_{n=1}^\infty (\beta \mathrm{R}_{\Sigma_\delta \Sigma_\delta } (z))^n \,.
\end{equation}
Expanding the analogous series for $(I-\beta G_{\Sigma_\delta } (z) A_l (z))^{-1}$ one obtains
\begin{equation}\label{eq-exp1aaa}
(I- \beta G_{\Sigma_\delta } (z) A_l (z))^{-1}= I+ \beta A_l (z) + \beta\mathrm{\breve{R}}(z)A_l (z) +\ldots
\end{equation}
In view of Lemmas~\ref{le-boundedA} and~\ref{le-boundedR} the norm of $\mathrm{R}_{\Sigma_\delta \Sigma_\delta} (z)A_l (z)$ behaves
as $o(1) \| A_l (z)\| $ for $\delta \to 0$, and the same asymptotics holds for the operator norm of
$\mathrm{\breve{R}}(z)A_l (z)$.
The further terms in (\ref{eq-exp1aaa}) are of higher order with respect
to $\delta $.
Consequently, applying again (\ref{eq-exp1aaa}) we conclude that $T_l (z)$ admits the following expansion
\begin{equation}\label{eq-exp1}
T_l (z)= I+\beta A_l (z)+ \mathrm{\breve{R}}(z)+\ldots
\end{equation}
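The Neumann-series expansions used in (\ref{eq-exp1aa})--(\ref{eq-exp1}) are all of the standard form $(I-A)^{-1}=I+A+A^2+\ldots$ for $\|A\|<1$, with a truncation error of the order of the first omitted power of $\|A\|$. A toy finite-dimensional check (Python with NumPy; the matrix size and scale are arbitrary choices) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((6, 6))          # ensures ||A|| < 1
exact = np.linalg.inv(np.eye(6) - A)
neumann = np.eye(6) + A + A @ A + A @ A @ A    # truncated Neumann series
print(np.linalg.norm(exact - neumann))         # small, of order ||A||^4
\end{verbatim}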
Using the above statements we can formulate the main result.
\begin{theorem}
Suppose that $\epsilon_l \in J_k $ and consider the function $\eta_l (z,\delta )\,:\, (\Pi_k \cup J_k) \times [0, \delta_0)\to \mathbb{C}$,
where $\delta_0>0$, defined by (\ref{eq-resonII}). Then the equation
\begin{equation}\label{eq-sought}
\eta_l (z, \delta ) =0\,,
\end{equation}
possesses a solution which is determined by the function $ \delta \mapsto z(\delta )\in \mathbb{C}$ with the following asymptotics
\begin{equation}\label{eq-1}
z_l(\delta )= \epsilon_l+\mu_l (\delta )\,, \quad |\mu_l(\delta )|= o(1)\,.
\end{equation}
Moreover, the lowest order term of $\mu_l (\cdot )$ takes the form
\begin{eqnarray} \label{eq-asympmu}
\mu_l(\delta ) = && 4\pi \xi_\alpha \beta \left\{
\|w_l (\epsilon_l )\|^2_{\Sigma_{\delta }} \right.
\\ \label{eq-asym1} && \left.
+\beta
\sum_{n\neq l } \Gamma_n (\epsilon_l )^{-1} |(w_l (\epsilon_l), w_n (\epsilon_l))_{\Sigma_\delta }|^2
\right.
\\ \label{eq-asym2} &&
\left. + (w_l (\epsilon_l ),\mathrm{\breve{R}}(\epsilon_l) w_l (\epsilon_l ) )_{\Sigma_\delta } \right\}\,.
\end{eqnarray}
\end{theorem}
\begin{proof}
Note that $z\mapsto \eta_l (z,\delta )$, cf.~(\ref{eq-resonII}), is analytic and
$\eta_l (\epsilon_l , 0) = 0 $. Using (\ref{eq-Gamma2sheet})
one obtains $$\left. \frac{d \Gamma _n (z)}{dz}\right|_{z=\epsilon_n }=\frac{1}{4\pi \xi_\alpha } <0 \,, \quad n\in \mathbb{N}.$$
Combining this with
$$\left. \frac{\partial \vartheta_{l}(z,\delta )}{\partial z} \right|_{z=\epsilon_l, \delta =0 } = 0\,, $$
we get
$\left.\frac{\partial \eta_l }{\partial z}\right|_{\delta=0}= \frac{1}{4\pi \xi_\alpha }\neq 0$.
In view of the Implicit Function Theorem we conclude that the equation (\ref{eq-resonII})
admits a unique solution
which is a continuous function $\delta \mapsto z_l(\delta)$ with $z_l (\delta )= \epsilon_l +o(1)$.
To reconstruct the asymptotics of $z_l(\cdot )$,
we first expand $\Gamma_l (z)$ into the Taylor series
$$
\Gamma_l (z)= \frac{1}{4\pi \xi_\alpha } (z-\epsilon_l) +\mathcal{O}((z-\epsilon_l)^2)\,.
$$
Then the spectral equation (\ref{eq-resonII}) reads
$$
z=\epsilon_l + 4 \pi\xi_\alpha \beta \vartheta_l (z,\delta )+\mathcal{O}((z-\epsilon_l)^2)\,.
$$
Now we expand $\vartheta_l (z,\delta )$. Using (\ref{eq-exp1}) and (\ref{eq-defA})
we reconstruct its first order term, which reads
\begin{eqnarray} \nonumber
&& \left\{
\|w_l (\epsilon_l )\|^2_{\Sigma_{\delta }}
+\beta
\sum_{n\neq l } \Gamma_n (\epsilon_l )^{-1} |(w_l (\epsilon_l), w_n (\epsilon_l))_{\Sigma_\delta }|^2
\right.
\\ \nonumber && \left.
+ (w_l (\epsilon_l ),\mathrm{\breve{R}}(\epsilon_l) w_l (\epsilon_l ) )_{\Sigma_\delta } \right\} \,.
\end{eqnarray}
Applying the asymptotics
$z_l(\delta )=\epsilon_l +o(1)$ and the fact that $\vartheta_l (\cdot ,\cdot )$ is analytic with respect to the
complex variable, we get the formula for $\mu_l(\cdot)$.
\end{proof}
\subsection{Analysis of the imaginary part of the pole }
Since the imaginary component of the resonance pole has a physical meaning,
we devote a separate discussion to it.
The information on the lowest order term of the imaginary component of the pole is contained in (\ref{eq-asym1}) and (\ref{eq-asym2}). On the other hand, note that
only the components with subscript $n\leq k$ admit non-zero imaginary parts. Therefore
\begin{eqnarray} \label{eq-imaginary}
\Im && \left( 4\pi \xi_\alpha \beta \left( \beta
\sum_{n\leq k } \Gamma_n (\epsilon_l )^{-1} |(w_l (\epsilon_l), w_n (\epsilon_l))_{\Sigma_\delta }|^2
\right. \right.\\ \label{eq-imaginary1} && \left. \left.+ (w_l (\epsilon_l ),\mathrm{\breve{R}}(\epsilon_l) w_l (\epsilon_l ) )_{\Sigma_\delta } \right) \right)
\end{eqnarray}
determines the lowest order term of $\Im \mu (\delta )$.
\\ \\
\emph{Sign and asymptotics of $\Im \mu (\delta )$ with respect to $\Sigma_\delta$}.
Recall that $\epsilon_l \in J_k$. First we analyse (\ref{eq-imaginary}); to this end we define
$$
\iota_{l,n}: = \frac{1}{2\pi }\left( 2\pi \alpha +\ln \frac{\sqrt{\epsilon_l -n^2}}{2}-\psi(1)\right)\,,
$$
for $n\leq k$.
Relying on (\ref{eq-Gammaasy}) we get
$$
\Gamma_n (\epsilon_l )^{-1} = \frac{1}{\iota_{l,n}^2 +(1/2)^2}\left(\iota_{l,n} +\frac{1}{2}i \right)
$$
for $n\leq k $. Consequently, formula (\ref{eq-imaginary}) is equivalent to
$$
\Im \, 4\pi \xi_\alpha \beta^2
\sum_{n \leq k } \frac{1}{2}\frac{1}{\iota_{l,n}^2 +(1/2)^2}
|(w_l (\epsilon_l), w_n (\epsilon_l))_{\Sigma_\delta }|^2\,.
$$
The above expression is negative because $\xi_\alpha <0$.
Moreover, since both $w_l (\epsilon_l)$ and $w_n (\epsilon_l)$ are continuous in $\Omega\setminus I$,
we have $|(w_l (\epsilon_l), w_n (\epsilon_l))_{\Sigma_\delta }|^2\sim |\Sigma_\delta |^2 $. This means that (\ref{eq-imaginary}) behaves as $\mathcal{O}(|\Sigma_\delta|^2)$. To recover the asymptotics of (\ref{eq-imaginary1}) we
restrict ourselves to the lowest order term of $\breve{\mathrm{R}}(z)$, cf.~(\ref{eq-exp1}), namely
$$
\upsilon_l := \Im 4\pi \xi_\alpha \beta ^2 (w_l (\epsilon_l ),\mathrm{R}_{\Sigma_\delta \Sigma _\delta }(\epsilon_l) w_l (\epsilon_l ) )_{\Sigma_\delta }\,.
$$
Using the analytic continuation formula (\ref{eq-Kext}) and employing the small argument expansion, cf.~\cite{AS},
$$
K_0 (z) \sim -\ln z\,,
$$
where $-\pi <\arg z <\pi$ defines the cut plane for the logarithmic
function,
one gets
$$
\upsilon_l \sim \Im \pi \xi_\alpha \beta ^2
\sum_{n\leq k } \left( \int_{\Sigma_\delta } w_l (\epsilon_l )\chi_n \right)^2 =\mathcal{O}(|\Sigma_\delta |^2 )\,.
$$
One can easily see that $ \upsilon_l <0$.
Summing up the above discussion we can formulate the following conclusion.
\begin{proposition}
The resonance pole takes the form $z_l (\delta )=\epsilon_l+\mu (\delta )$ with the lowest order of $\Im \mu (\delta )$ given by
$$
\pi \xi_\alpha \beta ^2 \sum_{n \leq k } \left( \frac{2}{\iota_{l,n}^2 +(1/2)^2}
|(w_l (\epsilon_l), w_n (\epsilon_l))_{\Sigma_\delta }|^2 +
\left( \int_{\Sigma_\delta } w_l (\epsilon_l )\chi_n \right)^2 \right) \,.
$$
It follows from the above formula that
$\Im \mu (\delta )<0$ and
the asymptotics $$ \Im \mu (\delta )= \mathcal{O}(|\Sigma_\delta |^2) \,$$ holds. Moreover, the lowest order
of $\Im \mu (\delta )$ is independent of the sign of $\beta $.
\end{proposition}
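As a rough numerical illustration of the proposition (a sketch only, in Python with NumPy and SciPy: it keeps just the first sum in the formula, approximates the inner products by their lowest-order values $(w_l ,w_n)_{\Sigma_\delta }\approx w_l (x_0)\overline{w_n (x_0)}|\Sigma_\delta |$ at a reference point $x_0$, represents the boundary values for $n\leq k$ via $K_0 (-ix)=\frac{i\pi }{2}H_0^{(1)}(x)$ as in the proof of Lemma~\ref{le-contR}, and chooses all sample parameters arbitrarily), one can observe both the negative sign and the $\mathcal{O}(|\Sigma_\delta |^2)$ scaling:
\begin{verbatim}
import numpy as np
from scipy.special import k0, j0, y0

PSI1 = -0.5772156649015329
alpha, beta = 0.1, 0.05
xi_a = -4.0 * np.exp(2.0 * (-2.0 * np.pi * alpha + PSI1))
l, k = 3, 2                        # for this alpha: eps_3 lies in J_2 = (4, 9)
eps = xi_a + l**2
r0, x30 = 1.0, 1.2                 # reference point x_0 on Sigma

def w(n):
    chi = np.sqrt(2.0 / np.pi) * np.sin(n * x30)
    if eps < n**2:                 # n > k: plain Macdonald function
        return k0(np.sqrt(n**2 - eps) * r0) * chi / (2.0 * np.pi)
    t = np.sqrt(eps - n**2) * r0   # n <= k: K_0(-it) = (i pi / 2) H_0^(1)(t)
    return (np.pi / 2.0) * (1j * j0(t) - y0(t)) * chi / (2.0 * np.pi)

def iota(n):                       # iota_{l,n} as defined above
    return (2*np.pi*alpha + np.log(np.sqrt(eps - n**2)/2.0) - PSI1)/(2*np.pi)

for area in (1e-2, 1e-3, 1e-4):    # |Sigma_delta|
    s = sum(2.0 / (iota(n)**2 + 0.25) * abs(w(l) * np.conj(w(n)) * area)**2
            for n in range(1, k + 1))
    print(area, np.pi * xi_a * beta**2 * s)    # negative, scales like area**2
\end{verbatim}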
Note that for special geometrical configurations the embedded eigenvalues can survive
after introducing $\Sigma_\delta $, since the ``perturbed'' eigenfunctions are not affected
by the presence of $\Sigma_\delta $.
Let us consider
$$\Pi_l := \left\{x\in \Omega \,:\, x=\left(\underline{x}, \frac{\pi }{l }\right)\right\}\,, \quad l\in \mathbb{N}\,,$$
and assume that $\Sigma_\delta \subset \Pi_l $. Then $w_{ml} (z) = 0 $ for each $m\in \mathbb{N}$ and, consequently,
$\vartheta_{ml} (z,\delta ) = 0$, cf.~(\ref{eq-resonII}). This implies the following statement.
\begin{proposition}
Suppose that $\Sigma \subset \Pi_l $. Then
for all $m\in \mathbb{N}$ the numbers $\epsilon_{ml}$ remain the embedded eigenvalues of $H_{\alpha , \beta }$.
\end{proposition}
\subsection*{Acknowledgements}
The author thanks the referees for reading the paper carefully, removing errors and
recommending
various improvements in exposition. \\
The work was supported
by the project DEC-2013/11/B/ST1/03067 of the Polish National Science Centre.
\section{Introduction}
It is commonly supposed that the huge numbers of vacua that can
arise from different compactifications of string theory
\cite{boussopolchinski, KKLT} imply a complete loss of
predictability of low energy physics. If this is the case, the
stringiness simply constrains the possible dynamics rather than
the precise complement of forces and matter. Every string
theory leads to some effective field theory at a high scale
$\Lambda$, taken to be, say, an order of magnitude below the
string scale. Predictions for low energy physics have to be made in
terms of this effective field theory. Thus, the landscape of
string theory vacua leads to a landscape of effective field
theories at the scale $\Lambda$. Here we ask if constraints
of finiteness imposed on this landscape via its origin in string
theory might be sufficient to lead to a degree of predictability,
at least in some statistical sense. Previous authors have
discussed how continuous parameters can scan in a random landscape
of effective field theories \cite{hall, mcallister,
nimaetal,anarchy, nelsonstrassler, hoggartnielsen,gibbons}, and
there has been some study of the gauge groups and matter content
attainable from specific string theoretic scenarios \cite{dienes,
douglas1,douglas2,dijkstra,timoetal,douglastaylor}. For example,
\cite{timoetal} and \cite{douglastaylor} discuss the distribution
of gauge groups arising in intersecting brane models on torus
orientifolds.
We will impose the weakest of the constraints arising from string
theory -- namely that it should be possible to couple the
effective field theory consistently to a quantum theory of
gravity. It has been argued \cite
{swampland,swampland2,swampland3} that such consistency with
string theory requires that the rank of the gauge group and the
number of matter fields be bounded from above.\footnote{A possible
bound on the number of matter species in theories containing
gravity was originally discussed by Bekenstein \cite{bekenstein}.}
Since we will not impose any constraints based on rules arising
from symmetry or dynamics on the measure, we will call this an
``anarchic'' landscape, in recollection of the terminology in
\cite{anarchy}. Thus we will study simple anarchic landscapes of
field theories bounded in this way, and illustrate how
statistics can lead to characteristic predictions for the low
energy physics. These predictions are strongest for
appropriately coarse-grained attributes of a theory that possess
the property of {\it typicality} in such landscapes -- i.e. they
are overwhelmingly likely to lie close to certain typical values.
An example of such a typical property will be the fraction of
gauge groups with ranks lying within some range. We will
illustrate and develop our thinking using some simple examples.
\section{The set of field theories}
A natural, large class of field theories to consider is the set of
quiver gauge theories. For simplicity, we will restrict attention
to ${\cal N}=1$ supersymmetric quiver gauge theories
where the gauge group is a product
of unitary groups,
\begin{equation}
G = \prod_{i=1}^L U(N_i).
\end{equation}
In addition, there will be $A_{ii}$ hypermultiplets transforming
in the adjoint of $U(N_i)$, and $A_{ij}$ hypermultiplets
transforming in the $(\mathbf{N}_i,\bar{\mathbf{N}}_j)$ of
$U(N_i)\times U(N_j)$. The nonnegative integer matrix
\begin{equation}
A_{ij}\geq 0, \quad i,j=1\ldots L
\end{equation}
describes the number of arrows from site $i$ to site $j$ in the
quiver.
Of course, to specify the full gauge theory we also need to
describe the K\"ahler potential for the hypermultiplets, the gauge
kinetic terms, the superpotential and possibly Fayet-Iliopoulos
terms. We will postpone a discussion of these quantities for now and
will in this paper only discuss the matter and gauge group
content of the ${\cal N}=1$ theory.
Gauge theories of quiver type are ubiquitous in string theory, and
this is the main motivation to restrict attention to this class. Bifundamentals tend to appear in string theory
because strings have two endpoints
only. A typical setup to engineer quiver $N=1$ theories is to
consider D6-branes wrapping 3-cycles inside a Calabi-Yau manifold
in type IIA string theory, in which case the number of
bifundamentals is related to the intersection number of the
3-cycles. By including orientifolds, we can also easily engineer
quiver gauge theories with $SO$ and $Sp$ gauge factors, but we
will postpone a study of these theories to another occasion.
Our goal will thus be to study random $U(N)$ quiver gauge
theories. Before looking at some concrete examples, we are first
going to make some general remarks on possible
further restrictions on the set of gauge theories, on the choice of measure on the space of theories, and on the
kinds of properties we might predict.
\subsection{Interesting classes of quiver gauge theories}
Four possible restricted sets of gauge theories are given below; a minimal computational check of the defining conditions is sketched after the list:
\begin{enumerate}
\item \underline{Anomaly free theories}. A simple physical
requirement that we should impose is the condition that
there are no anomalies. This translates into the statement that
for all $i$
\begin{equation} \sum_{j\neq i} (A_{ij}-A_{ji}) N_j =0. \label{anomaly}
\end{equation}
The expectation value of the left hand side of this equation in
the unconstrained set of quiver gauge theories with the uniform measure
is clearly zero, as the measure is invariant under
$A_{ij} \leftrightarrow A_{ji}$. Therefore, ``on average,'' random
quiver gauge theories are anomaly free, and one might be inclined
to not worry about anomalies anymore. However, from a physical
point of view it seems rather peculiar to allow forbidden theories
in an ensemble, as it is not at all clear that properties of the
set of anomaly free theories are properly reproduced by the full
set of random quiver gauge theories. Hence we will for the most
part restrict to field theories which obey (\ref{anomaly}).
\item \underline{Asymptotically free theories}. Another natural
constraint we can impose is that the theories we consider are
asymptotically free, which makes them well-defined in the UV.
Asymptotic freedom is less compelling than anomaly cancellation,
as the set of random quiver theories may well represent a set of
low-energy effective field theories obtained e.g. in string
theory. Gauge group factors that are IR free and strongly coupled
in the UV will typically start to act as global symmetries at
sufficiently low energies and will not directly lead to contradictions.
The condition for asymptotic freedom is that for all $i$,
\begin{equation} \label{free}
A_{ii} N_i +\frac{1}{2}\sum_{j\neq i} (A_{ij}+A_{ji})N_j < 3 N_i.
\end{equation}
This tends to constrain the $A_{ij}$ to be of order unity rather
than large (see the code sketch following this list).
\item \underline{Purely chiral theories.}
If we imagine our field theory to be some effective field theory
at a high scale $M$, assume there are no other dimensionful
parameters around, and write down the most general superpotential,
it will contain many mass terms with masses of order $M$. At
energies below $M$, it makes sense to integrate out all massive
fields with masses of order $M$. The remaining gauge theory will
have fewer fields and will no longer allow for mass terms: all fields
that can be integrated out have been removed. The remaining
set of purely chiral theories with
\begin{equation} \label{chiral}
A_{ii}=0, \qquad A_{ij}=0\,\,\,{\rm or} \,\,\, A_{ji}=0 \,\,\,
{\rm for} \,\, i\neq j
\end{equation}
are therefore a natural starting point for viewing random quivers
as low-energy effective field theories. Such chiral theories
allow for general cubic superpotentials at the marginal level. Higher order terms are suppressed by a mass
scale in the Lagrangian, although some quartic superpotentials can become marginal in the infrared.
\item \underline{Equal rank theories.}
In order to simplify the analysis, we could take the
ranks of all the gauge groups to be fixed and equal. For such
theories both the anomaly cancellation constraint as well as the
asymptotic freedom constraint are much easier to implement.
However, we do not have an obvious physical motivation that would
prefer these theories, so they are mainly helpful to develop
intuition for the more general case.
\end{enumerate}
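Since the first three of these conditions are purely arithmetic statements about the data $(N_i, A_{ij})$, they are straightforward to test mechanically. The following Python sketch is our own illustration (the function names and the example quiver are ours, not taken from any standard package); it encodes a quiver as a rank vector and an integer matrix and checks (\ref{anomaly}), (\ref{free}) and (\ref{chiral}):
\begin{verbatim}
import numpy as np

def is_anomaly_free(N, A):
    # anomaly condition: sum_j (A_ij - A_ji) N_j = 0 at every node i
    return bool(np.all((A - A.T) @ N == 0))

def is_asymptotically_free(N, A):
    # adjoints count as N_i flavors; a bifundamental pair contributes
    # (A_ij + A_ji) N_j / 2 flavors to node i
    L = len(N)
    for i in range(L):
        flavors = A[i, i] * N[i] + sum(
            (A[i, j] + A[j, i]) * N[j] for j in range(L) if j != i) / 2
        if flavors >= 3 * N[i]:
            return False
    return True

def is_purely_chiral(A):
    # no adjoints, and no vector-like pairs of bifundamentals
    return bool(np.all(np.diag(A) == 0) and np.all(A * A.T == 0))

# arbitrary test case: a three-node ring with one arrow per link
N = np.array([2, 2, 2])
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
print(is_anomaly_free(N, A),         # True
      is_asymptotically_free(N, A),  # True: 2 flavors < 6
      is_purely_chiral(A))           # True
\end{verbatim}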
\subsection{Averages and typicality}
Given a set of gauge theories with a suitable measure on
them, we can compute expectation values of various quantities,
such as the average rank of a gauge group, the average number of
matter fields, etc. Though averages are useful to compute,
they are especially interesting when they also represent the
{\it typical} value of a quantity. Typicality is a notion that applies when a thermodynamic limit can be
taken, i.e., when some parameter $N$ controlling the size of the ensemble can be taken to infinity.
Then, a quantity enjoys the property of typicality if its probability distribution becomes more and more narrowly
peaked around its expectation value as $N \to \infty$:
\begin{equation}
\lim_{N\rightarrow \infty} \frac{ \langle {\cal O}^2 \rangle -
\langle {\cal O} \rangle^2 }{ \langle {\cal O} \rangle^2} =0.
\Label{typical}
\end{equation}
In other words, quantities that are typical are equal to their
ensemble averages with probability one in the limit $N \to
\infty$\footnote{This criterion is not very useful when $\langle
{\cal O} \rangle=0$. More generally, we should normalize the
operator ${\cal O}$ in such a way that the range of values it can
take is independent of $N$ and then require that the variance
vanishes in the large $N$ limit.}.
Familiar examples of typical operators are statistical mechanical
quantities such as pressure and free energy. Also, we note that
for a standard Boltzmann distribution, a single
occupation number has moments
\begin{equation}
\langle N \rangle = \frac{\sum_{k\geq 0} k e^{-\beta
k}}{\sum_{k\geq 0} e^{-\beta k}} =
\frac{e^{-\beta}}{1-e^{-\beta}},\qquad \langle N^2 \rangle =
\frac{\sum_{k\geq 0} k^2 e^{-\beta k}}{\sum_{k\geq 0} e^{-\beta
k}} = \frac{e^{-\beta}(1+e^{-\beta})}{(1-e^{-\beta})^2}
\end{equation}
so that ${\rm Var}(N) = \langle N^2 \rangle - \langle N \rangle^2 = e^{-\beta}/(1-e^{-\beta})^{2}$, and the variance to mean squared ratio appearing in (\ref{typical})
equals $e^{\beta}$. In other words, a microscopic quantity like a
single occupation number will not be typical. Observables that
achieve typicality are inevitably coarse-grained -- e.g. the
number of Boltzmann particles with energies between $c/\beta$ and
$(c+\epsilon)/\beta$ for constants $c$ and $\epsilon$ will be
typical. In studying the statistics of effective field theories
we should be interested in finding appropriately
``coarse-grained'' structures that are typical.
\subsection{Choice of measure}
In order to define and discuss averages and typicality for random
quiver gauge theories, we need to define a suitable measure on
this space. One could imagine that dynamics gives rise to an
interesting and complicated measure. For example, one could
imagine weighing field theories by the dimension or even size of
the cohomology of their respective moduli spaces, having the close
connection between quiver gauge theories and D-brane moduli spaces
in mind. As another simple example of how dynamics can affect
the measure, if we suppose that dynamical effects can give the
matter fields any expectation value, then generically all the
gauge groups will be broken to $U(1)$ and analysis of the
distribution of gauge factors is moot. However, in $N=1$
theories of the kind we study, the potential for the matter fields
typically develops isolated minima and the gauge group is broken
to a product of Abelian and non-Abelian factors (for instance, a
cubic superpotential for an adjoint superfield classically breaks
$U(N)\rightarrow U(p)\times U(N-p)$ for some $p$). Classically,
in the context of Calabi-Yau compactification, one imagines that
the manifold has some set of distinct but intersecting cycles and
the non-abelian factors in the gauge theory are related to the
number of branes wrapped on each cycle. Then strong gauge dynamics
might break these gauge factors further. For the present we will
ignore such dynamical issues and use a uniform measure subject to
various constraints of boundedness. Since we are ignoring rules
arising from the underlying dynamics, we will call our measures
``anarchic''.
Finally, in the context of e.g. string landscape discussions, one
might want to associate various kinds of Bayesian measures to
different types of field theories. For example, to correctly
make statistical predictions for the UV field theory, given our
hypothetical bound on the matter and gauge group content, we strictly
speaking condition our probability distribution on the known facts
about infrared physics. From this perspective, we actually want
the uniform measure on a bounded space of gauge theories that,
when run to the infrared, contain the standard model as a sector.
Conditioning in this way is well beyond our ability at present,
and so we will simply investigate the uniform measure on bounded
spaces of quiver gauge theories, to study whether and how
typicality occurs.
Experience in statistical physics has shown that directly computing averages and variances over bounded
configuration spaces can be difficult. Thus, to simplify analysis we can try to use a grand canonical ensemble
to constrain the total rank and the total number of matter fields. This involves summing over theories with
arbitrary ranks and amounts of matter while including in the measure a Boltzmann factor for the rank of the
gauge group, and a separate Boltzmann factor for the total number of matter fields
\begin{equation} \label{measure}
\rho \sim \exp(-\beta \sum_i N_i - \lambda \sum_{ij} A_{ij} N_i
N_j ).
\end{equation}
One could also include Boltzmann factors for, e.g., the total number
of nodes, the total number of gauge bosons, etc., but for our purposes (\ref{measure}) will be sufficient to
illustrate the main ideas. Such an approach only works if the ensemble of theories does not grow
exponentially fast in the total rank and number of matter fields. If such exponential growth occurs, the
Boltzmann weight does not fall quickly enough for the microcanonical ensemble to be well approximated by the
canonical ensemble. We will see that the space of theories typically grows too fast with the number of fields to
permit use of the canonical approach to make statistical predictions from a bounded landscape of effective field
theories.
\section{Typicality in Toy Landscapes}
\subsection{Theories without matter: coarse graining and typicality}
As an example of our approach, consider a landscape of field theories with no matter, where the rank of the
gauge group is equal to a large number $N$. For simplicity, let the gauge group be a product of unitary factors
\begin{equation}
G = \prod_{i=1}^L U(N_i) \, .
\end{equation}
Then the rank of $G$ is $\sum_i N_i = N$; thus the $N_i$ form an integer partition of $N$. To study the
distribution of gauge factors in an anarchic landscape of such field theories, we can construct the canonical
partition function
\begin{equation}
Z = \sum_{\{ r_k \}} e^{-\beta \sum_k k \, r_k - \alpha \sum_k
r_k} = \prod_k {1 \over 1 - e^{-\beta \, k - \alpha}}
\equiv \prod_k {1 \over 1 - u \, q^k}
\Label{nomatterpartition}
\end{equation}
Here $r_k$ is the number of gauge factors of rank $k$, $\beta$ is
a Lagrange multiplier constraining the total rank to be $N$ and
$\alpha$ is a Lagrange multiplier that can be used to constrain
the number of gauge factors; sometimes it is more convenient to
work with $q=e^{-\beta}$ and $u=e^{-\alpha}$ instead. In writing
this we have used a measure that treats the ordering of gauge
factors as irrelevant. So, for example, $U (2) \times U(3) \times
U(2)$ is the same as $U(3) \times U(2) \times U(2)$ and so on. In
such a measure, all $U (N_i)$ factors are treated as identical,
and not distinguished from each other by parameters like their
gauge couplings. This measure will be modified if the gauge theory
is realized by wrapping D-branes on cycles of a Calabi-Yau because
in that case the locations of branes and the sizes of the cycles
will allow us to distinguish between many different configurations
that lead to the same gauge group. Nevertheless, the present
measure is interesting from a purely field theoretic point of
view, i.e. if one is simply counting field theories, and is
illustrative.
To fix $\beta$ and $\alpha$ we require that
\begin{equation}
N = \sum_{j=1}^\infty {j \, u \, q^j \over 1 - u \, q^j} ~~~~ ; ~~~~
L = \sum_j {u \, q^j \over 1 - u \, q^j} \, ,
\Label{NandL}
\end{equation}
where $N$ is the total rank and $L$ is the total number of gauge factors.
We will take
\begin{equation}
u \sim O(1) ~~~~;~~~~ \beta \sim {1 \over \sqrt{N}} \,
\Label{nomatterbeta}
\end{equation}
which, we will see later, implies $L \sim \sqrt{N}$. Then from (\ref{nomatterpartition}) it is easy to show that:
\begin{equation}
\langle r_j \rangle = {u \, q^j \over 1 - u \, q^j} ~~~;~~~
{\rm Var}(r_j) = {u \, q^j \over (1 - u \, q^j)^2} = { \langle r_j \rangle \over 1 - u \, q^j} \, .
\end{equation}
The variance to mean squared ratio is
\begin{equation}
\frac{{\rm Var}(r_j)}{\langle r_j \rangle^2}
=
{1 \over u \, q^j} = e^{\beta j + \alpha} \geq e^{\alpha} \geq O(1) \, .
\end{equation}
To get the last inequality we simply used $\alpha, \beta > 0$.
Thus we see that in a universe with such anarchic landscapes, the
number of gauge factors $r_j$ with rank $j$ is not typical in the
sense defined in (\ref{typical}) and thus cannot be predicted with
confidence.
However, we could ask whether there are any more coarse grained
structures in such landscapes which are more predictable. For
example, consider the number of gauge factors whose ranks lie
between $c \sqrt{N}$ and $(c + \epsilon) \sqrt{N}$ where $c$ and
$\epsilon$ are both $O(1)$:
\begin{equation}
\langle R(c,\epsilon) \rangle \approx
\int_{c \sqrt{N}}^{(c + \epsilon) \sqrt{N}} dj \, \langle r_j \rangle = {1 \over \beta}
\ln \left[ {1 - u \, e^{-(c + \epsilon) \sqrt{N} \beta} \over 1 - u \, e^{-c \sqrt{N} \beta}} \right] \, ,
\end{equation}
where we approximated the sum as an integral. The variance of this coarse-grained variable is
\begin{equation}
{\rm Var}(R(c,\epsilon)) =
\int_{c \sqrt{N}}^{(c + \epsilon) \sqrt{N}} dj \, {\rm Var}(r_j)
= {u \over \beta} \left[ {e^{-c \sqrt{N} \beta} - e^{- (c + \epsilon) \sqrt{N} \beta} \over
(1 - u \, e^{- c \sqrt{N} \beta}) ( 1 - u \, e^{-(c + \epsilon) \sqrt{N} \beta})}\right] \, ,
\end{equation}
where we used the fact that in this canonical ensemble the $r_j$ are statistically independent variables. Thus, for $
\beta \sim 1 / \sqrt{N}$ (\ref{nomatterbeta}),
\begin{equation}
\langle R(c,\epsilon) \rangle \sim O(\sqrt{N}) ~~~~;~~~~
{\rm Var}(R(c,\epsilon)) \sim O(\sqrt{N}) ~~~~\Longrightarrow~~~~
\frac{{\rm Var}(R(c,\epsilon))}{\langle R(c,\epsilon) \rangle^2} \sim O(1 / \sqrt{N}) \,.
\end{equation}
This means that the variance to mean squared ratio vanishes in the large $N$ limit indicating that $R(c,\epsilon)
$ is a typical variable. Thus, in such anarchic landscapes, the number of gauge factors with ranks between $c
\sqrt{N}$ and $(c + \epsilon) \sqrt{N}$ can be predicted with high confidence. Also, approximating the second
equation in (\ref{NandL}) as an integral, the total number of gauge factors turns out to be
\begin{equation}
L = - {\ln(1 - u) \over \beta} \sim O(\sqrt{N}) \, .
\end{equation}
By the above variance analysis this number will also be typical. Thus, in such anarchic landscapes, the total
number of gauge factors is highly predictable. These results follow essentially because the unordered
partitions of a large integer enjoy a sort of central limit theorem -- representing such partitions by a Young
diagram, one can show that in the large $N$ limit, the boundary of an appropriately rescaled diagram
approaches a limit shape encoded by the $\langle r_j \rangle$ computed above \cite{vershik}.
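These statements are easy to test numerically, since in this ensemble the $r_j$ are independent geometric random variables with mean $u\,q^j/(1-u\,q^j)$. The following Python sketch is our own illustration with arbitrary parameter choices; it samples the ensemble and compares the fluctuations of a single occupation number with those of a coarse-grained count:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

Ntot = 10**4                       # target total rank N
beta, u = 1.0 / np.sqrt(Ntot), 0.5
j = np.arange(1, int(20 / beta) + 1)
p = 1.0 - u * np.exp(-beta * j)    # P(r_j = n) = (1 - p_j)^n p_j

samples = rng.geometric(p, size=(2000, len(j))) - 1   # r_j >= 0

r1 = samples[:, 0]                 # one microscopic occupation number
print(r1.var() / r1.mean()**2)     # ~ 1/(u q) = O(1): not typical

# coarse-grained count of ranks between sqrt(N) and 2 sqrt(N)
window = (j >= np.sqrt(Ntot)) & (j < 2 * np.sqrt(Ntot))
R = samples[:, window].sum(axis=1)
print(R.var() / R.mean()**2)       # ~ 1/<R>: small, i.e. typical
\end{verbatim}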
\subsection{Cyclic, chiral quivers}
Above we saw how suitably coarse-grained aspects of the structure of a randomly chosen field theory in a
bounded landscape might be statistically predictable. The next step is to add matter to the theory to see how
this changes the analysis. As we discussed, we must insist that matter is added in an anomaly-free way and
implementing this constraint is one of the main difficulties in studying statistical ensembles of quiver
gauge theories. Thus, to make a beginning, we will study cyclic, chiral quiver gauge theories for which
anomaly cancellation is very easy to implement.
In cyclic quivers, each gauge group is connected to the next one
by bifundamentals, with the circle being completed when the last
group connects to the first one. Taking the ith group around
the circle to be $U(N_i) $, the constraint on the total rank will
be $\sum_i N_i = N$. So, as in the example without matter, the
numbers $N_i$ form a partition of $N$. Anomaly cancellation
requires that each gauge group have as many fundamentals as
antifundamentals. It is easy to show that the minimal solution
to anomaly cancellation constraints is that the number of
bifundamentals between $U(N_i)$ and $U(N_{i+1})$ is
\begin{equation}
A_{i (i +1)} = C^{-1} \cdot \prod_{l\neq i,(i+1)} N_l ~~~~;~~~~ C
= {\rm GCD}\bigl(\{ \prod_{l\neq i,(i+1)} N_l \}\bigr) \Label{cyclicanomaly}
\end{equation}
All other solutions to the anomaly cancellation equations are
integer multiples of (\ref{cyclicanomaly}). We will examine an
ensemble in which the matter fields in the gauge theory are
presumed to satisfy (\ref{cyclicanomaly}) in such a way that the
total number of fields comes as close as possible to some bound
$K$. Thus for this setup the matter fields are uniquely chosen
once the gauge groups are selected. (More generally, we could
have imagined an ensemble where the number of matter fields was
allowed to vary, in which one would need to sum over multiples of
$A_{i(i+1)}$ subject to a bound. This is difficult to do here
since the GCD of the products of integer subsets appearing in the
denominator of (\ref{cyclicanomaly}) is presumably sporadically
behaved.)
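The minimal solution (\ref{cyclicanomaly}) is simple enough to compute directly. A small Python sketch of ours, using exact integer arithmetic:
\begin{verbatim}
from math import gcd, prod
from functools import reduce

def minimal_cyclic_matter(N):
    """Minimal A_{i,i+1} solving the anomaly conditions on a ring."""
    L = len(N)
    links = [prod(N[l] for l in range(L) if l not in (i, (i + 1) % L))
             for i in range(L)]
    C = reduce(gcd, links)
    return [a // C for a in links]

N = [3, 5, 7]
A = minimal_cyclic_matter(N)     # -> [7, 3, 5] on links (12),(23),(31)
for i in range(len(N)):          # fundamentals = antifundamentals
    assert A[i] * N[(i + 1) % len(N)] == A[i - 1] * N[i - 1]
print(A)
\end{verbatim}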
One key difference from the matter-free case is that the {\it
order} in which the gauge groups appear on the ring of the quiver
is important. In general, different orderings will lead to
different quiver gauge theories, except when the permutations
correspond to symmetries of the quiver, such as the cyclic
permutations of the nodes, or cyclic permutations combined with
one reflection. These are just elements of the dihedral group
corresponding to the symmetries of a regular polygon with vertices
corresponding to the nodes of the quivers. Additional symmetries
will arise if some of the $N_i$ are equal and we will treat the
exchange of groups with identical ranks as giving the same theory.
This sort of measure would arise if we imagined our field theory
landscape as arising from D-branes on a Calabi-Yau in which all
the cycles give rise to gauge theories with the same coupling,
which could for example happen if we would resolve an $A_k$
singularity in such a way that all two-cycles would have equal
size.
\subsubsection{The canonical ensemble breaks down}
We will first try to analyze the statistics of cyclic, chiral
quivers in a canonical ensemble. All along, as motivated above, we
will assume that the gauge groups uniquely fix the matter content.
Let $r_k$ be the number of times the group $U(k)$ appears. Then,
the total rank $N$ and the number of gauge factors $L$ are
\begin{equation}
N= \sum_k k r_k ~~~~;~~~~ L = \sum_{k} r_k \, .
\end{equation}
We want to compute the partition function of this ensemble of {\it ordered} partitions of $N$:
\begin{equation}
Z= \sum_{\{r_k \} } {1 \over 2} \Bigl(\sum_k r_k -1\Bigr)! ~e^{ -\beta \sum_k k r_k - \alpha \sum_k r_k } \prod_k
{1 \over r_k!} \, .
\Label{Zcan}
\end{equation}
The combinatorial factor that appears here is simply the number of
ways we can choose $r_1,r_2,\ldots$ gauge group factors out of
$\sum_k r_k$, divided by $2(\sum_k r_k)$ to account for the cyclic
and reflection symmetry of the quiver\footnote{This counting
actually ignores certain accidental symmetries that can arise in
the structure of the quiver. For example, in a cyclic quiver in
which the gauge groups $U(N_1)$ and $U(N_2)$ alternate, only one
cyclic permutation gives a different configuration for the quiver.
The fully correct counting can be derived using Polya theory -- we
are simply using the leading terms of that analysis, which is
sufficient for our purposes.}. Rewriting the partition function in
terms of the $ \Gamma$ function, we obtain
\begin{equation}
Z
= {1\over 2} \sum_{\{r_k \} } \Gamma\Bigl(\sum_k r_k \Bigr) \prod_k ~{e^{-\beta k r_k - \alpha r_k}\over r_k!}
\Label{gamma}
\end{equation}
Using the integral representation of the $\Gamma$ function
\begin{equation}
\Gamma(z) =
\int_0^\infty dt ~t^{z-1} e^{-t}
\end{equation}
the partition function can be rewritten as
\begin{equation}
Z = {1 \over 2} \int_0^\infty dt~{ e^{-t} \over t} ~ \sum_{\{r_k \} } \prod_k { t^{r_k} \over r_k!} e^{-\beta k r_k -
\alpha r_k }
\end{equation}
Exchanging the sum and the product, and after some manipulations, we obtain
\begin{equation}
Z = {1 \over 2} \int_0^\infty dt~{ e^{-t} \over t} ~ \exp\Bigl( {t e^{-\alpha} e^{-\beta} \over 1-{e^{-\beta}}}\Bigr) \, .
\end{equation}
This integral is only convergent if
\begin{equation}
{e^{-\alpha} e^{-\beta} \over 1-{e^{-\beta}}} < 1 ~~~\Rightarrow ~~~ e^{-\beta} < { 1\over 1+e^{-\alpha}} \equiv e^
{-\beta_H}\, .
\Label{limitbeta}
\end{equation}
This implies that there is a limiting $\beta$ below which the partition function is undefined, because the
integrand diverges as $t \rightarrow \infty$. There is also always a divergence as $t \rightarrow 0$ which can
be regulated by recognizing that the divergence is a constant independent of $\alpha$ and $\beta$. To show
this, we define $\gamma = {e^{-\alpha} e^{-\beta} \over 1-{e^{-\beta}}}$, and find that
\begin{equation}
{d Z \over d \gamma} = {1 \over 2}\int_0^\infty dt~{ e^{-(1-\gamma)t} } = { 1 \over 2(1 - \gamma)}
\end{equation}
which implies that, below the limiting temperature and up to an additive constant,
\begin{equation}
Z = - {1 \over 2}\log ( 1 - \gamma) = - {1 \over 2}\log\Bigl( 1 - {e^{-\alpha} e^{-\beta} \over 1-{e^{-\beta}}}\Bigr) = -{1 \over 2}\log\Bigl( 1 - {uq \over 1-q}\Bigr)
\end{equation}
where $u= e^{-\alpha}$ and $q = { e^{-\beta}}$.
In order to achieve a large total rank, $\beta$ must be tuned
close to its limiting value $\beta_H$ (\ref{limitbeta}). Then,
if, for example, we put $u=1$, the expectation value of the total
rank is
\begin{equation} \langle N \rangle = q\frac{\partial}{\partial q} \log Z \sim
\frac{-1}{2\epsilon \log (4\epsilon)}
\end{equation}
where we tuned $q=q_H-\epsilon=\frac{1}{2}-\epsilon$ to get a large rank. Similarly, in this approximation we
can compute
\begin{equation}
\langle r_k \rangle \sim \left( \frac{1}{2} \right)^{k+1}
\frac{-1}{2\epsilon \log(4\epsilon)} \sim \left( \frac{1}{2}
\right)^{k+1} \langle N \rangle \, .
\Label{canonicalav}
\end{equation}
This is completely different from the matter-free result for the typical partition: for example, on average one
quarter of the nodes will be abelian. However, we also find that
\begin{equation}
{\rm Var}(r_k) \sim \left( \frac{1}{2} \right)^{2k+2}
\frac{-1}{(2\epsilon)^2 \log(4\epsilon)} \sim - (1+ \log(4\epsilon)) \, \langle r_k \rangle^2
\Label{canonicalvar}
\end{equation}
This is much larger (as $\epsilon \rightarrow 0$) than the expectation value squared. In other words, the
number of group factors with a given rank is not typical in the sense of (\ref{typical}).
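For the reader's convenience, the small-$\epsilon$ estimates above can be obtained as follows. With $u=1$ one has $\gamma = q/(1-q)$, so near $q_H = \frac{1}{2}$
\begin{equation}
1-\gamma = \frac{1-2q}{1-q} \simeq 4 \epsilon \,, \qquad
Z \simeq -\frac{1}{2} \log (4\epsilon) \,, \qquad
q\,\partial_q Z = \frac{q}{2(1-q)(1-2q)} \simeq \frac{1}{4\epsilon} \,,
\end{equation}
and the ratio of the last two expressions reproduces $\langle N \rangle \simeq -1/\bigl(2\epsilon \log (4\epsilon)\bigr)$.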
As in the matter-free case, we might wonder if a more
coarse-grained question would have a more statistically
predictable answer. For example, we might ask how many gauge
factors we expect to see within some range of ranks. The mean
and variance of such a coarse grained variable can be extracted by
summing over the quantities in
(\ref{canonicalav},\ref{canonicalvar}) because the $r_k$ are
independent random variables in our ensemble. In the central
limit theorem, summing $M$ identically distributed random
variables reduces their fluctuations because both the mean and the
variance are enhanced by a factor of $M$; thus the variance to
mean squared ratio is {\it reduced} by a factor of $M$. In the
matter-free example, something like this happened because,
although the $r_k$ were not identically distributed, their
dependence on $k$ was sufficiently weak to allow the central
limit theorem to work. In the present case, the exponential
dependence of (\ref{canonicalav}, \ref{canonicalvar}) on the rank
$k$ means that this mechanism fails -- the mean and the variance
remain dominated by the smallest $k$ in the range of the sum.
Thus, it would appear that there is no simple statistically
predictable quantity in this landscape.
However, this is in fact happening because the canonical ensemble
is breaking down and is not a good approximation of the
microcanonical ensemble anymore. The canonical ensemble will
reproduce the microcanonical ensemble when the growth of the
configuration space with the total rank is slow enough so that,
when multiplied by an exponential Boltzmann factor, a nicely
localized measure results. Here the Gamma function and the
exponential in (\ref{gamma}) compete on an equal footing and lead
to a widely spread out measure in which the rank of the gauge
group fluctuates wildly over the ensemble, leading to a very large
variance. Indeed, we should expect this sort of behavior to
occur generally when studying the statistics of quivers since the
number of graphs increases rapidly with the number of nodes. Thus
we turn to the microcanonical ensemble in order to implement more
accurately our constraint on the total rank.
\subsubsection{Microcanonical analysis}
We consider once more a cyclic quiver and ignore accidental
symmetries. The microcanonical partition function for cyclic
gauge theories of rank $N$ and $L $ nodes is simply the number of
such theories. This is given by the coefficient of $q^N$ in
\begin{equation}
\frac{1}{2 L} \left[ q + q^2 + q^3 + \ldots \right]^L \, .
\end{equation}
Here the $1/2L$ divides out the cyclic permutations and reflections. We find that
\begin{equation}
Z_L=\frac{1}{2 L} \left( \begin{array}{c} N-1 \\ N-L \end{array}
\right).
\end{equation}
Summing this over $L$ we can write a partition function which is
canonical in the number of nodes and microcanonical in the total
rank $N$:
\begin{equation}
Z(u) = \sum_{L=1}^N u^L \, Z_ L =
{(1+u)^N -1 \over 2 N} \, .
\end{equation}
To get the unbiased landscape in which all theories of equal rank
have equal weight, we should take $u=1$, but we will consider
other values of $u$ as well. The expectation value of $L$ is
\begin{equation}
\langle L \rangle = u \partial_u \log(Z(u)) ={ u (1+u)^{N-1} \over (1+u)^N -1 }~ N \, .
\end{equation}
For the unbiased ensemble with $u=1$, we get $\langle L \rangle=
{N \over 2}$ in the large $N$ limit. However, if $u \sim {1 \over
\sqrt{N}}$, then $\langle L \rangle \sim \sqrt{N}$, and if $u \sim
{ 1 \over N}$, then $ \langle L \rangle \sim O(1)$. In fact, if $u
\sim N^{-a}$, $\langle L \rangle \sim N^{1-a}$. It can be checked
that the canonical analysis gives the same expectation values.
However, the microcanonical variance in $L$ is
\begin{equation}
{\rm Var}(L) = \Bigl(1 - {Nu \over (1+u)^N -1}\Bigr) { 1\over 1+u} \langle L \rangle
\end{equation}
For the three scalings of $u$, i.e. $u \sim N^{-a}$ , the variance in $L$ is some order 1 number times the mean
value of $L$, independent of $a$. Thus, when $\langle L \rangle$ is large, the variance to mean squared ratio
is small, unlike the canonical analysis. This means that in such landscapes the number of gauge factors is
typical in the sense of (\ref{typical}) and is therefore highly predictable.
The expectation value for the number of abelian factors is:
\begin{equation}
\langle r_1 \rangle = {1 \over Z} \sum_{L=1}^N { u^L \over 2L}\, L \left( \begin{array}{c} N-2 \\
L-2 \end{array} \right)
= {u^2 (1+u)^{N-2}\over (1+u)^N-1} N
\end{equation}
When $u=1$, this becomes $\langle r_1 \rangle = {1 \over 4} N$ in the large $N$ limit. When $u \sim {1 \over
\sqrt{N}}$, $\langle r_1 \rangle \sim O(1)$. And when $u \sim {1 \over N}$, $\langle r_1 \rangle \rightarrow 0$. In
fact, for $u \sim {N^{-a}}$, $\langle r_1 \rangle \sim N^{1-2a}$. It can be checked that these expectation values
match the canonical ensemble. However, the variance in $r_1$ is much smaller in the microcanonical
ensemble. First we compute that
\begin{eqnarray}
\langle r_1^2 \rangle& =& {1 \over Z} \sum_{L=1}^N { u^L \over 2L} \bigg\{ L \left( \begin{array}{c} N-2 \\
L-2 \end{array} \right) + L(L-1) \left( \begin{array}{c} N-3 \\
L-3 \end{array} \right) \bigg\}\\
& = & {u^2 ( 1+u)^{N-4} \bigl( u(uN+4) +1 \bigr) \over (1+u)^N -1} ~ N
\end{eqnarray}
Therefore, the ratio of the variance to the mean squared is
\begin{equation}
{\langle r_1^2 \rangle - \langle r_1 \rangle ^2 \over \langle r_1 \rangle^2} = { 1 + u \Bigl(4- {Nu \over (1+u)^N-1}
\Bigr) \over (1+u)^2} \times {1 \over \langle r_1 \rangle}
\end{equation}
The coefficient of $1/\langle r_1 \rangle$ in this expression is of $O(1)$ for $u \sim N^{-a}$, with $0 \leq a \leq
1$.
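The closed forms for $\langle r_1 \rangle$ and $\langle r_1^2 \rangle$ can be checked against the defining sums with exact rational arithmetic. A minimal Python sketch of ours ($N$ and $u$ below are arbitrary test values):
\begin{verbatim}
from fractions import Fraction
from math import comb

def c(n, k):   # binomial coefficient that vanishes out of range
    return comb(n, k) if 0 <= k <= n else 0

N, u = 30, Fraction(1, 3)
Z  = ((1 + u)**N - 1) / (2 * N)
m1 = sum(u**L * c(N - 2, L - 2) for L in range(1, N + 1)) / (2 * Z)
m2 = sum(u**L * (c(N - 2, L - 2) + (L - 1) * c(N - 3, L - 3))
         for L in range(1, N + 1)) / (2 * Z)

assert m1 == N * u**2 * (1 + u)**(N - 2) / ((1 + u)**N - 1)
assert m2 == (N * u**2 * (1 + u)**(N - 4)
              * (u * (u * N + 4) + 1) / ((1 + u)**N - 1))
print("moment formulas verified")
\end{verbatim}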
Pulling everything together, in the unbiased ensemble with $u=1$, the average number of gauge factors is $N/
2$ and the number of abelian factors is $N/4$. Both of these quantities are typical in the sense of (\ref{typical})
and hence highly predictable in this landscape without any coarse-graining. In a biased ensemble with $u \sim
{1 / \sqrt{N}}$, the total number of gauge factors is $O(\sqrt{N})$, and the number of abelian factors is $O(1)$.
Since variance is of the same order as the mean for both quantities, the number of gauge factors is typical and
thus predictable, but the number of abelian factors is not. In this case, we expect that a coarse-grained statistic,
such as the fraction of gauge groups with ranks in a given range, would be more predictable, as in the matter-free case.
\paragraph{Higher Ranks}
To find the expectation value of the occupation number of rank $k$, we can insert a ``chemical potential'' for that
rank. So
\begin{equation}
Z(u,\{y_k\}) = \sum_{L=1}^N {u^L \over 2L} \left[ \sum_{k=1}^N
q^k y_k \right]^L |_{q^N} \, ,
\end{equation}
where the left hand side equals the coefficient of $q^N$ in the right hand side.
The expectation value $\langle r_k \rangle $ is given by
\begin{eqnarray}
\langle r_k \rangle &=& \partial_{y_k} \log \Bigl(Z(u,\{y_k\})\Bigr) |_{\{y_k\}=1} \\
& = & {1 \over Z(u)} \sum_{L=1}^N {u^L \over 2} \left( \begin{array}{c} N-k-1 \\
L-2 \end{array} \right) \\
&=& {u^2 (1+u)^{N-k-1} \over (1+u)^N -1}N
\end{eqnarray}
In the unbiased ensemble with $u\sim 1$, $\langle r_k \rangle \sim (1/2)^{k+1} N$ as we found in the canonical
ensemble. Similarly, the expectation value $\langle r_k^2 \rangle$ is:
\begin{eqnarray}
\langle r_k^2 \rangle &=& { 1 \over Z} y_k \partial_{y_k} y_k \partial_{y_k} \Bigl(Z(u,\{y_k\})\Bigr) |_{\{y_k\}=1} \\
& = & {1 \over Z} \sum_{L=1}^N {u^L \over 2} \bigg\{ \left( \begin{array}{c} N-k-1 \\
L-2 \end{array} \right) + (L-1) \left( \begin{array}{c} N-2k-1 \\
L-3 \end{array} \right) \bigg\} \\
& = & {u^2 (1+u)^{N-2k-2} \Bigl( 2u + (N-2k+1) u^2 + (1+u)^{k+1} \Bigr) \over (1+u)^N-1} N
\end{eqnarray}
So the ratio of the variance to the mean squared is
\begin{equation}
{ {\rm Var}(r_k) \over \langle r_k \rangle^2 } = {1 \over (1+u)^{k+1}} \bigg\{
(1+u)^{k+1} + u\bigl((1-2k)u+2\bigr) -{Nu^2 \over (1+u)^N-1} \bigg\} \times {1 \over \langle r_k \rangle}
\end{equation}
This is always $O(1)$ times $1 / \langle r_k \rangle$, and hence the number of gauge groups of a given rank is
typical, and hence highly predictable, if the average is large.
\paragraph{Lessons: } We are finding that in an anarchic landscape of cyclic quiver gauge theories, the actual
number of gauge factors of a given rank is highly predictable. Specifically, the distribution of ranks is
exponential and the low rank populations are statistically predictable with high confidence. In a biased
landscape in which the measure favors having a number of gauge factors that is sufficiently smaller than the
total rank, we found that the number of factors with a fixed rank is not typical in general, although the total
number of factors can be. In this case, one could test whether an appropriately coarse grained quantity, like the
fraction of gauge groups with ranks in some range, is more predictable.
\section{Thinking about the general quiver}
To extend our analysis to the general quiver gauge theory we could try to compute a partition sum of the form
\begin{equation}
Z = \sum_L \sum_{N_i, A_{ij}}
\exp(-\beta \sum_i N_i - \lambda \sum_{ij} A_{ij} N_i N_j )
\end{equation}
where $L$ is the number of nodes of the quiver, $N_i$ are the
ranks of the gauge groups, and $A_{ij}$ are the numbers of
bifundamentals between nodes $i$ and $j$. One difficulty is that
this partition sum is canonical and, as we found, it may not
implement the constraints on the total rank and the amount of
matter very well because of the rapid growth of the space of
theories. Secondly the sum should only be over anomaly cancelled
theories. Thirdly, there are discrete symmetries which tend to
lead to vanishing expectation values. In view of this, below we
will develop some approaches to dealing with the latter two
issues.
\subsection{Implementing anomaly cancellation}
\paragraph{A loop basis for anomaly free theories: }
If all the gauge groups have the same rank, the general anomaly
free theory can be constructed by making sure that the
bifundamental fields always form closed loops. One can always
construct such matter distributions by saying that each of the
possible loops in the quiver has $n_i$ fields running around it.
Where loops overlap the matter content will either add or subtract
depending on the orientation of the loops (again here we are
supposing that non-chiral doublets decouple; in addition, we
identify negative $A_{ij}$ with a positive $A_{ji}$ and vice
versa). Any loop in the quiver can be constructed by summation of
a basis of independent 3-loops and it can be shown that this basis
will have
\begin{equation}
N_L = {(L - 1) (L - 2) \over 2}
\Label{loopcount}
\end{equation}
elements. For example, consider the case with $L=6$ nodes, i.e.
there are six gauge groups that we label from 1 to 6. Then, the
following 3-loops form a basis for all loops: (123), (124),
(125), (126), (234), (235), (236), (345), (346), (456). The basis
has 10 elements which is equal to $N_6 = (6-1) (6-2)/2$. We can
check that the $N_L$ loops provide enough free parameters to
parameterize the space of anomaly free theories. To see this,
note that the solutions to the anomaly cancellation equations form
a vector space of dimension
\begin{equation}
{L(L -1) \over 2} - (L-1) = {(L-1) (L-2) \over 2} = N_L
\Label{anomcount}
\end{equation}
where $L(L-1)/2$ is the number of parameters $A_{ij}$ from which
we have subtracted the $(L-1)$ anomaly cancellation conditions on
$L$ groups.
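The counting (\ref{anomcount}) can also be verified mechanically: for equal ranks the anomaly conditions are linear in the net chiral content $B_{ij} = A_{ij} - A_{ji}$, and the solution-space dimension is the number of independent $B_{ij}$ minus the rank of the constraint matrix. A small numerical sketch of ours:
\begin{verbatim}
import numpy as np
from itertools import combinations

def anomaly_solution_dim(L):
    pairs = list(combinations(range(L), 2))
    M = np.zeros((L, len(pairs)))
    for col, (i, j) in enumerate(pairs):
        M[i, col], M[j, col] = 1, -1  # B_ij: +1 at node i, -1 at node j
    return len(pairs) - np.linalg.matrix_rank(M)

for L in range(3, 9):
    assert anomaly_solution_dim(L) == (L - 1) * (L - 2) // 2
print("dimension formula verified for L = 3,...,8")
\end{verbatim}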
Even when the ranks are unequal, anomaly free theories can be
constructed from a basis of 3-loops because (\ref{loopcount}) and
(\ref{anomcount}) are equal. However, the links of any given
3-loop will have to be populated with a different number of fields
in a way related to the GCDs of the three groups appearing in it.
For example, suppose one has the three gauge groups $SU(r_1 \cdot
g) \times SU(r_2 \cdot g) \times SU(r_3 \cdot g)$ where $r_i$ are
a triple of positive integers that do not share a common factor
and $g$ is any other positive integer. Then if we take the number
of chiral bifundamentals between gauge group $i$ and $j$ to be
$A_{ij} = \epsilon^{ijk} r_k$, we get an anomaly free
theory.\footnote{As a specific example consider a four-node quiver
with gauge group $SU(3)_1 \times SU(5)_2 \times SU(7)_3 \times
SU(8)_4 $. Then, we can get an anomaly free theory by making a
loop of four with $A_{12}= 7 \cdot 8$, $A_{23}= 3 \cdot 8$,
$A_{34}= 3 \cdot 5$, $A_{41}= 7 \cdot 5$. We can obtain this as a
sum of two 3-loops: (124) + (234). The loop (124) corresponds to an
$SU(3)_a \times SU(5)_b \times SU(8)_c$ with $A_{ab}=8$,
$A_{bc}=3$, $A_{ca}=5$ while the loop (234) corresponds to
$SU(5)_i \times SU(7)_j \times SU(8)_k$ with $A_{ij}=8$ ,
$A_{jk}=5$, $A_{ki}=7$. To get the four loop (1234), we need to
cancel the (24) link, which means that we need to add $7 \cdot (124) +
3 \cdot (234)$. This precisely reproduces the four loop numbers.
Another anomaly free theory could be generated by adding (124) to
(234). In this case, the fields along link (24) will not cancel,
but by construction, the number of fields going into and out of
each gauge group will cancel. }
This way of thinking suggests that one way to do the statistics of
anomaly free theories is to first select a basis of anomaly free
3-loops and then do the statistics of populations of these loops
given a bound on the total number of loops.
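As a concrete check, the four-node example of the preceding footnote can be assembled from its two basis 3-loops and tested for anomaly cancellation. A small numerical sketch of ours:
\begin{verbatim}
import numpy as np

N = np.array([3, 5, 7, 8])      # SU(3) x SU(5) x SU(7) x SU(8)
A = np.zeros((4, 4), dtype=int)
A[0, 1] += 7 * 8; A[1, 3] += 7 * 3; A[3, 0] += 7 * 5  # 7 x loop (124)
A[1, 2] += 3 * 8; A[2, 3] += 3 * 5; A[3, 1] += 3 * 7  # 3 x loop (234)

B = A - A.T                # net chiral content
assert B[1, 3] == 0        # the (24) link is vector-like and cancels
assert np.all(B @ N == 0)  # anomaly cancellation at every node
print(B[0, 1], B[1, 2], B[2, 3], B[3, 0])   # -> 56 24 15 35
\end{verbatim}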
\paragraph{Anomaly free, asymptotically free, chiral, equal rank
gauge theories: } This set of theories is very easy to analyze, as there can be
only five different types of vertices in such quivers (see
Fig.~\ref{fivevertex}). Therefore the most general quiver
arises by combining these five vertices in various combinations.
Superficially, the second vertex with two separate lines coming in
and two separate lines going out allows for the largest amount of
combinatorial freedom and will quite likely dominate this set of
theories. It would be interesting to explore this class further.
Possibly it can be mapped to an existing solvable lattice model in
statistical mechanics.
\begin{figure}
\begin{center}
\includegraphics[scale=0.9]{fivevertices.eps}
\caption{The five vertices from which all anomaly-free, chiral,
asymptotically free, equal rank theories are constructed}
\Label{fivevertex}
\end{center}
\end{figure}
\paragraph{Anomaly cancellation for a general quiver by using an extra node: }
If we drop the constraint of asymptotic freedom, the set of anomaly free, chiral, and equal rank theories
is easy to parametrize. It is not difficult to see that if we take any set of edges ${\cal
S}$ such that the edges together form a connected tree which
contains all vertices of the quiver, then the anomaly equations
uniquely determine the $A_{ij},A_{ji}$ with $(ij)\in {\cal S}$ in
terms of the $A_{ij},A_{ji}$ with $(ij) \not\in {\cal S}$. Thus we
can simply take an arbitrary set of chiral matter fields for all
edges not in ${\cal S}$, after which anomaly cancellation uniquely
fixes the remaining links.
A simple example of the set ${\cal S}$ is the star-shaped tree
consisting of all edges $(1i)$, $i=2\ldots L$. In other words, if
we remove one vertex and all its edges, and arbitrarily specify
the chiral matter content in the remaining quiver with $L-1$
vertices, this uniquely determines an anomaly free, chiral, equal
rank quiver gauge theory with $L$ gauge groups. To illustrate
this, consider a four-node quiver. Take $A_{12}=a$, $A_{32}=b$ and
$A_{13}=c$\,\mbox{}\footnote{If we say $A_{12}=a$, then we mean
that $A_{12}=a$ for $a\geq0$ and $A_{21}=-a$ for $a\leq 0$. This
will guarantee that the theory is chiral.}. Then anomaly
cancellation uniquely fixes
\begin{equation}
A_{24}=a+b, \quad A_{41}=a+c,\quad A_{43}=b-c .
\end{equation}
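Indeed, one can check the balance of incoming and outgoing arrows node by node:
\begin{equation}
\mbox{node 1:}~~ a + c = A_{41}\,, \qquad
\mbox{node 2:}~~ a + b = A_{24}\,, \qquad
\mbox{node 3:}~~ c + A_{43} = b\,,
\end{equation}
while the remaining condition at node 4, $A_{24} = A_{41} + A_{43}$, is then satisfied automatically.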
This method can be extended to theories where the gauge groups
have unequal ranks. First consider an arbitrary chiral, quiver
with $L - 1$ nodes. Let the rank of the group at the ith node
be $N_i$. For anomaly cancellation, the net number of
fundamentals minus antifundamentals at each node must be zero.
Let $K_i$ be the net excess matter (number of fundamentals minus
anti-fundamentals) at each node. Then we can always add an
additional $U(1)$ gauge group with $N_i \, K_i$ bifundamental
fields under this $U(1)$ and the $U(N_i)$ of the ith node. This
will give an anomaly free theory. This extra node can be
non-abelian, but its rank is restricted to be a divisor of the set
$\{ N_i \, K_i \}$. In this way, the statistics of general
anomaly free quivers on $L$ nodes can be studied by first
constructing arbitrary $L-1$ node quivers and then adding an extra
node according to the above algorithm.
\subsection{Dealing with discrete quiver symmetries: an example}
From above, the set of anomaly free, chiral and equal rank theories with four nodes is
parametrized by the rank $N$ of the gauge groups and three
integers $a,b,c$. The measure (\ref{measure}) becomes
\begin{equation}
\rho = \exp\left(-4 \beta N - \lambda N^2 \left( |a| + |b| + |c| +
|a+b| + |a+c| + |b-c| \right)\right).
\end{equation}
In the remainder, we will fix the value of $N$ and look only at
the distribution of $a,b,c$.
By symmetry, the expectation values of $a,b,c$ are all zero. This
happens because there are a number of discrete symmetries of the
quiver due to which averages vanish. For example, for every
chiral quiver there is the anti-chiral quiver in which the
orientations of all fields are reversed. Averaging these two
will formally give $a=b=c=0$. Similar phenomena will always
happen whenever we consider sets of quivers with symmetries. More
structure appears once we break the symmetries and look at the
average quiver in an ensemble with some symmetry breaking
conditions imposed. Suppose for example that we impose $a>0$. This
leaves a $\mathbb Z_2$ symmetry that exchanges vertices 3 and 4.
Therefore, the expectation value of $A_{34}$ will be zero.
Symmetry considerations further show that
\begin{equation}
\langle \frac{1}{2} A_{12} \rangle = \langle A_{23}\rangle =
\langle A_{24} \rangle= \langle A_{31}\rangle =\langle
A_{41}\rangle .
\end{equation}
Furthermore, each of these expectation values is proportional to
$1/\lambda N^2$.
A boundary condition that completely breaks the symmetry is to
impose that $a\geq b \geq 0$. We can always achieve this up to a
permutation of the vertices so there is no loss of generality. The
analysis of the expectation values of the number of matter fields
in this ensemble is more tedious but can still be done explicitly.
To leading order in $\epsilon=\lambda N^2$ we
obtain\footnote{Here, by $\langle A_{ij} \rangle $ we really mean
$\langle A_{ij}-A_{ji} \rangle $.}
\begin{eqnarray}
& & \langle A_{12}\rangle = \frac{47}{84 \epsilon}, \quad \langle
A_{32} \rangle = \frac{4}{21\epsilon}, \quad \langle A_{31}
\rangle = \frac{61}{588\epsilon}, \nonumber \\ & &
\langle A_{24} \rangle =
\frac{3}{4\epsilon},\quad \langle A_{41} \rangle
=\frac{67}{147\epsilon} ,\quad \langle A_{43} \rangle =
\frac{173}{588\epsilon}.
\end{eqnarray}
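These numbers can be reproduced to Monte Carlo accuracy with a simple Metropolis simulation of the measure on the region $a \geq b \geq 0$. The following rough Python sketch is our own illustration, in the continuum approximation with $\epsilon$ scaled to $1$; the sample means of $(a,\,b,\,-c)$ are to be compared with $(47/84,\, 4/21,\, 61/588)$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def weight(a, b, c):       # eps = 1; the measure scales as A ~ 1/eps
    if not (a >= b >= 0):
        return 0.0
    return np.exp(-(abs(a) + abs(b) + abs(c)
                    + abs(a + b) + abs(a + c) + abs(b - c)))

x, chain = np.array([1.0, 0.5, 0.0]), []
for step in range(200000):
    y = x + rng.normal(scale=0.5, size=3)
    if rng.random() < weight(*y) / weight(*x):
        x = y
    if step > 20000:       # discard burn-in
        chain.append(x.copy())
a, b, c = np.array(chain).T
print(a.mean(), b.mean(), -c.mean())
\end{verbatim}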
Thus we see that after modding out the $\mathbb{Z}_2$ symmetries of the
quiver we are able to find an interesting average quiver. Of course,
since there are only four nodes here, we do not expect any notion
of statistical typicality. To study whether general large quivers
have some typical structure, we will have to proceed as above, by
parameterizing the space of anomaly cancelled theories and then
imposing symmetry breaking conditions.
\subsection{Towards dynamics}
\begin{figure}
\begin{center}
\includegraphics{quivers3a.eps} \\
\vspace{-0.2in}
(a) \\
\vspace{0.5in}
\includegraphics{quivers3b.eps} \\
(b) \\
\vspace{0.5in}
\includegraphics{quivers5.eps} \\
(c) \\
\vspace{0.5in}
\includegraphics{quivers6.eps} \\
(d) \vspace{0.2in} \caption{Examples of flows to the infrared in
asymptotically free, four node quiver gauge theories with equal
rank gauge groups. The tree level superpotential is assumed to be
trivial. (a) This quiver flows to a theory with a Coulomb branch
at low energies. (b) This theory is a conformal field theory at
low energies. (c) Assuming the two gauge groups with $N$ flavors
have a higher scale than the groups with $2N$ flavors, this theory
flows to a theory with two confined groups, with the massless
mesons of the two groups then participating in an interacting
conformal field theory. (d) Assume that the group that confines
has a higher dynamical scale than the other groups, and that the
confinement is on the Baryonic branch. The massless mesons of this
confining factor then interact with the rest of the theory
leading to a flow to an interacting conformal field theory. }
\Label{fournodes}
\end{center}
\end{figure}
While we have been focusing on the structure of those field
theories in which anomalies cancel, we should also be paying
attention to dynamics. Since we are dealing with ${\cal N} = 1$
field theories, if $N_f > 3 N_c$ for any gauge group then it will
be infrared free. If $N_f < 3 N_c$ it will be asymptotically
free. If $N_f = 3 N_c$ the one-loop beta function vanishes. If we
distribute fields into a quiver, the bound on the total number of
fields will tend to cause the low rank gauge groups to contain
more fields. Thus they will tend to be infrared free. What is
more, because, as we have seen above, anomaly cancellation
including high rank gauge groups tends to require lots of fields,
if a high rank group is connected to the rest of the quiver it
would tend to push groups in the quiver towards infrared freedom.
In general, studying RG flow requires us to know the
superpotential or at least to scan statistically over them.
Minimally, we should include all cubic and quartic terms in the
superpotential with $O(1)$ coefficients multiplied by the appropriate
scale. (The cubic terms are classically marginal, and some
quartic terms are known to become marginal under RG flow.) Doing
such a dynamical analysis of general quiver gauge theories is
beyond the scope of this paper, but as an initial step to gain
some experience with how this works we will study some examples
without a superpotential.
\subsubsection{Four node, asymptotically free quivers}
First recall that $SU(N)$ gauge theory with $N$ flavors confines
at energies below its dynamical scale, while $SU(N)$ theory with
$2N$ flavors flows to an interacting conformal fixed point. We
will assume that the confining $SU(N)$ theory is on the baryonic
branch. We can then naively take a quiver and simply proceed to
allow individual gauge factors to confine, Seiberg dualize
\cite{seiberg} etc. as their dynamics becomes strong. A cursory
analysis of four node, asymptotically free quivers (see some
examples with equal ranks $N$ in Fig.~\ref{fournodes}, constructed
from the vertices in Fig.~\ref{fivevertex}) suggests that one will
tend to get interacting conformal field theories in which the
mesons of the confining factors participate. This suggests that
unparticles \cite{georgi} might be generic in these settings.
\subsubsection{General quiver with unequal gauge groups}
First consider the simple case of a loop of three gauge groups,
$SU(N_1) \times SU(N_2) \times SU(N_3)$ which has to cancel
anomalies by itself. For example, this can happen if the 3-loop
is isolated within a larger quiver. As we discussed, such
primitive 3-loops can be used to generate larger anomaly free
quiver gauge theories. To cancel anomalies, the $(12), (23), (31)$
links will generically contain $N_3, N_1, N_2$ bifundamentals
respectively.\footnote{The minimal solution to the anomaly
cancellation equations will actually be that the number of
bifundamentals connecting $i$ and $j$ is $N_k / {\rm GCD}(\{N_i,
N_j,N_k\})$ as in (\ref{cyclicanomaly}). But generically the GCD
will be $1$.} Thus for group $i$ to be asymptotically free one
will need that
\begin{equation}
3N_i > N_j N_k ~~~~~~ i \neq j \neq k \, .
\end{equation}
Taking all the $N_i > 3$ and $N_1 < N_2 < N_3$, it is clear that
$SU(N_3)$ is the only gauge group that has the possibility of
being asymptotically free.
So for any
anomaly-free, chiral, connected quiver with three nodes with ranks
at least $3$, either all three groups are infrared free, or only
the largest one is asymptotically free if it has sufficiently
large rank.
The same argument no longer works for connected quivers with more
than three gauge groups; still, it is easy to see that generically
high rank gauge groups with links to smaller rank gauge groups
have a chance to be asymptotically free, whereas low rank gauge
groups connected to higher rank gauge groups tend to be IR free.
Now consider three cases for the dynamics of a quiver with unequal gauge groups.
\begin{enumerate}
\item The number of fields $K$ is very large. If so,
it seems likely that in a randomly chosen field theory all
possible links in the quiver will be populated with some
multiplicity, although the links between low rank groups will be
enhanced. In this circumstance our arguments suggest that the
entire theory will be infrared free.
\item The number of fields $K$ is small.
Presumably the lowest rank gauge groups will tend to have matter
and the quiver will typically consist of several disconnected
smaller clusters that each form a connected quiver gauge theory.
The high rank gauge groups with little matter would then confine
at the appropriate dynamical scale.
\item For an intermediate number of fields the clusters
will percolate and presumably there is an interesting phase structure here.
\end{enumerate}
\section{Conclusion}
It is somewhat unsettling to attempt to make {\it statistical}
predictions for the structure of the theory describing nature
because, ever since Galileo, we have been fortunate that
observations and symmetries have constrained the possibilities
sufficiently to essentially give a unique theory describing the
phenomena under investigation. But string theorists are in the
unprecedented situation of hoping to make predictions for the
fundamental theory up to the Planck scale given observations below
the TeV scale, subject to only very general constraints such as
consistency and in particular a consistent coupling to quantum
gravity. In such a situation, the best one can do is to predict
the likelihood of possible high energy theories, conditioned on
the known facts below the TeV scale, the known constraints, and
our best guess regarding the measure on the space of theories.
This is literally all that we can know. While this sort of
Bayesian approach is unfamiliar in particle physics, it is much
less unusual in cosmology where one does conceive of ensembles of
possible universes or ensembles of domains with different
low-energy physics in a single universe. We would nevertheless
like to emphasize that we do not want to exclude the possibility
that consistency requirements plus experimental input will
eventually yield an (almost) unique fundamental theory, we are
merely entertaining the logical possibility that this will turn
out to not be the case.
In this paper we have used the uniform measure on specific
effective field theory landscapes, but it is not obvious that this
is the measure prescribed by string theory. For example,
dynamics can play a role in determining the appropriate measure
because there can be transitions between vacua with different
properties. Also, renormalization group flows can modify the
measure in the infrared as theories flow towards their fixed
points. Given the correct measure, our analysis could be repeated
to find the typical predictions. However, because the uniform
measure leads to typicality for some coarse-grained properties, an
alternative measure would have to concentrate on an exponentially
sparse part of the configuration space in order to change the
typical predictions of the uniform measure.
The general approach to model building suggested by these
considerations does not involve the usual desert with a high scale
GUT. Instead it appears that one would statistically expect a
plethora of gauge factors leading to interesting structures at all
scales up to the string scale. Amongst these gauge factors there
will be some groups with high ranks and others with low ranks. If
there is a bound on the total number of matter fields,
statistically, the higher rank groups will tend to have fewer
fundamentals (since this eats up matter). Thus they will tend
towards confinement at a relatively high dynamical scale if all
couplings are unified at the string scale. On the other hand if
you have too much matter in any group it will tend to infrared
triviality. Thus the low rank groups, if they are to have IR
dynamics, will tend to be largely decoupled from the high rank
groups. Thus if we study the statistics of anarchic landscapes
of field theories, conditioned on having interesting low energy
dynamics, we will tend towards a structure with dynamical low rank
groups largely decoupled from a complex, interacting higher rank
sector.
The explicit examples of toy landscapes that we studied in Sec.~3
do not have very interesting dynamics. The matter-free case
confines. The ring quivers that we studied in detail are
generically infrared free since anomaly cancellation imposes the
need for lots of matter unless the individual gauge group ranks
conspire to make the GCD in (\ref{cyclicanomaly}) large. Thus we
see that conditioning a field theory landscape on having
interesting low energy dynamics, along with anomaly cancellation,
will be a major constraint, and is likely to significantly modify
the measure on the space of theories. It would be amusing if
curious number-theoretic properties, like the appearance of large
GCDs, had to be given more weight. It would also be very
interesting to explore other measures; for example the results in
\cite{timoetal,douglastaylor} suggest weighting rank-$k$ gauge
group factors with an extra factor of $1/k^2$ compared to the
anarchic measures we have been using.
\paragraph{Acknowledgments: }
We have benefited from conversations with Ofer Aharony, Michael
Douglas, Gary Gibbons, Florian Gmeiner, Dieter L\"{u}st, Juan
Maldacena, Yasunori Nomura, Carlos Nunez, Al Shapere, Tanmoy
Vachaspati, Brian Wecht, and Timo Weigand. We are grateful
to the organizers of the Sowers Workshop at Virginia Tech where
this work was initiated. V.B. thanks DAMTP, Cambridge, and the
Physics Department at Swansea, and A.N. thanks the Physics
Departments at Penn, the University of Amsterdam, and the IAS
for hospitality during this project. VB was supported by the DOE
under grant DE-FG02-95ER40893, by the NSF collaboration grant
OISE-0443607, and as the Helen and Martin Chooljian member of the
Institute for Advanced Study. AN is supported by a STFC advanced
fellowship. JdB is partially supported by the FOM foundation.
Finally, we have enjoyed the atriums of the British Library and
the British Museum while this paper was being completed.
\section{Introduction and summary} \label{Intro}
The AdS/CFT correspondence \cite{Maldacena:1997re} provides deep insight into wide areas of physics, and it is important to uncover further properties of this duality in order to understand string theories and gauge theories. While many attempts have succeeded in confirming it in lower dimensions, the higher-dimensional versions of the correspondence remain mysterious. The main reason is that little is known about conformal field theories in higher dimensions. However, it was recently found that supersymmetric localization can be applied to 5D super Yang-Mills theories on curved geometries, and that their partition functions can be computed exactly, as described below. We can utilize these results to test $\mathrm{AdS}_{d + 1}$/CFT$_{d}$ for $d \geq 5$. For example, there are already a few pieces of evidence for $\mathrm{AdS}_{6}$/CFT$_{5}$ \cite{Jafferis:2012iv,Assel:2012nf}.
5D $\mathcal{N} = 1$ super Yang-Mills theories have been constructed on several curved backgrounds, and their partition functions and the expectation values of Wilson loops have been calculated by the localization technique \cite{Kallen:2012cs, Hosomichi:2012ek, Kallen:2012va, Kim:2012ava, Imamura:2012xg, Kim:2012tr, Imamura:2012bm, Kim:2012qf, Qiu:2013pta, Kim:2013nva, Qiu:2013aga, Schmude:2014lfa}.
The 5D $\mathcal{N}=1^{*}$ theory on the round five-sphere with a radius $r$, which contains a vector multiplet and an adjoint hypermultiplet, has $\mathcal{N}=2$ supersymmetry if the mass for the hypermultiplet takes a specific value.
Then the partition functions and Wilson loops reduce to the Chern-Simons matrix model \cite{Kim:2012ava, Minahan:2013jwa} first considered in \cite{Marino:2002fk}. Also, we can obtain the 5D maximal super Yang-Mills (MSYM) theory on $S^{5}$ from the (2,0) theory by dimensional reduction with an appropriate twist that preserves the supersymmetry \cite{Kim:2012ava}. It is argued in \cite{Witten:1995ex, Douglas:2010iu, Lambert:2010iw} that Kaluza-Klein modes in 6D can be identified with instanton particles in 5D under
\begin{align}
R_{6} = \frac{g_{YM}^{2}}{8 \pi^{2}}, \label{KKinst}
\end{align}
where $R_{6}$ is the radius of the compactified $S^{1}$, and $g_{YM}$ is the five-dimensional gauge coupling constant. Following their discussion, the 5D MSYM seems to contain all degrees of freedom of the (2,0) theory. An observation supporting this claim is that the free energy obtained from the Chern-Simons matrix model reproduces the $N^3$ behavior\footnote{However, the free energies obtained from the Chern-Simons matrix model and from supergravity differ by an overall constant factor \cite{Kallen:2012zn, Minahan:2013jwa}.} of the supergravity analysis on $\mathrm{AdS}_{7} \times S^{4}$ \cite{Kim:2012ava, Minahan:2013jwa, Kallen:2012zn, Giasemidis:2013oea}.
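Combining \eqref{KKinst} with the matrix-model coupling $\beta = g_{YM}^{2}/(2 \pi r)$ used in section \ref{CSMM} below, one finds
\begin{align}
\beta = \frac{g_{YM}^{2}}{2 \pi r} = \frac{4 \pi R_{6}}{r},
\end{align}
so $\beta$ directly measures the radius of the compactified circle in units of the $S^{5}$ radius $r$.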
In this paper, we focus on the expectation values of Wilson surfaces for the $\mathrm{AdS}_{7}$/CFT$_{6}$ correspondence. The Wilson surfaces in the (2,0) theory are a class of nonlocal operators localized on surfaces in 6D \cite{Ganor:1996nf}. Through the above argument, Wilson surfaces extending along the compactified direction are Wilson loops in the 5D theory. Therefore, we compute their expectation values by using the Chern-Simons matrix model. In particular, we evaluate the expectation values of Wilson loops in large-rank anti-symmetric and symmetric representations in the large $N$ limit.
On the other hand, a probe M2-brane ending on multiple M5-branes is, naively, the M-theory description of the Wilson surface \cite{Ganor:1996nf}. The holographic description of a spherical Wilson surface has been studied in \cite{Berenstein:1998ij,Corrado:1999pi}. Recently, it has been clarified in \cite{Young:2011aa, Kim:2012qf, Minahan:2013jwa} that the expectation value of the Wilson surface wrapping $S^1\times S^1$ in the fundamental representation matches that of the M2-brane wrapping $\mathrm{AdS}_{3}$.
In this paper, we consider a probe M5-brane description of the Wilson surface \cite{Lunin:2007ab,Chen:2007ir,Chen:2007zzr,Chen:2008ds,D'Hoker:2008qm} instead of the M2-brane. When the number of overlapping and winding M2-branes becomes large, they blow up into an M5-brane with worldvolume flux wrapping one of two types of submanifolds of $\mathrm{AdS}_{7} \times S^{4}$, depending on the representation: one is $\mathrm{AdS}_{3} \times S^{3}$ with the $S^{3}$ lying entirely in $\mathrm{AdS}_{7}$, and the other is $\mathrm{AdS}_{3} \times \tilde{S}^{3}$ with the $\tilde{S}^{3}$ contained in $S^{4}$. This is the analogue of the D3-brane and D5-brane descriptions of the symmetric and anti-symmetric Wilson loops in the $\mathrm{AdS}_5$/CFT$_4$ correspondence \cite{Rey:1998ik,Drukker:2005kx,Hartnoll:2006hr,Yamaguchi:2006tq,Gomis:2006sb}. According to this analogy, we expect that an M5-brane wrapping $\mathrm{AdS}_{3} \times S^{3}$ corresponds to the symmetric representation and one wrapping $\mathrm{AdS}_{3} \times \tilde{S}^{3}$ corresponds to the anti-symmetric representation. We calculate the expectation values of the Wilson surfaces by evaluating the on-shell actions of these M5-branes. In the calculation for the M5-branes, we use the so-called Pasti-Sorokin-Tonin (PST) action \cite{Pasti:1997gx, Bandos:1997ui, Bandos:1997gm}.
We compare the results on the CFT side and the gravity side, and we find new evidence supporting the $\mathrm{AdS}_{7}$/CFT$_{6}$ correspondence; the results for the M5-brane wrapping $\mathrm{AdS}_{3} \times S^{3}$ and the one wrapping $\mathrm{AdS}_{3} \times \tilde{S}^{3}$ perfectly agree with the Wilson surfaces in the symmetric and anti-symmetric representations, respectively. We note that the authors of \cite{Minahan:2013jwa} have suggested that the relation \eqref{KKinst} be modified at strong coupling such that the constant coefficient becomes dependent on the square of the mass of the adjoint hypermultiplet. One can check that our results are indeed consistent with their argument.
One of the interesting future directions is to study the relation between the bubbling geometry and Wilson surfaces in larger representations.
A class of bubbling solutions of 11-dimensional supergravity dual to Wilson surfaces is obtained in \cite{Yamaguchi:2006te,Lunin:2007ab,D'Hoker:2008wc,D'Hoker:2008qm}, along the lines of the bubbling geometries for local operators \cite{Lin:2004nb} and Wilson loops \cite{Yamaguchi:2006te,Lunin:2006xr,D'Hoker:2007fq}. In these solutions, the eigenvalue distribution of the matrix model is suggested to take the following form: the real line of the eigenvalue space is divided into black and white segments, and the density is a positive constant on the black segments and zero on the white segments. The unit length of a black segment is twice that of a white segment. Actually, the eigenvalue distribution of the Chern-Simons matrix model obtained in \cite{Halmagyi:2007rw} is consistent with the bubbling solutions. This observation is another piece of evidence for the correspondence. It would be interesting future work to calculate the expectation values of Wilson surfaces by using the bubbling solutions and compare them to the calculation in the Chern-Simons matrix model.
The rest of the paper is organized as follows: In section \ref{CSMM} we use the Chern-Simons matrix model and evaluate the expectation values of Wilson surfaces in anti-symmetric representation and symmetric representation.
In section \ref{M5}, we use probe M5-branes on the gravity side and calculate the expectation values of the Wilson surfaces.
\section{Wilson surfaces in Chern-Simons matrix model} \label{CSMM}
\subsection{Chern-Simons matrix model in large $N$}
We consider the 6D A$_{N-1}$ type (2,0) theory on $S^1\times S^5$ and a Wilson surface in this theory. This Wilson surface wraps $S^1\times S^1$, where the first $S^1$ is orthogonal to $S^5$ and the second $S^1$ is a great circle of $S^5$. This Wilson surface can be treated as a Wilson loop wrapping a great circle in the 5D SU$(N)$ MSYM on $S^5$ if the boundary condition in the $S^1$ direction is twisted appropriately \cite{Kim:2012ava,Kim:2012qf}.
The expectation values of Wilson loops wrapping the great circle of $S^5$ with radius $r$ are calculated by using the localization technique \cite{Kallen:2012cs, Hosomichi:2012ek, Kallen:2012va, Kim:2012ava, Kim:2012qf}. In particular, the expectation value of the Wilson loop in the representation $R$ in MSYM with coupling constant $g_{YM}$ reduces to the Chern-Simons matrix model
\begin{align}
\WLR
&= \frac{1}{\mathcal{Z}} \int \prod_{i = 1}^{N} d \nu_{i} \prod_{i, j, i \neq j} \left|\sinh \frac{N}{2}(\nu_{i} - \nu_{j}) \right|
\exp \left[ - \frac{N^{2}}{\beta} \sum_{i = 1}^{N} \nu_{i}^{2} \right] \text{Tr}_{R} e^{N \nu}, \label{WLR}
\end{align}
where $\beta = \frac{g_{YM}^{2}}{2 \pi r}$.
$\mathcal{Z}$ is the partition function given by
\begin{align}
\mathcal{Z}
&:= \int \prod_{i = 1}^{N} d \nu_{i} \prod_{i, j, i \neq j} \left|\sinh \frac{N}{2}(\nu_{i} - \nu_{j}) \right|
\exp \left[ - \frac{N^{2}}{\beta} \sum_{i = 1}^{N} \nu_{i}^{2} \right]. \label{CSMMpf}
\end{align}
We evaluate these integrals in the limit $N\to\infty$ while $\beta$ is kept finite in order to compare them to the gravity calculation. Notice that this limit is different from the 't Hooft limit. For the 't Hooft limit, the expectation values of the Wilson loops are computed in \cite{Drukker:2010nc}.
Let us first consider the eigenvalue distribution of the partition function before calculating the Wilson loop. When we take $N \rightarrow \infty$ with fixed $\beta$, the hyperbolic sine factor simplifies, $|\sinh \frac{N}{2}(\nu_{i} - \nu_{j})| \simeq \frac{1}{2}\exp [\frac{N}{2}|\nu_{i} - \nu_{j}|]$, and we obtain, up to an overall constant,
\begin{eqnarray}
\mathcal{Z} \sim \int \prod_{i = 1}^{N} d \nu_{i} \exp \left[ - \frac{N^{2}}{\beta} \sum_{i = 1}^{N} \nu_{i}^{2} + \frac{N}{2} \sum_{i, j, i \neq j} \left| \nu_{i} - \nu_{j} \right| \right].
\end{eqnarray}
In this limit, both terms in the exponential are $O ( N^{3} )$ and, therefore, the integral can be evaluated by the saddle point method. This yields the saddle point equations for $\nu_{i}$:
\begin{eqnarray}
0 = - \frac{2 N^{2}}{\beta} \nu_{i} + N \sum_{j, i \neq j} {\rm sign} (\nu_{i} - \nu_{j} ).
\end{eqnarray}
We can easily find the following solution under the ordering $\nu_{i} > \nu_{j}$ for $i < j$:
\begin{eqnarray}
\nu_{i} = \frac{\beta}{2 N} ( N - 2 i ). \label{nui}
\end{eqnarray}
In other words the eigenvalue density is given by
\begin{eqnarray}
\rho ( \nu ) = \left\{ \begin{aligned}
\frac{1}{\beta} & \hspace{1em} \mbox{for } | \nu | \leq \frac{\beta}{2}, \\
0 & \hspace{1em} \mbox{for } | \nu | > \frac{\beta}{2}.
\end{aligned} \right.
\end{eqnarray}
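As a quick numerical check, one can verify the saddle point and the density at finite $N$; the short script below (with purely illustrative values of $N$ and $\beta$) confirms that a linear profile, agreeing with \eqref{nui} up to a subleading shift $\frac{\beta}{2N}$ of each eigenvalue, solves the saddle point equations exactly, and that the uniform spacing reproduces $\rho(\nu) = 1/\beta$ on $|\nu| \leq \beta/2$.
\begin{verbatim}
# Minimal numerical sketch: check the saddle point and the flat density.
import numpy as np

N, beta = 200, 2.0
i = np.arange(1, N + 1)
# linear profile; agrees with (nui) up to an O(1/N) shift
nu = beta / (2 * N) * (N + 1 - 2 * i)

sign_sum = np.array([np.sign(nu[k] - np.delete(nu, k)).sum()
                     for k in range(N)])
residual = -(2 * N**2 / beta) * nu + N * sign_sum
assert np.abs(residual).max() < 1e-8 * N**2   # saddle equations hold

# uniform spacing beta/N on [-beta/2, beta/2] gives
# density rho = (1/N)/(beta/N) = 1/beta
print(nu.max(), 1.0 / (N * np.diff(np.sort(nu)).mean()))
\end{verbatim}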
We note that instanton factors do not appear in our computation. The full partition function of the $\mathcal{N} = 1$ SYM on $S^{5}$ including instantons is derived in \cite{Kim:2012qf} as
\begin{align}
\mathcal{Z} ( \beta, m, \epsilon_{1}, \epsilon_{2} ) \sim \int \prod_{i = 1}^{N} d \nu_{i} \exp \left[ - \frac{N^{2}}{\beta ( 1 + a ) ( 1 + b ) ( 1 + c )} \sum_{i = 1}^{N} \nu_{i}^{2} \right] \prod_{A = 1}^{3} \mathcal{Z}_{\rm pert}^{( A )} \mathcal{Z}_{\rm inst}^{( A )},
\end{align}
where $\mathcal{Z}_{\rm inst}^{( A )}$ is an instanton one-loop determinant (see \cite{Kim:2012qf} for details). For the maximally supersymmetric case obtained by taking appropriate limits of each parameter, the perturbative part $\mathcal{Z}_{\rm pert}^{( 1 )} \mathcal{Z}_{\rm pert}^{( 2 )} \mathcal{Z}_{\rm pert}^{( 3 )}$ reduces to \eqref{CSMMpf} and
\begin{align}
\mathcal{Z}_{\rm inst}^{( 1 )} \mathcal{Z}_{\rm inst}^{( 2 )} \mathcal{Z}_{\rm inst}^{( 3 )}
\to e^{\frac{N \pi^{2}}{3 \beta}} \prod_{n = 1}^{\infty} \left( 1 - e^{- \frac{8 \pi^{2} n}{\beta}} \right)^{- N}
= \eta ( e^{- \frac{8 \pi^{2}}{\beta}} )^{- N}.
\end{align}
Thus, the instanton factor in MSYM is just a constant independent of the integration variables and does not affect the expectation value, since it is canceled by the normalization factor in \eqref{WLR}.
\subsection{Symmetric representation}
Let us consider the symmetric representation $S_{k}$, whose rank $k$ is $O(N)$.
The trace in $S_{k}$ is expressed as
\begin{eqnarray}
\text{Tr}_{S_{k}} e^{N \nu} = \sum_{1 \leq i_{1} \leq \cdots \leq i_{k} \leq N} \exp \left[ N \sum_{l = 1}^{k} \nu_{i_{l}} \right]. \label{symmetric}
\end{eqnarray}
Although \eqref{symmetric} includes various contributions in the summation, the largest one comes from the term with $i_{1} = \cdots = i_{k} = 1$, i.e., $\nu_{i_{1}} = \cdots = \nu_{i_{k}} = \nu_{1}$. Therefore, the leading contribution to the expectation value is given by
\begin{eqnarray}
\WLS \sim \int \prod_{i = 1}^{N} d \nu_{i} \exp \left[ - \frac{N^{2}}{\beta} \sum_{i = 1}^{N} \nu_{i}^{2} + \frac{N}{2} \sum_{i, j, i \neq j} \left| \nu_{i} - \nu_{j} \right| + N k \nu_{1} \right]. \label{WLS}
\end{eqnarray}
We again obtain $\nu_{1}$ from the saddle point equation
\begin{eqnarray}
0 = - \frac{2 N^{2}}{\beta} \nu_{1} + N \sum_{j=2}^{N} ( + 1 ) + N k.
\end{eqnarray}
Hence,
\begin{eqnarray}
\nu_{1} = \frac{\beta}{2 N} ( N + k ). \label{nu1s}
\end{eqnarray}
Substituting it back into \eqref{WLS}, the leading $k$-dependent contribution becomes
{\set
\begin{eqnarray}
\WLS
&\sim& \left. \exp \left[ - \frac{N^{2}}{\beta} \sum_{i = 1}^{N} \nu_{i}^{2} + \frac{N}{2} \sum_{i, j, i \neq j} \left| \nu_{i} - \nu_{j} \right| + N k \nu_{1} \right] \right|_{\rm saddle\ point} \non \\
&\sim& \left. \exp \left[ - \frac{N^{2}}{\beta} \nu_{1}^{2} + N \sum_{j = 2}^{N} \left| \nu_{1} - \nu_{j} \right| + N k \nu_{1} + ( \mbox{terms independent of } k ) \right] \right|_{\rm saddle\ point} \non \\
&\sim& \exp \left[ \frac{\beta}{2} N k \left(1 + \frac{k}{2N} \right) \right]. \label{WLs}
\end{eqnarray} }%
Here we use the fact that $\WLS=1$ when $k=0$.
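Indeed, since the remaining eigenvalues stay at their $k = 0$ positions, with $\sum_{j \geq 2} \nu_{j} \simeq 0$ by symmetry and $N \sum_{j \geq 2} | \nu_{1} - \nu_{j} | \simeq N^{2} \nu_{1}$, the $\nu_{1}$-dependent part of the exponent evaluates at leading order to
\begin{eqnarray}
- \frac{N^{2}}{\beta} \nu_{1}^{2} + N^{2} \nu_{1} + N k \nu_{1}
= \frac{\beta}{4} ( N + k )^{2}
= \frac{\beta}{4} N^{2} + \frac{\beta}{2} N k \left( 1 + \frac{k}{2 N} \right), \nonumber
\end{eqnarray}
and subtracting the $k = 0$ value yields \eqref{WLs}.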
This expression \eqref{WLs} reproduces the result of the fundamental case when $k =1$ \cite{Kim:2012qf, Minahan:2013jwa}. The same result as \eqref{WLs} is also obtained by substituting $n=1,\ m=k$ in \eqref{WRect} or \eqref{WRectSU}. This result \eqref{WLs} is compared to the result on the gravity side in the next section.
\subsection{Anti-symmetric representation}
We turn to calculating the expectation value of the Wilson loop in the anti-symmetric representation $A_{k}$ with $k=O(N)$ boxes in the Young diagram. The trace in this representation is written as
\begin{eqnarray}
\text{Tr}_{A_{k}} e^{N \nu} = \sum_{1 \leq i_{1} < \cdots < i_{k} \leq N} \exp \left[ N \sum_{l = 1}^{k} \nu_{i_{l}} \right].
\end{eqnarray}
The largest contribution in the large $N$ limit comes from $i_{l} = l$ because of our ordering $\nu_{1} > \nu_{2} > \cdots > \nu_{N}$; namely, the leading contribution to \eqref{WLR} is
\begin{eqnarray}
\WLA \sim \int \prod_{i = 1}^{N} d \nu_{i} \exp \left[ - \frac{N^{2}}{\beta} \sum_{i = 1}^{N} \nu_{i}^{2} + \frac{N}{2} \sum_{i, j, i \neq j} \left| \nu_{i} - \nu_{j} \right| + N \sum_{l = 1}^{k} \nu_{l} \right]. \label{WLA}
\end{eqnarray}
Since this insertion does not change the eigenvalue distribution, we find, using \eqref{nui},
{\set
\begin{eqnarray}
\WLA
&\sim& \left. \exp \left[ N \sum_{l = 1}^{k} \nu_{l} \right] \right|_{\rm saddle\ point}
= \exp \left[ \frac{\beta}{2} \sum_{l = 1}^{k} \left( N - 2 l \right) \right] \non \\
&=& \exp \left[ \frac{\beta}{2} \left( N k - k ( k + 1 ) \right) \right]
\sim \exp \left[ \frac{\beta}{2} Nk \left( 1 - \frac{k}{N} \right) \right]. \label{WLa}
\end{eqnarray} }The expression is invariant under the exchange of $k$ and $(N - k)$, as expected, and reproduces the result of the fundamental case when $k =1$ \cite{Kim:2012qf, Minahan:2013jwa}.
The same result as \eqref{WLa} is also obtained by substituting $n=k,\ m=1$ in \eqref{WRect} or \eqref{WRectSU}. This result \eqref{WLa} is compared to the result on the gravity side in the next section.
\section{Probe M5-branes in 11D supergravity} \label{M5}
Let us now turn to the holographic description of the Wilson surfaces. An M2-brane wrapping $\mathrm{AdS}_{3}$ is the gravity dual to the Wilson surface in fundamental representation \cite{Berenstein:1998ij, Corrado:1999pi, Young:2011aa, Kim:2012qf, Minahan:2013jwa}. On the other hand, probe M5-branes are better descriptions for the Wilson loops in large-rank symmetric or anti-symmetric representations \cite{Lunin:2007ab,Chen:2007ir,Chen:2007zzr,Chen:2008ds,D'Hoker:2008qm}, and we employ this probe M5-brane description in this paper.
\subsection{Supergravity background}
We take the following forms for the AdS radius $L$ and the M5-brane tension $T_{5}$, as in \cite{Maldacena:1997re}:
\begin{eqnarray}
L = 2 \left( \pi N \right)^{\frac{1}{3}} \ell_{\mathrm{P}}, \hspace{2em}
T_{5} = \frac{1}{( 2 \pi )^{5} \ell_{\mathrm{P}}^{6}},
\end{eqnarray}
where $\ell_{\mathrm{P}}$ is the 11-dimensional Planck length.
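For later convenience, note that the combination of these quantities appearing in the probe M5-brane actions below is
\begin{eqnarray}
T_{5} L^{6} = \frac{2^{6} \pi^{2} N^{2} \ell_{\mathrm{P}}^{6}}{( 2 \pi )^{5} \ell_{\mathrm{P}}^{6}} = \frac{2 N^{2}}{\pi^{3}}. \nonumber
\end{eqnarray}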
The metric of Euclidean $\mathrm{AdS}_7\times S^4$ is written in terms of the global coordinates
\begin{equation}
\begin{aligned}
d s^{2} &= L^{2} \left( \cosh^{2} \rho d \tau^{2} + d \rho^{2} + \sinh^{2} \rho d \Omega_{5}^{2} \right) + \frac{L^{2}}{4} d \Omega_{4}^{2}, \\
d \Omega_{5}^{2} &= d \eta^{2} + \sin^{2} \eta d \phi^{2} + \cos^{2} \eta d \Omega_{3}^{2}, \\
d \Omega_{4}^{2} &= d \theta^{2} + \sin^{2} \theta d \tilde{\Omega}_{3}^{2}, \\%3
&\rho\ge 0,\quad 0\le\phi<2\pi,\quad 0\le\eta\le \frac{\pi}{2},\quad 0\le \theta \le \pi,
\end{aligned} \label{global}
\end{equation}
where $d \Omega_{3}^{2}$ and $d \tilde{\Omega}_{3}^{2}$ are the metrics of the unit $S^3$ and $\tilde{S}^3$, respectively. In order to make the boundary $S^1\times S^5$, we compactify the $\tau$ direction (see Fig. \ref{bdy1}) as
\begin{align}
\tau \sim \tau + \frac{2 \pi R_{6}}{r}. \label{identification}
\end{align}
To be precise, the identification \eqref{identification} is accompanied by a rotation along the isometry of the $\tilde{S}^3$ direction in order to compare with the result from 5D MSYM \cite{Kim:2012ava,Kim:2012qf}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=9cm,trim=50 80 50 80,clip]{Bdy1.pdf}
\caption{The boundary of $\mathrm{AdS}_{7}$ in the global coordinates. The radii of $S^{1}$ and $S^{5}$ on the boundary are $R_{6}$ and $r$, respectively.}
\label{bdy1}
\end{center}
\end{figure}
Another convenient set of coordinates is the $\mathrm{AdS}_3\times S^3$ foliation. In these coordinates, the metric is expressed as
\begin{eqnarray}
\begin{aligned}
d s^{2} &= L^{2} \left( \cosh^{2} u d \check{\Omega}_{3}^{2} + d u^{2} + \sinh^{2} u d \Omega_{3}^{2} \right) + \frac{L^{2}}{4} d \Omega_{4}^{2}, \\
d \check{\Omega}_{3}^{2} &= \cosh^{2} w d \tau^{2} + d w^{2} + \sinh^{2} w d \phi^{2},
\end{aligned} \label{bubbling}
\end{eqnarray}
where $(u,w)$ are related to $(\rho,\eta)$ as
\begin{align}
&\sinh u=\sinh\rho\cos\eta,\\
&\tanh w=\tanh \rho \sin \eta.
\end{align}
We denote the vielbein of the spacetime by $E^{a}$ and split the components as follows: $( E^{0}, E^{1}, E^{2} )$ for $\mathrm{AdS}_{3}$, $E^3=Ldu$, $( E^{4}, E^{5}, E^{6} )$ for the $S^{3}$ belonging to $\mathrm{AdS}_{7}$, $E^7=Ld\theta$, and $(E^{8}, E^{9}, E^{\natural} )$ ($\natural = 10$) for $\tilde {S}^{3}$ in $S^4$.
The supergravity in 11 dimensions contains the 4-form field strength $B_{4}$ as a bosonic field besides the metric. When the background geometry is $\mathrm{AdS}_{7} \times S^{4}$, the 4-form field strength $B_{4}$ is given by
\begin{eqnarray}
B_{4} = \frac{6}{L} E^{7 8 9 \natural},
\end{eqnarray}
where we abbreviated $E^{a_{1}} \!\wedge\! \cdots \!\wedge\! E^{a_{p}}$ as $E^{a_{1} \cdots a_{p}}$. In the following sections, all indices of field variables represent the ones in the local Lorentz frame.
\subsection{M5-brane wrapping $\mathrm{AdS}_{3} \times S^{3}$}
Here we consider an M5-brane wrapping $\mathrm{AdS}_3\times S^3$. In this calculation, we should carefully introduce the boundary term of the M5-brane action. Let us first consider the boundary term for the plane Wilson surface in $\mathbb{R}^6$ for simplicity. It is convenient to introduce the Poincar\'e coordinates
\begin{align}
d s^{2} =& \frac{L^{2}}{y^{2}} \left( d y^{2} + d r_{1}^{2} + r_{1}^{2} d \phi^{2} + d r_{2}^{2} + r_{2}^{2} d \Omega_{3}^{2} \right) + \frac{L^{2}}{4} d \Omega_{4}^{2},\nonumber\\
& y>0,\quad r_1,r_2\ge 0.
\label{Poincare}
\end{align}
The plane Wilson surface is located at $r_2=0, y\to 0$. We denote one of the worldvolume coordinates on the M5-brane by $\lambda$, and take the ansatz
\begin{eqnarray}
r_{2} = \kappa y, \hspace{2em}
y = y ( \lambda ),
\end{eqnarray}
where $\kappa$ is a constant. The induced metric is given by
\begin{eqnarray}
\begin{aligned}
d s_{\rm ind}^{2} &= \frac{L^{2}}{y^{2}} \left[ \left( 1 + \kappa^{2} \right) y'^2 d \lambda^{2} + d r_{1}^{2} + r_{1}^{2} d \phi^{2} + ( \kappa y )^{2} d \Omega_{3}^{2} \right], \\
\sqrt{g_{\rm ind}} &= \frac{\kappa^{3} L^{6}}{y^{3}} | y' | r_{1} \sqrt{1 + \kappa^{2}} \sqrt{g_{S^{3}}},
\end{aligned}
\end{eqnarray}
where $y':=dy/d\lambda$, and $g_{S^3}$ is the determinant of the metric of unit $S^3$.
Since the submanifold totally belongs to $\mathrm{AdS}_{7}$, we take account of the 7-form field strength $B_{7}$ which is the Hodge dual to $B_{4}$,
{\set
\begin{eqnarray}
B_{7}
&=& \ast B_{4} \non \\
&=& \frac{6}{L} E^{0 1 2 3 4 5 6} \non \\
&=& \frac{6 L^{6}}{y^{7}} r_{1} r_{2}^{3} d y \!\wedge\! d r_{1} \!\wedge\! d \phi \!\wedge\! d r_{2} \!\wedge\! \omega_{3},
\end{eqnarray}}%
where $\omega_3$ is the volume form of the unit $S^3$. $B_{7}$ can be written in the following form in terms of the background gauge fields $C_{3}$ and $C_{6}$, so as to satisfy the equation of motion for $B_{4}$:
\begin{eqnarray}
B_{7} = d C_{6} + \frac{1}{2} C_{3} \!\wedge\! d C_{3}.
\end{eqnarray}
Since $C_{3} \!\wedge\! d C_{3} = 0$, we choose the gauge in which $C_{6}$ is given by
{\set
\begin{eqnarray}
C_{6}
&=& - \frac{L^{6}}{y^{6}} r_{1} r_{2}^{3} d r_{1} \!\wedge\! d \phi \!\wedge\! d r_{2} \!\wedge\! \omega_{3} \non \\
&=& \frac{\kappa^{4} L^{6}}{y^{3}} r_{1} y' d r_{1} \!\wedge\! d \phi \!\wedge\! \omega_{3}\!\wedge\! d\lambda. \label{c6}
\end{eqnarray} }%
There is a 2-form gauge field $A_{2}$ on the M5-brane; let us define $F_3=dA_2$ and $H_3=F_3-C_3$. Notice that $C_3=0$ on this M5-brane worldvolume. The flux quantization condition \eqref{fq} implies
{\set
\begin{eqnarray}
H_{3}
&=& \frac{k}{2 N} L^3\omega_3 =\frac{k}{2 N} \frac{y^{3}}{r_{2}^{3}} E^{4 5 6} \non \\[.5em]
\Rightarrow H_{4 5 6}
&=& \frac{k}{2 N \kappa^{3}}.
\end{eqnarray} }%
We use the gauge symmetry \eqref{gauge2} and set
\begin{align}
H_{012}=0. \label{H012}
\end{align}
Actually, the final result is independent of this gauge choice as long as we use the Legendre transformation prescription for the 2-form gauge field as in \cite{Drukker:1999zq,Drukker:2005kx}. In order to determine the field strength $\tilde{H}_{3}$ dual to $H_{3}$, we must fix an auxiliary field $a$ which makes the action covariant (see Appendix \ref{PSTaction}). Throughout the rest of this paper, we use
\begin{eqnarray}
a = \phi \label{afix},
\end{eqnarray}
then
\begin{eqnarray}
v_{2} = 1.
\end{eqnarray}
The only component of $\tilde{H}^{a b}$ that survives the gauge fixing \eqref{afix} is
\begin{eqnarray}
\tilde{H}^{0 1} = H_{4 5 6}. \label{dh3}
\end{eqnarray}
Since the PST action \eqref{PST} is originally defined in the Lorentzian background, we make the Wick rotation $\tilde{H}_{t 1} = i \tilde{H}_{\tau 1}$. Accordingly, the PST action \eqref{PST} with nonzero $C_{6}$ becomes
{\set
\begin{eqnarray}
S_{\rm M5}
&=& T_{5} \int d^{6} \zeta \sqrt{g_{\rm ind}} \sqrt{\det \left( \delta_{m}^{\ n} + i \tilde{H}_{m}^{\ n} \right)}
+ T_{5} \int C_{6} \non \\[.5em]
&=& \mathcal{K} \int_{\lambda_{\rm min}}^{\lambda_{0}} d \lambda \frac{| y' |}{y^{3}} \left[ \sqrt{\left( 1 + \kappa^{2} \right) \left( \kappa^{6} + \left( \frac{k}{2 N} \right)^{2} \right)}
- \kappa^{4} \right],
\end{eqnarray} }where
\begin{eqnarray}
\mathcal{K} := 2 \pi^{2} T_{5} L^{6} \int_{0}^{2 \pi} d \phi \int_{0}^{\infty} d r_{1} r_{1}.
\end{eqnarray}
We assume $y'<0$ and introduce a cutoff $\lambda_{0}$ and a lower bound $\lambda_{\rm min}$. The equation of motion for $\kappa$ is
{\set
\begin{eqnarray}
0
&=& \frac{d}{d \kappa} \left[ \sqrt{\left( 1 + \kappa^{2} \right) \left( \kappa^{6} + \left( \frac{k}{2 N} \right)^{2} \right)}
- \kappa^{4} \right] \non \\[.5em]
&=& \frac{\left( \frac{k}{2 N} \right)^{2} \kappa + 3 \kappa^{5} + 4 \kappa^{7}}{\sqrt{\left( 1 + \kappa^{2} \right) \left( \kappa^{6} + \left( \frac{k}{2 N} \right)^{2} \right)}}
- 4 \kappa^{3},
\end{eqnarray} }%
hence $\kappa$ is related to $k$ by
\begin{eqnarray}
\kappa = \sqrt{\frac{k}{2 N}}.
\end{eqnarray}
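This extremization can also be checked symbolically; the following minimal sympy sketch (with $a = k/2N$) verifies both the solution and the on-shell value used below:
\begin{verbatim}
# Symbolic check (a = k/(2N)): kappa = sqrt(a) extremizes the
# bracket in the action, and its on-shell value is a = k/(2N).
import sympy as sp

kappa, a = sp.symbols('kappa a', positive=True)
g = sp.sqrt((1 + kappa**2) * (kappa**6 + a**2)) - kappa**4
print(sp.simplify(sp.diff(g, kappa).subs(kappa, sp.sqrt(a))))  # -> 0
print(sp.simplify(g.subs(kappa, sp.sqrt(a))))                  # -> a
\end{verbatim}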
We can rewrite the action with this relation as
\begin{eqnarray}
S_{\rm M5} = \mathcal{K} \frac{k}{2 N} \int_{\lambda_{\rm min}}^{\lambda_{0}} d \lambda \frac{| y' |}{y^{3}}.
\end{eqnarray}
Furthermore, we replace the bulk direction $y$ with $z$ such that
\begin{eqnarray}
z = \frac{1}{y^{2}}.
\end{eqnarray}
Because $z ( \lambda_{\rm min} ) = 0$ in the new coordinate, the PST action is given by
{\set
\begin{eqnarray}
S_{\rm M5}
&=& \frac{k}{4 N} \mathcal{K} \int_{\lambda_{\rm min}}^{\lambda_{0}} d \lambda z' = : \int_{\lambda_{\rm min}}^{\lambda_{0}} d \lambda \mathcal{L} \non \\[.5em]
&=& \frac{k}{4 N} \mathcal{K} z_{0}, \label{ppst}
\end{eqnarray} }where a new cutoff is defined as $z_{0} := z ( \lambda_{0} )$. Following the Legendre transformation prescription, we should impose a boundary condition on the conjugate momentum $P_{z}$ of $z$.\footnote{The coordinate $z$ is identified, up to a constant factor, with the radial coordinate of the asymptotically flat supergravity solution of M5-branes before taking the near horizon limit. Thus, this Legendre transformation is the analogue of the one in the Wilson loop case \cite{Drukker:1999zq,Drukker:2005kx}.}
We impose the condition that the variation of $P_{z}$ vanishes on the boundary,
\begin{eqnarray}
\delta P_{z} |_{\rm bdy} = 0.
\end{eqnarray}
The conjugate momentum is given by
\begin{eqnarray}
P_{z} = \frac{\partial \mathcal{L}}{\partial z'} = \frac{k}{4 N} \mathcal{K},
\end{eqnarray}
and the boundary term can be written as
\begin{eqnarray}
S_{\rm bdy} = - P_{z} z_{0}. \label{boundary-term}
\end{eqnarray}
Adding this boundary term to the original action, the regularized action $S_{\rm M5}^{\rm reg}$ becomes
\begin{eqnarray}
S_{\rm M5}^{\rm reg} = S_{\rm M5} + S_{\rm bdy} = 0.
\end{eqnarray}
Thus, the expectation value for the M5-brane is 1. This result is expected since the plane Wilson surface preserves a part of the Poincar\'e supersymmetry. The boundary term \eqref{boundary-term} is proportional to the volume of the boundary, including the finite contribution. We therefore conclude that the boundary counterterm is proportional to the volume of the boundary with the gauge choice \eqref{c6} and \eqref{H012}.
Let us move on to the Wilson surface wrapping $S^1\times S^1$. It is convenient to use the $\mathrm{AdS}_3\times S^3$ foliation coordinates \eqref{bubbling} with the identification \eqref{identification}.
They are related by the coordinate transformation:
\begin{eqnarray}
\begin{aligned}
y &= \frac{e^{\tau}}{\cosh u \cosh w}, \\[.5em]
r_{1} &= e^{\tau} \tanh w, \\[.5em]
r_{2} &= \frac{e^{\tau} \tanh u}{\cosh w}.
\end{aligned} \label{PtoGlobal}
\end{eqnarray}
The M5-brane wraps the $\mathrm{AdS}_3\times S^3$ at $u=u_k=(\text{constant})$. From \eqref{PtoGlobal} we find $r_{2}/y = \sinh u$, so $\kappa$ is related to $u_k$ as
\begin{eqnarray}
\kappa = \sinh u_{k}.
\end{eqnarray}
Similarly, $C_{6}$ on the worldvolume is given by
\begin{eqnarray}
C_{6} = - L^{6} \cosh^{2} u_{k} \sinh^{4} u_{k} \cosh w \sinh w d \tau \!\wedge\! d w \!\wedge\! d \phi \!\wedge\! \omega_{3}.
\end{eqnarray}
In addition, we must use the flux quantization condition in these coordinates, namely, $H_{3}$ is given by
{\set
\begin{eqnarray}
H_{3}
&=& \frac{k}{2 N \sinh^{3} u_{k}} E^{4 5 6} \non \\
\Rightarrow H_{4 5 6}
&=& \frac{k}{2 N \sinh^{3} u_{k}}.
\end{eqnarray} }On the other hand, \eqref{dh3} remains intact. Putting it all together, we can compute the PST action in these coordinates,
{\set
\begin{eqnarray}
S_{\rm M5}
&=& T_{5} \int L^{6} \omega_{6} \cosh^{3} u_{k} \sinh^{3} u_{k} \sqrt{1 + \left( H_{4 5 6} \right)^{2}} \non \\[.5em]
&& - T_{5} L^{6} \int \cosh^{2} u_{k} \sinh^{4} u_{k} \cosh w \sinh w d \tau \!\wedge\! d w \!\wedge\! d \phi \!\wedge\! \omega_{3} \non \\[.5em]
&=& \frac{2 \pi R_{6}}{r} k \left( 2 N + k \right) \sinh^{2} w_{0},
\end{eqnarray} }where $\omega_{6}$ is the volume form of unit $\mathrm{AdS}_{3} \times S^{3}$ and $w_{0}$ is a cutoff.
Since the boundary term is proportional to the volume of the boundary and cancels the divergence,
it is given by
\begin{align}
S_{\rm bdy}=-\frac{2 \pi R_{6}}{r} k \left( 2 N + k \right) \sinh w_{0} \cosh w_0.
\end{align}
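In the limit $w_0 \to \infty$ the sum of the two terms stays finite, since
\begin{eqnarray}
\sinh^{2} w_{0} - \sinh w_{0} \cosh w_{0} = - \frac{1}{2} \left( 1 - e^{- 2 w_{0}} \right) \longrightarrow - \frac{1}{2}. \nonumber
\end{eqnarray}
The same elementary identity will be used for the $\rho_{0} \to \infty$ limit in the next subsection.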
The regularized PST action $S_{\rm M5}^{\rm reg}$ is obtained in the limit $w_0\to \infty$ as
{\set
\begin{eqnarray}
S_{\rm M5}^{\rm reg}
&=& S_{\rm M5} + S_{\rm bdy} \non \\[.5em]
&=& - \frac{\pi R_{6}}{r} k \left( 2 N + k \right) \non \\[.5em]
&=& - \frac{\beta}{2} N k \left( 1 + \frac{k}{2N} \right).
\end{eqnarray} }Finally, the expectation value of the Wilson surface for the M5-brane wrapping $\mathrm{AdS}_{3} \times S^{3}$ is given by
\begin{eqnarray}
\exp \left[ - S_{\rm M5}^{\rm reg} \right] = \exp \left[ \frac{\beta}{2} N k \left( 1 + \frac{k}{2N} \right) \right].
\end{eqnarray}
This result completely matches the value of the Wilson surface in the symmetric representation \eqref{WLs}. As a result, we obtain nontrivial support for the $\mathrm{AdS}_{7}$/CFT$_{6}$ correspondence.
\subsection{M5-brane wrapping $\mathrm{AdS}_{3} \times \tilde{S}^{3}$}
In this section we consider a probe M5-brane wrapping $\mathrm{AdS}_{3} \times \tilde{S}^{3}$. Here $\mathrm{AdS}_{3}$ is a minimal surface in $\mathrm{AdS}_{7}$, while $\tilde{S}^{3}$ is included in $S^{4}$. It is convenient to use the global coordinates \eqref{global}. We take the ansatz
\begin{align}
\eta=\pi/2,\quad \theta=\theta_k=(\text{constant}).
\end{align}
The induced metric on the M5-brane is given by
\begin{eqnarray}
\begin{aligned}
d s_{\rm ind}^{2} &= L^{2} \left( \cosh^{2} \rho d \tau^{2} + d \rho^{2} + \sinh^{2} \rho d \phi^{2} \right) + \frac{L^{2}}{4} \sin^{2} \theta_{k} d \tilde{\Omega}_{3}^{2}, \\
\sqrt{g_{\rm ind}} &= \frac{L^{6}}{8} \cosh \rho \sinh \rho \sin^{3} \theta_{k} \sqrt{g_{\tilde{S}^{3}}},
\end{aligned} \label{induced1}
\end{eqnarray}
where the constant $\theta_{k}$ is associated with the integer $k$ parametrizing the flux quantization condition (see Appendix \ref{Flux}).
$B_{4}$ can also be written as the exterior derivative of $C_{3}$; in the global coordinates we have
{\set
\begin{eqnarray}
B_{4} = d C_{3}
&=& \frac{6}{L} E^{7 8 9 \natural} \non \\[.5em]
&=& \frac{3}{8} L^{3} \sin^{3} \theta d \theta \!\wedge\! \tilde{\omega}_{3},
\end{eqnarray} }where $\tilde{\omega}_{3}$ is the volume form of the unit $\tilde{S}^{3}$. Integrating this over $\theta$, $C_{3}$ is obtained as
{\set
\begin{eqnarray}
C_{3}
&=& - \frac{L^{3}}{8} \left( 3 \cos \theta - \cos^{3} \theta - 2 \right) \tilde{\omega}_{3} \\[.5em]
&=:& - L^{3} f ( \theta ) \tilde{\omega}_{3}.
\end{eqnarray} }
We choose the gauge in which $C_{3} = 0$ at $\theta = 0$ because $\tilde{S}^{3}$ shrinks at that point. Combining it with the flux quantization condition \eqref{fq}, the 3-form field strength $H_{3}$ is
{\set
\begin{eqnarray}
H_{3}
&=& F_{3} - C_{3} \non \\[.5em]
&=& \left( \frac{k}{2 N} + f ( \theta_{k} ) \right) L^{3} \tilde{\omega}_{3} \non \\[.5em]
&=& \left( \frac{k}{2 N} + f ( \theta_{k} ) \right) \frac{8}{\sin^{3} \theta_{k}} E^{8 9 \natural} \non \\[.5em]
\Rightarrow H_{8 9 \natural}
&=& \left( \frac{k}{2 N} + f ( \theta_{k} ) \right) \frac{8}{\sin^{3} \theta_{k}}. \label{H1}
\end{eqnarray} } The surviving component of $\tilde{H}^{a b}$ is
\begin{eqnarray}
\tilde{H}^{0 1} = H_{8 9 \natural}. \label{dualH1}
\end{eqnarray}
We choose the gauge $H_{012}=0$ again. Moreover, we have $C_{6}= 0$ on the worldvolume and $C_{3} \!\wedge\! H_{3} = 0$ because both are proportional to the volume form of $\tilde{S}^{3}$. Thus, the remaining part of the action is
{\set
\begin{eqnarray}
S_{\rm M5}
&=& T_{5} \int d^{6} \zeta \sqrt{g_{\rm ind}} \sqrt{\det \left( \delta_{m}^{\ n} + i \tilde{H}_{m}^{\ n} \right)} \non \\[.5em]
&=& T_{5} \int d^{6} \zeta \frac{L^{6}}{8} \cosh \rho \sinh \rho \sin^{3} \theta_{k} \sqrt{g_{\tilde{S}^{3}}} \sqrt{1 + \left( H_{8 9 \natural} \right)^{2}} \non \\[.5em]
&=& T_{5} \frac{\pi^{2} L^{6}}{4} \int d^{3} \zeta \cosh \rho \sinh \rho \sqrt{\sin^{6} \theta_{k} + 64 \left( \frac{k}{2 N} + f ( \theta_{k} ) \right)^{2}}.
\end{eqnarray} }%
Next we solve the equation of motion for $\theta_{k}$ to obtain the on-shell value. It reads
{\set
\begin{eqnarray}
0
&=& \frac{d}{d \theta_{k}} \left[ \sin^{6} \theta_{k} + 64 \left( \frac{k}{2 N} + f ( \theta_{k} ) \right)^{2} \right] \non \\[.5em]
&=& 6 \sin^{3} \theta_{k} \left[ - 2 \cos \theta_{k} - \frac{4 k}{N} + 2 \right],
\end{eqnarray} }then we have the relation between $\theta_{k}$ and $k$ as
\begin{eqnarray}
\cos \theta_{k} = 1 - \frac{2 k}{N}.
\end{eqnarray}
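Both this relation and the on-shell value of the square root can be verified symbolically; a minimal sympy sketch (with $t = k/N$) is
\begin{verbatim}
# Symbolic check (t = k/N): cos(theta_k) = 1 - 2t extremizes the
# expression under the square root, whose on-shell value is then
# sqrt(16 t^2 (1 - t)^2) = 4 t (1 - t) for 0 <= t <= 1.
import sympy as sp

th, t = sp.symbols('theta t', positive=True)
f = sp.Rational(1, 8) * (3 * sp.cos(th) - sp.cos(th)**3 - 2)
S = sp.sin(th)**6 + 64 * (t / 2 + f)**2
thk = sp.acos(1 - 2 * t)
print(sp.simplify(sp.diff(S, th).subs(th, thk)))   # -> 0
print(sp.factor(sp.simplify(S.subs(th, thk))))     # -> 16*t**2*(t - 1)**2
\end{verbatim}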
Substituting this into the action, we obtain
{\set
\begin{eqnarray}
S_{\rm M5}
&=& T_{5} \frac{\pi^{2} L^{6}}{4} \frac{4 k}{N} \left( 1 - \frac{k}{N} \right) \int_{0}^{\rho_{0}} d \rho \int_{0}^{\frac{2 \pi R_{6}}{r}} d \tau \int_{0}^{2 \pi} d \phi \cosh \rho \sinh \rho \non \\[.5em]
&=& \frac{4 \pi R_{6}}{r} k ( N - k ) \sinh^{2} \rho_{0},
\end{eqnarray} }where $\rho_{0}$ is a cutoff.
The boundary term $S_{\rm bdy}$ is again proportional to the volume of the boundary and given by
\begin{align}
S_{\rm bdy}=-\frac{4 \pi R_{6}}{r} k ( N - k ) \sinh \rho_{0} \cosh \rho_0.
\end{align}
We take the limit $\rho_{0} \rightarrow \infty$, and obtain {\set
\begin{eqnarray}
S_{\rm M5}^{\rm reg}
&=& S_{\rm M5} + S_{\rm bdy} \non \\[.5em]
&=& - \frac{2 \pi R_{6}}{r} k ( N - k ) \non \\[.5em]
&=& - \frac{\beta}{2} Nk \left( 1 - \frac{k}{N} \right).
\end{eqnarray} }The expectation value for the M5-brane wrapping $\mathrm{AdS}_{3} \times \tilde{S}^{3}$ results in
\begin{eqnarray}
\exp \left[ - S_{\rm M5}^{\rm reg} \right] = \exp \left[ \frac{\beta}{2} Nk \left( 1 - \frac{k}{N} \right) \right].
\end{eqnarray}
It perfectly agrees with the Wilson surface in the anti-symmetric representation \eqref{WLa}; hence, this gives further strong support for the $\mathrm{AdS}_{7}$/CFT$_{6}$ correspondence.
\subsection*{Acknowledgements}
We would like to thank Yuhma Asano, Koji Hashimoto, Masazumi Honda, Kazuo Hosomichi, Yosuke Imamura, Hiroshi Isono, Johan K\"all\'en, Yoichi Kazama, Shota Komatsu, Sanefumi Moriyama, Tomoki Nosaka, Takuya Okuda, Yuji Okawa, Akinori Tanaka, and Seiji Terashima for discussions and comments. The work of H.M. was supported in part by the JSPS Research Fellowship for Young Scientists. The work of S.Y. was supported in part by JSPS KAKENHI Grant No. 22740165.
\section{Introduction}
As reported in Monteiro {\em et al.} (2006, and references therein),
the stellar models
provided by the codes CESAM2k (Morel, 1997) and CLES (Scuflaire {\em et al.}, 2007a)
for a given set of standard input physics, differ by less than 0.5\%.
At variance with previous comparisons, in this new ESTA-TASK3 we deal with
stellar models that include microscopic diffusion.
The treatment of the microscopic diffusion process in the evolution codes
we test here is not exactly the same.
The CLES code computes the diffusion coefficients by solving the
Burgers' equations (Burgers, 1969) with the formalism developed in
Thoul {\em et al.} (1994, thereafter TBL94). CESAM2k provides two
approaches to compute diffusion velocities: one (which we will call CESAM2k~MP)
is based on the Michaud \& Proffitt (1993) approximation,
the other (hereafter CESAM2k~B) is based on the Burgers' formalism, with
collision integrals derived from Paquette {\em et al.} (1986).
We will compare three sets of models: Task3.1 (1.0~\mbox{${\rm M}_{\odot}$}), Task3.2 (1.2~\mbox{${\rm M}_{\odot}$})
and Task3.3 (1.3~\mbox{${\rm M}_{\odot}$}), whose input parameters and physics specifications
are described in Lebreton (2007).
In the next sections we will present the results of comparing the stellar
models calculated by CLES, CESAM2k~MP and CESAM2k~B
for the three sets of models, and we will try to identify the origin of
the differences we find.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.45]{./figs/task3.HR.eps}
\caption{Evolutionary tracks corresponding to 1.0, 1.2 and 1.3~\mbox{${\rm M}_{\odot}$}. Solid lines: CLES models
with TBL94 diffusion algorithm.
Dotted-lines: CESAM2k with Burgers' equations and Paquette {\em et al.} (1986) collision
integrals. Dashed-lines: CESAM2k with Michaud \& Proffitt (1993) approach. The full-dots
along the evolutionary tracks correspond to models with $X_{\rm c}$=0.35 and 0.01, and with
a He-core mass $M_{\rm c}^{\rm He}=0.05\,M_{\star}$.}
\label{fighr}
\end{center}
\end{figure}
\section{Stellar structure and evolution}
For each Task3 case we select three evolutionary stages: A: a main sequence stage with a central
hydrogen content $X_{\rm c}=0.35$; B: a stage close to the core hydrogen exhaustion
$X_{\rm c}=0.01$, and C: a post-main sequence stage in which the mass of the helium
core (defined as the central region where the hydrogen mass fraction is $X\leq0.01$)
is $M_{\rm c}^{\rm He}=0.05\,M_{\ast}$.
CESAM stellar models have a number of mesh points between 2700 and 3100, depending on the
evolutionary stage, while CLES models have about 2400 mesh points. Moreover, for
all the models considered in these comparisons, the stellar structure ends at $T=$\mbox{${T}_{\rm eff}$}.
Concerning the time step, both codes take between 1000 and 1500 time steps (depending on the
stellar mass) to reach stage C, and
the specifications for the stages A, B and C
are achieved with a precision better than $1\times 10^{-4}$.
Fig.~\ref{fighr} displays, for each microscopic diffusion implementation,
the evolutionary tracks for Task3.1, 3.2 and 3.3, and the HR diagram location of the
target models A, B and C. For each stellar mass, the main sequence computed with CESAM2k~B is slightly
hotter (from $\sim 0.1$\% for Task3.1 to 0.3\% for Task3.2 and 3.3) than those calculated by
CESAM2k~MP and CLES.
Furthermore, CLES and CESAM2k~MP models are quite close ($\Delta R/R < 0.3$\%, and $\Delta L/L <0.4$\%)
with the exception of models in the second overall contraction phase, for which the differences
can reach 1\% in the stellar radius and 0.5\% in luminosity.
The fact that CESAM2k~B models are hotter than CESAM2k~MP and CLES ones
could suggest that the outer layer opacity for the former is lower than for
the latter because of a different content of hydrogen in their convective envelope.
The evolution of the helium abundance in the stellar convective envelope ($Y_{\rm S}$) is an eloquent
indicator of the microscopic diffusion effects. Fig.~\ref{figys} shows,
for each considered stellar mass and
diffusion treatment, the variation of $Y_{\rm S}$ as the central hydrogen content $X_{\rm c}$ decreases,
and reveals that the diffusion efficiency in CLES is always larger than in CESAM:
about 8, 10 and 20\% larger than in CESAM2k~MP for 1.0, 1.2 and 1.3~\mbox{${\rm M}_{\odot}$}\ respectively,
and 40\% larger than in CESAM2k~B for all stellar masses under consideration.
The irregular behaviour of the $Y_{\rm S}$ {\em vs.} $X_{\rm c}$ curves for Task3.2 and 3.3 is
a consequence of a semiconvection phenomenon that appears below the convective envelope,
and the longer main sequence of CESAM2k~MP models is probably due to semiconvection at the
border of the convective core (see next section).
\begin{figure}[t]
\begin{center}
\includegraphics[width=\textwidth]{./figs/figYs.eps}
\vspace*{-1cm}
\caption{Evolution of helium content in the convective envelope ($Y_{\rm S}$ {\em vs.} $X_{\rm c}$)
for 1.0~\mbox{${\rm M}_{\odot}$}\ (left panel), 1.2~\mbox{${\rm M}_{\odot}$}\ (central panel) and 1.3~\mbox{${\rm M}_{\odot}$}\ (right panel).}
\vspace*{-0.7cm}
\label{figys}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\vspace*{-0.5cm}
\includegraphics[angle=-90,width=\textwidth]{./figs/dlnc.eps}
\includegraphics[angle=-90,width=\textwidth]{./figs/dlng.eps}
\caption{Lagrangian differences of sound speed ($d\ln c$) (upper panels) and adiabatic
exponent ($d\ln\Gamma_1$) (lower panels) as a function of the normalised stellar radius for
models corresponding to task3.1 (left), task3.2 (centre), task3.3 (right). For each mass,
three evolutionary stages are considered: A, with $X_{\rm c}=0.35$ (solid lines), B, with
$X_{\rm c}=0.01$ (dotted lines), and C, with $M_c^{\rm He}=0.05 M_{\star}$ (dashed lines).}
\label{figdif}
\end{center}
\end{figure}
The internal structure at the given stages A, B and C can be studied by means of
the sound speed, $c$, and of the adiabatic exponent, $\Gamma_1$, variations.
The Lagrangian differences, $d\ln c$ and $d\ln \Gamma_1$, between CESAM2k (both B and MP) and CLES
models (calculated at the same mass by using the
ADIPLS package tools\footnote{http://astro.phys.au.dk/~jdc/adipack.n})
are plotted in Fig.~\ref{figdif} as a function of the normalised radius.
Note that the vertical scales in the $d\ln c$ and $d\ln\Gamma_1$ plots are, respectively,
five and three times smaller for 1.0~\mbox{${\rm M}_{\odot}$}\ than for 1.2 and 1.3~\mbox{${\rm M}_{\odot}$}.
The $d\ln c$ values reflect: {\em i}) the differences in stellar radius (note that the largest values are
reached in the Task3.2 B CLES--CESAM2k~MP comparison, for which $d\ln R$ is of the order of 0.01);
{\em ii}) the different chemical composition gradients below the convective envelope
(features between $r/R=0.6$ and 0.8), as well as differences in the location of convection region
boundaries (at $r/R\sim 0.05$ for the convective core in Task3.2 and 3.3).
\begin{figure}[b]
\begin{center}
\vspace*{-1cm}
\resizebox{\hsize}{!}{\includegraphics{./figs/freq3.2.eps}\includegraphics{./figs/freq3.3.eps}}
\caption{Plots of the frequency differences after removing the scaling due to stellar
radius, between models computed
with CLES and CESAM2k~B (dotted lines) and with CLES and CESAM2k~MP (solid lines).
For each couple CLES-CESAM there are
four curves that correspond to different degrees $\ell=0,1,2,3$.
Left panel: 1.2~\mbox{${\rm M}_{\odot}$}\ models with $X_{\rm c}=0.35$.
Right panel: like left panel for 1.3~\mbox{${\rm M}_{\odot}$}\ models.
}
\label{figfreq}
\end{center}
\end{figure}
The value of $\Gamma_1$ in the external regions is particularly sensitive to the He abundance.
Therefore, as one can see in the bottom panels of Fig.~\ref{figdif}, the variations $d\ln\Gamma_1$
are smaller for CESAM2k~MP--CLES comparisons than for CESAM2k~B--CLES ones, and
these differences increase with the mass of the model; these results are in good agreement with
what we would expect from $Y_{\rm S}$ curves in Fig.~\ref{figys}.
To clarify how all these differences affect the seismic properties of the models, we
compute by means of the adiabatic seismic code LOSC (Scuflaire {\em et al.}, 2007b) the frequencies
of oscillations of all the models at the evolutionary stage A (main sequence models).
In Fig.~\ref{figfreq} the frequency differences between CLES and CESAM models of 1.2 (left panel) and
1.3~\mbox{${\rm M}_{\odot}$}\ (right panel) are shown for p-modes with degrees $\ell=$0, 1, 2, 3.
The similar behaviour of curves with different degree indicates that the observed
frequency differences reflect mainly the near surface difference of the models.
In particular, the oscillatory component in CLES-CESAM2k~B frequency differences is
the characteristic signature of the different He content in the convective envelope.
Note that the vertical scale in both panels is not the same, and that the amplitude of the
oscillatory component is related to the difference of surface He content.
Comparisons for 1.0~\mbox{${\rm M}_{\odot}$}\ models showed frequency differences of about 0.4~$\mu$Hz.
\begin{figure}
\begin{center}
\vspace*{-0.75cm}
\includegraphics[width=\textwidth]{./figs/ce_cles.eps}
\vspace*{-1cm}
\caption{Evolution of the radius of the convective envelope for 1.0 (left), 1.2 (middle)
and 1.3~\mbox{${\rm M}_{\odot}$}\ (right). Black regions represent the convective envelope, and grey ones
the ``semiconvection'' region below the convective envelope.}
\label{semicon}
\end{center}
\end{figure}
\section{Boundaries of the convective regions}
The evolution of the convective region boundaries in models with metal diffusion
is difficult to study.
In fact, as it was already noted by Bahcall {\em et al.} (2001) in the case of
1.0~\mbox{${\rm M}_{\odot}$}\ models, the accumulation of metals below the convective envelope
can trigger the onset of semiconvection. As the metal abundance increases below the
convection region, the opacity locally increases
and the affected layers end up by becoming
convectively unstable\footnote{We recall that we use the classical Schwarzschild
criterion for convective instability}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{./figs/ce_cesam.eps}
\caption{Evolution of the radius of the convective envelope for 1.0, 1.2,
and 1.3~\mbox{${\rm M}_{\odot}$}\ CESAM models. Solid lines correspond to CESAM2k with Burgers equations, and
dotted lines to CESAM2k with Michaud \& Proffitt (1993).}
\label{cecesam}
\end{center}
\end{figure}
The evolution of these unstable layers strongly depends on the numerical treatment of convection
borders used in the stellar evolution code.
CLES does not treat semiconvection, and the algorithm computing the chemical
composition in convective regions includes a kind of ``numerical diffusion''.
In CLES, the convectively unstable shells may grow
and eventually join the convective
envelope. As a consequence, the latter becomes suddenly deeper, destroys the Z gradient, recedes,
and the process starts again.
So, the crinkled profiles of $Y_{\rm S}$ for Task3.2 and 3.3 are a consequence of the
sudden variations of the depth of the convective envelope.
Since the timescale of diffusion decreases as the mass of the convective envelope decreases,
semiconvection appears earlier in 1.3~\mbox{${\rm M}_{\odot}$}\ than in 1.2~\mbox{${\rm M}_{\odot}$}\ models. Furthermore,
in contrast with Bahcall {\em et al.} (2001) results, semiconvection does not appear
in our evolved 1.0~\mbox{${\rm M}_{\odot}$}\ models, probably because of the effect of ``numerical diffusion'' that
reduces the efficiency of metal accumulation.
All these effects can be seen in Fig.~\ref{semicon}. In Fig.~\ref{cecesam} we plot the evolution of
the convective envelope for CESAM models. The different treatment of convection borders in both
codes leads to different depths of the convective envelope. At $X_{\rm c}=0.05$,
CLES models have convective envelopes about 0.1\% deeper than CESAM2k~B ones, and
about 2.3\%, 0.6\% and 0.4\% shallower than CESAM2k~MP models for 1.0, 1.2 and 1.3~\mbox{${\rm M}_{\odot}$}\
respectively.
\begin{figure}[t]
\begin{center}
\vspace*{-1cm}
\resizebox{\hsize}{!}{\includegraphics{./figs/mcc.eps}\includegraphics{./figs/cesam_mcc.eps}}
\caption{Convective core mass evolution for 1.2~\mbox{${\rm M}_{\odot}$}\ evolution computed with CLES
(left panel) and with CESAM (right panel).
The black region in the left panel corresponds to the convective core, and grey ones
are convectively unstable regions outside the convective core.}
\label{semicon_core}
\end{center}
\end{figure}
Semiconvection can also appear at the border of the convective core. As explained in Richard {\em et al.}
(2001), because of the He abundance gradient generated at the border of the convective
core by nuclear burning, the diffusion term due to the composition gradient
counteracts the He settling term and He ends up by going out of the core. Since the outward He
flux interacts also with the metals, these may as well diffuse
outward the core and prevent the metals settling. The enhancement of metals at the border of the
convective core induces an increase in opacity and, finally, the onset of semiconvection.
For the masses considered in Task3.2 and Task3.3, semiconvection appears very easily, as
the mass of the convective core increases with time,
leading to a quasi--discontinuity in the He abundance.
As for the convective envelope, the numerical treatment of the border of the
convective regions is a key aspect of the convective core evolution.
In Fig.~\ref{semicon_core} we plot the evolution of the convective regions in the central part of
1.2~\mbox{${\rm M}_{\odot}$}\ models computed with CLES (left panel) and with CESAM (right panel).
While CLES treatment of convective borders keeps convectively unstable shells
separated from the convective core (grey region), it seems that CESAM tends to connect these
shells to the central convective region. In fact, the envelope of the curve M$_{cc}$ {\rm vs.} $X_{\rm c}$
for the CESAM2k~MP model approximately coincides with the ``semiconvection'' region in the CLES plot.
As a consequence, a larger central mixed region in CESAM2k~MP than in CLES leads to a longer
main sequence phase, as seen in Fig.~\ref{fighr}.
In fact, the value of $M_{\rm cc}$, just before it begins to decrease, is
6\% and 12\%, respectively for 1.2 and 1.3~\mbox{${\rm M}_{\odot}$}, larger for CESAM2k~MP models
than for CLES ones. On the other hand, the corresponding values for CESAM2k~B are
2\% and 10\% larger than CLES ones.
\section{Diffusion coefficient differences}
\begin{figure}[b]
\begin{center}
\vspace*{-1cm}
\resizebox{\hsize}{!}{\includegraphics{./figs/YsPaquette.eps}\includegraphics{./figs/ioni.eps}}
\caption{Curves of the helium mass fraction in the
convective envelope as a function of age for a 1.2\mbox{${\rm M}_{\odot}$}\ evolution.
Left panel: thick line corresponds to models computed by CLES
with Paquette coefficients, and the other three curves correspond
to the results already shown in Fig.~\ref{figys}.
Right panel: evolution of the helium mass fraction in the convective envelope for
Task3.2 models computed by an updated version of CLES assuming: full ionization (dotted line),
partial ionization of the ``average'' element Z (solid line) and partial ionization of eleven
elements that diffuse separately (dashed line).}
\label{yspaquette}
\end{center}
\end{figure}
The discrepancies we found between CESAM2k~MP and CLES diffusion efficiency are in good agreement
with the comparisons already published by TBL94.
The large differences between CESAM2k~B and CLES are instead rather unexpected.
Both codes, in fact, derive the diffusion velocities by solving the Burgers' equations,
however, the values of friction coefficients appearing in those equations are different in
CESAM2k~B and CLES approaches. The resistance coefficients $K_{\rm ij}$, which represent
the effects of collisions between the particles i and j, are
$K_{\rm ij}=C_{\rm ij}\,F_{\rm ij}^{(11)}$ in CESAM2k~B, and
$K_{\rm ij}=C_{\rm ij}\,2\,\ln \Lambda_{\rm ij}$ in CLES (TBL94).
The term $C_{\rm ij}$ is the same
in both formulations and depends on the mass, charge and concentration of the particles i and j.
The values of the quantity $F_{\rm ij}^{(11)}$ are derived from the numerical fits of the
collision integrals (Paquette {\em et al.},1986), and the
term $\ln \Lambda_{\rm ij}$ is the Coulomb logarithm from Iben \& MacDonald (1985).
Furthermore, while TBL94
adopt for the heat flux terms $z_{\rm ij}$, $z'_{\rm ij}$ and $z''_{\rm ij}$ their
low density asymptotic values, CESAM2k~B computes them by using the
collision integrals from Paquette {\em et al.} (1986).
As shown in Thoul \& Montalb\'an (2007),
the assumptions done in TBL94 can lead, for the Task3.2~A model,
to diffusion velocities between 6 and 20\% larger than those that would be
obtained by using the Paquette's coefficients.
To further clarify this point, we replaced in the Burgers equations the coefficients
used in CLES with those used in CESAM2k~B and
re-computed the models for Task3.2. The new evolution of the He surface abundance
is plotted in Fig.~\ref{yspaquette} (left panel, thick line)
together with the curves obtained directly by CESAM2k~B, standard CLES, and CESAM2k~MP.
We see that the approximation adopted in TBL94 implies a helium surface
abundances slightly smaller than those that would be obtained by using the numerical
fits by Paquette. The new CLES values are close to CESAM2k~MP ones, but
still quite far from CESAM2k~B results.
Another important difference between CESAM and CLES diffusion routines is
that, while CESAM follows separately each element inside Z and
determines the ionization degree of all the species, the standard version of CLES adopts
full ionization, and follows only four species: H, He, electrons and an ``average''
element Z with atomic mass 8, charge 17.84.
To test the consequences of these approximations we computed the evolution of 1.2~\mbox{${\rm M}_{\odot}$}\
with an updated version of CLES that computes the ionization degree and allows one to follow
up to 22 elements separately.
we plot the evolution of the He surface abundance for calculations considering full ionization,
and partial ionization for the eleven most relevant elements in Z.
We can conclude that, at least for masses lower than or
equal to 1.2~\mbox{${\rm M}_{\odot}$}\, the effect of partial ionization on the He diffusion velocity is negligible.
Finally, we checked the effect of the time step by computing CLES evolution tracks
with smaller and larger steps, but no significant effect was detected in the
diffusion efficiency.
\section{Conclusions}
We compared models corresponding to Task3.1, Task3.2 and Task3.3 which were computed with
three different implementations of microscopic diffusion.
The largest discrepancy ($\sim$40\%) appears between codes that model diffusion velocities by solving
the Burgers' equations (CESAM2k~B and CLES). A detailed analysis showed that the approximations
used in Thoul {\em et al.} (1994) for the friction coefficients are not at the origin of this
discrepancy.
Computations with partial ionization have also shown that for masses smaller than or equal to 1.2\mbox{${\rm M}_{\odot}$},
the full ionization assumption has no detectable effects.
Therefore, we conclude that the difference between CLES and CESAM2k~B results
originates in the routine solving the Burgers' equation system.
Moreover, we showed that the effect of the different treatments of the convection borders
can lead, when diffusion is included, to significant discrepancies (up to 12\%)
for the mass and radius of convective regions.
\section*{Acknowledgments}
The authors thank HELAS for financial support. JM and ST are supported by Prodex 8 COROT (C90197).
\section{Introduction}
The replica symmetric solution of the random $K$-sat model at high temperature was first proved by Talagrand in \cite{TKsat}, and later the argument was improved in \cite{SG} and, again, in \cite{SG2}. The main technical tool of the proof is the so called cavity method, but there are several other interesting and non-trivial ideas that play an important role. In this paper, we will translate these ideas into the language of asymptotic Gibbs measures developed by the author in \cite{Pspins}. The main advantage of this approach is that the cavity equations become exact in the infinite volume limit, which allows us to bypass all subtle inductions on the size of the system and to clarify the essential ideas. Using the exact cavity equations, we will also be able to prove that the system is in a pure state for a larger region of parameters.
Consider an integer $p\geq 2$ and real numbers $\alpha>0,$ called the connectivity parameter, and $\beta>0$, called the inverse temperature parameter. Consider a random function
\begin{equation}
\theta(\sigma_1,\ldots,\sigma_p)=-\beta\prod_{1\leq i\leq p} \frac{1+J_i \sigma_i}{2}
\label{Deftheta}
\end{equation}
on $\{-1,1\}^p$, where $(J_i)_{1\leq i\leq p}$ are independent random signs, $\mathbb{P}(J_i=\pm 1)=1/2.$ Let $(\theta_k)_{k\geq 1}$ be a sequence of independent copies of the function $\theta$, defined in terms of independent copies of $(J_i)_{1\leq i\leq p}$. Using this sequence, we define a Hamiltonian $H_N(\sigma)$ on $\Sigma_N = \{-1,1\}^N$ by
\begin{equation}
-H_N(\sigma)=\sum_{k\leq \pi(\alpha N)}\theta_k(\sigma_{i_{1,k}},
\ldots,\sigma_{i_{p,k}}),
\label{Ham}
\end{equation}
where $\pi(\alpha N)$ is a Poisson random variable with the mean $\alpha N$ and the indices $(i_{j,k})_{j,k\geq 1}$ are independent uniform on $\{1,\ldots,N\}$. This is the Hamiltonian of the random $K$-sat model with $K=p$, and our goal will be to compute the limit of the free energy
\begin{equation}
F_N = \frac{1}{N}\mathbb{E} \log \sum_{\sigma\in \Sigma_N} \exp \bigl(-H_N(\sigma) \bigr)
\end{equation}
as $N\to\infty$ in some region of parameters $(\alpha,\beta)$. It will be convenient to extend the definition of the function $\theta$ from $\{-1,1\}^p$ to $[-1,1]^p$ as follows. Since the product over $1\leq i\leq p$ in (\ref{Deftheta}) takes only two values $0$ and $1$, we can write
$$
\exp \theta(\sigma_1,\ldots,\sigma_p)=1+(e^{-\beta}-1)\prod_{1\leq i\leq p} \frac{1+J_i \sigma_i}{2}.
$$
At some point, we will be averaging $\exp \theta$ over the coordinates $\sigma_1,\ldots,\sigma_p$ independently of each other, so the resulting average will be of the same form with $\sigma_i$ taking values in $[-1,1].$ It will be our choice to represent this average again as $\exp \theta$ with $\theta$ now defined by
\begin{equation}
\theta(\sigma_1,\ldots,\sigma_p)=\log\Bigl(1+(e^{-\beta}-1)\prod_{1\leq i\leq p} \frac{1+J_i \sigma_i}{2}\Bigr).
\label{DefthetaG}
\end{equation}
Of course, on the set $\{-1,1\}^p$ this definition coincides with (\ref{Deftheta}). Note that this function takes values in the interval $[-\beta,0].$
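For instance, if the coordinates $\sigma_1,\ldots,\sigma_p$ are averaged independently with means $m_i\in[-1,1]$ then, since $\exp\theta-1$ is proportional to a product of independent factors,
$$
\mathbb{E}\exp \theta(\sigma_1,\ldots,\sigma_p)=1+(e^{-\beta}-1)\prod_{1\leq i\leq p} \frac{1+J_i m_i}{2}
=\exp\theta(m_1,\ldots,m_p),
$$
where $\theta$ on the right-hand side is understood in the sense of (\ref{DefthetaG}).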
Let us denote by $\Pr[-1,1]$ the set of probability measures on $[-1,1]$. Given $\zeta\in \Pr[-1,1]$, let $(z_i)_{i\geq 1}$ and $(z_{i,j})_{i,j\geq 1}$ be i.i.d. random variables with the distribution $\zeta$ and let
\begin{align}
{\cal P}(\zeta)
=&\,
\log 2 +
\mathbb{E} \log {\rm Av} \exp \sum_{k\leq \pi(p \alpha)} \theta_{k}(z_{1,k},\ldots,z_{p-1,k},{\varepsilon})
\nonumber
\\
& -\,
(p-1)\alpha \mathbb{E} \theta(z_1,\ldots,z_p),
\label{PPure}
\end{align}
where $\pi(\alpha p)$ is a Poisson random variable with the mean $\alpha p$ independent of everything else and ${\rm Av}$ denotes the average over ${\varepsilon}\in\{-1,1\}$. The functional ${\cal P}(\zeta)$ is called the replica symmetric formula in this model. Our first result will hold in the region of parameters
\begin{equation}
\min(4\beta,1) (p-1)p \alpha <1.
\label{region}
\end{equation}
In this case, we will show that asymptotically the system is always in a pure state in the sense that will be explained in Section \ref{SecPure} and the following holds.
\begin{theorem}\label{ThFE} If (\ref{region}) holds then
\begin{equation}
\lim_{N\to\infty} F_N = \inf_{\zeta\in \Pr[-1,1]} {\cal P}(\zeta).
\label{FE}
\end{equation}
\end{theorem}
Notice that when the connectivity parameter $\alpha$ is small, $(p-1)p\alpha<1$, the formula (\ref{FE}) holds for all temperatures, which is a new feature of our approach. One can say more under the additional assumption that
\begin{equation}
\frac{1}{2}(e^\beta-1)(p-1)p \alpha <1.
\label{region2}
\end{equation}
In particular, in this case one can show that the asymptotic Gibbs measure, which will be defined in the next section, is unique and, as a result, the infimum in (\ref{FE}) can be replaced by ${\cal P}(\zeta)$, where $\zeta$ can be characterized as a fixed point of a certain map arising from the cavity computations. For $r\geq 1$, let us consider a (random) function $T_r: [-1,1]^{(p-1)r}\to [-1,1]$ defined by
\begin{equation}
T_r\bigl((\sigma_{j,k})_{j\leq p-1, k\leq r}\bigr)
=
\frac{{\rm Av}\, {\varepsilon} \exp A({\varepsilon})}{{\rm Av} \exp A({\varepsilon})},
\label{DefTr}
\end{equation}
where
\begin{equation}
A({\varepsilon}) = \sum_{k\leq r} \theta_k(\sigma_{1,k},\ldots,\sigma_{p-1,k}, {\varepsilon}).
\label{DefTrA}
\end{equation}
We set $T_0 = 0$ and define a map
\begin{equation}
T: \Pr[-1,1] \to \Pr [-1,1]
\end{equation}
in terms of the functions $(T_r)$ as follows. Given $\zeta \in\Pr[-1,1]$, if we again let $(z_{j,k})_{j\leq p-1, k\geq 1}$ be i.i.d. random variables with the distribution $\zeta$ then $T(\zeta)$ is defined by
\begin{align}
T(\zeta) &= {\cal L}\Bigl(T_{\pi(\alpha p)}\bigl((z_{j,k})_{j\leq p-1, k\leq \pi(\alpha p)}\bigr)\Bigr)
\nonumber
\\
&= \sum_{r\geq 0} \frac{(\alpha p)^r}{r!} e^{-\alpha p }
{\cal L}\Bigl(T_{r}\bigl((z_{j,k})_{j\leq p-1, k\leq r}\bigr)\Bigr),
\label{DefT}
\end{align}
where ${\cal L}(X)$ denotes the distribution of $X$. In the second line, we simply wrote the distribution as a mixture over possible values of $\pi(\alpha p)$, since this Poisson random variable is independent of everything else. The following is essentially the main result in Chapter 6 in \cite{SG2}.
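Although we will not use it in the proofs, the fixed point of $T$ and the value of ${\cal P}$ at the fixed point can be approximated numerically by the standard population dynamics technique: a large pool of samples represents $\zeta$, and each sweep redraws every member of the pool according to (\ref{DefT}). The following is a minimal sketch (all parameter values are illustrative, and the iteration is guaranteed to have a unique limit only under (\ref{region2})):
\begin{verbatim}
# Population dynamics for zeta = T(zeta), then Monte Carlo for P(zeta).
import numpy as np

rng = np.random.default_rng(0)
p, alpha, beta = 3, 0.2, 1.0

def theta(sig, J):
    # theta(sigma_1,...,sigma_p) as in (DefthetaG)
    return np.log(1 + (np.exp(-beta) - 1) * np.prod((1 + J * sig) / 2))

def A(z, J, eps):
    # A(eps) of (DefTrA): rows of z are (z_{1,k},...,z_{p-1,k})
    return sum(theta(np.append(zk, eps), Jk) for zk, Jk in zip(z, J))

pool = rng.uniform(-1, 1, 5000)          # initial guess for zeta
for sweep in range(20):                  # iterate zeta -> T(zeta)
    new = np.zeros_like(pool)            # T_0 = 0 covers r = 0
    for i in range(pool.size):
        r = rng.poisson(alpha * p)
        if r == 0:
            continue
        z = rng.choice(pool, (r, p - 1))
        J = rng.choice([-1.0, 1.0], (r, p))
        wp, wm = np.exp(A(z, J, 1.0)), np.exp(A(z, J, -1.0))
        new[i] = (wp - wm) / (wp + wm)   # (DefTr)
    pool = new

def P_estimate(n=20000):                 # Monte Carlo for (PPure)
    t1 = t2 = 0.0
    for _ in range(n):
        r = rng.poisson(alpha * p)
        z = rng.choice(pool, (r, p - 1))
        J = rng.choice([-1.0, 1.0], (r, p))
        t1 += np.log((np.exp(A(z, J, 1.0)) + np.exp(A(z, J, -1.0))) / 2)
        t2 += theta(rng.choice(pool, p), rng.choice([-1.0, 1.0], p))
    return np.log(2) + t1 / n - (p - 1) * alpha * t2 / n

print(P_estimate())
\end{verbatim}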
\begin{theorem}\label{ThFE2}
If (\ref{region2}) holds then the map $T$ has a unique fixed point, $T(\zeta)=\zeta$. If both (\ref{region}) and (\ref{region2}) hold then $\lim_{N\to\infty} F_N = {\cal P}(\zeta).$
\end{theorem}
As we already mentioned, the main ideas of the proof we give here will be the same as in \cite{SG2} but, hopefully, more transparent. Of course, there is a trade-off in the sense that, instead of working with approximate cavity computations for systems of finite size and using the induction on $N$, one needs to understand how these cavity computations can be written rigorously in the infinite volume limit, which was the main point of \cite{Pspins}. However, we believe that passing through this asymptotic description makes the whole proof less technical and more conceptual. Moreover, the results in \cite{Pspins} hold for all parameters, and here we simply specialize the general theory to the high temperature region using methods developed in \cite{TKsat, SG, SG2}.
In the next section, we will review the definition of asymptotic Gibbs measures and recall the main results from \cite{Pspins}, namely, the exact cavity equations and the formula for the free energy in terms of asymptotic Gibbs measures. In Section \ref{SecPure}, we will prove that, under (\ref{region}), all asymptotic Gibbs measures concentrate on one (random) function (so the system is in a pure state) and in Section \ref{SecThFE} we will deduce Theorem \ref{ThFE} from this fact. Finally, in Section \ref{SecThFE2}, we will prove Theorem \ref{ThFE2} by showing that, under (\ref{region}) and (\ref{region2}), the asymptotic Gibbs measure is unique. Of course, as in \cite{SG2}, the same proof works for diluted $p$-spin models as well but, for simplicity of notations, we will work only with the Hamiltonian (\ref{Ham}) of the $p$-sat model.
\section{Asymptotic Gibbs measures}
In this section we will review the main results in \cite{Pspins} starting with the definition of asymptotic Gibbs measures. The Gibbs measure $G_N$ corresponding to the Hamiltonian (\ref{Ham}) is a (random) probability measure on $\{-1,1\}^N$ defined by
\begin{equation}
G_N(\sigma) = \frac{1}{Z_N} \exp\bigl(- H_N(\sigma)\bigr)
\end{equation}
where the normalizing factor $Z_N$ is called the partition function. Let $(\sigma^\ell)_{\ell\geq 1}$ be an i.i.d. sequence of replicas drawn from the Gibbs measure $G_N$ and let $\mu_N$ denote the joint distribution of the array of all spins on all replicas, $(\sigma_i^\ell)_{1\leq i\leq N, \ell\geq 1}$, under the average product Gibbs measure $\mathbb{E} G_N^{\otimes \infty}$. In other words, for any choice of signs $a_i^\ell \in\{-1,1\}$ and any $n\geq 1,$
\begin{equation}
\mu_N\Bigl( \bigl\{\sigma_i^\ell = a_i^\ell \ :\ 1\leq i\leq N, 1\leq \ell \leq n \bigr\} \Bigr)
=
\mathbb{E} G_N^{\otimes n}\Bigl( \bigl\{\sigma_i^\ell = a_i^\ell \ :\ 1\leq i\leq N, 1\leq \ell \leq n \bigr\} \Bigr).
\label{muN}
\end{equation}
Let us extend $\mu_N$ to a distribution on $\{-1,1\}^{\mathbb{N}\times\mathbb{N}}$ simply by setting $\sigma_i^\ell=0$ for $i\geq N+1.$ Let ${\cal M}$ be the set of all possible limits of $(\mu_N)$ over subsequences with respect to weak convergence of measures on the compact product space $\{-1,1\}^{\mathbb{N}\times\mathbb{N}}$. We will call these limits the \emph{asymptotic Gibbs measures}. One crucial property that these measures inherit from $\mu_N$ is the invariance under the permutation of both spin and replica indices $i$ and $\ell.$ Invariance under the permutation of the replica indices is obvious, and invariance under the permutation of the spin indices holds because the distribution of the Hamiltonian (\ref{Ham}) is invariant under any such permutation. In other words, there is symmetry between coordinates in distribution, which is called \emph{symmetry between sites}.
Because of these symmetries, all asymptotic Gibbs measures have some special structure. By the Aldous-Hoover representation \cite{Aldous, Hoover2}, for any $\mu\in {\cal M}$, there exists a measurable function $\sigma:[0,1]^4\to\{-1,1\}$ such that $\mu$ is the distribution of the array
\begin{equation}
s_i^\ell=\sigma(w,u_\ell,v_i,x_{i,\ell}),
\label{sigma}
\end{equation}
where random variables $w,(u_\ell), (v_i), (x_{i,\ell})$ are i.i.d. uniform on $[0,1]$. The function $\sigma$ is defined uniquely for a given $\mu\in {\cal M}$, up to measure-preserving transformations (Theorem 2.1 in \cite{Kallenberg}), so we can identify the distribution $\mu$ of the array $(s_i^\ell)$ with $\sigma$. Since, in our case, $\sigma$ takes values in $\{-1,1\}$, the distribution $\mu$ is completely encoded by the function
\begin{equation}
\bar{\sigma}(w,u,v) = \mathbb{E}_x \sigma(w,u,v,x)
\label{fop}
\end{equation}
where $\mathbb{E}_x$ is the expectation in $x$ only. The last coordinate $x_{i,\ell}$ in (\ref{sigma}) is independent for all pairs $(i,\ell)$, and we can think of it as flipping a coin with the expected value ${\bar{\sigma}}(w,u_\ell,v_i)$. In fact, given the function (\ref{fop}), we can always redefine $\sigma$ by
$$
\sigma(w,u_\ell,v_i,x_{i,\ell}) = 2 I\Bigl(x_{i,\ell} \leq \frac{1}{2}\bigl(1+ {\bar{\sigma}}(w,u_\ell,v_i) \bigr)\Bigr) -1.
$$
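As a concrete illustration of this coin-flip construction, the following minimal sketch generates a finite sub-array of $(s_i^\ell)$ from a given function ${\bar{\sigma}}$; the particular choice of ${\bar{\sigma}}$ below is hypothetical and serves only as an example.
\begin{verbatim}
import numpy as np

# Sample a finite sub-array (s_i^l) from a given sigma_bar(w, u, v)
# via the coin-flip redefinition above.
rng = np.random.default_rng(1)

def sigma_bar(w, u, v):
    # hypothetical example; any measurable [0,1]^3 -> [-1,1] works
    return np.tanh(w * (u - v))

n_spins, n_replicas = 5, 4
w = rng.uniform()
u = rng.uniform(size=n_replicas)             # one u_l per replica
v = rng.uniform(size=n_spins)                # one v_i per spin
x = rng.uniform(size=(n_spins, n_replicas))  # i.i.d. over pairs (i, l)

# s_i^l = 2 * 1{x_{i,l} <= (1 + sigma_bar(w, u_l, v_i)) / 2} - 1
s = 2 * (x <= (1 + sigma_bar(w, u[None, :], v[:, None])) / 2) - 1
\end{verbatim}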
One can think of the function ${\bar{\sigma}}$ in a more geometric way as a Gibbs measure on the space of functions, as follows. It is well known that asymptotically the joint distribution $\mu\in {\cal M}$ of all spins contains the same information as the joint distribution of all so called multi-overlaps
\begin{equation}
R_{\ell_1,\ldots, \ell_n}^N = \frac{1}{N} \sum_{1\leq i\leq N} \sigma_i^{\ell_1}\cdots \sigma_i^{\ell_n}
\label{multioverlapN}
\end{equation}
for all $n\geq 1$ and all $\ell_1,\ldots, \ell_n\geq 1$. This is easy to see by expressing the joint moments of one array in terms of the joint moment of the other. In particular, one can check that the asymptotic distribution of the array (\ref{multioverlapN}) over a subsequence of $\mu_N$ converging to $\mu\in {\cal M}$ coincides with the distribution of the array
\begin{equation}
R_{\ell_1,\ldots, \ell_n}
=
\mathbb{E}_v \,\bar{\sigma}(w,u_{\ell_1},v)\cdots \bar{\sigma}(w,u_{\ell_n},v)
\label{multioverlap}
\end{equation}
for $n\geq 1$ and $\ell_1,\ldots, \ell_n\geq 1$, where $\mathbb{E}_v$ denotes the expectation in the last coordinate $v$ only. The average over the spin index in (\ref{multioverlapN}) has been replaced by the average of functions over the last coordinate, and we can think of the sequence $({\bar{\sigma}}(w,u_{\ell},\cdot))_{\ell\geq 1}$ as an i.i.d. sequence of replicas sampled from the (random) probability measure
\begin{equation}
G_w = du \circ \bigl(u\to {\bar{\sigma}}(w,u,\cdot)\bigr)^{-1}
\label{Gibbsw}
\end{equation}
on the space $L^2([0,1], dv) \cap \{ \|{\bar{\sigma}}\|_\infty \leq 1\}$ with the topology of $L^2([0,1], dv)$. Here, both $du$ and $dv$ denote the Lebesgue measure on $[0,1]$. Thus, thanks to the Aldous-Hoover representation, to every asymptotic Gibbs measure $\mu\in {\cal M}$ we can associate a function ${\bar{\sigma}}$ on $[0,1]^3$ or a random measure $G_w$ of the above space of functions. One can find a related interpretation in terms of exchangeable random measures in \cite{Austin2}.
The main idea introduced in \cite{Pspins} was a special regularizing perturbation of the Hamiltonian $H_{N}(\sigma)$ that allows one to pass some standard cavity computations for the Gibbs measure $G_N$ to the limit and state them in terms of the asymptotic Gibbs measures $\mu\in{\cal M}$. We will refer to \cite{Pspins} for details and only mention that the perturbation mimics adding to the system a random number (of order $\log N$) of cavity coordinates from the beginning. Because of this perturbation, treating a finite number of coordinates as cavity coordinates is ``not felt'' by the Gibbs measure, which results in a number of useful properties in the limit. The perturbation is small enough and does not affect the limit of the free energy $F_N$. In the rest of this section, we will describe the cavity equations in terms of the functions $\sigma$ in (\ref{sigma}) and state some of their consequences.
Let us introduce some notation. We will often need to pick various sets of different spin coordinates in the array $(s_i^\ell)$ in (\ref{sigma}), and it is quite inconvenient to enumerate them using one index $i\geq 1$. Instead, we will use multi-indices $(i_1,\ldots, i_n)$ for $n\geq 1$ and $i_1,\ldots, i_n\geq 1$ and consider
\begin{equation}
s_{i_1,\ldots, i_n} = \sigma(w,u, v_{i_1,\ldots, i_n},x_{i_1,\ldots, i_n}),
\label{s}
\end{equation}
where $(v_{i_1,\ldots, i_n}), (x_{i_1,\ldots,i_n})$ are i.i.d. uniform on $[0,1]$. In addition to (\ref{s}), we will need
\begin{equation}
\hat{s}_{i_1,\ldots, i_n} = \sigma(w,u, \hat{v}_{i_1,\ldots, i_n}, \hat{x}_{i_1,\ldots, i_n}),
\label{hats}
\end{equation}
for some independent copies $\hat{v}$ and $\hat{x}$ of the sequences $v$ and $x$. Let $(\theta_{i_1,\ldots, i_n})$ and $(\hat{\theta}_{i_1,\ldots, i_n})$ be i.i.d. copies of the random function $\theta$.
Take arbitrary integers $n, m, q, r\geq 1$ such that $n\leq m.$ The index $q$ will represent the number of replicas selected, $m$ will be the total number of spin coordinates and $n$ will be the number of cavity coordinates. The parameter $r\geq 1$ will index certain terms in the cavity equations that are allowed because of the stability properties of the Hamiltonian (\ref{Ham}); these terms played an important role in \cite{Pspins} and will appear in the formulation of the main results from \cite{Pspins}, but will not be used throughout this paper after that. For each replica index $\ell\leq q$ we consider an arbitrary subset of coordinates
$C_\ell\subseteq \{1,\ldots, m\}$ and split them into cavity and non-cavity coordinates
\begin{equation}
C_\ell^1 = C_\ell\cap \{1,\ldots, n\},\,\,\,
C_\ell^2=C_\ell\cap \{n+1,\ldots,m\}.
\label{C12}
\end{equation}
The following quantities represent the cavity fields for $i\geq 1$,
\begin{equation}
A_i({\varepsilon})=\sum_{k\leq \pi_i(\alpha p)} \theta_{k,i}(s_{1,i,k},\ldots,s_{p-1,i,k},{\varepsilon}),
\label{Ai}
\end{equation}
where ${\varepsilon}\in \{-1,1\}$ and $(\pi_i(\alpha p))_{i\geq 1}$ are i.i.d. Poisson random variables with the mean $\alpha p$. Let $\mathbb{E}'$ denote the expectation in $u$ and the sequences $x$ and $\hat{x}$, and ${\rm Av}$ denote the average over $({\varepsilon}_i)_{i\geq 1}$ in $\{-1,1\}^\mathbb{N}$ with respect to the uniform distribution. Define
\begin{align}
U_\ell &=\, \mathbb{E}' \Bigl({\rm Av} \Bigl(\prod_{i\in C_\ell^1} {\varepsilon}_i \exp \sum_{i\leq n} A_i({\varepsilon}_i) \Bigr)
\prod_{i\in C_\ell^2} s_i \exp\sum_{k\leq r} \hat{\theta}_k(\hat{s}_{1,k},\ldots,\hat{s}_{p,k}) \Bigr),
\nonumber
\\
V &=\, \mathbb{E}' \Bigl({\rm Av} \Bigl(\exp \sum_{i\leq n} A_i({\varepsilon}_i) \Bigr)
\exp\sum_{k\leq r} \hat{\theta}_k(\hat{s}_{1,k},\ldots,\hat{s}_{p,k}) \Bigr).
\label{Ul}
\end{align}
The following result proved in Theorem $1$ in \cite{Pspins} expresses some standard cavity computations in terms of the asymptotic Gibbs measures.
\begin{theorem}\label{ThSC}
For any $\mu\in {\cal M}$ and the corresponding function $\sigma$ in (\ref{sigma}),
\begin{equation}
\mathbb{E} \prod_{\ell\leq q} \mathbb{E}' \prod_{i\in C_\ell}s_i
=\mathbb{E} \frac{\prod_{\ell\leq q}U_\ell}{V^q}.
\label{SC}
\end{equation}
\end{theorem}
The left hand side can be written using replicas as $\mathbb{E} \prod_{\ell\leq q} \prod_{i\in C_\ell}s_i^\ell$, so it represents an arbitrary joint moment of spins in the array (\ref{sigma}). The right hand side expresses what happens to this joint moment when we treat the first $n$ spins as cavity coordinates. As in \cite{Pspins}, we will denote by ${\cal M}_{inv}$ the set of distributions of exchangeable arrays generated by functions $\sigma:[0,1]^4\to\{-1,1\}$ as in (\ref{sigma}) that satisfy the cavity equations (\ref{SC}) for all possible choices of parameters. Theorem \ref{ThSC} shows that ${\cal M}\subseteq {\cal M}_{inv},$ which was the key to proving the formula for the free energy in terms of asymptotic Gibbs measures. Let us consider the functional
\begin{align}
{\cal P}(\mu)
=&\
\log 2 + \mathbb{E} \log \mathbb{E}' {\rm Av} \exp \sum_{k\leq \pi(p \alpha)} \theta_{k}(s_{1,k},\ldots,s_{p-1,k},{\varepsilon})
\nonumber
\\
&-\, (p-1)\alpha\, \mathbb{E}\log \mathbb{E}' \exp \theta(s_1,\ldots,s_p).
\label{CalP}
\end{align}
The next result was proved in Theorem $2$ in \cite{Pspins}.
\begin{theorem}\label{ThFEG} The following holds,
\begin{eqnarray}
\lim_{N\to\infty} F_N =
\inf_{\mu\in {\cal M}} {\cal P}(\mu) =
\inf_{\mu\in {\cal M}_{inv}} {\cal P}(\mu).
\label{FEG}
\end{eqnarray}
\end{theorem}
\textbf{Remark.} This result was stated in \cite{Pspins} for even $p\geq 2$ only, where this condition was used in the proof of the Franz-Leone upper bound \cite{FL}. However, in the case of the $p$-sat model the proof works for all $p$ without any changes at all, as was observed in Theorem 6.5.1 in \cite{SG2}. The condition that $p$ is even is needed in the corresponding result for the diluted $p$-spin model, and that is why it appears in \cite{PT, Pspins}, where both models were treated at the same time.
For some applications, it will be convenient to rewrite (\ref{SC}) in a slightly different form. From now on, we will not be using the terms $\hat{\theta}_k$ in (\ref{Ul}), so we will now set $r=0$. Let us consider some function $f(\sigma_1,\sigma_2)$ on $\{-1,1\}^{m\times q}$ of the arguments
\begin{align}
\sigma_1 &=\, (\sigma_1^\ell,\ldots, \sigma_n^\ell)_{1\leq \ell\leq q} \in \{-1,1\}^{n\times q},
\nonumber
\\
\sigma_2 &=\, (\sigma_{n+1}^\ell,\ldots, \sigma_m^\ell)_{1\leq \ell\leq q}\in \{-1,1\}^{(m-n)\times q}.
\label{DefArgs}
\end{align}
For example, if we consider the function
\begin{equation}
f(\sigma_1,\sigma_2) = \prod_{\ell\leq q} \prod_{i\in C_\ell}\sigma_i^\ell
= \prod_{\ell\leq q} \Bigl(\prod_{i\in C_\ell^1}\sigma_i^\ell \prod_{i\in C_\ell^2}\sigma_i^\ell \Bigr)
\label{Deff}
\end{equation}
then the left hand side of (\ref{SC}) can be written as $\mathbb{E} f(s_1,s_2)$, where $s_1$ and $s_2$ are the corresponding subarrays of $(s_{i}^\ell)$ in (\ref{sigma}). To rewrite the right hand side, similarly to (\ref{s}), let us consider
\begin{equation}
s_{i_1,\ldots, i_n}^\ell = \sigma(w,u_\ell, v_{i_1,\ldots, i_n},x_{i_1,\ldots, i_n}^{\ell}),
\label{sell}
\end{equation}
where, as always, all the variables are i.i.d. uniform on $[0,1]$ for different indices, and define, for ${\varepsilon}=({\varepsilon}_i^\ell)_{i\leq n, \ell\leq q}\in \{-1,1\}^{n\times q}$,
\begin{equation}
{\cal E}({\varepsilon}) = \prod_{\ell\leq q} \exp \sum_{i\leq n} A_{i,\ell}({\varepsilon}_i^\ell),
\label{DefE0}
\end{equation}
where
\begin{equation}
A_{i,\ell}({\varepsilon}_i^\ell) = \sum_{k\leq \pi_i(\alpha p)} \theta_{k,i}(s_{1,i,k}^\ell,\ldots, s_{p-1,i,k}^\ell, {\varepsilon}_i^\ell).
\end{equation}
Then, with this notation, the equation (\ref{SC}) can be rewritten as
\begin{equation}
\mathbb{E} f(s_1,s_2)
=
\mathbb{E} \frac{\mathbb{E}' {\rm Av} f({\varepsilon},s_2) {\cal E}({\varepsilon})}{\mathbb{E}' {\rm Av}\, {\cal E}({\varepsilon})}.
\label{SCinf}
\end{equation}
Simply, we expressed a product of expectations $\mathbb{E}'$ over replicas $\ell\leq q$ as an expectation of the product, using replicas of the random variables $u$ and $x$ that are being averaged. Since any function $f$ on $\{-1,1\}^{m\times q}$ is a linear combination of monomials of the type (\ref{Deff}), (\ref{SCinf}) holds for any such $f$. From here, it is not difficult to conclude that for any functions $f_1,\ldots, f_k$ on $\{-1,1\}^{m\times q}$ and any continuous function $F: \mathbb{R}^k\to\mathbb{R},$
\begin{equation}
\mathbb{E} F \bigl(\mathbb{E}' f_1(s_1,s_2),\ldots, \mathbb{E}' f_k(s_1,s_2)\bigr)
=
\mathbb{E} F\Bigl(\frac{\mathbb{E}' {\rm Av} f_1({\varepsilon},s_2) {\cal E}({\varepsilon})}{\mathbb{E}' {\rm Av}\, {\cal E}({\varepsilon})},\ldots,
\frac{\mathbb{E}' {\rm Av} f_k({\varepsilon},s_2) {\cal E}({\varepsilon})}{\mathbb{E}' {\rm Av}\, {\cal E}({\varepsilon})}\Bigr).
\label{SCinFk}
\end{equation}
It is enough to prove this for functions $F(a_1,\ldots,a_k) = a_1^{n_1}\cdots a_k^{n_k}$ for integer powers $n_1,\ldots,n_k\geq 0$, and this immediately follows from (\ref{SCinf}) by considering $f$ on $q(n_1+\ldots+n_k)$ replicas given by the product of copies of $f_1,\ldots, f_k$ on different replicas, so that each $f_i$ appears $n_i$ times in this product.
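For instance, for $k=1$ and $F(a)=a^2$ one applies (\ref{SCinf}) to the function of $2q$ replicas
$$
f\bigl((\sigma^\ell)_{\ell\leq 2q}\bigr) = f_1\bigl((\sigma^\ell)_{\ell\leq q}\bigr)\, f_1\bigl((\sigma^\ell)_{q< \ell\leq 2q}\bigr):
$$
since the variables $u_\ell$ and the corresponding $x$'s are independent over $\ell$, the expectation $\mathbb{E}'$ factors over the two groups of replicas, so the left hand side of (\ref{SCinf}) becomes $\mathbb{E} \bigl(\mathbb{E}' f_1(s_1,s_2)\bigr)^2$ and the right hand side becomes the expectation of the square of the corresponding ratio.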
\section{Pure state}\label{SecPure}
In this section, we will show that in the region (\ref{region}) the function ${\bar{\sigma}}(w,u,v)$ in (\ref{fop}) corresponding to any $\mu\in {\cal M}_{inv}$ essentially does not depend on the coordinate $u$. In other words, for almost all $w$, the Gibbs measure $G_w$ in (\ref{Gibbsw}) is concentrated on one function in $L^2([0,1], dv) \cap \{ \|{\bar{\sigma}}\|_\infty \leq 1\}$. This is expressed by saying that the \emph{system is in a pure state}.
\begin{theorem}\label{ThPure}
Under (\ref{region}), ${\bar{\sigma}}(w,u,v) = \mathbb{E}_u {\bar{\sigma}}(w,u,v)$ for almost all $w,u,v \in [0,1]$, where $\mathbb{E}_u$ denotes the expectation in $u$ only.
\end{theorem}
When the system is in a pure state, we will simply omit the coordinate $u$ and write ${\bar{\sigma}}(w,v)$. In this case, a joint moment of finitely many spins,
$$
\mathbb{E} \prod_{i,\ell} s_i^\ell
= \mathbb{E}\prod_{i,\ell}{\bar{\sigma}}(w,u_\ell,v_i)
= \mathbb{E}\prod_{i,\ell}{\bar{\sigma}}(w, v_i),
$$
does not depend on replica indices, which means that we can freely change them, for example, $\mathbb{E} s_1^1 s_1^2 s_2^1 s_2^2 = \mathbb{E} s_1^1 s_1^2 s_2^3 s_2^4.$ As in \cite{SG2}, the strategy of the proof will be to show that we can change one replica index at a time,
\begin{equation}
\mathbb{E} s_1^1 \prod_{(i,\ell)\in C} s_i^\ell = \mathbb{E} s_1^{\ell'} \prod_{(i,\ell)\in C} s_i^\ell,
\label{EqPure}
\end{equation}
where a finite set of indices $C$ does not contain $(1,1)$ and $(1,\ell')$. Using this repeatedly, we can make all replica indices different from each other, showing that any joint moment depends only on how many times each spin index $i$ appears in the product. Of course, this implies that
$$
\mathbb{E} \prod_{i,\ell} s_i^\ell
= \mathbb{E}\prod_{i,\ell} \mathbb{E}_u{\bar{\sigma}}(w,u,v_i),
$$
so we could replace the function ${\bar{\sigma}}(w,u,v)$ by $\mathbb{E}_u {\bar{\sigma}}(w,u,v)$ without changing the distribution of the array $(s_i^\ell)$. This would be sufficient for our purposes, since we do not really care what the function ${\bar{\sigma}}$ looks like as long as it generates the array of spins $(s_i^\ell)$ with the same distribution. However, it is not difficult to show that, in this case, the function ${\bar{\sigma}}(w,u,v)$ essentially does not depend on $u$ anyway. Let us explain this first.
\medskip
\noindent
\textbf{Proof of Theorem \ref{ThPure}} \emph{(assuming (\ref{EqPure}))}. If (\ref{EqPure}) holds then $\mathbb{E} s_1^1 s_1^2 s_2^1 s_2^2 = \mathbb{E} s_1^1 s_1^2 s_2^3 s_2^4$. This can also be written in terms of the asymptotic overlaps $R_{\ell,\ell'}$ defined in (\ref{multioverlap}) as $$\mathbb{E} R_{1,2}^2 = \mathbb{E} R_{1,2} R_{3,4}.$$ Since $R_{\ell,\ell'}$ is the scalar product in $(L^2[0,1], dv)$ of replicas $\sigma^\ell$ and $\sigma^{\ell'}$ drawn from the asymptotic Gibbs measure $G_w$ in (\ref{Gibbsw}),
$$
0= \mathbb{E} R_{1,2}^2 - \mathbb{E} R_{1,2} R_{3,4} = \mathbb{E} \mbox{\rm Var}_{G_w}(\sigma^1\cdot \sigma^2),
$$
which implies that for almost all $w$ the overlap is constant almost surely. Obviously, this can happen only if $G_w$ is concentrated on one function (that may depend on $w$) and this finishes the proof.
\qed
\medskip
\medskip
\noindent
In the rest of the section we will prove (\ref{EqPure}). The main idea of the proof will be almost identical to Section 6.2 in \cite{SG2}, even though there will be no induction on the system size. One novelty will be that the cavity equations (\ref{SC}) for the asymptotic Gibbs measures will allow us to give a different argument for large values of $\beta$, improving the dependence of the pure state region on the parameters. We will begin with this case, since it is slightly simpler.
Without loss of generality, we can assume that $\ell'=2$ in (\ref{EqPure}). Given $m,q\geq 1$, for $j=1,2$, let us consider functions
$f_j(\sigma_1,\sigma_2)$ on $\{-1,1\}^{m\times q}$ with $\sigma_1$ and $\sigma_2$ as in (\ref{DefArgs}). We will suppose that
\begin{equation}
0<f_2 \,\mbox{ and }\,
|f_1|\leq f_2.
\label{Compfs}
\end{equation}
Let us fix $n\leq m$ and, as before, we will treat the first $n$ coordinates as cavity coordinates. Consider the map
\begin{equation}
T: \{-1,1\}^{m\times q} \to \{-1,1\}^{m\times q}
\end{equation}
that switches the coordinates $(\sigma_1^1,\ldots, \sigma_n^1)$ with $(\sigma_1^2,\ldots, \sigma_n^2)$ and leaves other coordinates untouched. The statement of the following lemma does not involve $\beta$, but it will be used when $\beta$ is large enough.
\begin{lemma}\label{SecPureLem1}
If $(p-1)p\alpha<1$ and the function $f_1$ satisfies $f_1\circ T = -f_1$ then
\begin{equation}
\mathbb{E} \Bigl| \frac{\mathbb{E}' f_1(s_1,s_2)}{\mathbb{E}' f_2(s_1,s_2)}\Bigr| = 0.
\label{EqSecPureLem1}
\end{equation}
\end{lemma}
To see that (\ref{EqSecPureLem1}) implies (\ref{EqPure}) with $\ell'=2$, take $n=1$, $f_2=1$ and $f_1 = \frac{1}{2}(\sigma_1^1 -\sigma_1^2)\prod_{(i,\ell)\in C} \sigma_i^\ell.$
\medskip
\noindent
\textbf{Proof.}
By (\ref{Compfs}), the function $f_2$ on $\{-1,1\}^{m\times q}$ is strictly separated from $0$, so we can use (\ref{SCinFk}) with $k=2$ and $F(a_1,a_2)=a_1/a_2$ to get
\begin{equation}
\mathbb{E} \Bigl| \frac{\mathbb{E}' f_1(s_1,s_2)}{\mathbb{E}' f_2(s_1,s_2)}\Bigr|
=
\mathbb{E} \Bigl| \frac{\mathbb{E}' {\rm Av} f_1({\varepsilon}, s_2) {\cal E}({\varepsilon})}{\mathbb{E}' {\rm Av} f_2({\varepsilon}, s_2) {\cal E}({\varepsilon})}\Bigr|.
\label{Lem1cavity}
\end{equation}
Recall that ${\rm Av}$ is the average over ${\varepsilon} = ({\varepsilon}_i^\ell)_{i\leq n,\ell\leq q} \in \{-1,1\}^{n\times q}$ and
\begin{equation}
{\cal E}({\varepsilon}) = \prod_{\ell\leq q} \exp \sum_{i\leq n} A_{i,\ell}({\varepsilon}_i^\ell),\,\mbox{ where }\,
A_{i,\ell}({\varepsilon}_i^\ell) = \sum_{k\leq \pi_i(\alpha p)} \theta_{k,i}(s_{1,i,k}^\ell,\ldots, s_{p-1,i,k}^\ell, {\varepsilon}_i^\ell).
\label{DefE}
\end{equation}
For a moment, let us fix all the random variables $\pi_i(\alpha p)$ and $\theta_{i,k}$ and let $r := \sum_{i\leq n} \pi_i(\alpha p).$ Observe right away that if $r=0$ then ${\cal E}({\varepsilon}) = 1$ and
\begin{equation}
{\rm Av} f_1({\varepsilon}, s_2) {\cal E}({\varepsilon}) = {\rm Av} f_1({\varepsilon}, s_2) =0.
\label{caser0}
\end{equation}
This is because the average ${\rm Av}$ does not change if we switch the coordinates $({\varepsilon}_1^1,\ldots, {\varepsilon}_n^1)$ with $({\varepsilon}_1^2,\ldots, {\varepsilon}_n^2)$ (in other words, just rename the coordinates) and, by assumption,
$$
{\rm Av} f_1({\varepsilon}, s_2) = {\rm Av} \bigl(f_1({\varepsilon}, s_2)\circ T\bigr) = -{\rm Av} f_1({\varepsilon}, s_2).
$$
Now, let us denote the set of all triples $(j,i,k)$ that appear as subscripts in (\ref{DefE}) by
\begin{equation}
J = \bigl\{ (j,i,k) \, :\, j\leq p-1, i\leq n, k\leq \pi_i(\alpha p)\bigr\}.
\label{setJdef}
\end{equation}
If we denote by $\tilde{s}_1 = (s_e^\ell)_{e\in J, \ell\leq q}$ all the coordinates of the array $s$ that appear in ${\cal E}({\varepsilon})$ then, for $r\geq 1$, we can think of the averages on the right hand side of (\ref{Lem1cavity}) as functions of $s_2$ and $\tilde{s}_1$,
\begin{equation}
\tilde{f}_j = \tilde{f}_j(\tilde{s}_1,s_2) := {\rm Av} f_j({\varepsilon}, s_2) {\cal E}({\varepsilon}).
\label{tildefj}
\end{equation}
Even though $s_2$ and $\tilde{s}_1$ are random variables, for simplicity of notation, here we think of them also as variables of the functions $\tilde{f}_j$. First of all, since $|f_1|\leq f_2$,
$$
|\tilde{f}_1 | \leq {\rm Av} |f_1({\varepsilon}, s_2)| {\cal E}({\varepsilon})
\leq {\rm Av} f_2({\varepsilon}, s_2) {\cal E}({\varepsilon}) = |\tilde{f}_2|.
$$
Similarly to $T$, let $\tilde{T}$ now be the map that switches the vectors of spins $(s_e^1)_{e\in J}$ and $(s_e^2)_{e\in J}$ in $\tilde{s}_1$ corresponding to the first and second replica. Let us show that $\tilde{f}_1 \circ \tilde{T} = -\tilde{f}_1.$ First, we write
$$
\tilde{f}_1 \circ \tilde{T} = {\rm Av} \bigl(f_1({\varepsilon}, s_2)\, ({\cal E}({\varepsilon})\circ \tilde{T}) \bigr).
$$
As above, we will use that the average ${\rm Av}$ does not change if we switch the coordinates $({\varepsilon}_1^1,\ldots, {\varepsilon}_n^1)$ with $({\varepsilon}_1^2,\ldots, {\varepsilon}_n^2)$, so
$$
\tilde{f}_1 \circ \tilde{T} = {\rm Av} \bigl( (f_1({\varepsilon}, s_2)\circ T) ({\cal E}({\varepsilon})\circ \tilde{T} T) \bigr).
$$
By assumption, $f_1\circ T = - f_1$ and it remains to notice that ${\cal E}({\varepsilon})\circ \tilde{T} T = {\cal E}({\varepsilon}),$ because $ \tilde{T} T$ simply switches all the terms $A_{i,1}$ and $A_{i,2}$ in the definition of ${\cal E}({\varepsilon})$. We showed that (\ref{Lem1cavity}) can be rewritten as
\begin{equation}
\mathbb{E} \Bigl| \frac{\mathbb{E}' f_1(s_1,s_2)}{\mathbb{E}' f_2(s_1,s_2)}\Bigr|
=
\mathbb{E} \Bigl| \frac{\mathbb{E}' \tilde{f}_1(\tilde{s}_1, s_2)}{\mathbb{E}' \tilde{f}_2(\tilde{s}_1, s_2)}\Bigr|,
\label{Lem1cavity2}
\end{equation}
and, conditionally on $\pi_i(\alpha p)$ and $\theta_{i,k}$, the pair of functions $\tilde{f}_1, \tilde{f}_2$ satisfies the same properties as $f_1, f_2$. The only difference is that now $n$ is replaced by the cardinality of the set $J$ in (\ref{setJdef}), equal to $(p-1)r$. For a fixed $n$, let us denote by $D(n)$ the supremum of the left hand side of (\ref{Lem1cavity}) over $m\geq n$ and all choices of functions $f_1, f_2$ with the required properties. Then, the equation (\ref{Lem1cavity2}) implies (first, integrating the right hand side conditionally on all $\pi_i(\alpha p)$ and $\theta_{i,k}$)
\begin{equation}
D(n)\leq \mathbb{E} D((p-1)r) = \mathbb{E} D\bigl((p-1)\pi(n \alpha p)\bigr),
\label{dtodn}
\end{equation}
where $\pi(n\alpha p) := r = \sum_{i\leq n} \pi_i(\alpha p)$ is a Poisson random variable with the mean $n\alpha p.$ Recall that, by (\ref{caser0}), $\tilde{f}_1=0$ when $r=0$, so we can set $D(0)=0$. Also, the assumption $|f_1|\leq f_2$ gives that $D(n)\leq 1$ and, thus, $D(n)\leq n.$ Then, (\ref{dtodn}) implies
$$
D(n)\leq \mathbb{E} (p-1)\pi(n \alpha p) = (p-1)p\alpha n.
$$
Using (\ref{dtodn}) repeatedly, we get, by induction on $j\geq 1$, that $D(n)\leq \bigl((p-1)p\alpha\bigr)^j n$. By assumption, $(p-1)p\alpha<1$, so letting $j\to\infty$ proves that $D(n)=0$ for all $n$. This finishes the proof.
\qed
\medskip
\medskip
\noindent
For small values of $\beta$, we will give a slightly different argument, following Section 6.2 in \cite{SG2}.
\begin{lemma}\label{SecPureLem2}
In the notation of Lemma \ref{SecPureLem1}, suppose that $n=1$ and
\begin{equation}
(p-1)p \alpha \beta \exp\bigl( 2\beta +\alpha p (e^{2\beta} - 1)\bigr) <1.
\label{SecPureLem2Eq}
\end{equation}
If $f_1\circ T = -f_1$ then (\ref{EqSecPureLem1}) still holds.
\end{lemma}
\textbf{Proof.} The first part of the proof proceeds exactly the same way as in Lemma \ref{SecPureLem1}, and we obtain (\ref{Lem1cavity2}) for the functions $\tilde{f}_1, \tilde{f}_2$ defined in (\ref{tildefj}). Since $n=1$, we can rewrite (\ref{DefE}) as
\begin{equation}
{\cal E}({\varepsilon}) = \prod_{\ell\leq q} \exp A_{\ell}({\varepsilon}_1^\ell),\,\mbox{ where }\,
A_{\ell} = \sum_{k\leq \pi_1(\alpha p)} \theta_{k}(s_{1,k}^\ell,\ldots, s_{p-1,k}^\ell, {\varepsilon}_1^\ell),
\label{DefE2}
\end{equation}
and the set (\ref{setJdef}) now becomes
\begin{equation}
J = \bigl\{ (j,k) \, :\, j\leq p-1, k\leq \pi_1(\alpha p)\bigr\}.
\label{setJdef2}
\end{equation}
Its cardinality is $(p-1)r$, where $r= \pi_1(\alpha p).$ Even though we showed that $\tilde{f}_1 \circ \tilde{T} = -\tilde{f}_1$, we cannot draw any conclusions yet since the map $T$ switches only one spin in the first and second replicas, while $\tilde{T}$ switches $(p-1)r$ spins $(s_e^1)_{e\in J}$ and $(s_e^2)_{e\in J}$ in $\tilde{s}_1$, of course, conditionally on $\pi_1(\alpha p)$ and $\theta_{k}$. We will decompose $\tilde{f}_1$ into the sum $\tilde{f}_1 = \sum_{e\in J} \tilde{f}_{e},$ where each $\tilde{f}_e$ satisfies $\tilde{f}_e \circ \tilde{T}_e = -\tilde{f}_e$ with some map $\tilde{T}_e$ that switches $s_e^1$ and $s_e^2$ only. We begin by writing
$$
\tilde{f}_1 = \frac{1}{2}\bigl(\tilde{f}_1 - \tilde{f}_1\circ \tilde{T}\bigr)
=\frac{1}{2}\Bigl(\tilde{f}_1 - \tilde{f}_1\circ \prod_{e\in J}\tilde{T}_e\Bigr).
$$
If we order the set $J$ by some linear order $\leq$ then we can expand this into a telescopic sum,
$$
\tilde{f}_1 =\sum_{e\in J} \frac{1}{2}\Bigl(\tilde{f}_1 \circ \prod_{e'<e}\tilde{T}_{e'}
- \tilde{f}_1\circ \prod_{e'\leq e}\tilde{T}_{e'}\Bigr).
$$
Then we simply define
$$
\tilde{f}_{e}
:=
\frac{1}{2}\Bigl(\tilde{f}_1 \circ \prod_{e'<e}\tilde{T}_{e'}
- \tilde{f}_1\circ \prod_{e'\leq e}\tilde{T}_{e'}\Bigr)
$$
and notice that $\tilde{f}_e \circ \tilde{T}_e = -\tilde{f}_e$, since $\tilde{T}_e\tilde{T}_e$ is the identity. Equation (\ref{Lem1cavity2}) implies
\begin{equation}
\mathbb{E} \Bigl| \frac{\mathbb{E}' f_1(s_1,s_2)}{\mathbb{E}' f_2(s_1,s_2)}\Bigr|
\leq
\mathbb{E} \sum_{e\in J} \Bigl| \frac{\mathbb{E}' \tilde{f}_e(\tilde{s}_1, s_2)}{\mathbb{E}' \tilde{f}_2(\tilde{s}_1, s_2)}\Bigr|.
\label{Lem1cavity3}
\end{equation}
We keep the sum inside the expectation because the set $J$ is random. Recalling the definition of $\tilde{f}_j$ in (\ref{tildefj}), we can write (for simplicity of notation, we will write ${\cal E}$ instead of ${\cal E}({\varepsilon})$ from now on)
$$
\tilde{f}_{e}(\tilde{s}_1, s_2)
=
\frac{1}{2}{\rm Av} \Bigl({f}_1({\varepsilon},s_2) \Bigl({\cal E}\circ \prod_{e'<e}\tilde{T}_{e'}
- {\cal E}\circ \prod_{e'\leq e}\tilde{T}_{e'} \Bigr)\Bigr).
$$
All the maps $\tilde{T}_{e}$ switch coordinates only in the first and second replica. This means that if we write ${\cal E}$ defined in (\ref{DefE2}) as ${\cal E} = {\cal E}' {\cal E}''$ where
$$
{\cal E}' =\exp (A_{1}+A_2),\,\,
{\cal E}'' = \prod_{3\leq \ell\leq q} \exp A_{\ell}
$$
then
\begin{equation}
\tilde{f}_{e}(\tilde{s}_1, s_2)
=
\frac{1}{2}{\rm Av} \Bigl({f}_1({\varepsilon},s_2) {\cal E}'' \Bigl({\cal E}'\circ \prod_{e'<e}\tilde{T}_{e'}
- {\cal E}'\circ \prod_{e'\leq e}\tilde{T}_{e'} \Bigr)\Bigr).
\label{fstwoEs}
\end{equation}
If $e=(j,k)$ then the terms in the last difference only differ in the term $\theta_{k}(s_{1,k}^\ell,\ldots, s_{p-1,k}^\ell, {\varepsilon}_1^\ell).$ Since $\theta_k \in [-\beta, 0]$ and $A_1+A_2\leq 0$, we can use that $|e^x-e^y|\leq |x-y|$ for $x, y\leq 0$ to get that
$$
\Bigl|{\cal E}'\circ \prod_{e'<e}\tilde{T}_{e'}
- {\cal E}'\circ \prod_{e'\leq e}\tilde{T}_{e'} \Bigr|
\leq 2\beta.
$$
Therefore, from (\ref{fstwoEs}) we obtain
$$
|\tilde{f}_{e}(\tilde{s}_1, s_2)|
\leq \beta {\rm Av} \bigl( |{f}_1({\varepsilon},s_2)| {\cal E}''\bigr)
\leq \beta {\rm Av} \bigl( {f}_2({\varepsilon},s_2) {\cal E}''\bigr).
$$
Similarly, using that $A_1+A_2\in [-2\beta \pi_1(\alpha p),0]$ we get that
$$
\tilde{f}_{2}(\tilde{s}_1, s_2) = {\rm Av} \bigl( f_2({\varepsilon},s_2) {\cal E} \bigr) = {\rm Av}\bigl( f_2({\varepsilon},s_2) {\cal E}' {\cal E}''\bigr)
\geq \exp (-2\beta \pi_1(\alpha p)) {\rm Av} \bigl( f_2({\varepsilon},s_2) {\cal E}''\bigr),
$$
and together the last two inequalities yield
\begin{equation}
|\tilde{f}_{e}(\tilde{s}_1, s_2)|
\leq
\beta \exp (2\beta \pi_1(\alpha p)) \tilde{f}_{2}(\tilde{s}_1, s_2).
\label{almthr}
\end{equation}
Let $D$ be the supremum of the left hand side of (\ref{Lem1cavity3}) over all pairs of functions $f_1,f_2$ such that $|f_1|\leq f_2$ and $f_1\circ T = -f_1$ under switching one coordinate in the first and second replicas. Then conditionally on $\pi_1(\alpha p)$ and the randomness of all $\theta_k$, each pair $\tilde{f}_e, \tilde{f}_2$ of the right hand side of (\ref{Lem1cavity3}) satisfies (\ref{almthr}), and we showed above that $\tilde{f}_e \circ \tilde{T}_e = -\tilde{f}_e$ under switching one coordinate in the first and second replicas. Therefore, (\ref{Lem1cavity3}) implies that
\begin{equation}
D \leq D\, \mathbb{E} \sum_{e\in J} \beta \exp (2\beta \pi_1(\alpha p))
= D \beta (p-1) \mathbb{E} \pi_1(\alpha p) \exp (2\beta \pi_1(\alpha p)).
\label{almthr2}
\end{equation}
Even though, formally, this computation was carried out in the case when $\pi_1(\alpha p)\geq 1$, it is still valid when $\pi_1(\alpha p)=0$ because of (\ref{caser0}). Finally, since $\pi_1(\alpha p)$ has the Poisson distribution with the mean $\alpha p$,
$$
\mathbb{E} \pi_1(\alpha p) \exp (2\beta \pi_1(\alpha p))
=
\sum_{k\geq 0} k e^{2\beta k}\frac{(\alpha p)^k}{k!} e^{-\alpha p}
= \alpha p \exp\bigl( 2\beta +\alpha p (e^{2\beta} - 1)\bigr).
$$
The condition (\ref{SecPureLem2Eq}) together with (\ref{almthr2}), obviously, implies that $D=0$ and this finishes the proof.
\qed
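As a quick numerical sanity check of the Poisson identity used in the last display of the proof, one can compare it with a Monte Carlo average; the parameter values below are arbitrary and the tolerance is deliberately loose.
\begin{verbatim}
import numpy as np

# Check E[pi * exp(2*beta*pi)] = alpha*p*exp(2*beta + alpha*p*(e^{2 beta}-1))
# for pi ~ Poisson(alpha*p); parameters are arbitrary illustrative choices.
rng = np.random.default_rng(0)
alpha, p, beta = 0.2, 3, 0.5
pi = rng.poisson(alpha * p, size=10**6)
mc = np.mean(pi * np.exp(2 * beta * pi))
exact = alpha * p * np.exp(2 * beta + alpha * p * (np.exp(2 * beta) - 1))
assert abs(mc - exact) / exact < 0.05
\end{verbatim}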
\medskip
\medskip
\noindent
To finish the proof of Theorem \ref{ThPure}, it remains to show that the region (\ref{region}) is contained in the union of the two regions in the preceding lemmas.
\begin{lemma}
If (\ref{region}) holds then either $p(p-1)\alpha <1$ or (\ref{SecPureLem2Eq}) holds.
\end{lemma}
\textbf{Proof.}
If $\beta\geq 1/4$ then (\ref{region}) gives $p(p-1)\alpha < 1/(4\beta) \leq 1.$ Now, suppose that $\beta\leq 1/4$ and $p(p-1)\alpha \beta<1/4$. First of all, we can bound the left hand side of (\ref{SecPureLem2Eq}) by
$$
(p-1)p \alpha \beta \exp\bigl( 2\beta +\alpha p (e^{2\beta} - 1)\bigr)
<
\frac{1}{4}\exp\bigl( 2\beta +\alpha p (e^{2\beta} - 1)\bigr).
$$
Using that $e^{2\beta} - 1 \leq \sqrt{e} 2\beta$ for $\beta\leq 1/4$ and $p\alpha\beta<1/4$, we can bound the right hand side by
$$
\frac{1}{4}\exp\Bigl(\frac{1}{2} +2\sqrt{e}p \alpha \beta \Bigr)
\leq
\frac{1}{4}\exp\Bigl(\frac{1}{2} +\frac{1}{2}\sqrt{e} \Bigr)\approx 0.94 <1,
$$
and this finishes the proof.
\qed
\section{Inside the pure state}\label{SecThFE}
Suppose now that the system is in a pure state and, for each $\mu\in {\cal M}_{inv}$, the corresponding function ${\bar{\sigma}}(w,u,v)$ does not depend on the second coordinate, in which case we will write it as ${\bar{\sigma}}(w,v)$. Let us begin by proving Theorem \ref{ThFE}.
\medskip
\noindent
\textbf{Proof of Theorem \ref{ThFE}.}
When the system is in a pure state, we can rewrite the functional ${\cal P}(\mu)$ in (\ref{CalP}) as follows. First of all, since the expectation $\mathbb{E}'$ is now only in the random variables $x$, which are independent for all spin and replica indices, we can write
$$
\mathbb{E}' \exp \theta(s_1,\ldots,s_p)
=1+(e^{-\beta}-1)\prod_{1\leq i\leq p} \frac{1+J_i {\bar{\sigma}}_i}{2}
=\exp \theta({\bar{\sigma}}_1,\ldots,{\bar{\sigma}}_p),
$$
where ${\bar{\sigma}}_i=\mathbb{E}'s_i = \mathbb{E}' \sigma(w,u,v_i,x_{i}) = \mathbb{E}' {\bar{\sigma}}(w,u,v_i) = {\bar{\sigma}}(w,v_i).$ Similarly,
$$
\mathbb{E}' {\rm Av} \exp \sum_{k\leq \pi(p \alpha)} \theta_{k}(s_{1,k},\ldots,s_{p-1,k},{\varepsilon})
=
{\rm Av} \exp \sum_{k\leq \pi(p \alpha)} \theta_{k}({\bar{\sigma}}_{1,k},\ldots,{\bar{\sigma}}_{p-1,k},{\varepsilon}),
$$
where ${\bar{\sigma}}_{i,k}={\bar{\sigma}}(w,v_{i,k})$. Therefore, the functional ${\cal P}(\mu)$ in (\ref{CalP}) can be written as
\begin{align}
{\cal P}(\mu)
= &\,
\log 2 + \mathbb{E} \log {\rm Av} \exp \sum_{k\leq \pi(p \alpha)} \theta_{k}({\bar{\sigma}}_{1,k},\ldots,{\bar{\sigma}}_{p-1,k},{\varepsilon})
\nonumber
\\
&- (p-1)\alpha\, \mathbb{E}\theta({\bar{\sigma}}_1,\ldots,{\bar{\sigma}}_p).
\label{PmuAgain}
\end{align}
Replacing the average over $w\in[0,1]$ by the infimum, this is obviously bigger than
$$
\inf_{w\in [0,1]} \mathbb{E}_v\Bigl(
\log 2 + \log {\rm Av} \exp \sum_{k\leq \pi(p \alpha)} \theta_{k}({\bar{\sigma}}_{1,k},\ldots,{\bar{\sigma}}_{p-1,k},{\varepsilon})
- (p-1)\alpha \theta({\bar{\sigma}}_1,\ldots,{\bar{\sigma}}_p)
\Bigr),
$$
where $\mathbb{E}_v$ is the expectation only in the random variables $(v_i)$ and $(v_{i,k})$. For a fixed $w$, the random variables ${\bar{\sigma}}_i$ and ${\bar{\sigma}}_{i,k}$ are i.i.d. and, comparing with (\ref{PPure}), this infimum is bigger than $\inf_{\zeta\in \Pr[-1,1]} {\cal P}(\zeta)$. Since this lower bound holds for all $\mu\in {\cal M}_{inv}$, Theorem \ref{ThFEG} then implies that
$$
\lim_{N\to\infty} F_N \geq \inf_{\zeta\in \Pr[-1,1]} {\cal P}(\zeta).
$$
The upper bound follows from the Franz-Leone theorem \cite{FL} by considering functions ${\bar{\sigma}}(w,u,v)$ that depend only on the coordinate $v$ (see Section 2.3 in \cite{Pspins}, and also \cite{PT, SG2}). As we mentioned above, it was observed in Theorem 6.5.1 in \cite{SG2} that the upper bound holds for all $p\geq 2$.
\qed
\medskip
\medskip
\noindent
Let us also write down one consequence of the cavity equations (\ref{SC}) for a system in a pure state. Again, let ${\bar{\sigma}}_i={\bar{\sigma}}(w,v_i)$ and denote ${\bar{\sigma}}_{j,i,k}={\bar{\sigma}}(w,v_{j,i,k})$. Let
\begin{equation}
{\bar{\sigma}}_{i}' = \frac{{\rm Av}\, {\varepsilon} \exp A_i({\varepsilon})}{{\rm Av} \exp A_i({\varepsilon})},
\label{DefTrI}
\end{equation}
where
\begin{equation}
A_i({\varepsilon}) = \sum_{k\leq \pi_i(\alpha p)} \theta_{k,i}({\bar{\sigma}}_{1,i,k},\ldots,{\bar{\sigma}}_{p-1,i,k}, {\varepsilon}).
\label{DefTrI2}
\end{equation}
We will now show that the cavity equations (\ref{SC}) imply the following,
\begin{lemma}\label{LemCP}
If the system is in a pure state, for example in the region (\ref{region}), then
\begin{equation}
\bigl({\bar{\sigma}}_{i} \bigr)_{i\geq 1} \stackrel{d}{=} \bigl({\bar{\sigma}}_{i}'\bigr)_{i\geq 1}.
\label{SCPure}
\end{equation}
\end{lemma}
\textbf{Proof.} This can be seen as follows. Take $r=0$ and $n=m$ in (\ref{SC}), so all coordinates will be viewed as cavity coordinates. Since the expectation $\mathbb{E}'$ is now only in the random variables $x$, which are independent for all spin and replica indices, as in the proof of Theorem \ref{ThFE} we can write (slightly abusing notation)
$$
U_\ell = {\rm Av} \prod_{i\in C_\ell} {\varepsilon}_i \exp \sum_{i\leq n} A_i({\varepsilon}_i) \,\,\mbox{ and }\,\,
V = {\rm Av} \exp \sum_{i\leq n} A_i({\varepsilon}_i),
$$
where $A_i({\varepsilon})$ are now given by (\ref{DefTrI2}) instead of (\ref{Ai}), i.e. after averaging the random variables $x$. Therefore, $U_\ell/V = \prod_{i\in C_\ell} {\bar{\sigma}}_i'$. Since $\mathbb{E}' \prod_{i\in C_\ell} s_i = \prod_{i\in C_\ell} {\bar{\sigma}}_i$, (\ref{SC}) becomes
$$
\mathbb{E} \prod_{\ell\leq q} \prod_{i\in C_\ell} {\bar{\sigma}}_i
=
\mathbb{E} \prod_{\ell\leq q} \prod_{i\in C_\ell} {\bar{\sigma}}_i'.
$$
Choosing $q$ and the sets $C_\ell$ so that each index $i$ appears $n_i$ times gives $\mathbb{E} \prod_{i\leq n} {\bar{\sigma}}_i^{n_i} = \mathbb{E} \prod_{i\leq n} {\bar{\sigma}}_i^{\prime\, n_i}$ and this finishes the proof.
\qed
\section{Proof of Theorem \ref{ThFE2}} \label{SecThFE2}
In this section we will prove Theorem \ref{ThFE2} and we begin with the following key estimate. For a moment, we fix the randomness of $(\theta_k)$ and think of $T_r$ defined in (\ref{DefTr}) as a nonrandom function.
\begin{lemma}\label{LemTLip}
The function $T_r$ defined in (\ref{DefTr}) satisfies
\begin{equation}
\bigl| T_r\bigl((\sigma_{j,k})\bigr) - T_r\bigl((\sigma_{j,k}')\bigr)\bigr|
\leq
\frac{1}{2}(e^\beta -1)\sum_{j, k} |\sigma_{j,k} - \sigma_{j,k}'|.
\label{TLip}
\end{equation}
\end{lemma}
\textbf{Proof.}
Let us compute the derivative of $T_r$ with respect to $\sigma_{1,1}.$ If we denote the derivative of
$$
\theta_1(\sigma_{1,1},\ldots,\sigma_{p-1,1}, {\varepsilon})
=
\log\Bigl(1+(e^{-\beta}-1)\frac{1+J_{p,1}{\varepsilon}}{2} \prod_{1\leq j\leq p-1} \frac{1+J_{j,1}\sigma_{j,1}}{2} \Bigr)$$
with respect to $\sigma_{1,1}$ by
$$
\theta_1' = \exp(-\theta_1) (e^{-\beta}-1)\frac{J_{1,1}}{2} \frac{1+J_{p,1}{\varepsilon}}{2} \prod_{2\leq j\leq p-1} \frac{1+J_{j,1}\sigma_{j,1}}{2}
$$
then
$$
\frac{\partial T_r}{\partial \sigma_{1,1}}
=
\frac{{\rm Av}\, {\varepsilon} \theta_1'\exp A({\varepsilon})}{{\rm Av} \exp A({\varepsilon})}
-
\frac{{\rm Av}\, {\varepsilon} \exp A({\varepsilon})}{{\rm Av} \exp A({\varepsilon})}\,
\frac{{\rm Av}\, \theta_1' \exp A({\varepsilon})}{{\rm Av} \exp A({\varepsilon})}.
$$
Since $\theta_1\in[-\beta,0]$, we see that $J_{1,1}\theta_1'\in[(1-e^{\beta})/2,0]$ and
$$
\Bigl| \theta_1' - \frac{{\rm Av}\, \theta_1' \exp A({\varepsilon})}{{\rm Av} \exp A({\varepsilon})}\Bigr|
\leq \frac{1}{2}(e^\beta-1),
$$
which implies that $|{\partial T_r}/{\partial \sigma_{1,1}}| \leq (e^\beta-1)/2$. The same, obviously, holds for all partial derivatives and this finishes the proof.
\qed
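The Lipschitz bound (\ref{TLip}) can also be checked numerically by finite differences; the sketch below reuses \texttt{theta}, \texttt{T\_r}, \texttt{p}, \texttt{beta} and \texttt{rng} from the population-dynamics sketch given after Theorem \ref{ThFE2}, and the values of \texttt{r} and \texttt{h} are arbitrary.
\begin{verbatim}
import numpy as np

# Finite-difference check of (TLip): perturbing one coordinate changes
# T_r by at most (e^beta - 1)/2 per unit change. Reuses theta, T_r, p,
# beta and rng from the population-dynamics sketch; r, h are arbitrary.
r, h = 4, 1e-6
sigmas = [rng.uniform(-1, 1, size=p - 1) for _ in range(r)]
Js = [rng.choice([-1, 1], size=p) for _ in range(r)]
base = T_r(sigmas, Js)
sigmas[0][0] += h
bumped = T_r(sigmas, Js)
assert abs(bumped - base) / h <= (np.exp(beta) - 1) / 2 + 1e-3
\end{verbatim}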
\medskip
\noindent
\emph{Step 1.}
Let us first show that, under (\ref{region2}), there exists a unique fixed point $T(\zeta)=\zeta$. The claim will follow from the Banach fixed point theorem once we show that the map $T$ is a contraction with respect to the Wasserstein metric $W(\mathbb{P},\mathbb{Q})$ on $\Pr[-1,1]$. This metric is defined by
\begin{equation}
W(\mathbb{P},\mathbb{Q}) = \inf \mathbb{E}|z^1-z^2|,
\end{equation}
where the infimum is taken over all pairs $(z^1,z^2)$ with the distribution in the family $M(\mathbb{P},\mathbb{Q})$ of measures on $[-1,1]^2$ with marginals $\mathbb{P}$ and $\mathbb{Q}$. It is well known that this infimum is achieved on some measure $\mu\in M(\mathbb{P},\mathbb{Q})$. Let $(z_{j,k}^1,z_{j,k}^2)$ be i.i.d. copies for $j\leq p-1$ and $k\geq 1$ with the distribution $\mu$. By (\ref{TLip}) and Wald's identity,
\begin{align*}
&
\mathbb{E} \bigl| T_{\pi(\alpha p)}\bigl((z_{j,k}^1)\bigr) - T_{\pi(\alpha p)} \bigl((z_{j,k}^2)\bigr)\bigr|
\leq
\frac{1}{2}(e^\beta-1) \mathbb{E} \sum_{j\leq p-1, k\leq \pi(\alpha p)} |z_{j,k}^1 - z_{j,k}^2|
\\
& = \frac{1}{2}(e^\beta-1) (p-1) p \alpha\, \mathbb{E} |z_{1,1}^1 - z_{1,1}^2|
=\frac{1}{2}(e^\beta-1) (p-1) p \alpha W(\mathbb{P},\mathbb{Q}).
\end{align*}
On the other hand, by the definition (\ref{DefT}), the pair of random variables on the left hand side,
$$
\Bigl( T_{\pi(\alpha p)}\bigl((z_{j,k}^1)\bigr), T_{\pi(\alpha p)} \bigl((z_{j,k}^2) \bigr) \Bigr),
$$
has the distribution in $M(T(\mathbb{P}), T(\mathbb{Q}))$ and, therefore,
$$
W\bigl(T(\mathbb{P}), T(\mathbb{Q})\bigr) \leq \frac{1}{2}(e^\beta-1)(p-1) p \alpha W(\mathbb{P},\mathbb{Q}).
$$
The condition (\ref{region2}) implies that the map $T$ is a contraction with respect to $W$. Since the space $(\Pr[-1,1], W)$ is complete, this proves that $T$ has a unique fixed point $\zeta$.
\medskip
\noindent
\emph{Step 2.}
Now, suppose that both (\ref{region}) and (\ref{region2}) hold. Let $\zeta$ be the unique fixed point $T(\zeta)=\zeta$ and let ${\bar{\sigma}}(w,u,v)$ be the function corresponding to a measure $\mu\in {\cal M}_{inv}$ in the statement of Theorem \ref{ThFEG}. By Theorem \ref{ThPure}, we know that ${\bar{\sigma}}$ does not depend on $u$ and, therefore, ${\bar{\sigma}}(w,v)$ satisfies Lemma \ref{LemCP}. Recall that ${\bar{\sigma}}_i = {\bar{\sigma}}(w,v_i)$ and let $(z_i)_{i\geq 1}$ be i.i.d. random variables with the distribution $\zeta$. We will now show that
\begin{equation}
\bigl({\bar{\sigma}}_{i} \bigr)_{i\geq 1} \stackrel{d}{=} \bigl(z_{i}\bigr)_{i\geq 1},
\label{SCPureS}
\end{equation}
which together with (\ref{PmuAgain}) will imply that ${\cal P}(\mu) = {\cal P}(\zeta)$ for all $\mu\in {\cal M}_{inv}$, finishing the proof. (By the way, the fact that $({\bar{\sigma}}_{i})_{i\geq 1}$ are i.i.d. does not mean that the function ${\bar{\sigma}}(w,v)$ does not depend on $w$; it simply means that the distribution of $({\bar{\sigma}}_{i})_{i\geq 1}$ is independent of $w$.) To show (\ref{SCPureS}), we will again utilize the Wasserstein metric. For any $n\geq 1$, we will denote by $D(n)$ the Wasserstein distance between the distribution of $({\bar{\sigma}}_{i})_{i\leq n}$ and the distribution of $(z_{i})_{i\leq n}$ (equal to $\zeta^{\otimes n}$) with respect to the metric $d(x,y) = \sum_{i\leq n}|x_i-y_i|$ on $[-1,1]^n$. For any $r=(r_1,\ldots,r_n) \in\mathbb{N}^n$ (we assume now that $0\in\mathbb{N}$), let us denote
$$
p_r = \mathbb{P}\bigl( \pi_1(\alpha p)=r_1,\ldots, \pi_n(\alpha p) = r_n\bigr)= \prod_{i\leq n} \frac{(\alpha p)^{r_i}}{r_i!}e^{-\alpha p}.
$$
Since $\zeta = T(\zeta),$ recalling the definition of $T(\zeta)$ in (\ref{DefT}), we get
\begin{equation}
\zeta^{\otimes n} = T(\zeta)^{\otimes n} = \sum_{r\in \mathbb{N}^n} p_r\, \bigotimes_{i\leq n}
{\cal L}\Bigl(T_{r_i}\bigl((z_{j,k})_{j\leq p-1, k\leq r_i}\bigr)\Bigr),
\label{alm1}
\end{equation}
where the random variables $z_{j,k}$ are i.i.d. and have distribution $\zeta$. Next, similarly to (\ref{DefTr}), let us define
\begin{equation}
T_{i,r_i}\bigl((\sigma_{j,k})_{j\leq p-1, k\leq r_i}\bigr) = \frac{{\rm Av}\, {\varepsilon} \exp A_i({\varepsilon})}{{\rm Av} \exp A_i({\varepsilon})},
\label{DefTrIAg}
\end{equation}
where
$$
A_i({\varepsilon}) = \sum_{k\leq r_i} \theta_{k,i}(\sigma_{1,k},\ldots,\sigma_{p-1,k}, {\varepsilon}).
$$
In other words, $T_{i,r_i}$ is defined exactly as $T_{r_i}$, only in terms of independent copies $(\theta_{k,i})$ of $(\theta_k)$. Then, Lemma \ref{LemCP} and (\ref{DefTrI}) imply that
\begin{equation}
\bigl({\bar{\sigma}}_i\bigr)_{i\leq n} \stackrel{d}{=}
\sum_{r\in \mathbb{N}^n} p_r\,
{\cal L}\Bigl( \Bigl(T_{i,r_i}\bigl( ({\bar{\sigma}}_{j,i,k})_{j\leq p-1, k\leq r_i} \bigr) \Bigr)_{i\leq n} \Bigr).
\label{alm2}
\end{equation}
Using the fact that $T_{i,r_i}$ are copies of $T_{r_i}$ defined independently over $i$, we can rewrite (\ref{alm1}) by expressing the product measure as a distribution of a vector with independent coordinates,
\begin{equation}
\zeta^{\otimes n} = \sum_{r\in \mathbb{N}^n} p_r\,
{\cal L}\Bigl( \Bigl(T_{i,r_i}\bigl( (z_{j,i,k})_{j\leq p-1, k\leq r_i} \bigr) \Bigr)_{i\leq n} \Bigr),
\label{alm12}
\end{equation}
where the random variables $z_{j,i,k}$ are i.i.d. with the distribution $\zeta$. For a given $r\in \mathbb{N}^n$, let us denote by $\mathbb{P}_r$ and $\mathbb{Q}_r$ the laws on the right hand side of (\ref{alm2}) and (\ref{alm12}). Since the Wasserstein metric satisfies an obvious inequality for convex combinations of measures
\begin{equation}
W\Bigl(\sum_{r\in \mathbb{N}^n} p_r \mathbb{P}_r, \sum_{r\in \mathbb{N}^n} p_r \mathbb{Q}_r\Bigr)
\leq
\sum_{r\in \mathbb{N}^n} p_r W \bigl(\mathbb{P}_r,\mathbb{Q}_r \bigr),
\label{Wconvex}
\end{equation}
it remains to estimate the distance between $\mathbb{P}_r$ and $\mathbb{Q}_r$. By Lemma \ref{LemTLip},
$$
\sum_{i=1}^{n}
\Bigl|
T_{i,r_i}\bigl( ({\bar{\sigma}}_{j,i,k})_{j\leq p-1, k\leq r_i} \bigr) - T_{i,r_i}\bigl( (z_{j,i,k})_{j\leq p-1, k\leq r_i} \bigr)
\Bigr|
\leq
\frac{1}{2}(e^\beta-1)
\sum_{i=1}^{n} \sum_{k=1}^{r_i} \sum_{j=1}^{p-1} \bigl|{\bar{\sigma}}_{j,i,k} - z_{j,i,k}\bigr|.
$$
Choosing the vectors $({\bar{\sigma}}_{j,i,k})$ and $(z_{j,i,k})$ on the right hand side with the optimal joint distribution that achieves the infimum in the definition of Wasserstein distance and taking expectations proves that
$$
W \bigl(\mathbb{P}_r,\mathbb{Q}_r \bigr) \leq \frac{1}{2}(e^\beta-1) D\Bigl((p-1)\sum_{i\leq n} r_i\Bigr).
$$
Plugging this into (\ref{Wconvex}) and using (\ref{alm2}) and (\ref{alm12}) proves that
\begin{equation}
D(n) \leq \frac{1}{2}(e^\beta-1) \sum_{r\in \mathbb{N}^n} p_r D\Bigl((p-1)\sum_{i\leq n} r_i\Bigr)
= \frac{1}{2}(e^\beta-1) \mathbb{E} D\bigl( (p-1) \pi(\alpha p n)\bigr),
\label{Dner}
\end{equation}
where $\pi(\alpha p n)$ is a Poisson random variable with the mean $\alpha p n.$ We start with an obvious bound $D(n)\leq 2n$. Then, by induction on $j$, (\ref{Dner}) implies that
$$
D(n) \leq 2 \Bigl(\frac{1}{2}(e^\beta-1) (p-1) p \alpha\Bigr)^j n
$$
for all $j\geq 1$. Letting $j\to\infty$ proves that $D(n)=0$ for all $n$, since we assumed (\ref{region2}), and this finishes the proof.
\qed
A question raised in one of the first papers on the rotor-router walk (introduced originally under the name ``Eulerian walkers model'' \cite{PDDK}) concerned the shape and size of the cluster of sites visited by the walker till time $t$. The finite-time behavior of the rotor walk on an infinite lattice is a much more difficult problem than the recurrent rotor walk on a finite graph which admits a theoretical treatment due to the Abelian structure similar to that in the sandpile model \cite{Dhar} (see \cite{HLMPPW} for more references).
Heuristic arguments used in \cite{PDDK} showed that, given a uniform random initial configuration of rotors on the infinite square lattice,
the average cluster of visited sites is roughly a disc of radius $R$ which grows as $t^{1/3}$.
The problem of circular shape appears also in the model of rotor aggregation introduced by Propp \cite{LP05,LP07}.
In this model, $n$ particles starting at the origin perform rotor-router walk until reaching an unoccupied lattice site which then becomes occupied.
Levine and Peres \cite{LP07} proved that for $d$-dimensional lattice the asymptotic shape of the set of occupied sites is an Euclidean ball.
The shape of the cluster of sites visited by a single rotor walker is another problem. In contrast to the rotor aggregation, where each particle finishes its trajectory at the first unoccupied site, the single rotor walk continues its motion, permanently modifying the rotor configuration. Then an additional
essential problem arises concerning the number of returns of the rotor walk to the origin.
Recently, Florescu, Levine and Peres \cite{FLP} have proved that the number of distinct sites visited by the rotor-router walk on the $d$-dimensional lattice in $t$ steps is greater than or equal to $c\, t^{d/(d+1)}$, where $c$ is a positive constant.
Particularly for $d=2$, this statement gives a tight lower-bound of the law $t^{1/3}$ for the growth of the average linear size of a two-dimensional cluster, but leaves aside the question of its asymptotic form.
Kapri and Dhar \cite{Kapri} used extensive numerical simulations to study the shape of the rotor-router cluster formed by an Eulerian walk on a random background in two dimensions and presented evidence that it converges to a perfect circle for a large number of steps.
Further attempts to describe the structure of the cluster of visited sites were made in \cite{PPP1,PPP2}.
The motion of the rotor-router walker creates a sequence of loops of rotors and a set of labeled sites where the loops become closed.
In the case when the rule of clockwise rotations is applied to all rotors, the sequence of labeled sites forms a spiral-like structure which was conjectured to tend asymptotically to the {\it Archimedean spiral} in average \cite{PPP1,PPP2}.
The Archimedean property is consistent with the perfect circular asymptotic shape of the cluster of visited sites.
Also, it agrees with a scaling law for the number of visits to a site separated from the origin by distance $x$ for $N$ steps of the walker \cite{Kapri}.
While the circular shape of the rotor-router cluster is approved at least numerically, the fluctuations of this shape are more resistant to a clear description.
A standard approach to investigation of fluctuations of a growing surface implies a formulation in terms of the KPZ theory \cite{KPZ}.
Kapri and Dhar \cite{Kapri} calculated the width $W$ of the surface region of the rotor-router cluster on the square lattice to determine the saturation-width exponent $\alpha$ and the dynamical exponent $z$ in the KPZ scaling law
\begin{equation}
W\sim L^{\alpha}f_{KPZ}(t/L^z),
\label{KPZ}
\end{equation}
where $L$ is a characteristic length of the surface, and $f_{KPZ}(x)$ is the scaling function which behaves as $x^{\alpha/z}$ for $0 < x \ll 1$ and tends to 1 for $x \gg 1$ \cite{KPZ}. Since for the growing two-dimensional cluster of radius $R$, both time $t$ and length $L$ are proportional to $R$, the expected KPZ values $\alpha=1/2$ and $z=3/2$ lead to the asymptotic law
$W\sim R^{\gamma}=R^{1/3}$.
The exponent $\gamma$ obtained in \cite{Kapri} was estimated as $\gamma=0.40\pm0.06$. This value of $\gamma$ is consistent with the KPZ exponents, although it was difficult to control its limiting behavior. The poorly controlled influence of shape fluctuations can also be seen in the slow convergence of the spiral structure to the proposed Archimedean law \cite{PPP2}. A possible reason for these difficulties is the lack of a stationary state for fluctuations of the boundary of the radially growing cluster and the unrestricted growth of the boundary region $W$, which stands in contrast to the expected stationary regime for the infinitely long cylinder. Singha \cite{singha} compared surface fluctuations in growing clusters on the plane and those in the cylindrical geometry where the width $W$ tends to a constant. The comparison showed quite different behavior of fluctuations in the two geometries: the auto-correlation function tends with time to a non-zero value for the plane and decays to zero for the long cylinder.
An obvious advantage of the cylindrical geometry is the explicit separation of variables $L$ and $t$ in the scaling law (\ref{KPZ}) instead of their merging into the single variable $R$ in the plane geometry. This allows one to investigate the asymptotic behavior for large $L$ and large $t$ separately. As to detailed properties of the growing clusters, we may expect more regular behavior of the spiral structure, which should be converted into a regular helix in the cylindrical case.
In this paper, we consider the rotor-router walk on a semi-infinite cylinder of square lattice with a boundary of perimeter $L$.
Rotors are considered as arrows attached to each lattice site and directed toward one of four neighbors on the lattice.
Arrows at the boundary sites have three possible directions.
A particle called usually {\it chip}, performs a walk jumping from a site to its neighbor along the current direction of rotor in the site.
Arriving at a given site, the chip rotates the arrow 90 degrees clockwise and moves toward the site pointed to by the new direction of the arrow.
On the boundary, the arrow pointed clockwise with respect to the top of the cylinder turns 180 degrees and becomes pointed anticlockwise.
Imposing special initial directions of rotors on $L$ boundary sites and random initial directions in the remaining lattice we provide a uniform propagation of the cluster of visited sites downward from the top of the vertically oriented semi-infinite cylinder.
Then the asymptotic average boundary of the growing cluster is flat and the non-trivial shape problem vanishes.
The average width $W$ of the surface region evolves with time to its stationary value presumably by the KPZ law (\ref{KPZ}).
A check of the scaling form of $W(L,t)$ and determination of the related exponents is the first goal of the present paper.
Another goal is an analysis of the internal structure of the growing clusters. There are plenty of models demonstrating the local growth mechanism and kinetic roughening phenomena, for instance, random and ballistic deposition, the Eden model, RSOS models and polymer growth (see \cite{KPZ} for references).
The growth of the cluster of visited sites generated by the rotor walk has some peculiarities.
Florescu, Levine and Peres \cite{FLP} defined an {\it excursion} from the origin $o$ as the rotor walk started at $o$ and run until it returns to $o$ exactly once from each of the neighboring sites.
The cluster appearing at the end of each excursion consists of rotors oriented towards the boundary of cylinder.
These rotors are directed edges of a graph which is a forest of trees attached to the boundary sites.
Each excursion adds to the forest additional branches.
The accumulation of edges is accompanied by a coarse-grained accumulation of domains formed by loops of rotors, in particular the clockwise closed contours.
The latter process imposes a larger scale on the fluctuations of the growing surface and is responsible for the asymptotic values of the sought exponents.
Beside the clusters of visited sites, we consider spiral structures introduced in previous papers \cite{PPP1,PPP2}.
The site, where the chip completes a next clockwise contour, was called {\it label}.
We have noticed in \cite{PPP1,PPP2} that the sequence of labels forms a spiral structure in the case of plane geometry. After averaging over initial random configurations of rotors, the sequence approaches asymptotically the Archimedean spiral. In the cylinder geometry, the spiral should be converted into a cylindrical helix. The determination of its parameters and examination of
the convergence to the limiting form is the concluding part of the work.
\section{Definitions and basic theorems}
Let $G=(V,E)$ be an infinite directed graph whose vertices $v \in V$ are sites of a semi-infinite cylinder of square lattice with a boundary $B \subset V$ of perimeter $L$. Let $E_v$ be the set of outgoing edges of site $v$. It consists of four outgoing edges $E_v^0,E_v^1,E_v^2,E_v^3$ for bulk sites $v \in V\setminus B$ and of three outgoing edges $E_v^0,E_v^1,E_v^2$ for boundary sites $v \in B$. In other words, $\deg(v)=4$ for the bulk sites and $\deg(v)=3$ for the boundary sites.
A rotor configuration is defined as a collection of outgoing edges $\rho(v)\in E_v$ from every site of the lattice. We assume that elements of both the bulk and the boundary sets $E_v$ are ordered locally clockwise.
A chip-and-rotor state $(w,\rho)$ is a pair consisting of a vertex $w \in V$ which represents the location of the chip and a rotor configuration $\rho(v)$.
The rotor-router walk is a sequence of chip-and-rotor states $(w_0,\rho_0)$,$(w_1,\rho_1)$,$(w_2,\rho_2),\dots $.
At each step $(w_t,\rho_t) \rightarrow (w_{t+1}, \rho_{t+1})$ of discrete time $t$, the chip arriving to site $w=w_t$ changes the position of the rotor $\rho(w)=E^i_w$ to $E^j_w$, where the index $j$ is changed as $j=i+1 \pmod 4$ if the vertex $w$ is in the bulk and $j=i+1 \pmod 3$ if the vertex $w$ is on the boundary. Then, the chip moves to the neighboring site of $w$ pointed to by the new position of the rotor.
The motion of the chip is determined by the initial chip-and-rotor state $(w_0,\rho_0)$.
We denote boundary sites $v\in B$ by numbers $1,2,\dots,L$.
The initial rotor configuration $\rho_0(v)$ consists of specifically chosen boundary rotors $\rho_0(i)$, $i=1,2,\dots, L$ and the randomly chosen rotor configuration $\rho_0(v)$ for $v \in V\setminus B$.
A general aim is a characterization of rotor states $\rho_t$ for $t>0$ and description of clusters of sites visited by the rotor walk.
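For concreteness, the dynamics just described can be prototyped in a few lines. In the sketch below, sites are pairs $(x,y)$ with $x$ taken modulo $L$ and the boundary at $y=0$; the particular clockwise ordering of the outgoing edges and the purely random rotor background are illustrative conventions only and do not reproduce the special boundary configuration described above.
\begin{verbatim}
import random

# Minimal sketch of the rotor-router walk on a semi-infinite cylinder of
# circumference L; the edge ordering and the random initial rotors are
# illustrative conventions, not the specific setup used in the text.
L = 16
BULK_DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]   # a clockwise order
BOUNDARY_DIRS = [(1, 0), (0, 1), (-1, 0)]        # no edge above y = 0

rotor = {}       # site -> index of its current outgoing edge
visited = set()  # cluster of visited sites

def dirs(site):
    return BOUNDARY_DIRS if site[1] == 0 else BULK_DIRS

def step(chip):
    # rotate the rotor at `chip` clockwise, then follow it
    d = dirs(chip)
    i = rotor.get(chip, random.randrange(len(d)))  # random background
    i = (i + 1) % len(d)
    rotor[chip] = i
    visited.add(chip)
    dx, dy = d[i]
    return ((chip[0] + dx) % L, chip[1] + dy)

chip = (0, 0)
for t in range(10_000):
    chip = step(chip)
\end{verbatim}
The cluster of visited sites can then be read off from \texttt{visited}, and the returns to the boundary from the trajectory itself.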
Given a rotor state, we say that a group of rotors outgoing from sites $v_1,v_2, \dots, v_{n+1}$ forms a directed path if $v_i$ and $v_{i+1}$ are neighbors for
all $i=1,\dots,n$ and the rotor at $v_i$ is directed toward $v_{i+1}$. The directed path of rotors becomes a cycle if $v_1 = v_{n+1}$.
A shortest possible cycle consists of two adjacent sites $v_1$, $v_2$, which are connected by a pair of edges from $v_1$ to $v_2$ and back. We call such cycles {\it dimers} by analogy with lattice dimers covering two neighboring sites. A cycle formed by more than two edges is called a {\it contour}. If the rotors belonging to the contour are oriented clockwise, we call it a clockwise contour. The clockwise or anticlockwise directions depend on the orientation of the surface. Below, we fix the orientation of the surface by mapping the cylinder onto a two-dimensional annulus.
We use two technical tools for the analysis of the structure of growing clusters of visited sites: a selection of clockwise contours among the rotor configurations \cite{PPP1}, and a representation of the recurrent rotor-router walk as a sequence of excursions \cite{FLP}. The special role of clockwise contours follows from a property of the chip-and-rotor states proved in \cite{PPP1} and called {\it weak reversibility}. A precise formulation of this property is given as Theorem 2 in \cite{PPP1}. Here we need a reduced form of Theorem 2 and its corollary, which can be formulated as follows.
{\it Proposition 1}. Let $C$ be a clockwise contour on a planar graph consisting of vertices $w_1,w_2,\dots,w_n$ ordered anticlockwise and enclosing an arbitrary rotor configuration that allows a recurrent rotor walk. The rotor-router operation is applied to the chip at $w_1\in C$ and continues until the moment when the chip returns to $w_1$ and the rotor $\rho(w_1)$ on the contour becomes oriented anticlockwise. Then all rotors on $C$ become oriented anticlockwise, and the moments $t_1,t_2,\dots,t_n$ when the rotors at $w_1,w_2,\dots,w_n$ become anticlockwise for the first time are ordered as $t_1 < t_{2} < \dots < t_n$.
{\it Remark}. The semi-infinite cylinder can be considered topologically as a planar annulus. Then, the recurrence of the rotor configurations in Proposition 1 guarantees that any rotor walk started at a contour enveloping the cylinder returns to the contour.
The proof of the first part of Proposition 1 coincides with that of Theorem 2 in \cite{PPP1}, while the second part is proved as a corollary in \cite{PPP1}.
\begin{figure}[!ht]
\includegraphics[width=130mm]{Loops.eps}
\vspace{-1mm}
\caption{\label{loops} (a) The chip trajectory between vertices $x$ and $y$. The closed circles denote vertices where cycles become closed.
(b) The resulting configuration of rotors at the moment when the chip reaches vertex $y$. The bold line denotes a backbone of the resulting tree.
Arrows show orientations of branches formed by rotors.}
\end{figure}
Figure \ref{loops} shows how Proposition 1 works. Consider a chip trajectory between vertices $x$ and $y$ in Fig.\ref{loops}(a). At moment $t_1$, the chip visits the vertex denoted by 1, where the first clockwise contour becomes closed. The chip enters the contour, reverses its orientation into anticlockwise by Proposition 1 and leaves vertex 1 to the left at moment $\bar{t}_1$. Due to Proposition 1, we can replace the steps between $t_1$ and $\bar{t}_1$ by the single operation of reversing the orientation of the first contour. The next two vertices where closed cycles appear are 2 and 3. Both corresponding cycles are dimers. Leaving the dimers, the chip forms the anticlockwise contour of vertices 1,2,3,4 and
leaves it at vertex 4 at the next step. Then, the trajectory continues up to vertex 5, where the second clockwise contour becomes closed at moment $t_2$. The chip leaves this contour at $\bar{t}_2$ and we again skip the steps between $t_2$ and $\bar{t}_2$, reversing the orientation of the contour. Continuing, the chip closes the dimer cycle at vertex 6, the anticlockwise contour at 7, the dimer cycle at 8 and finally reaches vertex $y$.
The configuration of rotors obtained as a result of the chip motion from $x$ to $y$ is shown in Fig.\ref{loops}(b). Since all cycles were opened
during the rotor walk, the resulting configuration is a tree. Vertices $x$ and $y$ appear to be connected by a line which can be considered as the backbone of the tree. Thus, a complicated trajectory of the rotor-router walk can be represented as a directed path and a collection of tree branches attached to the path.
The backbone of the graph generated by the rotor-router walk can be compared with the Loop Erased Random Walk (LERW) introduced by Lawler \cite{Lawler}.
Like the LERW, the rotor backbone avoids closed loops and, after a sufficiently large number of steps, can be considered as a ``chemical path'' of the resulting spanning tree. However, the spanning tree whose chemical path corresponds to the LERW is the so-called {\it uniform} spanning tree \cite{Wilson}, and this chemical path has the fractal dimension $\nu_{LERW}=5/4$ \cite{Con,Satya}. We will see below that the spanning tree generated by the single rotor walk in a finite volume is not uniform. For a large volume, the fractal dimension of its chemical path is $\nu_{rotor}=1$, as follows from numerical simulations. A rigorous proof of $\nu_{rotor}=1$ is still absent.
The second tool we use below is the decomposition of the rotor walk into excursions.
The definition of excursions given in \cite{FLP} reads: Fix a vertex $o\in V$. An excursion from $o$ is a rotor walk started at $o$ and run until it returns to $o$ exactly $\deg (o)$ times. Lemma 2.4 in \cite{FLP} states the following properties of excursions.
(i)\ If the duration of the $(n+1)$-th excursion is finite, the number of visits of site $x$ during this excursion does not exceed $\deg(x)$ for all $x\in V$.
(ii)\ Let $A_n$ be the set of sites visited during the $n$-th excursion. Then the number of visits of site $x$ during the $(n+1)$-th excursion is $\deg(x)$ for all $x\in A_n$.
(iii)\ $A_{n+1}\supseteq A_n \cup \partial A_n$ where $\partial A_n$ is the set of all vertices neighboring to $A_n$ and not belonging to $A_n$ .
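Using the helper functions of the sketch above, one excursion can be coded directly from this definition:
\begin{verbatim}
def excursion(o, rho):
    """Run the rotor walk from o until the chip has returned to
    o exactly deg(o) times; return the set of visited sites."""
    visited, w, returns = {o}, o, 0
    while returns < len(neighbors(o)):
        w = rotor_step(w, rho)
        visited.add(w)
        if w == o:
            returns += 1
    return visited
\end{verbatim}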
Since each excursion is a closed path, the backbone either collapses into a single point, if the backbone path on the lattice is contractible,
or becomes a non-contractible loop enveloping the cylinder in the case of cylindrical geometry. In what follows, we consider the latter case, imposing specific initial conditions on the top boundary of the cylinder.
\section{Rotor configurations and the cluster growth }
Consider the initial rotor configuration $\rho_0(v)$ shown in Fig.\ref{Cyl}. All boundary rotors $\rho_0(i)$, $i=1,2,\dots, L$, are oriented
anticlockwise. The bulk rotors are chosen randomly with equal probabilities for the four possible directions $E_v^0,E_v^1,E_v^2,E_v^3$. The bulk rotors near the boundary can be combined to form a forest of oriented trees attached by their roots to the boundary. If there are no rotors directed to a given boundary vertex, the tree consists of the root alone.
We put the initial position of the chip $w_0$ at site $1$ and consider the evolution of the chip-and-rotor state $(w_0,\rho_0)$,$(w_1,\rho_1)$,$(w_2,\rho_2),\dots$ in discrete time.
\begin{figure}[!ht]
\includegraphics[width=65mm]{Forest.eps}
\vspace{1mm}
\caption{\label{Cyl} Semi-infinite cylinder. Arrows at boundary sites denote rotors directed anticlockwise in the initial rotor configuration.
Arrows at bulk sites denote rotors constituting trees attached to the boundary. }
\end{figure}
The rotor walk is called {\it recurrent} if it visits the initial vertex infinitely many times; otherwise it is {\it transient} \cite{AngelHol,HussSava}. If the initial configuration $\rho_0(v)$ has an infinite path to the initial vertex 1, then the walk is transient \cite{FLP}. Below we need the recurrence property, so we assume that all random trees oriented to the boundary are finite, because otherwise they would contain an infinite path.
The surface of the semi-infinite cylinder is topologically equivalent to a two-dimensional annulus whose internal ring coincides with the top boundary of the cylinder and whose external ring is situated at infinite distance from the internal one. The anticlockwise direction of rotors in
Fig.\ref{Cyl} corresponds to the clockwise orientation of the internal ring against the surface of the annulus. The external ring is not reachable by the recurrent rotor walk, so its orientation does not matter. Then the initial rotor configuration $\rho_0(v)$ meets the conditions of Proposition 1, and we can assert the existence, after some number of steps $n$, of a chip-and-rotor state $(w_0,\rho_n)$ in which all boundary rotors have changed their orientation to the opposite one for the first time.
During the next $L$ steps $n+1,n+2,\dots,n+L$, the chip moving along the boundary rotates the boundary rotors at $1,L,L-1,\dots,2$ one by one, so that at the $(n+L)$-th step
the boundary rotors return to their initial positions and the chip is at vertex 1. The sequence of steps from the first to the $(n+L)$-th fits the definition of an excursion given in \cite{FLP}. Denote the time when the first excursion is completed by $\tau_1$. Lemma 2.2 in \cite{FLP} claims that if $\tau_1$ is finite and there is
a directed path of initial rotors from vertex $x$ to the origin 1, then the rotor at $x$ performs a full rotation. Therefore, we come to an important conclusion: given a forest of trees attached to the boundary in the initial rotor configuration $\rho_0(v)$, the first excursion produces a new configuration $\rho_{\tau_1}(v)$ which contains the rotors of the initial forest at the same positions as in $\rho_0(v)$.
Proposition 1 allows us to say more about the results of the first excursion. Consider a vertex $v\in V$ which is visited by the chip during the first excursion but does not belong to the initial boundary forest. Since the chip returns to the origin, there is a rotor configuration $\rho_t$ at some step
$t$, $1<t<\tau_1$, where $v$ belongs to a contour $C_v$. If $C_v$ is clockwise, it will be opened after the reversal guaranteed by Proposition 1; if $C_v$ is anticlockwise or a dimer, it will be opened at the next step $t+1$. In all cases $v$ will belong to a branch attached to the initial boundary forest. Therefore, during the first excursion the initial forest is augmented by new branches containing all visited vertices not belonging to the initial forest. In particular, all vertices separated from the initial boundary trees by distance 1 are connected to them after the first excursion due to the rotor rotation rule.
The moments of time when new branches are attached to the trees rooted at $1,2,\dots,L$ are strictly ordered: no new branches are attached to the $i$-th tree,
$1 <i\leq L$, until the replenishing of the $(i-1)$-th tree is completed. Indeed, the second part of Proposition 1 claims that the moments $t_1,t_2,\dots,t_L$ when the rotors at $1,2,\dots,L$ reverse their direction are ordered as $t_1<t_2<\dots<t_L$. Any contour producing a branch of the $(i-1)$-th tree should be closed before $t_{i-1}$, and a new branch can be added to the $i$-th tree only after the moment when the chip enters the boundary site $i$, i.e., later than $t_{i-1}$.
The next excursions act similarly. Properties (ii) and (iii) of the excursions allow us to consider each new excursion in the same way as the first one with a given initial rotor configuration. The rotor configuration $\rho_{\tau_n}$ obtained after the $n$-th excursion contains the same (anticlockwise) rotors at boundary vertices $1,2,\dots,L$ and differs from $\rho_0(v)$ only by the size of the forest. Each excursion adds new branches to the existing trees, so that the forest of trees attached to the boundary grows monotonically. If after some excursion a tree appears which has no free neighboring vertices not visited by the rotor walk during previous excursions, then the growth of the tree
stops. Actually, for a sufficiently large number of excursions and finite $L$, a single tree remains which continues its growth, whereas the remaining $L-1$ trees are surrounded by branches of neighboring trees.
We can conclude that, given an arbitrary height $H$ of the part of cylinder, there is a number of excursions $n_H$ after which there are no free vertices with the vertical coordinate $h<H$, so that the obtained forest spans the lattice restricted by height $H$.
In the two forthcoming sections, we consider statistical properties of the boundary of the forest and the internal structure of the cluster of sites visited by the rotor walk.
\section{ Statistical properties of the cluster boundary}
Starting with a recurrent initial configuration $\rho_0(v)$, the rotor walk generates a forest which covers all sites visited up to time $t$. The surface of the forest is the set of visited sites which have at least one neighboring site not belonging to the forest.
For each vertex $v\in V$, we define the height $h(v)$ as a vertical distance of $v$ from the boundary of the cylinder.
The average height of the surface $H(t)$ at time $t$ is the average of $h(v)$ over all surface sites. The average number of visited sites is
proportional to $H(t)L$, so for the velocity of growth we have
\begin{equation}
\frac{dH(t)}{dt}\sim \frac{1}{H(t)L}
\label{growth}
\end{equation}
This implies that for large time $t$
\begin{equation}
H(t) \sim \left(\frac{t}{L}\right)^{1/2}
\label{Growth}
\end{equation}
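For completeness, Eq.~(\ref{Growth}) follows from Eq.~(\ref{growth}) by separation of variables,
\begin{equation*}
H\, dH \sim \frac{dt}{L}
\qquad\Longrightarrow\qquad
H^2(t) \sim \frac{t}{L} \;,
\end{equation*}
up to a constant of order unity and an initial-height term that is negligible at large $t$.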
The time variable $t$ needs some elaboration. The most natural representation of the discrete time is the number of steps $t\in \{0,1,2,\dots\}$ of the rotor-router walk. However, the evolution of the surface $H(t)$ proceeds on a coarsened scale determined by the sequence of excursions. Then, a convenient variable is $t_n$, $n=1,2,\dots$, where $t_n$ is the moment when the $n$-th excursion is completed.
In Section II, we defined $A_n$ as the set of sites visited during the $n$-th excursion. The surface of the set of visited sites after the $n$-th excursion is the subset $\Gamma(A_n) \subseteq A_n$ of visited sites that have at least one neighbor unvisited during the first $n$ excursions or, by property (iii), during
the last, $n$-th, excursion.
To check whether the growth of the cluster of visited sites belongs to the KPZ universality class, consider the average width of the surface $\Gamma(A_n)$, defined as
\begin{equation}
W(L,n)= \left\langle \frac{1}{\#\Gamma(A_n)} \sum_{v \in \Gamma(A_n)} \left( h(v) - \overline{h}\, \right)^2 \right\rangle^{1/2},
\label{W-KPZ}
\end{equation}
where
\begin{equation}
\overline{h} = \frac{1}{\#\Gamma(A_n)} \sum_{v \in \Gamma(A_n)} h(v).
\end{equation}
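In a simulation, Eq.~(\ref{W-KPZ}) is evaluated by averaging the squared deviation over surface sites for each sample, then averaging over samples and taking the square root at the end. A minimal sketch (the heights below are placeholder random data, not simulation output):
\begin{verbatim}
import numpy as np

def width_squared(h):
    """Mean squared deviation of the surface-site heights h(v)
    from their mean, for a single sample."""
    h = np.asarray(h, dtype=float)
    return np.mean((h - h.mean()) ** 2)

# ensemble average over samples, then the square root
rng = np.random.default_rng(0)
samples = [rng.integers(0, 10, size=100) for _ in range(10**4)]
W = np.sqrt(np.mean([width_squared(h) for h in samples]))
\end{verbatim}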
Numerical simulations for cylinders with circumference $L=100,200,\ldots,1000$, number of excursions $N_{ex} = n \leq 200$ and $10^4$ samples showed that
\begin{equation}
W(L,n) \sim L^\alpha f(t_n/L^z),
\end{equation}
with scaling function $f(u)$ satisfying
\begin{equation}
f(u) \sim \left\{
\begin{array}{ll}
u^\beta, & u \ll 1 \\
1, & u \gg 1
\end{array}
\right. \quad .
\end{equation}
The estimated values of the exponents are $\alpha = 0.51\pm 0.03$, $\beta =0.35 \pm 0.05$, in good agreement with the predictions of the KPZ theory, $\alpha = 1/2$, $\beta =1/3$.
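A convenient numerical check of the exponents is a data collapse: with the KPZ values $\alpha=1/2$ and $z=\alpha/\beta=3/2$ of the standard Family-Vicsek scaling, the rescaled curves for different $L$ should fall onto a single scaling function $f$. A minimal sketch of the rescaling (function and argument names are ours):
\begin{verbatim}
def rescale(L, t, W, alpha=0.5, z=1.5):
    """Rescale one width curve W(t) measured on a cylinder of
    circumference L; plotting the returned pairs for several L
    tests the collapse W ~ L^alpha f(t / L^z)."""
    return [(ti / L**z, Wi / L**alpha) for ti, Wi in zip(t, W)]
\end{verbatim}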
The average time $\langle t_n\rangle$ at which the $n$-th excursion is completed is proportional to $n^2$ for sufficiently large $n$; this is consistent with Eq.~(\ref{Growth}), since by property (iii) the surface advances by at least one layer per excursion, so that $H\propto n$ and hence $\langle t_n\rangle \propto L\,n^2$.
\section{The helix structure}
In Section III, we considered the evolution of the chip-and-rotor state $(w_t,\rho_t)$ in terms of excursions. Now, we consider the evolution as a sequence of clockwise loops. The graph representation of the state $(w_t,\rho_t)$ is a spanning subgraph $G_s \subset G$ whose edges coincide with the current positions of rotors in $\rho_t$, together with a selected vertex $w_t\in V$ showing the chip location. Each step of the evolution $(w_t,\rho_t) \rightarrow (w_{t+1}, \rho_{t+1})$ moves the chip to one of the neighboring lattice sites and modifies $G_s$ in the vicinity of $w_t$. If at some moment of time $t=t_1>0$ the chip finds itself on a clockwise loop for the first time, we say that the rotor walk creates a clockwise contour $C_1\subset G_s$ and label the vertex $w_{t_1}$ by $\alpha_1$. By Proposition 1, the subsequent evolution reverses the clockwise contour $C_1$ into the anticlockwise $\bar{C_1}$ and the chip leaves $\bar{C_1}$ at moment $\bar{t_1}$. We skip the part of the evolution between $t_1$ and $\bar{t_1}$ and continue the rotor walk from $\bar{t_1}$. The next clockwise contour $C_2$ can be created at a moment $t_2>\bar{t_1}$ outside $C_1$ (but it can be adjacent to $C_1$ or may contain $C_1$ inside). Again,
we label the vertex $w_{t_2}$ by $\alpha_2$, skip the evolution between $t_2$ and $\bar{t_2}$ and continue till the moment $t_3$ when $C_3$ appears.
As before, $C_3$ is outside $C_1$ and $C_2$ but can be adjacent to them or contain one of them or both $C_1$ and $C_2$ inside.
In this way, we obtain a sequence of distinct labels $\alpha_1,\alpha_2,\dots$ which mark the sequence of clockwise contours that have appeared.
Consider an initial part of the sequence of contours, when they appear near the boundary of the cylinder. The second part of Proposition 1 claims that the boundary rotors $1,2,\dots$ flip their directions strictly sequentially. This order imposes a preferred direction of propagation of the contour sequence
$C_1,C_2,\dots$, against the initial direction of rotors in $\rho_0(v)$ (Fig.\ref{Con}).
\begin{figure}[!ht]
\includegraphics[width=120mm]{Contours.eps}
\vspace{1mm}
\caption{\label{Con} The contours near the boundary of the cylinder.}
\end{figure}
When the sequence of contours makes a turn along the cylinder boundary, contour $\bar{C}_1$ becomes reachable for the rotor walk from the nearest contour $\bar{C}_{k-1}$. Then, the next contour $C_k$ starting from $\alpha_k$ passes a part of contour $\bar{C}_1$ and boundary sites $1,2,\dots$, and returns to $\alpha_k$ via $\alpha_{k-1}$, forming a non-contractible loop around the cylinder. The orientation of the loop is still clockwise, so that Proposition 1 is applicable to $C_k$. If $t_k$ is the moment when contour $C_k$ is closed at vertex $w_{t_k}$, then $\bar{t_k}$ is the moment when the rotor walk leaves the anticlockwise contour $\bar{C_k}$ at the same vertex. An important event happens in the time interval between $t_k$ and $\bar{t_k}$. To characterize it we prove the following statement.
{\it Proposition 2}. Consider a planar graph $G$ and a rotor configuration representing a clockwise contour $C$ with a spanning graph $G_C$ inside. Let $\alpha, b,c,d$ be vertices such that $\alpha,b,d\in C$, $c\in G_C$, the edge $bc\in G$ does not belong to $G_C$, the vertex $b$ has only 3 neighbors in $G$, and $c$ is connected with $d$ by a directed path $p_{cd}\in G_C$. The initial chip position $\alpha$ is on the part of $C$ between $b$ and $d$ (Fig.\ref{Prop}). Starting at $\alpha$, let the rotor walk leave the anticlockwise contour $\bar{C}$ at some moment $t_{exit}$. Then, there exists a moment $t^{\star} < t_{exit}$ when a clockwise contour $bdcb$ occurs.
\begin{figure}[!ht]
\includegraphics[width=40mm]{Prop.eps}
\vspace{1mm}
\caption{\label{Prop} The clockwise contour with a spanning graph inside. Vertices $b$ and $c$ are neighbors on the graph. $\alpha$ is the initial chip position. }
\end{figure}
{\it Proof.} Temporarily fix the direction of the rotor at $b$ toward $c$. By Proposition 1, the rotors of the clockwise contour $\alpha,b,c,d,\alpha$ flip sequentially from $\alpha$ to $c$ via $d$. The first rotation of the rotor at $b$ is possible only after the jump of the chip from $c$ to $b$. But this is the moment $t^{\star}$ when the contour $bdcb$ becomes closed if we return the rotor at $b$ to its initial position.
Returning to the sequence of contours near the boundary depicted in Fig.\ref{Con}, we identify the contour $C$ of Proposition 2 with $C_k$, and the graph $G_C$ with the interior of $C_k$ together with the boundary sites not belonging to $C_k$. The vertices $b$ and $c$ of Proposition 2 are identified with the boundary vertices $1$ and $L$, and the path $p_{cd}$ with the path from $L$ to the contour $\bar{C}_{k-1}$. Then, by Propositions 1 and 2, there is a moment $t_k<t^{\star}<\bar{t_k}$ when the boundary contour $1,2,\dots,L,1$ becomes closed and a moment $t_k<t^{\star}+L<\bar{t_k}$ when the contour $1,L,L-1,\dots,1$
appears. The latter contour coincides with that in $\rho_0(v)$ and therefore, by the definition of an excursion, the moment $t^{\star}+L$ is the end of the first excursion.
Three events turn out to be synchronized: the emergence of a non-contractible loop corresponding to a clockwise contour with label $\alpha_k$; the completion of a turn around the cylinder by the sequence of clockwise contours; and the end of the first excursion. This synchronization is preserved during the further evolution. Indeed, the rotors remaining on contours $\bar{C}_1,\bar{C}_2,\dots,\bar{C}_k$ constitute a forest attached to the boundary of the cylinder. Then the rotor walk continuing from $w_{t_k}$ for $t>\bar{t}_k$ moves along this forest in the same way as the walk starting at $t=0$ moves along the original forest in $\rho_0(v)$. An essential difference is that the interiors of contours $\bar{C}_1,\bar{C}_2,\dots,\bar{C}_k$ are not available for new labels because no clockwise contours can be closed inside them. Therefore, the sequence of labels $\alpha_{k+1},\alpha_{k+2},\dots$ lies below the sequence $\alpha_{1},\alpha_{2},\dots,\alpha_{k}$, forming a next layer which propagates in the same direction until a label appears which corresponds to the next non-contractible loop. We come to the conclusion that the natural order for the disposition of labels is a helix-like sequence on the surface of the cylinder. The synchronization mentioned above implies that each turn of the helix corresponds to one excursion and to one label denoting a non-contractible contour.
The helical order of growth of the cluster of visited sites explains the mentioned non-uniformity of spanning trees generated by the rotor walk on the cylinder. Nevertheless, the determination of the fractal dimension of the chemical path of the obtained tree remains an open problem.
A set of labels $\alpha_1,\dots,\alpha_m$, $m=369$, obtained for a particular random initial configuration $\rho_0(v)$ is shown in Fig.\ref{scr}.
\begin{figure}[!ht]
\includegraphics[width=85mm]{screw0_pap.eps}
\vspace{1mm}
\caption{\label{scr} The labels on the equivalent annulus with the internal ring of length $L=100$. Bold points show the positions of labels corresponding to non-contractible loops.}
\end{figure}
The sequence $\alpha_1,\dots,\alpha_m$ demonstrates a helix structure which agrees with the construction described above and is expectedly random, since $\rho_0(v)$ is random. We can characterize the positions of labels by the variables $h(\alpha_i)$ and $\theta(\alpha_i)=2\pi k_i+\varphi_i$, where $h(\alpha_i)$ is the distance of the $i$-th label from the top of the cylinder, $k_i$ is the number of chip rotations around the cylinder before reaching a given site and $\varphi_i$ is the polar angle. The helix-like structure implies that $h(\alpha_i)$ is asymptotically proportional to $\theta(\alpha_i)$.
The relation $h(\alpha_i)=b_i \theta(\alpha_i)$ can be examined for a large number of steps $i$ after averaging over a large number of initial rotor configurations $\rho_0(v)$. Let $\hat{\alpha}_1,\hat{\alpha}_2,\dots$ be the subsequence of labels corresponding to the non-contractible loops, selected from the sequence of all labels $\alpha_1,\dots,\alpha_m$, $m\gg 1$. The four bold points in Fig.\ref{scr} show the positions of the selected labels for the first four rotations of labels around the cylinder. If the helix with an increasing number of turns tends on average to a regular form, the average $\langle h(\hat{\alpha}_n)\rangle$ should obey the relation
\begin{equation}
\langle h(\hat{\alpha}_n)\rangle=b_L(n) 2\pi n
\end{equation}
where $b_L(n)$ tends to a constant $b_L=\lim_{n\rightarrow \infty} b_L(n)$ for every $L$.
\begin{figure}[!ht]
\includegraphics[width=100mm]{fig-bn.eps}
\vspace{1mm}
\caption{\label{Fig} Coefficients $b_L(n)$ as functions of the number of excursions $1 \leq n \leq 500$ for cylinders of circumference $L=100,\dots,1000$. }
\end{figure}
The functions $b_L(n)$ for different $L$ are shown in Fig.\ref{Fig}. For large $n$, we use the approximation
\begin{equation}
b_L(n)=b_L+\frac{A_L}{n}+\frac{B_L}{n^2}
\label{expansion}
\end{equation}
to find $b_L$. When $L$ increases, the values $b_L$ converge rapidly to the constant $b=\lim_{L\rightarrow \infty} b_L$, which is estimated as $b=1.80\pm0.05$.
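Since the right-hand side of Eq.~(\ref{expansion}) is linear in $b_L$, $A_L$ and $B_L$, the extrapolation reduces to a linear least-squares fit in the variables $1/n$ and $1/n^2$. A minimal sketch (the data below are synthetic, chosen only to illustrate the recovery of the constant term, not our measured values):
\begin{verbatim}
import numpy as np

def extrapolate_b(n, b_n):
    """Fit b_L(n) = b_L + A_L/n + B_L/n^2 and return the
    n -> infinity limit b_L (the constant term of the fit)."""
    X = np.column_stack([np.ones_like(n), 1.0 / n, 1.0 / n**2])
    coef, *_ = np.linalg.lstsq(X, b_n, rcond=None)
    return coef[0]

n = np.arange(50, 501, dtype=float)
extrapolate_b(n, 1.8 + 3.0 / n - 5.0 / n**2)  # recovers 1.8
\end{verbatim}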
The obtained value of the helix step $b$ can be compared with the estimation of the spiral step in \cite{PPP2}. In the case of the planar lattice, we considered the average ratio of radius $r$ to angle $\theta$, $\langle r/\theta \rangle$, as a function of the number of labels, which is similar to $b_L(n)$ in the present paper. In \cite{PPP2}, we were not able to declare a definite value of $\langle r/\theta \rangle$ in the limit of a large number of spiral rotations $n$ due to the possible presence of logarithmic corrections $\sim \ln n$. In the case of cylindrical geometry, the expansion (\ref{expansion}) gives a perfect description of the large-$n$ behavior and leads to a reliable value of $b$.
Despite the difference in the numerical estimations of the spiral and helical steps, the conceptual conclusions on the properties of the rotor-router walks are common to both geometries. In \cite{PPP2}, we found that the number of visits of the origin $n_0$ depends on the number of spiral turns $n$ as
\begin{equation}
n_0=4n+O(1).
\end{equation}
Since one of every four rotations of the initial rotor necessarily corresponds to the end of an excursion, we obtain the equivalence between the number of excursions and the number of spiral turns. The study in the present paper shows that this equivalence also holds for the helix turns.
\section*{Acknowledgments}
We thank Deepak Dhar for useful discussion and for drawing our attention to work \cite{singha}.
VBP thanks the RFBR for support by grant 16-02-00252.
VSP thanks the JINR program ``Ter-Antonyan - Smorodinsky'' and SCS MES RA project No 15YPR-1B0001.
\section{\label{introduction}Introduction}
Over many decades substantial effort has been devoted to developing and improving nuclear energy density functionals (EDFs)~\cite{Bender2003}. While great progress has been made in recent years in $ab$ $initio$ methods, phenomenological EDFs remain the only computationally feasible many-body method capable of describing nuclei across the full mass table. Skyrme~\cite{Vautherin1972,Vautherin1973} and Gogny~\cite{Decharge1980} functionals are examples of phenomenological EDFs. These functionals have of order ten coupling constants, which are adjusted to selected experimental data. Despite their simplicity, such functionals provide a remarkably good description of a broad range of nuclear properties, such as binding energies, radii, giant resonances, $\beta$-decay rates, and fission cross sections. However, sophisticated analyses imply that EDFs of the standard Skyrme or Gogny forms have reached their limit of accuracy~\cite{Schunck2015a,Schunck2015b,McDonnell2015}. Furthermore, their phenomenological nature often leads to parametrization-dependent predictions and does not offer a clear path towards systematic improvement.
One possible strategy for improved functionals is to constrain the analytical form of the functional and possibly the values of its couplings from many-body perturbation theory (MBPT) starting from the free-space NN and 3N interactions~\cite{Kaiser2003a,Kaiser2003b,Kaiser2010,Bogner2009,Drut2010,Lesinski2009}. Progress in treating low-energy physics using the renormalization group (RG) and effective field theory (EFT)~\cite{Bogner2005,Bogner2007a,Bogner2007c,Bogner2010,Hebeler2011} plays a significant role in carrying out this strategy. RG methods can be used to evolve realistic nucleon-nucleon potentials (including both phenomenological and chiral EFT potentials), which typically have strong coupling between high- and low-momentum, to derive low-momentum potentials in which high- and low-momentum parts are largely decoupled. The Similarity Renormalization Group (SRG) provides a compelling method for this evolution to softer forms~\cite{Bogner2007a,Bogner2007b,Furnstahl2012}. After SRG evolution, we have a potential for which only low momenta contribute to low-energy nuclear observables, such as the binding energies of nuclei. We stress that the SRG does not lose relevant information for low-energy physics, which includes nuclear ground states and low-lying excitations, as long as the leading many-body interactions are kept~\cite{Bogner2007b}.
With an RG-evolved low-momentum interaction, the Hartree-Fock (HF) approximation becomes a reasonable starting point. However, the MBPT energy expressions are written in terms of density matrices when working with finite-range interactions, and Fock energy terms are inherently nonlocal objects. This nonlocality in the density matrices significantly increases the computational cost. The density matrix expansion (DME), first formulated by Negele and Vautherin~\cite{Negele1972,Negele1975}, provides a general framework to map the spatially nonlocal Fock energy into Skyrme-like local functionals with density-dependent couplings. The idea is that existing EDFs may have too-simple density dependencies to account for long-range physics, but this physics can be incorporated using the DME while still taking advantage of the Skyrme calculational infrastructure. The novel density dependence of the couplings is a consequence of the finite-range interaction and is controlled by the longest-ranged components. The effects of the density dependence couplings have been discussed in Ref.~\citep{Holt2011,Kaiser2012,Buraczynski2017}. Consequently, the DME can be used to map physics associated with long-range one- and two-pion exchange interactions into a local EDF form that can be implemented at minimal cost in existing Skyrme codes.
A program to construct a fully $ab$ $initio$ functional based on model-independent chiral interactions is underway. While Hartree-Fock becomes a reasonable zeroth-order approximation with softened low-momentum interactions, it is necessary to go to at least second order in MBPT to obtain a reasonable description of the bulk properties of infinite nuclear matter (INM), as well as the binding energies and charge radii of closed-shell nuclei.
A semi-phenomenological method somewhere between purely $ab$ $initio$ and phenomenological functionals, which has a richer set of density dependencies than traditional Skyrme functionals, was proposed in Refs.~\cite{Gebremariam2010,Gebremariam2011} and implemented in Refs.~\cite{Stoitsov2010,Bogner2011,Perez2018}. The idea is that the structure of the EFT interactions implies that each coupling in the DME can be written as the sum of a density-dependent coupling function arising from the long-range pion-exchange chiral potential and a Skyrme-like coupling constant from the zero-range contact interactions. The chiral couplings are parameter-free in the sense that they are frozen, fixed entirely by long-distance physics, while the Skyrme contacts are released for optimization to infinite nuclear matter and properties of finite nuclei. The refit of the Skyrme parameters to data has been loosely interpreted as incorporating the short-range part of a $G$-matrix with a zero-range expansion through second-order in gradients. This empirical procedure is supported by the observation that the dominant bulk correlations in nuclei and nuclear matter are primarily short-range in nature, as evidenced by the Brueckner $G$-matrix ``healing" to the free-space interaction at sufficiently large distances.
In this paper, we investigate this interpretation directly.
We also note other work on refitting Skyrme interactions from Brueckner-Hartree-Fock (BHF) calculations performed with NN and 3N interactions~\cite{Cao2006,Gambacurta2011}.
Many-body correlations beyond the HF level are clearly important for quantitative results. The BHF approximation gives an improved definition of the one-body HF potential $U$ by replacing the two-body interaction $V$ with the so-called reaction $G$-matrix. The $G$-matrix sums up ladder diagrams to infinite order and gives an effective two-body interaction, incorporating a class of many-body correlations. The diagrams in the perturbation expansion are summed by introducing the $G$-matrix operator, and the $G$-matrix can be obtained by solving the Bethe-Goldstone equation.
BHF is the only beyond-HF method
that can be immediately mapped into a quasi-local EDF via the DME with only mild approximations, and the class of correlations contained in BHF is known to be extremely important for bulk properties.
It can be applied to study evolved potentials all the way from hard to very soft.
To make progress, we consider the lessons learned from low-energy nuclear physics using the RG and EFT approaches~\cite{Bogner2010}. For example, it is well established that the RG evolution to low momentum primarily modifies the short-distance structure of the inter-nucleon interactions~\cite{Bogner2003,Bogner2007c,Bogner2010}, demonstrating insensitivity to the details of the short-range dynamics. This insensitivity
means that there are infinitely many theories that have the same low-energy behavior; all are identical at large distance but may be completely different from each other at short distances.
As the RG evolution integrates out the high-momentum modes,
general renormalization theory implies that the change in the potential should be expandable in a hierarchy of local counterterms.
The question of whether this is realized in the derivation of the so-called $V_{\mathrm{low}-k}$ potentials has been investigated in Ref.~\cite{Holt2004}. In that work, it is tested whether $V_{\mathrm{low}-k}$ can be expressed as $V_{\mathrm{NN}}$ plus a power series in the external momenta. The counterterm coefficients are determined using standard fitting techniques. In Ref.~\cite{Holt2004} this fitting was performed over all partial wave channels and a consistently good agreement was obtained.
In the literature, it has been noted that the $G$-matrix has many similarities to $V_{\mathrm{low}-k}$ NN interactions. In the equation for the $G$-matrix, the restriction of the sum over intermediate states to those above the Fermi surface because of Pauli blocking means that the Fermi momentum plays the analogous role of the UV momentum-space cutoff in the equation for $V_{\mathrm{low}-k}$.
Thus we anticipate that the success of expanding the difference $V_{\mathrm{low}-k} - V_{\mathrm{NN}}$ in a truncated series of contact interactions should carry over to the
difference of the $G$-matrix and the potential it is generated from.
In this paper, we test this argument.
That is, we ask: Is the calculation of the $G$-matrix as a sum of in-medium ladder terms well represented by a truncated series of counterterms? If so, then what are the properties of the counterterms so generated and can we use these counterterms as short-range contact interactions to model BHF correlations at the HF level?
The paper is organized as follows: In Sec.~\ref{background} we briefly review the DME and BHF. In Sec.~\ref{counterterms} we carry out an accurate determination of the counterterms and show that the counterterms generally represent a short-range effective interaction. In Sec.~\ref{couplings} we use SRG-evolved potentials to understand $V_{\mathrm{CT}}$ as density-dependent couplings. A summary and outlook are given in Sec.~\ref{summary}.
\section{\label{background}Background}
\subsection{\label{DME}DME}
The DME introduced by Negele and Vautherin~\cite{Negele1972,Negele1975} provides a route to an EDF based on microscopic nuclear interactions through a quasilocal expansion of the energy in terms of various densities. The central idea of the DME is to factorize the nonlocality of the one-body density matrix (OBDM) by expanding it in a finite sum of terms that are separable in relative and center-of-mass coordinates, yielding a general way to map nonlocal functionals into local ones. Adopting notation similar to that introduced in Refs.~\cite{Gebremariam2010,Gebremariam2011}, one expands the spin-scalar parts (in both isospin channels) of the one body matrix as
\begin{equation}
\rho_{t}(\mathbf{r_1},\mathbf{r_2})\approx \sum^{n_{max}}_{n=0} \Pi_{n}(kr) \mathcal{P}_n(\mathbf{R}) \;,
\end{equation}
where the $\Pi$ functions are specified by the DME variant and $\mathcal{P}_{n}(\mathbf{R})$ denote various local densities and their gradients. $k$ is an arbitrary momentum that sets the scale for the decay in the off-diagonal direction. We define the momentum scale $k$ to be the local Fermi momentum related to the isoscalar density through
\begin{equation}\label{eq2}
k \equiv k_{F}(\mathbf{R})=\left( \frac{3\pi^2}{2}\rho_{0}(\mathbf{R})\right)^{1/3} \;,
\end{equation}
although other choices are possible that include additional kinetic density and gradient density dependencies~\cite{Campi1978}. The DME has also been reformulated for spin-saturated nuclei using nonlocal low-momentum interactions in momentum representation~\cite{Bogner2009}.
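For reference, the conversion of Eq.~(\ref{eq2}) between the local isoscalar density and the Fermi momentum is a one-liner; a minimal sketch:
\begin{verbatim}
import numpy as np

def kf_of_rho0(rho0):
    """Local Fermi momentum k_F (fm^-1) from the isoscalar
    density rho_0 (fm^-3), via the relation above."""
    return (1.5 * np.pi**2 * rho0) ** (1.0 / 3.0)

kf_of_rho0(0.16)  # ~1.33 fm^-1 at saturation density
\end{verbatim}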
Extensions of the first calculations from~\cite{Bogner2009} have modified the original DME formalism from Negele and Vautherin~\cite{Negele1972,Negele1975}, whose deficiencies include an extremely poor description of the vector part of the density matrix. Gebremariam and collaborators~\cite{Gebremariam2010,Gebremariam2011} introduced a new phase-space-averaging (PSA) approach. The PSA approach leads to substantial improvements, particularly for the vector density, where relative errors in integrated quantities are reduced by as much as an order of magnitude across isotope chains. In Ref.~\cite{Dyhdalo2017}, the DME density-dependent couplings from \emph{coordinate-space} chiral potentials up to next-to-next-to-leading order (N$^{2}$LO) were derived. Chiral potentials both with and without explicit $\Delta$ were considered, and local regulators on the interactions were also included. These local regulators can mitigate the effects of singular potentials on the DME couplings and simplify the optimization of generalized Skyrme-like functionals. The use of regulators has been shown to have a significant influence on many-body calculations even at the HF level~\cite{Tews2016,Dyhdalo2016}.
The DME can be applied to both Hartree and Fock energies so that the complete HF energy is mapped into a local functional. However, it was found that treating the Hartree contributions exactly provides a better reproduction of the density fluctuations and the energy produced from an exact HF calculation~\cite{Negele1975,Sprung1975}. In addition, treating the Hartree contribution exactly does not complicate the numerical solutions of the resulting self-consistent equations compared to applying the DME to both Hartree and Fock terms. The Fock energy computed from chiral interactions exhibits spatial nonlocalities due to the convolution of finite-range interaction vertices with nonlocal density matrices. These nonlocalities significantly increase the computational cost of solving the HF equations.
A consistent and systematic extension of the DME procedure beyond the HF level of MBPT is underway. In previous work, attempts to microscopically construct a $quantitative$ Skyrme-like EDF used some phenomenological approximations when applying the DME to iterate contributions beyond the HF level and/or to reintroduce some phenomenological parameters to be adjusted to data~\cite{Negele1972,Negele1975,Hofmann1998,Kaiser2003a,Kaiser2003b,Kaiser2010}. Ultimately, we might build an $ab$ $initio$ nuclear energy density functional from the chiral potentials without the need to refit to INM and finite nuclear properties, although this is unlikely to be quantitatively competitive with fit EDFs.
Schematically, the EFT NN and 3N potentials have the following structure:
\begin{equation}
V_{\mathrm{EFT}}=V_{\pi}+V_{\mathrm{CT}} \;,
\end{equation}
where $V_{\pi}$ denotes finite-range pion-exchange interactions and $V_{\mathrm{CT}}$ denotes scale-dependent zero-range contact terms, which encode the effects of integrated-out degrees of freedom on low-energy physics. The structure of the chiral interactions is such that each DME coupling is decomposed into a density-dependent coupling function arising from long-range pion exchanges and a density-dependent coupling constant arising from the zero-range contact interaction, for example,
\begin{equation}
U_{t}^{\rho \rho}\equiv g_{t}^{\rho \rho}(\mathbf{R},V_{\pi})+C_{t}^{\rho \rho}(\mathbf{R},V_{\mathrm{CT}}) \;,
\end{equation}
and so on. As a result, the DME functional splits into two terms,
\begin{equation}
E[\rho]=E_{\pi}[\rho]+E_{\mathrm{ct}}[\rho] \;,
\end{equation}
where the first term $E_{\pi}[\rho]$ collects the long-range NN and 3N pion exchange contribution at the HF level, while the second term $E_{\mathrm{ct}}[\rho]$ collects the contribution from the contact part of the interaction plus high-order short-range contributions.
\subsection{\label{BHF}Brueckner-Hartree-Fock for the NN Force}
In Ref.~\cite{Dyhdalo2017}, density-dependent couplings from chiral potentials up to N$^2$LO in the chiral expansion were derived by applying the DME to OBDMs at the HF level. However, the HF method describes the motion of nucleons in the mean field of the other nucleons and neglects higher-order many-body correlations. That work only considers the long-range part of the chiral potentials, with short-range contributions expected to be absorbed into a refit of Skyrme parameters. In doing so, the refit parameters could capture short-range correlation energy contributions beyond Hartree-Fock. In the present work, we investigate whether a Skyrme-like short-range effective interaction can represent the short-range part of the $G$-matrix well, and consider a direct density-dependent modification to model BHF correlations.
Historically, the $G$-matrix was developed by way of the Goldstone expansion for the ground-state energy in nuclear matter and closed-shell nuclei using NN interactions. The $G$-matrix method was originally developed by Brueckner~\cite{Brueckner1955}, and further developed by Goldstone~\cite{Goldstone1957} and Bethe, Brandow, and Petschek~\cite{Bethe1963}. The $G$-matrix is obtained by solving the Bethe-Goldstone equation,
\begin{equation}
G(\omega)=v+v\frac{Q}{e}G(\omega) \;.
\end{equation}
Here $v$ is a nucleon-nucleon interaction in free space and $Q$ is the Pauli-blocking operator, which forbids the two interacting nucleons from scattering into states already occupied by other nucleons. The denominator is $e=\omega-h_{0}$, where $h_{0}$ is the single-particle Hamiltonian with the one-body mean field $U$, and $\omega$ is the starting energy. To define the denominator we also make use of the angle-averaging and effective-mass approximations as in Ref.~\cite{Haftel1970}. The single-particle energies in nuclear matter are assumed to have the quadratic form
\begin{equation}
\begin{split}
\varepsilon (k_{\mu})&=\frac{\hbar^2k_{\mu}^2}{2M^{*}}+\Delta \qquad \ \ \text{for} \qquad k_{\mu} \le k_{F} \\
&=\frac{\hbar^2k_{\mu}^2}{2M} \quad \qquad \qquad \text{for} \qquad k_{\mu} \ge k_{F} \;,
\end{split}
\end{equation}
where $M^{*}$ is the effective mass of nucleon and $M$ is the bare nucleon mass. For particle states above the Fermi surface $\varepsilon$ is a pure kinetic energy term, whereas for the states below the Fermi surface $\varepsilon$
is parameterized by $M^{*}$ and $\Delta$, the latter being an effective single-particle potential related to the $G$-matrix; these are obtained through the self-consistent BHF procedure. In this approach, the single-particle potential $U(k_{\mu})$ is determined by the self-consistent equation
\begin{equation}
U(k_{\mu})=\sum_{\nu< k_{\mathrm{F}}}\langle \mu\nu\arrowvert G(\varepsilon_{\mu}+\varepsilon_{\nu}) \arrowvert \mu\nu \rangle
\;.
\end{equation}
This self-consistent scheme consists in choosing initial values of $M^{*}$ and $\Delta$ and then using the obtained $G$-matrix in turn to obtain new values for $M^{*}$ and $\Delta$. This procedure continues until these parameters no longer change.
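Schematically, on a relative-momentum quadrature grid the Bethe-Goldstone equation for a single uncoupled partial wave becomes a linear system, $G=(1-v\,Q/e)^{-1}v$. The sketch below is a simplified illustration rather than our production code: it assumes a sharp angle-averaged Pauli operator, a vanishing center-of-mass momentum, a free spectrum in the energy denominator, and the standard momentum-space partial-wave normalization (the factor $2/\pi$):
\begin{verbatim}
import numpy as np

HB2M = 20.736  # hbar^2/(2M) in MeV fm^2, approximate nucleon value

def g_matrix(v, k, wk, kf, omega):
    """Solve G = v + v (Q/e) G by matrix inversion for one
    uncoupled partial wave. v[i, j] = <k_i|v|k_j>; k, wk are
    quadrature points and weights; Q = theta(k - kf) is a sharp
    angle-averaged Pauli blocker; e = omega - hbar^2 k^2 / M.
    For starting energies below the continuum, e is nonsingular."""
    Q = (k > kf).astype(float)
    e = omega - 2.0 * HB2M * k**2
    kernel = (2.0 / np.pi) * v * (Q * wk * k**2 / e)[np.newaxis, :]
    return np.linalg.solve(np.eye(len(k)) - kernel, v)
\end{verbatim}
The self-consistency described above then amounts to iterating such a solver while updating the spectrum parameters $M^{*}$ and $\Delta$ from the resulting single-particle potential.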
The SRG evolution can significantly change the summation of the ladder diagrams in the $G$-matrix. As different $V_{\mathrm{NN}}$ are evolved, the differences between these potentials and the summations of their ladder diagrams are strongly quenched. In Fig.~\ref{fig:correlation_plot}, we present correlation plots of ($G-V_{\mathrm{SRG}}$) between the AV18~\cite{Wiringa1995} and N3LO~\cite{Entem2003} potentials in the $^{1}$S$_{0}$ channel at flow parameters $\lambda$=$\infty$, 2.0 fm$^{-1}$ and 1.5 fm$^{-1}$, with $k_{\mathrm{F}}$ at saturation density. The correlation plots compare the two different potentials' strengths at the same momenta ($k,k'$).
We use the Fermi momentum $k_{\text{F}}$ as the boundary separating the low- and high-momentum regions, as $k_{\text{F}}$ plays a role analogous to the UV momentum-space cutoff $\Lambda$ for $V_{\mathrm{low}-k}$ and the flow parameter $\lambda$ for the SRG. The correlation plots for the unevolved potentials show that the matrix elements of ($G-V_{\mathrm{SRG}}$) are significantly different. This is because the N3LO and AV18 potentials lead to similar $G$-matrices at low
momentum while the initial potentials are quite different. In evolving down to $\lambda$=2.0 fm$^{-1}$, the low-momentum-region matrix elements approach the diagonal line. With the SRG flow evolution to $\lambda=$1.5 fm$^{-1}$, the low-momentum-region points and the coupling-region points are close to the diagonal, showing a collapse to a universal residual ($G-V_{\mathrm{SRG}}$).
In the application of RG to nuclear interactions, universality is observed in that distinct initial NN potentials that reproduce the experimental low-energy scattering phase shifts are found to collapse to a single universal potential~\cite{Bogner2007a,Bogner2010,Furnstahl2013}. This universality can be attributed to common long-range pion physics and phase-shift equivalence of all potentials. Here we see that the same is quantitatively true for the residual interaction despite universality being only approximate for NN interactions. At the same time, the summation into the $G$-matrix has relatively small effects on SRG-evolved low-momentum interactions, in stark contrast to the original interactions.
\begin{figure}[t]
\hspace*{-2cm}
\includegraphics[width=20cm]{vct_correlation_plot}
\caption{\label{fig:correlation_plot} Correlation plots for the matrix $G-V_{\mathrm{SRG}}$ between the AV18~\cite{Wiringa1995} and N3LO~\cite{Entem2003} potentials in the $^{1}$S$_{0}$ channel at flow parameter (a) $\lambda$=$\infty$, (b) 2.0 fm$^{-1}$ and (c) 1.5 fm$^{-1}$ (the axis scales in (a), (b) and (c) are different). The $G$-matrix is evaluated at saturation density ($k_{\mathrm{F}}$=1.3 fm$^{-1}$). The potentials are separated into three different regions: the low-momentum region ($k, k'<k_{\text{F}}$), the high-momentum region ($k, k'>k_{\text{F}}$) and the coupling region ($k>k_{\text{F}}$, $k'<k_{\text{F}}$ or $k<k_{\text{F}}$, $k'>k_{\text{F}}$). }
\end{figure}
\section{\label{counterterms}Counterterms}
In this section, we study quantitatively whether the low-momentum interaction $G$-matrix can be well represented by the low-momentum part of the SRG-evolved potential supplemented by counterterms. Specifically, we assume the $G$-matrix can be represented by
\begin{equation}\label{eq7}
G(q,q') \simeq V_{\mathrm{SRG}}(q,q')+V_{\mathrm{CT}}(q,q'), \ \ \ \ (q,q')<\Lambda \;,
\end{equation}
where $V_{\mathrm{SRG}}$ is a bare NN potential evolved by the SRG, $\Lambda$ denotes a momentum-space cutoff, and $(q,q')\leq \Lambda$. We choose $\Lambda$ as $k_{\text{F}}$ for two reasons: (i) only momenta up to $k_{\text{F}}$ are probed for the BHF energy, and (ii) the length at which the $G$-matrix ``heals" to the potential is set by $1/k_{\text{F}}$. See the Supplementary Material for results with different $\Lambda$~\cite{supplemental}. Our aim is to investigate if $V_{\mathrm{CT}}$ can be well represented by a short-range effective interaction and to study the properties of the counterterm coefficients, with the aim of using this expanded $G$-matrix in HF-level calculations to simulate BHF correlations.
We shall proceed by expanding $V_{\mathrm{CT}}$ in a suitable form and testing how well it satisfies Eq.~(\ref{eq7}).
Past investigations found that $V_{\mathrm{low}-k}$ can be satisfactorily accounted for by the counterterms corresponding to a short-range effective potential~\cite{Holt2004}. A main point of the RG-EFT approach is that the effect of physics beyond a cutoff scale $\Lambda$ can be absorbed into simple short-range interactions. Thus for treating low-energy physics, one integrates out the modes beyond $\Lambda$, thereby obtaining a low-energy effective theory. In RG-EFT, this integrating out generates an infinite series of counterterms, which is a simple power series in momentum. Reference~\cite{Holt2004} has shown that the integration out of high-momentum modes in the derivation of $V_{\mathrm{low}-k}$ generates a series of counterterms and that $V_{\mathrm{low}-k}$ can be accurately cast into the form $V_{\mathrm{bare}}$+$V_{\mathrm{CT}}$.
Because $V_{\mathrm{SRG}}$ is generally given according to partial waves, as is the $G$-matrix, we shall determine $V_{\mathrm{CT}}$ separately for each partial wave with allowed quantum numbers. We consider the following momentum expansion for the partial-wave counterterm potential to test the assumption that $V_{\mathrm{CT}}$ is a very short-range interaction,
\begin{equation}\label{expansion}
\langle qJLS|V_{\mathrm{CT}}|q'J'L'S'\rangle=\delta_{JJ'}\delta_{SS'}q^{L}q^{L'}[C_{0}+C_{2}(q^2+q'^2)+C_{4}(q^4+q'^4)+C'_{4}(q^2q'^2)+\cdots] \;.
\end{equation}
The standard Skyrme forces include the zero-order (contact) and second-order ($q^{2}$) terms in the expansion, but conventional Skyrme forces do not have $q^{4}$ and higher-order terms.
Higher-order derivative terms have been investigated in Refs.~\citep{Carlsson2008,Carlsson2010,Davesne2016,Becker2017}. In these works it is concluded that extending the Skyrme functionals beyond the standard quadratic form, and including $q^{4}$ terms in particular, will provide an improved description of nuclei.
The counterterm coefficients are determined such that the difference between $G$ and ($PV_{\mathrm{SRG}}P+V_{\mathrm{CT}}$) is minimized, where $P$ is the projection operator onto states with momentum less than $\Lambda$. The $G$-matrices are obtained through the self-consistent BHF procedure at different $k_{\text{F}}$, as mentioned in Section \ref{BHF}. In the present calculation, we use the average center-of-mass momentum approximation~\cite{Haftel1970}.
We perform a standard chi-squared fitting procedure for all partial-wave channels at given $k_{\text{F}}$ and find consistently very good fits at all $k_{\text{F}}$, partial-wave channels and SRG flow parameters $\lambda$. See the Supplementary Material for results at different SRG flow parameters $\lambda$~\cite{supplemental}.
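For an S-wave channel ($L=L'=0$), the expansion in Eq.~(\ref{expansion}) is linear in the coefficients, so the chi-squared fit reduces to ordinary least squares over the grid points below $\Lambda$. A minimal sketch (uniform weights are assumed here; in practice one may weight by the quadrature measure):
\begin{verbatim}
import numpy as np

def fit_counterterms(q, dV):
    """Least-squares fit of C0, C2, C4, C4' for an S wave, given
    dV[i, j] = (G - P V_SRG P)(q_i, q_j) on a grid with q < Lambda."""
    qi, qj = np.meshgrid(q, q, indexing="ij")
    basis = np.column_stack([np.ones(qi.size),
                             (qi**2 + qj**2).ravel(),
                             (qi**4 + qj**4).ravel(),
                             (qi**2 * qj**2).ravel()])
    coef, *_ = np.linalg.lstsq(basis, dV.ravel(), rcond=None)
    return coef  # [C0, C2, C4, C4']
\end{verbatim}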
In Fig.~\ref{fig:vct_comparison_1S0_3S1} we compare $^1$S$_0$ and $^3$S$_1$ matrix elements of $V_{\mathrm{CT}}$ with those of the ($G$$-$$PV_{\mathrm{SRG}}P$) matrix below $k_{\text{F}}$ by taking a slice along the edge (i.e., $V_{\mathrm{CT}}(k,0)$) and along the diagonal (i.e., $V_{\mathrm{CT}}(k,k)$). A similar comparison for the $^{3}$S$_{1}$-$^{3}$D$_{1}$ and $^{3}$D$_{1}$ channels is displayed in Fig.~\ref{fig:vct_comparison_3S1_3D1}. We have also obtained good agreement for P-waves. Thus we find nearly identical interactions, giving strong support that the $G$-matrix can be very accurately represented by ($PV_{\mathrm{SRG}}P$$+$$V_{\mathrm{CT}}$).
\begin{figure}[t!]
\hspace*{-1cm}
\includegraphics[width=18cm]{vct_comparison_1S0_3S1_kf}
\caption{\label{fig:vct_comparison_1S0_3S1} Comparison of $G-PV_{\mathrm{SRG}}P$ (solid line) with $V_{\mathrm{CT}}$ (dots) for the $^1$S$_0$ and $^3$S$_1$ channels. The (a) left panel shows off-diagonal elements and the (b) right panel shows diagonal elements. $V_{\mathrm{SRG}}$ is the N3LO potential evolved by the SRG to $\lambda$=1.5 fm$^{-1}$ at $k_\text{F}$=1.3 fm$^{-1}$. }
\end{figure}
\begin{figure}[hbt!]
\hspace*{-1cm}
\includegraphics[width=18cm]{vct_comparison_3S1_3D1_3D1_kf}
\caption{\label{fig:vct_comparison_3S1_3D1} Comparison of $G-PV_{\mathrm{SRG}}P$ (solid line) with $V_{\mathrm{CT}}$ (dots) for the $^3$S$_1$-$^3$D$_1$ and $^3$D$_1$ channels. The (a) left panel shows off-diagonal elements and the (b) right panel shows diagonal elements. $V_{\mathrm{SRG}}$ is the N3LO potential evolved by the SRG to $\lambda$=1.5 fm$^{-1}$ at $k_\text{F}$=1.3 fm$^{-1}$. }
\end{figure}
\section{\label{couplings}Density dependent couplings}
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{coefficient_by_lambda}
\caption{\label{fig:coefficient_by_lambda} The coefficients of the counterterms for the AV18 and N3LO potentials
as a function of Fermi momentum
in the $^{1}$S$_0$ channel with SRG flow parameters $\lambda = 10$, 2, and 1.5 fm$^{-1}$. }
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{coefficients_by_channels1_lam3}
\caption{\label{fig:coefficients_plot_up} Counterterms for different partial waves as a function of Fermi momentum, obtained from SRG-evolved N3LO and AV18 potentials at flow parameter $\lambda$=3.0 fm$^{-1}$. Each column of plots is a single partial-wave channel, given at the top, while each row of plots is one of the counterterms ($C_0$--$C'_4$), given at the left.}
\end{figure}
Next, we examine the counterterms themselves. The evolution of the counterterm coefficients as the SRG $\lambda$ decreases is illustrated in Fig.~\ref{fig:coefficient_by_lambda} for the SRG-evolved AV18 and N3LO potentials in the $^1$S$_0$ channel. With SRG evolution, the difference between the potential and the $G$-matrix decreases dramatically in this channel; that is, $V_{\mathrm{CT}}$ becomes smaller and smaller, particularly $C_0$. This is consistent with the SRG modifying the short-range features of the potentials and confirms that the contact term $C_0$ is the dominant term in the expansion. At $\lambda$= 10 fm$^{-1}$, $C_0$ is non-zero throughout the range of $k_{\text{F}}$, while with SRG evolution to $\lambda$= 2 fm$^{-1}$ and 1.5 fm$^{-1}$, the counterterms decay rapidly to zero with $k_{\text{F}}$, consistent with the finding of Ref.~\citep{Bogner2005} that perturbation theory can be used in place of Brueckner resummations with the softened potentials.
The coefficients for AV18 and N3LO potentials are still noticeably different
at $\lambda$= 10 fm$^{-1}$, at which point the AV18 potential has been considerably softened,
but by $\lambda=2$ fm$^{-1}$, the differences have largely disappeared.
At the end of the evolution, the counterterm coefficients are essentially the same at all densities, consistent with Fig.~\ref{fig:correlation_plot} and a flow to an approximately universal value at low resolution.
Future plans include investigating whether analogous
counterterms for 3N potentials in density-dependent two-body form also show universality.
We find that the counterterms are significant only for the S, P and D partial waves. In Figs.~\ref{fig:coefficients_plot_up} and~\ref{fig:coefficients_plot_down}, we plot counterterm coefficients in various partial waves, using the SRG-evolved N3LO and AV18 potentials at flow parameter $\lambda$=3.0 fm$^{-1}$ as our input potentials. From the figures, we can see that $C_0$ is always the most important term in the expansion.
As with $^1$S$_0$, this behavior reflects that $V_{\mathrm{CT}}$ is a very short-range effective interaction and also that the $G$-matrix does not modify long-range physics. The counterterms introduce additional gradient terms into the Skyrme interaction and a more complicated density dependence into the EDF. Coefficients beyond $C_4$ generally have small effects in the fitting procedure ($C_6$ is typically one order of magnitude smaller than $C_4$) and can be ignored.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{coefficients_by_channels2_lam3}
\caption{\label{fig:coefficients_plot_down} Counterterms as a function of Fermi momentum for different partial waves, obtained from SRG-evolved N3LO and AV18 potentials at flow parameter $\lambda$=3.0 fm$^{-1}$. Each column of plots is a single partial-wave channel, given at the top, while each row of plots is one of the counterterms ($C_0$--$C'_4$), given at the left.}
\end{figure}
The counterterm coefficients for the various partial waves in Figs.~\ref{fig:coefficients_plot_up}
and~\ref{fig:coefficients_plot_down} need to be converted to Skyrme-like interaction parameters to be used in EDFs.
Using the partial wave projections from Refs.~\cite{Erkelenz1971, Machleidt2001}, we can find relations between the counterterm coefficients and Skyrme couplings. A similar mapping of renormalization scale dependent counterterm coefficients to Skyrme-like couplings has been done in Ref.~\citep{Arriola2016}.
For example, the density-dependent contributions to the conventional Skyrme parameters $t_0$ and $x_0$ are given
by the leading $C_0$ terms in the $^1S_0$ and $^3S_1$ channels, while $t_1$ and $x_1$ are given by the leading $C_2$ terms in the $^1S_0$ and $^3S_1$ channels and the $C_0$ term in the $^3S_1$--$^3D_1$ channel:
\begin{equation}\label{eq11}
\begin{aligned}
t_0(\rho)&=\frac{1}{8\pi}(C^{1S_0}_{0}(\rho)+C^{3S_1}_{0}(\rho)) \;, \\
x_0(\rho)&=-\frac{C^{1S_0}_0(\rho)-C^{3S_1}_{0}(\rho)}{C^{1S_0}_{0}(\rho)+C^{3S_1}_{0}(\rho)} \;, \\
t_1(\rho)&=\frac{1}{8\pi}(C^{1S_0}_{2}(\rho)+C^{3S_1}_{2}(\rho)-\sqrt{2}C^{^3S_1-^3D_1}_{0}(\rho)) \;, \\
x_1(\rho)&=-\frac{C^{1S_0}_2(\rho)-C^{3S_1}_{2}(\rho)-\sqrt{2}C^{^3S_1-^3D_1}_{0}(\rho)}{C^{1S_0}_{2}(\rho)+C^{3S_1}_{2}(\rho)-\sqrt{2}C^{^3S_1-^3D_1}_{0}(\rho)} \;.
\end{aligned}
\end{equation}
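The mapping in Eq.~(\ref{eq11}) is straightforward to script. In the sketch below the coefficient arrays are hypothetical tabulations of the fitted counterterms, and the density is obtained from $k_{\text{F}}$ through the usual symmetric-matter relation $\rho = 2k_{\text{F}}^3/(3\pi^2)$; this is a minimal transcription of the formulas, not the production code.
\begin{verbatim}
import numpy as np

def rho_from_kf(kf):
    # symmetric-matter relation rho = 2 k_F^3 / (3 pi^2)
    return 2.0 * kf**3 / (3.0 * np.pi**2)

def skyrme_t0_x0(c0_1s0, c0_3s1):
    # Eq. (eq11): leading C0 terms in the 1S0 and 3S1 channels
    t0 = (c0_1s0 + c0_3s1) / (8.0 * np.pi)
    x0 = -(c0_1s0 - c0_3s1) / (c0_1s0 + c0_3s1)
    return t0, x0

def skyrme_t1_x1(c2_1s0, c2_3s1, c0_sd):
    # C2 terms plus the 3S1-3D1 mixed-channel C0 term
    mix = np.sqrt(2.0) * c0_sd
    t1 = (c2_1s0 + c2_3s1 - mix) / (8.0 * np.pi)
    x1 = -(c2_1s0 - c2_3s1 - mix) / (c2_1s0 + c2_3s1 - mix)
    return t1, x1
\end{verbatim}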
The density-dependent Skyrme interaction parameters obtained from the SRG-evolved N3LO and AV18 potentials at flow parameter $\lambda = 3.0$ fm$^{-1}$ are plotted in Fig.~\ref{fig:Skyrme} as a function of the isoscalar density, using the usual relation Eq.~(\ref{eq2}) between $\rho_0(\textbf{R})$ and $k_{\text{F}}(\textbf{R})$.
As a check, we compared the binding energy per nucleon in nuclear matter in the $^1S_0$ and $^3S_1$ channels
calculated by BHF and by HF+$t_0$+$t_1$, using the AV18 and N3LO potentials with
the density-dependent Skyrme interaction parameters from Fig.~\ref{fig:Skyrme}.
The two methods give nearly the same result at all densities, verifying that
the density-dependent Skyrme interaction models the BHF correlations very well.
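As an indication of how such a check can be set up, the sketch below evaluates the Hartree-Fock energy per nucleon in symmetric matter from the density-dependent $t_0$ coupling alone, using the standard Skyrme mean-field expressions; the gradient ($t_1$) contribution is omitted for brevity, and \texttt{t0\_of\_rho} stands for a hypothetical interpolation of the coupling shown in Fig.~\ref{fig:Skyrme}.
\begin{verbatim}
import numpy as np

HBARC = 197.327   # MeV fm
M_N = 939.0       # MeV, average nucleon mass

def energy_per_nucleon(kf, t0_of_rho):
    """HF energy per nucleon in symmetric matter from t0(rho) alone."""
    rho = 2.0 * kf**3 / (3.0 * np.pi**2)
    kinetic = 0.3 * (HBARC * kf)**2 / M_N      # (3/5) hbar^2 k_F^2 / (2m)
    potential = 0.375 * t0_of_rho(rho) * rho   # (3/8) t0(rho) rho
    return kinetic + potential
\end{verbatim}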
\begin{figure}
\includegraphics[width=1\columnwidth]{Skyrme_t0_x0_lam3}
\caption{\label{fig:Skyrme} The density-dependent Skyrme-like couplings (a) $x_0$ and (b) $t_0$ from Eq.~(\ref{eq11}) are plotted as a function of the isoscalar density $\rho_0$, using the SRG-evolved N3LO and AV18 potentials at flow parameter $\lambda = 3.0$ fm$^{-1}$.}
\end{figure}
\section{\label{summary}Summary}
The present paper is part of a long-term project to build an \textit{ab initio} nuclear energy density functional from realistic NN and 3N interactions using MBPT. The DME can be used as a bridge from MBPT to EDFs, as it can be used to construct numerically tractable approximations to the nonlocal HF energy. The DME-based functionals take the same general form as standard Skyrme functionals, with the key difference that each coupling is composed of a density-dependent coupling function determined from the HF contributions of the underlying finite-range NN and 3N interactions, plus a Skyrme-like short-range contact interaction. The microscopically motivated DME-based functionals, which possess a richer set of density dependencies than traditional Skyrme functionals, can be implemented in existing EDF codes. In previous work, the Skyrme-like short-range contact couplings were optimized to data. Performing a refit of the Skyrme-like constants to data can be interpreted as approximating the short-distance part of the $G$-matrix with a zero-range expansion through second order in gradients.
In the present work, we derived density-dependent couplings for the short-distance part of the $G$-matrix by fitting a counterterm expansion. We used high-precision two-body nuclear interactions evolved to softer forms using the SRG, which makes the interactions suitable for a MBPT treatment. The issue addressed in this work was whether the $G$-matrix could accurately be cast in a form $V_{\mathrm{SRG}}$+$V_{\mathrm{CT}}$, where $V_{\mathrm{CT}}$ is a low-order counterterm series. We have shown that the $G$-matrix is nearly the same as $V_{\mathrm{SRG}}$+$V_{\mathrm{CT}}$, over all partial waves. Only the leading terms (up to quartic order) in the counterterm momentum expansion are significant, verifying that $V_{\mathrm{CT}}$ is primarily a short-range effective interaction.
We also transformed the partial-wave counterterms into density-dependent Skyrme interactions. The quadratic and quartic counterterms, except for those in the $S$ channels, will lead to higher-order density-dependent terms in an extension of the standard Skyrme force~\cite{Carlsson2010}. These higher-order terms can be neglected as a first step because their contribution becomes systematically less important; see~\cite{Carlsson2010,Becker2017}. The magnitudes of the contributions to $t_0$ and $t_1$ have been checked by calculating the binding energy per nucleon in nuclear matter. The structure of the chiral interactions is such that each coupling in the DME functional is decomposed into a density-dependent coupling constant from short-range interactions and a density-dependent coupling function arising from long-range pion exchange.
The clean separation between $V_{\mathrm{SRG}}$ and $V_{\mathrm{CT}}$ allows us to model BHF correlations with a HF-level calculation within the DME by
combining the new coupling terms with previous work~\cite{Dyhdalo2017} that derived couplings for the long-range parts of the chiral potentials.
\newpage
\begin{acknowledgments}
The authors would like to thank A. Dyhdalo, C. Drischler, and J. A. Melendez for useful discussions. This work was supported in part by the National Science Foundation under Grants No. PHY-1614460 and PHY-1713901, the NUCLEI SciDAC Collaboration under DOE Grant No. DE-SC0008511, and MSU subcontract RC107839-OSU. Y. N. Z. acknowledges support by an FRIB-CSC Fellowship under Grant No. 201600090301.
\end{acknowledgments}
\newpage
\section{Introduction}
Magnetization dynamics is conventionally described by the
Landau-Lifshitz-Gilbert (LLG) equation \cite{dau},\cite{gil}, which
provides a plausible phenomenological model for many experimental
results. Recently, the LLG equation and the Gilbert damping term
have been derived from an effective Hamiltonian including the
radiation-spin interaction (RSI) \cite{ho}. It has been assumed
there that the spin system maintains quasi-adiabatic evolution.
Various kinds of relaxation processes are usually melded together
into a single damping term. Relativistic relaxation processes result
in the Gilbert damping term with one damping parameter, while for
the case of both exchange and relativistic relaxation the damping
term is a tensor with several damping parameters \cite{bar},
\cite{saf}.
The relaxation processes are specified by interactions of spins with
each other and with other constituents of the magnetic system. A
derivation of the damping term from first principles should
therefore start with a microscopic description of the interactions.
Even though such microscopic derivation of damping has been
performed for some relaxation processes (for instance, \cite{kam}),
a full version of derivation for the Gilbert damping term has not
yet been given, and in particular for a system in non-equilibrium.
In the present talk, we aim to derive a magnetization equation for a
general non-equilibrium spin system without specifying the
interaction Hamiltonian and the related relaxation processes. We start
with a system of spins precessing in the effective magnetic field
$\vec{\bf H}_{eff}$, neglecting for a moment their mutual interactions.
Then, at a fixed later time, the interactions in the system are switched
on and influence the original precessional motion.
The interactions are assumed to be time-dependent, and the spin
system evolves non-adiabatically out of equilibrium, trying to relax
to a new equilibrium magnetization. We perform a transformation,
analogous to the one used in the transition to the
interaction picture, to connect the density and magnetic moment
operators before and after the time at which the new non-equilibrium
dynamics starts, and to find an explicit expression for the
interaction contribution to the magnetization equation.
\section{Magnetization Equation}
Let us consider a quantum spin system defined by
\begin{equation}
\hat{\cal H} = \hat{\cal H}_0 + {\lambda} \hat{\cal H}_{I},
\label{1}
\end{equation}
where $\hat{\cal H}_0$ is the Zeeman Hamiltonian describing the
interaction of spins with an effective magnetic field
\begin{equation}
\hat{\cal H}_0 = - {\gamma} \sum_{i} \hat{S}_{i} \cdot \vec{\bf
H}_{eff}(t), \label{2}
\end{equation}
${\gamma}$ being the gyromagnetic ratio and $\hat{S}_i$ the
spin operator of the $i$th atom. The effective magnetic field
$\vec{\bf H}_{eff}$ is given by the variation of the energy with
respect to the magnetization, $\vec{\bf H}_{eff}=- {\delta}E(M)/{\delta}\vec{\bf
M}$, where $E(M)$ is the free energy of the magnetic system. This
field includes the exchange field, the anisotropy field, and the
demagnetizing field, as well as the external field, $\vec{\bf
H}_{ext}$.
The Hamiltonian $\hat{\cal H}_{I}$ represents other possible types
of interactions, which can include, for instance, higher order
spin-spin interactions. The interaction terms included in $\hat{\cal
H}_{I}$ are in general time-dependent, being switched on
adiabatically or instantly at a fixed time $t_0$. The parameter
${\lambda}$ in (\ref{1}) can be chosen small in order to take into
account the higher order effects perturbatively.
We introduce next the magnetic moment operator
\begin{equation}
\hat{\cal M} \equiv - \frac{{\delta}\hat{\cal H}}{{\delta}\vec{\bf
H}_{ext}}, \label{3}
\end{equation}
which is the response of the spin system to the external field. The
magnetization is defined as an ensemble average of the response
\begin{equation}
\vec{\bf M}=\langle \hat{\cal M} \rangle \equiv \frac{1}{V} {\bf \rm
Tr}\{ \hat{\rho} \hat{\cal M} \}, \label{4}
\end{equation}
where $V$ is the volume of the system, while $\hat{\rho}$ is the density operator satisfying the quantum
Liouville-von Neumann (LvN) equation
\begin{equation}
i \hbar \frac{{\partial}\hat{\rho}}{{\partial}t} + [ \hat{\rho},
\hat{\cal H} {]}_{-}=0. \label{llg}
\end{equation}
For systems in equilibrium,
the Hamiltonian itself satisfies the LvN equation and the density
operator is expressed in terms of the Hamiltonian. For
non-equilibrium systems, the density operator is constructed by
making use of the time-dependent adiabatic invariants
\cite{lewis},\cite{kim}.
To derive the magnetization equation, we proceed as follows. We
perform, on $\hat{\rho}(t)$, the transformation
\begin{equation}
\hat{\rho} \to \hat{\rho}_{int} \equiv \hat{U}(t_0,t) \hat{\rho}(t)
\hat{U}(t,t_0) \label{13}
\end{equation}
defined by the operator
\begin{equation}
\hat{U}(t_0,t) \equiv T\exp \{ \frac{i}{\hbar} \int_{t_0}^{t}
d{\tau} \hat{\cal H}_0({\tau}) \} \label{11} ,
\end{equation}
where $T$ denotes the time-ordering operator. For systems with
$\hat{\cal H}_0$ constant in time, the operator $\hat{U}(t_0,t)=
\exp\{ (i/{\hbar}) \hat{\cal H}_0 (t-t_0) \}$ leads to the
interaction picture, which proves to be very useful for all forms of
interactions since it distinguishes among the interaction times. For
our system with both $\hat{\cal H}_0$ and $\hat{\cal H}_{I}$
dependent on time, the operator (\ref{11}) plays the same role,
removing the unperturbed part of the Hamiltonian from the LvN
equation.
Substituting Eq.(\ref{13}) into Eq.(\ref{llg}) yields
\begin{equation}
i{\hbar} \frac{{\partial}\hat{\rho}_{int}}{{\partial}t} = {\lambda}
\Big[ \hat{\cal H}_{int}, \hat{\rho}_{int} {\Big]}_{-}, \label{14}
\end{equation}
where
\begin{equation}
\hat{\cal H}_{int}(t) \equiv \hat{U}(t_0,t) \hat{\cal H}_{I}(t)
\hat{U}(t,t_0). \label{15}
\end{equation}
The magnetic moment operator and the magnetization become
\begin{equation}
\hat{\cal M}=\hat{\cal M}_0 + \hat{\cal M}_{I} \label{16}
\end{equation}
and
\begin{equation}
\vec{\bf M} = \frac{1}{V} {\bf \rm Tr} \{ \hat{\rho}_{int} (
\hat{\cal M}_{0,int} + \hat{\cal M}_{I,int} ) \}, \label{18}
\end{equation}
where
\begin{equation}
\hat{\cal M}_0 = - \frac{{\delta}\hat{\cal H}_0}{{\delta}\vec{\bf
H}_{ext}} = {\gamma} \sum_{i} \hat{S}_i \label{6}
\end{equation}
and
\begin{equation}
\hat{\cal M}_{I} \equiv - {\lambda} \frac{{\delta} \hat{\cal
H}_{I}}{{\delta} \vec{\bf H}_{ext}}, \label{17}
\end{equation}
while $\hat{\cal M}_{0,int}$ and $\hat{\cal M}_{I,int}$ are related
with $\hat{\cal M}_0$ and $\hat{\cal M}_{I}$, respectively, in the
same way as $\hat{\cal H}_{int}$ is related with $\hat{\cal H}_{I}$.
The operators $\hat{\cal M}_{0}^{a}$, $a=1,2,3$, fulfill the $SU(2)$
magnetization algebra
\begin{equation}
\Big[ \hat{\cal M}_{0}^{a} , \hat{\cal M}_{0}^{b} {\Big]}_{-} =
i{\hbar}{\gamma} {\varepsilon}^{abc} \hat{\cal M}_{0}^{c}, \label{8}
\end{equation}
where the summation over repeated indices is assumed.
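For a single spin 1/2, where $\hat{\cal M}_0^a = \gamma(\hbar/2)\sigma^a$, the algebra (\ref{8}) can be checked explicitly; the short numerical sketch below does so in illustrative units $\hbar = \gamma = 1$.
\begin{verbatim}
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
M = [0.5 * s for s in sigma]          # M0^a for hbar = gamma = 1

eps = np.zeros((3, 3, 3))             # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

for a in range(3):
    for b in range(3):
        lhs = M[a] @ M[b] - M[b] @ M[a]
        rhs = 1j * sum(eps[a, b, c] * M[c] for c in range(3))
        assert np.allclose(lhs, rhs)
print("[M^a, M^b] = i eps^{abc} M^c verified (hbar = gamma = 1)")
\end{verbatim}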
The operators $\hat{\cal M}_{0,{int}}$, $\hat{\cal M}_{I,{int}}$ are
generally used to calculate the magnetic susceptibility
\cite{white}. Let us now show how these operators determine the time
evolution of the magnetization. The evolution in time of $\hat{\cal
M}_{0,{int}}$ is given by the equation
\[
\frac{{\partial}\hat{\cal M}_{0,{int}}}{{\partial}t} =
\frac{i}{\hbar} \hat{U}(t_0,t) \Big[ \hat{\cal H}_0, \hat{\cal M}_0
{\Big]}_{-} \hat{U}(t,t_0)
\]
\begin{equation}
= {\gamma} \hat{\cal M}_{0,{int}} \times \vec{\bf H}_{eff}.
\label{19}
\end{equation}
It describes the magnetization precessional motion with respect to
$\vec{\bf H}_{eff}$.
The equation for $\hat{\cal M}_{I,{int}}$ describes more complex
magnetization dynamics governed by the interaction Hamiltonian
$\hat{\cal H}_{I}$. However, this dynamics includes the precessional
motion as well. Introducing
\begin{equation}
\vec{\bf D}_{I} \equiv \frac{i}{\hbar} \Big[ \hat{\cal H}_0 ,
\hat{\cal M}_{I} {\Big]}_{-} - {\gamma} \hat{\cal M}_{I} \times
\vec{\bf H}_{eff} \label{20}
\end{equation}
to represent deviations from the purely precessional motion, we
bring the equation for $\hat{\cal M}_{I,{int}}$ into the following
form
\[
\frac{{\partial}\hat{\cal M}_{I,{int}}}{{\partial}t} = {\gamma}
\hat{\cal M}_{I,{int}} \times \vec{\bf H}_{eff}
\]
\begin{equation}
+ \hat{U}(t_0,t) \Big( \frac{{\partial}\hat{\cal
M}_{I}}{{\partial}t} + \vec{\bf D}_{I} \Big) \hat{U}(t,t_0).
\label{21}
\end{equation}
Taking the time-derivative of Eq.(\ref{18}) and using
Eqs.(\ref{14}),(\ref{19}) and (\ref{21}), we finally obtain
\begin{equation}
\frac{d\vec{\bf M}}{dt} = - |{\gamma}| \vec{\bf M} \times \vec{\bf
H}_{eff} + \vec{\bf D}, \label{22}
\end{equation}
where
\begin{equation}
\vec{\bf D} \equiv {\lambda} \langle \frac{1}{i{\hbar}} \Big[
\hat{\cal M}, \hat{\cal H}_{I} {\Big]}_{-} \rangle + \langle
\frac{{\partial}\hat{\cal M}_{I}}{{\partial}t} + \vec{\bf D}_{I}
\rangle. \label{23}
\end{equation}
Therefore, Eq.(\ref{22}) is the magnetization equation for the
system specified by (\ref{1}). This equation is general since it is
derived without specifying $\hat{\cal H}_{I}$. The $\vec{\bf
D}$-term contains all effects that the interactions, $\hat{\cal
H}_{I}$, can have on the magnetization precession, so that
Eq.(\ref{22}) is complete.
The contribution of $\hat{\cal H}_{I}$ to the $\vec{\bf D}$-term in
the magnetization equation can be divided into two parts. One is
proportional to $\langle [ \hat{\cal M} , \hat{\cal H}_{I} {]}_{-}
\rangle$ and is related to the change in the density matrix when the
interactions of $\hat{\cal H}_{I}$ are switched on. The second part
$\langle \frac{{\partial} \hat{\cal M}_{I}}{{\partial}t} + \vec{\bf
D}_{I} \rangle$ originates from the change in the magnetization
itself. Which part of $\vec{\bf D}$ is dominating depends on the
nature of the interactions. For the interactions related to the
relaxation processes, the $\vec{\bf D}$-term represents a general
form of magnetization damping.
\section{Example: spin-spin interactions}
The spin-spin interactions among the spins in the system introduce
many body effects, which can be treated perturbatively in the weak
coupling regime. In this case the $\vec{\bf D}$-term can be expanded
in powers of $\lambda$. To demonstrate this, we consider the
spin-spin interactions of a specific type. The interaction between
spins is usually an exchange interaction of the form
\begin{equation}
-2J \sum_{i,j} \hat{S}_i \hat{S}_j = - \frac{2J}{{\gamma}^2}
\hat{\cal M}_0^2, \label{52}
\end{equation}
the coupling constant $J$ being called the exchange integral. We
generalize the ansatz given by Eq.(\ref{52}) by assuming that the
exchange integral depends on the magnetization and introduce the
spin-spin interactions as follows
\begin{equation}
{\lambda}\hat{\cal H}_{I} = \sum_{i,j} J^{ab}(M) \hat{S}_i^a
\hat{S}_j^b, \label{53}
\end{equation}
where $J^{ab}={\lambda}M^aM^b$. Since $\hat{\cal H}_{I}$ does not
depend explicitly on the external field, its contribution to the
magnetic moment operator vanishes, $\hat{\cal M}_{I}=0$.
The non-vanishing commutator $[\hat{\cal M}_0, \hat{\cal H}_{I}
{]}_{-}$ in Eq.(\ref{23}) is the only contribution of the spin-spin
interaction to the magnetization equation, resulting in
\begin{equation}
\vec{\bf D} = \frac{\lambda}{\gamma} \vec{\bf M} \times
\vec{\bf{\Omega}}, \label{54}
\end{equation}
where
\begin{equation}
{\Omega}^a \equiv \langle \Big[ \hat{\cal M}_0^a, \hat{\cal M}_0^b
{\Big]}_{+} \rangle M^b, \label{55}
\end{equation}
and
\begin{equation}
\Big[ \hat{\cal M}_0^a , \hat{\cal M}_0^b {\Big]}_{+} \equiv
\hat{\cal M}_0^a \hat{\cal M}_0^b + \hat{\cal M}_0^b \hat{\cal
M}_0^a. \label{56}
\end{equation}
The correlation function $G^{ab} \equiv \langle \Big[ \hat{\cal
M}_0^a , \hat{\cal M}_0^b {\Big]}_{+} \rangle$ is the sum of spin
correlation functions,
\begin{equation}
G^{ab} = 2{\gamma}^2 \sum_{i} \sum_{j \neq i} \langle \hat{S}_i^a
\hat{S}_j^b \rangle, \label{57}
\end{equation}
excluding the self-interaction of spins. For the standard ansatz
given in Eq.(\ref{52}), $\vec{\bf D}=0$ and the magnetization
equation does not change.
If the spin-spin interactions are turned on at $t=t_0$, so that
$\hat{\rho}(t_0)=\hat{\rho}_0(t_0)$, then, integrating both sides of
Eq.(\ref{14}), we find
\begin{equation}
\hat{\rho}_{int}(t) = \hat{\rho}_0(t_0) +
\frac{\lambda}{i{\hbar}} \int_{t_0}^{t} d{\tau} \Big[ \hat{\cal
H}_{int}(\tau), \hat{\rho}_{int}(\tau) {\Big]}_{-}.
\label{58}
\end{equation}
Substituting Eq.(\ref{58}) into the definition of $G^{ab}$ yields
the equation
\[
G^{ab}(t) = G_0^{ab}(t_0)
\]
\begin{equation}
+ \frac{1}{\gamma} \int_{t_0}^t d{\tau} J^{cd}({\tau}) \Big(
{\varepsilon}^{ace} G^{ebd}({\tau}) + {\varepsilon}^{bce}
G^{aed}({\tau}) \Big), \label{59}
\end{equation}
where
\begin{equation}
G_0^{ab} \equiv \langle \Big[ \hat{\cal M}_0^a , \hat{\cal M}_0^b
{\Big]}_{+} {\rangle}_{0}, \label{60}
\end{equation}
which relates $G^{ab}$ to the third order correlation function, i.e.
the correlation function of the product of three magnetic moment
operators,
\begin{equation}
G^{abc} \equiv \langle \Big[ \Big[ \hat{\cal M}_0^a , \hat{\cal
M}_0^b {\Big]}_{+} , \hat{\cal M}_0^c {\Big]}_{+} \rangle.
\label{61}
\end{equation}
The correlation function $G^{abc}$, in turn, is related to the
fourth-order correlation function, and so on; we therefore have an
infinite hierarchy of coupled equations for the spin correlation functions.
For any practical calculation this infinite hierarchy has to be
truncated. The truncation defines an approximation scheme, which should
be chosen on the basis of the physical requirements for the system;
the appropriate scheme will depend on physical properties such as the
density and the strength of the interactions.
If the Hamiltonian $\hat{\cal H}_{I}$ is a small perturbation to the
original $\hat{\cal H}_0$, we can solve Eq.(\ref{58})
perturbatively. In the lowest, zeroth order in $\lambda$, we replace
$\hat{\rho}_{int}(t)$ by $\hat{\rho}_0(t_0)$, so that $G^{ab}
\approx G_0^{ab}(t_0)$. We choose the initial value for $G^{ab}$ as
\begin{equation}
\sum_{i} \sum_{j \neq i} \langle \hat{S}_i^a \hat{S}_j^b {\rangle}_0
= I^{ab} \label{62}
\end{equation}
with $I^{xx}=I^{yy}=0$, $I^{zz}=I$ and $I^{ab}=0$ for $a \neq b$. We
again choose the $z$-direction along the effective magnetic field,
which is taken to be uniform and static. Then the
$\vec{\bf D}$-term becomes, in component form,
\begin{eqnarray}
D_x & = & 2{\lambda} {\gamma} I M_y M_z,\\
D_y & = & -2{\lambda} {\gamma} I M_x M_z, \\
D_z & = & 0, \end{eqnarray}
producing two effects in the magnetization equation: the
magnetization now precesses about the shifted field $(H_z - 2{\lambda}I
M_z)\hat{\bf z}$, its $z$-component remaining constant in time,
$(d/{dt})M_z=0$, and the precession frequency is
\begin{equation}
\overline{\omega}_{0} \equiv {\omega}_0 \Big( 1- {\lambda}
\frac{2IM_z}{H_z} \Big). \label{64}
\end{equation}
Therefore, in the lowest order of perturbation theory, where $\vec{\bf
D}$ is linear in ${\lambda}$, the interaction shifts the direction and the
frequency of the precessional motion without introducing damping
effects. To find the role of the higher powers of ${\lambda}$ in
$\vec{\bf D}$, and to determine how they affect the magnetization
equation, a truncation of the chain of spin-correlation equations is
needed. This requires a consistent perturbative approach to the
hierarchy of coupled equations for the correlation functions.
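Both effects can be confirmed by direct numerical integration of Eq.~(\ref{22}) with the lowest-order $\vec{\bf D}$-term. In the sketch below all parameter values are illustrative (units with $\gamma = 1$); the integration shows that $M_z$ stays constant to machine precision while the extracted precession frequency matches Eq.~(\ref{64}).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

gamma, Hz, lam, I = 1.0, 1.0, 0.1, 0.5   # illustrative values

def rhs(t, M):
    Mx, My, Mz = M
    prec = -gamma * np.cross(M, [0.0, 0.0, Hz])
    D = np.array([ 2*lam*gamma*I*My*Mz,   # lowest-order D-term
                  -2*lam*gamma*I*Mx*Mz,
                   0.0])
    return prec + D

M0 = [0.6, 0.0, 0.8]
sol = solve_ivp(rhs, (0.0, 200.0), M0, max_step=0.01, dense_output=True)
print("max |Mz - Mz(0)|:", np.abs(sol.y[2] - M0[2]).max())

# Extract the precession frequency from the phase of Mx + i My
t = np.linspace(0.0, 200.0, 20000)
Mx, My = sol.sol(t)[0], sol.sol(t)[1]
phase = np.unwrap(np.angle(Mx + 1j * My))
omega_est = (phase[-1] - phase[0]) / (t[-1] - t[0])
omega_pred = gamma * Hz * (1.0 - 2.0 * lam * I * M0[2] / Hz)  # Eq. (64)
print(omega_est, "vs", omega_pred)                            # agree
\end{verbatim}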
\section{Conclusion}
We have derived a general form of magnetization equation for a
system of spins precessing in an effective magnetic field without
specifying the internal interactions. It can be applied in the study
of magnetization dynamics of any type, including non-equilibrium and
nonlinear effects, provided the interaction of individual spins with
each other and with other degrees of freedom of the system is
specified.
The $\vec{\bf D}$-term in the magnetization equation has been
obtained without using any approximation scheme. It is exact,
accumulates all effects of the internal interactions on the
magnetization precessional motion and can be a starting point for
practical calculations. For the spin-spin interactions, it is
determined by the spin correlation functions, which fulfil an
infinite chain of equations. A further analysis of the $\vec{\bf
D}$-term requires an approximation scheme to truncate the chain in a
consistent approach to higher order calculations.
In our talk, we have considered a specific type of the spin-spin
interactions, which do not contribute to the algebra of magnetic
moment operators. However, if spin-spin interactions depend
explicitly on the external field, the form of the algebra can
change. In this case, the total magnetic moment operator becomes
nonlinear in $\hat{\cal M}_0$, and this results in the magnetization
algebra with an infinite chain of commutation relations. The chain
has to be truncated in a way consistent with the truncation of the
chain of equations for the spin correlation functions in the same
approximation scheme.
Numerical computations along the lines developed in \cite{mars} can
provide further insight into the problem. This talk is based on the
work \cite{skkm}, where an extended list of references can be found.
\section{Introduction}
The Heisenberg $S = 1/2$ square lattice with competing antiferromagnetic
(AF) nearest- and next-nearest-neighbor interactions, $J_1$ on the sides
and $J_2$ on the diagonals of each square, presents a prototypical frustrated
magnetic system \cite{chandra_1988,darradi_2008}. As Fig.~\ref{fig:bonds}(a)
illustrates, when $J_1$ dominates, the ground state is N\'eel AF order, whereas
dominant $J_2$ gives a columnar AF state, and a quantum spin liquid (QSL) has
been proposed \cite{morita_2015,gong_2014,wang_2018} within the non-ordered
parameter regime ($0.4 \lesssim J_2/J_1 \lesssim 0.6$) between the two AF
phases. Despite several decades of intense study, there remains no consensus
over the exact nature of the QSL and the model continues to provide
a focal point for QSL research.
Until recently, most such research in both experiment and theory was focused
on homogenous systems, where all magnetic sites are equal. However, many real
materials display intrinsic inhomogeneity, as a result of impurities or
(counter-)ion substitution, that results in site or bond disorder. This
is known as quenched randomness, and the loss of translational symmetry
it entails makes the system challenging to study theoretically. However,
quenched randomness in quantum magnets can lead to specific ground states
with no long-ranged order, including the Bose glass \cite{rfwgf,rps,rakpr1,
rnwh,rmkg,Yu2012}, the Mott glass \cite{rogl,rakpr2,Yu2012,savary_2017}, the
random-singlet state \cite{rbl,rdsf,liu_2018,uematsu_2018,ren_2020,baek_2020},
and the valence-bond glass \cite{tarzia_2008,singh_2010,watanabe_2018}.
These phases of matter are closely related to certain types of QSL and thus
raise the question of whether randomness in a frustrated system can produce
qualitatively different types of quantum coherence, as opposed to only
destroying such coherence.
Here we investigate the magnetically disordered states
found in the series of compounds Sr$_2$CuTe$_{1-x}$W$_x$O$_6$. At first sight this system
seems well suited for exploring the phase diagram of the $J_1$-$J_2$
Heisenberg square lattice, because the two parent compounds, Sr$_2$CuTeO$_6$\
and Sr$_2$CuWO$_6$, are respectively good $J_1$ and $J_2$ systems, displaying
N\'eel and columnar AF order. However, our diffuse polarized neutron
diffraction and inelastic neutron scattering (INS) measurements, combined
with insight from mean-field and linear spin-wave calculations, show that
the Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\ family represents an altogether different but no less
interesting problem. We demonstrate that the random-bond model arising
from Te-W site disorder leads to a ground state of partially frozen moments
in alternating patches of N\'eel and columnar correlations. The patch sizes
depend on $x$, reaching a minimum of order 10 magnetic sites for $x = 0.4$.
Our calculations reproduce well the experimentally observed ground and
excited states, showing that disorder is more important than frustration
in determining the physics of Sr$_2$CuTe$_{1-x}$W$_x$O$_6$.
\begin{figure*}[t]
\includegraphics[width=\textwidth]{scwtof1}
\caption{{\bf The Heisenberg square lattice and Sr$_2$CuTe$_{1-x}$W$_x$O$_6$.} (a) Phase diagram
of the $J_1$-$J_2$ square-lattice Heisenberg model, showing the Ferromagnetic,
Columnar, and N\'eel AF states, as well as the frustrated parameter regimes
of QSL and Valence-Bond Crystal (VBC) behavior. (b) Magnetic interactions in
Sr$_2$CuTe$_{1-x}$W$_x$O$_6$, represented for the three cases where the counterions in each square
are both Te, both W, or one of each. The $J_1$ and $J_2$ parameters are those
obtained by quantum chemistry calculations \cite{vamshi_2020}; we note that
the nearest-neighbor interaction is always small in the presence of W.}
\label{fig:bonds}
\end{figure*}
\section{Materials}
The isostructural materials Sr$_2$Cu$B''$O$_6$ ($B''=\,$Te, W, Mo) are
layered antiferromagnets in which the network of Cu$^{2+}$ ions is well
described by the $J_1$-$J_2$ square-lattice model. Neutron scattering
measurements on large powder samples of the pure Te and W members of the
family have shown that the ground state and spin dynamics of Sr$_2$CuTeO$_6$\ are
dominated by the $J_1$ term \cite{koga_2016,babkevich_2016}, whereas in
Sr$_2$CuWO$_6$\ they are dominated by $J_2$ \cite{vasala_2014,walker_2016}. The fact
that Te$^{6+}$ and W$^{6+}$ have almost identical ionic radii \cite{shannon_1976}
gives every reason to expect that mixed compounds in the series between these
two end members might realize ideal random solid solutions interpolating
between the $J_1$ and $J_2$ limits. X-ray diffraction studies across the
doping series \cite{mustonen_prb_2018} have established that the chemical
structure of the mixed systems is indeed a true solid solution for all $x$,
and detailed characterization of the magnetic response by muon spin-rotation
($\mu$SR), specific-heat, magnetic susceptibility, and NMR measurements on
Sr$_2$CuTe$_{0.5}$W$_{0.5}$O$_6$\ \cite{mustonen_ncomms_2018,mustonen_prb_2018,watanabe_2018} indicate
no magnetic order above 19 mK over a wide range of doping, $0.1 \le x \le
0.6$, which is clearly different from a two-phase system of the end members.
In the quest to understand the dramatic difference between Sr$_2$CuTeO$_6$\ and Sr$_2$CuWO$_6$,
\textit{ab initio} quantum chemistry calculations \cite{vamshi_2020} have
demonstrated how the Cu-Cu superexchange paths change completely due to the
orbital hybridization of O 2$p$ with Te$^{6+}$ (empty 5\textit{p}) or W$^{6+}$
(empty 5\textit{d}) \cite{shannon_1976}. The magnetic interaction parameters
predicted by this analysis are shown in Fig.~\ref{fig:bonds}(b), and they
afford the key insight that shapes both the physics of Sr$_2$CuTe$_{0.5}$W$_{0.5}$O$_6$\ and the
applicability of our spin-wave methodology, namely that all the competing
bonds are very weak and hence strong local frustration is avoided. Because
the substitution of a nonmagnetic ion switches the dominant Cu-Cu interaction
so cleanly, while leaving the crystal structure basically unaltered, the random
Te-W distribution leads to a bond-disorder problem with bimodal distributions
of $J_1$ and $J_2$. Among other things, the concept of controlling a uniform
$J_2/J_1$ ratio to obtain a QSL by substitution is not valid. Nevertheless,
one may still anticipate randomness-induced magnetic disorder, as suggested by
magnetic measurements on samples spanning the doping range $0.1 \le x \le 0.6$,
\cite{mustonen_ncomms_2018,mustonen_prb_2018,watanabe_2018}, and recent
studies have stressed the very rapid destruction of N\'eel order at small
$x$ \cite{hong_2021,yoon_2021} (but not at high $x$ \cite{hong_2021}). These
results have been interpreted theoretically in terms of a random-singlet
state \cite{uematsu_2018,hong_2021} and of a valence-bond glass
\cite{watanabe_2018}. However, INS measurements show dispersive excitations
similar to spin waves \cite{vamshi_2020,hu_2021} and a partial freezing of
random moments has been reported both at $x = 0.5$ below 1.7 K \cite{hu_2021}
and at $x = 0.05$ below 0.5 K \cite{yoon_2021}. These somewhat contradictory
findings leave the true nature of the magnetic ground state in Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\
undetermined.
\begin{figure}[b]
\includegraphics[width = \columnwidth]{scwtof2}
\caption{{\bf Polarized neutron diffraction.} Intensities measured for powder
samples of Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\ with $x = 0.2$ (gray diamonds) and $x = 0.5$ (black
squares). Data for $x = 0.5$ are translated upwards by 0.06 arb.~u.~for
clarity. The inset shows intensities calculated using the random-bond model
of Fig.~\ref{fig:bonds}(b); four of these curves are scaled and superimposed
on the data in the main panel.}
\label{fig:diffraction}
\end{figure}
\section{Magnetic ground state}
The instantaneous spin structure factor, $S({\bf Q}) = \sum_j e^{i {\bf Q}
\cdot {\bf r}_j} \left\langle {\bf S}_0 \cdot {\bf S}_j \right\rangle$, can be
probed by diffuse polarized neutron diffraction. Here ${\bf Q}$ is the momentum
transfer and ${\bf S}_j$ the spin operator at lattice site ${\bf r}_j$. We
collected diffraction patterns on the D7 diffractometer at the Institut Laue
Langevin (ILL) using powder samples of 10-15 g of the $x = 0.2$ and $x = 0.5$
materials in Al cans at $T = 1.5$ K. The experimental wavelength of $4.8 \;
\mathrm{\AA}$ corresponds to $3.55$ meV, and thus captures fluctuations in the
lowest 20\% of the full band width \cite{babkevich_2016,walker_2016} by energy,
which nevertheless constitute the vast majority of fluctuations by spectral
weight (as we will show by INS). Thus our measurements observe the quasielastic
response, or slowly fluctuating part of $S({\bf Q})$. The magnetic contribution
to the scattering extracted by polarization analysis \cite{ehlers_2013} is
shown in Fig.~\ref{fig:diffraction} and indicates a disordered state whose
peak scattering intensity moves from $0.8 \; \mathrm{\AA^{-1}}$ for $x = 0.2$
to $0.6 \; \mathrm{\AA^{-1}}$ for $x = 0.5$, the former lying close to the
magnetic Bragg peak $(0.5,0.5)$ of N\'eel order and the latter to $(0.5,0)$
of columnar order.
To interpret the diffraction data, the interaction parameters shown in
Fig.~\ref{fig:bonds}(b) motivate a ground state for the Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\ system
whose essential behavior is captured by considering only the strongest bonds,
i.e.~$J_1^{\mathrm{Te}}$ and $J_2^{\mathrm{W}}$. As Fig.~\ref{fig:spinconfig}(a)
makes clear, the way that the introduction of W eliminates so many $J_1$
bonds leads to a somewhat dilute interaction network, and in particular we
observe that direct $J_1$-$J_2$ frustration, in the form of $J_1$-$J_1$-$J_2$
triangles, is absent in this limit. Although the physics of the ground and
excited states in the random Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\ system can be understood directly
from Fig.~\ref{fig:spinconfig}(a), we restore the weaker couplings in our
quantitative modelling of both. For this we substitute the calculated
interaction parameters \cite{vamshi_2020} shown in Fig.~\ref{fig:bonds}(b)
by the rather similar values extracted from spin-wave fits to the INS spectra
of Sr$_2$CuTeO$_6$\ \cite{babkevich_2016} and Sr$_2$CuWO$_6$\ \cite{walker_2016}, which are
$J_1^{\mathrm{Te}} = 7.6$ meV, $J_2^{\mathrm{Te}} = 0.6$ meV, $J_1^{\mathrm{W}} = 1.0$
meV, and $J_2^{\mathrm{W}} = 8.5$ meV, while the undetermined coupling
$J_1^{\mathrm{Te,W}}$ is set to zero.
\begin{figure}[p]
\includegraphics[width = \columnwidth]{scwtof3}
\caption{{\bf Ground-state spin configurations.} (a) Examples of
random-bond networks for Sr$_2$CuTe$_{1-x}$W$_x$O$_6$ with $x \approx 0$,
$x = 0.4$, and $x \approx 1$; weak interactions are omitted for clarity.
Thick gray lines at $x = 0.4$ indicate two representative geometrically
frustrated paths, with five bonds being the shortest possible. (b) Spin
configuration for $x = 0.4$, matching the bond network in panel (a).
The color code quantifies the correlations around each square,
$\sum_{\Box} {\bf S}_i \cdot {\bf S}_{i+1}$ ($i = \left\lbrace 1,2,3,4
\right\rbrace$ labelling the four sites), which vary from N\'eel (red) to
columnar (blue) character. Because the spins are of fixed size but rotate in
three dimensions, shorter arrows indicate an out-of-plane component. (c)
Calculated structure factor, $S(\mathbf{Q})$, showing the evolution from
N\'eel to columnar order. Only for $x = 0$ and 1 are the peaks very sharp,
as expected for long-range order.}
\label{fig:spinconfig}
\end{figure}
The spin configuration corresponding to the random-bond network for $x = 0.4$
in Fig.~\ref{fig:spinconfig}(a) is illustrated in Fig.~\ref{fig:spinconfig}(b).
We calculate these configurations by updating all sites in a random sequence,
orienting spin $i$ to minimize its energy, $E_i = \sum_{j} J_{ij} \, {\bf S}_i \!
\cdot \! {\bf S}_j = {\bf S}_i \! \cdot \! {\bf m}_i$, in the local classical
mean field, ${\bf m}_i = \left\langle J_{ij} {\bf S}_j \right\rangle$, due to
all neighboring spins, $j$. We repeat this procedure until the sum of
differences in total energy from the previous 100 updates is below $10^{-9}$
meV for a system of 50$\times$50 sites. The spin magnitude is fixed to $\langle
S \rangle = 1/2$ and the calculation is performed for 10 different random
initial configurations at each value of $x$. We stress that all the spin
configurations we obtain for $0 < x < 1$ are non-coplanar as a result of
the randomness and the remaining frustration [marked in gray in
Fig.~\ref{fig:spinconfig}(a)].
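A minimal implementation of this relaxation scheme is sketched below. The bond list encoding the Te/W-dependent couplings is assumed to be constructed elsewhere according to the occupancy rules of Fig.~\ref{fig:bonds}(b); only the local mean-field update and a convergence criterion of the type described above are shown.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def relax(n_sites, bonds, s=0.5, tol=1e-9, window=100, max_sweeps=100000):
    """Local mean-field relaxation; bonds is a list of (i, j, J) tuples."""
    nbrs = [[] for _ in range(n_sites)]
    for i, j, J in bonds:
        nbrs[i].append((j, J))
        nbrs[j].append((i, J))
    spins = rng.normal(size=(n_sites, 3))          # random initial state
    spins *= s / np.linalg.norm(spins, axis=1, keepdims=True)

    def energy():
        return sum(J * spins[i] @ spins[j] for i, j, J in bonds)

    hist = [energy()]
    for _ in range(max_sweeps):
        for i in rng.permutation(n_sites):          # random site sequence
            m = sum(J * spins[j] for j, J in nbrs[i])   # local mean field
            n = np.linalg.norm(m)
            if n > 0.0:
                spins[i] = -s * m / n               # minimize E_i = S_i . m_i
        hist.append(energy())
        if len(hist) > window and \
           sum(abs(hist[-k] - hist[-k - 1]) for k in range(1, window + 1)) < tol:
            break
    return spins
\end{verbatim}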
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{scwtof4}
\caption{{\bf Magnetic excitations in Sr$_2$CuTe$_{1-x}$W$_x$O$_6$.} (a-e) Powder INS spectra for
samples with $x = 0$, 0.2, 0.4, 0.5, and 1, normalized to the nuclear Bragg
peak for comparison between different $x$ values. (f-j) Spectra calculated
using the random-bond model. The magnetic form factor for Cu$^{2+}$ was
estimated following Ref.~\onlinecite{ITC} and an energy broadening ($dE = 1.2$ meV
for $x = 0$ and 1.8 meV for $x > 0$) was convolved with each calculated
spectrum to approximate the instrumental resolution. Regions of
$|\mathbf{Q}|$ not covered in experiment are dimmed in the modelled
spectra. The dashed lines mark the constant $|\mathbf{Q}|$ and energy
values analyzed in Fig.~\ref{fig:SQvsQ}.}
\label{fig:spectra}
\end{figure*}
Averaged diffraction patterns derived from such spin structures are shown
in Fig.~\ref{fig:spinconfig}(c). We find from the peaks at $(0.5,0)$ and
equivalent positions that columnar ordering predominates for $x \gtrsim 0.5$.
By contrast, the $(0.5,0.5)$ peak shows that long-ranged N\'eel order is
destroyed even at $x \approx 0.1$ \cite{mustonen_prb_2018,hong_2021}. At
intermediate $x$, only short-range magnetic correlations are present and the
scattering is not isotropic, but forms instead a rounded, cross-like pattern
in $S(\mathbf{Q})$. This indicates the presence of coexisting regions of very
short-ranged N\'eel and columnar order, a real-space picture confirmed in
Fig.~\ref{fig:spinconfig}(b). The sizes of these ``patches'' depend on $x$,
and are of order 10 magnetic sites for $x = 0.4$.
The powder averages of the diffraction patterns in Fig.~\ref{fig:spinconfig}(c)
are shown in the inset of Fig.~\ref{fig:diffraction}. Because the spins in our
mean-field calculations are static, the comparison with the slowly fluctuating
part of $S({\bf Q})$, as probed by our D7 measurements, is fully justified.
While it is clear that our model is entirely consistent with the observed
diffuse diffraction signal, we cannot exclude other models on the basis of
Fig.~\ref{fig:diffraction} alone. One prominent example is the response of
sizeable magnetic domains of N\'eel and columnar order, which would give only
broadened peaks at the Bragg positions in Fig.~\ref{fig:spinconfig}(c), but on
powder-averaging would be difficult to distinguish from our measurements.
However, such a superposition could not explain the complete lack of magnetic
order observed by $\mu$SR and INS for $0.05 \le x \le 0.6$, and next we turn
to our measurements of the spin dynamics to obtain further information.
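For reference, the structure factor and a powder average of the kind shown in the inset of Fig.~\ref{fig:diffraction} can be computed from a relaxed configuration as sketched below; the average here is taken over in-plane momentum directions only, which is a simplification of the full three-dimensional powder average.
\begin{verbatim}
import numpy as np

def structure_factor(spins):
    """S(Q) for an (L, L, 3) spin array on a square lattice (a = 1)."""
    L = spins.shape[0]
    ft = np.fft.fft2(spins, axes=(0, 1))
    return np.sum(np.abs(ft)**2, axis=-1) / L**2

def powder_average(sq, n_bins=60):
    # Bin S(Q) over shells of constant |Q| within the plane.
    L = sq.shape[0]
    q = 2.0 * np.pi * np.fft.fftfreq(L)
    qx, qy = np.meshgrid(q, q, indexing="ij")
    qabs = np.hypot(qx, qy).ravel()
    w = sq.ravel()
    bins = np.linspace(0.0, qabs.max(), n_bins + 1)
    idx = np.digitize(qabs, bins)
    return np.array([w[idx == b].mean() if np.any(idx == b) else 0.0
                     for b in range(1, n_bins + 1)])
\end{verbatim}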
\section{Spin dynamics}
Our INS experiments were performed at the time-of-flight
spectrometer MERLIN \cite{merlin} at the ISIS Neutron and Muon Source, using
10 g powder samples of the $x = 0.2$, 0.4, 0.5, and 1 materials in Al cans,
with an incoming neutron energy of 45 meV and at a temperature of 5 K. In
the spectra shown in Figs.~\ref{fig:spectra}(a-e), a thermally adjusted
background (recorded above 100 K) was subtracted to
remove the phonon contribution at larger \ensuremath{|\mathbf{Q}|}. The $x = 0$ data are those
of Ref.~\onlinecite{babkevich_2016}. The pure compounds Sr$_2$CuTeO$_6$\ and Sr$_2$CuWO$_6$\ show spin
waves dispersing respectively from the zone centers of N\'eel and columnar
order. Bands of strong scattering found around 16 meV for $x = 0$ and 18 meV
for $x = 1$ correspond to van Hove singularities at the zone boundaries.
Although the spectra of the mixed Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\ compounds show strong broadening
in momentum and energy transfer, both the low-energy dispersive features
and the van Hove band remain present.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{scwtof5}
\caption{{\bf Dynamical structure factor.} Measured (left) and calculated
(right panels) $S(\ensuremath{|\mathbf{Q}|},E)$ at constant $\ensuremath{|\mathbf{Q}|} = 1.5\;\mathrm{\AA^{-1}}$ (top)
and constant $E = 4\;\mathrm{meV}$ (bottom panels). The respective
integration widths are $\Delta \ensuremath{|\mathbf{Q}|} = 0.2\;\mathrm{\AA^{-1}}$ and $\Delta
E = 2$ meV. The curves are offset along the $y$-axis for clarity.}
\label{fig:SQvsQ}
\end{figure}
The success of the random-bond model in reproducing the ground states of the
mixed systems (Fig.~\ref{fig:diffraction}) suggests its utility for analyzing
their excitations. Despite having no long-range order, the mean-field states
have frozen moments and thus we can compute powder-averaged INS
spectra by using linear spin-wave theory, as implemented in the software
package \textsc{spinw} \cite{spinW}. We define supercells of 10$\times$10 sites with
random bond distributions of the type shown in Fig.~\ref{fig:spinconfig}(b)
and periodic boundary conditions; for each value of $x$, we average the
results from five different random configurations. For maximally quantitative
modelling, we apply the series-expansion correction factor, $Z_c = 1.18$
\cite{singh_1989}, to our calculated energies. The resulting spectra, shown
in Figs.~\ref{fig:spectra}(f-j), make clear that the random-bond model
captures all the primary features of the measured spectra. Spin-wave-type
modes, with substantial broadening in \ensuremath{|\mathbf{Q}|}\ and energy, remain centered at
$0.8\;\mathrm{\AA^{-1}}$ for $x = 0.2$, transitioning to $0.6\;\mathrm{\AA^{-1}}$
for $x = 0.5$, while the calculated band-width reduction is in quantitative
agreement with the data.
For further comparison, in Fig.~\ref{fig:SQvsQ}(a) we show a constant-\ensuremath{|\mathbf{Q}|}\
cut through the spectrum at $\ensuremath{|\mathbf{Q}|} = 1.5\;\mathrm{\AA^{-1}}$, where strong
scattering is observed around 16 meV for Sr$_2$CuTeO$_6$\ and 18 meV for Sr$_2$CuWO$_6$. For
$x = 0.2$, 0.4, and 0.5, these van Hove peaks become a broad feature around
12-15 meV that is reproduced accurately by our modelling, as shown in
Fig.~\ref{fig:SQvsQ}(b). The increased scattering observed below 10 meV is
the tail of the broadened elastic line, which we do not model. Similarly,
a constant-energy cut at 4 meV, examined as a function of \ensuremath{|\mathbf{Q}|}\ in
Figs.~\ref{fig:SQvsQ}(c-d), captures the excitations dispersing
upwards from the magnetic zone centers. For Sr$_2$CuTeO$_6$\ and Sr$_2$CuWO$_6$, the first
spin-wave branches emerge respectively from $\ensuremath{|\mathbf{Q}|} = 0.8$ and $0.6\;
\mathrm{\AA^{-1}}$, while the excitations from the next Brillouin zone are
found at $\ensuremath{|\mathbf{Q}|} = 1.8$ and $1.3\;\mathrm{\AA^{-1}}$. As Te is replaced by W,
these low-energy excitations change rapidly to resemble those of Sr$_2$CuWO$_6$, such
that the spectra of the $x = 0.4$ and 0.5 compounds show the fingerprints
mostly of the $x = 1$ system [both the $(0.5,0)$ and $(0.5,1.5)$ features
strengthening from $x = 0.4$ to 0.5]. We stress that our modelling procedure
has no free parameters, but clearly reproduces all the essential features of
the Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\ system at a semi-quantitative level.
\section{Discussion and conclusions}
Thus we have shown that the Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\ series realizes not highly frustrated
magnetism but a random-bond model whose ground state at intermediate $x$ is
a partially frozen disorder. This situation is a consequence of the strong
suppression of $J_1$ bonds as soon as one neighboring Te site is substituted
by W \cite{vamshi_2020}: Fig.~\ref{fig:spinconfig}(a) illustrates that, if
one neglects the 10\% effect of the subdominant bonds, then no triangles
are created and the shortest frustrated paths consist of five bonds, which
furthermore are rather sparse within the bonding network. With such moderate
geometric frustration, short-range order forms over patches considerably
larger than the individual squares, as shown in Fig.~\ref{fig:spinconfig}(b).
This partial frustration release explains why a semiclassical spin-wave and
mean-field treatment captures the leading behavior of the maximally quantum
mechanical $S = 1/2$ spin system reasonably well, even when more complex
combinations of the different bonds ($J_1^{\rm Te}, J_2^{\rm Te}, J_1^{\rm W},
J_2^{\rm W}$) are used. Here we stress that quantum-fluctuation corrections to
our analysis can act to reduce some local moments to zero, and future neutron
diffraction studies could probe this site-dependent effect. Nevertheless, our
experimental data and calculations suggest that the ground state is a
percolating network of frozen, randomly oriented average moments, within
which some sites and patches may be fully fluctuating.
The possible existence of a weak frozen-moment phase is emerging as a key
question in the understanding of bond randomness in $S = 1/2$ quantum magnets.
Although early $\mu$SR measurements indicating no magnetic order down to sub-K
temperatures for Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\ with $0.1 \le x \le 0.6$, together with a $T$-linear
specific heat, were initially taken as evidence for a QSL state
\cite{mustonen_prb_2018,mustonen_ncomms_2018}, further $\mu$SR investigations
at very small $x$ are being interpreted \cite{hong_2021,yoon_2021} as evidence
for the onset of a dominant random-singlet phase. The distinction between
the random-singlet \cite{liu_2018,ren_2020,uematsu_2018} and valence-bond-glass
phases \cite{watanabe_2018} is that all singlets in the latter state have
finite gaps, whereas in the former they form a continuum of values to the
gapless limit. However, INS data indicating a transition from liquid to weakly
frozen behavior below 1.7 K for $x = 0.5$ \cite{hu_2021} and the $\mu$SR
observation of a frozen component below 0.5 K at $x = 0.05$ \cite{yoon_2021}
raise the question of whether a random frozen spin network can persist as an
intermediate regime as long-range order is destroyed by bond randomness.
Our results provide a qualitative basis on which to interpret these findings.
Although both the diffuse diffraction pattern and the spin-wave-type
excitations we observe are consistent with a random network, the finite
momentum resolution in our experiment and the lack of quantum corrections
in our model prevent an unambiguous conclusion. The fact that reports of
a finite frozen moment are restricted to the low \cite{yoon_2021} and high
\cite{hu_2021} ends of the substitution range for the suppression of
long-range order suggests that $x$ may be an important factor in controlling
the crossover to an entirely disordered (fluctuating, moment-free) ground
state. The other key factor is likely to be the residual frustration, where
our results suggest that Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\ may lack the degree of frustration required
to realize an unconventional all-singlet disordered state, but a system with
stronger local frustration could indeed do so. On this note we stress that
the Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\ family remains an excellent framework for studying the interplay
between randomness and frustration in $S = 1/2$ quantum magnets. The bond
randomness of Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\ also arises in systems including Ba$_2$Cu(Te,W)O$_6$
\cite{mustonen_2019,mustonen_2020} and Cr$_2$(Te,W)O$_6$ \cite{zhu_2014,
zhu_2015}, and we expect these compounds to allow a further investigation
of exotic magnetic states at the nexus of disorder and frustration.
In summary, we have presented neutron diffraction data and INS spectra for
the $J_1$-$J_2$ square-lattice system Sr$_2$CuTe$_{1-x}$W$_x$O$_6$\ with $0 \leq x \leq 1$ and
we have introduced a matching random-bond model based on \textit{ab initio}
calculations. The model diffraction pattern in the magnetically disordered
region ($0.1 < x < 0.5$) has a cross-like form that is fully consistent with
our diffuse polarized neutron diffraction measurements. The ground state
consists of small patches of predominantly nearest- or next-nearest-neighbor
correlated spins dictated by the nonmagnetic dopant sites. Powder spectra
obtained from the random-bond model reproduce the dispersive excitations and
suppressed band maximum observed in our INS experiments. These findings show
that it is the bond randomness in Sr$_2$CuTe$_{1-x}$W$_x$O$_6$, rather than the residual
frustration, that drives the physics of the system.
\section*{Acknowledgments}
We thank E. Cussen, A. Sandvik, and O. Yazyev
for helpful discussions. This work was funded by the Swiss National
Science Foundation, including by its Sinergia networks MPBH (Grant
Nos.~CRSII2\_141962/1 and CRSII2\_160765/1) and Nanoskyrmionics
(Grant No.~171003), by the European Research Council through the project
CONQUEST (Grant No.~259602) and the Synergy network HERO (Grant No.~810451), and by the
Leverhulme Trust through Research Project Grant No.~RPG-2017-109 and Early
Career Fellowship No.~ECF-2021-170. We thank the ILL and ISIS for allocating beamtime
for this study. Data collected at both facilities are available as Refs.~\onlinecite{ILLdoi1,ILLdoi2,RB1520052,RB1620093,RB1890186}.
\section{INTRODUCTION}
Light-front (LF) quantization provides an intuitive description (a
physical basis) of hadron structure that stays close to the relevant
degrees of freedom in high-energy scattering: many high-energy scattering
processes probe hadrons along a light-like direction, since
particles at very high energies travel close to the light cone.
More detailed discussions
of this main motivation for studying LF field theories can be found in
Refs. \cite{mb:adv,bigguy}
as well as in the lecture by Stan Brodsky \cite{stan}.
If the main application of LF field theory is supposed to be the phenomenology
of high-energy scattering processes, must one worry about the structure
of the vacuum in the LF formalism? At first one might think that the answer to this
question is no. However, since the naive (no 0-modes) LF vacuum is known to
be trivial, one might worry, for example, whether deep-inelastic structure
functions can be correctly calculated on the
LF in a theory like QCD, where the vacuum is known to have a nontrivial structure
and where this nontrivial vacuum structure is known to play an important
role in phenomenology.
It is well known that LF Hamiltonians allow for a richer
counter-term structure \cite{osu:all}, and
spontaneous symmetry breaking in
normal coordinates can manifest itself as
explicit symmetry breaking counter-terms
in the corresponding LF Hamiltonian. In other words, the
vacuum structure is shifted from states to fields.
Thus, one can account for
a nontrivial vacuum structure in the renormalization procedure.
Some immediate questions that arise in
this context are
\begin{itemize}
\item Can a LF Hamiltonian, with a trivial vacuum,
have the same ``physics'' (in the sense of the physical spectrum or
deep-inelastic structure functions) as an equal-time Hamiltonian
with a nontrivial vacuum?
\item What are the implications for renormalization, i.e. how does one
have to renormalize in order to obtain the same physics?
\item What is the structure of the effective interaction for the
non-zero-modes?
\end{itemize}
Of course, the general answer (i.e. for $QCD_{3+1}$) is difficult to find,
but the above questions have been studied in simple examples
\footnote{For a
more complete list of examples and
references on this topic, see Ref. \cite{bigguy}.} :
$QED/QCD_{1+1}$ \cite{eps,ho:vac,zhit}, Yukawa$_{1+1}$ \cite{mb:parity},
scalar theories (in any number of dimensions)
\cite{fr:eps}, perturbative
$QED/QCD_{3+1}$ \cite{mb:alex} and
``mean field models'': Gross-Neveu/NJL-model \cite{njl}.
The goal of such toy model studies is to
build intuition which one can hopefully apply to $QCD_{3+1}$
(using trial and error).
However, while these models have been very useful for studying nonperturbative
renormalization in 1+1 dimensional LF field theories, it is not clear
to what extend these results can be generalized to
sufficiently nontrivial theories in 3+1 dimensions.
\section{A 3+1 DIMENSIONAL TOY MODEL}
One would like to study a 3+1 dimensional model which goes
beyond the mean-field approximation (NJL!), but, on the other hand,
being too ambitious results in very difficult or unsolvable models.
\footnote{For example, demanding Lorentz invariance, chiral symmetry and
asymptotic freedom leaves QCD as the most simple model.}
We decided to place the following
constraints on our model:
\begin{itemize}
\item Most importantly, the model should be 3+1 dimensional,
but we do not require full rotational invariance.
\item The model should have spontaneous $\chi$SB (but not just mean field)
\item Finally, it should be solvable both on the LF and using a conventional
technique (to provide a reference calculation).
\end{itemize}
Given these constraints, the
simplest model that we found is described by the Lagrangian
\begin{eqnarray}
{\cal L} = \bar{\psi_k}\left[ \delta^{kl}\left(i\partial\!\!\!\!\!\!\not \;\; -
m \right)
-\frac{g}{\sqrt{N_c}}{\vec \gamma}_\perp {\vec A}^{kl}_\perp \right]\psi_l -
\frac{1}{2}
{\vec A}^{kl}_\perp \left(\Box +\lambda^2\right){\vec A}^{kl}_\perp ,
\label{eq:lager}
\end{eqnarray}
where $k,l$ are ``color'' indices ($N_c\rightarrow \infty$),
$\perp =x,y$ and where a
cutoff is imposed on the transverse momenta. A fermion mass was introduced
to avoid pathologies associated with the strict $m=0$ case.
$\chi SB$ can be studied by considering the $m\rightarrow 0$ limit of the
model.
The reasons for this bizarre choice of model [Eq. (\ref{eq:lager})]
are as follows.
If one wants to study spontaneous breaking of chiral symmetry, then
one needs to have a chirally invariant interaction to start with, which
motivates a vector coupling between fermions and bosons. However, we
restricted the vector coupling to the $\perp$ component of a vector field
since otherwise one has to deal with couplings to the bad current
$j^-$ \footnote{$j^-$ is bilinear in the constrained component of the
fermion field, which makes it very difficult to renormalize this component
of the current in the LF framework.}.
In a gauge theory, such couplings can be avoided by choice of gauge, but
we preferred not to work with a gauge theory, since this would give rise
to additional complications from infrared divergences.
Furthermore, we used a model with ``color'' degrees of freedom and
considered the limit where the number of colors is infinite, because such
a model is solvable, both on and off the LF. No interaction among the bosons
was included because this would complicate the model too much.
Finally, we used a cutoff on the transverse momenta because such a cutoff can
be used both on the LF as well as in normal coordinates and therefore
one can compare results from these two frameworks already for finite
values of the cutoff.
\section{DYSON-SCHWINGER SOLUTION OF THE MODEL}
Because we are considering the limit
$N_C \rightarrow \infty$, of Eq. (\ref{eq:lager}), the iterated
rainbow approximation (Fig. \ref{fig:rain})
for the fermion self-energy $\Sigma$ becomes exact, yielding
\begin{figure}
\unitlength1.cm
\begin{picture}(14,3.5)(.5,-12)
\special{psfile=regen.ps angle=-90 hscale=80 vscale=80}
\end{picture}
\caption{Typical Feynman diagram contributing to the fermion self-energy
in the large $N_C$ limit of the model.
No crossed ``gluon'' lines are allowed.}
\label{fig:rain}
\end{figure}
\begin{eqnarray}
\Sigma(p^\mu) &=& ig^2 \int \frac{d^4k}{(2\pi )^4} {\vec \gamma}_\perp
S_F(p^\mu-k^\mu){\vec \gamma}_\perp \frac{1}{k^2-\lambda^2+i\varepsilon}
\nonumber\\
&=& \not \! \! p_L\Sigma_L({\vec p}_L^2,{\vec p}_\perp^2) +
\Sigma_0({\vec p}_L^2,{\vec p}_\perp^2),
\label{eq:ds}
\end{eqnarray}
with:
\begin{equation}
S_F^{-1} = \not \! \! p_L\left[1-\Sigma_L({\vec p}_L^2,{\vec p}_\perp^2) \right]
+ \not \! \! p_\perp - \left[m+\Sigma_0({\vec p}_L^2,{\vec p}_\perp^2)\right] .
\end{equation}
These equations can be solved by iteration. From the self-consistently
obtained solution of the Dyson-Schwinger (DS) equation (\ref{eq:ds}) one can
extract the
physical mass of the fermion. For sufficiently large coupling constant,
the physical mass for the fermion remains finite in the limit
$m \rightarrow 0$, proving the spontaneous breakdown of chiral symmetry
in the model.
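To illustrate the iterative strategy, the schematic sketch below solves a one-dimensional Euclidean toy analogue of such a gap equation by damped fixed-point iteration. The kernel and parameters are purely illustrative and are not those of Eq.~(\ref{eq:ds}); the point is only the self-consistency loop and the survival of a dynamical mass as $m \rightarrow 0$ at strong coupling.
\begin{verbatim}
import numpy as np

p = np.linspace(0.01, 10.0, 200)      # Euclidean momentum grid, cutoff 10
w = np.gradient(p)                    # integration weights
g2, lam2, m = 8.0, 1.0, 0.0           # coupling^2, boson mass^2, current mass
K = 1.0 / (np.add.outer(p**2, p**2) + lam2)   # smooth toy kernel K(p,k)

def solve_gap(mix=0.5, tol=1e-10, max_iter=10000):
    M = np.ones_like(p)               # initial guess for the mass function
    for _ in range(max_iter):
        # toy gap equation:
        # M(p) = m + g2 * int dk k^2 K(p,k) M(k) / (k^2 + M(k)^2)
        M_new = m + g2 * (K * (w * p**2 * M / (p**2 + M**2))).sum(axis=1)
        if np.max(np.abs(M_new - M)) < tol:
            break
        M = (1.0 - mix) * M + mix * M_new   # damped update for stability
    return M

M = solve_gap()
print("dynamical mass M(p -> 0):", M[0])    # nonzero although m = 0
\end{verbatim}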
\section{\bf LF-SOLUTION OF THE MODEL}
Since we wanted to investigate the applicability of the effective LF
Hamiltonian formalism, we formulated the above model without explicit
zero-mode degrees of freedom. In principle, the calculation should thus be
straightforward, using standard numerical techniques, such as DLCQ
\cite{pa:dlcq}. However, in this approach it is hard to take full advantage
of the large $N_C$ limit, so it is difficult
to compare the obtained spectrum with the results from solving the DS
equation. Instead, we use the following 2-step procedure to obtain
a formal solution of the LF formulation
\begin{itemize}
\item[1.] First, we derive a self-consistent Green's function equation
which is equivalent
to the DLCQ calculation. The Green's function calculation was
originally derived by starting from the covariant calculation and performing
$k^-$ integrations first (throwing away zero modes in $k^+$).
In order to convince even the skeptics
that this procedure is equivalent to DLCQ,
we demonstrate numerically that,
for finite and fixed DLCQ parameter $K$, the spectrum obtained
by diagonalizing the DLCQ matrix and the spectrum obtained
by solving the Green's function equation self-consistently
\footnote{Replacing integrals by finite sums in order to account for
the finite DLCQ parameter $K$.}
are identical.
\item[2.] In the next step we compare the self-consistent Green's function
equation with the DS equation. In order to facilitate the comparison with
the LF calculation, we
rewrite the DS equation (\ref{eq:ds}), using a spectral representation
for the fermion propagator $S_F$.
In the resulting DS equation with the spectral density,
we combine energy denominators, using Feynman parameter integral
and perform the longitudinal momentum integral covariantly.
\end{itemize}
Details of this procedure can be found in Ref. \cite{mb:hala}.
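As a toy illustration of the equivalence checked in step 1 (with a
stand-in matrix, not the actual DLCQ Hamiltonian of Ref. \cite{mb:hala}),
one can verify that the poles of the resolvent $G(E)=(E-H)^{-1}$ of a
fixed finite matrix $H$ reproduce the spectrum obtained by direct
diagonalization.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

K = 8                                     # role of the DLCQ parameter
H = 2*np.eye(K) - np.eye(K, k=1) - np.eye(K, k=-1)  # toy "Hamiltonian"
evals = np.linalg.eigvalsh(H)             # spectrum from diagonalization

det = lambda E: np.linalg.det(E*np.eye(K) - H)   # inverse propagator
grid = np.linspace(evals.min() - 1, evals.max() + 1, 4000)
v = [det(E) for E in grid]
poles = [brentq(det, a, b) for a, b, fa, fb
         in zip(grid[:-1], grid[1:], v[:-1], v[1:]) if fa*fb < 0]

print(np.allclose(np.sort(poles), evals))  # True: the spectra agree
\end{verbatim}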
The main results from the comparison between LF and DS equations are as
follows:
\begin{itemize}
\item The LF Green's function equation and the DS equation are identical
(and thus have identical solutions) if and only if
one introduces an additional (in addition to the self-induced inertias)
counterterm to the kinetic mass term for the fermion.
\item For fixed transverse momentum cutoff, this additional kinetic
mass term is finite.
\item The value of the vertex mass in the LF Hamiltonian is the same as
the value of the current mass in the DS equation.
\item In the chiral limit, mass generation for the (physical)
fermion occurs through
the kinetic mass counterterm.
\end{itemize}
\section{\bf IMPLICATIONS FOR RENORMALIZATION}
We have studied a 3+1 dimensional model with spontaneous breaking of chiral
symmetry both in a LF framework as well as in a Dyson-Schwinger framework.
Our work presents an explicit 3+1 dimensional example demonstrating
that there is no
conflict between chiral symmetry breaking and trivial LF vacua
provided the renormalization is properly done.
The effective interaction
(after integrating out 0-modes) can be summarized by a
few simple terms --- which are already present in the canonical
Hamiltonian.
The current quark mass in the covariant formulation and the ``vertex mass''
in the LF formulation
are the same if one does not truncate the Fock space and if one
uses the same cutoff on and off the LF.
This is perhaps surprising, since the vertex mass multiplies
the only term in the canonical Hamiltonian which explicitly breaks
(LF-) chiral symmetry. Thus one might think
that chiral symmetry breaking would manifest itself through a
nonzero vertex mass. If one does not truncate Fock space,
this is \underline{not} what happens in this model!
\footnote{Further studies show that
a renormalization of the vertex mass arises from a Tamm-Dancoff
truncation but not from integrating out zero-modes \cite{mb:meson}.}
$\chi$SB, in the sense of physical mass generation for the fermion,
manifests itself through a ``kinetic mass'' counterterm.
Even though we determined the kinetic mass counterterm by directly
comparing the LF and DS calculation, several methods are conceivable which
avoid reference to a non-LF calculation in order to set up the LF problem.
One possible procedure would be to impose parity invariance for physical
observables as a constraint \cite{mb:parity}.
M.B. would like to acknowledge Michael Frank and Craig Roberts for
helpful discussions on the Schwinger-Dyson solution to the model.
We would like to thank Brett vande Sande for carefully reading the manuscript.
This work was supported by the D.O.E. under contract DE-FG03-96ER40965
and in part by TJNAF.
\newcommand{\resection}[1]{\setcounter{equation}{0} \section{#1}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\thispagestyle{empty}
\begin{document}
\vspace*{3cm}
\begin{center}
\begin{Large}
\begin{bf}
ONE LOOP CALCULATION OF THE \\
$\epsilon_3$ PARAMETER\\
WITHIN THE EXTENDED BESS MODEL\\
\end{bf}
\end{Large}
\vspace{2cm}
\begin{large}
R. Casalbuoni, S. De Curtis and M. Grazzini\\
\end{large}
\vspace{4mm}
Dipartimento di Fisica, Universit\`a di Firenze, Largo E. Fermi, 2 - 50125
Firenze (Italy)\\
I.N.F.N., Sezione di Firenze, Largo E. Fermi, 2 - 50125
Firenze (Italy)\\
\end{center}
\vspace{4cm}
\begin{center}
\begin{bf}
ABSTRACT
\end{bf}
\end{center}
\vspace{5mm}
\noindent
The existence of a strongly interacting sector responsible for the electroweak
symmetry breaking is assumed. As a consequence vector and axial-vector
bound states may be formed. These resonances mix with
the Standard Model gauge bosons and are of primary phenomenological
importance for the LEP physics.
The extended BESS model is an effective scheme based on the symmetry group
$SU(8)_L\otimes SU(8)_R$,
describing in a consistent way the interactions among the pseudo-Goldstone
bosons, vector and axial-vector resonances and the standard gauge bosons.
In a previous paper, the contribution from extended BESS to the electroweak
oblique corrections was evaluated. However, only an estimate of the effects
coming from mass and wave function renormalization of the new resonances
was given. Here we complete the evaluation by computing explicitly these
effects. We confirm the previous result, that is, in spite of the great
precision of the present LEP measurements, the extended
BESS parameter space is not
very much constrained.
\newpage
\newcommand{\f}[2]{\frac{#1}{#2}}
\def\left [{\left [}
\def\right ]{\right ]}
\def\partial_{\mu}{\partial_{\mu}}
\def\partial^{\mu}{\partial^{\mu}}
\defg'{g'}
\defg''{g''}
\def\frac{g}{\gs}{\frac{g}{g''}}
\def{\epsilon}{{\epsilon}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\nonumber}{\nonumber}
\newcommand{\displaystyle}{\displaystyle}
\setcounter{page}{1}
\resection{Introduction}
Precision measurements are now giving serious restrictions on
the possibility of a new strong interacting sector being at the origin of
the symmetry breaking in the electroweak theory.
A natural consequence of such a strong sector is the occurrence of resonances
in the $TeV$ region. Among them, spin-1 resonances would be of particular
interest as they would, already through mixing effects, affect the
self-energies
of the standard model gauge bosons.
An effective scheme (the BESS model, ref. \cite{bess}),
introduced several years ago, describes a minimal version of such a scenario
by introducing a triplet of vector resonances and no Goldstone bosons; it is
based on a non-linear $SU(2)_L \otimes SU(2)_R$ chiral model.
If the strongly interacting sector gives rise to pseudo-Goldstone bosons, it
has been shown \cite{pseudo} that their loop contribution to the vector
boson self-energies is not negligible.
For this reason, in ref. \cite{bessu8}, an extended version of the BESS model,
based on a chiral $SU(8)_L\otimes SU(8)_R$, was given.
The model contains 60 pseudo-Goldstone bosons in the spectrum and, as a
further generalization, it describes also axial-vector resonances together
with vector ones.
The physics of this model, as far as the LEP experiments are concerned,
is such that it gives corrections only to the self-energies of the standard
electroweak gauge bosons.
These corrections have been evaluated in ref. \cite{bessu8} at one-loop level
by defining the theory with a cut-off $\Lambda$.
In that calculation the contributions coming from mass and wave function
renormalization of the new resonances were evaluated in an approximate
way using a dispersive representation.
Here we want to give a more complete analysis by including in the calculation
all the one-loop diagrams contributing to self-energies of
the new vector and axial-vector
resonances.
\resection{Gauge boson self-energies within the extended BESS model}
The extended BESS model effective Lagrangian, as introduced in ref. \cite{bessu8},
describes
vector and axial-vector gauge bosons interacting
with the Goldstone bosons and the
Standard Model (SM) vector bosons.
The effective Lagrangian is obtained by enlarging
the symmetry group of the BESS model based on a non-linear $SU(2)_L\otimes
SU(2)_R$ $\sigma$ model as described in ref. \cite{bess}, to
contain a ``hidden'' $SU(8)_L\otimes SU(8)_R$
gauge group, by introducing the covariant derivatives
with respect to the gauge fields associated with the ``hidden'' symmetry and with
respect to the SM gauge fields, and by adding together
all the independent invariant terms containing at
most two derivatives.
The new resonances $V^A$ and $A^A$ ($A$=1,...,63)
are described as Yang-Mills fields which acquire
mass through the same symmetry breaking mechanism which gives mass to the
ordinary gauge bosons.
Working in the unitary gauge for the $V$, $A$ and
SM gauge bosons leaves us with
60 Goldstones as physical particles in the spectrum.
We will assume that they acquire mass through radiative corrections.
An important property is that
the mixing terms among the SM gauge bosons and
the new vector and axial-vector resonances
are the same as in the axial-vector
extension of the $SU(2)$-BESS model \cite{assiali} plus an extra term
in the neutral sector involving the singlet (under $SU(3)_c\otimes
SU(2)_L\otimes U(1)_Y$) field $V_D$.
The neutral Lagrangian which gives the mixing among the new resonances and the
standard $SU(2)_L\otimes U(1)_Y$ gauge bosons is
\begin{eqnarray}
{\cal L}^{mix} &=&
\f{v^2}{8}\Big[ a(g W^3-g' Y)^2+b(g W^3+g' Y-g'' V^3)^2\nonumber\\
& & +b (\f{2}{\sqrt{3}}
g' Y-g'' V_D)^2+c(g W^3-g' Y+g'' A^3)^2+dg''^2 (A^3)^2\Big]
\end{eqnarray}
where
$a,~b,~c,~d$ are free parameters, $g$ and $g'$
are the gauge coupling constants
of $W$ and $Y$ respectively, and $g''$ is the self
coupling of the new gauge bosons.
{}From eq. (2.1) we see that the mixing
terms are proportional to $x\equiv g/g''$, so one can
decouple the new resonances from the SM gauge bosons
by taking
the large $g''$ limit (small $x$). The SM results are recovered by
putting $x=0$ and
fixing the normalization $a+cd/(c+d)=1$.
By diagonalizing the mass matrix one finds that
the SM gauge boson masses get corrected by terms of order
$(g/g'')^2$ (the photon remains massless),
while the vector resonances (and the axial-vector ones)
are degenerate, if one neglects the
weak corrections, with masses
proportional to $g''^2$.
In ref. \cite{bessu8} we have explicitly evaluated the interaction terms
among the SM gauge bosons, the new resonances and the pseudo-Goldstone bosons
(PGB). In the following application, we will need also the kinetic terms
for the $V$ and $A$ resonances which are triplets under $SU(2)$. The explicit
expression is given in ref. \cite{self}.
The BESS model parameter space is four-dimensional;
we will choose, as free parameters,
the masses of the vector and the axial-vector resonances $M_V$ and $M_A$,
their gauge coupling constant $g''$ and
$z=c/(c+d)$, which measures the ratio of the mixings
$W-A$ and $W-V$.
The mixing described in (2.1)
induces corrections to the self-energies of the SM.
We define the scalar part of the SM vector boson self-energies through the
relation
\begin{equation}
\Pi_{ij}^{\mu\nu}(p^2)=-i\Pi_{ij}(p^2)g^{\mu\nu}+p^\mu p^\nu~terms
\end{equation}
where the indices $i$ and $j$ run over the ordinary gauge vector bosons.
In the neutral sector we choose to work with the SM fields ($\theta$ is the
Weinberg angle)
\begin{eqnarray}
& &W^3 =c_\theta~ Z+s_\theta~ \gamma\\
& & Y =-s_\theta~ Z+c_\theta~ \gamma
\end{eqnarray}
The corrections from the vector and axial-vector resonances and from the
PGB of extended BESS are purely oblique and only affect
the scalar gauge boson
self-energy terms $\Pi_{WW}$, $\Pi_{33}$, $\Pi_{30}$, and $\Pi_{00}$.
It is then convenient to introduce the following combinations
(see ref. \cite{altarelli}):
\begin{eqnarray}
\epsilon_1&=&\frac{\Pi_{33}(0)-\Pi_{WW}(0)}{M^2_W}\\
\epsilon_2&=&\frac{\Pi_{WW}(M^2_W)-\Pi_{WW}(0)}{M^2_W}-
\frac{\Pi_{33}(M^2_Z)-\Pi_{33}(0)}{M^2_Z}\\
\epsilon_3 &=&\frac {c_\theta}{s_\theta}
\frac{\Pi_{30}(M^2_Z)-\Pi_{30}(0)}{M^2_Z}
\end{eqnarray}
In ref. \cite{bessu8} we have shown that the tree-level
contribution to the ${\epsilon}_i$ parameters coming from the mixing
of the $V$ and $A$ bosons with the $W$, $Z$ and $\gamma$ is the same as we found
for the $SU(2)$-BESS model in ref. \cite{self}, namely
\begin{equation}
\epsilon_1=0,~~~~\epsilon_2\simeq 0,~~~~\epsilon_3\simeq x^2 (1-z^2)
\end{equation}
where the last two results were obtained for large $M_{V,A}$
($M_{V,A}\gg M_{W,Z}$).
The reason is that
the new $V_D$ boson mixes only with $Y$ (see (2.1)) and, as a consequence,
it does not affect $\Pi_{30}$ and it contributes only to the definition
of the electric charge and $M_Z$.
This means that our result can be extended to
an $SU(N)_L\otimes SU(N)_R$ model; in fact,
in these models the new fields associated with the
diagonal generators mix only with the hypercharge field $Y$.
In addition to the self-energy corrections arising from the mixing, one has
loop
corrections.
In ref. \cite{bessu8} we have implemented the calculation of the ${\epsilon}_i$
parameters at one-loop level.
Since BESS is a non-renormalizable model,
the loop integrals were evaluated using a cut-off $\Lambda$.
In Fig. 1 we list the graphs contributing to ${\epsilon}_3$ at one-loop
level.
In ref. \cite{bessu8} the calculation was performed
neglecting the contributions to ${\epsilon}_3$ coming from the self-energies
of the new resonances (which are schematically represented in Fig. 1 by the
first two graphs) and
we presented a simple estimate of these contributions
through dispersive integrals.
In particular,
the first two graphs in Fig. 1 represent the propagators for the $V^3$
and $A^3$ gauge bosons, corrected and suitably
renormalized at the one-loop level, which must be inserted, via the mixing,
into the $\Pi_{30}$ function.
Here we will give the complete one-loop result for ${\epsilon}_3$.
\resection{One-loop evaluation of $\epsilon_3$}
The graphs contributing to the $V^3$ and $A^3$ self-energies are listed
in Fig. 2. They have been evaluated using the technique of
dimensional regularization. We have not included the tadpoles because they give
a constant (as a function of $p^2$) contribution to the self-energies
and so they can be disposed of by
mass renormalization. Furthermore, since ${\epsilon}_3$ is an isospin-symmetric
observable, we have assumed a common
non-vanishing mass $M_\pi$ for all the PGB.
The first contribution to the $A_3$ self-energy, given by the graph in
Fig. 2$a$, is
\begin{eqnarray}
\Pi_A^{(a)}(p^2)&=&
15~\f{g_{VA\pi}^2}{16\pi^2}\Big[\Big(\f{3}{4}-\f{1}{4}
\f{M^2_\pi}{M^2_V}+\f{p^2}{12M^2_V}\Big)\Big(\ln\f{\Lambda^2}{M_V M_\pi}
-\gamma\Big)\nonumber\\
& &+\f{M^2_V-M^2_\pi}{2p^2}\Big(1+\f{(M_\pi^2-M^2_V)^2}{12p^2M^2_V}
-\f{M^2_V+M_\pi^2}{4M^2_V}\Big)\ln\f{M^2_\pi}{M^2_V}\nonumber\\
& &+\Big(\f{1}{3}\f{M^2_\pi}{M^2_V}-\f{5}{3}-\f{1}{6}\f{p^2}{M^2_V}
-\f{(M^2_\pi-M^2_V)^2}{6M^2_V p^2}\Big)~\beta(M^2_V,M^2_\pi,p^2)
{}~\ln l(M_V,M_\pi,p^2)\nonumber\\
& &+\f{17}{12}-\f{7}{12}\f{M^2_\pi}{M^2_V}+\f{2}{9}\f{p^2}{M^2_V}
+\f{(M^2_\pi-M^2_V)^2}{12M^2_V p^2}\Big]
\end{eqnarray}
where the factor 15 gives the number of the pairs $V\pi$ circulating in the
loop (we are working in the unitary gauge where the $SU(2)$ triplet $\pi^a$
is eaten up by $W^\pm,Z$) and
\begin{eqnarray}
\beta(M_1^2,M^2_2,p^2)&=&\f{\sqrt{(M^2_1-M^2_2)^2-2(M^2_1+M^2_2)p^2+p^4}}
{{2p^2}}\\
l(M_1,M_2,p^2)&=&
\f{s_+(M_1,M_2,p^2)-s_-(M_1,M_2,p^2)}{s_+(M_1,M_2,p^2)+s_-(M_1,M_2,p^2)}\\
s_\pm(M_1,M_2,p^2)&=&\sqrt{(M_1\pm M_2)^2-p^2}
\end{eqnarray}
$\gamma$ is Euler's constant $(\simeq 0.577)$,
and $g_{VA\pi}$ is a trilinear coupling given in ref. \cite{bessu8}
\begin{equation}
g_{VA\pi}= \f{v}{4}zg''^2\f{x^2}{r_V}\left(\f{r_V}{r_A}-1\right)
\end{equation}
with $v\simeq 246~GeV$ and $r_{V,A}\simeq M_W^2/M_{V,A}^2$
in the large $g''$ limit.
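For numerical work the kinematic functions (3.2)--(3.4) can be
transcribed directly; below threshold, $p^2<(M_1+M_2)^2$, all of them are
real, which is the regime relevant at $p^2=M_Z^2$. The identity
$\beta=s_+s_-/(2p^2)$ provides a cheap consistency check. A minimal
sketch, with illustrative input values, is:
\begin{verbatim}
import numpy as np

# direct transcription of eqs. (3.2)-(3.4)
def s_plus(M1, M2, p2):  return np.sqrt((M1 + M2)**2 - p2)
def s_minus(M1, M2, p2): return np.sqrt((M1 - M2)**2 - p2)

def beta(M1sq, M2sq, p2):
    return np.sqrt((M1sq - M2sq)**2
                   - 2*(M1sq + M2sq)*p2 + p2**2) / (2*p2)

def ell(M1, M2, p2):
    sp, sm = s_plus(M1, M2, p2), s_minus(M1, M2, p2)
    return (sp - sm) / (sp + sm)

M1, M2, p2 = 1.2, 0.8, 0.1   # illustrative, below both thresholds
assert np.isclose(beta(M1**2, M2**2, p2),
                  s_plus(M1, M2, p2)*s_minus(M1, M2, p2)/(2*p2))
\end{verbatim}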
The second contribution to $\Pi_A$, given by the graph in Fig. 2$b$, is
\begin{eqnarray}
\Pi_A^{(b)} (p^2) &=& 16~\f{g''^2}{16 \pi^2} \Big[ a(M^2_A,M^2_V,p^2)
+b(M^2_A,M^2_V,p^2)\Big(\ln\f{\Lambda^2}{M_A M_V}-\gamma\Big)\nonumber\\
& &+c(M^2_A,M^2_V,p^2)
{}~\beta(M^2_A,M^2_V,p^2)~\ln
l(M_A,M_V,p^2)\nonumber\\
& &+d(M_A^2,M_V^2,p^2) \ln \f{M_V^2}{M^2_A}\Big]
\end{eqnarray}
where, again, the factor 16 is a multiplicity factor, and
\begin{eqnarray}
a(x_1,x_2,t)&=&\f{1}{72x_1 x_2 t}\Big(-405 x_1 x_2 (x_1+x_2)t+90(x_1^3+x_2^3)t
+6f(x_1,x_2,t)\nonumber\\
& &-452x_1 x_2 t^2-182(x_1^2+x_2^2)t^2+16 t^4+70(x_1+x_2)t^3\Big)\\
b(x_1,x_2,t)&=&\f{1}{12x_1 x_2} \Big(9(x_1^3+x_2^3)-36x_1 x_2(x_1+x_2)
-17 (x_1^2+x_2^2)t\nonumber\\
& &-50 x_1 x_2 t+7(x_1+x_2)t^2+t^3 \Big)\\
c(x_1,x_2,t) &=& \f{1}{6x_1x_2t}
\Big(-f(x_1,x_2,t)-8(x^3_1+x_2^3)t+32 x_1 x_2 (x_1+x_2)t
+18(x_1^2+x_2^2)t^2\nonumber\\
& &+32 x_1 x_2 t^2-8 (x_1+x_2)t^3-t^4\Big)\\
d(x_1,x_2,t)&=&\f{x_1-x_2}{24 x_1 x_2 t^2} \Big( f(x_1,x_2,t)+7(x_1^3+x_2^3)t
-43 x_1 x_2 (x_1+x_2)t-17(x_1^2+x_2^2)t^2\nonumber\\
& &-53x_1 x_2 t^2+9(x_1+x_2)t^3\Big)\\
f(x_1,x_2,t)&=&(x_1-x_2)^2(x_1^2+10 x_1 x_2+x_2^2)
\end{eqnarray}
The contribution to the $V^3$ self-energy coming from the graph in
Fig. 2$c$ can be evaluated from $\Pi_A^{(a)}$ with the substitution
$M_V\to M_A$.
{}From the graph in Fig. 2$d$ we get
\begin{eqnarray}
\Pi_V^{(d)}(p^2)&=&15~
\f{g^2_{V\pi\pi}}{8\pi^2}\Big[\Big(\f{p^2}{6}-M^2_\pi\Big)\Big(
\ln\f{\Lambda^2}{M^2_\pi}-\gamma\Big)\nonumber\\
& &+\f{8}{3}p^2 \alpha^3(M^2_\pi,p^2) \arctan\f{1}{2\alpha (M^2_\pi,p^2)}
-\f{7}{3}M^2_\pi+\f{4}{9}p^2\Big]
\end{eqnarray}
where, again, the factor 15 is a multiplicity factor,
\begin{equation}
\alpha(M^2_\pi,p^2)=\sqrt{\f{M_\pi^2}{p^2}-\f{1}{4}}
\end{equation}
and the trilinear coupling $g_{V\pi\pi}$ (as given in ref. \cite{bessu8}) is
\begin{equation}
g_{V\pi\pi}=\f{g''}{4}\f{x^2}{r_V}(1-z^2)
\end{equation}
Finally, the contributions corresponding to the graphs in Figs. 2$e$ and 2$f$
can be evaluated directly from $\Pi_A^{(b)}$ with the substitutions
$M_A\to M_V$ and $M_V\to M_A$ respectively.
We now introduce counterterms in order to get a canonical propagator at the
mass of the resonance
\begin{equation}
\Pi^{ren}_i(p^2)= \Pi_i(p^2)+p^2(Z_{3i}-1)+Z_i
\end{equation}
with $i=A,~V$. The renormalization conditions are
\begin{equation}
\left\{
\begin{array}{lcc}
Re \Pi^{ren}_i(p^2)\Big|_{p^2=M^2_i}=0~~~~~~~~& &\\
\nonumber\\
{\displaystyle \lim_{p^2\rightarrow M^2_i} \f{p^2-M^2_i}
{p^2-M^2_i+Re\Pi^{ren}_i(p^2)}}=1 & &
\end{array}
\right.
\end{equation}
where we have taken the real part of $\Pi^{ren}_i$ because the vacuum
polarization tensor
develops an imaginary part above threshold for the possible decay processes
of $A^3$ and $V^3$.
By substituting (3.15) in (3.16) we get
\begin{equation}
Z_{3i}-1=-\f{d}{dp^2}Re \Pi_i(p^2)\Big|_{p^2=M^2_i}
\end{equation}
\begin{equation}
Z_i=-Re\Pi_i(M_i^2)-M_i^2(Z_{3i}-1)
\end{equation}
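In practice the subtraction (3.15)--(3.18) amounts to the elementary
routine sketched below, where $\Pi$ may be any function returning
${\rm Re}\,\Pi(p^2)$; the finite-difference derivative and the test
self-energy are merely illustrative.
\begin{verbatim}
# on-shell subtraction, eqs. (3.15)-(3.18)
def renormalize(Pi, Mi2, h=1e-6):
    Z3m1 = -(Pi(Mi2 + h) - Pi(Mi2 - h)) / (2*h)   # eq. (3.17)
    Z = -Pi(Mi2) - Mi2 * Z3m1                     # eq. (3.18)
    return Z3m1, Z

def Pi_ren(Pi, Mi2):
    Z3m1, Z = renormalize(Pi, Mi2)
    return lambda p2: Pi(p2) + p2*Z3m1 + Z        # eq. (3.15)

Pi = lambda p2: 0.1*p2**2 - 0.3*p2 + 0.05         # toy self-energy
P = Pi_ren(Pi, Mi2=1.0)
print(abs(P(1.0)) < 1e-8)     # True: the pole sits at p^2 = Mi^2
\end{verbatim}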
We are now able to compute the first two contributions to $\Pi_{30}$
shown in Fig. 1. In fact, one has only to substitute the expressions for
$\Pi^{ren}_A(p^2)$ and $\Pi^{ren}_V(p^2)$ in the following
relation (see ref. \cite{self})
\begin{equation}
\Pi_{30}^{self}(p^2)=
-x^2 \f{s_\theta}{c_\theta} p^2\Big(\f{M^2_V}{p^2-M^2_V+
\Pi^{ren}_V(p^2)}
-z^2 \f{ M^2_A}{p^2-M^2_A+
\Pi^{ren}_A(p^2)}\Big)
\end{equation}
and then use it in eq. (2.7) for the calculation of ${\epsilon}_3^{self}$.
The value of ${\epsilon}_3^{self}$ predicted by the model is real
because $p^2=M^2_Z$ is below threshold. Notice that by considering
$\Pi^{ren}_A(p^2)=\Pi^{ren}_V(p^2)=0$ in (3.19), one recovers,
for large $M_{V,A}$, the tree level
value of ${\epsilon}_3$ as given in eq. (2.8).
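Schematically, (3.19) and (2.7) combine as in the following sketch; here
PiV and PiA stand for the renormalized self-energies constructed above,
soc denotes $s_\theta/c_\theta$, and setting the self-energies to zero
with $M_{V,A}\gg M_Z$ reproduces the tree-level value (2.8). All numbers
are illustrative.
\begin{verbatim}
# epsilon_3^self from eqs. (3.19) and (2.7); Pi_30^self(0)=0 because
# of the explicit p^2 factor.  Masses squared in TeV^2.
def Pi30_self(p2, x, z, soc, MV2, MA2, PiV, PiA):
    return -x**2 * soc * p2 * (MV2/(p2 - MV2 + PiV(p2))
                               - z**2 * MA2/(p2 - MA2 + PiA(p2)))

def eps3_self(MZ2, x, z, soc, MV2, MA2, PiV, PiA):
    return Pi30_self(MZ2, x, z, soc, MV2, MA2, PiV, PiA) / (soc*MZ2)

zero = lambda p2: 0.0
print(eps3_self(MZ2=0.0083, x=0.1, z=0.5, soc=0.55,
                MV2=4.0, MA2=8.0, PiV=zero, PiA=zero))
# ~0.0075 = x^2 (1 - z^2), the tree-level value of eq. (2.8)
\end{verbatim}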
The contribution to ${\epsilon}_3$ of the remaining graphs of Fig. 1 was calculated
in ref. \cite{bessu8}:
\begin{eqnarray}
\epsilon_3^{loop} &\simeq &
\frac{g^2}{16\pi^2} \frac{5}{8}\Big \{\left
(\log\frac{\Lambda^2}{M_\pi^2}-\gamma\right)
\Big[2-\frac{1}{2} \frac{x^4}{r_V^2}(1-z^2)^2\nonumber\\
& &-\frac{x^2}{r_V}(1-z^2\frac{r_V}{r_A})
\Big(1 -z^2 -z^2(1-\frac{r_V}{r_A})\Big)-z^2
\frac{x^2}{r_A}(1-\frac{r_A}{r_V})^2\Big]\nonumber\\
& &-\frac{x^2}{r_V}(1-z^2\frac{r_V}{r_A})
\Big(1 -z^2 -z^2(1-\frac{r_V}{r_A})\Big)\Big(A(M_\pi^2,M_V^2)+1\Big)\nonumber\\
& &-z^2\frac{x^2}{r_A}(1-\frac{r_A}{r_V})^2\Big(A(M_\pi^2,M_A^2)+1\Big)
\Big\}
\end{eqnarray}
where
\begin{eqnarray}
A(M_\pi^2,M^2)&=&
\frac{M^6+9 M^4 M_\pi^2}{(M^2-M_\pi^2)^3}
\log{\frac{M_\pi^2}{M^2}}\nonumber\\
& &+\frac{1}{6 (M^2-M_\pi^2)^3} (M_\pi^6-27 M_\pi^4 M^2-9 M_\pi^2 M^4
+35 M^6)
\end{eqnarray}
Furthermore, the numerical analysis has to include electroweak radiative
corrections (${\epsilon}_3^{rad}$). Our assumption, physically plausible in the
presence of
new degrees of freedom related to a larger scale, is to take the usual one-loop
radiative corrections of the SM with the
identification of the Higgs mass as the cut-off $\Lambda$ which regularizes
BESS
at high momenta. Finally we have
\begin{equation}
{\epsilon}_3={\epsilon}_3^{rad}+{\epsilon}_3^{self}+{\epsilon}_3^{loop}
\end{equation}
\resection{Numerical results}
We now want to evaluate numerically the ${\epsilon}_3$ parameter as predicted by the
extended BESS model.
In order to reduce the parameter space, we will assume, as we did in ref.
\cite{bessu8}, the validity of the Weinberg sum rules (WSR) \cite{losecco}.
In this way we can eliminate two parameters, for example $M_A$ and $g''$ in
favour of $M_V$ and $z$. We get \cite{bessu8}
\begin{equation}
M_A^2=\f{M_V^2}{z}~~~~~~~~g''=2\f{M_V}{v}\sqrt{1-z}
\end{equation}
with $0<z<1$.
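In other words, the WSR collapse the scan onto the $(M_V,z)$ plane; a
small helper (illustrative only, with $v=246~GeV$ as above) reads:
\begin{verbatim}
import math

# WSR reduction, eq. (4.1)
def wsr(MV, z, v=246.0):
    assert 0.0 < z < 1.0
    MA  = MV / math.sqrt(z)                  # from M_A^2 = M_V^2 / z
    gpp = 2.0 * (MV / v) * math.sqrt(1.0 - z)
    return MA, gpp

print(wsr(MV=1000.0, z=0.5))   # MA ~ 1414 GeV, g'' ~ 5.7
\end{verbatim}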
Then, the free parameters of our analysis are: $M_V$, $z$, $M_\pi$ and
$\Lambda$.
Using the most recent results from LEP experiments combined with the low
energy weak data \cite{lep}
\begin{equation}
{\epsilon}_3=(1.3\pm 3.1)\times 10^{-3}
\end{equation}
we can derive the bounds on the parameter space of the extended
BESS model coming from the estimate of the corrections to ${\epsilon}_3$.
Concerning the electroweak radiative corrections, following ref. \cite{cara},
we use
\begin{equation}
{\epsilon}_3^{rad}=0.00696
\end{equation}
corresponding to $m_{top}=150~GeV$ and $M_H=\Lambda=1.5~TeV$.
In Fig. 3 we give the 90\% C.L. allowed region in the plane $(z,M_V)$ for
two values of $M_\pi/\Lambda$ and $\Lambda=1.5~TeV$.
The effects of including the contribution coming from the $V$ and $A$
self-energies are generally small and
go in the direction of enlarging the allowed region.
This result is in accordance with the observation made in ref. \cite{bessu8}
that the $V$ and $A$
self-energy contributions are generally negative and decrease
the tree-level value ${\epsilon}_3^{tree}$.
The corrections due to the self-energies are more sizeable in
the region close to $z=1$.
In fact, by using the WSR, the expression
of ${\epsilon}_3^{loop}$ in eq. (3.20) vanishes in the limit $z\to 1$.
The reason for this is twofold. First, the contributions
from the three graphs with PGB loops (see Fig. 1) cancel among themselves;
second, the WSR imply that for $z=1$ also
$r_V=r_A$, corresponding to a complete degeneracy between the $V$ and $A$
resonances (notice also that ${\epsilon}_3^{tree}=0$ for
$z=1$). This latter property is no longer
true for the complete expression of ${\epsilon}_3$,
since the $V$ and $A$ resonances
receive different corrections from the self-energies, even at $z=1$.
In Fig. 4 we give the 90\% C.L. allowed region in the plane $(M_V,M_\pi)$ for
two values of $z$ and $\Lambda=1.5~TeV$. We notice that for $M_\pi>100~GeV$
the bounds do not depend very much on the mass of the PGB and that the
allowed region shrinks for increasing values of $z$.
Finally, for completeness, in Fig. 5 we also give the 90\% C.L.
allowed region in the plane $(z,M_\pi)$
for two values of $M_V$ and $\Lambda=1.5~TeV$.
{}From Figs. 3 and 4 we see that there is an absolute lower
bound on $M_V$, independent of $z$ and $M_\pi$,
of about $900~GeV$.
In conclusion, the calculation we have done here shows that the approximation
used in ref. \cite{bessu8} for the evaluation of the self-energies of the
new resonances was essentially correct. However, it should be noticed that
a sizeable difference arises in the region close to $z=1$, where, as explained
before, the different one-loop contributions for the vector and the
axial-vector resonances forbid the cancellation arising from the other
diagrams.
\section{Introduction}
Functional inequalities of Dirichlet forms are powerful tools in the study of Markov semigroups and spectral theory of Dirichlet operators, see \cite{B,D,G,Wbook} for accounts on functional inequalities and applications. To establish a functional inequality, one often needs to verify some conditions on the generator, for instance the Bakry-Emery curvature condition in the diffusion setting or the Lyapunov condition in a
general setting, see e.g. \cite{CGWW,GLWN,Wbook}. Since these conditions exclude generators with less regular coefficients, to establish functional inequalities in a more general setting one treats the singularity part as a perturbation. So, it is important to investigate perturbations of functional inequalities.
In \cite{BLW}, sharp growth conditions have been presented for perturbations of super Poincar\'e and weak Poincar\'e inequalities in the diffusion setting (i.e.\ the underlying Dirichlet form is local). Note that these two kinds of functional inequalities are general enough to cover all Poincar\'e/Sobolev/Nash type inequalities, and thus, have a broad range of applications. Recently, explicit sufficient conditions were derived in \cite{CW1, CW2, MRS, WW} for functional inequalities of stable-like Dirichlet forms. The aim of this paper is to extend perturbation results derived in \cite{BLW} to the non-local setting, so that combining with the existing sufficient conditions we are able to establish functional inequalities for more general Dirichlet forms. Due to the lack of the chain rule, the study of the non-local setting is usually more complicated. Nevertheless, we are able to present some relatively clean perturbation results, which are sharp as illustrated by some examples latter on, and when the range of jumps is finite, are natural extensions to the corresponding results derived earlier for diffusion processes.
Let $(E,d)$ be a Polish space equipped with the Borel $\si$-field $\F$ and a probability measure $\mu$. Let $\B(E)$ be the set of all measurable functions on $E$, and let $\B_b(E)$ be the set of all bounded elements in $\B(E).$ Let $q\in\B(E\times E)$ be non-negative with $q(x,x)=0, x\in E,$ such that
\begin{equation}\label{intro-02}
\ll:= \sup_{x\in E}\int_{E} \big(1\land d(x,y)^2\big)
q(x,y) \mu(\d y)<\infty.\end{equation}Then
\begin{equation}\label{intro-01}\beg{split}
&\GG(f,g)(x):= \int_{E} (f(x)-f(y))(g(x)-g(y)) q(x,y)\mu(\d y),\ \ \forall x \in E,\\
&f,g\in \scr A:= \big\{f\in \B_b(E): \GG(f,f)\in \B_b(E)\big\}\end{split}\end{equation*} gives rise to a non-negative definite bilinear map from $\A\times\A$ to $\B_b(E).$ We assume that $\A$ is dense in $L^2(\mu)$. Then it is standard that the form
\beq\label{intro-03} \beg{split}\E(f,g):&= \mu(\GG(f,g))\\
&=\iint_{E\times E} (f(x)-f(y))(g(x)-g(y))q(x,y)\mu(\d y)\mu(\d x),\ \ f,g\in \A\end{split}\end{equation} is closable in $L^2(\mu)$ and its closure $(\E,\D(\E))$ is a symmetric conservative Dirichlet form,
see e.g.\ \cite[Example 1.2.6]{FOT}. A typical example of the
framework is the $\aa$-stable-like Dirichlet form, where $E=\R^m$
with $d(x,y)=|x-y|$, and
$$q(x,y)= \ff{\tt q (x,y)}{u(y)|x-y|^{m+\aa}},\ \ \mu(\d x)= u(x)\d x$$ for some $\aa\in (0,2)$, non-negative $\tt q\in \B_b(\R^m\times\R^m)$, and positive $u\in\B(\R^m)$ such that $\mu(\d x)$ is a probability measure. In this case we have $\A\supset C_0^2(\R^m)$ which is dense in $L^2(\mu)$ for any probability measure $\mu$ on $\R^m$.
To investigate perturbations of functional inequalities using growth conditions as in \cite{BLW}, we fix a point $o\in E$ and denote $\rr(x)=d(o,x), x\in E.$ Now, for
a $\rho$-locally bounded measurable function $V$ on $E$ (i.e. $V$ is bounded on the set $\{x \in E: \ \rho(x)\le r\}$ for all $r>0$) such that
$\mu(\e^V)=1,$ let $\mu_V(\d x) = \e^{V(x)} \mu(\d x)$.
Since for every $f \in \A$, $\GG(f,f)$ given by (\ref{intro-01}) is a bounded measurable function on $E$, we have
$$
\EE_V(f,f):=\int_{E}\GG(f,f)(x)\,\mu_V(\d x) <\infty.
$$
Again by the argument in \cite[Example 1.2.6]{FOT}, the form
$$ \E_V(f,g):= \mu_V(\GG(f,g))=\iint_{E\times E} (f(x)-f(y))(g(x)-g(y))q(x,y)\mu(\d y)\mu_V(\d x)$$ defined for $f,g\in \A$ is closable in $L^2(\mu_V)$ and its closure $(\E_V,\D(\E_V))$ is a symmetric conservative Dirichlet form.
In this paper, we shall assume that $\EE$ satisfies a functional inequality and then search for conditions on $V$ such that $\EE_V$ satisfies the same type of functional inequality. In the following two sections, we study perturbations of
the super Poincar\'e inequality and
the weak Poincar\'e inequality respectively. Each section includes some typical examples to illustrate the main results. Throughout
the paper, we simply write $\GG(f)=\GG(f,f)$,
$\EE(f)=\EE(f,f)$ and $\EE_V(f)=\EE_V(f,f).$
\section{Perturbations of the super Poincar\'{e} inequality}
In Subsection 2.1 we state two results for perturbations of the super Poincar\'e inequality under growth conditions and present some examples to illustrate them. Subsection 2.2 contains the proofs of these results. Finally, in Subsection 2.3 we prove that the super Poincar\'e inequality is stable under perturbations satisfying a variation condition on the support of $q$, which extends a known result for diffusion processes.
\subsection{Main results and examples}
We consider the following super Poincar\'e inequality introduced in \cite{W00a, W00b}:
\beq\label{sp} \mu(f^2)\le r \EE(f) +\bb(r)\mu(|f|)^2,\ \ r>0, f\in \scr{D}(\EE),\end{equation}
where $\bb: (0,\infty)\to (0,\infty)$ is a decreasing function.
Note that if \eqref{sp} holds for some non-decreasing function $\bb$, it holds also for the decreasing function
$\tt\bb(r):=\inf_{s\in (0,r]} \bb(s)$ in place of $\bb(r).$
To establish a super Poincar\'e inequality for $\EE_V$, we need the following quantities. For any $n\ge 1$ and $k\ge 1$, let
\beg{equation*}\beg{split} &K_{n,k}(V)= \sup_{\rho(x)\le n+2}V(x)-\inf_{\rho(x)\le n+k+2}V(x),\\
&J_{n,k}(V)= \sup_{\rho(x)\le n+1}V(x)-2\inf_{\rho(x)\le n+k+2}V(x),\\
&\vv_{n,k}(V) = \sup_{m\ge n} \Big\{\bb^{-1}\big(1/[2\mu(\rho>m-1)]\big) \e^{K_{m,k}(V)}\Big\},
\end{split}\end{equation*}
where
$\bb^{-1}(s):=\inf\{r>0:\bb(r)\le s\}$ for $s>0$, with $\inf\emptyset :=\infty$ by convention.
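Since $\bb$ is decreasing, $\bb^{-1}$ is a generalized inverse and can be evaluated by bisection. The following minimal sketch is purely illustrative; the test rate function is of the logarithmic Sobolev type, for which $\bb^{-1}$ is known in closed form.
\begin{verbatim}
import math

# generalized inverse: beta_inv(s) = inf{r > 0 : beta(r) <= s}
def beta_inv(beta, s, r_hi=1.0, tol=1e-12):
    while beta(r_hi) > s:          # grow bracket until beta(r_hi) <= s
        r_hi *= 2.0
        if r_hi > 1e300:
            return float('inf')    # inf of the empty set
    r_lo = 0.0
    while r_hi - r_lo > tol * max(1.0, r_hi):
        mid = 0.5 * (r_lo + r_hi)
        if beta(mid) <= s:
            r_hi = mid
        else:
            r_lo = mid
    return r_hi

# test: beta(r) = exp(c(1 + 1/r)) gives beta_inv(s) = c/(log s - c)
c = 1.0
beta = lambda r: math.exp(c * (1.0 + 1.0/r))
print(beta_inv(beta, math.e**3), c/(3.0 - c))   # both ~0.5
\end{verbatim}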
When the jump is of finite range, i.e. there exists $k_0>0$ such
that $q(x,y)=0$ for $d(x,y)> k_0$, we have the following result
similar to \cite[Theorem 3.1]{BLW} for local Dirichlet forms.
\beg{thm} \label{finite-sp} Assume that $(\ref{sp})$ holds and there
exists $k_0\ge 1 $ such that $q(x,y)=0$ for $d(x,y)> k_0$.
\begin{itemize}
\item[$(1)$] If $\inf\limits_{n\ge1,k\ge k_0}\vv_{n,k}(V)=0,$ then the super Poincar\'{e} inequality
\eqref{th-sp-1} holds with
\beg{equation*}\beg{split} \bb_V(r):=\inf\Big\{&(1+8\lambda r')\e^{J_{n,k}(V)}\bb(s): s>0, r'\in (0,r], n\ge 1, k\ge k_0\\
&\quad {\rm such\ that}\ 8\vv_{n,k}(V)+s\e^{K_{n,k}(V)}\le \frac{r'}{2+16\ll r'} \Big\}.\end{split}\end{equation*}
\item[$(2)$] If $\inf\limits_{n\ge 1,k\ge k_0}\vv_{n,k}(V)<\infty,$ then the defective Poincar\'{e} inequality
\beq\label{th-sp-2}\mu_V(f^2)\le C_1\EE_V(f) +C_2\mu_V(|f|)^2,\ \ f\in\D(\EE_V)\end{equation} holds for some constants $C_1,C_2>0$.
\end{itemize}
\end{thm}
We note that according to \cite[Proposition 1.3]{RW} (see also
\cite[Proposition 4.1.2]{Wbook}), if $\EE_V$ satisfies the weak
Poincar\'e inequality (see Section 3 below) then the defective
Poincar\'e inequality (\ref{th-sp-2}) implies the Poincar\'e
inequality
\begin{equation*}\label{P}\mu_V(f^2)\le C\EE_V(f),\ \ f\in \D(\EE_V), \mu_V(f)=0\end{equation*}
for some constant $C>0.$
When the jump is of infinite range, we will need additional notation and assumptions to control the uniform norm appearing in the perturbed functional inequalities (see Lemmas \ref{L3.1} and \ref{L3.2} below), which is an essentially different feature from the diffusion setting. For any $n, k\ge 1$ and $\delta>1$, let
\beg{equation*}\beg{split}
&Z_n(V)= \sup_{\rho(x)\le n+1}V(x),\\
&\zeta_{n}(V)=\sup_{m\ge n} \Big\{\bb^{-1}\big(1/[2\mu(\rho>m-1)]\big) \e^{Z_{m+1}(V)}\Big\},\\
&t_{i,n,k}(\dd,V):= \bb^{-1}\Big(\ff 1 4 \dd^i\e^{-J_{n,k}(V)}\Big).\end{split}\end{equation*}
Moreover, let
\beg{equation*}\beg{split} &\gg_{n,k} =\iint_{\{d(x,y)>k, \rr(y)\ge n-1\}}q(x,y)\mu(\d y)\mu(\d x),\\
&\eta_{n,k} =\iint_{\{\rr(x)>n+k+2, \rr(y)\le n+1\}}q(x,y)\mu(\d
y)\mu(\d x).\end{split}\end{equation*} By \eqref{intro-02}, we see
that $\gg_{n,k}+\eta_{n,k} \downarrow 0$ as $n\uparrow \infty$ holds
for any $k\ge 1.$
\noindent We assume
\paragraph{(A)} \ There exist $\dd>1$ and sequences $\{(n_i,k_i)\}_{i\ge 1}\subset \mathbb N^2$ such that $n_i\uparrow\infty$ and\beg{enumerate}
\item[(A1)] $\lim_{i\to\infty}(\vv_{n_i,k_i}(V)+t_{i,n_i,k_i}(\dd,V)\e^{K_{n_i,k_i}(V)})=0;$
\item[(A2)] $\sum_{i=1}^\infty\big\{ \zeta_{n_i}(V)\gg_{n_i,k_i} +t_{i,n_i,k_i}(\dd,V)\e^{Z_{n_i}(V)}\eta_{n_i,k_i}\big\} <\infty.$\end{enumerate}
\
We shall let $I_\dd$ denote the set of all sequences $\{(n_i,k_i)\}\subset \mathbb N^2$ such that $n_i\uparrow \infty$ and (A1)-(A2) hold. Moreover, for any $r>0$ and $\{n_i,k_i\}\in I_\dd$,
let $D(r,\{(n_i,k_i)\})$ be the set of $j\in\mathbb N$ such that
\beg{equation}\label{A-01}\sup_{i\ge j}\big(8\vv_{n_i,k_i}(V)+ t_{i,n_i,k_i}(\dd,V)
\e^{K_{n_i,k_i}(V)}\big)\le \ff{1}{64\ll}\land \ff{c(\dd)r}{16},\ \ i\ge j,\end{equation} where $ c(\dd):= \big(\ff{\ss\dd-1}\dd\big)^2,$ and such that
\beq\label{A-02} \sum_{i=j}^\infty \Big(6\zeta_{n_i}(V)\gg_{n_i,k_i} +t_{i,n_i,k_i}(\dd,V)\e^{Z_{n_i}(V)}\eta_{n_i,k_i}\Big)\dd^{i+2}\le \ff{1}{256}.\end{equation}
By {\bf (A)}, we see that for any $r>0$ and $\{(n_i,k_i)\}\in I_\dd$, the set $D(r,\{(n_i,k_i)\})$ is non-empty.
\beg{thm}\label{th-sp} Assume that $(\ref{sp})$ holds.
\begin{itemize}
\item[$(1)$] If {\bf
(A)} is satisfied, then the super Poincar\'e inequality
\begin{equation}\label{th-sp-1}\mu_V(f^2)\le r\EE_V(f)+\bb_V(r) \mu_V(|f|)^2,\ \ r>0,\ \ f\in \scr{D}(\EE_V)\end{equation} holds with
$$\bb_V(r):= \inf\bigg\{2\dd^j:\ \{(n_i,k_i)\}\in I_\dd,\ j\in D(r,\{(n_i,k_i)\})\bigg\} <\infty,\ \ r>0.$$
\item[$(2)$] If $(A2)$ is satisfied and $(A1)$ is replaced by the following weaker assumption
\item[$(A1')$]$\limsup_{i\to\infty}(\vv_{n_i,k_i}(V)+t_{i,n_i,k_i}(\dd,V))<\infty,$\newline
then the defective Poincar\'e inequality \eqref{th-sp-2} holds for some $C_1,C_2>0.$
\end{itemize}
\end{thm}
The following example shows that Theorem \ref{th-sp} is sharp in some specific situations.
\begin{exa}\label{ex-sp-1}\rm
Let $E=\R^m$ with $d(x,y)=|x-y|$, and let $$q(x, y)=\frac{(1+|y|)^{m+\aa}\log(1+|y|)}{|x-y|^{m+\alpha}},\ \ \mu(\d
x)=\frac{c_{m,\aa}\d x}{(1+|x|)^{m+\alpha}\log(1+|x|)},$$ where
$ \alpha\in (0,2)$ and $c_{m,\aa}>0$ is the normalizing constant such that $\mu$ is a probability measure. It is easy
to see that all the assumptions in the introduction for
$(\EE,\scr{D}(\EE))$ are satisfied. We consider $V$ satisfying
\begin{equation}\label{ex-sp-1-01}
-s\varepsilon\log\log(e+|x|)-K \le V(x)\le (1-s)\varepsilon\log\log(e+|x|)+K,\ \ x\in \R^m
\end{equation}
for some constants $\varepsilon\in(0,1]$, $s \in [0,1]$ and $K \in \R$ such that $\mu(\e^V)=1.$
\begin{itemize}
\item[(1)] If $\vv<1$ then (\ref{th-sp-1}) holds
with
\begin{equation}\label{ex-sp-1-02}
\beta_V(r)=\exp\big(C_1(1+{r}^{-1/(1-\vv)})\big),
\end{equation}
for some constant $C_1>0$.
\item[(2)] $\beta_V$ in (1) cannot be replaced by any essentially smaller function, i.e.
when $V(x)=\vv\log\log(\e+|x|)+K_0$ for some constant $K_0\in \R$ such that $\mu(\e^V)=1$, the estimate
\eqref{ex-sp-1-02} is sharp in the sense that the super Poincar\'e
inequality (\ref{th-sp-1}) does not hold if
$$\lim_{r\to 0}r^{1/(1-\vv)}\log \bb_V(r)=0.$$
\item[(3)] In (1) the constant $K$ cannot be replaced by any unbounded positive function, i.e. for
any increasing function $\phi: [0,\infty)\to [0,\infty)$ with $\phi(r)\uparrow\infty$ as $r\uparrow\infty$, there exists $V$ such that \begin{equation}\label{ABC1}
\vv\log\log (\e+|x|)\le V(x)\le \varepsilon\log\log(\e+|x|)+\phi(|x|)
\end{equation} with $\mu(\e^V)<\infty,$ but the super Poincar\'e
inequality (\ref{th-sp-1}) with $\beta_{V}$
given by (\ref{ex-sp-1-02}) does not hold for $\mu_V(\d x):= \ff{\e^{V(x)}\d x}{\mu(\e^V)}.$
\item[(4)] If $\vv=1$, then
$(\EE_V,\scr{D}(\EE_V))$ satisfies the Poincar\'e
inequality
\begin{equation}\label{ex2.2}
\mu_V(f^2)\le C\EE_V(f,f)+\mu_V(f)^2,\ \ \ f \in \scr{D}(\EE_V)
\end{equation} with some constant $C>0$.
\end{itemize}
\end{exa}
\begin{proof} As (2) is included in \cite[Corollary 1.3]{WW}, we only prove (1), (3) and (4).
(a) According to \cite[Corollary 1.3(3)]{WW}, we know the
logarithmic Sobolev inequality holds for $(\EE,\scr{D}(\EE))$, i.e.\
the super Poincar\'{e} inequality \eqref{sp} holds for
$(\EE,\scr{D}(\EE))$ with the rate function
$\beta(r)=\exp\big(c_1(1+{r}^{-1})\big)$ for some constant $c_1>0$.
Next, by (\ref{ex-sp-1-01}), there exists a constant $c_2>0$
such that for $n$ large enough
\begin{equation}\label{e3.10}
\begin{split}
& K_{n,n}(V)\le \vv\log\log n+c_2,\ \ J_{n,n}(V)\le (1+s)\vv \log\log n+c_2,\ \\
& Z_{n}(V)\le (1-s)\vv \log\log n+c_2,\ \ \zeta_n(V)\le \frac{c_2}{\log^{1-(1-s)\vv}n},\ \ \vv_{n,n}(V)\le
\frac{c_2}{\log^{1-\vv} n},\\
&\gg_{n,n}\le \frac{c_2}{n^{\alpha}},\ \ \eta_{n,n}\le \frac{c_2}{n^{\alpha}\log(1+n)}.
\end{split}
\end{equation}
For any $\delta>1$, taking
$\{n_i\}=\{\delta^i\}_{i=1}^{\infty}$, and $k_i=n_i$, we
get that $t_{i,n_i,k_i}(\dd,V)\le {c_3}{i}^{-1}$ for $i$ large
enough and some constant $c_3>0$. So, assumption {\bf (A)} is
fulfilled.
Moreover, by (\ref{A-01}) and (\ref{A-02}), it is easy to check that
for $r>0$ small enough we have
\begin{equation*}
\Big\{j \ge1: j \ge c_4r^{-\frac{1}{1-\vv}}\Big\}\subseteq D(r,\{(n_i,k_i)\}).
\end{equation*}
Then, the assertion in (1) follows from Theorem \ref{th-sp}(1) and \eqref{e3.10}.
(b) Let $ \psi(r)= (1+\log(1+r))\land \e^{\phi(r)}$ for $r\ge 0.$
Then $1\le\psi\le \e^\phi, \psi(r)\uparrow\infty$ as $r\uparrow\infty$ and $\psi(r)\le 2\log r$ for large $r$. Let
$$V(x)= \vv\log\log (\e+|x|) +\log\psi(|x|).$$ Then $V$ satisfies (\ref{ABC1}) and $\mu(\e^V)<\infty.$ Up to a normalization constant we may simply assume that $\mu_V(\d x)= \e^{V(x)}\d x.$
Now, suppose that (\ref{th-sp-1}) holds with $\bb_V$ given by
(\ref{ex-sp-1-02}).
For any $n\ge1$, let $f_n\in C^\infty(\R^m)$ satisfy
\begin{equation*} f_{n}(x)
\begin{cases}
& =0,\ \ \ \ \ \ \ \quad \,\,|x|\le n,\\
& \in [0,1],\ \ \ \ \quad n\le |x|\le 2n,\\
& =1,\ \ \ \ \ \ \ \ \quad |x|\ge 2n,
\end{cases}
\end{equation*} and $|\nabla f_n|\le \frac{2}{n}.$ Then, there exists a constant $c_5>0$ independent of $n$ such that
\begin{equation}\label{AB0}
\begin{split}
\Gamma(f_n,f_n)(x)&=c_{m,\aa}\int\frac{(f_n(y)-f_n(x))^2}{|y-x|^{m+\aa}}\,\d y\\
&\le \frac{4c_{m,\aa}}{n^2}\int_{\{|y-x|\le n\}}\frac{\d y}{|y-x|^{m+\alpha-2}}+c_{m,\aa}\int_{\{|y-x|> n\}}\frac{\d y}{|y-x|^{m+\alpha}}\\
&\le \frac{c_5}{n^\alpha},\quad n\ge1.
\end{split}
\end{equation}
According to the definition of $f_n$ and the increasing property of $\psi$, there exists $c_6>0$ such that
\begin{equation}\label{proof-e-0100}\mu_V(f_n^2)\ge \frac{c_6 \psi(n)}{n^{\aa}\log^{1-\vv}n},\ \ n\ge 2.\end{equation}
On the other hand, since $\psi(r)\le 2\log r$ for large $r$, we have
$$\mu_V(|\cdot|>n)\le c_7 \int_n^\infty \ff{\psi(r)\d r}{r^{1+\aa}\log^{1-\vv}r }\le
\ff{c_8\log^{\vv} n}{n^\aa},\ \ n\ge 2.$$ Then
$$\mu_V(f_n)^2\le \mu_V(f_n^2) \mu_V(|\cdot|>n) \le \ff{c_8\log^{\vv}n}{n^\aa}\mu_V(f_n^2),\ \ n\ge 2.$$
Combining this with (\ref{th-sp-1}), (\ref{AB0}) and
(\ref{proof-e-0100}), we obtain
\beq\label{AD}1-\ff{c_8\log^{\vv}n}{n^\aa}\bb_V(r)\le\ff{c_5r}{n^\aa\mu_V(f_n^2)}
\le \ff{c_9r\log^{1-\vv}n}{\psi(n)},\ \ r>0, n\ge 2\end{equation}
for some constant $c_9>0.$ Since $\psi(n)\le 2\log n$ for large $n$,
it is easy to see that
$$r_n:= \bb_V^{-1}\Big(\ff{n^\aa}{2c_8\log^{\vv}n}\Big)\le \ff{c_{10}}{\log^{1-\vv}n}$$ holds for some constant $c_{10}>0$ and large $n$. Therefore, it follows from (\ref{AD}) that
$$\ff 1 2 \le \lim_{n\to\infty} \ff{c_9r_n\log^{1-\vv}n}{\psi(n)}=0,$$ which is a contradiction.
(c) If $\vv=1$ in (\ref{ex-sp-1-01}), the estimate (\ref{e3.10}) is
still true. According to Theorem \ref{th-sp}(2), the defective
Poincar\'e inequality (\ref{th-sp-2}) holds for $(\EE_V,
\scr{D}(\EE_V))$. On the other hand, by \cite[Theorem 1.1(3)]{WW},
the weak Poincar\'e inequality holds for $(\EE_V, \scr{D}(\EE_V))$.
Therefore, the required Poincar\'e inequality follows from \cite[Proposition 1.3]{RW} or
\cite[Proposition 4.1.2]{Wbook}. \end{proof}
Finally, we consider an example with finite range of jumps to
illustrate Theorem \ref{finite-sp}.
\begin{exa}\label{ex2}\rm
Let $E=\R^m$ with $d(x,y)=|x-y|$ and let
$$q(x, y)=\frac{\e^{|y|^\kk}}{{|x-y|}^{m+\alpha}}1_{\{{|x-y|}\le 1\}} ,\ \ \mu(\d x)=c_{m,\kappa}\e^{-|x|^{\kappa}}\d x $$ for some
constants $0<\alpha<2$, $\kappa>1$ and $c_{m,\kk}\ge 1$ such that
$\mu$ is a probability measure.
It is easy to see that all the assumptions in the introduction for
$(\EE,\scr{D}(\EE))$ are satisfied. Consider $V$ satisfying
\begin{equation}\label{ex2.0}
-C_1(1+|x|^{\theta-1})-K \le V(x)\le C_1(1+|x|^{\theta-1})+K,\quad
x\in \R^m
\end{equation}
for some constants $\theta \in (1,\kappa]$, $C_1>0$ and $K\in\R$
such that $\mu(\e^V)=1.$
\begin{itemize}
\item[(1)] If $\theta<\kk,$ then the super Poincar\'e inequality holds
for $(\EE_V, \scr{D}(\EE_V))$ with
$$
\beta_V(r)=C_2\exp\Big[C_2 \log^{\frac{\kappa}{\kappa-1}}(1+r^{-1})
\Big]
$$
for some positive constants $C_2>0.$
\item[(2)] Let $\theta=\kk$. Then, if $C_1>0$ is small enough, the Poincar\'e inequality \eqref{ex2.2} holds for some constant $C>0.$
\end{itemize}
\end{exa}
\begin{proof} According to \cite[Example 1.2(2)]{CW2}, we know the super Poincar\'{e} inequality
\eqref{sp} holds for $(\EE,\scr{D}(\EE))$ with
\beq\label{DG0}\beta(r)=c_1\exp\Big[c_1\log^{\frac{\kappa}{\kappa-1}}(1+
r^{-1})\Big]\end{equation} for some constant $c_1>0$.
(1) By (\ref{ex2.0}) and the fact that $\theta<\kappa$, one can find
some constants $c_2, c_3>0$ such that for $n$ large enough
\begin{equation*}
\begin{split}
& K_{n,n}(V)\le c_2n^{\theta-1},\ \ J_{n,n}(V)\le c_2n^{\theta-1},\
\ \vv_{n,n}(V)\le \e^{-c_3n^{\kappa-1}}.
\end{split}
\end{equation*}
Therefore, for every $r$ small enough, taking
$$n=c_4\log^{\frac{1}{\kappa-1}}(1 +{r}^{-1})$$ and
$$s=\exp\big(-c_5\log(1+{r}^{-1})\big)$$ for some constants
$c_5\gg c_4\gg 1$, we obtain
\begin{equation*}
8\vv_{n,n}(V)+s\e^{K_{n,n}(V)}\le \frac{r}{2(1+8\ll r)}.
\end{equation*}
The first required assertion, the super Poincar\'{e} inequality for
$(\EE_V,\scr{D}(\EE_V))$, follows from Theorem \ref{finite-sp}(1) and
all the estimates above.
(2) Let $\theta=\kk$. By \eqref{ex2.0}, (\ref{DG0}) and the definition of $\mu$, for $n$ large enough we have
\begin{equation*}
K_{n,n}(V)\le 3C_1n^{\kappa-1},\ \ \bb^{-1}(\mu(|x|>n-1)^{-1})\le
\e^{-c_6n^{\kk-1}},
\end{equation*} for some constant $c_6>0$ depending only on $\bb$ and $\mu$.
So, if $C_1< c_6/3$, then $\inf_{n\ge1}\vv_{n,n}(V)<\infty$.
Therefore, by Theorem \ref{finite-sp}(2), the defective Poincar\'e
inequality holds for $(\EE_V, \scr{D}(\EE_V))$. Finally, according
to \cite[Proposition 2.6]{CW2}, the weak Poincar\'e inequality holds
for $(\EE_V, \scr{D}(\EE_V))$. Therefore, the Poincar\'e inequality
holds for $(\EE_V, \scr{D}(\EE_V))$.\end{proof}
To show that in Example \ref{ex2}(2) it is essential to assume that
$C_1>0$ is small, we present below a counterexample inspired by
\cite[Proposition 5.1]{BLW}.
\beg{prp} In the situation of Example $\ref{ex2}$, let $\theta=\kk, \aa\in (0,1)$ and $m=1.$ Let
$$V(x)= K_0+ L \sum_{n=1}^\infty 1_{[nH, (n+1)H)}(x)(n+1)^{\kk-1} \Big( 2n+1-\ff{2x}H\Big),$$
where $H>4, L>\ff{\kk H^\kk}{H-2}$ and $K_0\in \R$ are constants
such that $\mu(\e^V)=1.$ Then $(\ref{ex2.0})$ holds for some
constant $C_1>0$ and $K\in\R$; however, for any $C>0$, the
Poincar\'e inequality $(\ref{ex2.2})$ does not hold.\end{prp}
\beg{proof} It suffices to disprove the Poincar\'e inequality, for which we are going to construct a sequence of functions $\{f_n\}\subset\A$ such that
\begin{equation*}\label{DPP} \lim_{n\to\infty} \ff{\EE_V(f_n)}{{\rm Var}_{\mu_V} (f_n)}=0.\end{equation*}
For $n\ge 1$, let
\begin{equation*}
f_{n}(x)=\begin{cases}
& \frac{\int_{Hn+1}^{x}\exp(y^{\kappa}-V(y))\d y}
{\int_{Hn+1}^{H(n+1)-1}\exp(y^{\kappa}-V(y))\d y},\ \ \ x \in (Hn+1,H(n+1)-1) ,\\
& \qquad\quad1,\ \ \ \ \ \ \qquad\qquad \,\,\,\,x \in [H(n+1)-1,+\infty),\\
& \qquad\quad0,\ \ \ \ \ \ \ \ \qquad \qquad\text{otherwise},
\end{cases}
\end{equation*}
which is a bounded Lipschitz continuous function on $\R$, so that $f_n\in\A$. In the following calculations $C$ stands for a constant which varies from line to line but is independent of $n$ (may depend on $H$ or $L$). We simply denote $K_n=L(n+1)^{\kk-1}$ for $n\ge 1$, so that
$$V(x)=K_0+ \sum_{n=1}^\infty 1_{[nH, (n+1)H)}(x)K_n\Big(2n+1-\ff{2x}H\Big).$$
(a) Estimate on ${\rm Var}_{\mu_V}(f_n).$ For $n$ large enough, since
$z\mapsto\big(z^{\kappa}-K_{n+1}(2n+3-\ff{2z}H)\big)$ is increasing, we have
\begin{equation*}
\begin{split}
\mu_{V}(f_{n}^2) &\ge \mu_{V}\big(H(n+1)\le x \le H(n+2)\big)\\
&\ge C\int_{H(n+1)}^{H(n+2)}\frac{\exp\big(K_{n+1}(2n+3-\ff{2z}H)-z^{\kappa} \big)}{\kappa z^{\kappa-1}+\frac{2K_{n+1}}{H}} \d\bigg(z^{\kappa}-K_{n+1}(2n+3-\ff{2z}H)\bigg)\\
&\ge \frac{C\big\{\exp\big(-(H(n+1))^{\kappa}+K_{n+1}\big)-\exp\big(-(H(n+2))^{\kappa}-K_{n+1}\big)\big\}}{ (n+2)^{\kappa-1}}\\
&\ge \frac{C\exp\big(-(H(n+1))^{\kappa}+K_{n+1}\big)}{ (n+2)^{\kappa-1}}.
\end{split}
\end{equation*} Noting that for $n$ large enough,
\begin{equation*}
\mu_{V}(f_{n})^2=\mu_{V}(f_{n} 1_{\{x \ge n\}})^2\le \mu_{V}(f_{n}^2)\mu_{V}(x \ge n)
\le \frac{ \mu_{V}(f_{n}^2)}{2},
\end{equation*} we arrive at
\beq\label{PRO-01} {\rm Var}_{\mu_V}(f_n)\ge
\frac{C\exp\big(-(H(n+1))^{\kappa}+K_{n+1}\big)}{(n+2)^{\kappa-1}}.\end{equation}
(b) Estimate on $\EE_V(f_n)$. Let $$g_{n}(x) =\bigg(\int_{Hn+1}^{H(n+1)-1}\exp\big(y^{\kappa}-V(y)\big)\d y\bigg)f_{n}(x).$$
Noting that
$g_{n}'(x)\neq 0$ only when $x \in (Hn+1,H(n+1)-1)$,
we have
\begin{equation*}
\begin{split}
\EE_{V}(g_{n})& \le C\int_{Hn}^{H(n+1)}\bigg(\int_{\{|x-y|\le 1\}}
\frac{|\int_{x}^{y} g_{n}'(z)\d z|^2}{{|x-y|}^{1+\alpha}}\d y \bigg)\exp\big(-x^{\kappa}+V(x)\big) \d x\\
&=C\int_{Hn}^{H(n+1)}\bigg(\int_{\{{|x-y|}\le 1, x \le y\}}
\frac{|\int_{x}^{y} g_{n}'(z)\d z|^2}{{|x-y|}^{1+\alpha}}\d y\bigg)\exp\big(-x^{\kappa}+V(x)\big) \d x\\
&\quad+C\int_{Hn}^{H(n+1)}\bigg(\int_{\{{|x-y|}\le 1, x > y\}}
\frac{|\int_{x}^{y} g_{n}'(z)\d z|^2}{{|x-y|}^{1+\alpha}}\d y \bigg)\exp\big(-x^{\kappa}+V(x)\big) \d x\\
&=:C(I_1+I_2).
\end{split}
\end{equation*}
Since
$g_{n}'(x)=\exp\big(x^{\kappa}-V(x)\big)$ for $x \in (Hn+1,H(n+1)-1)$, and since for large $n$ the function $z\mapsto z^\kk-V(z)$ is increasing on $(Hn, H(n+1))$,
by the Cauchy-Schwarz inequality we have, for $x\in (Hn, H(n+1)-1)$ and $ x\le y\le x+1$,
\beg{equation*}\beg{split} & \frac{|\int_{x}^{y} g_{n}'(z)\d z|^2}{{|x-y|}^{1+\alpha}} \\
&= \frac{|\int_{x \vee (Hn+1)}^{y\wedge (H(n+1)-1)} g_{n}'(z)\d z|^2}{{|x-y|}^{1+\alpha}} \\
&\le \frac{|\int_{x \vee (Hn+1)}^{y\wedge (H(n+1)-1)}
\exp\big(z^{\kappa}-V(z)\big)\d z|}{{|x-y|}^{1+\alpha}}\int_{x \vee (Hn+1)}^{y\wedge (H(n+1)-1)} |g_{n}'|^2(z)\e^{-z^{\kappa}+V(z)}\d z \\
&\le \frac{\exp\big((x+1)^{\kappa}-V(x+1)\big)}{{|x-y|}^{\alpha}}
\Big(\int_{Hn+1}^{H(n+1)-1}|g_{n}'|^2(z)\e^{-z^{\kappa}+V(z)}\d
z\Big).
\end{split}\end{equation*} Thus, since $\aa\in (0,1)$,
\begin{equation*}
\begin{split}
I_1 &\le
C\Big(\int_{Hn+1}^{H(n+1)-1}|g_{n}'|^2(z)\exp\big(-z^{\kappa}+V(z)\big)\d
z\Big) \\
&\qquad\times\bigg[\int_{Hn}^{H(n+1)-1}\Big(\int_{\{|x-y|\le 1\}}
\frac{{1}}{{|x-y|}^{\alpha}}\d y\Big)\exp\Big((x+1)^{\kappa}-x^{\kappa}-
V(x+1)+V(x)\Big) \d x\bigg]\\
&\le C \Big(\int_{Hn+1}^{H(n+1)-1}\exp\big(z^{\kappa}-V(z)\big)\d z\Big)
\exp\Big(\kappa (H(n+1))^{\kappa-1}+\frac{2K_{n}}{H}\Big).
\end{split}
\end{equation*}
Similarly, we have
\begin{equation*}
I_2 \le C\int_{Hn+1}^{H(n+1)-1}\exp\big(z^{\kappa}-V(z)\big)\d z.
\end{equation*}
So, for $n$ large enough,
\begin{equation*}
\EE_{V}(g_{n}) \le C
\Big(\int_{Hn+1}^{H(n+1)-1}\exp\big(z^{\kappa}-V(z)\big)\d z\Big)
\exp\bigg(\kappa (H(n+1))^{\kappa-1}+\frac{2K_{n}}{H}\bigg).
\end{equation*}
Moreover, for large $n$,
\begin{equation*}
\begin{split}
& \int_{Hn+1}^{H(n+1)-1}\exp\big(z^{\kappa}-V(z)\big)\,\d z\\
&=\int_{Hn+1}^{H(n+1)-1}\frac{\exp\big(z^{\kappa}-\frac{2K_{n}(Hn-z)}{H}-K_{n}\big)}{\kappa z^{\kappa-1}+\frac{2K_{n}}{H}}
\d\Big(z^{\kappa}-\frac{2K_{n}(Hn-z)}{H}-K_{n}\Big)\\
&\ge \frac{C\exp\big((H(n+1)-1)^{\kappa}+\frac{(H-2)K_{n}}{H}\big)-\exp\big((Hn+1)^{\kappa}-\frac{(H-2)K_{n}}{H}\big)}
{ (n+1)^{\kappa-1}}\\
&\ge \frac{C\exp\big((H(n+1)-1)^{\kappa}+\frac{(H-2)K_{n}}{H}\big)}{ (n+1)^{\kappa-1}}.
\end{split}
\end{equation*}
Therefore, for large $n$,
\beq\label{PRO-02}\beg{split} \EE_V(f_n)&=\ff{\EE_V(g_n)}{(\int_{Hn+1}^{H(n+1)-1}\exp\big(z^{\kappa}-V(z)\big)\d z)^2}\\
&\le \ff{C \exp\big(\kappa (H(n+1))^{\kappa-1}+\frac{2K_{n}}{H}\big)}{\int_{Hn+1}^{H(n+1)-1}\exp\big(z^{\kappa}-V(z)\big)\d z}\\
&\le C(n+1)^{\kk-1} \exp\Big(\kk(H(n+1))^{\kk-1} -(H(n+1)-1)^\kk -\ff{(H-4)K_n}H\Big). \end{split}\end{equation}
(c) Combining (\ref{PRO-01}) with (\ref{PRO-02}) and noting that
$$(H(n+1))^\kk-(H(n+1)-1)^\kk\le\kk (H(n+1))^{\kk-1},$$ we obtain
\begin{equation*}
\begin{split}
&\lim_{n\to\infty} \frac{\EE_{V}(f_{n},f_{n})}{{\rm Var}_{\mu_{V}}(f_{n})}\\
&\le C\lim_{n\to\infty} (n+2)^{2(\kk-1)} \exp\Big(2\kk(H(n+1))^{\kk-1} -\ff{2(H-2)}H K_n\Big) =0\end{split}\end{equation*}
since $$L>\ff{\kk H^\kk}{H-2}.$$
\end{proof}
\subsection{Proofs of Theorems \ref{finite-sp} and \ref{th-sp}}
As in \cite[Theorem 3.1]{BLW}, we shall adopt a split argument by
estimating $\mu_V(f^21_{\{\rr\ge n\}})$ and $\mu_V(f^21_{\{\rr\le n\}})$
respectively. Unlike in the local setting where the chain rule is
available, in the present situation the uniform norm $\|f\|_\infty$
will appear in our estimates when the range of jumps is infinite.
Below we simply denote $K_{n,k}=K_{n,k}(V), J_{n,k}=J_{n,k}(V),
Z_n=Z_n(V),\vv_{n,k}=\vv_{n,k}(V), \zeta_n=\zeta_n(V)$ and
$t_{i,n,k}=t_{i,n,k}(\dd,V).$
\beg{lem}\label{L3.1} For any $n\ge 1$, $k\ge 1$ and $f\in \A$,
$$\mu_V(f^21_{\{\rr\ge n\}})\le 12 \vv_{n,k} \EE_V(f)+ 128\ll\vv_{n,k} \mu_V(f^2)+96 \zeta_{n} \gg_{n,k} \|f\|_\infty^2.$$\end{lem}
\beg{proof} If $f|_{\{\rho\le n-1\}}=0$,
by the Cauchy-Schwarz inequality we have
\begin{equation*}
\mu(|f|)^2=\mu\big(|f| 1_{\{\rho>n-1\}}\big)^2\le \mu(\rho>n-1)\mu(f^2).
\end{equation*}
Substituting this into \eqref{sp} with $r=
\bb^{-1}\big(\frac{1}{2\mu(\rho>n-1)}\big)$ we obtain
\beq\label{proof-e-01} \mu(f^2)\le 2\bb^{-1}
\big(1/[2\mu(\rho>n-1)]\big)\EE(f),\quad f\in \A, \ f|_{\{\rho\le
n-1\}}=0.\end{equation}
To apply (\ref{proof-e-01}) for general $f\in\A$, we consider $fl_{n}$ instead of $f$, where $l_{n}:= h_{n}\circ\rr$ for
some function $h_n\in C^\infty([0,\infty); [0,\infty))$ satisfying \begin{equation*} h_{n}(s)
\begin{cases}
& =1,\ \ \ \ \ \ \ \quad \,\, n\le s\le n+1,\\
& \in [0,1],\ \ \ \ n-1\le s<n \,\,\,\,{\textrm{ or}}\,\, \,\,n+1<s\le n+2,\\
& =0,\ \ \ \ \ \ \ \ \quad s<n-1\,\,\,\,{\textrm
{or}}\,\,\,\,s>n+2,
\end{cases}
\end{equation*}
and $$\sup_{s\in[0,\infty)}|h'_{n}(s)|\le 2.$$ It is easy to see that
$$|l_n(x)-l_n(y)|\le 2(1\wedge d(x,y))\big\{1_{\{\rr(x)\in (n-2,n+3)\}}+1_{\{\rr(y)\in (n-1,n+2), \rr(x)\notin (n-2,n+3)\}}\big\}.$$ This implies
\beg{equation}\label{proof-e-02}\beg{split} &\GG(fl_n)(x) \\
& \le 2\int_{E} \big\{l_n(y)^2 (f(x)-f(y))^2
+ f(x)^2 (l_n(x)-l_n(y))^2\big\}q(x,y)\mu(\d y)\\
&\le 2 \int_{\{ \rho(y)\in (n-1,n+2)\}} (f(x)-f(y))^2 q(x,y)\mu(\d y)+8\ll f(x)^21_{\{\rr(x)\in (n-2,n+3)\}}\\
&\quad + 8f^2(x) 1_{\{\rr(x)\notin(n-2,n+3)\}}\int_{\{\rr(y)\in (n-1,n+2)\}} q(x,y)\mu(\d y). \end{split}\end{equation}
Since $$1_{\{\rr(x)\notin(n-2,n+3)\}}\int_{\{\rr(y)\in (n-1,n+2)\}} q(x,y)\mu(\d y)\le
1_{\{\rr(x)\notin(n-2,n+3)\}}\int_{\{d(x,y)>1\}} q(x,y)\mu(\d y)\le \lambda,$$
we have $\GG(fl_n)\in \B_b(E)$, and $fl_n \in \A$.
Let
$$\dd_{n,k} = \e^{K_{n,k}} \bb^{-1} \big( 1/[2\mu(\rho>n-1)]\big),\
\theta_{n}:=\e^{\sup_{\rho\le n+2}V} \bb^{-1} \big( 1/[2\mu(\rho>n-1)]\big).$$
Combining
(\ref{proof-e-01}) with \eqref{proof-e-02}, and noting that $\rr(y)\in (n-1,n+2)$ and $\rr(x)\ge n+k+2$ imply $d(x,y)>k$,
and
\begin{equation*}
\begin{split}
&\iint_{\{\rr(x)>n+k+2, \rho(y)\in (n-1,n+2)\}} (f(x)-f(y))^2 q(x,y)\mu(\d y)\mu(\d x)\\
&\le 4 \|f\|_\infty^2 \iint_{\{\rr(x)>n+k+2, \rr(y) \in (n-1,n+2),d(x,y)>k\}}q(x,y)\mu(\d y)\mu(\d x),\\
\end{split}
\end{equation*}
we obtain
\beq\label{proof-e-03}\beg{split}&\mu_V(f^2l_n^2) \le
\e^{\sup_{\rho\le n+2}V}\mu(f^2l_n^2)\\
&\le 2\e^{\sup_{\rho\le n+2}V} \bb^{-1} \big( 1/[2\mu(\rho>n-1)]\big)\EE(fl_n)\\
&\le 2\e^{\sup_{\rho\le n+2}V} \bb^{-1} \big( 1/[2\mu(\rho>n-1)]\big)\mu(1_{\{\rho\le n+k+2\}}\GG(fl_n))\\
&\quad + 2\e^{\sup_{\rho\le n+2}V} \bb^{-1} \big( 1/[2\mu(\rho>n-1)]\big)\mu(1_{\{\rho> n+k+2\}}\GG(fl_n))\\
&\le 4 \dd_{n,k}\iint_{\{\rr(x)\le n+k+2, \rr(y)\in (n-1,n+2)\}}(f(x)-f(y))^2q(x,y)\mu(\d y)\mu_V(\d x) \\
&\quad +16\dd_{n,k}\ll \mu_V\big(1_{\{\rr\in (n-2,n+3)\}}f^2\big)\\
&\quad+16\dd_{n,k} \iint_{\{\rr(x)\le n+k+2, \rr(x) \notin (n-2,n+3),\rr(y)\in (n-1,n+2)\}}f(x)^2q(x,y)\mu(\d y)\mu_V(\d x)\\
&\quad+32 \theta_{n}\|f\|_\infty^2\iint_{\{\rr(y)\in (n-1,n+2),d(x,y)>k\}}q(x,y)\mu(\d y)\mu(\d x). \end{split}\end{equation}
Noting that $\vv_{n,k}=\sup_{m\ge n} \dd_{m,k}$, $\zeta_n=\sup_{m \ge n} \theta_m$, $\sum_{m=1}^\infty 1_{\{\rr(y)\in (m-1,m+2)\}}\le 3$ and $$1_{\{\rr(x)\le n+k+2, \rr(x) \notin (n-2,n+3),\rr(y)\in (n-1,n+2)\}}
\le 1_{\{d(x,y)>1, \rr(y)\in (n-1,n+2)\}},$$ and taking summations in (\ref{proof-e-03}) from $n$, we arrive at
\beg{equation*}\beg{split} & \mu_V(f^2 1_{\{\rho\ge n\}}) \le
\sum_{m=n}^\infty \mu_V(f^2l_m^2)\\
&\le 12 \vv_{n,k}\iint_{E\times E}(f(x)-f(y))^2q(x,y)\mu(\d y)\mu_V(\d x) +80\vv_{n,k}\ll \mu_V(f^2)\\
&\quad+48\vv_{n,k}\iint_{\{d(x,y)>1\}}f(x)^2q(x,y)\mu(\d y)\mu_V(\d x) \\
&\quad+
96\zeta_n\|f\|_\infty^2\iint_{\{\rr(y)\ge n-1,d(x,y)>k\}}q(x,y)\mu(\d y)\mu(\d x)\\
&\le 12 \vv_{n,k} \EE_V(f)+ 128\ll\vv_{n,k} \mu_V(f^2)+96 \zeta_{n} \gg_{n,k} \|f\|_\infty^2.\end{split}\end{equation*}
\end{proof}
\beg{lem} \label{L3.2} For any $n,k\ge 1$, $s>0$ and $f\in \A$,
$$\mu_V(f^21_{\{\rr\le n\}}) \le 2s \e^{K_{n,k}}\EE_V(f) +16\ll s\e^{K_{n,k}}\mu_V(f^2) + 16 s \e^{Z_{n}}\eta_{n,k} \|f\|_\infty^2
+\bb(s) \e^{J_{n,k}} \mu_V(|f|)^2.$$\end{lem}
\beg{proof} Let $\phi_n:[0,\infty)\to [0,\infty)$ be a smooth function
such that
\begin{equation*} \phi_n(s) \begin{cases}
& =1,\ \ \ \ \ \ \ \quad \,\, s\le n,\\
& \in [0,1],\ \ \ \ n\le s<n+1,\\
& =0,\ \ \ \ \ \ \ \ \quad s>n+1,
\end{cases}
\end{equation*}
and $$\sup_{s \in [0,\infty)}|\phi'_{n}(s)|\le 2.$$ Set
$g_n =\phi_n\circ\rho.$ Then $g_n\in\A$ and
$$|g_n(x)-g_n(y)|\le 2(1\wedge d(x,y))\big(1_{\{\rr(x)\le n+1\}}+1_{\{\rr(x)>n+1,\rr(y)\le n+1\}}\big).$$
So, similarly to (\ref{proof-e-02}) we have
\beg{equation*}\beg{split} \GG(fg_n)(x) &\le 2 \int_{\{\rr(y)\le n+1\}}(f(x)-f(y))^2q(x,y)\mu(\d y) \\
&\quad + 8\ll f^2(x)1_{\{\rr(x)\le n+k+2\}}
+ 8 f^2(x) 1_{\{\rr(x)>n+k+2\}} \int_{\{\rr(y)\le n+1\}} q(x,y) \mu(\d y).
\end{split}\end{equation*}
Note that $\rr(x)> n+k+2$ and $\rr(y)\le n+1$ imply that $d(x,y)>k$,
and
\begin{equation*}
\begin{split}
&\iint_{\{\rr(x)>n+k+2,\rho(y)\le n+1\}} (f(x)-f(y))^2 q(x,y)\mu(\d y)\mu(\d x)\\
&\le 4 \|f\|_\infty^2 \iint_{\{\rr(x)>n+k+2, \rr(y)\le n+1,d(x,y)>k\}}q(x,y)\mu(\d y)\mu(\d x).
\end{split}
\end{equation*}
Combining all the estimates above with \eqref{sp}, we obtain
\beg{equation*}\beg{split} &\mu_V(f^21_{\{\rho\le n\}})\\
& \le \mu_V(f^2g_n^2)
\le \e^{\sup_{\rho\le n+1}V}\mu(f^2g_n^2)\\
&\le \e^{\sup_{\rho\le n+1}V}\Big\{s\mu(\GG(fg_n)) +\bb(s) \mu(|fg_n|)^2\Big\}\\
&= \e^{\sup_{\rho\le n+1}V}\Big\{s\mu(1_{\{\rho\le n+k+2\}}\GG(fg_n))
+s\mu(1_{\{\rho> n+k+2\}}\GG(fg_n))+\bb(s) \mu(|fg_n|)^2\Big\}\\
&\le s\e^{K_{n,k}} \mu_V(1_{\{\rho\le n+k+2\}}\GG(fg_n))
+s\e^{\sup_{\rr\le n+1} V} \mu (1_{\{\rho> n+k+2\}}\GG(fg_n))+\bb(s)\e^{J_{n,k}} \mu_V(|f|)^2 \\
&\le 2s \e^{K_{n,k}}\EE_V(f) +16\ll s\e^{K_{n,k}}\mu_V(f^2) + 16 s\e^{Z_n} \eta_{n,k} \|f\|_\infty^2
+\bb(s) \e^{J_{n,k}} \mu_V(|f|)^2.\end{split}\end{equation*}
\end{proof}
Now, we are in a position to prove Theorem \ref{finite-sp}.
\beg{proof}[Proof of Theorem $\ref{finite-sp}$] It suffices to prove the assertions
for $f\in\A$.
(1) Since $q(x,y)=0$ for $d(x,y)> k_0$, we have $\gg_{n,k}=\eta_{n,k}=0$ for all $k\ge k_0$ So, by Lemmas \ref{L3.1} and \ref{L3.2},
\beq\label{DF}\beg{split} \mu_V(f^2)\le
&2\big(6\vv_{n,k} + s\e^{K_{n,k}}\big)\EE_V(f) + 16\ll\big(8 \vv_{n,k}
+ s\e^{K_{n,k}}\big)\mu_V(f^2)\\
& +\bb(s)\e^{J_{n,k}}\mu_V(|f|)^2,\ \ \ s>0, n\ge 1, k\ge k_0.\end{split}\end{equation} If $\inf_{n\ge1,k\ge k_0}\vv_{n,k}= 0$, then for any $r'\in (0,r]$ there exist $s>0$, $n\ge1$ and $k\ge k_0$ such that
$$8\vv_{n,k} +s \e^{K_{n,k}}\le \ff {r'}{2+16\ll r'}.$$ Combining this with (\ref{DF}) we obtain
$$\mu_V(f^2)\le r'\EE_V(f) + (1+8\ll r')\bb(s) \e^{J_{n,k}}\mu_V(|f|)^2\le r\EE_V(f) + (1+8\ll r')\bb(s) \e^{J_{n,k}}\mu_V(|f|)^2.$$ This implies
the super Poincar\'e inequality for the desired $\bb_V$.
(2) If \beq \label{AAA} \inf_{n\ge1,k\ge k_0} \vv_{n,k}<\ff 1 {128\ll},\end{equation} then there exist $n\ge 1$, $k\ge k_0$ and $s>0$ such that
$$16\ll\big(8 \vv_{n,k}
+ s\e^{K_{n,k}}\big)<1.$$ Therefore, the defective Poincar\'e inequality follows from (\ref{DF}).
To prove (\ref{th-sp-2}) without condition (\ref{AAA}), we follow the approach in the proof of \cite[Theorem 3.1(2)]{BLW} by making bounded perturbations of $V$. For any $N\ge 1$,
write
$$ V= V_N+ (V\land N)\lor (-N),\ \
V_N:=(V-N)1_{\{V \ge N\}}+(V+N)1_{\{V \le -N\}}.$$
Since $(V\land N)\lor (-N)$ is bounded and the defective Poincar\'e inequality is stable under bounded perturbations of $V$, we only need to prove that when $V$ is unbounded we may find $N\ge 1$ such that the defective Poincar\'e inequality holds for $V_N$ in place of $V$.
It is easy to check that for $n$ large enough
\begin{equation*}
\sup_{\rho \le n}V_N\le
\begin{cases}
& \big(\sup_{\rho \le n}V\big)-N,\ \ \ \ \text{if}\ \ \sup_{E}V=+\infty,\\
& \sup_{\rho \le n}V,\ \ \ \ \ \ \ \ \ \ \quad\ \text{if}\ \
\sup_{E}V<+\infty,
\end{cases}
\end{equation*} and
\begin{equation*}
\inf_{\rho \le n}V_N\ge
\begin{cases}
& \big(\inf_{\rho \le n}V\big)+N,\ \ \ \ \text{if}\ \ \inf_{E}V=-\infty,\\
& \inf_{\rho \le n}V,\ \ \ \ \ \ \ \ \ \ \ \quad \text{if}\ \
\inf_{E}V>-\infty.
\end{cases}
\end{equation*}Thus, for large $n$ we have $K_{n,k}(V_N)\le K_{n,k}(V)-N$, so that
\begin{equation}\label{proof-e-12}
\begin{split}
&\inf_{n\ge1,k\ge k_0}\vv_{n,k}(V_N) \le
\e^{-N}\inf_{n\ge 1,k\ge k_0}\vv_{n,k}(V).
\end{split}
\end{equation}
Since $\inf_{n\ge 1,k\ge k_0}\vv_{n,k}(V)<\infty,$ we see that (\ref{AAA}) holds for $V_N$ in place of $V$ when $N$ is large enough. Therefore, the defective Poincar\'e inequality holds for $V_N$ in place of $V$ as observed above.
\end{proof}
Next, we turn to the proof of Theorem $\ref{th-sp}$. To get rid of the uniform norm included in Lemmas \ref{L3.1} and \ref{L3.2}, we adopt a cut-off
argument as in the proof of \cite[Theorem 3.2]{W00a} or \cite[Theorem 3.3.3]{Wbook}. More precisely, for $\dd>1$ in assumption {\bf (A)} and a non-negative function $f$, let
\begin{equation}\label{pro-function}f_{\dd,i} =(f-\dd^{\ff i 2})^+\land (\dd^{\ff{i+1}2} -\dd^{\ff i 2}),\ \ i\ge 0.\end{equation}
According to \cite[Lemma 3.3.2]{Wbook}, for $f\in \D(\EE_V)$ and $i,j\ge 0$, we have $f_{\dd,i}$, $(f-\dd^{\ff j 2})^+$ and $f\land \dd^{\ff j 2}\in\D(\EE_V)$. Moreover,
\beq\label{dri-01} \sum_{i=j}^\infty \EE_V(f_{\dd,i})\le \EE_V((f-\dd^{\ff j 2})^+),\ \ \EE_V((f-\dd^{\ff j 2})^+)+\EE_V(f\land \dd^{\ff j 2})\le \EE_V(f).\end{equation}
We also have the following lemma.
\beg{lem}\label{L3.3} For any non-negative function $f$ and $k\in \mathbb Z_+:=\{0,1,2,\cdots\}$,
\beq\label{dri-02} \sum_{i=k}^\infty f_{\dd,i}^2\ge c(\dd) \big({(f-\dd^{\ff k 2})^+}\big)^2.\end{equation} \end{lem}
\beg{proof} We shall simply use $f$ to denote its value at a fixed point.
If $ f \le 1$, then both sides in (\ref{dri-02}) are equal to zero. Assume that $f\in (\dd^{\ff l 2}, \dd^{\ff{l+1}2}]$ for some $l\in \mathbb Z_+$. If $l\le k$ then
$$\sum_{i=k}^\infty f_{\dd,i}^2= \big({(f-\dd^{\ff k 2})^+}\big)^2 \ge c(\dd)\big({(f-\dd^{\ff k 2})^+}\big)^2. $$ Next, if $l>k$ then
$$\sum_{i=k}^\infty f_{\dd,i}^2\ge (\dd^{\ff l 2}-\dd^{\ff{l-1}2})^2= c(\dd)\dd^{l+1} \ge c(\dd)f^2.$$
In conclusion, (\ref{dri-02}) holds. \end{proof}
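The pointwise bound (\ref{dri-02}) is easy to test numerically. The following Python sketch does so, taking $c(\dd)=(\ss\dd-1)^2/\dd^2$, the value read off from the identity $(\dd^{\ff l2}-\dd^{\ff{l-1}2})^2=c(\dd)\dd^{l+1}$ used in the proof above (the helper name and the tolerance are ours).
\begin{verbatim}
import numpy as np

def check_dri_02(delta=2.0, k=3, trials=10000, seed=0):
    # f_{delta,i} = (f - delta^{i/2})^+ /\ (delta^{(i+1)/2} - delta^{i/2});
    # claim: sum_{i>=k} f_{delta,i}^2 >= c(delta) ((f - delta^{k/2})^+)^2
    c = (np.sqrt(delta) - 1.0)**2 / delta**2      # assumed form of c(delta)
    f = np.random.default_rng(seed).uniform(0.0, delta**10, trials)
    i = np.arange(k, 60)[:, None]                 # safe truncation for this f
    fi = np.clip(f - delta**(i / 2.0), 0.0,
                 delta**((i + 1) / 2.0) - delta**(i / 2.0))
    lhs = (fi**2).sum(axis=0)
    rhs = c * np.maximum(f - delta**(k / 2.0), 0.0)**2
    return bool(np.all(lhs >= rhs - 1e-12))

print(check_dri_02())   # expected: True
\end{verbatim}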
\beg{proof}[Proof of Theorem $\ref{th-sp}$] Since $\EE_V(|f|,|f|)\le \EE_V(f,f)$ for every
$f \in \A$, without loss of generality, we may and do assume that $f\in\A$ with $f\ge 0$ and $\mu_V(f^2)=1$.
(1) By Lemmas \ref{L3.1} and \ref{L3.2}, we have
\beq\label{proof-e-09}\beg{split}\mu_V(f^2)\le &
2\big(6\vv_{n,k} + s\e^{K_{n,k}}\big)\EE_V(f) + 16\ll\big(8 \vv_{n,k}+ s\e^{K_{n,k}}\big)\mu_V(f^2)\\
&+\bb(s)\e^{J_{n,k}}\mu_V(|f|)^2+16\big(6\zeta_{n}\gg_{n,k}+s\e^{Z_{n}}\eta_{n,k} \big) \|f\|_\infty^2,\
\ s>0,\ k\ge 1,\ n \ge 1.\end{split}\end{equation}
Next, let $r>0$,
$\{(n_i,k_i)\}\in I_\dd$, $j\in D(r,\{(n_i,k_i)\})$ be fixed, and let $f_{\dd,i}$ be defined by \eqref{pro-function}. Since
$\|f_{\dd,i}\|^2_\infty\le c(\delta)\dd^{i+2}$ and by the
Cauchy-Schwarz inequality
$$\mu_V(f_{\dd,i})^2=\mu_V(f_{\dd,i}1_{\{f\ge \delta^{i/2}\}})^2
\le \mu_V(f_{\dd,i}^2)\mu_V(f^2\ge \dd^i)\le
\mu_V(f_{\dd,i}^2)\dd^{-i},$$ it follows from (\ref{A-01}) and
(\ref{proof-e-09}) with $n=n_i$ and $s= t_{i,n_i,k_i}$ that
$$\mu_V(f_{\dd,i}^2)\le \frac{c(\dd)r}{8} \EE_V(f_{\dd,i})
+\ff 1 2 \mu_V(f_{\dd,i}^2) + 16c(\delta)(6
\zeta_{n_i}\gg_{n_i,k_i}+t_{i,n_i,k_i}\e^{Z_{n_i}}
\eta_{n_i,k_i}) \dd^{i+2},\ \ i\ge j.$$ That is,
$$\mu_V(f_{\dd,i}^2)\le \frac{c(\dd)r}{4} \EE_V(f_{\dd,i})
+ 32c(\delta)(6 \zeta_{n_i}\gg_{n_i,k_i}+t_{i,n_i,k_i}
\e^{Z_{n_i}}\eta_{n_i,k_i})\dd^{i+2},\ \ i\ge j.$$ Taking
summation over $i\ge j$ and using (\ref{dri-01}), (\ref{dri-02}) and
(\ref{A-02}), we obtain \beq\label{proof-e-10}
\mu_V\big({(f-\dd^{\ff j 2})^+}^2\big)\le \ff r 4 \EE_V((f-\dd^{\ff
j 2})^+) +\ff 1 8.\end{equation} On the other hand, noting that
$c(\delta)\in(0,1)$ and $\delta>1$, applying (\ref{proof-e-09}) with
$n=n_j$ and $s= t_{j,n_j,k_j}$ to $f \wedge \dd^{\frac{j}{2}}$, and
combining with (\ref{A-01}), (\ref{A-02}), we obtain
\beg{equation*}\beg{split}
\mu_V(f^2\land\dd^j)&\le\frac{c(\delta)r}{8} \EE_V(f\land \dd^{\ff
j 2}) +\ff 1 4 \mu_V(f^2\wedge\dd^{ j})+ \ff {\dd^j}4 \mu_V(|f|)^2 \\
&\quad+
16( 6 \zeta_{n_j}\gg_{n_j,k_j}+t_{j,n_j,k_j}\e^{Z_{n_j}}\eta_{n_j,k_j})\dd^{j }\\
&\le \ff r {8} \EE_V(f\land \dd^{\ff j 2}) +\ff 1 2
\mu_V(f^2\wedge\dd^{ j})+ \ff {\dd^j}4 \mu_V(|f|)^2 + \frac{1}{16}.
\end{split}\end{equation*} Thus,
$$\mu_V(f^2\land\dd^j)\le \ff r 4 \EE_V(f\land \dd^{\ff j 2}) +\ff{\dd^j} 2 \mu_V(|f|)^2 +\ff 1 8.$$
Combining this with (\ref{proof-e-10}) and using (\ref{dri-01}), we
arrive at
\beg{equation*}\beg{split} 1&= \mu_V(f^2)\le \mu_V\big\{((f-\dd^{\ff j 2})^++(f\wedge
\dd^{\ff j 2}))^2\big\}\\
&\le 2 \mu_V\Big(\big({(f-\dd^{\ff j 2})^+}\big)^2\Big)+ 2 \mu_V(f^2\land\dd^j)\\
&\le \ff r 2\Big(\EE_V((f-\dd^{\ff j 2})^+)+\EE_V(f\land
\dd^{\ff j2})\Big)+ \ff 1 2 +\dd^j \mu_V(|f|)^2\\
&\le \ff r 2 \EE_V(f)
+\ff 1 2 +\dd^j \mu_V(|f|)^2.\end{split}\end{equation*} Therefore,
\beq\label{DDE} \mu_V(f^2)\le r\EE_V(f) + 2 \dd^j\mu_V(|f|)^2\end{equation}
holds for all $\{(n_i,k_i)\}\in I_\dd$ and $j\in D(r,\{(n_i,k_i)\})$. This proves (\ref{th-sp-1}) for the desired $\bb_V$.
(2) If \beq\label{DDF} \limsup_{i\to\infty} (8\vv_{n_i,k_i}
+t_{i,n_i,k_i}e^{K_{n_i,k_i}})\le \ff 1 {64\ll},\end{equation}then there exist a
constant $r>0$ and $j\ge 1$
such that
(\ref{A-01}) and (\ref{A-02}) hold, i.e.\ $j\in D(r,\{(n_i,k_i)\})$.
So, the arguments in (1) ensure (\ref{DDE}), and so the
defective Poincar\'e inequality holds for $C_1=r$ and $C_2= 2\dd^j.$ Let
$V_N$ be as in the proof of Theorem \ref{finite-sp}(2). It follows from the proof of Theorem \ref{finite-sp}(2) that for $n$ large enough
$K_{n,k}(V_N)\le K_{n,k}(V)-N$. Since
$J_{n,k}(V_N)\le J_{n,k}(V)$ and $\beta(r)$ is decreasing, we have
\begin{equation*}
\begin{split}
&\limsup_{i \to\infty}t_{i,n_i,k_i}(\dd,V_N)e^{K_{n_i,k_i}(V_N)} \le
\e^{-N}\limsup_{i\to\infty}t_{i,n_i,k_i}(\dd,V)e^{K_{n_i,k_i}(V)},
\end{split}
\end{equation*} which combined with \eqref{proof-e-12} yields that
$$\limsup_{i \to\infty}\Big(\vv_{n_i,k_i}(V_N)+t_{i,n_i,k_i}(\dd,V_N)e^{K_{n_i,k_i}(V_N)} \Big)\le
\e^{-N}\limsup_{i\to\infty}\Big(\vv_{n_i,k_i}(V)+t_{i,n_i,k_i}(\dd,V)e^{K_{n_i,k_i}(V)}\Big).$$
Then the remainder of the proof is similar to that of Theorem
\ref{finite-sp}(2) by using (\ref{DDF}) instead of (\ref{AAA}).
\end{proof}
\subsection{Perturbations for the super Poincar\'e inequality under a variation condition
}
It is well known that in the diffusion case the super Poincar\'e inequality is stable under Lipschitz perturbations (see \cite[Proposition 2.6]{W00a}). The aim of this section is to extend this result to the non-local setting using a variation condition on ${\rm supp}\,q:=\{(x,y): q(x,y)>0\}$.
\beg{thm}\label{th-sup} Assume that $(\ref{sp})$ holds, that $\kk_2:= \mu(\e^{-2V})<\infty$, and that there exists a constant $\kk_1>0$ such that
\beq\label{th-sup-1} |V(x)-V(y)|\le \kk_1(1\land d(x,y)),\ \ (x,y)\in{\rm supp}\,q.\end{equation} Then $(\ref{th-sp-1})$ holds for
$$\bb_V(s):= \inf\Big\{ 16\kk_2\bb(r)^3(4+\ll \kk_1^2 s'):\ s'\in (0,s], 0<r\le \ff{s'\e^{-\kk_1}}{4+\ll \kk_1^2s' }\Big\},\ \ s>0.$$\end{thm}
\beg{proof}
To prove (\ref{sp}) for the desired $\bb_V$, we may and do assume that $V$ is bounded. Indeed, for any $n\ge 1$ let
$$V_n= (V\land n)\lor (-n) -\log \mu(\e^{(V\land n)\lor(-n)}).$$ Then $\mu(\e^{V_n})=1$, (\ref{th-sup-1}) holds for $V_n$ in place of $V$, and
$$\lim_{n\to\infty} \mu(\e^{-2V_n})=\kk_2.$$ Thus, applying the assertion to the bounded $V_n$ and letting $n\to\infty$, we complete the proof.
Now, let $V$ be bounded and let $f\in\A$ with $\mu_V(|f|)=1$. Take $\tt f=f\e^{\ff V2}.$
By (\ref{th-sup-1}) we have for every $(x,y) \in {\rm supp}\,q$,
\begin{equation*}
\begin{split}
&\big(1-\e^{\frac{V(x)-V(y)}{2}}\big)^2 \le \frac{\e^{\kk_1}\kk_1^2 }{4}\big(1\wedge d^2(x,y)\big),
\end{split}
\end{equation*}
hence
\beg{equation*}\beg{split} \GG(\tt f)(x)&\le 2\int_{E} \Big\{\e^{V(y)}(f(x)-f(y))^2+ f^2(x)(\e^{\ff {V(x)}2}-\e^{\ff {V(y)}2})^2\Big\}q(x,y)\mu(\d y)\\
&\le 2\e^{V(x)+\kk_1}\GG(f)(x) + \ff{1}2 \kk_1^2\ll \e^{V(x)+\kk_1}f^2(x).
\end{split}\end{equation*} Since $V$ is bounded, this implies $\tt f\in\A$. Moreover, combining this with (\ref{sp}), we obtain
\beg{equation}\label{th-sup-2}\beg{split} \mu_V(f^2)&= \mu({\tt f}^2)\le r \mu(\GG(\tt f)) +\bb(r) \mu(|\tt f|)^2\\
&\le 2 r \e^{\kk_1}\EE_V(f) + r\ff{ \kk_1^2 }2\ll \e^{\kk_1} \mu_V(f^2) +\bb(r) \mu_V( |f|\e^{-\ff V 2})^2.\end{split}\end{equation}
Since $\mu_V(|f|)=1$, for any $R>0$ we have $\mu_V(|f|>R)\le \ff 1 R$, and hence,
\beg{equation*}\beg{split} \mu_V(|f|\e^{-\ff V 2})^2 &\le 2 \mu_V(|f|\e^{-\ff V 2}1_{\{|f|\le R\}})^2 + 2 \mu_V(|f|\e^{-\ff V 2}1_{\{|f|\ge R\}})^2\\
&\le 2 R \mu_V(|f|^{\ff 1 2} \e^{-\ff V 2})^2 +2 \mu_V(f^2) \mu_V(\e^{-V} 1_{\{|f|>R\}})\\
&\le 2 R \mu_V(|f|)\mu_V(\e^{-V}) + 2\mu_V(f^2) \ss{\mu_V(|f|>R)\mu_V(\e^{-2V})}\\
&\le 2 R + 2 \mu_V(f^2)\ff{\ss{\kk_2}}{\ss R}.\end{split}\end{equation*} Taking $R= 16\kk_2\bb(r)^2$, we get
$$\bb(r)\mu_V(|f|\e^{-\ff V 2})^2 \le 32\kk_2\bb(r)^3 +\ff 1 2\mu_V(f^2).$$
Substituting this into \eqref{th-sup-2}, we arrive at
$$\mu_V(f^2)\le 4r\e^{\kk_1}\EE_V(f)+ r \kk_1^2 \ll \e^{\kk_1}\mu_V(f^2) +64 \kk_2\bb(r)^3.$$
Therefore, for any $s'\in (0,s]$ such that
$$r\le \ff {s'\e^{-\kk_1}}{4 + \ll \kk_1^2 s'},$$ we have
$$\mu_V(f^2)\le s' \EE_V(f)+ 16\kk_2\bb(r)^3(4+\ll\kk_1^2 s')\le s \EE_V(f) + 16\kk_2\bb(r)^3(4+\ll\kk_1^2 s').$$
This implies the desired super Poincar\'e inequality.
\end{proof}
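Since $\bb$ is decreasing, the infimum over $r$ in the definition of $\bb_V$ is attained at the largest admissible value $r=s'\e^{-\kk_1}/(4+\ll\kk_1^2 s')$, so $\bb_V$ reduces to a one-dimensional minimization over $s'$. A Python sketch (the helper name and the illustrative choice of $\bb$ are ours):
\begin{verbatim}
import numpy as np

def beta_V(s, beta, k1, k2, lam, grid=400):
    # rate from the theorem above: minimize over s' in (0, s],
    # with r taken at its upper bound since beta is decreasing
    sp = np.linspace(s / grid, s, grid)
    r = sp * np.exp(-k1) / (4.0 + lam * k1**2 * sp)
    return float(np.min(16.0 * k2 * beta(r)**3 * (4.0 + lam * k1**2 * sp)))

beta_rate = lambda r: np.exp(0.5 * (1.0 + 1.0 / r))  # illustrative rate for (sp)
print(beta_V(1.0, beta_rate, k1=1.0, k2=2.0, lam=1.0))
\end{verbatim}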
Let $E=\R^m$ and $d(x,y)=|x-y|$. If the jump has a finite range, i.e.\ there is a constant $k\ge 1$ such that $q(x,y)=0$ for $|x-y|>k$, then (\ref{th-sup-1}) holds for any Lipschitz function $V$. Therefore, the above theorem implies that the super Poincar\'e inequality is stable under all Lipschitz perturbations, as is known in the diffusion case. In particular, since the defective log-Sobolev inequality
$$ \mu(f^2\log f^2)\le C_1 \EE(f)+C_2,\ \ f\in\D(\EE),\mu(f^2)=1$$ holds for some $C_1,C_2>0$ if and only if
the super Poincar\'e inequality $(\ref{sp})$ holds for $\bb(r)=\e^{c(1+r^{-1})}$ for some $c>0$, see \cite[Corollary 1.1]{W00b} for $\dd=1$,
we conclude from Theorem \ref{th-sup} that the defective log-Sobolev inequality is stable under perturbations of Lipschitz functions $V.$ See \cite[Example 1.2]{CW2} for examples of $\mu$ and $q$ having finite range of jumps such that the log-Sobolev inequality holds.
\section{Perturbations for the weak Poincar\'{e} inequality}
Suppose that the weak Poincar\'{e}
inequality
\begin{equation}\label{wp} \mu(f^2)\le \beta(r)
\EE(f,f)+r\|f\|_\infty^2,\quad r>0, f\in \D(\EE), \mu(f)=0,\end{equation}
holds for some decreasing function $\beta:(0,\infty)\to (0,\infty).$
To derive the weak Poincar\'e inequality for $\EE_V$ using growth conditions on $V$, for any $n,k\ge 1$ let
\begin{equation*}
\begin{split}
&\tt K_{n,k} (V)=\sup_{\rho\le n}V -\inf_{\rho \le n+k+1}V,\quad\quad \tt Z_n(V)=\sup_{\rho\le n}V,\\
&\tt\eta_{n,k} =\iint_{\{\rr(x)> n+k+1, \rr(y)\le n+1\}}q(x,y)\mu(\d
y)\mu(\d x),\ \ \tt \gg_k=\iint_{\{d(x,y)>k\}}q(x,y)\mu(\d y)\mu(\d
x).
\end{split}
\end{equation*} It is clear that $\tt\eta_{n,k}\le \tt \gg_k$. By \eqref{intro-02} we have $\tt\eta_{n,k}\downarrow 0$ as $n\uparrow\infty$ or $k\uparrow\infty$.
\beg{thm}\label{th-wp} Assume that the weak Poincar\'{e} inequality
\eqref{wp} holds. If for any $\vv>0$
\beq\label{th-wp-1}\inf_{n,k\ge 1} \e^{\tt Z_{n}(V)}\beta\big(\vv \e^{-\tt Z_{n}(V)}\big)\big(\tt\eta_{n,k}
+\tt \gg_k+\mu(\rr>n-k)\big)=0,\end{equation} then
\begin{equation}\label{th-wp-2} \mu_V(f^2)\le
\beta_V(r)\EE_V(f,f)+r\|f\|_\infty^2,\quad r>0,f\in \D(\EE_V),
\mu_V(f)=0\end{equation} holds for
\beg{equation*}\beg{split} \beta_V(r):=\inf\Big\{&2\beta\Big(\ff r 8 \e^{-\tt Z_{n}(V)}\Big)\e^{\tt K_{n,k}(V)}:\
6\mu_V(\rr>n)\\
&+2\e^{\tt Z_{n}(V)}\beta\Big(\ff r 8 \e^{-\tt Z_{n}(V)}\Big)\big(4\tt \eta_{n,k} +\tt \gg_k+4\ll\mu(\rr>n-k)\big)\le \ff r 2\Big\}<\infty,\ \ r>0.
\end{split}\end{equation*} \end{thm}
\beg{proof} It is easy to see that (\ref{th-wp-1}) implies $\beta_V(r)<\infty$ for $r>0.$ Let $g_n$ be as in the proof of Lemma \ref{L3.2}. Then for any $f\in\A$ with $\mu_V(f)=0$, we have
$$\mu_V(fg_{n})^2=\mu_V(f(1-g_{n}))^2\le\|f\|_\infty^2 \mu_V(\rr>n)^2.$$ Moreover,
\beg{equation*}\beg{split} {\rm Var}_{\mu_V}(fg_{n}) &\le \mu_V\big((fg_{n}-\mu(fg_{n}))^2\big)\\
&\le \e^{\sup_{\rr\le n}V}\mu\big(1_{\{\rr\le n\}}(fg_{n}-\mu(fg_{n}))^2\big) +\mu_V\big(1_{\{\rr>n\}}(fg_{n}-\mu(fg_{n}))^2\big)\\
&\le \e^{\sup_{\rr\le n}V}{\rm Var}_{\mu}(fg_{n}) +4\|f\|_\infty^2 \mu_V(\rr>n).\end{split}\end{equation*} Then
\beg{equation}\label{th-wp-3}\beg{split} \mu_V(f^2) &\le {\rm Var}_{\mu_V }(fg_{n})+\mu_V(fg_{n})^2 +\mu_V(f^21_{\{\rr>n\}})\\
&\le \e^{\sup_{\rr\le n} V}{\rm Var}_{\mu}(fg_{n}) +6\|f\|_\infty^2 \mu_V(\rr>n).\end{split}\end{equation}
On the other hand, we have
\beg{equation*}\beg{split} \e^{\sup_{\rr\le n}V} \EE(fg_{n})
\le &2 \e^{\sup_{\rr\le n}V}\iint_{E\times E} g_{ n}^2(y)(f(x)-f(y))^2q(x,y)\mu(\d y)\mu(\d x)\\
& + 2 \e^{\sup_{\rr\le n}V}\iint_{E\times E}f^2(x)(g_{n}(x)-g_{n}(y))^2 q(x,y)\mu(\d y)\mu(\d x)\\
\le &2 \e^{\tt K_{n,k}(V)}\EE_V(f) +2\e^{\tt Z_n(V)}(4\tt\eta_{n,k}+\tt \gg_k+4\ll\mu(\rr>n-k))\|f\|_\infty^2,\end{split}\end{equation*}
where in the last inequality we split
the first integral over the sets $\{\rr(x)\le n+k+1\}$ and $\{\rr(x)>n+k+1\}$, and the second
integral over the sets $\{\rr(x)\le n-k\}$ and $\{\rr(x)>n-k\}$, and also used the facts that
\beg{equation*}\beg{split} &\iint_{\{\rr(x)> n+k+1\}}g_n^2(y)(f(x)-f(y))^2
q(x,y)\mu(\d y)\mu(\d x) \le 4\tt\eta_{n,k} \|f\|_\infty^2,\\
&\iint_{\{\rr(x)\le n-k\}}f^2(x)(g_n(x)-g_n(y))^2q(x,y)\mu(\d
y)\mu(\d x)
\le \tt \gg_k \|f\|_\infty^2,\\
&\iint_{\{\rr(x)> n-k\}}f^2(x)(g_n(x)-g_n(y))^2q(x,y)\mu(\d y)\mu(\d
x)\le 4\lambda\mu(\rr>n-k)\|f\|_\infty^2.
\end{split}\end{equation*}
Combining this with (\ref{th-wp-3}) and \eqref{wp}, we arrive at
\beg{equation*}\beg{split} \mu_V(f^2) \le &2\beta(s)\e^{\tt K_{n,k}(V)} \EE_V(f)\\
&+ \|f\|_\infty^2\big\{6\mu_V(\rr>n) + 4s\e^{\tt Z_{n}(V)}+
2\beta(s)\e^{\tt Z_{n}(V)} (4\tt\eta_{n,k} + \tt
\gg_k+4\ll\mu(\rr>n-k))\big\}.\end{split}\end{equation*} So, for any
$r>0$, let $s= \ff r 8 \e^{-\tt Z_{n}(V)}$. If for some $n,k\ge 1$
one has
$$ 6\mu_V(\rr>n)
+2\e^{\tt Z_{n}(V)}\beta\Big(\ff r 8 \e^{-\tt Z_{n}(V)}\Big)\big(4\tt \eta_{n,k} +\tt \gg_k+ 4\ll\mu(\rr>n-k)\big)\le \ff r 2,$$ then
$$\mu_V(f^2)\le 2\beta(s)\e^{\tt K_{n,k}(V)} \EE_V(f)+r\|f\|_\infty^2.$$ Therefore, the proof is finished.
\end{proof}
To conclude this section, we present an example where $(\EE,
\scr{D}(\EE))$ satisfies the Poincar\'e inequality, i.e. the weak
Poincar\'e inequality \eqref{wp} holds for a constant function
$\bb.$
\begin{exa}\label{exa-wp}\rm Let $E=\R^m$ with $d(x,y)=|x-y|$.
Let
$$q(x,y)=\ff{(1+|y|)^{m+\aa}}{|x-y|^{m+\alpha}},\ \ \mu(\d x)=\frac{c_{m,\aa}\d
x}{(1+|x|)^{m+\alpha}}$$ for some constant $0<\alpha<2$, where
$c_{m,\aa}$ is a normalizing constant such that $\mu$ is a
probability measure. Then $\A\supset C_0^2(\R^m),$ and according to \cite[Corollary 1.2(1)]{WW}, \eqref{wp} holds
for a constant rate function $\beta(r)\equiv \beta>0.$
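For this kernel the basic quantities are explicit. For instance, in dimension $m=1$ one has $q(x,y)\mu(\d y)= c_{1,\aa}|x-y|^{-(1+\aa)}\d y$, so the constant $\ll$ (which we take to be $\sup_x\int_E (1\wedge d(x,y)^2)\,q(x,y)\,\mu(\d y)$, as in the standing assumptions, and up to the normalizing constant $c_{1,\aa}$) is finite and independent of $x$. A quick numerical confirmation in Python:
\begin{verbatim}
from scipy.integrate import quad

def lam(alpha):   # int_R (1 /\ u^2) |u|^{-1-alpha} du, with u = x - y
    a = quad(lambda u: u**(1.0 - alpha), 0.0, 1.0)[0]
    b = quad(lambda u: u**(-1.0 - alpha), 1.0, float("inf"))[0]
    return 2.0 * (a + b)   # closed form: 2/(2-alpha) + 2/alpha

for alpha in (0.5, 1.0, 1.5):
    print(alpha, lam(alpha), 2.0/(2.0 - alpha) + 2.0/alpha)
\end{verbatim}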
Now, let $V$ be measurable satisfying
\begin{equation}\label{exa-wp-01}
-s\varepsilon\log(1+|x|)-K \le V(x)\le
(1-s)\varepsilon\log(1+|x|)+K,\quad x\in \R^m
\end{equation}
for some constants $\varepsilon\in [0,\alpha)$, $s \in [0,1]$ and
$K\in \R$ such that $\mu(e^V)=1$. Then the weak Poincar\'{e} inequality \eqref{th-wp-2} holds
with
\begin{equation}\label{exa-wp-02}
\beta_V(r)=C\big(1+r^{-\varepsilon/(\aa-(1-s)\vv)}\big)
\end{equation}
for some constant $C>0$.
Moreover, the assertion is sharp in the following two cases with $s=0$.
\beg{enumerate}\item[(i)] $\beta_V$ in \eqref{exa-wp-02} is sharp, i.e.\ $\beta_V$ cannot be replaced by an essentially smaller function: if
$$\lim_{r\to0} r^{\vv/(\aa-\vv)}\beta_V(r) =0,$$ then the weak Poincar\'{e} inequality \eqref{th-wp-2} does not hold.
\item[(ii)] The constant $K$ cannot be replaced by an unbounded function: for \begin{equation}\label{exa-wp-03}
V(x)=
\varepsilon\log(1+|x|)+ \phi(|x|)+K_0,
\end{equation}
where $\vv\in [0,\aa)$,
$\phi:[0,+\infty)\rightarrow [0,+\infty)$
is an increasing function with $\phi(r)\uparrow\infty$ as $r\uparrow\infty$ such that
$\mu(e^{\varepsilon\log(1+|\cdot|)+ \phi(|\cdot|)})<\infty$, and $K_0\in\R$ is such that $\mu(e^V)=1$, the weak Poincar\'e inequality
(\ref{th-wp-2}) with the rate function $\beta_{V}$ given by
(\ref{exa-wp-02}) does not hold. \end{enumerate}
\end{exa}
\begin{proof} Take $k=\frac{n}{2}$ (for simplicity we assume that $n$ is even). Then
there is a constant $c_1>0$ such that for $n$ large enough
\begin{equation*}
\begin{split}
&\tt K_{n,\frac{n}{2}}(V) \le \vv \log(1+n)+c_1,\ \ \ \tt Z_n(V)\le \vv(1-s) \log(1+n)+c_1,\\
&\tt\eta_{n,\frac{n}{2}}+\tt \gg_{\frac{n}{2}} +\mu\big(\rr>\frac{n}{2}\big)\le \frac{c_1}{n^{\aa}},
\ \ \mu_V(\rr>n)\le \frac{ c_1}{n^{\aa-(1-s)\vv}}.
\end{split}
\end{equation*}
Since $\vv \in [0,\aa)$, we see that (\ref{th-wp-1}) holds and there exists $c_2>0$ such that
$$6\mu_V(\rr>n)
+2\beta\e^{\tt Z_{n}(V)} \big(4\tt \eta_{n,\frac{n}{2}}+\tt \gg_{\frac{n}{2}} +4\ll\mu(\rr>\frac{n}{2})\big)\le \ff{c_2}{2n^{\aa-(1-s)\vv}},\ \ n\ge 1.$$
Thus, in the definition of $\beta_V$ for small $r>0$ we may take $n= (\ff{c_2}r)^{\ff 1{\aa- (1-s)\vv}}$ to get
$$\beta_V(r)\le 2\beta \e^{\tt K_{n,\frac{n}{2}}(V)}\le c_3r^{-\vv/(\aa-(1-s)\vv)}$$ for some constant $c_3>0.$
Therefore, there exists $C>0$ such that the weak Poincar\'{e} inequality \eqref{th-wp-2} holds for $\beta_V$ given in
\eqref{exa-wp-02}.
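The choice $n=(c_2/r)^{1/(\aa-(1-s)\vv)}$ in the argument above balances the error term against $r$; the following Python sketch (all constants set to illustrative values) confirms that the resulting bound indeed scales like $r^{-\vv/(\aa-(1-s)\vv)}$ as $r\to 0$:
\begin{verbatim}
alpha, eps, s, c2, beta = 1.5, 0.5, 0.0, 1.0, 1.0
for r in (1e-2, 1e-4, 1e-6):
    n = (c2 / r)**(1.0 / (alpha - (1.0 - s) * eps))
    bound = 2.0 * beta * (1.0 + n)**eps   # ~ 2 beta e^{K} up to the factor e^{c_1}
    # this ratio should stabilize as r -> 0:
    print(r, bound * r**(eps / (alpha - (1.0 - s) * eps)))
\end{verbatim}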
It remains to verify (i) and (ii), where the assertion (i) has been included in \cite[Corollary 1.2(3)]{WW}. So, it suffices to consider (ii).
Now, assume
that the weak Poincar\'e inequality holds for
$(\EE_{V},\scr{D}(\EE_{V}))$ with the rate function
$$
\beta_V(r)=c_4(1+r^{-\frac{\vv}{\alpha-\vv}})
$$ for some constant $c_4>0$. For any $n\ge1$, let $f_n\in C^\infty(\R^m)$ be as in part (b) of the proof of Example \ref{ex-sp-1}. By (\ref{AB0}) and noting that $\mu_V(f_n)^2\le \ff 1 2 \mu_V(f_n^2)$ for large $n$, there exist constants $c_5, c_6>0$ such that for $n$ large enough,
$$\EE_V(f_n)\le c_5 n^{-\aa},\quad \mu_V(f_n^2)-\mu_V(f_n)^2\ge \frac{c_6\e^{\phi(n)}}{n^{\aa-\vv}}.$$ Combining these with \eqref{th-wp-2}, we obtain that there exists $c_7>0$ such that for all $r>0$ and for $n$ large enough,
$$\frac{c_7\e^{\phi(n)}}{n^{\aa-\vv}}\le \frac{\bb_V(r)}{n^\aa}+r.$$
Taking $r=\frac{c_7}{2}\frac{\e^{\phi(n)}}{n^{\aa-\vv}}$ in the inequality above, we arrive at
\begin{equation}\label{exa-wp-04}\bb_V\Big(\frac{c_7}{2}\frac{\e^{\phi(n)}}{n^{\aa-\vv}}\Big) \ge \frac{c_7}{2}n^\vv \e^{\phi(n)}.\end{equation}
Since there is $c_8>0$ such that for $n$ large enough
$$\frac{\e^{\phi(n)}}{n^{\aa-\vv}}\le c_8\int_{\{|x|\ge n\}}\frac{\e^{\phi(|x|)}}{(1+|x|)^{m+\aa-\vv}}
\,\d x,$$ it holds that $$\lim_{n\to \infty}\frac{\e^{\phi(n)}}{n^{\aa-\vv}}=0,$$ which, along with the definition of $\bb_V$, yields that there is a constant $c_9>0$ such that for $n$ large enough
$$ \bb_V\Big(\frac{c_7\e^{\phi(n)}}{2n^{\aa-\vv}}\Big)\le c_9n^\vv \e^{-\vv\phi(n)/(\aa-\vv)},$$
which is a contradiction to \eqref{exa-wp-04} since $\lim_{r\to\infty}\phi(r)=\infty.$
Therefore, the weak Poincar\'e
inequality does not hold with the rate function
(\ref{exa-wp-02}).
\end{proof}
\section{Introduction}
Linear Quadratic Gaussian Synthesis is a (perhaps the) standard approach to constructing a compensator for a finite
or an infinite dimensional linear system. It consists of two parts. One first solves a Linear Quadratic Regulator problem for
a linear state feeback law for a finite number of actuators. If the state were fully measureable this linear state feedback would asymptotically stabilize the system. But the state is
usually not fully measureable, instead a finite number of linear functionals of state are available and these measurements are corrupted by noise.
A Kalman filter is used to process the measurents to get an estimate of the whole state. The certainty equivalence principle is envoked and the linear feedback is applied to estimate of the state. The result is a compensator of the same dimension as the original system. It can be shown that
spectrum of the combined system and its compensator is the union of the spectrum of system under linear full state feedback with the spectrum of the error
dynamics of the Kalman filter. If both these spectra lie in the open left half plane then the compensator will stabilize the original system.
This LQG approach works over both finite and infinite time horizons. In this paper we will only treat infinite horizons. To find the gain of the optimal
linear state feedback one has to solve an algebraic Riccati equation and to find the gain of the Kalman filter one has to solve a dual algebraic
Riccati equation. Good software exists to solve these equations in low to medium dimensions but they can be difficult to solve in high or infinte
dimensions. In infinite dimensions other difficulties can arise. The actuation could be at points in the spatial domain, e.g., boundary control.
The measuements could also be at points. Such systems are not "state linear systems" in the definition of Curtain an Zwart \cite{CZ95}, \cite{CZ20}. State linear systems must
have bounded input and output linear functionals.
The usual approach to dealing with a system with point actuation and/or point sensing is to approximate it by a state linear system. Boundary control actuation is replaced
by intense actuation over a short interval adjacent to the boundary, in effect the control input multiplies a shaping function that approximates a delta function at the boundary. Point sensing is replaced
by integration of the state against a shaping function that approximates a delta function at the measurment point. For more details about this we refer the reader to Chapter 6 of \cite{CZ95} and Chapter 9 of \cite{CZ20} and their extensive references. For more on boundary control of systems described by PDEs see the treatises of Lions \cite{JL71}, Lasiecka-Triggiani \cite{LT00} and Krstic-Smyshlyaev \cite{KS08}. Hulsing \cite{Hu99} and Burns-Hulsing have addressed the computational issues associated with boundary control.
We take a different approach, in effect, we model boundary and other point actuators by delta functions and we model point sensing also by delta functions. We use a method that we call completing the square
to overcome the mathematical technicalities associated with these delta functions. To keep the discussion concrete we limit our consideration to a rod heated/cooled at boundary and other points. We make noisy meaurements of its
temperature at some other points. Because we focus on systems modeled by the heat equation we are able to give explicit solutions to the LQR and Kalman filtering equations using the simple technique of completing the square.
In particular the infinte dimensional analogs of the algebraic Riccati equations are elliptic PDEs that we call Riccati PDEs. These Riccati PDEs can be explicitly solved in terms of the eigenfunctions of the Laplacian. We restrict our attention
to Neumann boundary conditions but our methods readily extend to other self adjoint boundary conditions for the Laplacian.
We started using the completing the square technique on a distributed control problem \cite{Kr20a} but we quickly realized that it works well for boundary control problems.
In \cite{Kr20b} we treated the LQR control of a rod heated/cooled at one end and insulated at the other. The boundary conditions were Neumann at the insulated end and Robin at the controlled end. We have also used the completing the square technique for the LQR boundary control of the wave equation \cite{Kr21a} and the beam equation \cite{Kr21b}.
The rest of the paper is as follows. In the next section we treat the LQR control of the rod heated/cooled at the boundary and other points. Section 3 contains an example of heating/cooling at both ends of the rod and Section 4 contains an example of heating/cooling at both ends of the rod and at its midpoint. In Section 5 we derive the Kalman filter for the rod with noisy measurements at several points by
converting the filtering problem into a family of LQR problems. Section 6 contains an example of the LQG synthesis of a Kalman filter with an LQR state feedback law. The conclusion is found in Section 7.
\section{LQR for a Rod Heated/Cooled at Multiple Points}
Consider a rod of length one that is heated/cooled at multiple locations. We let $x\in [0,1]$ denote position
on the rod and let $z(x,t)$ denote the temperature of the rod at position $x$ at time $t$. We assume the
rod can be heated or cooled at $0=\xi_1 <\xi_2<\ldots< \xi_m=1$. We let $u_k(t)$ be the heating/cooling flux applied to the
rod at $\xi_k$ for $k=1,\ldots,m$ and $u(t)=[u_1(t), \ldots u_m(t)]'$. We model the rod by these equations
\begin{eqnarray*}
0&=&\frac{\partial z}{\partial t} (x,t) -\frac{\partial^2 z}{\partial x^2}(x,t) \\
0&=&z(\xi_k^-,t) -z(\xi_k^+,t) \mbox{ for } k=2,\ldots, m-1\\
\beta_k u_k(t)&=&\frac{\partial z}{\partial x} (\xi_k^-,t) -\frac{\partial z}{\partial x} (\xi_k^+,t) \mbox{ for } k=1,\ldots, m
\\
z(x,0) &=& z^0(x)
\end{eqnarray*}
where
\begin{eqnarray*}
z(\xi_k^+,t)=\lim_{x\to \xi_k^+}z(x,t),&&z(\xi_k^-,t)=\lim_{x\to \xi_k^-}z(x,t)\\
\frac{\partial z}{\partial x} (\xi_k^+,t) =\lim_{x\to \xi_k^+}\frac{\partial z}{\partial x} (x,t),&&\frac{\partial z}{\partial x} (\xi_k^-,t) =\lim_{x\to \xi_k^-}\frac{\partial z}{\partial x} (x,t)
\end{eqnarray*}
with the understanding that $\frac{\partial z}{\partial x} (x,t) =0$ outside of $[0,1]$ and $\beta_k\ge 0$. If the rod is not heated/cooled at its endpoints we set $\beta_1=0$ and/or $\beta_m=0$.
The open loop system, $ u_k(t)=0$ for $k=1,\ldots,m$, reduces to the standard heat equation with Neumann boundary conditions,
\begin{eqnarray*}
\frac{\partial z}{\partial t} (x,t) &=&\frac{\partial^2 z}{\partial x^2}(x,t) \\
\frac{\partial z}{\partial x} (0,t)=0, &&\frac{\partial z}{\partial x} (1,t)=0
\end{eqnarray*}
so the open loop eigenvalues are $\lambda_n=-n^2\pi^2$ for $n=0,1,2,\ldots$
and the orthonormal eigenvectors are
\begin{eqnarray} \label{olef}
\phi^0(x)=1,&&\phi^n(x)=\sqrt{2} \cos n\pi x
\end{eqnarray}
for $n=1,2,\ldots$. Notice that $\lambda_0=0$ so the open loop system
is only neutrally stable. The rest of the eigenvalues are rapidly going to $-\infty$.
We wish to stabilize the rod to some uniform temperature which we conveniently take to be $z=0$ by using a Linear Quadratic Regulator (LQR).
We choose some $Q(x)\ge 0$ and an $m\times m$ positive definite matrix $R>0$
and we seek to minimize
\begin{eqnarray} \label{crit}
\int_0^\infty \int_0^1 z(x,t)Q(x)z(x,t)\ dx+ u'(t)Ru(t)\ dt
\end{eqnarray}
subject to the above dynamics.
Let $P(x_1,x_2)$ be any symmetric function, $P(x_1,x_2)=P(x_2,x_1)$, which is continuous on the unit square $S=[0,1]^2$ and suppose there is a control trajectory $u(t)$ such that $z(x,t)\to 0$ as $t\to \infty $. Then
by the Fundamental Theorem of Calculus
\begin{eqnarray} \nonumber
&&0=\iint_{S}z^0(x_1)P(x_1,x_2)z^0(x_2)\ dA +\int_0^\infty \iint_{S} {d\over dt}\left(z(x_1,t)P(x_1,x_2)z(x_2,t)\right)\ dA \ dt\\
&&0=\iint_{S}z^0(x_1)P(x_1,x_2)z^0(x_2)\ dA+\int_0^\infty \iint_{S} \frac{\partial^2 z}{\partial x_1^2}(x_1,t)P(x_1,x_2)z(x_2,t)\ dA \ dt\nonumber \\
&& +\int_0^\infty \iint_{S} z(x_1,t)P(x_1,x_2)\frac{\partial^2 z}{\partial x_2^2}(x_2,t)\ dA \ dt \label{FTC}
\end{eqnarray}
where $dA=dx_1dx_2$.
We assume that $P(x_1,x_2)$ satisfies Neumann boundary conditions in each variable
\begin{eqnarray*}
\frac{\partial P}{\partial x_1}(0,x_2)&=\ 0 \ =& \frac{\partial P}{\partial x_1}(1,x_2)\\
\frac{\partial P}{\partial x_2}(x_1,0)&=\ 0 \ =& \frac{\partial P}{\partial x_2}(x_1,1)
\end{eqnarray*}
Now we formally integrate by parts twice with respect to $x_1$ on each subinterval $[\xi_k,\xi_{k+1}]$ ignoring the fact that we only assumed
$P(x_1,x_2)$ is continuous,
\begin{eqnarray}
&&\int_{\xi_k}^{\xi_{k+1}} \frac{\partial^2 z}{\partial x_1^2}(x_1,t)P(x_1,x_2)z(x_2,t)\ dx_1=\int_{\xi_k}^{\xi_{k+1}} z(x_1,t)\frac{\partial^2 P}{\partial x_1^2}(x_1,x_2) z(x_2,t)\ dx_1\nonumber\\
&& +\left[\frac{\partial z}{\partial x_1}(x_1,t)P(x_1,x_2) z(x_2,t)\right]_{x_1=\xi_k^+}^{x_1=\xi_{k+1}^-} -\left[z(x_1,t)\frac{\partial P}{\partial x_1}(x_1,x_2) z(x_2,t)\right]_{x_1=\xi_k^+}^{x_1=\xi_{k+1}^-} \nonumber\\
\label{int_k}
\end{eqnarray}
Since we assumed that $\frac{\partial z}{\partial x_1}(x,t)=0$ off of $[0,1]$ we have
\begin{eqnarray*}
\frac{\partial z}{\partial x_1}(\xi_1^-,t)&=& \frac{\partial z}{\partial x_1}(\xi_m^+,t)\ =\ 0
\end{eqnarray*}
We sum (\ref{int_k}) over $k=1,\ldots,m-1$; the $\left[z\,\frac{\partial P}{\partial x_1}\, z\right]$ boundary terms telescope by the continuity of $z$ and (formally) of $\frac{\partial P}{\partial x_1}$ at the interior points $\xi_k$ and vanish at the endpoints by the Neumann boundary conditions on $P$, and we obtain \begin{eqnarray*}
&& \int_0^1 \frac{\partial^2 z}{\partial x_1^2}(x_1,t)P(x_1,x_2)z(x_2,t)\ dx_1\\
&&=\sum_{k=1}^{m-1} \int_{\xi_k}^{\xi_{k+1}} \frac{\partial^2 z}{\partial x_1^2}(x_1,t)P(x_1,x_2)z(x_2,t)\ dx_1\\
&&=\int_0^1 z(x_1,t)\frac{\partial^2 P}{\partial x_1^2}(x_1,x_2) z(x_2,t)\ dx_1+\sum_{k=1}^{m}\left( \frac{\partial z}{\partial x_1}(\xi^-_{k},t)-
\frac{\partial z}{\partial x_1}(\xi^+_{k},t)\right)P(\xi_{k},x_2) z(x_2,t)\\
&&=\int_0^1 z(x_1,t)\frac{\partial^2 P}{\partial x_1^2}(x_1,x_2) z(x_2,t)\ dx_1+\sum_{k=1}^{m}\beta_k u_k(t)P(\xi_{k},x_2) z(x_2,t)
\end{eqnarray*}
Similarly
\begin{eqnarray*}
&& \int_0^1 z(x_1,t)P(x_1,x_2)\frac{\partial^2 z}{\partial x_2^2}(x_2,t)\ dx_2\\
&&=\int_0^1 z(x_1,t)\frac{\partial^2 P}{\partial x_2^2}(x_1,x_2) z(x_2,t)\ dx_2+\sum_{k=1}^{m}z(x_1,t)P(x_1,\xi_{k}) \beta_k u_k(t)
\end{eqnarray*}
We plug these into (\ref{FTC}) and obtain the identity
\begin{eqnarray} \label{FTC1}
&&0=\iint_{S}z^0(x_1)P(x_1,x_2)z^0(x_2)\ dA\\
&&+\int_0^\infty \iint_{S} z(x_1,t)\nabla^2 P(x_1,x_2)z(x_2,t)\nonumber \\
&&+\sum_{k=1}^{m}\beta_k u_k(t)P(\xi_{k},x_2) z(x_2,t) +\sum_{k=1}^{m}z(x_1,t)P(x_1,\xi_{k}) \beta_k u_k(t)\ dA \ dt \nonumber
\end{eqnarray}
where $\nabla^2 P(x_1,x_2)$ denotes the two dimensional Laplacian of $P$.
We add the right side of this identity (\ref{FTC1}) to the criterion (\ref{crit}) to be minimized to get the equivalent criterion
\begin{eqnarray} \label{crit1}
&&\iint_S z^0(x_1)P(x_1,x_2)z^0(x_2)\ dA \\
&&+\int_0^\infty \iint_{S}z(x_1,t)\delta(x_1-x_2)Q(x_1)z(x_2,t)+ u'(t)Ru(t) \nonumber\\
&&+ z(x_1,t)\nabla^2 P(x_1,x_2)z(x_2,t)\nonumber \\
&&+\sum_{k=1}^{m}\beta_k u_k(t)P(\xi_{k},x_2) z(x_2,t) +\sum_{k=1}^{m}z(x_1,t)P(x_1,\xi_{k}) \beta_k u_k(t)\ dA \ dt \nonumber
\end{eqnarray}
where $\delta(x)$ is the Dirac delta function.
We would like to find a function $K(x)$ taking values in ${I\!\!R}^{m\times 1}$ such that
\begin{eqnarray}
&&\iint_S \left(u(t)-K(x_1)z(x_1,t)\right)'R\left(u(t)-K(x_2)z(x_2,t)\right) dA
\\
&&=\iint_Sz(x_1,t)\delta(x_1-x_2)Q(x_1)z(x_2,t)+ u'(t)Ru(t) \nonumber+ z(x_1,t)\nabla^2 P(x_1,x_2)z(x_2,t)\nonumber \\
&&+\sum_{k=1}^{m}\beta_k u_k(t)P(\xi_{k},x_2) z(x_2,t) +\sum_{k=1}^{m}z(x_1,t)P(x_1,\xi_{k}) \beta_k u_k(t) \ dA\nonumber
\end{eqnarray}
Clearly the terms quadratic in $u(t)$ agree so we compare terms bilinear in $ z(x_2,t)$
and $u_k(t)$. This yields
\begin{eqnarray} \label{k_j}
-\sum_{j=1}^m R_{k,j}K_j(x_2) =\beta_kP(\xi_k,x_2)
\end{eqnarray}
Let $\beta$ be the $m\times m$ diagonal matrix with diagonal entries $\beta_1,\ldots,\beta_m$ and
$\bar{P}(x_2)=[P(\xi_1,x_2),\ldots,P(\xi_m,x_2)]'$
then (\ref{k_j}) becomes the equation
\begin{eqnarray*}
-RK(x_2) &=& \beta \bar{P}(x_2)
\end{eqnarray*}
Therefore we define
\begin{eqnarray*}
K(x) &=& -R^{-1}\beta \bar{P}(x)
\end{eqnarray*}
Next we compare terms bilinear in $ z(x_1,t)$ and $ z(x_2,t)$,
\begin{eqnarray*}
&&\iint_S z(x_1,t)\left(\nabla^2 P(x_1,x_2)+\delta(x_1-x_2)Q(x_1)\right)z(x_2,t)\ dA\\
&&=
\iint_S \left(\beta \bar{P}(x_1)z(x_1,t)\right)'R^{-1}\beta \bar{P}(x_2)z(x_2,t)\ dA
\end{eqnarray*}
This will hold if $P(x_1,x_2)$ is a solution what we call a Riccati PDE,
\begin{eqnarray} \label{rpde}
\nabla^2 P(x_1,x_2)+\delta(x_1-x_2)Q(x_1)= \left(\beta \bar{P}(x_1)\right)'R^{-1}\beta \bar{P}(x_2)
\end{eqnarray}
Recall we only assumed that $P(x_1,x_2)$ is continuous on the unit square so the Riccati
PDE must be interpreted in the weak sense. This means that given any $C^2$ function $\psi(x)$ satisfying
Neumann boundary conditions it must be true that
\begin{eqnarray*}
0=\iint_S \left(\nabla^2 P(x_1,x_2)+\delta(x_1-x_2)Q(x_1)- \left(\beta \bar{P}(x_1)\right)'R^{-1}\beta \bar{P}(x_2)\right)
\psi(x_1)\psi(x_2)\ dA
\end{eqnarray*}
where the integral is evaluated using integration by parts.
We assume $Q(x)=q>0 $ then formally
\begin{eqnarray*}
Q(x)\delta(x_1-x_2)&=& q \sum_{n_1,n_2=0}^\infty \delta_{n_1,n_2} \cos n_1\pi x_1\cos n_2\pi x_2\
\end{eqnarray*}
where $\delta_{n_1,n_2} $ is the Kronecker delta.
We assume that $P(x_1,x_2)$ has a similar expansion
\begin{eqnarray*}
P(x_1,x_2)&=& \sum_{n_1,n_2=0}^\infty \delta_{n_1,n_2} P^{n_1,n_2}\cos n_1\pi x_1\cos n_2\pi x_2
\end{eqnarray*}
and we define
\begin{eqnarray*}
\gamma^{n,n}&=&\left[ \begin{array}{ccccccccc} \cos n\pi \xi_1\\ \vdots\\ \cos n\pi \xi_m\end{array}\right]' \beta' R^{-1} \beta \left[ \begin{array}{ccccccccc} \cos n\pi \xi_1\\ \vdots\\ \cos n\pi \xi_m\end{array}\right]
\end{eqnarray*}
Then the Riccati PDE (\ref{rpde}) reduces to a sequence of quadratic equations
\begin{eqnarray*}
-2n^2\pi^2 P^{n,n}+q &=& \gamma^{n,n}\left(P^{n,n}\right)^2
\end{eqnarray*}
with roots
\begin{eqnarray} \label{Pnn}
P^{n,n}&=& -n^2\pi^2+\sqrt{n^4\pi^4+ \gamma^{n,n} q}
\end{eqnarray}
The roots are positive since $q>0$ and $\gamma^{n,n}> 0 $ if at least one $\beta_i\ne 0$. But as we now show they are going to zero like ${1\over n^2}$. The Mean Value Theorem implies
there exists an $s$ between $n^4\pi^4 $ and $n^4\pi^4+ \gamma^{n,n} q$
such that
\begin{eqnarray*}
P^{n,n}&=&{1\over 2\sqrt{s}} \gamma^{n,n} q
\end{eqnarray*}
The maximum of ${1\over 2\sqrt{s}} $ between $n^4\pi^4 $ and $n^4\pi^4+ \gamma^{n,n} q$
occurs at $s=n^4\pi^4 $ so we get the estimate
\begin{eqnarray*}
P^{n,n}&\le &{1\over 2n^2\pi^2} \gamma^{n,n} q
\end{eqnarray*}
for $n>0$.
Hence the series
\begin{eqnarray} \label{Psum}
P(x_1,x_2) &=& \sum_{n=0}^\infty P^{n,n}\cos n\pi x_1\cos n\pi x_2
\end{eqnarray}
converges uniformly to a continuous function which is a weak solution of the Riccati PDE (\ref{rpde}).
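The series converges quickly, so the solution and the optimal gains are easy to evaluate numerically. The Python sketch below (the helper names are ours) computes $P^{n,n}$ from (\ref{Pnn}) and assembles the truncated gains $K(x)=-R^{-1}\beta \bar{P}(x)$ on a grid; with actuators at both endpoints it reproduces the values listed in the next section.
\begin{verbatim}
import numpy as np

def riccati_modes(xi, beta_diag, R, q, N):
    # P^{n,n}, n = 0,...,N, from the closed form above
    Rinv, B = np.linalg.inv(R), np.diag(beta_diag)
    P = np.empty(N + 1)
    for n in range(N + 1):
        c = np.cos(n * np.pi * np.asarray(xi))
        gamma = c @ B @ Rinv @ B @ c            # gamma^{n,n}
        P[n] = -n**2 * np.pi**2 + np.sqrt(n**4 * np.pi**4 + gamma * q)
    return P

def gains(x, xi, beta_diag, R, P):
    # rows are the truncated gains K_1(x), ..., K_m(x)
    n = np.arange(len(P))
    Pbar = (P * np.cos(np.pi * np.outer(xi, n))) @ np.cos(np.pi * np.outer(n, x))
    return -np.linalg.solve(R, np.diag(beta_diag) @ Pbar)

P = riccati_modes([0.0, 1.0], [1.0, 1.0], np.eye(2), 1.0, 50)
print(P[:3])   # ~ 1.4142, 0.1008, 0.0253
\end{verbatim}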
\section{Example One}
We first consider a simple example with heating/cooling at both ends of the rod, $m=2$, $\xi_1=0$,
$\xi_2=1$, $\beta_1=\beta_2=1$, $q=1$, $R=[1,0;0,1]$ and $\gamma^{n,n}=2$.
Then
\begin{eqnarray*}
K_1(x)&=& - \sum_{n=0}^\infty P^{n,n}\cos n\pi x \\
K_2(x)&=&- \sum_{n=0}^\infty (-1)^n P^{n,n}\cos n\pi x
\end{eqnarray*}
The closed loop "boundary" conditions are
\begin{eqnarray*}
\frac{\partial z}{\partial x}(0^+,t)&=& \sum_{n=0}^\infty\int_0^1 P^{n,n}\cos n\pi x \ z(x,t)\ dx \\
\frac{\partial z}{\partial x}(1^-,t)&=& -\sum_{n=0}^\infty\int_0^1 (-1)^n P^{n,n}\cos n\pi x \ z(x,t)\ dx
\end{eqnarray*}
Notice that these nonstandard "boundary" conditions are linear. A straightforward calculation
from (\ref{Pnn}) yields
$P^{0,0}=1.4142$, $P^{1,1}=0.1008$, $P^{2,2}= 0.0253$, $P^{3,3}= 0.0113$, $P^{4,4}= 0.0063$ and $P^{5,5}= 0.0041$.
An eigenfunction of the Laplacian under any linear "boundary" conditions
is of the form
\begin{eqnarray*}
\psi(x)&=& a\cos \nu x+b\sin \nu x
\end{eqnarray*}
for some $\nu$ and the corresponding eigenvalue is $-\nu^2$.
For this example the "boundary" conditions are
\begin{eqnarray} \label{BC1}
b \nu &=&\sum_{n=0}^\infty \int_0^1 P^{n,n} \cos n\pi x \left( a \cos \nu x+b\sin \nu x\right)\ dx \\ \nonumber
\\
a \nu \sin \nu - b \nu \cos \nu &=& \sum_{n=0}^\infty\int_0^1(-1)^n P^{n,n} \cos n\pi x \left( a \cos \nu x+b\sin \nu x\right)\ dx\nonumber \\
\label{BC2}
\end{eqnarray}
Due to the symmetry of the problem the eigenvectors must be invariant under the action of replacing $x$ by $1-x$.
So the eigenvectors are either $\psi(x)=\cos 2k\pi x$ or $\psi(x)=\sin (2k+1)\pi x$ for some nonnegative integer $k$.
If $k=0$ and $\psi(x)=1$ the "boundary" conditions become
\begin{eqnarray*}
0&=&\sum_{n=0}^\infty \int_0^1 P^{n,n} \cos n\pi x \ dx\ =\ P^{0,0}\\
\\
0&=&\sum_{n=0}^\infty \int_0^1 (-1)^n P^{n,n} \cos n\pi x \ dx \ =\ P^{0,0}
\end{eqnarray*}
so $\psi(x)=1$ is not an eigenfunction since $P^{0,0}>0$.
If $k>0$ and $\psi(x)=\cos 2k\pi x$ the "boundary" conditions become
\begin{eqnarray*}
0 &=&\sum_{n=0}^\infty \int_0^1 P^{n,n} \cos n\pi x \cos 2k\pi x\ dx\ =\ {P^{2k,2k}\over 2}\\
\\
0 &=&\sum_{n=0}^\infty \int_0^1 (-1)^n P^{n,n} \cos n\pi x \cos 2k\pi x \ dx \ =\ {P^{2k,2k}\over 2}
\end{eqnarray*}
so $\psi(x)=\cos 2k\pi x$ is not an eigenfunction since $P^{2k,2k}>0$.
If $k\ge 0$ and $\psi(x)=\sin (2k+1) \pi x$ the "boundary" conditions are
\begin{eqnarray*}
(2k+1)\pi &=& \sum_{n=0}^\infty \int_0^1 P^{n,n} \cos n\pi x \sin (2k+1)\pi x\ dx \\
\\
(2k+1)\pi &=& \sum_{n=0}^\infty \int_0^1 (-1)^n P^{n,n} \cos n\pi x \sin (2k+1)\pi x\ dx
\end{eqnarray*}
If $n$ is odd then $n\pm (2k+1)$ are even
so
\begin{eqnarray*}
&&\int_0^1 \cos n\pi x \sin (2k+1) \pi x\ dx \\
&&={1\over 2} \int_0^1 \sin (n+ (2k+1))\pi x +\sin (n-(2k+1))\pi x \ dx=0
\end{eqnarray*}
This shows that the two boundary conditions are identical.
So the closed loop eigenfunctions are $\psi_k(x)=\sin (2k+1) \pi x$ and the closed loop eigenvalues are $\mu_k =- (2k+1)^2 \pi^2$
for $k=0,1,2,\ldots$.
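The vanishing of these odd-$n$ integrals is also easy to confirm numerically (a Python check; the tolerance is ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

for n in (1, 3, 5, 7):                        # odd n only
    for k in (0, 1, 2):
        val = quad(lambda x: np.cos(n * np.pi * x)
                             * np.sin((2 * k + 1) * np.pi * x), 0.0, 1.0)[0]
        assert abs(val) < 1e-12, (n, k, val)
print("all odd-n cross terms vanish")
\end{verbatim}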
Being able to heat/cool the rod at both ends is a big improvement over being able to heat/cool the rod at just one end.
As we saw in \cite{Kr20b} for control only at one end the least stable closed loop eigenvalue is $-1.0409$. For control at both ends the least stable closed loop eigenvalue is $-\pi^2=-9.8696$.
\section{Example Two}
We assume that the rod can be heated/cooled at both endpoints and also at the midpoint,
$m=3$, $\xi_1=0, \ \xi_2=0.5,\ \xi_3=1$, $\beta_1=1,\ \beta_2=2,\ \beta_3=1$, $q=1$, $R=[1,0,0;0,1,0;0,0,1]$ and
$\gamma^{n,n}=6$ if $n$ is even and $\gamma^{n,n}=2$ if $n$ is odd.
The $P^{n,n}$ are given by (\ref{Pnn}).
The optimal feedback gains are
\begin{eqnarray}
\label{K1}
K_1(x)&=& -\sum_{n=0}^\infty P^{n,n} \cos n\pi x\\
\label{K2}
K_2(x)&=&-2\sum_{k=0}^\infty (-1)^{k} P^{2k,2k}\cos 2k\pi x\\
\label{K3}
K_3(x)&=& -\sum_{n=0}^\infty(-1)^n P^{n,n} \cos n\pi x
\end{eqnarray}
We assume that a closed loop eigenfunction $\psi(x)$ is composed of different sinusoids on $[0,0.5]$ and $[0.5,1]$
with a common frequency $\nu$. Because the system is
symmetric with respect to replacing $x$ with $1-x$ we expect a closed loop eigenfunction to reflect this symmetry,
\begin{eqnarray*}
\psi (x)&=& \left\{\begin{array}{ccc} a \cos \nu x+b \sin \nu x& \mbox{ if } & 0\le x\le 0.5\\ \\
a \cos \nu (1-x)+b \sin \nu (1-x)& \mbox{ if } &0.5\le x\le 1
\end{array}\right.
\end{eqnarray*}
Notice such a solution immediately satisfies the continuity condition,
\begin{eqnarray} \label{cont}
\psi (0.5^-)&=&\psi (0.5^+)
\end{eqnarray}
and
\begin{eqnarray} \label{jump}
\frac{\partial \psi}{\partial x}(0.5^-)&=&-\frac{\partial \psi}{\partial x}(0.5^+)
\end{eqnarray}
The first closed loop "boundary" condition is
\begin{eqnarray*}
b\nu &=& \sum_{n=0}^\infty \int_0^{0.5} P^{n,n} \cos n\pi x\left(a \cos \nu x+b \sin \nu x\right)\ dx
\\
&& +\sum_{n=0}^\infty \int_{0.5}^1 P^{n,n} \cos n\pi x\left(a \cos \nu (1-x)+b\sin \nu (1-x)\right)\ dx\\
\end{eqnarray*}
Now $\cos n\pi (1-x)=(-1)^n \cos n\pi x$ so
\begin{eqnarray*}
\int_{0.5}^1 P^{n,n} \cos n\pi x\left(a \cos \nu (1-x)+b\sin \nu (1-x)\right)\ dx
\\
=\int_0^{0.5} (-1)^n P^{n,n} \cos n\pi x\left(a \cos \nu x+b\sin \nu x\right)\ dx
\end{eqnarray*}
Hence the first closed loop "boundary" condition becomes
\begin{eqnarray*}
b\nu &=& 2\sum_{k=0}^\infty \int_0^{0.5} P^{2k,2k} \cos 2k\pi x\left(a \cos \nu x+b \sin \nu x\right)\ dx
\end{eqnarray*}
The second closed loop "boundary" condition is
\begin{eqnarray*}
\frac{\partial \psi}{\partial x}(0.5^-) -\frac{\partial \psi}{\partial x}(0.5^+) &=& \int_0^1 K_2(x)
\psi(x)\ dx
\end{eqnarray*}
From (\ref{jump}) and (\ref{K2})
we obtain
\begin{eqnarray*}
&&2\nu\left (-a \sin 0.5 \nu +b\cos 0.5 \nu\right)\\
&&= -2\sum_{k=0}^\infty \int_0^{0.5}(-1)^{k} P^{2k,2k}\cos 2k\pi x\left( a\cos \nu x+b\sin \nu x\right)\ dx\\
&&-2\sum_{k=0}^\infty \int_{0.5}^1(-1)^{k} P^{2k,2k}\cos 2k\pi x \left( a\cos \nu (1-x)+b\sin \nu (1-x)\right)\ dx\\
&&=-4\sum_{k=0}^\infty \int_0^{0.5}P^{2k,2k}\cos 2k\pi x\left( a\cos \nu x+b\sin \nu x\right)\ dx
\end{eqnarray*}
If $0.5 \nu$ is an odd multiple of $\pi$, i.e., $\nu=(4j+2)\pi$, then the two boundary conditions are identical so the closed loop
eigenvalues and eigenfunctions are $\mu_j=-(4j+2)^2\pi^2$ and $\psi_j(x)=\sin (4j+2)\pi x$ for $j=0,1,2,\ldots$ and $x\in [0,0.5]$.
In particular the least stable closed loop eigenvalue is $-4\pi^2=-39.4784$ so being able to heat/cool in the middle has a big impact.
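The mode data for this example are quickly tabulated (a Python sketch using the same formulas as in Section 2):
\begin{verbatim}
import numpy as np

xi  = np.array([0.0, 0.5, 1.0])
bet = np.array([1.0, 2.0, 1.0])          # beta_1, beta_2, beta_3; R = I, q = 1
for n in range(6):
    c = np.cos(n * np.pi * xi)
    gamma = np.sum((bet * c)**2)         # gamma^{n,n}: 6 for even n, 2 for odd n
    P = -n**2 * np.pi**2 + np.sqrt(n**4 * np.pi**4 + gamma)
    print(n, gamma, round(P, 4))
\end{verbatim}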
\section{Kalman Filtering of the Heat Equation with Point Observations}
In the previous sections we constructed optimal feedbacks to control the heat equation.
These feedbacks assumed that the full state $z(x,t)$ is known at every $ x\in [0,1]$ and $t\ge 0$. But in practice we may only be able to
measure the temperature at a finite number of points $z(\zeta_1,s), z(\zeta_2,s),\ldots, z(\zeta_p,s)$
where $ 0\le \zeta_1<\zeta_2<\ldots< \zeta_p\le 1$ for $-\infty< s\le t$ and these measurements may be corrupted by noise.
Our model for the measured but uncontrolled rod is
\begin{eqnarray*}
\frac{\partial z}{\partial t}(x,t)&=& \frac{\partial^2 z}{\partial x^2}(x,t) +B v(t)\\
y_i(t)&=&C_i z(\zeta_i,t)+ D_i w_i(t)\\
\frac{\partial z}{\partial x}(0,t)=0,&&\frac{\partial z}{\partial x}(1,t)=0
\end{eqnarray*}
where $v(t),\ w_i(t)$ are independent white Gaussian noise processes. Without loss of generality we can assume $C_i>0$ for $i=1,\ldots,p$.
We
wish to construct an estimate $\hat{z}(x,t)$ for all $x\in[0,1]$ based on the past measurements, $y_i(s),\ s\le t$ . We assume the estimates
are linear functionals of the past observations of the form
\begin{eqnarray*}
\hat{z}(x,t)&=& \int_{-\infty}^t \sum_{i=1}^p {\cal K}_i(x,s-t)y_i(s) \ ds
\end{eqnarray*}
Since we are taking
measurements for $-\infty<s\le t$ we expect the filter to be stationary. Therefore it suffices to solve the problem for $t=0$ and we only need to consider $y_i(s)$ for $-\infty<s\le 0$.
Given a possible set of filter gains ${\cal K}_i(x,s)$ we define ${\cal H}(x,x_1,s)$ as the solution of a driven backward generalized heat equation
\begin{eqnarray*}
\frac{\partial {\cal H}}{\partial s}(x,x_1,s)\ &=& -\frac{\partial^2 {\cal H}}{\partial x_1^2}(x,x_1,s) +\sum_{i=1}^p {\cal K}_i(x,s)C_i \delta(x_1-\zeta_i) \\
{\cal H}(x,x_1,0)&=&\delta(x-x_1)
\end{eqnarray*}
where ${\cal H}(x,x_1,s)$ satisfies Neumann boundary conditions with respect to $x$ and $x_1$.
Then
\begin{eqnarray*}
\hat{z}(x,0)&=& \int_{-\infty}^0 \sum_{i=1}^p \int_0^1 {\cal K}_i(x,s) C_i \delta(x_1-\zeta_i) z(x_1,s)\ dx_1\ ds\\&&+\int_{-\infty}^0 \sum_{i=1}^p {\cal K}_i(x,s) D_iw_i(s)\ ds\\
&=&\int_{-\infty}^0 \int_0^1 \left(\frac{\partial {\cal H}}{\partial s}(x,x_1,s)+ \frac{\partial^2{\cal H}}{\partial x_1^2}(x,x_1,s)\right)z(x_1,s) \ dx_1 \\
&&+\sum_{i=1}^p {\cal K}_i(x,s)D_iw_i(s)\ ds
\end{eqnarray*}
We integrate by parts with respect to $s$ assuming ${\cal H}(x,x_1,s)\to 0$ as $ s\to -\infty$ and obtain
\begin{eqnarray*}
\hat{z}(x,0)-z(x,0)&=&
-\int_{-\infty}^0 \int_0^1 {\cal H}(x,x_1,s) \frac{\partial z}{\partial s}(x_1,s)\\
&&-\frac{\partial^2 {\cal H}}{\partial x_1^2}(x,x_1,s)z(x_1,s) \ dx_1\ ds \\
&&+\int_{-\infty}^0\sum_{i=1}^p {\cal K}_i(x,s) D_iw_i(s)\ ds
\end{eqnarray*}
\begin{eqnarray*}
\hat{z}(x,0)-z(x,0)&=&
-\int_{-\infty}^0 \int_0^1 {\cal H}(x,x_1,s)\left( \frac{\partial^2 z}{\partial x_1^2}(x_1,s)+ Bv(s)\right)\\
&& -\frac{\partial^2 {\cal H}}{\partial x_1^2}(x,x_1,s)z(x_1,s) \ dx_1\ ds \\
&&+\int_{-\infty}^0\sum_{i=1}^p {\cal K}_i(x,s)D_iw_i(s)\ ds
\end{eqnarray*}
We integrate the second term on the right by parts twice with respect to $x_1$ using the Neumann boundary conditions and obtain
\begin{eqnarray*}
\hat{z}(x,0)-z(x,0)&=&
-\int_{-\infty}^0 \int_0^1 {\cal H}(x,x_1,s)Bv(s) \ dx_1\ ds \\
&&+\int_{-\infty}^0\sum_{i=1}^p {\cal K}_i(x,s) D_iw_i(s)\ ds
\end{eqnarray*}
Since $v(s)$ and $w_i(s)$ are independent white Gaussian noise processes, the estimation error $\tilde{z}(x,0)=z(x,0)-\hat{z}(x,0)$ has error variance
\begin{eqnarray} \label{EE} \nonumber
{\rm E}\left(\tilde{z}(x,0)\left(\tilde{z}(x,0)\right)'\right)&=& \int_{-\infty}^0 \int_0^1 {\cal H}(x,x_1,s) B^2{\cal H}(x,x_1,s) \ dx_1 \ ds\\
&&+ \sum_{i=1}^p \int_{-\infty}^0 {\cal K}_i(x,s) D^2_i {\cal K}_i(x,s) \ ds
\end{eqnarray}
For each $x\in [0,1] $ this is a backward LQR optimal control problem with infinite dimensional state $x_1\to {\cal H}(x,x_1,s)$ and $p$ dimensional control ${\cal K}_i(x,s), \ i=1,\ldots,p$.
Our goal is to minimize for each $x$ the error variance subject to the backward dynamics
\begin{eqnarray*}
\frac{\partial {\cal H}}{\partial s}(x,x_1,s)\ &=& -\frac{\partial^2 {\cal H}}{\partial x_1^2}(x,x_1,s) +\sum_{i=1}^p {\cal K}_i(x,s)C_i \delta(x_1-\zeta_i)
\end{eqnarray*}
for $-\infty<s\le 0$. The terminal condition is
\begin{eqnarray*}
{\cal H}(x,x_1,0)&=&\delta(x-x_1)
\end{eqnarray*}
and the boundary conditions are
\begin{eqnarray*}
\frac{\partial {\cal H}}{\partial x_1}(x,0,s)=0,&& \frac{\partial {\cal H}}{\partial x_1}(x,1,s)=0
\end{eqnarray*}
For any $x\in[0,1]$, let $P(x,x_1,x_2)$ be any continuous function symmetric with respect to $x_1,x_2$,
$P(x,x_1,x_2)=P(x,x_2,x_1)$. Then, given that $ {\cal H}(x,x_1,s)\to 0$ as $s\to -\infty$,
\begin{eqnarray*}
&&0=\iint_S P(x,x_1,x_2){\cal H}(x,x_1,0){\cal H}(x,x_2,0) \ dA \\
&&-\int_{-\infty}^0 \iint_S {d\over ds} P(x,x_1,x_2){\cal H}(x,x_1,s){\cal H}(x,x_2,s)\ dA \ ds \\
&&=- \int_{-\infty}^0 \iint_S P(x,x_1,x_2) {\partial {\cal H}\over \partial s}(x,x_1,s){\cal H}(x,x_2,s)\ dA \ ds
\\
&& -\int_{-\infty}^0 \iint_S P(x,x_1,x_2){\cal H}(x,x_1,s){\partial {\cal H}\over \partial s}(x,x_2,s)\ dA \ ds
\end{eqnarray*}
where $dA=dx_1dx_2$ and $S$ is the unit square in $x_1,x_2$ plane.
We plug in the dynamics of ${\cal H}$ to get
\begin{eqnarray*}
&&0=\iint_S P(x,x_1,x_2){\cal H}(x,x_1,0){\cal H}(x,x_2,0) \ dA \\
&&- \int_{-\infty}^0 \iint_S P(x,x_1,x_2)\left(-\frac{\partial^2 {\cal H}}{\partial x_1^2}(x,x_1,s)+\sum_{i=1}^p {\cal K}_i(x,s)C_i\delta(x_1-\zeta_i)\right)\\
&&\times {\cal H}(x,x_2,s)\ dA \ ds\\
&&-\int_{-\infty}^0 \iint_S P(x,x_1,x_2){\cal H}(x,x_1,s)\\
&& \times \left(-\frac{\partial^2 {\cal H}}{\partial x_2^2}(x,x_2,s)+ \sum_{i=1}^p {\cal K}_i(x,s)C_i\delta(x_2-\zeta_i) \right) \ dA \ ds
\end{eqnarray*}
then we integrate by parts twice to get
\begin{eqnarray*}
&&0=\iint_S P(x,x_1,x_2){\cal H}(x,x_1,0){\cal H}(x,x_2,0) \ dA \\
&&-\int_{-\infty}^0 \iint_S -\nabla^2 P(x,x_1,x_2){\cal H}(x,x_1,s){\cal H}(x,x_2,s)\ dA \ ds
\\
&&-\int_{-\infty}^0 \int_0^1 {\cal H}(x,x_2,s)\sum_{i=1}^p P(x,\zeta_i,x_2){\cal K}_i(x,s)C_i\ dx_2 \ ds
\\
&&-\int_{-\infty}^0 \int_0^1 {\cal H}(x,x_1,s) \sum_{i=1}^p P(x,x_1,\zeta_i){\cal K}_i(x,s)C_i\ dx_1 \ ds
\end{eqnarray*}
where $\nabla^2$ is the two dimensional Laplacian with respect to $x_1$ and $x_2$.
We add the right side of this identity to the estimation error variance (\ref{EE}) to get an equivalent
quantity to be minimized
\begin{eqnarray*}
&& \int_{-\infty}^0 \iint_S {\cal H}(x,x_1,s) B^2 \delta(x_1-x_2){\cal H}(x,x_2,s) \ dA \ ds\\
&&+ \sum_{i=1}^p \int_{-\infty}^0 {\cal K}_i(x,s) D^2_i {\cal K}_i(x,s)\ ds \\
&&-\iint_S P(x,x_1,x_2){\cal H}(x,x_1,0){\cal H}(x,x_2,0) \ dA \\
&&- \int_{-\infty}^0 \iint_S- \nabla^2 P(x,x_1,x_2){\cal H}(x,x_1,s){\cal H}(x,x_2,s)\ dA \ ds
\\
&&-\int_{-\infty}^0 \int_0^1{\cal H}(x,x_2,s) \sum_{i=1}^p P(x,\zeta_i,x_2){\cal K}_i(x,s)C_i\ dx_2 \ ds
\\
&&-\int_{-\infty}^0 \int_0^1 {\cal H}(x,x_1,s)\sum_{i=1}^p P(x,x_1,\zeta_i){\cal K}_i(x,s)C_i\ dx_1 \ ds
\end{eqnarray*}
For each $i=1,\ldots, p$ and each $x\in[0,1]$ we would like to choose $L_i(x,x_1)$ so that the time integrand of the quantity
to be minimized is a perfect square of the form
\begin{eqnarray*}
\iint_S \sum_{i=1}^p \left({\cal K}_i(x,s)-L_i(x,x_1){\cal H}(x,x_1,s)\right) D^2_i \left({\cal K}_i(x,s)-L_i(x,x_2){\cal H}(x,x_2,s)\right)\ dA
\end{eqnarray*}
Clearly the terms quadratic in ${\cal K}_i(x,s)$ match up so we compare terms bilinear in ${\cal K}_i(x,s)$ and ${\cal H}(x,x_2,s)$,
\begin{eqnarray*}
&&\int_0^1\sum_{i=1}^p {\cal K}_i(x,s)D^2_i L_i(x,x_2){\cal H}(x,x_2,s)\ dx_2\\
&&= \int_0^1 \sum_{i=1}^p {\cal H}(x,x_2,s)P(x,\zeta_i,x_2){\cal K}_i(x,s)C_i\ dx_2
\end{eqnarray*}
This will hold if
\begin{eqnarray*}
D^2_i L_i(x,x_2){\cal H}(x,x_2,s)&=& {\cal H}(x,x_2,s)P(x,\zeta_i,x_2)C_i
\end{eqnarray*}
for $i=1,\ldots,p$ so we define
\begin{eqnarray*}
L_i(x,x_1)&=&D_i^{-2} P(x,x_1,\zeta_i)C_i
\end{eqnarray*}
Then we compare terms bilinear in ${\cal H}(x,x_1,s)$ and ${\cal H}(x,x_2,s)$ and obtain
\begin{eqnarray*}
&&\iint_S {\cal H}(x,x_1,s)B^2\delta(x_1-x_2){\cal H}(x,x_2,s) \ dA\\
&&- \iint_S -\nabla^2 P(x,x_1,x_2){\cal H}(x,x_1,s){\cal H}(x,x_2,s)\ dA
\\
&&=\iint_S \sum_{i=1}^p L_i(x,x_1){\cal H}(x,x_1,s)D^2_iL_i(x,x_2){\cal H}(x,x_2,s)\ dA\\
&&=\iint_S \sum_{i=1}^p {\cal H}(x,x_1,s)P(x,\zeta_i,x_1)C^2_iD_i^{-2} P(x,\zeta_i,x_2){\cal H}(x,x_2,s)\ dA
\end{eqnarray*}
So we are looking for a weak solution to what we call the filter Riccati PDE,
\begin{eqnarray*}
0&=&- \nabla^2 P(x,x_1,x_2) - B^2\delta(x_1-x_2)
\\&&+\sum_{i=1}^p P(x,x_1,\zeta_i)C^2_i\ D_i^{-2} P(x,\zeta_i,x_2)
\end{eqnarray*}
We guess that $P(x,x_1,x_2)$ has an expansion
\begin{eqnarray*}
P(x,x_1,x_2)&=& \sum_{n=0}^\infty P^{n,n}(x) \phi_n(x_1)\phi_n( x_2)
\end{eqnarray*}
where $ \phi_n(x)$ are the orthonormal eigenfunctions of the Laplacian
under Neumann boundary conditions (\ref{olef}).
We plug this into filter Riccati PDE and we obtain for $n=0,1,2,\ldots$ the equations
\begin{eqnarray*}
0=\left(2n^2 \pi^2\right) P^{n,n}(x) -B^2 \delta_{0,n} +\sum_{i=1}^p C_i^2 \ D_i^{-2}\left(P^{n,n}(x)\right)^2
\phi_n( \zeta_i)^2
\end{eqnarray*}
One solution to these equations is $P^{n,n}(x)=0$ if $n>0$ and
\begin{eqnarray*}
P^{0,0}(x)&=& P^{0,0}\ =\ \sqrt{B^2\over \sum_{i=1}^p C_i^2 \ D_i^{-2}}
\end{eqnarray*}
So $P(x,x_1,x_2)=P^{0,0}$ and
the fact that it does not depend on $x$ is not surprising as the coefficient $B$
of the driving noise is constant. If the driving noise had some spatial variation
we suspect that $P(x,x_1,x_2)$ would vary with $x$.
Then $L_i(x,x_1) $ is a constant and
\begin{eqnarray*}
L_i(x,x_1)&=&L_i\ = \ D_i^{-2}P^{0,0}C_i\\
{\cal K}_i(x,s) &=& \int_0^1 L_i{\cal H}(x,x_1,s) \ dx_1\ = \ D_i^{-2}P^{0,0}C_i \int_0^1 {\cal H}(x,x_1,s) \ dx_1
\end{eqnarray*}
Since $P^{0,0}$ and $ C_i$ are both positive so is $L_i$ for $i=1,\ldots, p$.
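In code the stationary filter data are one line each (a Python sketch; the helper name is ours):
\begin{verbatim}
import numpy as np

def kalman_gains(B, C, D):
    # P^{0,0} = sqrt(B^2 / sum_i C_i^2/D_i^2) and L_i = D_i^{-2} C_i P^{0,0}
    C, D = np.asarray(C, float), np.asarray(D, float)
    P00 = np.sqrt(B**2 / np.sum(C**2 / D**2))
    return P00, C / D**2 * P00

P00, L = kalman_gains(B=1.0, C=[1.0, 1.0], D=[1.0, 1.0])
print(P00, L)   # sqrt(2)/2 and [sqrt(2)/2, sqrt(2)/2], the values of Example 3 below
\end{verbatim}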
Now ${\cal H}(x,x_1,s)$ satisfies the backward linear partial differential equation
\begin{eqnarray} \label{bpde}
\frac{\partial {\cal H}}{\partial s}(x,x_1,s)&=&- \frac{\partial^2 {\cal H}}{\partial x_1^2}(x,x_1,s)+\sum_{i=1}^p L_i {\cal H}(x,x_1,s)
\end{eqnarray}
subject to the terminal condition ${\cal H}(x,x_1,0)=\delta(x-x_1)$ and Neumann boundary conditions in both $x$
and $x_1$.
We assume that the solution to this PDE takes the form
\begin{eqnarray*}
{\cal H}(x,x_1,s)&=&\sum_{m,n=0}^\infty \gamma_{m,n}(s) \phi_m(x) \phi_n(x_1)
\end{eqnarray*}
where $\phi_n(x)$ are the orthonormal open loop eigenfunctions (\ref{olef}).
We plug this into (\ref{bpde}) and we get a sequence of ODEs,
\begin{eqnarray*}
{ d\over ds} \gamma_{m,n}(s) &=&\left( n^2\pi^2+ \sum_{i=1}^p L_i\right) \gamma_{m,n}(s)
\end{eqnarray*}
The terminal condition
$
{\cal H}(x,x_1,0)= \delta(x-x_1)
$
implies
\begin{eqnarray*}
\gamma_{m,n}(0)&=& \delta_{m,n}
\end{eqnarray*}
so
\begin{eqnarray*}
\gamma_{m,n}(s)&=&\delta_{m,n}\exp\left(\left( n^2\pi^2+ \sum_{i=1}^p L_i\right) s\right)
\end{eqnarray*}
and
\begin{eqnarray} \label{H}
{\cal H}(x,x_1,s)&=&\sum_{n=0}^\infty \gamma_{n,n}(s) \phi_n( x) \phi_n( x_1)
\end{eqnarray}
Notice that ${\cal H}(x,x_1,s)\to 0 $ as $s\to -\infty$.
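Note also that integrating (\ref{H}) in $x_1$ kills every mode but $n=0$ because $\int_0^1\phi_n(x_1)\,dx_1=0$ for $n\ge1$. Hence the optimal weighting of past data, ${\cal K}_i(x,s)=L_i\int_0^1 {\cal H}(x,x_1,s)\,dx_1=L_i\,\gamma_{0,0}(s)$, is independent of $x$ and fades exponentially into the past. A one-line Python sketch (gain values taken from Example 3 below):
\begin{verbatim}
import numpy as np

def K_weight(s, L_i, L_total):
    # K_i(x,s) = L_i * gamma_{0,0}(s) = L_i * exp((sum_j L_j) s),  s <= 0
    return L_i * np.exp(L_total * s)

print(K_weight(np.linspace(-5.0, 0.0, 6), L_i=np.sqrt(2)/2, L_total=np.sqrt(2)))
\end{verbatim}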
Recall the estimate of the state at time zero is
\begin{eqnarray*}
\hat{z}(x,0)&=& \int_{-\infty}^0 \sum_{i=1}^p {\cal K}_i(x,s)y_i(s)\ ds
\end{eqnarray*}
but this is a stationary estimator so the estimate of the state at time $t$ is
\begin{eqnarray*}
\hat{z}(x,t)&=& \int_{-\infty}^t \sum_{i=1}^p {\cal K}_i(x,s-t) y_i(s)\ ds\\
&=& \int_{-\infty}^t \int_0^1 \sum_{i=1}^p L_i {\cal H}(x,x_1,s-t) y_i(s)\ dx_1\ ds
\end{eqnarray*}
and
\begin{eqnarray*}
\frac{\partial \hat{z}}{\partial t} (x,t)&=& \sum_{i=1}^p {\cal K}_i(x,0)y_i(t)+ \int_{-\infty}^t \sum_{i=1}^p \frac{\partial {\cal K}_i}{\partial t}(x,s-t) y_i(s)\ ds\\
&=&\sum_{i=1}^p {\cal K}_i(x,0)y_i(t)+ \int_{-\infty}^t \int_0^1\sum_{i=1}^p L_i \frac{\partial {\cal H}}{\partial t}(x,x_1,s-t) y_i(s)\ dx_1 \ ds\\
&=&\sum_{i=1}^p {\cal K}_i(x,0)y_i(t )+ \sum_{i=1}^p \int_{-\infty}^t \int_0^1 L_i \frac{\partial^2 {\cal H}}{\partial x^2_1} (x,x_1,s-t)y_i(s)\ dx_1 \ ds \\
&&- \sum_{i=1}^pL_i\int_{-\infty}^t \sum_{j=1}^p L_j C_j {\cal H}( x,\zeta_j,s-t)
y_i(s)\ ds
\end{eqnarray*}
Notice
\begin{eqnarray*}
{\cal K}_i(x,0) &=& \int_0^1 L_i {\cal H}(x,x_1,0)\ dx_1\ = \ \int_0^1 L_i \delta(x-x_1)\ dx_1\ = \ L_i
\end{eqnarray*}
From (\ref{H}) we see that
\begin{eqnarray*}
\frac{\partial^2 {\cal H}}{\partial x^2_1} (x,x_1,s-t)&=&\frac{\partial^2 {\cal H}}{\partial x^2} (x,x_1,s-t)
\end{eqnarray*}
so
\begin{eqnarray*}
\frac{\partial \hat{z}}{\partial t} (x,t)&=& \sum_{i=1}^p L_iy_i(t )+ \int_{-\infty}^t \int_0^1\sum_{i=1}^p L_i \frac{\partial^2 {\cal H}}{\partial x^2_1} (x,x_1,s-t)y_i(s)\ dx_1 \ ds \\
&&- \sum_{i=1}^pL_i\int_{-\infty}^t \sum_{j=1}^p L_j C_j {\cal H}( x,\zeta_j,s-t)
y_i(s)\ ds\\
&=& \sum_{i=1}^p L_iy_i(t )+\frac{\partial^2 \hat{z}}{\partial x^2}(x,t)\\
&& -\sum_{j=1}^pL_j \int_{-\infty}^t \sum_{i=1}^p C_i{\cal H}(x,\zeta_j,s-t)y_i(s)\ ds\\
&=& \sum_{i=1}^p L_iy_i(t )+\frac{\partial^2 \hat{z}}{\partial x^2}(x,t) -\sum_{j=1}^pL_j \hat{z}(\zeta_j,t)
\end{eqnarray*}
Now we define
\begin{eqnarray*}
\hat{y}_i(t)&=& \hat{z}(\zeta_i,t)\\
\tilde{y}_i(t)&=&y_i(t)-\hat{y}_i(t)
\end{eqnarray*}
The quantities $\tilde{y}_i(t)$ are called the innovations.
So the dynamics of the optimal estimator is a copy of the original dynamics driven by the innovations
\begin{eqnarray*}
\frac{\partial \hat{z}}{\partial t} (x,t)&=&\frac{\partial^2 \hat{z}}{\partial x^2}(x,t) +\sum_{i=1}^pL_i \tilde{z}(\zeta_i,t)
\end{eqnarray*}
The error dynamics is
\begin{eqnarray} \label{errdyn}
\frac{\partial \tilde{z}}{\partial t} (x,t)&=&\frac{\partial^2 \tilde{z}}{\partial x^2}(x,t) -\sum_{i=1}^pL_i \tilde{z}(\zeta_i,t)
\end{eqnarray}
Because both $z(x,t)$ and $\hat{z}(x,t)$ satisfy Neumann boundary conditions so does $\tilde{z}(x,t)$.
From the form of the error dynamics we see that the closed loop eigenvalues $\eta_n$ and eigenfunctions $\theta_n(x)$ are weak solutions of the equations
\begin{eqnarray*}
\frac{\partial^2 \theta_n}{\partial x^2}(x) -\sum_{i=1}^p L_i\delta (x-\zeta_i) \theta_n(x)&=&\eta_n \theta_n(x)
\end{eqnarray*}
subject to Neumann boundary conditions.
It is easy to see that $\theta_0(x)=1$ and $\eta_0=-\sum_i L_i<0$ since $L_i>0$.
For $n>0$ we expect that on each subinterval $\zeta_i< x< \zeta_{i+1}$ the eigenfunctions are sinusoids
of a given frequency $\tau_n $. The corresponding eigenvalue is $-\tau_n^2 $.
The eigenfunctions $\theta_n(x)$ are continuous at the measurement points $\zeta_i$ but their derivatives jump,
\begin{eqnarray*}
\frac{\partial \theta_n}{\partial x} (\zeta_i^+)- \frac{\partial \theta_n}{\partial x} (\zeta_i^-)&=& L_i \theta_n(\zeta_i)
\end{eqnarray*}
Next we consider the point controlled and point measured system
\begin{eqnarray*}
\frac{\partial z}{\partial t}(x,t)&=& \frac{\partial^2 z}{\partial x^2}(x,t) +B v(t)\\
z(\xi_k^-,t)&=&z(\xi_k^+,t) \mbox{ for } k=2,\ldots, m-1\\
\beta_k u_k(t)&=&\frac{\partial z}{\partial x} (\xi_k^-,t) -\frac{\partial z}{\partial x} (\xi_k^+,t) \mbox{ for } k=1,\ldots, m
\\
y_i(t)&=&C_i z(\zeta_i,t)+ D_i w_i(t)\\
z(x,0) &=& z^0(x)
\end{eqnarray*}
We use the linear feedback on the state estimate to get the control inputs
\begin{eqnarray*}
\hat{u}(t)&=&\int_0^1 K(x) \hat{z}(x,t) \ dx
\end{eqnarray*}
where $\hat{z}(x,t)$ satisfies the Kalman filtering equation modified by the linear feedback on the state estimate,
\begin{eqnarray*}
\frac{\partial \hat{z}}{\partial t} (x,t)&=&\frac{\partial^2 \hat{z}}{\partial x^2}(x,t) +\sum_{i=1}^pL_i \tilde{y}_i(t)\\
\hat{z}(\xi_k^-,t)&=&\hat{z}(\xi_k^+,t) \mbox{ for } k=2,\ldots, m-1\\
\beta_k \hat{u}_k(t)&=&\frac{\partial \hat{z}}{\partial x} (\xi_k^-,t) -\frac{\partial \hat{z}}{\partial x} (\xi_k^+,t) \mbox{ for } k=1,\ldots, m
\\
\hat{z}(x,0) &=& \hat{z}^0(x)
\end{eqnarray*}
By linearity the error dynamics is still
\begin{eqnarray*}
\frac{\partial \tilde{z}}{\partial t} (x,t)&=&\frac{\partial^2 \tilde{z}}{\partial x^2}(x,t) -\sum_{i=1}^pL_i \tilde{z}(\zeta_i,t)\\
\tilde{z}(x,0)&=& \tilde{z}^0(x)\ =\ z^0(x)-\hat{z}^0(x)
\end{eqnarray*}
We can consider the combined system in coordinates $z(x,t)$ and $\hat{z}(x,t)$, but it is more useful to consider it in coordinates $z(x,t)$ and $\tilde{z}(x,t)$, because in the latter coordinates the combined dynamics is block upper triangular: the error dynamics does not depend on $z(x,t)$.
In these coordinates the
control is given by
\begin{eqnarray*}
\hat{u}(t)&=&\int_0^1 K(x) \left(z(x,t)- \tilde{z}(x,t)\right) \ dx
\end{eqnarray*}
so when $\tilde{z}(x,t)=0$ the
dynamics of $z(x,t)$ takes the form of the original system under full state feedback.
This shows that the spectrum of the LQG synthesis is the union of the spectrum of the original system under LQR full state feedback
and the spectrum of the error dynamics of the Kalman filter. So if these spectra are
in the open left half plane then the LQG synthesis is asymptotically stable: the rod goes to the desired temperature, $z(x,t)\to 0$, and the estimation
error goes to zero, $\tilde{z}(x,t)\to 0$.
\section{Example 3}
We consider a Linear Quadratic Gaussian synthesis for the heat equation. As in Example 2 we assume that there are three actuators, $m=3$, at $\xi_1=0,\ \xi_2=0.5, \ \xi_3=1$
with coefficients $\beta_1=1,\ \beta_2=2,\ \beta_3=1$ and the rest of the constants are as in Example 2.
We further assume that there are two sensors at $\zeta_1=0.25,\ \zeta_2=0.75$ with $C_1=C_2=1$. The coefficient of the driving noise is $B=1$ and
the coefficients of measurement noise are $D_1=D_2=1$.
Then
\begin{eqnarray*}
P^{0,0}&=& {\sqrt{2}\over 2}
\\
L_1&=& L_2\ = \ {\sqrt{2}\over 2}
\end{eqnarray*}
Then the zeroth order eigenfunction is $\theta_0(x)=1$ and the corresponding eigenvalue is $\eta_0=-\sqrt{2}$. For $n>0$ the eigenfunctions are of the form
\begin{eqnarray*}
\theta_n(x)&=&\left\{ \begin{array}{cc} a_1\cos \tau_n x +b_1 \sin \tau_n x& 0\le x\le 0.25\\
a_2\cos \tau_n x +b_2 \sin \tau_n x& 0.25 \le x\le 0.75\\
a_3\cos \tau_n x +b_3 \sin \tau_n x& 0.75\le x\le1
\end{array} \right.
\end{eqnarray*}
By symmetry $\theta_n(x)=\theta_n(1-x)$, so we only need to find $a_1,b_1,a_2,b_2$. Since $\theta_n(x)$ satisfies Neumann boundary conditions
at $x=0$, $b_1=0$ and we can take $a_1=1$.
On the interval $[0.25,0.75]$ the solution must be symmetric around $0.5$ so we make the change of coordinates $\bar{x} =0.5-x$ then
it must be of the form
\begin{eqnarray*}
\theta_n(x)\ =\ \bar{a} \cos \tau_n \bar{x} & =&\bar{a} \cos \tau_n (0.5-x)
\end{eqnarray*}
for $-0.25 \le \bar{x} \le 0.25$.
Since $\theta_n(x)$ is continuous at $x=\zeta_1=0.25$, which corresponds to $\bar{x}=0.25$, we get the condition
\begin{eqnarray*}
\cos {\tau_n \over 4}&=& \bar{a} \cos {\tau_n \over 4}
\end{eqnarray*}
so we conclude that $ \bar{a}=1$.
The derivative jumps at $x=\zeta_1=0.25$ so we have
\begin{eqnarray*}
2\tau_n\sin {\tau_n \over 4} &=& {\sqrt{2}\over 2} \cos{\tau_n \over 4}
\end{eqnarray*}
This leads to the equation
\begin{eqnarray*}
{\tau_n \over 4}&=& {\sqrt{2}\over 16} \cot{\tau_n \over 4}
\end{eqnarray*}
Let $\sigma_n ={\tau_n \over 4}$; then we need to solve
\begin{eqnarray*}
\sigma_n &=& {\sqrt{2}\over 16} \cot \sigma_n
\end{eqnarray*}
There is exactly one root $ \sigma_n$ of this equation between $(n-1)\pi$ and
$(n-1/2)\pi$ for each $n=1,2,3,\ldots$ so
\begin{eqnarray*}
4(n-1)\pi<\tau_n< 4(n-1/2)\pi
\end{eqnarray*}
and so the eigenvalues $\eta_n$ of the error dynamics satisfy
\begin{eqnarray*}
-16(n-1)^2\pi^2>\eta_n>-16(n-1/2)^2\pi^2
\end{eqnarray*}
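These roots are easy to compute numerically; the sketch below brackets each $\sigma_n$ on its branch and recovers $\tau_n$ and $\eta_n=-\tau_n^2$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Solve sigma_n = (sqrt(2)/16) * cot(sigma_n) on each branch and recover
# tau_n = 4 sigma_n and the eigenvalues eta_n = -tau_n**2.
c = np.sqrt(2) / 16

def f(sigma):
    return sigma - c / np.tan(sigma)

eps = 1e-9
for n in range(1, 5):
    lo = (n - 1) * np.pi + eps        # cot has a pole at (n-1) pi
    hi = (n - 0.5) * np.pi - eps      # cot vanishes at (n-1/2) pi
    sigma_n = brentq(f, lo, hi)
    tau_n = 4 * sigma_n
    print(n, sigma_n, tau_n, -tau_n**2)
\end{verbatim}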
We showed in Example 2 that the closed loop eigenvalues under full state
feedback are $\mu_j=-(4j+2)^2\pi^2$, so this Linear Quadratic Gaussian
synthesis is asymptotically stable.
\section{Conclusion}
We have explicitly derived an LQG synthesis for the heated/cooled rod under point actuation and point sensing. The key is the completing-the-square technique. We have already solved boundary control problems for the wave equation \cite{Kr21a} and the beam equation \cite{Kr21b}
using this technique, so we are confident that it can also be used to solve LQG synthesis problems for the wave and beam equations
under point actuation and point sensing.
\section{Acknowledgments}
The authors would like to thank Dr. Michael E. Birnbaum for fruitful discussion on systems-level TCR-antigen specificity. This work was supported by National Science Foundation (NSF) grant NSF PHY-2019745 (Center for Theoretical Biological Physics).
\section{Distribution of TCR-pMHC binding energy}\label{sec:PDF}
The TCR-pMHC binding energy $U(t, q)$ is the indicator of the affinity between a T cell and an antigen. When assuming the pairwise AAs interaction energies to be independent Gaussian random variables, $U(t, q)$ in \eqref{Etq} becomes a weighted sum of these variables with weights given by the contact map $\mathbb{W}$. Hence, $U(t, q)$ is also a normally distributed random variable, and since its mean is automatically zero, knowledge of the variance $\sigma^2_{tq}$ of its PDF allows us to fully characterize how $U(t, q)$ varies as we vary the particular realization of $\mathbb{E}$. The contact-map dependence of $U(t, q)$ has a two-fold impact on the variance of its PDF when compared to the case of the addition of equal variance random variables (as in the RICE approach from \cite{George2017}). On one hand, the total number of non-vanishing contacts $W_{ij}$ given by the contact map directly determines the number of random energies $E_{ij}$ contributing to $U(t, q)$, thus increasing $\sigma^2_{tq}$ as the number of non-vanishing $W_{ij}$'s increases. On the other hand, the particular repeat structure of AAs in the TCR sequence and in the pMHC sequence also influences $\sigma^2_{tq}$, as a particular pair of AAs that appears multiple times in the energy summation gives rise to a variance increase. In this section we explore how the variance of the PDF of $U(t, q)$ depends on the two aforementioned factors.
Before proceeding, we must discuss various statistical ensembles of interest here. So far, we have focused on varying the coefficient matrix, thus generating an ensemble of values for each specific $t,q$. However, we imagine that the biophysical problem is defined by a fixed $\mathbb{E}$, which may be chosen (as done here) in a random fashion but, as mentioned above, may be learned from the data as done in other work \cite{Lin2021}. Thus, we are actually interested in the distribution of binding energies as we vary either the peptide (fixing the TCR), the TCR (fixing the peptide), or both, as these are the distributions needed to determine the effects of negative selection. To see how to determine these distributions, we return to the basic equation
\begin{equation}\label{EnergyLoop}
U(t, q) = \sum_i^{k_t} \sum_j^{k_q} W_{ij} \cdot E_{t(i)q(j)},
\end{equation}
where we have limited ourselves to one class of MHC molecule and hence $U_c$ becomes an irrelevant constant. Also, we will assume for the purpose of our analysis that $W_{ij}$ is either 0 or 1; this is true for all but a very small number of possible pairs. Finally, we will take the distribution over AAs to be uniform, although it might be useful in future work to use the known AA distribution in the human proteome. With these assumptions, the mean value of $U(t,q)$ sampled over peptide sequence and/or TCR sequence constrained to have no repeats is just the sample mean of a number of values drawn from a mean-zero, variance-$\sigma^2$ Gaussian distribution. This sample mean is sharply peaked around zero. Similarly, the mean value of $U^2$ will be strongly peaked around the variance times the contact number $N_C$. Perhaps not surprisingly, these are the same answers we get when averaging over $\mathbb{E}$; in other words, as long as we average over sufficient numbers of sequence choices, the results for all choices of coefficient matrices are the same; see the SI (Section S8) for a more complete discussion.
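The following Monte Carlo sketch illustrates this self-averaging. It fixes a single randomly drawn symmetric $\mathbb{E}$, uses a hypothetical $0/1$ contact map in place of a crystal-structure map, and compares the sample mean and variance of $U(t,q)$ over random sequences with $0$ and $\sigma^2 N_C$ (here $\sigma = 1$).
\begin{verbatim}
import numpy as np

# Monte Carlo sketch of self-averaging: fix one random symmetric energy
# matrix E, draw many random sequences t, q, and compare the sample mean and
# variance of U(t,q) with 0 and sigma^2 * N_C (sigma = 1 here).  The 0/1
# contact map W is a toy stand-in for a crystal-structure map.
rng = np.random.default_rng(0)
n_aa, kt, kq = 20, 7, 7

A = rng.normal(size=(n_aa, n_aa))
E = np.triu(A) + np.triu(A, 1).T                 # symmetric, N(0,1) entries

W = (rng.random((kt, kq)) < 0.4).astype(float)   # hypothetical contact map
N_C = int(W.sum())

def U(t, q):
    return float((W * E[np.ix_(t, q)]).sum())

samples = np.array([U(rng.integers(n_aa, size=kt), rng.integers(n_aa, size=kq))
                    for _ in range(20000)])
# Repeated AA pairs push the variance slightly above sigma^2 * N_C.
print(samples.mean(), samples.var(), N_C)
\end{verbatim}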
Let us now extend this analysis to the more general case. We introduce the following notation: A pair repeat structure is denoted as $C_p = (l_1^{r_1}, l_2^{r_2}, \cdots, l_N^{r_N})$, with $\sum r_i \cdot l_i = N_C$, where $l_i$ denotes the number of times an amino acid pair is repeated in different contacts and $r_i$ denotes how many such $l_i$ repetitions there are. For example, for a total of 20 contacts, if there are three contacts with the same AA pair and two sets of two contacts with the same AA pair, this would be denoted as $C_p=(3,2^2,1^{13})$. An extension of the previous argument allows us to determine the most likely value of the mean energy and its variance, averaged over all possible peptide and TCR sequences that do not change the class. The mean is still zero and the variance now becomes
\begin{equation}\label{EqVariance}
\textup{Var}(C_p) = \sigma^2 \sum r_i l_i ^2.
\end{equation}
Again, this is exactly the same as the result obtained when averaging over energy coefficient matrices. A more precise version of this correspondence is presented in the SI (S5 and S6). To find the total variance, we then have to average over the different choices of $C_p$, weighted by their respective probabilities of occurrence under the assumed uniform distribution of residue choices.
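For the worked example above, \eqref{EqVariance} evaluates in a few lines:
\begin{verbatim}
# Variance of U for the worked repeat structure C_p = (3, 2^2, 1^13),
# via Var = sigma^2 * sum_i r_i * l_i**2 (here sigma = 1).
C_p = {3: 1, 2: 2, 1: 13}                 # {repeat length l_i: multiplicity r_i}
N_C = sum(r * l for l, r in C_p.items())  # 20 contacts
var = sum(r * l**2 for l, r in C_p.items())
print(N_C, var)                           # 20, 30: repeats inflate the variance
\end{verbatim}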
\subsection{Variance scales with the number of contacts}
It is clear from the previous analysis that the variance in the binding energy distribution increases with $N_C$, the total number of contacts. It is easy to see from the above that there are bounds on the total variance
\begin{equation}\label{EqVarianceBounds}
\sigma^2 N_C \leq \textup{Var}(U) \leq \sigma^2 N_C^2 .
\end{equation}
The lower bound comes from the case where all pairs are distinct whereas the upper bound arises from assuming that all contacts are the same AA pair, i.e. $C_p=(N_C)$. From the size of the AA alphabet $|\mathcal{A}|$, the total number of AA pairs (irrespective of ordering) is $M = \binom{|\mathcal{A}| + 1}{2}$. Now, we have just seen that the precise value of the variance depends on the exact repeat structure of the peptide ($q$) and TCR ($t$) AA sequences, together with the contact map. In the case where we wish to obtain the variance of the PDF obtained by varying both $t$ and $q$, we can obtain a useful approximation of this variance by ignoring the exact configuration of $\mathbb{W}$ and instead simply counting the number of times each of the $M$ AA pairs is selected with equal probability, where there are $N_C$ total opportunities. In this case, the number of times each AA pair is realized follows a multinomial distribution, and the variance can be calculated from the second moment of this distribution as
\begin{equation}\label{EqVarvsR}
\textup{Var}\left(U(t,q) | \mathbb{W}\right) \approx \sigma^2 \left[ \frac{1}{M} N_C^2 + \left( 1 - \frac{1}{M} \right) N_C \right].
\end{equation}
See the SI (Sections S5 and S6) for a detailed derivation. In figure \ref{fig:VarvsR} the variances computed by simulation for the CDR3$\alpha$-pMHC interfaces of 3QIB, 3QIU, 3QIW, and 5C0A (top row of Fig. \ref{fig:CMs}) are presented along with the predicted variance from \eqref{EqVarvsR}. As we can see, this approximation captures the basic dependence on the total number of contacts. In the SI (Fig. S5), we provide further evidence for this result by considering the effects of varying the cutoff used in the definition of the contact matrix.
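A direct simulation of the multinomial counting argument reproduces \eqref{EqVarvsR}; the sketch below works in units of $\sigma^2$:
\begin{verbatim}
import numpy as np

# Sketch of the multinomial counting argument behind the approximation:
# each of the N_C contacts selects one of M = 210 unordered AA pairs
# uniformly, and Var(U) = sigma^2 * E[sum_k n_k^2] (sigma = 1 here).
rng = np.random.default_rng(1)
M = 210
for N_C in (5, 10, 20, 40):
    counts = rng.multinomial(N_C, np.full(M, 1.0 / M), size=20000)
    var_sim = (counts.astype(float)**2).sum(axis=1).mean()
    var_pred = N_C**2 / M + (1 - 1 / M) * N_C
    print(N_C, round(var_sim, 2), round(var_pred, 2))
\end{verbatim}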
\subsection{Variance depends on the repeat structures of the TCR and pMHC AA sequences}
If we are looking for the distribution of energies for a fixed TCR sequence, there is no simple formula that can encompass the dependence of the variance on the exact TCR sequence and on the exact contact map. As already mentioned, we have to find the variance for different possible repeat structures and then weight them appropriately by their occurrence probability. Specifically,
\begin{equation}\label{VarRSGen}
\sigma_{t}^2 \,=\, \sum_{n=1}^{N_R} p_n \sigma_n^2.
\end{equation}
where $N_R$ is the total number of different possible repeat structures.
We would like to work out a specific and relatively simple example to illustrate how this works. To simplify the analysis, we focus on the 3QIB CDR3$\alpha$-pMHC contact map $\mathbb{W}_{\text{3QIB}}^{\alpha}$ in figure \ref{fig:CMs} (top left), and assume that the TCR is a constant sequence of a single repeated AA $t = (t_1,t_1,t_1...)$. Note that this makes labelling of repeat motifs dependent on the pMHC's primary sequence only. In $\mathbb{W}_{\text{3QIB}}^{\alpha}$, only 7 AAs in $t$ and 7 AAs in $q$ make significant contacts, so the effective lengths are $k_t = k_q = 7$.
We will break down the problem of computing the terms in this sum as follows: We will first focus on the probable configurations of the peptide by itself and consider how the different sites are chosen. Drawn from a $|\mathcal{A}| = 20$ AA alphabet, there are $N = 15$ different repeat configurations of length 7; when randomly generating AA sequences, the four most likely repeat configurations $C_{q,1} = (2, 1^5)$, $C_{q,2} = (1^7)$, $C_{q,3} = (2^2, 1^3)$, and $C_{q,4} = (3, 1^4)$ (in the section above, $C$ is the repeat structure of the TCR-pMHC pairing, whereas $C_{q,n}$ ($n = 1, \cdots, N$) here indicate the repeat structure only of the pMHC), cover about $p_c = 96.66 \%$ of the AA sequence space. A complete breakdown of these probabilities can be found in SI Table 1. We thus truncate the sum in \eqref{VarRSGen} to the pairings that can be obtained from these leading order structures.
Now, each peptide configuration can give rise to a set of different possible pairing structures, depending on the specific non-vanishing elements of the contact matrix. These then need to be averaged together (with proper weighting). This somewhat complicated calculation is presented in the SI (section S6) and is carried out by using the self-averaging property to allow for computing the average over different realizations of the energy coefficient matrix; no rounding to 0 or 1 for the values $W_{ij}$ is made in this calculation and the results to follow. Finally, we obtain $\sigma_{t} (p_c) = 9.7833 \sigma$, and extrapolating this value to approximate the full analytical value in \eqref{VarRSGen}, we get
\begin{equation*}
\sigma_{t} \approx \sqrt{\dfrac{1}{p_c}} \cdot \sigma_{t} (p_c) = 9.95 \sigma.
\end{equation*}
This estimate has a relative error of $0.6 \%$ as compared to the simulated value of the standard deviation, the blue plot in figure \ref{fig:VarPartitions}. The simulated PDFs related to the four most likely repeat structures are also shown in figure \ref{fig:VarPartitions}.
It is worth noting that in \eqref{VarRSGen} the contributions of higher values of variances are dominated by the even faster vanishing of the corresponding probabilities. For reference, the standard deviation for this contact map ranges from $\sigma_2 = 9.0761$ for $C_{q,2} = (1^7)$ to $\sigma_{15} = 21.4090$ for $C_{q,15} = (7)$; whereas the probabilities are $p_2 = 30.52 \%$ and $p_{15} = 1.56 \times 10^{-6} \%$, respectively.
\section{Conclusions}\label{sec:conclusions}
In this manuscript, we considered the role of a non-trivial contact map acting as a template for the explicit interactions between the TCR and pMHC AA sequences. This approach is a compromise between making an arbitrary rule as to how these sequences interact (for example, assuming only diagonal coupling as done in previous models) or using measured crystal structure for each considered pair, an obvious impossibility for anything resembling a large repertoire undergoing negative selection. The formulation isolates contributions from spatial conformation of CDR3 loops and pMHC complexes into these contact maps, while remaining features are encapsulated in energy coefficient matrices. Although all the analysis here was done using randomly generated energy matrices, serving as a baseline ``toy" model, the methodology is not restricted to such a choice, and other energy matrices such as the hydrophobicity-driven MJ matrix \cite{Miyazawa1985, Miyazawa1996}, or data-driven matrices \cite{Lin2021} can be used instead.
We observed that the inclusion of contact maps gave rise to several features impacting the variance of the TCR-pMHC binding energy: a density-related one, as the number of non-vanishing contacts correlates with increased variance; and a topology-related one, in which the repeat structure of the AAs in CDR3-loops' and in pMHC-complexes' sequences also skews the variance, with additional repeats correlating with increased variance. These changes in variance also affect negative selection recognition probabilities, with larger variances driving higher recognition probabilities. The proposed generalization is therefore useful for characterizing the distributional behavior of TCR systems with a relatively fixed contact structure. Given that even at fixed MHC allele there are likely to be several distinct spatial conformations that can give rise to effective binding, a full treatment of the repertoire should include finding the set of templates that gives rise to the largest possible binding for the sequences under consideration. This extension will be reported on elsewhere.
Another influence of the topology of the contact map manifests itself in the recognition probability of point-mutated antigens by T cells that have been negatively selected. Some pMHC AAs have a higher number of non-vanishing contacts with TCR AAs; mutations at such sites make the antigen appear more foreign to the T cells than mutations of pMHC AAs with fewer non-vanishing contacts, and this results in a higher recognition probability for high-contact-site point-mutants. Conversely, this notion can provide at least some information about which mutations in a previously detected peptide could prevent the detection of an evolved virus by memory T cells generated in an earlier infection. Data to this effect are now becoming available in the context of COVID-19-specific T cells in never-infected individuals resulting from prior responses to other endemic coronaviruses \cite{braun2020}.
As seen here, the problem of dissecting the generation and functioning of the post-selection T cell repertoire is incredibly complex, even utilizing a number of vastly simplifying assumptions. The full problem requires attention to biases in the generation of the na\"ive repertoire \cite{murugan2012}, inclusion of a set of different MHC alleles for different individuals, a better handle on the statistical properties of the negative selection training set, and of course the full range of molecular biophysics effects that contribute to binding energy and on-off kinetics. These cannot all be included in any useful theoretical model. By isolating and improving our understanding of the effects of specific contact geometries, we hope to build intuition for how different aspects of this complex system contribute to different functional aspects of the full T cell arm of adaptive immunity.\\
\section{Contact Map Based Random Energy Model}\label{sec:model}
Our goal is to analyze a model of negative selection in which the TCR-pMHC interaction exhibits antigen-specificity of T cells dependent both on the AA occurrence and on the spatial conformation of TCR and pMHC, while retaining enough simplicity so that it can be studied analytically and with feasible computations. We represent a TCR $t$ via its CDR3 loops in the form of a sequence of $k_t$ AAs, $t = \{ t(i) \}_{i=1}^{k_t}$, and a pMHC $q$ as a sequence of $k_q$ AAs, $q = \{ q(j) \}_{j=1}^{k_q}$. A symmetric energy coefficient matrix of size $20 \times 20$, $\mathbb{E} = (E_{nm})$, has entries $E_{nm}$ that represent the pairwise binding coefficients between AAs $n$ and $m$. The binding energy contributions are then assumed to be the product of a contact map $\mathbb{W} = (W_{ij})$, containing the weights $W_{ij}$ for the interaction between $t$ and $q$ in a given structure, and the coefficient corresponding to the amino acid interaction. In detail,
\begin{equation}\label{Etq}
U(t, q) \,=\, U_c \;+\; \sum_{i, j} W_{ij} \cdot E_{t(i)q(j)},
\end{equation}
where $U_c$ represents the contribution of the TCR's CDR1 and CDR2 complexes interacting with the MHC molecule, as discussed in \cite{Kosmrlj2008, Kosmrlj2009, Chakraborty2010, Kosmrlj2010}.
This form of the binding energy in \eqref{Etq} explicitly separates the effects on CDR3-pMHC interaction due to spatial configuration from the effects due to the rest of the pair-dependent factors, assigning the former ones to $\mathbb{W}$ and coarsely accounting for the latter ones in $\mathbb{E}$. The particular choices for the contact map $\mathbb{W}$ will depend on the specific TCR-pMHC being used as a template. Also, this formulation does not pre-suppose any specific choice for $\mathbb{E}$. We discuss in detail specific choices of $\mathbb{E}$ and $\mathbb{W}$ in the sections below.
\subsection{Contact maps}\label{sec:ContactMaps}
Crystal structures of TCRs bound to pMHCs show a variety of spatial configurations. Each one of these can be thought of as defining a binding template which can be used to determine the energy of a set of possible pairs. In general, we expect there to be a small number of possible templates, as a specific template would presumably be valid for a subset of all pairs; even then, we must necessarily ignore the small structural changes seen between the same TCR-pMHC systems that differ e.g. by a single AA mutation \cite{Cole2016, Newell2011, Sethi2013, Ting2020}. We expect, based on a recent computational study \cite{Lin2021}, that this approach will be reasonable if we stick to a fixed MHC allele, as structures with different alleles can look very different. We will see this directly in Fig. \ref{fig:ContactMaps} below. In the calculations reported in this paper, we typically restrict ourselves to one template.
To derive a contact map from a crystal structure, we utilize the associative memory, water mediated, structure and energy model (AWSEM) \cite{Davtyan2012}, developed in the context of protein folding. We use the position of C$_\beta$ (C$_\alpha$ in the case of glycine) atoms to characterize the position of the residues of the AAs in both the TCRs and pMHCs, and we use AWSEM's negative-sigmoid switching function as the screening weight $W_{ij}$ in computing the interaction energy
\begin{equation}\label{Wij}
W_{ij} (r_{ij}) = \frac{1}{2} \, \left( 1 - \tanh{[\eta \cdot (r_{ij} - r_{\text{max}})]} \right).
\end{equation}
Here, $r_{ij}$ is the distance separating the residues at positions $i$ and $j$, $r_{\text{max}}$ acts like a cutoff and is the inflection point of $W_{ij}$ after which the function vanishes rapidly for $r_{ij} > r_{\text{max}}$, and $\eta$ controls how rapidly this vanishing occurs. We use crystal structures (see Fig. \ref{fig:ContactMaps}a) of TCR bound to pMHC deposited in the Protein Data Bank to determine a list of AAs in the TCR $t$ and in the pMHC $q$, and to calculate each distance $r_{ij}$, $i = 1, \cdots, k_t$, $j = 1, \cdots, k_q$. We then compute the corresponding weights $W_{ij}$ from \eqref{Wij} and construct the contact map $\mathbb{W} = (W_{ij})$. Given that both CDR3$\alpha$ and CDR3$\beta$ loops of the TCR interface with the peptide, we construct a separate contact map for each of these CDR3-loop-pMHC interactions.
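For concreteness, the weight in \eqref{Wij} can be evaluated as in the sketch below; the distances there are hypothetical placeholders for the $r_{ij}$ extracted from a crystal structure:
\begin{verbatim}
import numpy as np

# Sketch: the switching weight W_ij evaluated on hypothetical C-beta
# distances (Angstroms); real r_ij values come from a crystal structure.
r_max, eta = 9.5, 1.0

def weight(r):
    return 0.5 * (1.0 - np.tanh(eta * (r - r_max)))

r = np.array([[4.2, 8.9, 13.5],
              [7.1, 9.5, 11.0]])
print(np.round(weight(r), 3))  # ~1 well inside r_max, 0.5 at r_max, ~0 beyond
\end{verbatim}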
To show how the proposed screening weight given by \eqref{Wij} derives from different TCR-pMHC crystal structures, we choose $r_{\text{max}} = 9.5 \; \si{\angstrom}$ and $\eta = 1 \; \si{\angstrom}^{-1}$ and focus on four test cases. For the first three test cases we use data from Newell et al. \cite{Newell2011} who present three TCR-pMHC crystal structures; first, of the 2B4 TCR bound to the MCC/I-E\textsuperscript{k} complex (PDB ID 3QIB); second, of the 226 TCR bound to the MCC/I-E\textsuperscript{k} complex (PDB ID 3QIU), and; third, of the 226 TCR bound to the MCC-p5E/I-E\textsuperscript{k} complex (PDB ID 3QIW). For the fourth case, we follow Cole et al. \cite{Cole2016} who studied the 1E6 TCR bound to HLA-A02 carrying MVW peptide (PDB ID 5C0A). For simplicity, we will refer to specific crystal structures by their PDB IDs unless further details need to be more precisely mentioned about the TCR or the pMHC. Note that 3QIB and 3QIU represent different TCRs bound to the same pMHC complex, whereas 3QIU and 3QIW represent the same TCR bound to two pMHCs that differ by a single AA mutation in the peptide sequence. In addition, 3QIB, 3QIU, and 3QIW share the same mouse MHC class-II restriction and indeed the same I-E$^\textup{k}$ MHC-II allele, whereas the 5C0A TCR-pMHC system is presented on the human HLA A$^*$02 MHC class-I allele.
As defined here, contact maps are sensitive to the choice of distance cutoff. Clearly, the number of contacts in a contact map for a given crystal structure increases with increasing $r_{\text{max}}$ values. The contact map of 3QIB's CDR3$\alpha$-pMHC interface is plotted at four different $r_{\text{max}}$ values, from 6.5 to 9.5 \si{\angstrom} in 1 \si{\angstrom} increments, while keeping $\eta = 1 \, \si{\per\angstrom}$ fixed (see SI Fig. S1). The contact profile gradually forms with ever-increasing number of contacts from about 5 AA pairs in contact at $r_{\text{max}} = 6.5 \, \si{\angstrom}$, to about 22 AA pairs in contact at $r_{\text{max}} = 9.5 \, \si{\angstrom}$. For the remainder of this paper, all contact maps are calculated with $r_{\text{max}} = 9.5 \, \si{\angstrom}$ and $\eta = 1 \, \si{\per\angstrom}$.
The contact maps in figure \ref{fig:CMs} correspond to CDR3$\alpha$-pMHC interfaces (top row) and CDR3$\beta$-pMHC interfaces (bottom row) from crystal structures 3QIB, 3QIU, 3QIW, and 5C0A. The contact profiles of CDR3$\alpha$-pMHC are different from the CDR3$\beta$-pMHC contact profiles, as these parts of the TCR contact different residues on the displayed peptide. The contact maps consistently represent the physical proximity of a particular CDR3 loop to a specific portion of the pMHC, as can be seen in 3QIB's crystal structure shown in figure \ref{fig:Crystal}, wherein the CDR3$\alpha$ loops primarily contact AAs 2-8, whereas CDR3$\beta$ loops primarily contact AAs 7-12. The detailed differences among the first three contact maps do capture slight changes in position-dependent interfacing, even when comparing contact maps for the same TCR bound to two pMHCs diverging by peptide single-AA mutation. Different weights of, for example, position pairs $(i,j)$ = $(4, 4)$, $(4, 8)$, $(6, 4)$ and $(7, 6)$ are observed when comparing contact maps of 3QIU and 3QIW in figure \ref{fig:CMs} (coordinates in AA pairs are labeled as $(i, j)$ for $t(i)$ and $q(j)$). But, clearly, from a more coarse-grained perspective, these three can be considered to fall within one template. Conversely, the fourth map is very different, as should be expected because it is based on a different MHC molecule. Our conclusion is that we can use a single map for a class of possible pairings and thereby learn about a significant set of contributors to the T cell repertoire. We include more contact maps from other crystal structures in the SI to further support our findings (Figs. S2-4).
In the remainder of this paper, we will explore the segment of the repertoire that depends on one template and its corresponding contact map, and determine how the features of that map affect repertoire properties.
\subsection{Energy matrix}\label{sec:EnergyMatrix}
As discussed above, we propose for the recognition of an antigen by a T cell an affinity-based criterion in which the TCR-pMHC binding energy $U(t, q)$ given in \eqref{Etq} equates to recognition (evasion) if $U(t, q)$ is above (below) a particular energy threshold $U_n$. Thus, we need to specify a symmetric energy coefficient matrix $\mathbb{E} = (E_{nm})$. The first example of matrix choice was one based primarily on hydrophobicity, as developed by Miyazawa-Jernigan (MJ) \cite{Miyazawa1985} and used in studies of thymic selection \cite{Kosmrlj2008, Chen2018}. More recent efforts have focused on developing immune-specific energy matrices \cite{Woelke2011}. A recent study \cite{Lin2021} used machine learning to derive the optimal matrix separating strong from weak binders within a single contact map template; this optimization approach would lead to a different such matrix for each assumed template. Here, our interest is in the role of the contact map and so we have opted for the expedient choice of a random model where all matrix elements are chosen to be independent, mean-zero, unit-variance normally distributed random variables, $E_{mn} \sim \mathcal{N} (\mu = 0, \sigma^2 = 1)$. Note the assumption that the $n$-$m$ interaction coefficient has the same value independently of the AAs' location in the TCR or the pMHC sequences. Thus, our model is distinct from the RICE approach \cite{George2017} which assumed that the spatial location of the amino acid directly affected the energy coefficient.
\section{Introduction}\label{sec:intro}
One of the major components of the human immune system consists of a large repertoire of T lymphocytes (or T cells). Each T cell carries a particular T cell receptor (TCR) capable of binding to a specific antigen in the form of a peptide (p) displayed by major histocompatibility complex (MHC) molecules (shortened as pMHC) on the surface of host cells \cite{Ding2012, Robinson2016, Schumacher2015, Verdegaal2016}. The activation of the T cell response depends on the strength \cite{Das2015}, and possibly kinetics \cite{Francois2016}, of this TCR-pMHC binding \cite{Alam1996, Krogsgaard2005}. A typical repertoire of a healthy individual consists of $\sim10^7$ distinct clonotypes, each with a unique TCR \cite{Arstila1999}. A growing body of research has been focused on understanding the systems-level interactions between the T cell repertoire and its recognition of peptide landscapes indicating foreign or cancer threats.
A critical feature of a properly functioning immune system is its ability to discriminate healthy cells of the host from those infected by pathogens, reacting to the latter ones while tolerating the former ones. In order to achieve the aforementioned discrimination, T cells must survive a rigorous selection process in the thymus before being released into the bloodstream. The first step in this process, called positive selection, ensures that TCRs in thymocytes (developing T cells) can adequately interface with pMHCs. Positive selection occurs in the thymic cortex, where cortical epithelial cells present self-peptides to thymocytes. As long as a thymocyte is able to interface with some presented pMHC, it receives a survival signal and migrates inward to the thymic medulla. This step ensures that the thymocyte has a properly functioning TCR, a rare event as only 5\% of thymocytes survive this step. In the inner medulla, they encounter thymic medullary epithelial cells. Here, surviving immature T cells are again presented with a diverse collection of $\sim10^4$ self peptides \cite{DeBoer1993, Yates2014} representing a variety of organ types. T cells binding too strongly to any self peptide die off in a process known as negative selection \cite{Detours1999, Klein2014}.
As already pointed out, a key ingredient in the aforementioned process as well as in any subsequent recognition of a foreign antigen by a T cell is the molecular interaction of the TCR and the pMHC molecules. Crystal structures of TCR bound to pMHC show that the interface of the TCR-pMHC interaction is complex, with TCR complementarity determining regions 1 and 2 (CDR1 and CDR2, respectively) primarily binding to the MHC molecule, whereas the CDR3 complex mainly contacts the peptide in the MHC's cleft \cite{Lanzarotti2011, Newell2011}. The CDR3 complex is comprised of two loops, CDR3$\alpha$ and CDR3$\beta$; Baker et al. showed these loops can exhibit spatial and molecular flexibility during the TCR-pMHC binding process \cite{Baker2012}; moreover, the same TCR can bind to different pMHCs \cite{Colf2007}, for example to a pMHC with point-mutated peptide \cite{Newell2011}. This can involve subtle changes in the CDR3 complexes' spatial conformation. It is clear then that the intricacies of the TCR binding to the pMHC as a dynamic process remain to be fully understood.
In lieu of a complete first-principles understanding, several groups have pioneered the idea of employing relatively simple models so as to get a sense of how negative selection affects the T cell repertoire. In the original set of models, TCRs and peptides were represented as strings of amino acids (AAs) which interacted in a manner that did not incorporate any structural information. In one such set of models, each AA in the pMHC binding pocket interacted with and only with the complementary AA in the TCR CDR3 complex. This interaction was described by either one or a set of 20x20 matrices \cite{Kosmrlj2008, Kosmrlj2009, Chakraborty2010, George2017, Wortel2020}. These works indeed have provided a framework for describing how selection shapes the discrimination ability of the T cell repertoire, and have been applied to understanding HIV control \cite{Kosmrlj2010} and for assessing the detectability of cancer neoantigens \cite{George2017}. In a more recent study, Chen et al. \cite{Chen2018} introduced nonuniform interaction profiles that translated into some AAs in the TCRs having a more pronounced effect in pMHC recognition, but did not consider how these non-uniformities could vary between TCRs, as shown by existing crystal structures.
In this paper, we introduce the idea of a crystal-structure dependent contact map that weights the binding energies based on the distance separating the residues on the AAs. A contact map can be thought of as a specific template for a class of TCR-pMHC interactions, which then will yield an actual binding energy once we specify the specific AA strings on the two molecules. To focus attention on the role of the contact map, we use a simple random energy model which assigns a fixed random energy to each of the possible AA pairs. Our model, described in detail below, can be thought of as a more realistic version of the Random Interaction Between Cell Receptor and Epitope (RICE) model \cite{George2017}, in which contact map effects were simply assumed to decorrelate pair energies at different sites along a uniform binding surface.
The paper is structured as follows. In section \ref{sec:model}, we present the model description along with how crystal-structure dependent contact maps are created and also discuss the choice of energy matrix in the model. In Section \ref{sec:PDF}, we analyze how the variance of the TCR-pMHC binding energy PDF is impacted by the choice of contact map, including the roles of the total number of contacts and the topology of the contact map. We then present two applications of the model that are affected by the choice of contact map: in Section \ref{sec:NegSelSP} we focus on the negative selection recognition probability, and in Section \ref{sec:PMRecProb} we discuss the point-mutant recognition probability by T cells that have survived negative selection. We present our closing remarks in Section \ref{sec:conclusions}.
\section{Recognition probability of point-mutated antigens by negatively-selected T cells}\label{sec:PMRecProb}
One of the motivations to model negative selection is to understand how the rejection of T cells that detect self-peptides negatively impacts the chances that T cells can detect tumor neo-antigens; after all, these neo-antigens are typically just one mutated amino acid away from a self-peptide sequence. We therefore turn to the probability that a T cell ($t$) that has survived negative selection is able to recognize an antigen ($\tilde{q}$) whose primary sequence differs by only one AA from a self-peptide ($q$) included in the negative-selecting repertoire ($\mathcal{Q}$). We call such an antigen a point-mutant. In general, this probability for fixed T cell is defined via
\begin{equation}\label{PMRecProbGen}
\tilde{D}_t (N_q) = \mathbb{P} \left[ U(t, \tilde{q}) \geq U_n | \max\{U(t, \mathcal{Q}) \} < U_n \right],
\end{equation}
where we have averaged over all possible point-mutants with non-trivial contacts. Here $\mathcal{Q}$ denotes the selecting repertoire of $N_q$ peptides, one of which is $q$. In the limiting case where $t$ has not undergone negative selection ($N_q = 0$), equation \eqref{PMRecProbGen} reduces to the recognition probability of a randomly generated antigen. Another extreme case corresponds to $t$ negatively trained only on $q$ ($N_q = 1$) where the point mutant position has $k$ contacts, resulting in
\begin{align}\label{PMRecProb1}
\tilde{D}_t (1) &=1-F_R(U_n)^{-1}\bigg[\int_{\mathbb{R}} F_{R-k}(U_n-x)F_k(x)f_k(x)dx \nonumber \\
&+ \int_\mathbb{R} \int_{[x,\infty)}F_{R-k}(U_n-\tilde{x})f_k(\tilde{x})f_k(x)d\tilde{x}dx\bigg],
\end{align}
where $F_k(x)$ and $f_k(x)$ denote the distribution function and density function of mean-zero normal random variables with variance $\sigma^2k$, and $R$ is the total number of contacts (see SI Section S7 for a full derivation). We expect that for relatively small $N_q$, it is unlikely that any of the peptides in the training set will be close enough to $q$ or $\tilde{q}$ to help distinguish the two binding energies; hence $\tilde{D}_t(1)$ should be a reasonable approximation to $\tilde{D}_t(N_q)$. This agreement should decrease as $N_q$ increases. The accuracy of this approximation is explored in SI Fig. S8.
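Equation \eqref{PMRecProb1} is straightforward to evaluate numerically. The sketch below uses illustrative values $R=22$, $k=3$, and $U_n=8$, none of which are taken from the simulations reported here:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Numerical sketch of the N_q = 1 point-mutant recognition probability.
# R (total contacts) and k (contacts at the mutated site) are illustrative.
sigma, R, k, U_n = 1.0, 22, 3, 8.0

F = lambda x, m: norm.cdf(x, scale=sigma * np.sqrt(m))
f = lambda x, m: norm.pdf(x, scale=sigma * np.sqrt(m))

term1, _ = quad(lambda x: F(U_n - x, R - k) * F(x, k) * f(x, k),
                -np.inf, np.inf)

def inner(x):
    val, _ = quad(lambda xt: F(U_n - xt, R - k) * f(xt, k), x, np.inf)
    return val * f(x, k)

term2, _ = quad(inner, -np.inf, np.inf)
print(1.0 - (term1 + term2) / F(U_n, R))
\end{verbatim}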
More generally, we ran a set of simulations with varying sizes $N_q = \{ 10^2, 10^3, 10^4 \}$ to assess the detection of $\tilde{q}$ by a T cell trained to evade $q$. We used the CDR3$\alpha$-pMHC interface of 3QIB (top left of Fig. \ref{fig:CMs}) as contact map for the simulations for simplicity. Figure \ref{fig:PMRP-Nq} shows the simulated point-mutant recognition probabilities as a function of T cell negative selection survival probability at three different sizes of the selecting repertoire. At lower (resp. higher) values of negative selection survival probability, i.e. when negative selection is more (resp. less) stringent during T cell maturation, a mature T cell's sense of an antigen resembling self-antigens is rather strict (resp. lenient), therefore, almost (resp. hardly) any deviation from this criterion caused by point-mutations triggers recognition of the point-mutant by the T cell; this results in higher (resp. lower) point-mutant recognition probability at lower (resp. higher) T cell negative selection survival probability.
Next, we compare the results at different $N_q$. This is a bit tricky, because fixing the negative selection probability leads to different thresholds $U_n$ at different training set sizes. This accounts for a large part but not all of the difference in the curves seen in Fig. \ref{fig:PMRP-Nq}; see SI Fig S8. By increasing the size of the negative-selecting repertoire $N_q$, a mature T cell's sense for self-antigen resemblance broadens; thus leading to higher tolerance (less detectability) for point-mutants at higher $N_q$ values.
Another feature impacting point-mutant recognition probability that stems from incorporating contact maps into the model pertains to the site in the pMHC sequence of the mutated AA. As can be seen in the contact maps in Fig. \ref{fig:CMs}, some pMHC AAs make more significant contacts with TCR AAs than other pMHC AAs. In the case of 3QIB's CDR3$\alpha$-pMHC contact map (top left of Fig. \ref{fig:CMs}), the number of non-vanishing contacts for a particular pMHC AA ranges from 1 (sparse-contact site) to 5 (high-contact site), with an average of 3.06 TCR AAs contacted by the 7 pMHC AAs with non-vanishing contacts. Accordingly, a point-mutant $\tilde{q}$ with its mutation occurring in a sparse-contact site (resp. high-contact site) bears higher (resp. lower) resemblance with the non-mutant $q$ for a T cell. This effect clearly should impact the point-mutant recognition probability, with high-contact site point-mutants having higher recognition probability than their sparse-contact counterparts, and point-mutants with randomly chosen mutation sites having recognition probability somewhere in between the aforementioned two. We investigated this idea by running three simulations as explained in the paragraph above, but with the additional constraint that in each round of simulations the mutated site was: one, always a high-contact site; two, always a sparse-contact site; and three, randomly chosen. The negative-selection repertoire was fixed at $N_q = 10^4$. The point-mutant recognition probabilities of these simulations are shown in Fig. \ref{fig:PMRP-Contacts} and exhibit agreement with the expected behavior.
The aforementioned RICE framework cannot adequately distinguish high contact sites from sparse ones on either the TCR or pMHC amino acid sequences. RICE's predictions for neo-epitope recognition probability therefore represent fixed estimates for a typical `one-contact' mutation. On the other hand, this new approach enables a quantitative estimate of this obvious dependence. This aligns with previous strategies calling for mutations to target TCR-facing peptide amino acids; see for example \cite{Chowell2015, Shang2009}.
\section{Negative selection recognition probability}\label{sec:NegSelSP}
\begin{figure}
\includegraphics[width=0.45\textwidth]{figures/RecProb.png}
\caption{\small Negative selection recognition probability as a function of the survival energy threshold for T cells auditioning for negative selection. All curves involving the use of contact maps are generated from simulations sharing the same parameters apart from the contact maps. The prediction of the RICE model (brown), the identity matrix giving a diagonal contact map case (black) and the limiting case where all AAs in the CDR3 loop interact with all AAs in the pMHC (yellow) are included for comparison. Plots are averaged over the different random energy matrices in use, and shaded areas indicate the corresponding standard error of the mean.}
\label{fig:NegSelRecProb}
\end{figure}
Negative selection trains the na\"{\i}ve T cell repertoire to avoid host cells by eliminating T cells that bind too strongly to any of the self-peptides. We now wish to consider the effects on the post-selection repertoire due to incorporating crystal-structure motivated contact maps into the negative selection process.
We focus on determining the negative selection recognition probability as a function of the energy survival threshold $U_n$. For a T cell to survive negative selection, it must not bind strongly (i.e., it must have $U < U_n$) to any of the self-selecting pMHCs it encounters during selection. The survival probability is thus the probability that the maximum of the TCR-pMHC binding energies, $\max \{U(t, q_i)\}_{i=1}^{N_q}$, resulting from a T cell $t$ undergoing negative selection against a repertoire $\mathcal{Q} = \{ q_i \}_{i=1}^{N_q}$ of $N_q$ self-pMHCs, is below the threshold $U_n$~\cite{George2017}; its complement, the recognition probability, is a monotonically decreasing function that gradually transitions from 1 to 0 with ever increasing values of $U_n$. For a fixed TCR, the scale of the transition correlates with a typical value of $\sigma^2_{t}$. Averaging this over different TCRs will give rise to a width that strongly correlates with the number of contacts, as suggested by the phenomenological relationship given above and verified in the SI.
We simulate negative selection for various CDR3-pMHC interfaces (contact maps), using fixed randomly generated TCR and pMHC repertoires and 16 zero-mean, unit-variance randomly generated energy matrices $\mathbb{E}$. In figure \ref{fig:NegSelRecProb}, we show the recognition probability averaged over energy matrices $\mathbb{E}$ for seven different simulations, four of them using contact maps 3QIB, 3QIU, 3QIW, and 5C0A, along with a $7 \times 7$ identity-matrix contact map, the original RICE model, and a $7 \times 7$ contact map with all unit entries, simulating the scenario where all AAs in $t$ interact with all AAs in $q$. At a given $U_n$, the recognition probability is higher for those contact maps with higher $\sigma^2$, giving a higher probability for a pair of $t$ and $q$ to bind strongly enough and thus for $t$ to face deletion. Interestingly, the data in the figure show directly that, similar to what we argued earlier, the recognition probability curve for a single realization is quite accurately given by the average over energy matrices.
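For reference, a minimal end-to-end sketch of such a negative-selection simulation (with a hypothetical contact map in place of the crystal-structure maps, and far smaller sample sizes than in the figure) reads:
\begin{verbatim}
import numpy as np

# End-to-end sketch of negative selection: the recognition probability is the
# chance that a random TCR binds at least one of N_q random self-peptides
# with U >= U_n.  A toy 0/1 contact map stands in for the crystal-structure
# maps, and the sample sizes are kept small for illustration.
rng = np.random.default_rng(3)
n_aa, kt, kq, N_q = 20, 7, 7, 1000

A = rng.normal(size=(n_aa, n_aa))
E = np.triu(A) + np.triu(A, 1).T
W = (rng.random((kt, kq)) < 0.4).astype(float)
peptides = rng.integers(n_aa, size=(N_q, kq))

def max_U(t):
    return max(float((W * E[np.ix_(t, q)]).sum()) for q in peptides)

maxima = np.array([max_U(rng.integers(n_aa, size=kt)) for _ in range(300)])
for U_n in (10.0, 15.0, 20.0, 25.0):
    print(U_n, (maxima >= U_n).mean())
\end{verbatim}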
In their classic study of vacuum decay via bubble nucleation, Coleman
and De Luccia (CDL)~\cite{Coleman:1980aw} discovered a surprising
feature of decays from a Minkowski vacuum to an anti-de-Sitter (AdS)
vacuum or from one AdS vacuum to another. If a potential has two
vacua of differing energy, decay from the higher energy false vacuum
to the lower energy true vacuum is always possible, if gravitational
effects are ignored. However, if the higher vacuum has either zero or
negative energy, such decays are quenched if the two vacua are
sufficiently close in energy. In the thin-wall approximation of CDL,
bubble nucleation is only possible if
\begin{equation}
\sigma < \frac{2}{\sqrt{3\kappa}}\left(\sqrt{|U_{\rm tv}|}
-\sqrt{|U_{\rm fv}|} \right) \, .
\label{ineq}
\end{equation}
Here $\sigma$ is the surface tension in the bubble wall, $U_{\rm fv}$ and
$U_{\rm tv}$ are the energy densities of the false and true vacua, and
$\kappa=8\pi G_N$.
As in the non-gravitational case, the CDL thin-wall approximation
requires that the two vacua be close in energy. In addition, one must
require that the radius of curvature of the wall can be treated as
being constant as one moves through the bubble wall.
Recently it was shown~\cite{Masoumi:2016pqb} that there are regions of
parameter space that allow a new type of thin-wall regime in which the
latter requirement is violated. In this case not only does the CDL
derivation of Eq.~(\ref{ineq}) fail, but also its very formulation
becomes ambiguous, because the surface tension is not well-defined.
In this paper we will show how this inequality can be generalized to
this new thin-wall regime. Furthermore, we show how these bounds for
the thin-wall cases can be seen as special cases of a more general
bound, applicable even to bounce solutions that are in no sense
thin-wall.
We also discuss the case where the parameters of the theory are taken
to the boundary beyond which nucleation is quenched. As the boundary
is approached, the bubble radius at nucleation increases without
bound. When the critical values of the parameters are actually
achieved, the bounce solution is absent. In its stead there is a
static planar domain wall~\cite{Vilenkin:1981zs,Ipser:1983db}. Such
walls have been constructed as BPS solutions in supergravity
theories~\cite{Cvetic:1992bf,Cvetic:1992st,
Cvetic:1996vr,Cvetic:1993xe,Ceresole:2006iq}, but they can also
arise as solutions that only possess what has been termed ``fake
supersymmetry''~\cite{Freedman:2003ax,DeWolfe:1999cp,Skenderis:2006jq}.
We will describe how this happens in our approach. We will also
recall the related work of Abbott and
Park~\cite{Abbott:1985qa,Park:1986xa} connecting the existence of
bounces to the vacuum stability results of
Boucher~\cite{Boucher:1984yx}.
The remainder of this manuscript is organized as follows. In
Sec.~\ref{CDL} we review the CDL formalism, including their thin-wall
approximation. In Sec.~\ref{beyond} the new thin-wall regime is
described and the generalization of Eq.~(\ref{ineq}) to this new
regime is derived. In Sec.~\ref{AllBound} we derive the more general
bound that applies to all bounces. In Sec.~\ref{critical} we discuss
the approach to the critical quenching limit where the Euclidean
bounce disappears and a static planar domain wall appears, and make
connections to supersymmetry. Section~\ref{conclude} summarizes our
results and comments on the extension to theories with multiple scalar
fields. There is an appendix that addresses some special issues that
arise when the false vacuum is Minkowskian.
\section{The CDL formalism}
\label{CDL}
We consider a theory with a real scalar field $\phi$ governed by a
potential $U(\phi)$ that has two metastable vacua at $\phi_{\rm tv}$
and $\phi_{\rm fv}$. The values of the potential at these vacua
satisfy $U_{\rm tv} < U_{\rm fv} \le 0$.
Thus, the higher false vacuum can be either Minkowski
or AdS, while the true vacuum is AdS.\footnote{Note that an AdS vacuum
can correspond to a local maximum of $U(\phi)$, provided that the
Breitenlohner-Freedman bound is respected.} The AdS vacua have
characteristic lengths given by
\begin{equation}
\ell_{\rm fv} = \left(\frac{\kappa}{3} |U_{\rm fv}| \right)^{-1/2}
= \left(-\frac{\kappa}{3} U_{\rm fv} \right)^{-1/2}
\label{ellDef}
\end{equation}
and similarly for $\ell_{\rm tv}$.
Following CDL, we seek bounce solutions of the
Euclidean field equations. Making the standard assumption of O(4)
symmetry, we can write the Euclidean metric in the form
\begin{equation}
ds^2 = d\xi^2 + \rho(\xi)^2 \, d\Omega_3^2 \, .
\end{equation}
For the cases we are considering, decays from a Minkowski or AdS
vacuum, $\rho(\xi)$ has a single zero and $\xi$ runs from 0 to $\infty$.
The bounce thus has $R^4$ topology, in contrast with the de Sitter
bounces that are topologically four-spheres.
The Euclidean action can then be written in the form\footnote{The
Gibbons-Hawking boundary term~\cite{Gibbons:1976ue}~ does not appear
here because it is exactly canceled by the surface term from the
integration by parts that removes the $\rho''$ that appears in the
curvature scalar $R$. In fact, the tunneling rate is unaffected by
the inclusion or omission of the boundary term, because its
contributions to the bounce action and the false vacuum action are
equal, and so cancel in the tunneling exponent $B$~\cite{Masoumi:2016pqb}.}
\begin{equation}
S= 2\pi^2 \int_0^\infty d\xi \, \left\{ \rho^3
\left[\frac12\phi'^2 +U(\phi)\right] -\frac{3}{\kappa}
\left(\rho \rho'^2 +\rho\right) \right\}
\label{action}
\end{equation}
and a bounce must satisfy
\begin{equation}
\phi'' + \frac{3\rho'}{\rho}\, \phi' = \frac{dU}{d\phi} \, ,
\label{phieq}
\end{equation}
\begin{equation}
\rho'^2 = 1 +\frac{\kappa}{3}\rho^2
\left[\frac12\phi'^2-U(\phi)\right] \, ,
\label{rhoeq}
\end{equation}
subject to the boundary conditions
\begin{equation}
\phi'(0) = 0 \, ,\qquad \phi(\infty)=\phi_{\rm fv} \, ,
\qquad \rho(0)=0 \, ,
\end{equation}
where primes denote derivatives with respect to $\xi$.
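Although no closed-form bounce is available in general, Eqs.~(\ref{phieq}) and (\ref{rhoeq}) lend themselves to a standard undershoot/overshoot shooting method. The sketch below uses a hypothetical quartic potential, not one considered in this paper, together with a numerical guard on the square root in Eq.~(\ref{rhoeq}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Undershoot/overshoot shooting sketch for the O(4) bounce equations, using a
# hypothetical quartic potential with a Minkowski false vacuum at phi = +1 and
# an AdS true vacuum at phi = -1 (depth eps).  Not a potential from the paper.
kappa, lam, eps = 0.03, 1.0, 0.4
U  = lambda p: lam * (p**2 - 1)**2 / 4 - (3 * eps / 4) * (p**3 / 3 - p) - eps / 2
dU = lambda p: (p**2 - 1) * (lam * p - 3 * eps / 4)

def rhs(xi, y):
    phi, dphi, rho = y
    radicand = 1 + (kappa / 3) * rho**2 * (0.5 * dphi**2 - U(phi))
    drho = np.sqrt(max(radicand, 0.0))      # numerical guard
    return [dphi, dU(phi) - 3 * (drho / rho) * dphi, drho]

def overshoots(phi0, xi_max=60.0):
    xi0 = 1e-8
    sol = solve_ivp(rhs, (xi0, xi_max), [phi0, 0.0, xi0],
                    rtol=1e-10, atol=1e-12)
    return bool(np.any(sol.y[0] > 1.0))     # phi passes beyond the false vacuum

# Bisect the release point phi(0) between the true vacuum and the barrier.
# If no release point overshoots, the bounce is absent and the decay is
# gravitationally quenched, as discussed in the text.
lo, hi = -1.0, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if overshoots(mid) else (lo, mid)
print("release point phi(0) close to", lo)
\end{verbatim}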
Equations~(\ref{phieq}) and ~(\ref{rhoeq})
imply the useful equation
\begin{equation}
\rho''= - \frac{\kappa}{3}\,\rho\left[\phi'^2 +U(\phi)\right] \, .
\label{ddrhoeq}
\end{equation}
We now note that $\rho(\xi)$ is a
monotonically increasing function. To establish this, note first that
the boundary conditions imply that $\rho'(0)=1$. Requiring that the
bounce approach the pure false vacuum solution at large $\xi$ implies
that $\rho$ must asymptotically increase with $\xi$ either linearly
(for a Minkowski false vacuum) or exponentially (for an AdS false
vacuum). If $\rho$ were not monotonic between these limits, it would
have a local minimum at some finite $\bar\xi$. This would require that
$\rho'(\bar\xi)=0$ and $\rho''(\bar\xi) > 0$. However, this cannot be,
since Eq.~(\ref{rhoeq}) shows that $U(\phi)$ must be positive at any
zero of $\rho'$, while Eq.~(\ref{ddrhoeq}) implies that $\rho''$ can
only be positive if $U(\phi)$ is negative.
We will find it useful to rewrite some of these results in terms of a
Euclidean pseudo-energy
\begin{equation}
E = \frac12\phi'^2 - U(\phi) \, .
\label{Edef}
\end{equation}
Because of the $\phi'$ ``friction'' term in Eq.~(\ref{phieq}), this is not
conserved, but instead obeys
\begin{eqnarray}
E' &=& - \frac{3 \rho'}{\rho}\, \phi'^2 \cr
&=& - \phi'^2\sqrt{\frac{9}{\rho^2} + 3\kappa E }
\label{Eprime}
\end{eqnarray}
with the second line following with aid of Eq.~(\ref{rhoeq}).
We just showed that $\rho$ is a monotonically
increasing function of $\xi$. It then follows that $E$ is
monotonically decreasing from an initial maximum
$E(0) < |U_{\rm tv}|$ to an asymptotic minimum $E(\infty) =
|U_{\rm fv}|$.
The tunneling exponent $B$ is obtained by subtracting the action
of the homogeneous false vacuum solution from that of the bounce.
For configurations that satisfy Eq.~(\ref{rhoeq}) the
action can be rewritten as
\begin{eqnarray}
S &=& 4\pi^2 \int_0^\infty d\xi\, \left[\rho^3 U(\phi)
- \frac{3}{\kappa}\rho \right] \cr
&=& 4\pi^2 \int_0^\infty d\rho \, \frac{1}{\rho'} \left[\rho^3 U(\phi)
- \frac{3}{\kappa}\rho \right] \, .
\end{eqnarray}
The actions of the bounce and the false vacuum are both
divergent, so we must regulate the integrals.
Thus, we define
\begin{equation}
S(L) = 4\pi^2 \int_0^L d\rho\, \frac{1}{\rho'} \left[
\rho^3U(\phi) - \frac{3}{\kappa} \, \rho\right]
\end{equation}
and obtain a finite value for
\begin{equation}
B = \lim_{L\to\infty} \left[S_{\rm bounce}(L)
- S_{\rm fv}(L) \right] \, .
\label{regularizedB}
\end{equation}
In particular, for an AdS false vacuum,
with $\phi=\phi_{\rm fv}$
everywhere,
\begin{equation}
\rho_{\rm fv} = \ell_{\rm fv} \sinh(\xi/\ell_{\rm fv}) \, .
\label{rhoFV}
\end{equation}
Integrating the action density gives
\begin{equation}
S_{\rm fv}(L) = A_{\rm fv}(0,L)
\end{equation}
where
\begin{eqnarray}
A_{\rm fv}(\rho_1,\rho_2) &=& 4\pi^2 \int_{\rho_1}^{\rho_2} d\rho\, \frac{1}{\rho'}
\left[ \rho^3U_{\rm fv} - \frac{3}{\kappa} \, \rho\right] \cr
&=& -\frac{4\pi^2}{\kappa} \ell_{\rm fv}^2 \left[
\left(1+ \frac{\rho_2^2}{\ell_{\rm fv}^2}\right)^{3/2}
-\left(1+ \frac{\rho_1^2}{\ell_{\rm fv}^2}\right)^{3/2}
\right] \, .
\end{eqnarray}
\subsection{The CDL thin-wall approximation}
In the CDL thin-wall approximation the bounce solution is divided into
three parts: an exterior region of pure false vacuum, an interior
region of pure true vacuum, and a thin wall that separates the two.
For such a configuration we can write
\begin{equation}
B(\rho) = B_{\rm exterior}(\rho)
+B_{\rm interior}(\rho)+B_{\rm wall}(\rho)
\end{equation}
with $\rho$ being the curvature radius of the wall. In the false
vacuum exterior region the actions of the bounce and the false vacuum
cancel completely, and so $B_{\rm exterior} =0$. The contribution in
the interior region is the difference of true- and false-vacuum terms,
\begin{equation}
B_{\rm interior} = A_{\rm tv}(0,\rho) - A_{\rm fv}(0,\rho) \, .
\end{equation}
Finally, we have the contribution from the wall, which can be written in
the form
\begin{equation}
B_{\rm wall} = 2\pi^2 \rho^3 \sigma
\label{CDLwall}
\end{equation}
where the surface tension $\sigma$ is given by the flat-spacetime
expression\footnote{The absence of gravitational corrections in the
CDL expression for the surface tension will be justified in
Sec.~\ref{beyond}.}
\begin{equation}
\sigma = 2 \int_{\rm wall} d\xi
\left[U(\phi_{\rm bounce}) - U_{\rm fv} \right]
\label{sigmaDef}
\end{equation}
and the integration over $\xi$ is restricted to the wall
region\footnote{More precisely, CDL replace $U$ in the integral
by a function $U_0$ that has minima at $\phi_{\rm tv}$ and $\phi_{\rm fv}$
and is equal to $U_{\rm fv}$ at both minima. In the thin-wall limit
the effect of this replacement is higher order.}.
It is crucial here that the field profile in the wall, and hence
$\sigma$, are to a good approximation independent of $\rho$, a
consequence of the fact that the bounce radius is much greater than
the thickness of the wall. This approximation becomes better as the
difference between the true and false vacuum energies decreases, not
because the wall gets thinner (it doesn't), but because the bounce
gets bigger. Indeed, one might term this the ``large-bounce
approximation''.
Note that Eq.~(\ref{CDLwall}) implicitly assumes that $\rho$ is
essentially constant as one moves through the wall. The CDL thin-wall
analysis is only valid if this is the case. In Sec.~\ref{beyond} we
will consider thin-wall configurations for which this assumption
fails.
The bounce is obtained by requiring that the wall radius be a stationary
point $\bar\rho$ of $B(\rho)$. Setting $dB/d\rho = 0$ leads to
\begin{eqnarray}
\sigma &=& \frac{2}{\kappa}
\left(\sqrt{ \frac{1}{\ell^2_{\rm tv}} + \frac{1}{\bar\rho^2} }
- \sqrt{ \frac{1}{\ell^2_{\rm fv}} + \frac{1}{\bar\rho^2} } \right)\cr
&<& \frac{2}{\kappa} \left( \frac{1}{\ell_{\rm tv}}
-\frac{1}{\ell_{\rm fv}} \right) \, ,
\label{ineq2}
\end{eqnarray}
with the bound being approached in the limit $\bar\rho \to \infty$.
Using Eq.~(\ref{ellDef}) to rewrite the inequality on the second line
yields the bound in Eq.~(\ref{ineq}).
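For concreteness (an illustrative sketch of ours, with made-up parameters), the stationarity condition can be solved numerically for $\bar\rho$; as $\sigma$ approaches the bound from below the stationary radius runs away to infinity and nucleation is quenched:
\begin{verbatim}
# Solve dB/drho = 0 for the CDL thin-wall exponent,
# B(rho) = B_interior(rho) + 2 pi^2 rho^3 sigma.
import numpy as np
from scipy.optimize import brentq

kappa, ell_tv, ell_fv = 0.1, 1.5, 2.0   # true vacuum deeper: ell_tv < ell_fv
sigma_max = 2.0/kappa*(1.0/ell_tv - 1.0/ell_fv)
sigma = 0.8*sigma_max                   # below the bound: a bounce exists

def dB(rho):                            # dB/drho up to a factor 6 pi^2 rho^2
    return sigma - 2.0/kappa*(np.sqrt(1/ell_tv**2 + 1/rho**2)
                              - np.sqrt(1/ell_fv**2 + 1/rho**2))

print(brentq(dB, 1e-3, 1e6))            # diverges as sigma -> sigma_max
\end{verbatim}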
\section{Thin-wall bounces beyond CDL}
\label{beyond}
Reference~\cite{Masoumi:2016pqb} examined tunneling in the more general case where
the true and false vacua are not close in energy and the conditions
for CDL's thin-wall approximation are not met. It was found that as the
mass scales in the potential are increased, making gravitational
effects stronger, a new type of thin-wall regime emerges. More
specifically, for any given potential one can define a quantity $\beta$ as
the ratio of a mass scale in the potential to the Planck
mass. Gravitational effects become stronger as $\beta$ is increased.
Eventually, as $\beta$ approaches a critical value, the bounce
radius tends to infinity. In the limit the bounce solution
disappears, tunneling is completely quenched, and the false vacuum is
stabilized.
In this new thin-wall regime the scalar field profiles are
qualitatively similar to those in the CDL thin-wall bounces. There is
an interior region, $0<\xi<\xi_1$, that is approximately pure true
vacuum, an exterior region, $\xi_2 < \xi < \infty$, that is almost pure
false vacuum, and a narrow transition region, or wall, that separates
the two, with the wall thickness\footnote{This definition of the wall
thickness differs from that used in~\cite{Masoumi:2016pqb}, which only
included regions where $U(\phi) > U_{\rm fv}$.} $\Delta\xi = \xi_2
-\xi_1$ being small compared to $\xi_1$. However, they differ from
their CDL counterparts in that $\rho(\xi)$ grows considerably as one
passes through the wall, and so cannot be approximated as being
constant.
As with the CDL thin-wall approximation, it is convenient to write the
tunneling exponent as the sum of three terms, each of which is the
difference between a bounce action term and a corresponding false
vacuum term. In order to be consistent
with the form of the long-distance regulator of the action integrals,
Eq.~(\ref{regularizedB}), the corresponding regions of the bounce and the false
vacuum must be defined by values of $\rho$, rather than $\xi$.
Thus, if the wall in the bounce solution runs between $\xi_1$ and $\xi_2$,
then the corresponding false vacuum region is bounded by
$\rho_1 \equiv \rho(\xi_1)$ and $\rho_2 \equiv \rho(\xi_2)$.
With this understanding, we obtain for the interior region, $\rho<\rho_1$,
\begin{eqnarray}
B_{\rm interior} &=& A_{\rm tv}(0,\rho_1) - A_{\rm fv}(0,\rho_1)
\cr &=& - \frac{4\pi^2}{\kappa} \left\{ \ell_{\rm tv}^2
\left[\left(1+ \frac{\rho_1^2}{\ell_{\rm tv}^2}\right)^{3/2}
-1\right]
-\ell_{\rm fv}^2
\left[\left(1+ \frac{\rho_1^2}{\ell_{\rm fv}^2}\right)^{3/2}
-1\right] \right\} \, ;
\end{eqnarray}
i.e., the CDL result with $\rho =\rho_1$.
In the exterior region the actions of the bounce and the false vacuum
exactly cancel, so $B_{\rm exterior} = 0$.
In the wall region we have
\begin{eqnarray}
B_{\rm wall} &=& 4\pi^2\int_{\rho_1}^{\rho_2}d\rho \left\{ \frac{1}{\rho_b'}
\left[\rho^3 U(\phi_b) - \frac{3}{\kappa} \rho \right]
- \frac{1}{\rho_{\rm fv}'}
\left[\rho^3 U_{\rm fv} - \frac{3}{\kappa} \rho \right] \right\} \cr
&=&
4\pi^2\int_{\xi_1}^{\xi_2} d\xi \,
\left[\rho^3 U(\phi_b) - \frac{3}{\kappa} \rho \right]
- A_{\rm fv}(\rho_1,\rho_2) \, .
\label{WallContrib}
\end{eqnarray}
The subscripts in the first line indicate that, in the first term,
$\rho'$ and the argument of the potential are to be evaluated on the
bounce solution, while in the second term they are evaluated on the
pure false vacuum solution.
This expression for $B_{\rm wall}$ should reduce to the CDL result in
the limit where $\Delta \rho = \rho_2- \rho_1$ is small. To verify
this, we write the false vacuum contribution to
Eq.~(\ref{WallContrib}) as
\begin{equation}
- A_{\rm fv}(\rho_1,\rho_1+\Delta\rho)
= \frac{4 \pi^2}{\kappa} (3 \rho_1)\sqrt{1+
\frac{\rho_1^2}{\ell_{\rm fv}^2}} \, \Delta\rho
+ O[(\Delta\rho)^2] \, .
\end{equation}
Now $\Delta\rho = \rho'(\xi_1)\Delta \xi$. In the false
vacuum, $\rho'= \sqrt{1 + \rho^2/\ell_{\rm fv}^2}$. Using these
facts, we obtain
\begin{eqnarray}
- A_{\rm fv}(\rho_1,\rho_1+\Delta\rho) &=&
\frac{4 \pi^2}{\kappa} (3 \rho_1)
\left(1+ \frac{\rho_1^2}{\ell_{\rm fv}^2} \right) \Delta \xi
+ O[(\Delta\xi)^2] \cr
&=& 4 \pi^2 \left( \frac{3\rho_1}{\kappa} - \rho_1^3 U_{\rm fv}
\right)\Delta \xi + O[(\Delta\xi)^2] \, .
\end{eqnarray}
Combining this result with the contribution from the bounce, and
working to first order in $\Delta\xi$, we recover the CDL result, with
the surface tension given by Eq.~(\ref{sigmaDef}). Note that this
justifies CDL's use of the flat-spacetime expression for the surface
tension.
Let us now return to the more general case, with $\rho_2-\rho_1$ not
assumed to be small. We can no longer approximate $\rho$ as being
constant through the wall. One consequence is that the identification
of a surface tension becomes problematic. One usually defines surface
tension in terms of an energy per unit area (or action per unit
hypersurface area). Because $\rho(\xi)$ grows in the wall, the area
of the outer surface of the wall is larger than that of the inner
surface of the wall. Which, if either, should be used? In fact, it
is not even obvious that the wall action can be written as the product
of an area and a radius-independent factor.
To answer these questions we need to examine the form of these new
thin-wall solutions in more detail. The scalar field at the center of
the bounce, $\phi(0)$, is very close to $\phi_{\rm tv}$. The
field remains close to $\phi_{\rm tv}$ until $\xi \approx \xi_1$, so for
the interior region, $\xi \lesssim \xi_1$, we have, analogously to
Eq.~(\ref{rhoFV}),
\begin{equation}
\rho \approx \ell_{\rm tv} \sinh(\xi/\ell_{\rm tv}) \, .
\end{equation}
If gravitational effects are made stronger by increasing $\beta$,
$\xi_1$ increases and $\rho_1$ grows exponentially.
In the near critical regime the growth of $\rho$ in the interior
region is such that at $\xi_1$ the first term on the right-hand side
of Eq.~(\ref{rhoeq}) can be neglected. If $\rho_1 \gg \ell_{\rm fv}$
this remains true throughout the wall, and beyond. We can then write
\begin{equation}
\frac{\rho'}{\rho}
= \sqrt{\frac{\kappa}{3}} \sqrt{\frac12 \phi'^2 - U(\phi)}
\label{drhoOverRhoApprox}
\end{equation}
so that Eq.~(\ref{phieq}) becomes
\begin{equation}
\phi'' + \sqrt{3\kappa} \sqrt{\frac12 \phi'^2 - U(\phi)} \, \,
\phi' = \frac{dU}{d\phi} \, .
\label{phiOnlyEq}
\end{equation}
Note that $\rho$ does not appear in this equation. Hence the profile
of $\phi(\xi)$ in the wall is independent of $\rho$.
Furthermore, integration of Eq.~(\ref{drhoOverRhoApprox}) gives
\begin{equation}
\rho(\xi) = \rho_1 \,e^{G(\xi)}
\end{equation}
where
\begin{equation}
G(\xi) = \sqrt{\frac{\kappa}{3}}
\int_{\xi_1}^\xi d\xi'\,\sqrt{\frac12 \phi'(\xi')^2 - U\big(\phi(\xi')\big)}
\label{rhoExp}
\end{equation}
is also independent of $\rho$.
This allows us to rewrite Eq.~(\ref{WallContrib}) as
\begin{equation}
B_{\rm wall} = 4\pi^2 \int_0^{\ln(\rho_2/\rho_1)} dG\, \left\{\frac{1}{G_b'}
\left[\rho_1^3 U(\phi_b) \, e^{3G}
- \frac{3}{\kappa} \rho_1 \, e^{G}\right]
-\sqrt{\frac{3}{\kappa}}\frac{1}{\sqrt{-U_{\rm fv}}}\left[\rho_1^3 U_{\rm fv} e^{3G}
- \frac{3}{\kappa} \rho_1 e^G \right]
\right\} \, .
\end{equation}
In the limit of large bounce radius ($\rho_1 \gg \ell_{\rm fv}$), the
terms cubic in $\rho_1$ dominate. Keeping only these, we have
\begin{equation}
B_{\rm wall} = 4\pi^2 \rho_1^3\sqrt{\frac{3}{\kappa}}
\int_0^{ \ln(\rho_2/\rho_1)} dG\, e^{3G} \left[
\frac{U(\phi_b)}{\sqrt{\frac12 {\phi'}_b^2-U(\phi_b)}}
+ \sqrt{- U_{\rm fv}} \right] \, .
\end{equation}
This suggests that we write
\begin{equation}
B_{\rm wall} = 2 \pi^2 \rho_1^3 \tilde \sigma
\end{equation}
where $2 \pi^2 \rho_1^3$ is the area of the inner surface of the wall
and
\begin{equation}
\tilde\sigma = \sqrt{\frac{12}{\kappa}}
\int_0^{\ln(\rho_2/\rho_1)} dG\, e^{3G} \left[
\frac{U(\phi_b)}{\sqrt{\frac12 {\phi'}_b^2-U(\phi_b)}}
+ \sqrt{- U_{\rm fv}} \right]
\label{sigmaTilde}
\end{equation}
can be viewed as a generalization of the CDL surface tension
$\sigma$.\footnote{In the CDL expression for the surface tension,
Eq.~(\ref{sigmaDef}), the integrand is everywhere positive so
$\sigma$ is manifestly positive. This is not the case for
$\tilde\sigma$. Indeed, the integrand in Eq.~(\ref{sigmaTilde}) is
negative in the lower part of the integration range and positive in
the upper part. In the next section we will show that this
expression for $\tilde \sigma$ is a special case of a more general
expression that is manifestly positive.}
(Note that, like $\sigma$, it is independent of $\rho$.) With this
definition, the total expression for $B$ takes the same form as in the
CDL thin-wall limit, but with the replacements $\bar\rho \to \rho_1$
and $\sigma \to \tilde\sigma$. The line of reasoning that led to
Eq.~(\ref{ineq2}) and then to Eq.~(\ref{ineq}) now leads
to
\begin{equation}
\tilde\sigma < \frac{2}{\sqrt{3\kappa}}\left(\sqrt{|U_{\rm tv}|}
-\sqrt{|U_{\rm fv}|} \right) \, .
\label{ineq3}
\end{equation}
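This regime is easy to explore numerically. In the following sketch (ours; the potential and all parameters are invented) we integrate Eq.~(\ref{phiOnlyEq}) out of the true vacuum along its unstable mode and bisect on the gravitational coupling $\kappa$: weak coupling overshoots $\phi_{\rm fv}$, strong coupling turns the field around before it gets there, and at the critical coupling the profile just connects the two vacua. There the tension integral $\int d\xi\,\phi'^2$, which in this regime equals $\tilde\sigma$ as shown in Sec.~\ref{AllBound} below, should approach the bound of Eq.~(\ref{ineq3}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

lam, a, eps, U0 = 1.0, 1.0, 0.2, -0.05
def U(p):  return lam/8*(p*p - a*a)**2 + eps*(p - a)/(2*a) + U0
def dU(p): return lam/2*p*(p*p - a*a) + eps/(2*a)
phi_tv = brentq(dU, -1.5, -0.5)          # deeper (true) vacuum
phi_fv = brentq(dU,  0.5,  1.5)

def shoot(kappa):
    c  = np.sqrt(3*kappa*(-U(phi_tv)))   # friction coefficient at the vacuum
    d2 = lam/2*(3*phi_tv**2 - a*a)       # U''(phi_tv)
    mu = 0.5*(-c + np.sqrt(c*c + 4*d2))  # unstable mode at the true vacuum
    def rhs(x, y):
        E = 0.5*y[1]**2 - U(y[0])
        return [y[1], dU(y[0]) - np.sqrt(3*kappa*E)*y[1]]
    hit  = lambda x, y: y[0] - phi_fv    # overshoot: reaches phi_fv
    turn = lambda x, y: y[1]             # undershoot: phi' changes sign
    hit.terminal = turn.terminal = True
    d = 1e-6
    return solve_ivp(rhs, (0, 400), [phi_tv + d, mu*d],
                     events=[hit, turn], rtol=1e-10, max_step=0.2)

lo, hi = 0.01, 1.0                       # lo overshoots, hi does not
for _ in range(50):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if len(shoot(mid).t_events[0]) else (lo, mid)
s = shoot(lo)                            # near-critical wall profile
tension = np.trapz(s.y[1]**2, s.t)
bound = 2/np.sqrt(3*lo)*(np.sqrt(-U(phi_tv)) - np.sqrt(-U(phi_fv)))
print(lo, tension, bound)                # tension ~ bound at criticality
\end{verbatim}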
\section{A bound for all bounces}
\label{AllBound}
We have obtained upper bounds on the surface tension for both the
thin-wall approximation of CDL and the generalized thin-wall regime
of~\cite{Masoumi:2016pqb}. However, thin-wall bounces of either type are
special cases. There are bounce solutions that are not in any sense
thin-wall, including some for which it is difficult to even define a
surface tension. This raises the question of whether there is a more
general bound that applies to all bounces and that reduces to
Eqs.~(\ref{ineq}) and (\ref{ineq3}) in the appropriate limits.
We now show that there is. To begin, we recall the
definition of the pseudo-energy, Eq.~(\ref{Edef}), and the expression
for its derivative, Eq.~(\ref{Eprime}). It follows from the latter
that
\begin{equation}
\frac{d\sqrt{E}}{d\xi} = - \frac{\sqrt{3\kappa}}{2}
\sqrt{1 + \frac{3}{\kappa E \rho^2} } \, \, \phi'^2 \, .
\label{sqrtderiv}
\end{equation}
Integrating this, we find that
\begin{eqnarray}
\sqrt{E(0)} - \sqrt{E(\infty)} &=& \frac{\sqrt{3\kappa}}{2}
\int_0^\infty d\xi \,\sqrt{1 + \frac{3}{\kappa E \rho^2} }
\,\, \phi'^2 \cr
&>& \frac{\sqrt{3\kappa}}{2} \int_0^\infty d\xi \, \phi'^2 \, .
\end{eqnarray}
Noting that $E(0) \le |U_{\rm tv}|$ and recalling that
$E(\infty) = |U_{\rm fv}|$, we have
\begin{equation}
\int_0^\infty d\xi \, \phi'^2_b < \frac{2}{\sqrt{3\kappa}}
\left( \sqrt{|U_{\rm tv}|} -\sqrt{|U_{\rm fv}|} \right) \, .
\label{generalineq}
\end{equation}
This inequality is exact, and does not depend on any approximations.
It therefore applies to any bounce solution. In particular, it should
reduce to our previous results for thin-wall bounces. In these
bounces $\phi'$ is taken to vanish outside the wall region, so the
integration can be restricted to the range $\xi_1<\xi<\xi_2$. In the
CDL thin-wall approximation the bounce profile in the wall region is
approximately that of a (1+1)-dimensional kink.
Equation~(\ref{sigmaDef}) gives the surface tension in terms of an
integral of the potential $U(\phi_b)$. A virial
theorem~\cite{Weinberg:2012pjx} relates this to the integral of
$\phi'^2$ and shows that the bounds of Eqs.~(\ref{ineq}) and
(\ref{generalineq}) are equivalent within the accuracy of the
approximation.
For the new thin-wall case, demonstrating the equivalence of
Eqs.~(\ref{ineq3}) and (\ref{generalineq}) requires a bit more work.
We begin by noting the identity
\begin{equation}
-\frac{1}{\sqrt{3\kappa}} \int_{\xi_1}^{\xi_2} d\xi \,
\frac{d}{d\xi}\left(\rho^3 \sqrt{E} \right)
= \int_{\xi_1}^{\xi_2} d\xi \, \rho^3 U(\phi_b) \sqrt{1 +
\frac{3}{\kappa E \rho^2} } \, ,
\label{integral1}
\end{equation}
which is obtained by evaluating the derivative inside the integral
on the left-hand side and using Eq.~(\ref{sqrtderiv}).
Alternatively, using the fact that the integrand is a total
derivative gives
\begin{eqnarray}
-\frac{1}{\sqrt{3\kappa}} \int_{\xi_1}^{\xi_2} d\xi \,
\frac{d}{d\xi}\left(\rho^3 \sqrt{E} \right)
&=& \frac{1}{\sqrt{3\kappa}} \left[ -(\rho_2^3 - \rho_1^3)\sqrt{E_2}
- \rho_1^3 (\sqrt{E_2}- \sqrt{E_1} ) \right] \cr
&=& \frac{1}{\sqrt{3\kappa}} \left[- (\rho_2^3 - \rho_1^3)\sqrt{E_2}
-\rho_1^3 \int_{\xi_1}^{\xi_2} d\xi \, \frac {d\sqrt{E}}{d\xi} \right] \cr
&=& - \frac{1}{\kappa\ell_{\rm fv}} (\rho_2^3 - \rho_1^3)
+\frac12 \rho^3_1 \int_{\xi_1}^{\xi_2} d\xi \,
\phi'^2 \sqrt{1 + \frac{3}{\kappa E\rho^2}} \, .
\label{integral2}
\end{eqnarray}
In the last equality we have used the definition of the AdS length,
Eq.~(\ref{ellDef}), and the fact that $E_2 = -U_{\rm fv}$.
Comparing Eqs.~(\ref{integral1}) and (\ref{integral2}), we have
\begin{equation}
\int_{\xi_1}^{\xi_2} d\xi \, \rho^3 U(\phi_b) \sqrt{1 +
\frac{3}{\kappa E\rho^2}}
+ \frac{1}{\kappa\ell_{\rm fv}} (\rho_2^3 - \rho_1^3)
= \frac12 \rho^3_1 \int_{\xi_1}^{\xi_2} d\xi \,
\phi'^2 \sqrt{1 + \frac{3}{\kappa E\rho^2}} \, .
\label{IntEqualInt}
\end{equation}
In the range of integration, $\xi_1 \le \xi \le \xi_2$, the pseudo-energy
satisfies $E \ge E_2 = |U_{\rm fv}|$, while $\rho \ge \rho_1$. It follows
that
\begin{equation}
\frac{3}{\kappa E\rho^2} < \frac{\ell_{\rm fv}^2}{\rho_1^2} \, .
\end{equation}
Hence in the limit of large bounce radius, $\rho_1 \gg
\ell_{\rm fv}$,
the square roots in Eq.~(\ref{IntEqualInt}) can be set equal to
unity, giving
\begin{equation}
\int_{\xi_1}^{\xi_2} d\xi \, \rho^3 U(\phi_b)
+ \frac{1}{\kappa\ell_{\rm fv}} (\rho_2^3 - \rho_1^3)
= \frac12 \rho^3_1 \int_{\xi_1}^{\xi_2} d\xi \, \phi'^2 \, .
\label{intEqualint}
\end{equation}
In this same limit
Eq.~(\ref{WallContrib}) reduces to
\begin{eqnarray}
B_{\rm wall} &=& 4\pi^2 \int_{\xi_1}^{\xi_2} d\xi \,\rho^3 U(\phi_b)
+ \frac{4\pi^2}{\kappa \ell_{\rm fv}} (\rho_2^3 - \rho_1^3) \cr
&=& 2 \pi^2 \rho^3_1 \int_{\xi_1}^{\xi_2} d\xi \, \phi'^2
\end{eqnarray}
where the second line follows from Eq.~(\ref{intEqualint}).
Dividing by the surface area, $2 \pi^2 \rho^3_1$, gives
\begin{equation}
\tilde\sigma = \int_{\xi_1}^{\xi_2} d\xi \, \phi'^2
\label{sigEqual}
\end{equation}
and demonstrates the equivalence of Eqs.~(\ref{ineq3}) and (\ref{generalineq})
in this regime.
\section{Bounces and walls in the critical limit}
\label{critical}
It is instructive to examine the behavior of the bounce solution as
the parameters of the theory approach the critical limit where the
nucleation of AdS bubbles is totally quenched. As this limit
is approached the wall radius of the bounce solution grows
without bound, with both $\xi_1$ and $\rho_1$ diverging. When the
parameters are actually at their limiting values, there is no
Euclidean bounce at all.
Now let us focus on the fixed time slice through the center of the
bounce, $t=0$, when the bubble is nucleated. The bubble is
instantaneously at rest, while the spatial part of the metric is
\begin{equation}
d\ell^2 = d\xi^2 +\rho(\xi)^2 d\Omega^2_2
\end{equation}
with $\rho(\xi)$ taken over from the bounce and $\xi$
again being a radial coordinate. The limit of infinite
bubble radius can be viewed, in a certain sense, as a
planar wall separating two metastable vacua.
The metric for a static planar wall can be written as
\begin{equation}
ds^2 = A(z)(-dt^2 +dx^2 +dy^2) +dz^2
\end{equation}
with $z$ being the spatial coordinate orthogonal to the wall. For our
theory with a single scalar field, the field equations are
\begin{equation}
A'^2 = \frac{\kappa}{3} A^2 \left(\frac12 \phi'^2 - U \right)
\end{equation}
\begin{equation}
\phi'' + \frac{3A'}{A}\, \phi' = \frac{dU}{d\phi}
\end{equation}
with primes indicating differentiation with respect to $z$. These
are to be solved subject to the boundary conditions that $\phi$
take its false (true) vacuum value at $z=\infty$ ($z=-\infty$).
If we make the correspondence
\begin{equation}
\xi \longleftrightarrow z \, , \qquad \rho \longleftrightarrow A \, ,
\end{equation}
these equations differ from Eqs.~(\ref{phieq}) and (\ref{rhoeq})
only by the omission of the factor of unity on the right-hand
side of Eq.~(\ref{rhoeq}). The boundary conditions at $\xi=\infty$
and $z=\infty$ agree. Although those at $\xi=0$ and $z=-\infty$
differ, they coincide in the limit of infinite bubble radius.
There is an interesting connection with supersymmetry when the parameters
are near the critical limit.
Let us suppose that we are given $\phi(\xi)$ and $\rho(\xi)$
satisfying the bounce equations Eqs.~(\ref{phieq}) and (\ref{rhoeq}).
Because $\phi$ is a monotonic function of $\xi$, we can view $\xi$,
and therefore $\phi'$, $E$, and $\rho$, as functions of $\phi$ in the
neighborhood of the bounce solution. We can then define a function
$f(\phi)$ by
\begin{equation}
f(\phi)^2 = \frac{1}{3\kappa} \left( \frac12 \phi'^2 -U \right)
= \frac{1}{3\kappa}E \, .
\label{defOFf}
\end{equation}
The derivative of $f$ with respect to $\phi$ is
\begin{eqnarray}
\frac{df}{d\phi} &=& \frac{df/d\xi}{d\phi/d\xi} \cr
&=& \frac{1}{\sqrt{3\kappa}} \frac{d\sqrt {E}}{d\xi}\frac{1}{\phi'}
\cr &=& -\frac12 \phi' \alpha^{-1}
\label{dfdphi}
\end{eqnarray}
where the last line follows from Eq.~(\ref{sqrtderiv})
and
\begin{equation}
\alpha(\phi) = \left(1 + \frac{1}{\kappa^2 \rho^2 f^2} \right)^{-1/2} \, .
\end{equation}
Solving Eq.~(\ref{dfdphi}) for $\phi'$ and substituting the result into
Eq.~(\ref{defOFf}) leads to
\begin{equation}
U = 2\alpha^2\left(\frac{df}{d\phi}\right)^2
- 3 \kappa f^2 \, .
\label{quasiSugra}
\end{equation}
If $\alpha$ were equal to unity, this would be the form for the
potential in a supergravity theory, with $f$ being the
superpotential. In fact, $\alpha \approx 1$ wherever $\rho \gg
\ell_{\rm fv}$. Thus, for a near-critical bounce the deviation from
the supersymmetric form is confined to a region of approximate true
vacuum in the center of the bounce.\footnote{This assumes that the
false vacuum is AdS. The case of a Minkowski false vacuum is
addressed in the appendix.}
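The algebra leading from Eqs.~(\ref{defOFf}) and (\ref{dfdphi}) to Eq.~(\ref{quasiSugra}) can be checked mechanically; the following symbolic sketch (ours) eliminates $\phi'$ exactly as in the text:
\begin{verbatim}
import sympy as sp

kappa, rho, phip = sp.symbols('kappa rho phip', positive=True)
U = sp.Symbol('U')
E = phip**2/2 - U
f2 = E/(3*kappa)                          # f^2, Eq. (defOFf)
alpha2 = 1/(1 + 1/(kappa**2*rho**2*f2))   # alpha^2
dfdphi2 = phip**2/(4*alpha2)              # (df/dphi)^2, from Eq. (dfdphi)
print(sp.simplify(2*alpha2*dfdphi2 - 3*kappa*f2 - U))   # -> 0
\end{verbatim}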
Repeating the calculation for the static planar wall, one finds that
$\alpha = 1$ everywhere in space. The domain wall would then have the form
of a supersymmetric wall interpolating
between isolated vacua of an ${\cal N}=1$ supergravity
potential, with the disappearance of the bounce solution guaranteeing the
stability of each of these vacua against decay by nucleation of
bubbles of the other.
Note, however, that our construction is restricted to the
interval of field space between the two vacua; the form of $U(\phi)$
outside this interval is unconstrained and need not be derivable from
a superpotential.
An alternative path to demonstrating vacuum stability is to prove a positive
energy theorem or a BPS bound. In the presence of gravity, Boucher
\cite{Boucher:1984yx}, generalizing Witten's work
\cite{Witten:1981mf,Nester:1982tr}, gave the following criteria: an extremum
$\bar{\phi}$ of a potential $U(\phi)$ with $U(\bar{\phi}) < 0$ is
stable if there exists a real function $W(\phi)$ such that
\begin{equation}
2 \left(\frac{d W}{d \phi}\right)^2- 3 \kappa W^2
= U(\phi) \hspace{4ex} \forall \phi
\label{boucher}
\end{equation}
and
\begin{equation}
W(\bar{\phi})=\left[ -U(\bar{\phi})/{3 \kappa}\right]^{1/2} \, .
\end{equation}
These criteria make no direct reference to bubble nucleation. That
connection was drawn by Abbott and Park~\cite{Park:1986xa}. Given a
potential $U(\phi)$, one can obtain $W(\phi)$ by integrating
Eq.~(\ref{boucher}), provided that $U+3\kappa W^2$ remains positive.
Abbott and Park showed that if the latter becomes negative at some
$\phi=\phi_s$ before one reaches an extremum of $U$, then $\phi_s$ is
the starting point $\phi(0)$ for a bounce solution that governs the
decay via bubble nucleation of the $\bar\phi$ vacuum.
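The Abbott--Park construction is straightforward to carry out numerically. The sketch below (ours; the potential is the invented example used in our earlier sketches, and the two values of $\kappa$ are arbitrary) integrates Eq.~(\ref{boucher}) away from its degenerate starting point at $\bar\phi=\phi_{\rm fv}$ and reports either a breakdown point $\phi_s$ or the existence of $W$ on the whole interval between the vacua:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

lam, a, eps, U0 = 1.0, 1.0, 0.2, -0.05
def U(p):  return lam/8*(p*p - a*a)**2 + eps*(p - a)/(2*a) + U0
def dU(p): return lam/2*p*(p*p - a*a) + eps/(2*a)
phi_fv = brentq(dU,  0.5,  1.5)
phi_tv = brentq(dU, -1.5, -0.5)

def integrate_W(kappa):
    W0 = np.sqrt(-U(phi_fv)/(3*kappa))
    d2 = lam/2*(3*phi_fv**2 - a*a)                    # U''(phi_fv)
    b  = (3*kappa*W0 + np.sqrt(9*kappa**2*W0**2 + 4*d2))/4.0
    eta = 1e-4                      # step off the degenerate start W' = 0
    rhs = lambda p, W: [-np.sqrt(max(U(p) + 3*kappa*W[0]**2, 0.0)/2.0)]
    ev = lambda p, W: U(p) + 3*kappa*W[0]**2          # breakdown monitor
    ev.terminal, ev.direction = True, -1
    return solve_ivp(rhs, (phi_fv - eta, phi_tv), [W0 + 0.5*b*eta**2],
                     events=[ev], rtol=1e-10, max_step=1e-3)

for kappa in (0.05, 0.30):
    s = integrate_W(kappa)
    if s.t_events[0].size:
        print(kappa, "breakdown at phi_s =", s.t_events[0][0])  # decay
    else:
        print(kappa, "W exists on the whole interval")          # stable
\end{verbatim}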
\section{Concluding remarks}
\label{conclude}
Gravity can quench the nucleation of bubbles in a Minkowski or AdS
false vacuum. This result was proven analytically by Coleman and De
Luccia in a thin-wall limit where the energy difference between the
true and false vacua is small. Their analysis implies an inequality,
Eq.~(\ref{ineq}), that relates the surface tension $\sigma$ and the
true- and false-vacuum energies. It is essential for this inequality
that $\sigma$ is independent of the radius of curvature of the bounce
wall, and that the contribution of the wall to the Euclidean action
is the product of $\sigma$ and the surface area of the wall.
Subsequently, numerical analyses have generalized this claim to a
wider variety of potentials \cite{Samuel:1991dy, Bousso:2006am,
Aguirre:2006ap, Masoumi:2016pqb}. As the boundary beyond which
nucleation is quenched is approached, the bubble radius at nucleation
increases without bound and a new thin-wall regime emerges
\cite{Masoumi:2016pqb}. This new thin-wall regime differs from the CDL
thin-wall regime in that the wall radius of curvature $\rho$ grows
exponentially as one moves through the bubble wall. It is then far
from obvious that the wall action can be decomposed as the product of
a surface tension and an area. Indeed, it is not even clear how the
area should be defined. In Sec.~\ref{beyond} we used the fact that
the matter field profile in the wall is independent of the curvature
of the wall to define a modified surface tension $\tilde \sigma$ that
is $\rho$-independent. We were then able to show analytically that
the wall action is $\tilde\sigma$ times the area of the inner surface
of the bounce wall. Closely following the reasoning of CDL then led
to the bound of Eq.~(\ref{ineq3}) on $\tilde\sigma$.
In Sec.~\ref{AllBound} we proved that the upper bounds on the surface
tensions in the two thin-wall regimes are limiting cases of a more
general bound, Eq.~(\ref{generalineq}), that is satisfied by all
bounces. Gravity sources a frictional force that depletes the
pseudo-energy as one moves outward in the radial direction of the
bounce. This bound expresses the constraint on this frictional force
that is required if the bounce is to interpolate between the two vacua.
The work presented here was limited to single-field potentials, but the
proof of the bound of Eq.~(\ref{generalineq}), the existence of the
generalized thin-wall regime~\cite{Masoumi:2016pqb} and the
definition of surface tension in this regime, Eq.~(\ref{sigmaTilde}), should
extend to multifield potentials. With $N$ scalar fields $\phi_i$ the bounce
should satisfy
\begin{eqnarray}
{\phi_i}'' + \frac{ 3 \rho'}{\rho} \phi_i' = \frac{\partial U}{\partial \phi_i} \, , \\
\rho'^2 = 1 + \frac{\kappa}{3}\rho^2
\left[ \sum_i \frac{1}{2} \phi_i'^2 - U (\phi_1,\dots,\phi_N) \right] \, .
\end{eqnarray}
Solving these will lead to a trajectory through field space of the
form $\phi_i(\xi) = g_i(\xi)$. In the neighborhood of this bounce
trajectory we can introduce a coordinate $\Phi$ along the trajectory,
defined by
\begin{equation}
d \Phi^2 = \sum_{i=1}^{N} dg_i^2
\end{equation}
together with $N-1$ fields normal to the trajectory.
In these new coordinates the action along the bounce
\begin{equation} S= 2 \pi^2 \int_0^{\infty} d\xi \, \left\{ \rho^3 \left[
\frac{1}{2} \Phi'^2 + U(\Phi) \right]-\frac{3}{\kappa} ( \rho
\rho'^2+\rho)\right\}
\end{equation}
only depends on $\Phi$. The form of $U(\Phi)$ depends on the explicit
bounce solution, but the only information that was assumed about the
potential was that it was a continuous function interpolating between
$U_{\textrm{tv}}$ and $U_{\textrm{fv}}$. The formal arguments made in
Secs.~\ref{beyond} and \ref{AllBound} should extend to the field
$\Phi$, and the bound of Eq.~(\ref{generalineq}) will be satisfied.
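As a toy illustration of this change of coordinates (ours; the two profiles below are invented and are not an actual bounce trajectory):
\begin{verbatim}
import numpy as np

xi = np.linspace(0.0, 10.0, 4001)
g1 = np.tanh(xi - 5.0)                   # toy two-field trajectory
g2 = 0.3/np.cosh(xi - 5.0)
speed = np.sqrt(np.gradient(g1, xi)**2 + np.gradient(g2, xi)**2)
dPhi = 0.5*(speed[1:] + speed[:-1])*np.diff(xi)
Phi = np.concatenate(([0.0], np.cumsum(dPhi)))  # arc length in field space
print(Phi[-1])
\end{verbatim}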
In the single-field case $U(\phi)$ approached a supersymmetric form as
the parameters of the theory approached the boundary where nucleation
was quenched. Precisely on this boundary, where the false vacuum
becomes stable against decay by tunneling, a static planar domain wall
appears. $U(\phi)$ takes on the supergravity form and can be written
in terms of a fake superpotential $W(\phi)$. However, this form is only
guaranteed on the interval in field space lying between the true and
false vacua. For the multifield case similar arguments allow one
to define a fake superpotential, but only along the trajectory of
the bounce in field space and only encoding a dependence on $\Phi$, but
not on the fields normal to the trajectory.
Even though these restrictions weaken the connection between stability
and supersymmetry, recent
claims~\cite{Ooguri:2016pdq,Freivogel:2016qwc,Banks:2016xpo} based on
the weak gravity conjecture~\cite{ArkaniHamed:2006dz} assert the
instability of all non-supersymmetric vacua in a UV complete
theory. Said differently, theories in which stable domain walls fail to
obey Eq.~(\ref{ineq}) live in the swampland, with gravity never strong
enough to quench a decay. See \cite{Danielsson:2016rmq,
Ooguri:2017njy, Danielsson:2016mtx,Danielsson:2017max} for more
examples of this instability.
\begin{acknowledgments}
We thank Adam Brown and Dan Freedman for useful discussions.
S.P. thanks the members of the Stanford Institute of Theoretical Physics for their
hospitality. E.W. thanks the Korea Institute for Advanced Study and
the Department of Applied Mathematics and Theoretical Physics of the
University of Cambridge for their hospitality. Part of this work was
done at Aspen Center for Physics, which is supported by U.S. National
Science Foundation grant PHY-1607611. This work was also supported by
the U.S. National Science Foundation under Grants No.~PHY-1518742, PHY-1521186 and
PHY-1620610 and by the U.S. Department of Energy under Grant
No. DE-SC0011941.
\end{acknowledgments}
\noindent
In the 1990s Hofer \cite{hofer} introduced a conjugation invariant norm that comes from a Finsler structure on the group of compactly supported Hamiltonian symplectomorphisms of the standard symplectic Euclidean space $(\mathbb{R}^{2n},{\omega_{st}})$. This norm was then generalized to any symplectic manifold by Lalonde and McDuff \cite{lalondemcduff}. The Hofer norm has been intensively studied (see for example \cite{bialypolterovich}, \cite{ostrover}, \cite{hoferzehnder}, \cite{lalondemcduff}, \cite{polterovich}, \cite{schwarz}, \cite{usher}, \cite{ustilovsky}) since the non-degeneracy and the geodesics of this norm are intimately linked to symplectic rigidity phenomena such as non-squeezing or Lagrangian intersection properties.\\
By contrast, in the contact setting Fraser, Polterovich and Rosen \cite{FPR} showed that there does not exist any conjugation invariant norm coming from a Finsler structure on the identity component of the group of compactly supported contactomorphisms of any contact manifold. More precisely they showed that any conjugation invariant norm on this group should be discrete, which means that there exists a positive constant such that any element that is not the identity has norm greater than this constant. In some sense this group is too big to carry a non-discrete conjugation invariant norm. Indeed, one important ingredient in their proof is an argument of contact flexibility: any Darboux ball can be contactly squeezed into an arbitrarily small one. \\
\noindent
Nevertheless, Sandon in \cite{sandonmetrique} constructed an integer-valued unbounded conjugation invariant norm on the identity component of the group of compactly supported contactomorphisms of $\mathbb{R}^{2n}\times S^1$ with its standard contact structure. Contact rigidity, in particular the existence of translated and discriminant points (see section \ref{section 3} for a definition), plays a crucial role in the existence of such a norm. Indeed, if we forget the contact structure, Burago, Ivanov and Polterovich \cite{BIP} proved that there is no unbounded conjugation invariant norm on the identity component of the group of compactly supported diffeomorphisms of $\mathbb{R}^{2n}\times S^1$. Since then, several authors have constructed different norms (conjugation invariant or not) on the identity component of the group of compactly supported contactomorphisms and on its universal cover \cite{discriminante}, \cite{FPR}, \cite{shelukhin}, \cite{Zapolsky}.\\
\noindent
The idea of this paper is to study the geodesics of some of these norms in this context. We focus our study on the discriminant norm \cite{discriminante}, the oscillation norm \cite{discriminante}, the Shelukhin norm \cite{shelukhin} and the Fraser-Polterovich-Rosen norm (FPR norm) \cite{FPR} on $\text{Cont}_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$, the identity component of the group of compactly supported contactomorphisms of $\mathbb{R}^{2n}\times S^1$ endowed with its standard contact structure, and on its universal cover $\widetilde{\Cont_0^c}(\mathbb{R}^{2n}\times S^1,\xi_{st})$. As we recall below, for any co-oriented contact manifold, once a global contact form is fixed for the contact distribution, there is an explicit bijection between the space of smooth compactly supported time dependent functions on the contact manifold and the space of smooth paths of compactly supported contactomorphisms starting at the identity. While Shelukhin and Fraser, Polterovich and Rosen use this correspondence to define their norms in terms of the corresponding functions' size, the discriminant norm and the oscillation norm count in some sense the minimal number of discriminant points we should encounter while going from the identity to the considered contactomorphism along a smooth path.\\
Because the bijection mentioned above - between paths of contactomorphisms and functions - depends on the choice of a contact form, the Shelukhin norm is not conjugation invariant. All the other mentioned norms are conjugation invariant. However, as we will see in section \ref{section 2}, the Shelukhin norm, the oscillation norm and the discriminant norm share the common property to be defined by minimizing some length functionals on the space of smooth paths of compactly supported contactomorphisms. Therefore it makes sense to talk about the geodesics of these three norms: they are the paths that minimize the length (see section \ref{section 1} for a precise definition).\\
The main result that we present in this paper is the following: we show that a path of contactomorphisms generated by a function satisfying certain conditions is a geodesic for the discriminant, oscillation and Shelukhin norms. We show moreover that the norm of a contactomorphism that is the time-one of such a geodesic can be expressed in terms of the maximum of the absolute value of the corresponding function. In addition, even if the FPR norm does not \textit{a priori} come from a length functional - therefore we cannot talk about its geodesics - we still can express the FPR norm of such a time-one path in terms of the maximum of the absolute value of the corresponding function. In particular we get as a corollary a new proof of the unboundedness of the discriminant norm, the oscillation norm, the Shelukhin norm and the FPR norm on $\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$. Before stating precisely these results let us define the standard contact structure and the standard contact form on $\mathbb{R}^{2n}\times S^1$. This will allow us to explicit the bijection between compactly supported time-dependent functions and compactly supported paths of contactomorphisms starting at the identity. \\
\noindent
For any positive natural number $n\in\mathbb{N}_{>0}$, the standard contact structure $\xi_{st}$ on $\mathbb{R}^{2n+1}$ is given by the kernel of the $1$-form $\alpha_{st}=dz-\sum\limits_{i=1}^ny_idx_i$, where $(x_1,\dots,x_n,y_1,\dots,y_n,z)$ are the coordinate
functions on $\mathbb{R}^{2n+1}$. The Reeb vector field $R_{\alpha_{st}}$ is given by $\frac{\partial}{\partial z}$. The $1$-form $\alpha_{st}$ is invariant by the action of $\mathbb{Z}$ on $\mathbb{R}^{2n+1}$ given by $k\cdot(x,y,z)=(x,y,z+k)$ for all $k\in\mathbb{Z}$ and $(x,y,z)\in\mathbb{R}^{2n+1}$, so it descends
to the quotient $\mathbb{R}^{2n}\times S^1$ to a $1$-form we also denote $\alpha_{st}$. The kernel of this latter form, which we still denote by $\xi_{st}$, is the standard contact distribution on $\mathbb{R}^{2n}\times S^1$, and the Reeb vector field $R_{\alpha_{st}}$ is again given by $\frac{\partial}{\partial z}$.\\
\noindent
To any compactly supported smooth path of contactomorphisms $\{\phi^t\}_{t\in[0,1]}\subset\text{Cont}_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ starting at the identity one can associate the compactly supported function $h:[0,1]\times \mathbb{R}^{2n}\times S^1\to\mathbb{R}$, $(t,p,z)\mapsto h^t(p,z)$ that satisfies
\[\alpha_{st}\left(\frac{d}{dt}\phi^t\right)=h^t\circ\phi^t\ \text{for all }t\in[0,1].\]
Conversely to any time dependent compactly supported smooth function $h :[0,1]\times\mathbb{R}^{2n}\times S^1\to\mathbb{R}$ one can associate the smooth path of vector fields $X_h : [0,1]\to\chi(M)$ satisfying
\begin{equation}\label{contact vector field} \left\{
\begin{array}{ll}
\alpha_{st}(X_h^t)=h^t \\
d\alpha_{st}(X_h^t,\cdot)=dh^t\left(\frac{\partial}{\partial z}\right)\alpha_{st}-dh^t.
\end{array}
\right.
\end{equation}
Using Cartan's formula one can show that the flow $\{\phi_{X_h}^t\}_{t\in\mathbb{R}}$ of the time dependent vector field $X_h$, i.e.\ the path of diffeomorphisms such that $\frac{d}{dt}\phi_{X_h}^t=X_h^t(\phi_{X_h}^t)$ for all $t\in\mathbb{R}$ and $\phi_{X_h}^0=\Id$, is a smooth path of contactomorphisms. We denote by $\phi_h:=\{\phi_{X_h}^t\}_{t\in[0,1]}$ the restriction of this flow to the time interval $[0,1]$ and we say that $h$ is the Hamiltonian function that generates $\phi_h$. \\
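For instance, in the lowest-dimensional case $n=1$, with $\alpha_{st}=dz-y\,dx$, the system (\ref{contact vector field}) can be solved explicitly for the components of $X_h$. The following symbolic sketch (ours) does this and recovers $X_h=-h_y\,\partial_x+(h_x+y\,h_z)\,\partial_y+(h-y\,h_y)\,\partial_z$; for $h$ independent of $z$ this is the lift to $\mathbb{R}^2\times S^1$ of a Hamiltonian vector field on $\mathbb{R}^2$, which is exactly the situation of the theorems below:
\begin{verbatim}
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c')
h = sp.Function('h')(x, y, z)
# With alpha = dz - y dx we have dalpha = dx ^ dy, so for
# X = a d/dx + b d/dy + c d/dz:
#   alpha(X) = c - y a ,   dalpha(X, .) = a dy - b dx ,
# while  h_z alpha - dh = (-y h_z - h_x) dx - h_y dy .
eqs = [sp.Eq(c - y*a, h),                               # alpha(X) = h
       sp.Eq(-b, -y*sp.diff(h, z) - sp.diff(h, x)),     # dx-component
       sp.Eq(a, -sp.diff(h, y))]                        # dy-component
print(sp.solve(eqs, [a, b, c]))
# {a: -h_y, b: h_x + y*h_z, c: h - y*h_y}
\end{verbatim}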
In Theorem \ref{geodesique shelukhin/fpr} below $\nu_S^{\alpha_{st}}$ and $\widetilde{\nu_S^{\alpha_{st}}}$ denote the Shelukhin (pseudo-)norms on $\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ and $\widetilde{\Cont_0^c}(\mathbb{R}^{2n}\times S^1,\xi_{st})$ respectively, and the notation $\mathcal{L}_S^{\alpha_{st}}$ stands for the Shelukhin length functional that we define in section \ref{section 2}. We denote by $\nu^{\alpha_{st}}_{FPR}$ and $\widetilde{\nu^{\alpha_{st}}_{FPR}}$ the FPR norms on $\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ and $\widetilde{\Cont_0^c}(\mathbb{R}^{2n}\times S^1,\xi_{st})$ respectively.\\
\begin{theoreme}\label{geodesique shelukhin/fpr}
Let $H:\mathbb{R}^{2n}\to\mathbb{R}$ be a smooth function with compact support. If the Hessian of $H$ satisfies $\underset{p\in\mathbb{R}^{2n}}\max \ |||\Hess_p(H)|||<2\pi$, where $|||\cdot|||$ denotes the operator norm, then the path $\phi_h$ generated by the Hamiltonian function
\[\begin{aligned}
h:\mathbb{R}^{2n}\times S^1\to\mathbb{R}\ \ \ \ \ \ \
(p,z)&\mapsto H(p)
\end{aligned}\]
is a geodesic for the Shelukhin pseudo-norms $\widetilde{\nu_S^{\alpha_{st}}}$ and $\nu^{\alpha_{st}}_S$. More precisely we have
\[\mathcal{L}^{\alpha_{st}}_S(\phi_h)=\widetilde{\nu^{\alpha_{st}}_S}([\phi_h])=\nu^{\alpha_{st}}_S(\phi_h^1)=\max\{\max h;-\min h\}.\]
Moreover, for any compactly supported contactomorphism $\varphi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ we have
\[\widetilde{\nu^{\alpha_{st}}_{FPR}}\left([\varphi\circ\phi_h\circ\varphi^{-1}]\right)=\nu^{\alpha_{st}}_{FPR}\left(\varphi\circ\phi_h^1\circ\varphi^{-1}\right)=\max\left\{\lceil\max h\rceil,\lceil -\min h\rceil\right\}.\]
\end{theoreme}
\vspace{3.ex}
In Theorem \ref{geodesique discriminante/oscillation} below $\nu_d$, $\widetilde{\nu_d}$ and $\nu_{osc}$, $\widetilde{\nu_{osc}}$ denote the discriminant and oscillation norms on $\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ and $\widetilde{\Cont_0^c}(\mathbb{R}^{2n}\times S^1,\xi_{st})$ respectively. The notations $\mathcal{L}_d$ and $\mathcal{L}_{osc}$ stand respectively for the discriminant length functional and the oscillation length functional that we define in section \ref{section 2}.\\
\begin{theoreme}\label{geodesique discriminante/oscillation}
Let $H:\mathbb{R}^{2n}\to\mathbb{R}$ be a smooth function with compact support such that $0$ is a regular value of $H$ in the interior of its support, i.e.\ for all $x\in\Interior\left(\Supp(H)\right)\cap H^{-1}\{0\},\ d_xH\ne 0$. Suppose that the Hessian of $H$ satisfies $\underset{p\in\mathbb{R}^{2n}}\max \ |||\Hess_p(H)|||<2\pi$, and consider the path $\phi_h$ generated by the Hamiltonian function
\[\begin{aligned}
h:\mathbb{R}^{2n}\times S^1\to\mathbb{R}\ \ \ \ \ \ \
(p,z)&\mapsto H(p).
\end{aligned}\]
Then for any $\varphi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ the path $\varphi\circ\phi_h\circ\varphi^{-1}$ is a geodesic for the discriminant norms $\widetilde{\nu_d}$ and $\nu_d$. More precisely we have
\[\mathcal{L}_d(\varphi\circ\phi_h\circ\varphi^{-1})=\widetilde{\nu_d}([\varphi\circ\phi_h\circ\varphi^{-1}])=\nu_d(\varphi\circ\phi_h^1\circ\varphi^{-1})=\max\{\lfloor \max h\rfloor+1,\lfloor -\min h\rfloor +1\}\ .\]
If we assume moreover that $H :\mathbb{R}^{2n}\to\mathbb{R}_{\geq 0}$ is non-negative (resp.\ $H :\mathbb{R}^{2n}\to\mathbb{R}_{\leq 0}$ is non-positive), then the path $\varphi\circ\phi_h\circ\varphi^{-1}$ is a geodesic for the oscillation norms $\widetilde{\nu_{osc}}$ and $\nu_{osc}$. More precisely
\[\begin{aligned}\mathcal{L}_{osc}(\varphi\circ\phi_h\circ\varphi^{-1})=\widetilde{\nu_{osc}}([\varphi\circ\phi_h\circ\varphi^{-1}])=\nu_{osc}(\varphi\circ\phi_h^1\circ\varphi^{-1})=&\lfloor \max h\rfloor+1\\
( \text{resp.\ }\mathcal{L}_{osc}(\varphi\circ\phi_h\circ\varphi^{-1})=\widetilde{\nu_{osc}}([\varphi\circ\phi_h\circ\varphi^{-1}])=\nu_{osc}(\varphi\circ\phi_h^1\circ\varphi^{-1})= &\lfloor-\min h\rfloor +1).\end{aligned}\]
\end{theoreme}
\vspace{3.ex}
\begin{corollaire}\label{corollaire 1}
The norms $\nu^{\alpha_{st}}_{FPR}$, $\nu^{\alpha_{st}}_S$, $\nu_d$, $\nu_{osc}$ are unbounded on $\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$.
\end{corollaire}
\begin{proof}
Let $B>0$ be a positive number and consider the smooth compactly supported function
\[\begin{aligned}
f_B :\mathbb{R} &\to\mathbb{R}_{\geq 0}\\
x &\mapsto
\left\{
\begin{array}{ll}
e^{-B^2(x^2-B)^{-2}} & \text{ if } x\in[-\sqrt{B},\sqrt{B}] \\
0 & \text{ if }|x|>\sqrt{B}.
\end{array}
\right.
\end{aligned}\]
Denoting by $\lVert \cdot\rVert$ the standard Euclidean norm on $\mathbb{R}^{2n}$, for any $A>0$ the smooth non-negative, compactly supported Hamiltonian function
\[h_{B,A} :\mathbb{R}^{2n}\times S^1\to\mathbb{R}_{\geq 0},\ \ \ \ \ \ (p,z)\mapsto \frac{A}{f_B(0)}f_B\left(\lVert p\rVert\right) \]
is independent of the $S^1$-coordinate and satisfies $h_{B,A}(0,z)=\max h_{B,A}=A$ for all $z\in S^1$. Note also that $h_{B,A}$ does not vanish inside the interior of its support. Moreover, an easy computation shows that there exists an increasing function $B_0 :\mathbb{R}_{>0}\to\mathbb{R}_{>0}$ such that $\underset{p\in\mathbb{R}^{2n}}\max |||\Hess_p h_{B,A}(\cdot,z)|||<2\pi$ for any $B\geq B_0(A)$ and $z\in S^1$. So by Theorem \ref{geodesique shelukhin/fpr} and Theorem \ref{geodesique discriminante/oscillation}, the contactomorphism $\phi:=\phi_{h_{B_0(A),A}}^1$ satisfies \[\nu_{osc}(\phi)=\nu_d(\phi)=\left\lfloor A\right\rfloor +1,\ \ \ \ \nu^{\alpha_{st}}_{FPR}(\phi)=\left\lceil A\right\rceil\ \ \text{ and }\ \ \nu^{\alpha_{st}}_{S}(\phi)=A.\] Since $A$ can be chosen as big as we want, we deduce Corollary \ref{corollaire 1}. Note that the diameter of the support of $h_{B_0(A),A}$ grows with $A$.\end{proof}
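The ``easy computation'' invoked in the proof can also be organized numerically. For a radial function $g(\lVert p\rVert)$ the Hessian has eigenvalues $g''(r)$ and $g'(r)/r$ (the latter with multiplicity $2n-1$), so the following sketch (ours) scans $r$ and searches a coarse grid for the smallest admissible $B$:
\begin{verbatim}
import numpy as np
import sympy as sp

r, B, A = sp.symbols('r B A', positive=True)
fB = sp.exp(-B**2/(r**2 - B)**2)
g = A*fB/fB.subs(r, 0)                   # h_{B,A} as a function of r = |p|
g1 = sp.lambdify((r, B, A), sp.diff(g, r), 'numpy')
g2 = sp.lambdify((r, B, A), sp.diff(g, r, 2), 'numpy')

def hess_norm(Bv, Av):
    rs = np.linspace(1e-6, np.sqrt(Bv)*(1 - 1e-9), 4000)
    return np.max(np.maximum(np.abs(g2(rs, Bv, Av)),
                             np.abs(g1(rs, Bv, Av))/rs))

def B0(Av):                              # smallest grid B with norm < 2 pi
    for Bv in np.arange(1.0, 500.0, 1.0):
        if hess_norm(Bv, Av) < 2*np.pi:
            return Bv

for Av in (1.0, 5.0, 25.0):
    print(Av, B0(Av))                    # B_0 increases with A
\end{verbatim}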
Unboundedness of these norms on $\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ was actually already known \cite{discriminante}, \cite{FPR}, \cite{shelukhin}. The novelty of our result is that given any positive number $A>0$ one can construct an explicit Hamiltonian function such that the norm of the time-one of the path generated by this Hamiltonian function is exactly $A$.\\
Even if these norms do not \textit{a priori} measure the same thing, it turns out that they almost agree for the contactomorphisms described in Theorems \ref{geodesique shelukhin/fpr} and \ref{geodesique discriminante/oscillation}. It then seems reasonable to ask whether these norms are equivalent. Similar questions can be found in \cite{FPR}, \cite{survey}.\\
Another natural question that arises from these results comes from the similarity with the geodesics of Hofer geometry. Indeed, it is well known that a time independent compactly supported function $H:\mathbb{R}^{2n}\to\mathbb{R}$ with small Hessian generates a path of Hamiltonian symplectomorphisms of the standard symplectic Euclidean space $(\mathbb{R}^{2n},{\omega_{st}})$ that is a geodesic for the Hofer norm \cite{bialypolterovich}, \cite{hoferzehnder}. So the above theorems say that these geodesics can be lifted to paths of contactomorphisms that are still geodesics for the Shelukhin norm, and under some more assumptions, geodesics for the discriminant norms and the oscillation norm too. It would be interesting to know which are the geodesics of the Hofer norm that lift to geodesics of $\nu_S^{\alpha_{st}}$, $\nu_d$ and $\nu_{osc}$.\\
The main tool to prove Theorems \ref{geodesique shelukhin/fpr} and \ref{geodesique discriminante/oscillation} is the translation selector $c :\text{Cont}_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})\to\mathbb{R}_{\geq 0}$ constructed by Sandon using generating functions \cite{sandonthese}. The strategy for the proof is to first bound all these norms from below in terms of this translation selector (see Corollary \ref{corollaire shelukhin} and Proposition \ref{crucial}) and then to show that the paths we are considering realize this lower bound.\\
Before moving to the next section, it is interesting to note that Sandon in \cite{sandonthese} uses this translation selector to associate to any domain $U\subset\mathbb{R}^{2n}\times S^1$ the following number
\[c(U):=\sup\left\{c(\phi)\ |\ \phi\in\Cont_0^c(U,\xi_{st})\right\}, \]
where $\Cont_0^c(U,\xi_{st})$ is the set of time-one maps of smooth paths whose supports are compact subsets of $U$. The integer part of this function turns out to be a contact capacity, more precisely we have
\begin{enumerate}
\item $c(U)\leq c(V)$ for any $U\subset V$ lying inside $\mathbb{R}^{2n}\times S^1$,
\item $\left\lceil c(U)\right\rceil=\left\lceil c(\phi(U))\right\rceil$ for any $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ and any $U\subset\mathbb{R}^{2n}\times S^1$.
\item $c(\mathcal{B}^{2n}(r)\times S^1)=c(\mathcal{B}^2(r)\times\mathbb{R}^{2n-2}\times S^1)=\pi r^2$ where $\mathcal{B}^{2n}(r)$ and $\mathcal{B}^2(r)$ denote the standard Euclidean ball of radius $r$ centered at $0$ lying inside $\mathbb{R}^{2n}$ and $\mathbb{R}^2$ respectively.
\end{enumerate}
\noindent
In the same paper \cite{sandonthese} it is shown that if a contactomorphism $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ displaces $U\subset \mathbb{R}^{2n}\times S^1$, i.e.\ $\phi(U)\cap U=\emptyset$, then
\begin{equation}\label{sandoninegalite}
\left\lceil c(U)\right\rceil\leq \left\lceil c(\phi)\right\rceil+\left\lceil c(\phi^{-1})\right\rceil.
\end{equation}
Corollary \ref{corollaire shelukhin} and Proposition \ref{crucial} will allow us to have similar results for the norms we are considering:
\begin{proposition}\label{capacite-energie}
Let $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ be a contactomorphism that displaces $U\subset\mathbb{R}^{2n}\times S^1$, i.e.\ $\phi(U)\cap U=\emptyset$. Then $\left\lceil \nu(\phi)\right\rceil \geq\frac{1}{2}\left\lceil c(U)\right\rceil $ where $\nu$ denotes the FPR, discriminant, oscillation or Shelukhin norm.\\
\end{proposition}
In the next section we give the basic definitions of the group of contactomorphisms, its universal cover, conjugation invariant norms coming from length functionals and their geodesics. In the third section we recall the definition of the discriminant norm, the oscillation norm, the Shelukhin norm and the FPR norm. Then in the fourth section we give the main steps of the construction of Sandon's translation selector. Finally in the last section we give the proofs of Theorems \ref{geodesique shelukhin/fpr}, \ref{geodesique discriminante/oscillation} and Proposition \ref{capacite-energie}.\\
\textbf{Acknowledgement -} This project is a part of my PhD thesis done under the supervision of Sheila Sandon. I am very grateful to her for introducing me to this beautiful subject of research. I thank her for all the interesting discussions that we had without which this work would not have existed. I would also like to thank Miguel Abreu, Jean-François Barraud and Mihai Damian for their support and their useful advice. The author is partially supported by the Deutsche Forschungsgemeinschaft under the Collaborative Research Center SFB/TRR 191 - 281071066 (Symplectic Structures in Geometry, Algebra and Dynamics).
\section{Basic definitions}\label{section 1}
\noindent
Let $M$ be a connected manifold of dimension $2n+1$ endowed with a co-oriented contact distribution $\xi$, i.e.\ a distribution of hyperplanes given by the kernel of a $1$-form $\alpha\in\Omega^1(M)$ such that $\alpha\wedge (d\alpha)^n$ is a volume form. We say that a diffeomorphism $\phi\in\text{Diff}(M)$ is a contactomorphism if it preserves the contact distribution, i.e.\ $d_x\phi(\xi_x)=\xi_{\phi(x)}$ for all $x$ in $M$ or equivalently if there exists a smooth function $f: M\to\mathbb{R}\setminus\{0\}$ such that $\phi^*\alpha=f\alpha$. We say that a $[0,1]$-family of contactomorphisms $\{\phi^t\}_{t\in[0,1]}$ is a smooth path of contactomorphisms if the map $[0,1]\times M\to M$,
$(t,x)\mapsto \phi^t(x)$ is smooth. From now on we denote a smooth path of contactomorphisms by $\{\phi^t\}$ and omit the subscript $t\in[0,1]$. We will study the set of all contactomorphisms $\phi$ of $M$ that can be joined to the identity by a smooth path of compactly supported contactomorphisms $\{\phi^t\}$, that is to say that $\phi^0=\Id$, $\phi^1=\phi$ and $\Supp(\{\phi^t\}):=\overline{\underset{t\in[0,1]}\bigcup\{x\in M\ |\ \phi^t(x)\ne x\}}$ is a compact subset of $M$. This set is a group with respect to composition and we denote it by $\Cont^c_0(M,\xi)$. Another group that will be of interest is the universal cover of $\Cont^c_0(M,\xi)$: the set of classes of smooth compactly supported paths starting from the identity, where two different paths are identified if they are homotopic with fixed endpoints. More precisely, denoting by $\mathcal{C}^\infty([0,1],\Cont_0^c(M,\xi))$ the set of smooth paths of compactly supported contactomorphisms, we define $\widetilde{\Cont^c_0}(M,\xi):=\left\{\{\phi^t\}\in\mathcal{C}^\infty([0,1],\Cont_0^c(M,\xi))\ |\ \phi^0=\Id\right\} /\sim$, where $\{\phi^t\}\sim\{\varphi^t\}$ if and only if \begin{enumerate}
\item they both finish at the same point, i.e.\ $\phi^1=\varphi^1$
\item there exists a smooth function
\[\begin{aligned}
\Psi : [0,1]\times [0,1]\times M&\to M\\
(s,t,x)&\mapsto \Psi_s^t(x)\
\end{aligned}\]
such that
\begin{enumerate}
\item for all $s\in[0,1]$, $\{\Psi_s^t\}_{t\in[0,1]}$ is a smooth path of contactomorphisms starting at identity and ending at $\phi^1=\varphi^1$
\item for all $t\in[0,1]$, $\Psi_0^t=\phi^t$ and $\Psi_1^t=\varphi^t$.
\end{enumerate}
\end{enumerate}
\noindent
The application
\[\begin{aligned}
* : \widetilde{\Cont^c_0}(M,\xi)\times\widetilde{\Cont^c_0}(M,\xi)&\to\widetilde{\Cont^c_0}(M,\xi)\\
([\{\phi^t\}],[\{\varphi^t\}])&\mapsto [\{\phi^t\}]*[\{\varphi^t\}]:= [\{\phi^t\circ\varphi^t\}]
\end{aligned}\]
is well defined, and provides a group law on $\widetilde{\Cont^c_0}(M,\xi)$. Using the fact that time reparametrisation acts trivially on $\widetilde{\Cont^c_0}(M,\xi)$, one can show that this group law coincides with the concatenation of paths. More precisely, let us fix a smooth bijective and increasing function $a : [0,1]\to[0,1]$ such that all the derivatives of $a$ at times $0$ and $1$ vanish, and define for all smooth paths $\{\phi^t\}$, $\{\varphi^t\}$ contained in $\Cont_0^c(M,\xi)$ the concatenated path $\{\varphi^t\}\cdot\{\phi^t\}$ by
\[\{\varphi^t\}\cdot\{\phi^t\}:= \left\{
\begin{array}{ll}
\phi^{a(2t)} & \text{ if } t\in[0,1/2] \\
\varphi^{a(2t-1)}\circ\phi^1 & \text{ if }t\in[1/2,1].
\end{array}
\right.\]
Then $[\{\phi^t\}\cdot\{\psi^t\}]=[\{\phi^t\}]*[\{\psi^t\}]$. Moreover for all $[\{\phi^t\}]$, $[\{\varphi^t\}]\in\widetilde{\Cont_0^c}(M,\xi)$ we have
\begin{enumerate}
\item $[\{\phi^t\}]^{-1}=[\{\phi^{1-t}\circ(\phi^1)^{-1}\}]=[\{(\phi^t)^{-1}\}]$
\item $[\{\varphi^t\}][\{\phi^t\}][\{\varphi^t\}]^{-1}=[\{\varphi^1\circ\phi^t\circ(\varphi^1)^{-1}\}]$.\\
\end{enumerate}
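For concreteness, the reparametrisation $a$ fixed above can be realised with the standard bump function $t\mapsto e^{-1/t}$. The following sketch (ours; $2\times 2$ matrices stand in for contactomorphisms and matrix multiplication for composition) implements $a$ and the concatenation just defined:
\begin{verbatim}
import numpy as np

def f(u):                       # smooth, vanishes with all derivatives at 0
    u = np.asarray(u, dtype=float)
    return np.where(u > 0, np.exp(-1.0/np.maximum(u, 1e-300)), 0.0)

def a(t):                       # smooth increasing bijection of [0,1],
    return f(t)/(f(t) + f(1.0 - t))   # flat to all orders at 0 and 1

def concat(path1, path2):       # path1 is run first, as in the text
    def path(t):
        if t <= 0.5:
            return path1(a(2*t))
        return path2(a(2*t - 1)) @ path1(1.0)
    return path

p1 = lambda t: np.array([[1.0, t], [0.0, 1.0]])        # toy "paths"
p2 = lambda t: np.array([[1.0 + t, 0.0], [0.0, 1.0/(1.0 + t)]])
c = concat(p1, p2)
print(np.allclose(c(1.0), p2(1.0) @ p1(1.0)))          # endpoint = product
\end{verbatim}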
\begin{remarque}
By putting the strong $\mathcal{C}^\infty$-topology on the group of compactly supported contactomorphisms $\Cont^c(M,\xi)$, the identity component of $\Cont^c(M,\xi)$ corresponds to $\Cont^c_0(M,\xi)$. Moreover this component is a sufficiently pleasant topological space so that its universal cover exists and can naturally be identified to $\widetilde{\Cont^c_0}(M,\xi)$. For more details we refer to the book of Banyaga \cite{banyagachemin}. \\
\end{remarque}
We will study in this paper four different norms on the groups $\text{Cont}_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ and $\widetilde{\Cont_0^c}(\mathbb{R}^{2n}\times S^1,\xi_{st})$. Three of them will be conjugation invariant.\\
\begin{definition}
Let $G$ be a group. We say that an application $\nu : G\to \mathbb{R}_{\geq 0}$ is a norm if for all $h,g\in G$ it satisfies:
\begin{enumerate}
\item the triangular inequality, i.e.\ $\nu(gh)\leq \nu(g)+\nu(h)$
\item symmetry, i.e.\ $\nu(g)=\nu(g^{-1})$
\item non-degeneracy, i.e.\ $\nu(g)=0$ if and only if $g$ is the neutral element of the group.
\end{enumerate}
If $\nu$ satisfies only the first and second property we say that $\nu$ is a pseudo-norm. Moreover we say that $\nu$ is conjugation invariant if $\nu(hgh^{-1})=\nu(g)$.\\
\end{definition}
\begin{remarque}
Another way to study conjugation invariant norms on a group $G$ is given by the point of view of bi-invariant metrics $d : G\times G\to\mathbb{R}_{\geq 0}$, i.e.\ metrics $d$ such that $d(hg_1,hg_2)=d(g_1h,g_2h)=d(g_1,g_2)$ for all $g_1,g_2,h\in G$. Indeed, to any conjugation invariant norm $\nu : G\to\mathbb{R}_{\geq 0}$ one can associate the bi-invariant metric $d(g_1,g_2):=\nu(g_1g_2^{-1})$. Conversely, to any bi-invariant metric $d : G\times G\to\mathbb{R}_{\geq 0}$ one can associate the conjugation invariant norm $\nu(g)=d(\Id,g)$.\\
\end{remarque}
The norms we are interested in are of two types.\\
The first type of norms $\nu$ on $\Cont_0^c(M,\xi=\Ker(\alpha))$ and $\widetilde{\nu}$ on $\widetilde{\Cont_0^c}(M,\xi)$ we consider come from some length functional
\[\mathcal{L} : \mathcal{C}^\infty([0,1],\Cont_0^c(M,\xi))\to\mathbb{R}_{\geq 0}\cup\{+\infty\}\]
satisfying the following properties:
\begin{enumerate}
\item $\mathcal{L}$ is invariant by time reparametrisation, i.e.\ if $a :[0,1]\to[0,1]$ is a smooth bijective increasing function then $\mathcal{L}(\{\phi^t\})=\mathcal{L}(\{\phi^{a(t)}\})$
\item $\mathcal{L}(\{\phi^t\}\cdot\{\psi^t\})\leq\mathcal{L}(\{\phi^t\})+\mathcal{L}(\{\psi^t\})$
\item $\mathcal{L}(\{\phi^t\})>0$ for any path that is non constant
\item $\mathcal{L}(\{\phi^{1-t}\circ(\phi^1)^{-1}\})=\mathcal{L}(\{\phi^t\})$ or $\mathcal{L}(\{(\phi^t)^{-1}\})=\mathcal{L}(\{\phi^t\})$.
\end{enumerate}
\noindent
The associated applications $\nu$ and $\widetilde{\nu}$ are then defined for any $\phi\in\Cont_0^c(M,\xi)\setminus\{\Id\}$ and for any $[\{\phi^t\}]\in\widetilde{\Cont_0^c}(M,\xi)\setminus\{\Id\}$ by
\[\nu(\phi)=\inf\left\{\mathcal{L}(\{\phi^t\})\ |\ \phi^0=\Id\ , \phi^1=\phi\right\}\ \ \ \text{ and }\ \ \ \widetilde{\nu}([\{\phi^t\}])=\inf\left\{\mathcal{L}(\{\varphi^t\})\ |\ [\{\varphi^t\}]=[\{\phi^t\}]\right\},\]
and $\nu(\Id)=\widetilde{\nu}(\Id)=0$.\\
Because of these properties of the length functional, the associated application $\nu$ (resp.\ $\widetilde{\nu}$) is automatically a pseudo-norm whenever $\nu(\phi)<+\infty$ for all $\phi\in\Cont_0^c(M,\xi)$ (resp.\ whenever $\widetilde{\nu}([\{\phi^t\}])<+\infty$ for all $[\{\phi^t\}]\in\widetilde{\Cont_0^c}(M,\xi)$).\\
The discriminant norms $\nu_d$, $\widetilde{\nu_d}$ and the Shelukhin norms $\nu^\alpha_S$, $\widetilde{\nu_S^\alpha}$, defined on $\Cont_0^c(M,\xi)$ and $\widetilde{\Cont_0^c}(M,\xi)$, come respectively from the discriminant length functional $\mathcal{L}_d$ and the Shelukhin length functional $\mathcal{L}^\alpha_S$ that we will define in section \ref{section 2} below. For this type of norms we give the following definition of a geodesic.\\
\begin{definition}
A smooth compactly supported path of contactomorphisms $\{\phi^t\}$ starting at the identity is a geodesic for $\nu : \Cont_0^c(M,\xi)\to\mathbb{R}_{\geq 0}$ (resp.\ for $\widetilde{\nu} : \widetilde{\Cont_0^c}(M,\xi)\to\mathbb{R}_{\geq 0}$) if
$\mathcal{L}(\{\phi^t\})=\nu(\phi^1)$ (resp.\ $\mathcal{L}(\{\phi^t\})=\widetilde{\nu}([\{\phi^t\}])$).\\
\end{definition}
Following the terminology used by McDuff in \cite{geometricvariants}, the second type of norms $\nu$ on $\Cont_0^c(M,\xi)$ (resp.\ $\widetilde{\nu}$ on $\widetilde{\Cont_0^c}(M,\xi)$) we consider come from two seminorms $\widetilde{\nu_+}$ and $\widetilde{\nu_-}$ on $\widetilde{\Cont_0^c}(M,\xi)$, each one arising from some ``semilength'' functionals $\mathcal{L}_+, \mathcal{L}_-$. More precisely the functionals
\[\mathcal{L}_\pm : \mathcal{C}^\infty([0,1],\Cont_0^c(M,\xi))\to\mathbb{R}_{\geq 0}\cup\{+\infty\}\]
verify the following properties:
\begin{enumerate}
\item they are invariant by time reparametrisation, i.e.\ if $a :[0,1]\to[0,1]$ is a smooth bijective increasing function then $\mathcal{L}_\pm(\{\phi^t\})=\mathcal{L}_\pm(\{\phi^{a(t)}\})$
\item $\mathcal{L}_\pm(\{\phi^t\}\cdot\{\psi^t\})\leq\mathcal{L}_\pm(\{\phi^t\})+\mathcal{L}_\pm(\{\psi^t\})$
\item if $\{\phi^t\}$ is not the constant path then $\mathcal{L}_+(\{\phi^t\})>0$ or $\mathcal{L}_-(\{\phi^t\})>0$
\item $\mathcal{L}_+(\{\phi^t\})=\mathcal{L}_-(\{\phi^{1-t}\circ(\phi^1)^{-1}\})$ or $\mathcal{L}_+(\{\phi^t\})=\mathcal{L}_-(\{(\phi^t)^{-1}\})$.
\end{enumerate}
The seminorms $\widetilde{\nu_+}$ and $\widetilde{\nu_-}$ coming from these functionals are defined for any element $[\{\phi^t\}]\in\widetilde{\Cont_0^c}(M,\xi)$ by
\[\begin{aligned}
\widetilde{\nu_+}([\{\phi^t\}])&:=\inf\left\{\mathcal{L}_+\left(\{\varphi^t\}\right)\ |\ [\{\varphi^t\}]=[\{\phi^t\}]\right\}\\
\widetilde{\nu_-}([\{\phi^t\}])&:=-\inf\left\{\mathcal{L}_-\left(\{\varphi^t\}\right)\ |\ [\{\varphi^t\}]=[\{\phi^t\}]\right\}.
\end{aligned}\]
From properties 2, 3 and 4 above, one can deduce that for any $[\{\phi^t\}]\in\widetilde{\Cont_0^c}(M,\xi)$
\begin{enumerate}
\item $\widetilde{\nu_+}([\{\phi^t\}])=-\widetilde{\nu_-}([\{\phi^t\}]^{-1})$
\item $\widetilde{\nu_+}\geq 0\geq \widetilde{\nu_-}$
\item $\widetilde{\nu_+}([\{\phi^t\}][\{\psi^t\}])\leq\widetilde{\nu_+}([\{\phi^t\}])+\widetilde{\nu_+}([\{\psi^t\}])$\\
$\widetilde{\nu_-}([\{\phi^t\}][\{\psi^t\}])\geq\widetilde{\nu_-}([\{\phi^t\}])+\widetilde{\nu_-}([\{\psi^t\}])$.\\
\end{enumerate}
\noindent
We then define the (pseudo-)norm $\widetilde{\nu}$ for all elements $[\{\phi^t\}]$ in $\widetilde{\Cont_0^c}(M,\xi)$ and the (pseudo-)norm $\nu$ for all elements $\phi$ in $\Cont_0^c(M,\xi)$ by
\[\widetilde{\nu}\left([\{\phi^t\}]\right)=\max\left\{\widetilde{\nu_+}\left([\{\phi^t\}]\right),-\widetilde{\nu_-}\left([\{\phi^t\}]\right)\right\}\ \ \text{ and }\ \ \nu(\phi)=\inf\left\{\widetilde{\nu}\left([\{\phi^t\}]\right)\ |\ \phi^1=\phi\right\}.\]
\noindent
Noticing that the functional $\mathcal{L}:=\max\{\mathcal{L}_+,\mathcal{L}_-\}$ is a genuine length functional, and using again the terminology of McDuff \cite{geometricvariants}, we give the following definition of a geodesic for this second type of norm.\\
\begin{definition}
We say that a smooth path of compactly supported contactomorphisms $\{\phi^t\}$ starting at the identity is a geodesic
\begin{enumerate}
\item for $\widetilde{\nu}$ if $\widetilde{\nu}\left([\{\phi^t\}]\right)=\mathcal{L}\left(\{\phi^t\}\right)$
\item for $\nu$ if $\nu(\phi^1)=\mathcal{L}\left(\{\phi^t\}\right)$.\\
\end{enumerate}
\end{definition}
The oscillation norms $\nu_{osc}$, $\widetilde{\nu_{osc}}$ and the FPR norms $\nu^\alpha_{FPR}$, $\widetilde{\nu_{FPR}^\alpha}$ are defined via two seminorms. While the seminorms used to define the oscillation norm come from some semilength functionals, the seminorms of the FPR norms come from some functionals that are not invariant by time reparametrization. Therefore we are not going to talk about geodesics for the FPR norm.\\
\begin{remarque}
For all $[\{\phi^t\}]\in\widetilde{\Cont_0^c}(M,\xi)$ note that $\widetilde{\nu}\left([\{\phi^t\}]\right)\leq\inf\left\{\mathcal{L}\left(\{\varphi^t\}\right)\ |\ [\{\varphi^t\}]=[\{\phi^t\}]\right\}$. This inequality has no reason \textit{a priori} to be an equality.
\end{remarque}
\section{The different norms}\label{section 2}
In this section we recall the definitions of the discriminant norm \cite{discriminante}, the oscillation norm \cite{discriminante}, the Shelukhin norm \cite{shelukhin} and the FPR norm \cite{FPR}.
\subsection{The discriminant norm}\label{sous section discriminant}
Let $\phi$ be a contactomorphism of $(M,\xi=\Ker(\alpha))$. A point $x$ in $M$ is said to be a discriminant point of $\phi$ if $\phi(x)=x$ and $(\phi^*\alpha)_x= \alpha_x$. This definition does not depend on the choice of the contact form. Moreover a point $x\in M$ is a discriminant point of a contactomorphism $\phi\in\Cont(M,\xi)$ if and only if:
\begin{enumerate}
\item $x$ is a discriminant point of $\phi^{-1}$
\item $\varphi(x)$ is a discriminant point of $\varphi\circ\phi\circ\varphi^{-1}$ for all $\varphi\in\Cont(M,\xi)$.\\
\end{enumerate}
\begin{remarque}
Another way to define the notion of discriminant point is to consider the symplectization of the contact manifold $(M,\xi)$. Let us denote by $\mathcal{S}_\xi(M):=\underset{x\in M}\bigcup \left\{\alpha\in T_x^*M,\ \Ker(\alpha)=\xi\right\}\subset T^*M$ the symplectization of $(M,\xi)$. It is a $(\mathbb{R}^*,\cdot)$-principal fiber bundle on $M$; the action of $\theta\in\mathbb{R}^*$ on any element $(x,\mu)\in\mathcal{S}_\xi(M)$ is given by $\theta\cdot(x,\mu)=(x,\theta\mu)$. Recall that $(\mathcal{S}_\xi(M),d\lambda_\xi)$ is a symplectic manifold, where $\lambda_\xi$ is the restriction of the canonical Liouville form $\lambda_M$ of $T^*M$ to $\mathcal{S}_\xi(M)$, i.e.\ $\lambda_M(q,p)(u)=p(d_{(q,p)}\pi(u))$ for all $(q,p)\in T^*M$, $u\in T_{(q,p)}T^*M$ and where $\pi : T^*M\to M$ is the canonical projection. To any contactomorphism $\phi\in\Cont_0(M,\xi)$ one can associate its $\mathbb{R}^*$-lift: the symplectomorphism $\psi$ of $(\mathcal{S}_\xi(M),d\lambda_\xi)$ defined by
\[\psi(x,\mu)=(\phi(x),\mu\circ d_{\phi(x)}\phi^{-1})\ ,\ \forall (x,\mu)\in \mathcal{S}_\xi(M)\ .\]
So $x\in M$ is a discriminant point of a contactomorphism $\phi\in\Cont(M,\xi)$ if and only if any point $(x,\mu)\in \mathcal{S}_\xi(M)$ is a fixed point of its $\mathbb{R}^*$-equivariant lift $\psi$.\\
\end{remarque}
\noindent
For any contactomorphism $\phi\in\Cont_0^c(M,\xi)$ we denote by $DP(\phi)$ the set of all discriminant points of $\phi$. We say that a compactly supported smooth path of contactomorphisms $\{\phi^t\}_{t\in[a,b]}$ does not have discriminant points if
\begin{enumerate}
\item $DP((\phi^s)^{-1}\circ\phi^t)=\emptyset$ for all $s\ne t\in[a,b]$, in the case when $M$ is compact
\item $DP((\phi^s)^{-1}\circ\phi^t)\cap\Interior\left(\Supp(\{\phi^t\})\right)=\emptyset$ for all $s\ne t\in[a,b]$, in the case when $M$ is not compact.
\end{enumerate}
\noindent
The discriminant length of a smooth path of contactomorphisms will count in how many pieces we have to cut this path so that each piece does not have discriminant points. More precisely, the discriminant length of a non-constant smooth path of contactomorphisms $\{\phi^t\}$ of $(M,\xi)$ is defined by:
\[\begin{aligned}
\mathcal{L}_d(\{\phi^t\}):=\inf\{N\in\mathbb{N}^*\ |&\ \text{there exists } 0=t_0<...<t_N=1, \text{ such that for all } i\in [0,N-1]\cap\mathbb{N}\\
&\ \{\phi^t\}_{t\in[t_i,t_{i+1}]}\ \text{does not have discriminant points} \}\in\mathbb{N}_{>0}\cup\{+\infty\}.
\end{aligned}\]
By convention we set $\inf\emptyset=+\infty$.\\
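To illustrate this definition, assume for instance that $M$ is compact and let $\{\phi_R^t\}$ be the Reeb flow of $\alpha$. Since $(\phi_R^t)^*\alpha=\alpha$ and $(\phi_R^s)^{-1}\circ\phi_R^t=\phi_R^{t-s}$, the discriminant points of $(\phi_R^s)^{-1}\circ\phi_R^t$ are exactly its fixed points, i.e.\ the points lying on a closed Reeb orbit whose period divides $t-s$. In particular $\mathcal{L}_d\left(\{\phi_R^t\}_{t\in[0,T]}\right)=1$ as soon as $T>0$ is smaller than the minimal period of a closed Reeb orbit of $\alpha$.\\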
\noindent
Because of the two properties of discriminant points mentioned at the beginning of this subsection, it is straightforward that the functional $\mathcal{L}_d$ is a length functional which is invariant under conjugation by elements of $\Cont_0^c(M,\xi)$. The discriminant norms $\widetilde{\nu_d}$ on $\widetilde{\Cont_0^c}(M,\xi)$ and $\nu_d$ on $\Cont_0^c(M,\xi)$ are then defined as follows:
\[\begin{aligned}
\widetilde{\nu_d}\left([\{\phi^t\}]\right)&=\min\left\{\mathcal{L}_d(\{\varphi^t\}) \ |\ [\{\varphi^t\}]=[\{\phi^t\}]\right\},\ \forall [\{\phi^t\}]\in\widetilde{\Cont_0^c}(M,\xi)\setminus\{\Id\} \text{ and } \widetilde{\nu_d}(\Id)=0\\
\nu_d(\phi)&=\min\left\{\mathcal{L}_d(\{\phi^t\})\ |\ \phi^0=\Id \text{ and } \phi^1=\phi \right\}, \forall \phi\in\Cont_0^c(M,\xi)\setminus\{\Id\}\text{ and } \nu_d(\Id)=0.
\end{aligned}\]
\textit{A priori} the maps $\nu_d$ and $\widetilde{\nu_d}$ may take the value $+\infty$, but this is not the case. For a proof of this statement we refer the reader to \cite{discriminante}. So for any co-oriented contact manifold $(M,\xi)$ the maps $\nu_d$ and $\widetilde{\nu_d}$
are well defined conjugation invariant norms.\\
\noindent
Another way to define the norms $\widetilde{\nu_d}$ and $\nu_d$ is to see them as word norms where the generating sets are respectively
\[\begin{aligned}\widetilde{\mathcal{E}}&:=\left\{[\{\phi^t\}]\in\widetilde{\Cont_0^c}(M,\xi)\ |\ \{\phi^t\} \text{ does not have discriminant points}\right\} \text{ and }\\
\mathcal{E}&:=\Pi(\widetilde{\mathcal{E}})
\end{aligned}\]
where $\Pi :\widetilde{\Cont_0^c}(M,\xi)\to\Cont_0^c(M,\xi)$ is the natural projection.
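Let us briefly indicate why these definitions agree with the previous ones. If $\{\phi^t\}$ is a smooth path starting at the identity with $\mathcal{L}_d(\{\phi^t\})=N$ and $0=t_0<\dots<t_N=1$ is an associated subdivision, then
\[\phi^1=\prod\limits_{i=1}^N\psi_i\ ,\ \ \text{ where }\ \psi_i:=(\phi^{t_{i-1}})^{-1}\circ\phi^{t_i}\ ,\]
and each $\psi_i$ is the time-one map of the path $s\in[t_{i-1},t_i]\mapsto(\phi^{t_{i-1}})^{-1}\circ\phi^s$, which does not have discriminant points since $\left((\phi^{t_{i-1}})^{-1}\circ\phi^s\right)^{-1}\circ\left((\phi^{t_{i-1}})^{-1}\circ\phi^{s'}\right)=(\phi^s)^{-1}\circ\phi^{s'}$. Conversely, concatenating $N$ paths without discriminant points (after a reparametrization making the concatenation smooth) produces a path of discriminant length at most $N$.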
\subsection{The oscillation norm}
Following Eliashberg and Polterovich \cite{EP00}, we say that a smooth path $\{\phi^t\}_{t\in[a,b]}\subset\Cont^c_0(M,\xi)$ is positive (resp.\ negative) if \begin{enumerate}
\item $\alpha_{\phi^t(x)}\left(\frac{d}{dt}\phi^t(x)\right)> 0\ (\text{resp.\ } <0),\ \text{ for all } t\in[a,b],\text{ and for all } x\in M$, when $M$ is compact
\item $\alpha_{\phi^t(x)}\left(\frac{d}{dt}\phi^t(x)\right)> 0\ (\text{resp.\ } <0),\ \text{ for all } t\in[a,b],\text{ and for all } x\in \Interior\left(\Supp\left(\{\phi^t\}\right)\right)$, when $M$ is not compact.
\end{enumerate}
\noindent
Positivity or negativity of a path does not depend on the choice of the contact form but only on the choice of the co-orientation of $\xi$. One can also define the notion of non-negative (resp.\ non-positive) smooth path by replacing the above strict inequalities with non-strict ones. \\
If there exists a non-constant and non-negative contractible loop of compactly supported contactomorphisms then the contact manifold $(M,\xi)$ is called non universally orderable; otherwise we say that $(M,\xi)$ is universally orderable. When $(M,\xi)$ is universally orderable, Eliashberg and Polterovich in \cite{EP00} defined a bi-invariant partial order on $\widetilde{\Cont_0^c}(M,\xi)$. For more details we refer the reader to \cite{EP00}, \cite{chernovnemirovski2} and to Remark \ref{compatibilite avec la relation d'ordre} below.\\
\begin{remarque}
Proving that a contact manifold is orderable involves some ``hard'' symplectic/contact techniques. For instance in \cite{chernovnemirovski1}, \cite{chernovnemirovski2}, \cite{CFP}, \cite{EKP}, \cite{EP00}, pseudo-holomorphic curves or generating functions techniques are used to prove the universal orderability of the unit cotangent bundle and of the $1$-jet bundle of any compact manifold. We will see below (Remark \ref{remarque sur ordonnabilite}) how Bhupal \cite{bhupal} and Sandon \cite{sandonthese} show that $(\mathbb{R}^{2n+1},\xi_{st})$ and $(\mathbb{R}^{2n}\times S^1,\xi_{st})$ are universally orderable, using also generating functions techniques.
\end{remarque}
\begin{definition} Let $\{\phi^t\}$ be a non-constant smooth path in $\Cont_0^c(M,\Ker(\alpha))$. We define the semilength functionals used to construct the oscillation norm by
\[\begin{aligned}
\mathcal{L}_+^{osc}(\{\phi^t\}):=\inf\{ N\in\mathbb{N}^*\ |&\ \text{ there exist } K\in\mathbb{N}^*, K\geq N,\ \text{ and }\ 0=t_0<...<t_K=1 \text{ so that}\\
&\ \text{ Card}\{i\in[0,K-1]\cap\mathbb{N}\ |\ \{\phi^t\}_{t\in]t_i,t_{i+1}[}\ \text{ is positive}\}=N\\
&\ \text{ Card}\{i\in[0,K-1]\cap\mathbb{N}\ |\ \{\phi^t\}_{t\in]t_i,t_{i+1}[}\ \text{ is negative}\}=K-N\\
&\ \ \mathcal{L}_d\left(\{\phi^t\}_{t\in[t_i,t_{i+1}]}\right)=1,\ \text{ for all } i\in[0,K-1]\cap\mathbb{N}\}
\end{aligned}\]
\[\begin{aligned}
\mathcal{L}_-^{osc}(\{\phi^t\}):=\inf\{ N\in\mathbb{N}^*\ |&\ \text{ there exist } K\in\mathbb{N}^*,\ K\geq N,\ \text{ and }\ 0=t_0<...<t_K=1 \text{ so that}\\
&\ \text{ Card}\{i\in[0,K-1]\cap\mathbb{N}\ |\ \{\phi^t\}_{t\in]t_i,t_{i+1}[}\ \text{ is positive}\}=K-N\\
&\ \text{ Card}\{i\in[0,K-1]\cap\mathbb{N}\ |\ \{\phi^t\}_{t\in]t_i,t_{i+1}[}\ \text{ is negative}\}=N\\
&\ \ \mathcal{L}_d\left(\{\phi^t\}_{t\in[t_i,t_{i+1}]}\right)=1,\ \text{ for all } i\in[0,K-1]\cap\mathbb{N}\}.
\end{aligned}\]
By convention we set $\inf\emptyset=+\infty$. \\
\end{definition}
\noindent
The oscillation length of a smooth path $\{\phi^t\}$ is then by definition \[\mathcal{L}_{osc}(\{\phi^t\})=\max\{\mathcal{L}_+^{osc}(\{\phi^t\}),\mathcal{L}_-^{osc}(\{\phi^t\})\}\] and the corresponding seminorms are
\[\widetilde{\nu_\pm^{osc}}\left([\{\phi^t\}]\right):=\pm\inf\left\{\mathcal{L}_\pm^{osc}\left(\{\varphi^t\}\right)\ |\ [\{\varphi^t\}]=[\{\phi^t\}]\right\} \text{ for all } [\{\phi^t\}]\in\widetilde{\Cont_0^c}(M,\xi).\]
As discussed in the previous section, to these seminorms we associate the map $\widetilde{\nu_{\text{osc}}}:=\max\left\{\widetilde{\nu_+^{osc}},-\widetilde{\nu_-^{osc}}\right\}$ on $\widetilde{\Cont_0^c}(M,\xi)$ and the map $\nu_{\text{osc}}$ on $\Cont_0^c(M,\xi)$ defined by
\[\nu_{\text{osc}}(\phi)=\inf\left\{\widetilde{\nu_{\text{osc}}}\left([\{\phi^t\}]\right)\ |\ \phi^1=\phi\right\}\ \text{ for all }\phi\in\Cont_0^c(M,\xi).\]
Colin and Sandon in \cite{discriminante} showed that the maps $\nu_{\text{osc}}$ and $\widetilde{\nu_{\text{osc}}}$ are well defined conjugation invariant norms on $\Cont_0^c(M,\xi)$ and on $\widetilde{\Cont_0^c}(M,\xi)$ respectively if and only if $(M,\xi)$ is universally orderable.\\
\noindent
\begin{remarque}\label{compatibilite avec la relation d'ordre}
An interesting property of the norm $\widetilde{\nu_{osc}}$ is its compatibility with the bi-invariant partial order $\succeq$ defined on $\widetilde{\Cont_0^c}(M,\xi)$ by Eliashberg and Polterovich in \cite{EP00} when $(M,\xi)$ is a universally orderable contact manifold. More precisely for any $[\{\varphi^t\}]\succeq[\{\phi^t\}]\succeq \Id$ we have $\widetilde{\nu_{osc}}\left([\{\varphi^t\}]\right)\geq\widetilde{\nu_{osc}}\left([\{\phi^t\}]\right)$.
\end{remarque}
\subsection{The Shelukhin norm}
The Shelukhin length of a smooth path of compactly supported contactomorphisms $\{\phi^t\}_{t\in[0,1]}$ in $\Cont_0^c(M,\xi=\Ker(\alpha))$ is defined by
\[\mathcal{L}^\alpha_S\left(\{\phi^t\}\right)=\int_0^1\underset{x\in M}\max \left|\alpha_{\phi^t(x)}\left(\frac{d}{dt}\phi^t(x)\right)\right|dt.\]
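Note that if $\{\phi_h^t\}$ is the path generated, via the relations \eqref{contact vector field} of the introduction, by a compactly supported Hamiltonian function $h :[0,1]\times M\to\mathbb{R}$, then $\alpha_{\phi_h^t(x)}\left(\frac{d}{dt}\phi_h^t(x)\right)=h^t\left(\phi_h^t(x)\right)$, so that the Shelukhin length takes the form
\[\mathcal{L}^\alpha_S\left(\{\phi_h^t\}\right)=\int_0^1\underset{x\in M}\max\left|h^t(x)\right|dt\ ,\]
which is the expression that will be used in Section \ref{section proof}.\\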
\noindent
To this length functional we associate the maps
\[\begin{aligned}
\widetilde{\nu^\alpha_S}\left([\{\phi^t\}]\right)&=\inf\left\{\mathcal{L}^\alpha_S(\{\varphi^t\}) \ |\ [\{\varphi^t\}]=[\{\phi^t\}]\right\}\ \text{ and}\\
\nu^\alpha_S(\phi)&=\inf\left\{\mathcal{L}^\alpha_S(\{\phi^t\})\ |\ \phi^0=\Id \text{ and } \phi^1=\phi \right\}.
\end{aligned}\]
The map $\widetilde{\nu_S^\alpha}$ is a pseudo-norm, and Shelukhin proved in \cite{shelukhin} that the map $\nu_S^\alpha$ is a norm. Neither of them is conjugation invariant; indeed, for all $\varphi\in\Cont_0^c(M,\xi)$ and for all $[\{\phi^t\}]\in\widetilde{\Cont_0^c}(M,\xi)$ we have
\[\widetilde{\nu_S^\alpha}\left([\{\varphi\circ\phi^t\circ\varphi^{-1}\}]\right)=\widetilde{\nu_S^{\varphi^*\alpha}}\left([\{\phi^t\}]\right)\ \ \text{ and }\ \ \nu_S^\alpha\left(\varphi\circ\phi^1\circ\varphi^{-1}\right)=\nu_S^{\varphi^*\alpha}\left(\phi^1\right).\]
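These identities follow from a direct change of variables: since $\frac{d}{dt}\left(\varphi\circ\phi^t\circ\varphi^{-1}\right)(x)=d\varphi\left(\frac{d}{dt}\phi^t\left(\varphi^{-1}(x)\right)\right)$ and $\alpha_{\varphi(y)}\circ d_y\varphi=(\varphi^*\alpha)_y$, setting $y=\varphi^{-1}(x)$ gives
\[\mathcal{L}_S^\alpha\left(\{\varphi\circ\phi^t\circ\varphi^{-1}\}\right)=\int_0^1\underset{y\in M}\max\left|(\varphi^*\alpha)_{\phi^t(y)}\left(\frac{d}{dt}\phi^t(y)\right)\right|dt=\mathcal{L}_S^{\varphi^*\alpha}\left(\{\phi^t\}\right)\ ,\]
and the displayed identities are obtained by taking the corresponding infima.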
\vspace{2.ex}
\noindent
The fact that $\nu_S^\alpha$ is non-degenerate is non-trivial. It is proved by Shelukhin in \cite{shelukhin} using an energy-capacity inequality: if $\phi\in\Cont_0^c(M,\xi)$ is not the identity then it displaces a ball, and its norm is greater than the capacity of this ball. Since the same argument cannot be applied to loops of contactomorphisms based at the identity, there may exist $[\{\phi^t\}]\in \pi_1\left(\Cont_0^c(M,\xi)\right)\setminus\{\Id\}$ such that $\widetilde{\nu_S^\alpha}\left([\{\phi^t\}]\right)=0$. For more details we refer the reader to \cite{shelukhin}.
\subsection{The FPR norm}
Like the oscillation norm, the FPR norm \cite{FPR} comes from two seminorms $\widetilde{\nu^{\alpha}_\pm}$ and is well defined for a contact manifold $(M,\xi=\Ker(\alpha))$ that is universally orderable. The seminorms are defined for any $[\{\phi^t\}]\in\widetilde{\Cont_0^c}(M,\xi)$ by
\[\begin{aligned}
\widetilde{\nu^\alpha_+}([\{\phi^t\}])&=\min\left\{\left\lceil \underset{t,x}\max\ \alpha_{\varphi^{t}(x)}\left(\frac{d}{dt}\varphi^{t}(x)\right)\right\rceil\ |\ [\{\varphi^t\}]=[\{\phi^t\}]\in\widetilde{\Cont_0^c}(M,\xi)\right\}\text{ and }\\
\widetilde{\nu^\alpha_-}([\{\phi^t\}])&=-\min\left\{\left\lceil -\underset{t,x}\min\ \alpha_{\varphi^{t}(x)}\left(\frac{d}{dt}\varphi^{t}(x)\right)\right\rceil\ |\ [\{\varphi^t\}]=[\{\phi^t\}]\in\widetilde{\Cont_0^c}(M,\xi)\right\} .\end{aligned}\]
The maps $\widetilde{\nu^\alpha_{\text{FPR}}}=\max\left\{\widetilde{\nu_+^\alpha},-\widetilde{\nu_-^\alpha}\right\}$ and $\nu^\alpha_{\text{FPR}} : \phi\mapsto\inf\left\{\widetilde{\nu^\alpha_{\text{FPR}}}\left([\{\phi^t\}]\right)\ |\ \phi^1=\phi\right\}$ are well defined norms on $\widetilde{\Cont_0^c}(M,\xi)$ and on $\Cont_0^c(M,\xi)$ respectively.\\
Moreover Fraser, Polterovich and Rosen showed in \cite{FPR} that if the Reeb flow associated to the contact form $\alpha$ is periodic then $\widetilde{\nu^\alpha_{FPR}}$ and $\nu^\alpha_{FPR}$ are conjugation invariant norms. This comes from the fact that the fundamental group of $\Cont_0^c(M,\xi)$ is included in the center of $\widetilde{\Cont}_0^c(M,\xi)$ (they are actually equal).\\
\begin{remarque}
The norm $\widetilde{\nu_{FPR}^\alpha}$ on $\widetilde{\Cont_0^c}(M,\xi)$ is also compatible with the partial order: $\widetilde{\nu_{FPR}^\alpha}([\{\phi^t\}])\geq \widetilde{\nu_{FPR}^\alpha}([\{\varphi^t\}])$ for any $[\{\phi^t\}]\succeq[\{\varphi^t\}]\succeq\Id$. \\
\end{remarque}
\noindent
Because the functionals
\[\begin{aligned}
\mathcal{C}^\infty([0,1],\Cont_0^c(M,\xi))&\to\mathbb{R}\\
\{\phi^t\}&\mapsto \left\lceil \underset{t,x}\max\ \alpha_{\phi^{t}(x)}\left(\frac{d}{dt}\phi^{t}(x)\right)\right\rceil
\end{aligned}\]
\[\begin{aligned}
\mathcal{C}^\infty([0,1],\Cont_0^c(M,\xi))&\to\mathbb{R}\\
\{\phi^t\}&\mapsto \left\lceil -\underset{t,x}\min\ \alpha_{\phi^{t}(x)}\left(\frac{d}{dt}\phi^{t}(x)\right)\right\rceil
\end{aligned}\]
are not invariant under time reparametrization we cannot talk about semilength functionals, and so we are not going to talk about the geodesics of the FPR norm.
\section{Generating functions and translation selector}\label{section 3}
An important ingredient to prove Theorems \ref{geodesique shelukhin/fpr} and \ref{geodesique discriminante/oscillation} is to be able to detect translated points of contactomorphisms and their translations. To do so and to avoid confusion we will sometimes see contactomorphisms of $(\mathbb{R}^{2n}\times S^1,\xi_{st})$ as $1$-periodic contactomorphisms of $(\mathbb{R}^{2n+1},\xi_{st})$. More precisely, let $\Cont_0^c(\mathbb{R}^{2n+1},\xi_{st})^{1-per}$ be the group of contactomorphisms $\phi$ of $(\mathbb{R}^{2n+1},\xi_{st})$ that can be joined to the identity by a smooth path $\{\phi^t\}$ such that
\begin{enumerate}
\item $\Supp(\{\phi^t\})$ is contained in $K\times\mathbb{R}$, where $K$ is a compact subset of $\mathbb{R}^{2n}$
\item $\phi^t(x,y,z+k)=\phi^t(x,y,z)+(0,0,k)$ for any $t\in[0,1]$ and for any $k\in\mathbb{Z}$.
\end{enumerate}
The natural projection $\Cont_0^c(\mathbb{R}^{2n+1},\xi_{st})^{1-per}\to\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ is a group isomorphism whose inverse is given by lifting. \\
\begin{definition}[\cite{sandonthese}]
Let $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ be a contactomorphism and denote by $\widetilde{\phi}\in \Cont_0^c(\mathbb{R}^{2n+1},\xi_{st})^{1-per}$ its lift. A point $(p,[z])\in\mathbb{R}^{2n}\times S^1$ is a $t$-translated point for $\phi$ if $\widetilde{\phi}(p,z)=(p,z+t) \text{ and } g(p,[z])=0$
where $(p,z)\in\mathbb{R}^{2n+1}$ is any point that projects on $(p,[z])$ and $g$ denotes the conformal factor of $\phi$ with respect to $\alpha_{st}=dz-\sum\limits_{i=1}^ny_idx_i$, i.e.\ $\phi^*\alpha_{st}=e^{g}\alpha_{st}$. The spectrum of $\phi$ is then defined by
\[\text{Spectrum}(\phi):=\left\{ t\in\mathbb{R}\ |\ \phi \text{ has a } t\text{-translated point}\right\}.\]
\end{definition}
\vspace{2.ex}
\begin{remarque}\label{remarque2}
\begin{enumerate}
\item The definition of translated points depends on the choice of a contact form.
\item Let $(p,[z])\in\mathbb{R}^{2n}\times S^1$ be a $k$-translated point of $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ with $k\in\mathbb{Z}$. Then $(p,[z])$ is a discriminant point of $\phi$. In particular, $\mathbb{Z}$-translated points do not depend on the choice of the contact form. In the same spirit we deduce that for all $\phi,\psi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ \[\text{Spectrum}(\phi)\cap\mathbb{Z}=\text{Spectrum}(\psi\circ\phi\circ\psi^{-1})\cap\mathbb{Z}.\]
\end{enumerate}
\end{remarque}
\vspace{2.ex}
\noindent
Sandon in \cite{sandonthese} constructed a function
\[c :\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})\to\mathbb{R}_{\geq 0}\ \ \ \ \ \ \phi\mapsto c(\phi)\in\text{Spectrum}(\phi)\ \]
that satisfies algebraic and topological properties that we list at the end of this section in Theorem \ref{translation selector} and that will be intensively used in the proofs of Theorem \ref{geodesique shelukhin/fpr} and Theorem \ref{geodesique discriminante/oscillation}. We call this function a translation selector. The rest of this section will be devoted to giving the main steps of its construction that we will need in the last section for the proofs of Theorem \ref{geodesique shelukhin/fpr}, Theorem \ref{geodesique discriminante/oscillation} and Proposition \ref{inégalité géodésique shelukhin}. For this purpose we will follow mainly \cite{sandonthese}.
\subsection{The graph of a contactomorphism as a compact Legendrian of the 1-jet bundle}\label{section graphe}
The aim of this paragraph is to associate to any contactomorphism $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ a Legendrian $\Lambda_\phi$ of the $1$-jet bundle of $S^{2n}\times S^1$ in such a way that there is a one-to-one correspondence between translated points of $\phi$ and intersections of $\Lambda_\phi$ with the $0$-wall of the $1$-jet bundle. We give a rather explicit and detailed description of this construction since for proving the estimate of Proposition \ref{inégalité géodésique shelukhin} below it will be important to make sure that all the maps involved in the process of this construction preserve not only the contact structures but also the contact forms.\\
First recall that for a smooth manifold $X$ the $1$-jet bundle of $X$ is the manifold $J^1X:=T^*X\times\mathbb{R}$, where $T^*X$ is the cotangent bundle of $X$. This space carries a canonical contact structure $\xi^X$ given by the kernel of the $1$-form $\alpha_X:=dz-\lambda_X$ where $z$ is the coordinate function on $\mathbb{R}$ and $\lambda_X$ is the Liouville form of the cotangent bundle $T^*X$. We denote by $\mathbb{O}_X:=\{(q,0)\in T^*X\ |\ q\in X\}$ the $0$-section of the cotangent bundle $T^*X$ and refer to $\mathbb{O}_X\times\mathbb{R}$ as the $0$-wall of $J^1X$.\\
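Note that in local coordinates $(q,p,z)\in T^*X\times\mathbb{R}$ one has $\alpha_X=dz-\sum\limits_i p_idq_i$; in particular $\left(J^1\mathbb{R}^n,\Ker(\alpha_{\mathbb{R}^n})\right)$ is nothing but $(\mathbb{R}^{2n+1},\xi_{st})$ with its standard contact form, up to relabelling the coordinates.\\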
\noindent
Let $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ be a compactly supported contactomorphism and consider its $1$-periodic correspondent $\widetilde{\phi}\in\Cont_0^c(\mathbb{R}^{2n+1},\xi_{st})^{1-per}$. We write $g$ and $\widetilde{g}$ for their conformal factors with respect to $\alpha_{st}$, i.e.\ $\phi^*\alpha_{st}=e^g\alpha_{st}$ and $\widetilde{\phi}^*\alpha_{st}=e^{\widetilde{g}}\alpha_{st}$. Then the image of the map
\[\text{gr}_{\alpha_{st}}(\widetilde{\phi}) : \mathbb{R}^{2n+1}\to\mathbb{R}^{2n+1}\times\mathbb{R}^{2n+1}\times\mathbb{R}
\ \ \ \ \ \ \ \ p\mapsto (p,\widetilde{\phi}(p),\widetilde{g}(p))\]
is a Legendrian of $\mathbb{R}^{2n+1}\times\mathbb{R}^{2n+1}\times\mathbb{R}$
endowed with the contact $1$-form $\beta:=\alpha_{st}^2-e^\theta\alpha_{st}^1$, where $\theta$ is the coordinate function on $\mathbb{R}$ and where $\alpha_{st}^i$ for $i\in\{1,2\}$ is the pull back of $\alpha_{st}$ by the projection $pr_i(p_1,p_2,\theta)=p_i$.\\
\noindent
The map
\[\begin{aligned}
\Theta : \mathbb{R}^{2n+1}\times\mathbb{R}^{2n+1}\times\mathbb{R}&\hookrightarrow J^1(\mathbb{R}^{2n+1})\\
(x,y,z,X,Y,Z,\theta)&\mapsto (x,Y,z,Y-e^\theta y,x-X,e^\theta-1,xY-XY+Z-z)
\end{aligned}\]
is an exact contact embedding of $((\mathbb{R}^{2n+1})^2\times\mathbb{R},\Ker(\beta))$ in $(J^1(\mathbb{R}^{2n+1}),\Ker(\alpha_{\mathbb{R}^{2n+1}}))$, i.e.\ $\Theta^*(dz-\lambda_{\mathbb{R}^{2n+1}})=\beta$. Set $\Lambda_{\widetilde{\phi}}^{\mathbb{R}^{2n+1}}:=\Theta\circ\text{gr}_{\alpha_{st}}(\widetilde{\phi})$. Then the following lemma is a direct consequence of the definition of $\Theta$.\\
\begin{lemme}\label{lemme1}
\begin{enumerate}
\item $\Lambda^{\mathbb{R}^{2n+1}}_{\Id}$ is the $0$-section of $J^1(\mathbb{R}^{2n+1})$.
\item If $\{\phi^t\}\subset\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ is a smooth path of contactomorphisms then $\left\{\text{Image}\left(\Lambda_{\widetilde{\phi^t}}^{\mathbb{R}^{2n+1}}\right)\right\}$ is a smooth path of Legendrians.
\item For any $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$, there is a one-to-one correspondence between the set of $t$-translated points of $\phi$ and $\Lambda_{\widetilde{\phi}}^{\mathbb{R}^{2n+1}}\left(\mathbb{R}^{2n+1}\right)\bigcap \mathbb{O}_{\mathbb{R}^{2n+1}}\times\{t\}$.\\
\end{enumerate}
\end{lemme}
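For instance the first point can be checked by a direct computation: for $\widetilde{\phi}=\Id$ one has $\widetilde{g}=0$, hence $\theta=0$, and plugging $(X,Y,Z)=(x,y,z)$ into the definition of $\Theta$ gives
\[\Theta(x,y,z,x,y,z,0)=\left(x,y,z,\ y-y,\ x-x,\ e^0-1,\ xy-xy+z-z\right)=\left((x,y,z),0,0\right)\ ,\]
which is indeed a point of $\mathbb{O}_{\mathbb{R}^{2n+1}}\times\{0\}$.\\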
\noindent
The aim now is to replace the Euclidean space $\mathbb{R}^{2n+1}$ of the previous lemma with a compact manifold without losing the three properties above. This will be done in two steps. First, using the periodicity, we will replace $\mathbb{R}^{2n+1}$ by $\mathbb{R}^{2n}\times S^1$. Then we will compactify the $\mathbb{R}^{2n}$ coordinates into the standard Euclidean sphere $S^{2n}\subset \mathbb{R}^{2n+1}$. \\
\noindent
Because $\phi$ is $1$-periodic in the
$z$-direction, the map
\[\Lambda_\phi^{\mathbb{R}^{2n}\times S^1} : \mathbb{R}^{2n}\times S^1\to J^1(\mathbb{R}^{2n}\times S^1)\ \ \ \ \ \ (p,[z])\mapsto \text{pr}\left(\Lambda_{\widetilde{\phi}}^{\mathbb{R}^{2n+1}}(p,z)\right) ,\]
where $z\in\mathbb{R}$ is any representative of $[z]\in S^1=\mathbb{R}/\mathbb{Z}$ and $\text{pr} :J^1(\mathbb{R}^{2n+1})\to J^1(\mathbb{R}^{2n}\times S^1)$ is the natural projection, is well defined. Note that the projection $\text{pr}$ is a covering map that preserves the contact forms, i.e.\ $\text{pr}^*(\alpha_{\mathbb{R}^{2n}\times S^1})=\alpha_{\mathbb{R}^{2n+1}}$, and that the map $\Lambda_\phi^{\mathbb{R}^{2n}\times S^1}$ enjoys again the three properties of Lemma \ref{lemme1}. \\
\noindent
Finally, fixing a point $p_0\in S^{2n}$, the stereographic projection $\psi : S^{2n}\setminus\{p_0\}\to\mathbb{R}^{2n}$ gives a diffeomorphism $\overline{\psi}:=\psi\times\Id : (S^{2n}\setminus\{p_0\})\times S^1\to\mathbb{R}^{2n}\times S^1$ that lifts to the strict contactomorphism
\[\begin{aligned}
\Psi :J^1(S^{2n}\setminus\{p_0\}\times S^1)&\to J^1(\mathbb{R}^{2n}\times S^1)\\
(x,\mu,z)&\mapsto (\overline{\psi}(x),\mu\circ d_{\overline{\psi}(x)}\overline{\psi}^{-1},z).
\end{aligned}\]
Since $\phi$ is compactly supported in the $\mathbb{R}^{2n}$-direction, the map
\[\Lambda_\phi : S^{2n}\times S^1\to J^1(S^{2n}\times S^1)\ \ \ \ \ (p,z)\mapsto \left\{
\begin{array}{ll}
\Psi^{-1}\left(\Lambda_\phi^{\mathbb{R}^{2n}\times S^1}(\overline{\psi}(p,z))\right) & \mbox{if } p\ne p_0 \\
((p,z),0,0) & \mbox{ if } p=p_0
\end{array}
\right.\]
is a smooth Legendrian embedding that still enjoys the three properties of Lemma \ref{lemme1}. Moreover now this Legendrian is compact.\\
In the case when $\phi$ is $\mathcal{C}^1$-small, $\Lambda_\phi$ is a Legendrian section of $J^1(S^{2n}\times S^1)$ and so it is given by the $1$-jet of a function $f:S^{2n}\times S^1\to\mathbb{R}$, i.e.\ $\Lambda_\phi(x)=(x,d_xf,f(x))$ for all $x\in S^{2n}\times S^1$. In particular there is a one-to-one correspondence between critical points of $f$ with critical value $t$ and $t$-translated points of $\phi$. So when $\phi\in\Cont_0^c(\mathbb{R}^{2n+1},\xi_{st})^{1-per}$ is $\mathcal{C}^1$-small, looking for translated points of $\phi$ is equivalent to looking for critical points of $f$. For the latter problem, Morse theory can be applied to ensure the existence of critical points. \\
For a general contactomorphism $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ the map $\Lambda_\phi$ need not be a section anymore; however, it is smoothly isotopic to the $0$-section of $J^1(S^{2n}\times S^1)$ through a path of Legendrians thanks to Lemma \ref{lemme1}. We will describe in the next paragraph how one can associate to such a Legendrian a function $f: S^{2n}\times S^1\times\mathbb{R}^N\to\mathbb{R}$, for some $N\in\mathbb{N}$, such that we have again a one-to-one correspondence between critical points of $f$ with critical value $t$ and $t$-translated points of $\phi$. Moreover a control of the behaviour of $f$ at infinity will again allow us to ensure the existence of critical points of such a function $f$ and so the existence of translated points of $\phi$.
\subsection{Generating functions}\label{generating function}
Let $X$ be a smooth manifold. For any integer $N\in\mathbb{N}$, a function
\[f: X\times\mathbb{R}^N\to\mathbb{R}\ \ \ \ \ \ (x,v)\mapsto f(x,v)\]
is said to be a generating function if $0$ is a regular value of
\[\frac{\partial f}{\partial v} : X\times\mathbb{R}^N\to(\mathbb{R}^N)^*\]
where $(\mathbb{R}^N)^*$ is the set of linear forms on $\mathbb{R}^N$, and where $\frac{\partial f}{\partial v}$ is the derivative of $f$ in the $\mathbb{R}^N$ direction. It follows from the definition that $\Sigma_f:=\left(\frac{\partial f}{\partial v}\right)^{-1}\{0\}$ is a smooth submanifold of $X\times\mathbb{R}^N$ of the same dimension as $X$ whenever $f : X\times\mathbb{R}^N\to\mathbb{R}$ is a generating function. In this case, denoting by $\frac{\partial f}{\partial x}$ the derivative of $f$ in the $X$ direction, the map \[\begin{aligned}
j^1_f :\Sigma_f&\to J^1X\\
(x,v)&\mapsto \left(x,\frac{\partial f}{\partial x}(x,v),f(x,v)\right)
\end{aligned}\]
is a Legendrian immersion. We say that the immersed Legendrian $\Lambda_f:=j^1_f(\Sigma_f)$ is generated by $f$. For all $a\in\mathbb{R}$ there is a one-to-one correspondence between critical points of $f$ with critical value $a$ and intersections of the immersed Legendrian $\Lambda_f$ with $\mathbb{O}_X\times\{a\}$.\\
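The most basic example is $N=0$: any smooth function $f : X\to\mathbb{R}$ is then a generating function (the condition on $\frac{\partial f}{\partial v}$ is empty), $\Sigma_f=X$, and $\Lambda_f=j^1_f(X)=\left\{(x,d_xf,f(x))\ |\ x\in X\right\}$ is the $1$-jet of $f$; this is exactly the situation encountered for $\mathcal{C}^1$-small contactomorphisms in Subsection \ref{section graphe}.\\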
\noindent
When $X$ is a compact manifold, a sufficient condition to guarantee the existence of critical points of a smooth function $f :X\times\mathbb{R}^N\to\mathbb{R}$ is to ask that the function $f$ is quadratic at infinity, i.e.\ there exists a non-degenerate quadratic form $Q :\mathbb{R}^N\to\mathbb{R}$ and a compactly supported function $g:X\times\mathbb{R}^N\to\mathbb{R}$ such that $f(x,v)=g(x,v)+Q(v), \text{ for all } (x,v)\in X\times\mathbb{R}^N$. The fundamental result about existence of generating functions is the following.\\
\begin{theoreme}[Chaperon \cite{Chaperon}, Chekanov \cite{chekanov}]\label{existence}
Let $X$ be a compact manifold and $\{\Lambda^t\}_{t\in[0,1]}$ a smooth path of Legendrians in $(J^1X,\xi^X)$ such that $\Lambda^0=\mathbb{O}_X\times\{0\}$. Then there exists an integer $N\in\mathbb{N}$ and a continuous path $\left\{f^t :X\times\mathbb{R}^N\to\mathbb{R}\right\}$ of generating functions quadratic at infinity such that $f^t$ generates the Legendrian $\Lambda^t$ for all $t\in[0,1]$. \\
\end{theoreme}
\subsection{Extracting critical values of generating functions}\label{minimax}
Classical minimax methods can be applied to extract a critical value of a function $f:X\times\mathbb{R}^N\to\mathbb{R}$ that is quadratic at infinity. Indeed, for any $a\in\mathbb{R}$ we write $X^a:=\{f\leq a\}$ for the sublevel set of $f$ at level $a$. Since $f$ is quadratic at infinity, there exists $a_0\in\mathbb{R}$ such that for all $a\leq a_0$ the homotopy types of the sublevel sets $X^a$ and $X^{a_0}$ coincide, and we write $X^{-\infty}$ to refer to such sublevel sets. In addition there exists a splitting $\mathbb{R}^N=\mathbb{R}^{N_+}\times\mathbb{R}^{N_-}$ for which the non-degenerate quadratic form $Q$ is negative definite on $\mathbb{R}^{N_-}$, and so the Thom isomorphism guarantees the existence of an isomorphism
\[ T : H^*(X)\to H^{*+{N_-}}(X\times\mathbb{R}^N,X^{-\infty}),\]
where $H^*(X)$ is the cohomology of $X$ with coefficients in $\mathbb{Z}_2$ and $H^*(X\times\mathbb{R}^N,X^{-\infty})$ is the relative cohomology of $(X\times\mathbb{R}^N,X^{-\infty})$ with coefficients in $\mathbb{Z}_2$. For any $u\in H^*(X)\setminus\{0\}$ the number
\[c(f,u):=\inf\left\{a\in\mathbb{R}\ |\ i_a^*(T(u))\ne 0\right\},\]
where $i_a : (X^a,X^{-\infty})\hookrightarrow (X\times\mathbb{R}^N,X^{-\infty})$ is the inclusion, is a critical value of $f$.\\
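For instance, when $N=0$ (so that $N_-=0$ and $T$ is the identity) and $u_0\in H^n(X)$ is the top class of a closed connected manifold $X$ of dimension $n$, one has $i_a^*(u_0)\ne 0$ if and only if $X^a=X$: indeed a proper sublevel set is contained in $X$ minus a point, on which the top class restricts to zero. Hence $c(f,u_0)=\inf\{a\in\mathbb{R}\ |\ X^a=X\}=\max f$; this is the computation invoked in the proof of Proposition \ref{calcul explicite} below.\\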
\begin{remarque}\label{remarque minimax}
Zapolsky in \cite{Zapolsky} uses homology instead of cohomology to extract the wanted critical values. Let $f:X\times\mathbb{R}^N\to\mathbb{R}$ be a generating function quadratic at infinity and $\alpha\in H_*(X)\setminus\{0\}$ a non-zero homology class. Then
\[\mathcal{C}(f,\alpha):=\inf\{a\in\mathbb{R}\ |\ \widetilde{T}\alpha\in (i_a)_*(H_{*+{N_-}}(X^a,X^{-\infty}))\}, \]
where $\widetilde{T} : H_*(X)\to H_{*+N_-}(X\times\mathbb{R}^N,X^{-\infty})$ comes from the Thom isomorphism in homology, is a critical value of $f$. If $\alpha_0$ is a generator of $H_n(X)$, and $\mu_0$ a generator of $H^n(X)$, where $n\in\mathbb{N}$ is the dimension of $X$, then $\mathcal{C}(f,\alpha_0)=c(f,\mu_0)$ (see for instance \cite{Viterbo}).\\
\end{remarque}
\noindent
Due to the uniqueness property of generating functions quadratic at infinity proved by Viterbo \cite{Viterbo} and Théret \cite{theret}, one deduces that if $f_1$ and $f_2$ are two generating functions quadratic at infinity that generate the same Legendrian $\Lambda\subset J^1X$ that is Legendrian isotopic to $\mathbb{O}_X\times\{0\}$, then $c(f_1,u)=c(f_2,u)$ for any $u\in H^*(X)\setminus\{0\}$. Thanks to this and to Theorem \ref{existence}, for any Legendrian $\Lambda$ that is Legendrian isotopic to $\mathbb{O}_X\times\{0\}$ and for any $u\in H^*(X)\setminus\{0\}$ the number $c(\Lambda,u):=c(f,u)$ is well defined, i.e.\ it does not depend on the generating function quadratic at infinity $f$ that generates $\Lambda$. Combining this with the previous discussion in Subsection \ref{generating function} we deduce that
\begin{equation}\label{eq1}
\Lambda\cap \mathbb{O}_X\times\{c(\Lambda,u)\}\ne\emptyset.
\end{equation}
\noindent
The translation selector constructed by Sandon is then given by
\[c :\text{Cont}_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})\to\mathbb{R}\ \ \ \ \ \ \phi\mapsto c(\Lambda_\phi,u_0),\]
where $u_0$ is a generator of $H^{2n+1}(S^{2n}\times S^1)$ and $\Lambda_\phi\subset J^1(S^{2n}\times S^1)$ is the Legendrian associated to $\phi$ constructed in Subsection \ref{section graphe}. By \eqref{eq1} and Lemma \ref{lemme1} we come to the conclusion that $c(\phi)\in\text{Spectrum}(\phi)$. Furthermore Sandon in \cite{sandonthese} proved the following result.
\begin{theoreme}[\cite{sandonthese}]\label{translation selector}
The translation selector $c$ satisfies the following properties:
\begin{enumerate}
\item $c(\phi)\geq 0$ for any $\phi$;
\item if $\{\phi^t\}$ is a smooth path of compactly supported contactomorphisms starting at the identity, then $t\mapsto c(\phi^t)$ is continuous;
\item $\left\lceil c(\varphi)\right\rceil \geq\left\lceil c(\phi\varphi)-c(\phi)\right\rceil$, in particular $\left\lceil c(\phi\varphi)\right\rceil\leq\left\lceil c(\phi)\right\rceil+\left\lceil c(\varphi)\right\rceil$ for any $\phi,\varphi$;
\item if $\{\phi^t\}$ and $\{\varphi^t\}$ are smooth paths of compactly supported contactomorphisms starting at the identity such that $\alpha_{st}\left(\frac{d}{dt}\phi^t(x)\right)\leq\alpha_{st}\left(\frac{d}{dt}\varphi^t(x)\right)$ for all $t\in[0,1]$ and $x\in\mathbb{R}^{2n}\times S^1$ then $c(\phi^1)\leq c(\varphi^1)$.\\
\end{enumerate}
\end{theoreme}
\begin{remarque}\label{remarque sur ordonnabilite}
In \cite{sandonthese} Sandon shows that the translation selector satisfies also the following properties
\begin{enumerate}
\item $c(\phi)=c(\phi^{-1})=0$ if and only if $\phi=\Id$.
\item $\left\lceil c(\varphi\phi\varphi^{-1})\right\rceil=\left\lceil c(\phi)\right\rceil$.
\end{enumerate}
These extra properties of the translation selector will not be used for the proof of Theorem \ref{geodesique shelukhin/fpr} and Theorem \ref{geodesique discriminante/oscillation}. However let us notice that they are used in \cite{sandonthese}, \cite{sandonmetrique} to deduce (universal) orderability of $(\mathbb{R}^{2n}\times S^1,\xi_{st})$ (similarly to the case of $(\mathbb{R}^{2n+1},\xi_{st})$ done by Bhupal \cite{bhupal}) and to show that the map
\[ \nu_c :\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})\to\mathbb{N}\ \ \ \ \phi\mapsto \left\lceil c(\phi)\right\rceil +\left\lceil c(\phi^{-1})\right\rceil \]
is a conjugation invariant norm.
\end{remarque}
\section{Proof of the results}\label{section proof}
The idea of the proofs of Theorem \ref{geodesique shelukhin/fpr} and Theorem \ref{geodesique discriminante/oscillation} is the following. First we will compute the selected translation of contactomorphisms that are generated by Hamiltonian functions satisfying the hypotheses of Theorems \ref{geodesique shelukhin/fpr} and \ref{geodesique discriminante/oscillation}. Then we will show that all the norms we are working with are bounded from below by the translation selector. Finally a direct computation will show that the lengths of the paths we are considering are equal to the selected translation of the time-one maps of these paths. We thus deduce that these paths are length minimizing, and so are geodesics.\\
\subsection{Computation of the selected translation}
To any compactly supported time-dependent function $H:[0,1]\times\mathbb{R}^{2n}\to\mathbb{R}, (t,p)\mapsto H^t(p)$, one can associate the smooth path of vector fields $ X^{{\omega_{st}}}_H : [0,1]\to\chi(\mathbb{R}^{2n})$ defined by $\iota_{X^{{\omega_{st}}}_H(t)}{\omega_{st}}=-dH^t$, where ${\omega_{st}}=\sum\limits_{i=1}^ndx_i\wedge dy_i$. We denote by $\psi_H^t$ its time-$t$ flow. Note that $\psi_H^t$ is a Hamiltonian symplectomorphism for any $t$. Recall that from the formula \eqref{contact vector field} of the introduction one can similarly associate to a smooth compactly supported function $h:[0,1]\times\mathbb{R}^{2n}\times S^1\to\mathbb{R}$ a path of contactomorphisms $\phi_h$.\\
In the next lemma, for all $z\in\mathbb{R}$ we will write $[z]$ for the corresponding element in $\mathbb{R}/\mathbb{Z}=S^1$ and denote by $\lambda_{st}$ the $1$-form $\sum\limits_{i=1}^ny_idx_i$ on $\mathbb{R}^{2n}$. \\
\begin{lemme}\label{relevé de contact}
Let $\{\phi_h^t\}$ be a smooth path of contactomorphisms generated by the Hamiltonian function
\[\begin{aligned}
h:\mathbb{R}^{2n}\times S^1&\to\mathbb{R}\\
(p,z) &\mapsto H(p)
\end{aligned}\]
where $H : \mathbb{R}^{2n}\to\mathbb{R}$ is a smooth compactly supported function. Then for all $(p,z)\in\mathbb{R}^{2n}\times \mathbb{R}$ and for all $t\in\mathbb{R}$
\[\phi_h^t(p,[z])=\left(\psi_H^t(p),[z+F^t(p)]\right)\ \ \text{ and }\ \ (\phi_h^t)^*\alpha_{st}=\alpha_{st}\]
where
\begin{equation}\label{2} F^t(p)=\int_0^t\lambda_{st}\left(\psi_H^s(p)\right)\left(X^{\omega_{st}}_H\left(\psi_H^s(p)\right)\right)ds+tH(p).
\end{equation}
\end{lemme}
\vspace{2.ex}
\begin{proof}
A direct computation shows that the contact vector field $X_h$ generated by the function $h:\mathbb{R}^{2n}\times S^1\to\mathbb{R}$ (see the relations \eqref{contact vector field} in the introduction) in this case is given by
\[X_h(p,[z])=X^{\omega_{st}}_H(p)+\left(H(p)+\lambda_{st}(p)\left(X^{\omega_{st}}_H(p)\right)\right)\frac{\partial}{\partial z}\ ,\ \text{ for all } (p,[z])\in\mathbb{R}^{2n}\times S^1.\]
Moreover, because the Hamiltonian function $H$ is time-independent, $H$ is constant along its flow, i.e.\ $H(\psi_H^t(p))=H(p)$ for all $p\in\mathbb{R}^{2n}$ and for all $t\in\mathbb{R}$. So we deduce the formula \eqref{2}. To prove that $\phi_h^t$ is an exact contactomorphism for all $t$, i.e.\ that it preserves not only the contact distribution $\xi_{st}$ but also the contact form $\alpha_{st}=dz-\lambda_{st}$, we use the Cartan formula:
\[\left.\frac{d}{dt}\right\lvert_{t=s}(\phi_h^t)^*\alpha_{st}=(\phi_h^s)^*\left(\iota_{X_h}d\alpha_{st}+d\iota_{X_h}\alpha_{st}\right)=(\phi_h^s)^*\left(-dH+dH\right)\equiv 0\ ,\ \forall s\in\mathbb{R}\ .\]
Since $\phi_h^0=\Id$, we deduce that $\left(\phi_h^s\right)^*\alpha_{st}=\left(\phi_h^0\right)^*\alpha_{st}=\alpha_{st}$ for all $s\in\mathbb{R}$.\end{proof}
\vspace{2.ex}
\noindent
The hypothesis that we make about the smallness of the Hessian of $H:\mathbb{R}^{2n}\to\mathbb{R}$ allows us to guarantee that the only periodic orbits of $\{\psi_H^t\}_{t\in\mathbb{R}}$ of small periods are the constant ones. More precisely, identifying in the usual way the dual Euclidean space $(\mathbb{R}^{2n})^*$ with $\mathbb{R}^{2n}$, one can see the Hessian of any smooth function $H:\mathbb{R}^{2n}\to\mathbb{R}$ at a point $p\in\mathbb{R}^{2n}$ as a linear map $\Hess_p(H):\mathbb{R}^{2n}\to\mathbb{R}^{2n}$. The smallness of such maps will be measured in terms of the operator norm, i.e.\ if $A:\mathbb{R}^{2n}\to\mathbb{R}^{2n}$ is a linear map then \[\left|\left|\left|A\right|\right|\right|:=\underset{v\in\mathbb{R}^{2n}\setminus\{0\}}\sup \frac{|A v|}{|v|} \text{ where } |v| \text{ is the standard Euclidean norm of } v\in\mathbb{R}^{2n}\setminus\{0\} .\]
\begin{lemme}\label{petite hessienne}
Let $H:\mathbb{R}^{2n}\to\mathbb{R}$ be a compactly supported Hamiltonian function such that $\underset{p\in\mathbb{R}^{2n}}\sup|||\operatorname{Hess}_p(H)|||<2\pi$. If there exists $0<T\leq 1$ and $p\in\mathbb{R}^{2n}$ such that $\psi_H^T(p)=p$ then $\psi_H^t(p)=p$ for every $t\in\mathbb{R}$.\\
\end{lemme}
\begin{proof}
Let $T\in\mathbb{R}_{>0}$ and $p\in\mathbb{R}^{2n}$ be such that $\psi_H^T(p)=p$ and denote by $\gamma :\mathbb{R}/\mathbb{Z}\to\mathbb{R}^{2n}$ the loop $\gamma(t)=\psi_H^{tT}(p)$. The velocity $t\mapsto \dot\gamma(t)=X^{\omega_{st}}_{TH}(\gamma(t))$ of $\gamma$ is again a smooth loop. Consider the Fourier series associated to this loop $\dot\gamma$
\[\dot\gamma(t)=\sum\limits_{k\in\mathbb{Z}}e^{2\pi kJt}x_k \text{ for all } t\in S^1 \text{ where } x_k\in\mathbb{R}^{2n} \text{ and } J=\left (
\begin{array}{c c}
0 & \Id_n \\
-\Id_n & 0\\
\end{array}\
\right)\ ,\]
so that $J^2=-\Id_{2n}$ and $e^{2\pi kJt}$ is an orthogonal matrix for all $k$ and $t$.
Note that $\dot\gamma(t)=JT\nabla H(\gamma(t))$ for any $t\in S^1$, where $\nabla H$ is the Euclidean gradient of $H$, so $\ddot\gamma(t)=JT\Hess_{\gamma(t)}H(\dot\gamma(t))$ for any $t\in S^1$ and the Fourier series of the loop $\ddot\gamma$ is given by \[t\mapsto \sum\limits_{k\in\mathbb{Z}}2\pi kJe^{2\pi kJt}x_k\ .\] Using the fact that $\int_0^1\dot\gamma(t)dt=\gamma(1)-\gamma(0)=0$, so that $x_0=0$, and Parseval's identity we have
\begin{equation}\label{Fourier}
||\dot\gamma||_{\mathcal{L}^2}=\sqrt{\sum\limits_{k\in\mathbb{Z}\setminus\{0\}}|x_k|^2}\leq\sqrt{\sum\limits_{k\in\mathbb{Z}\setminus\{0\}} k^2|x_k|^2}=\frac{1}{2\pi}||\ddot\gamma||_{\mathcal{L}^2}\ .
\end{equation}
On the other hand, since $J$ is an orthogonal matrix,
\[\begin{aligned}||\ddot\gamma||_{\mathcal{L}^2}&=||t\mapsto JT\Hess_{\gamma(t)}H(\dot\gamma(t))||_{\mathcal{L}^2}\\
&=T\,||t\mapsto\Hess_{\gamma(t)}H(\dot\gamma(t))||_{\mathcal{L}^2}\\
&\leq T\underset{p\in\mathbb{R}^{2n}}\sup|||\Hess_p(H)|||\ ||\dot\gamma||_{\mathcal{L}^2}\\
&< T2\pi||\dot\gamma||_{\mathcal{L}^2}\ ,
\end{aligned}\]
where the last inequality is strict because $\underset{p\in\mathbb{R}^{2n}}\sup|||\Hess_p(H)|||<2\pi$ and $||\dot\gamma||_{\mathcal{L}^2}>0$ when $\gamma$ is not a constant loop. Combining this inequality with the inequality \eqref{Fourier} we deduce that $||\dot\gamma||_{\mathcal{L}^2}<T||\dot\gamma||_{\mathcal{L}^2}$ when $\gamma$ is not a constant loop, and so $T>1$. Hence if $T\leq 1$ the loop $\gamma$ must be constant; then $X^{\omega_{st}}_H(p)=0$ and $\psi_H^t(p)=p$ for every $t\in\mathbb{R}$.
\end{proof}
\begin{remarque}
The reader can find a similar proof in \cite{hoferzehnder}.
\end{remarque}
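Let us also point out that the constant $2\pi$ in Lemma \ref{petite hessienne} is optimal. Consider for instance the Hamiltonian $H(x,y)=\pi(x^2+y^2)$ on $\mathbb{R}^2$ (not compactly supported, so only indicative): then $|||\Hess_p(H)|||=2\pi$ for every $p$, the associated vector field is $X^{\omega_{st}}_H=2\pi\left(-y\frac{\partial}{\partial x}+x\frac{\partial}{\partial y}\right)$, and every point other than the origin lies on a non-constant $1$-periodic orbit of $\{\psi_H^t\}$, so the conclusion of the lemma fails at the threshold value of the Hessian bound.\\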
\noindent
From the two previous lemmas we deduce the selected translation of the paths considered in Theorems \ref{geodesique shelukhin/fpr} and \ref{geodesique discriminante/oscillation}.\\
\begin{proposition} \label{calcul explicite}
Let $\{\phi^t\}_{t\in[0,1]}$ be a smooth path of contactomorphisms generated by a compactly supported Hamiltonian function
\[h : \mathbb{R}^{2n}\times S^1\to\mathbb{R}\ \ \ \ \ \
(p,[z])\mapsto H(p)\]
with $\underset{p\in\mathbb{R}^{2n}}\sup |||\Hess_p(H)|||<2\pi$. Then
\[c(\phi_h^1)=\max H\ \ \text{ and }\ \ c\left((\phi_h^1)^{-1}\right)=-\min H.\]
\end{proposition}
\vspace{2.ex}
\begin{proof}
First let us show that \[\text{Spectrum}(\phi_h^t)=\{tH(p)\ |\ d_pH=0\} \text{ for all } t\in[0,1].\]
Indeed if $p\in\mathbb{R}^{2n}$ is such that $d_pH=0$ then using Lemma \ref{relevé de contact} we have that for all $[z]\in S^1$ and $t\in[0,1]$
\[\phi_h^t(p,[z])=\left(p,[z+tH(p)]\right)\ \ \ \text{ and }\ \ \ (\phi_h^t)^*\alpha_{st}=\alpha_{st}\ ,\]
and so $\{tH(p)\ |\ d_pH=0\}\subset\text{Spectrum}(\phi_h^t)$. Conversely, let $t\in[0,1]$ and $a\in\text{Spectrum}(\phi_h^t)$. By definition of the spectrum and using again Lemma \ref{relevé de contact}, this implies that there exists $(p,z)\in\mathbb{R}^{2n}\times \mathbb{R}$ such that
\[\widetilde{\phi_h^t}(p,z)=(p,z+a)=\left(\psi_H^t(p),z+F^t(p)\right)\ .\]
In particular $p$ is a fixed point of $\psi_H^t$. Lemma \ref{petite hessienne} ensures that under the assumptions we made on the Hessian of $H$ we have $\psi_H^s(p)=p$ for all $s\in[0,1]$ and $d_pH=0$. So we conclude that $F^t(p)=tH(p)=a$ and that $\text{Spectrum}(\phi_h^t)\subset\{tH(p)\ |\ d_pH=0\}$. \\
\noindent
Moreover, by Theorem \ref{translation selector}, the map $t\mapsto c(\phi_h^t)\in\text{Spectrum}(\phi_h^t)$ is continuous, and since the set of critical values of $H$ is a nowhere dense set due to Sard's theorem, there exists a critical point $p_1\in\mathbb{R}^{2n}$ of $H$ such that $c(\phi_h^t)=tH(p_1)$ for all $t\in [0,1]$. It remains to show that $H(p_1)=\max H$. \\
\noindent
For $\varepsilon>0$ small enough, $\Lambda_{\phi_h^t} : S^{2n}\times S^1\to J^1(S^{2n}\times S^1)$ is a Legendrian section for all $t\in[0,\varepsilon]$, and so $\Lambda_{\phi_h^t}=j^1f^t$ is equal to the $1$-jet of a generating function $f^t : S^{2n}\times S^1\to\mathbb{R}$ without extra coordinates. It is well known that in this case $c(f^t,u_0)=\max f^t$ when $u_0$ denotes the generator of $H^{2n+1}(S^{2n}\times S^1)$: to see this one can combine Remark \ref{remarque minimax} with the arguments of Chapter 10 of \cite{polterovichrosen}. In addition to this, since $f^t$ is a generating function for $\phi_h^t$ we have
\[\{\text{Critical values of } f^t\}=\text{Spectrum}\left(\phi_h^t\right)=\{tH(p)\ |\ d_pH=0\}\ .\]
We then deduce that \[c(\phi_h^t)=\max\ f^t=t\max H=tH(p_1),\]
so $H$ reaches its maximum at the point $p_1$. Thus $c(\phi_h^1)=\max H$. Noticing that the Hamiltonian function $-h$ generates the path $\{(\phi_h^t)^{-1}\}=\{\phi_h^{-t}\}$ the same proof allows us to show that $c((\phi_h^1)^{-1})=-\min H$.\end{proof}
\vspace{2.ex}
\noindent
\subsection{A lower bound for the Shelukhin norms and the FPR norms}
The following proposition will allow us to find a lower bound for the Shelukhin and FPR norms in terms of the translation selector.\\
\begin{proposition}\label{inégalité géodésique shelukhin}
Let $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ be a contactomorphism and $k:[0,1]\times \mathbb{R}^{2n}\times S^1\to\mathbb{R}$ a compactly supported Hamiltonian that generates $\phi$, i.e.\ $\phi_k^1=\phi$. Then
\[c(\phi)\leq\int_0^1\max k^t dt\ \ \ \ \text{ and }\ \ \ \ c(\phi^{-1})\leq -\int_0^1\min k^tdt\ .\]
\end{proposition}
\vspace{2.ex}
\noindent
To prove this proposition we will use Lemma 2.6 of \cite{Zapolsky}. To state this lemma Zapolsky in \cite{Zapolsky} fixes a non-zero homology class. We will state it by fixing a non-zero cohomology class instead. By Remark \ref{remarque minimax} the two approaches are equivalent. \\
\begin{lemme}[\cite{Zapolsky}]\label{zapolski}
Let $X$ be a closed manifold of dimension $n$, $\{\Phi^t\}_{t\in[0,1]}$ be a smooth path of contactomorphisms in $\Cont_0^c(J^1X,\Ker(\alpha_X))$ starting at the identity and $u_0\in H^n(X)\setminus\{0\}$ be the top class. Then
\[ c(\Phi^1(\mathbb{O}_X\times\{0\}),u_0)\leq \int_0^1\underset{x\in X}\max\ \alpha_X\left(\left.\frac{d}{ds}\right\lvert_{s=t}\Phi^s(\sigma_0(x))\right)dt\ ,\]
where $\sigma_0 : X\to J^1X$, $x\mapsto (x,0,0)$ is the $0$-section.\\
\end{lemme}
\noindent
In the following proof of Proposition \ref{inégalité géodésique shelukhin} we use the notations and construction of Subsection \ref{section graphe}.\\
\begin{proof}[Proof of Proposition \ref{inégalité géodésique shelukhin}]
Let $\widetilde{\phi_k}^t\in\Cont_0^c(\mathbb{R}^{2n+1},\xi_{st})^{1-per}$ be the $1$-periodic contactomorphism associated to $\phi_k^t$ for all $t\in[0,1]$ and
\[ [0,1]\times S^{2n}\times S^1\to J^1(S^{2n}\times S^1)\ \ \ \ \ \ \ (t,(p,[z]))\mapsto \Lambda_{{\phi_k}^t}(p,[z])\]
the associated Legendrian isotopy. The Legendrian isotopy extension theorem guarantees the existence of a smooth path of contactomorphisms $\{\Phi^t\}\subset\Cont_0^c\left(J^1(S^{2n}\times S^1),\Ker(\alpha_{S^{2n}\times S^1})\right)$ starting at the identity such that $\Phi^t\circ\Lambda_{\Id}\equiv \Lambda_{{\phi_k}^t}$ for all $t\in[0,1]$. We claim that for all $(p,[z])\in S^{2n}\setminus\{p_0\}\times S^1$ and for all $t\in[0,1]$
\begin{equation}\label{equation4}
\alpha_{S^{2n}\times S^1}\left(\frac{d}{dt}\Phi^t(\Lambda_{\Id}(p,[z]))\right)=k^t(\phi_k^t(\overline{\psi}(p,[z])))\ .
\end{equation}
Indeed, since the maps $\Psi$, $\text{pr}$ and $\Theta$ of Subsection \ref{section graphe} preserve the contact forms we have
\[\begin{aligned}
\alpha_{S^{2n}\times S^1}\left(\frac{d}{dt}\Phi^t(\Lambda_{\Id}(p,[z]))\right)&=\alpha_{S^{2n}\times S^1}\left(\frac{d}{dt}\Psi^{-1}\left(\Lambda^{\mathbb{R}^{2n}\times S^1}_{\phi_k^t}(\overline{\psi}(p,[z]))\right)\right)\\
&=(\Psi^{-1})^*\alpha_{S^{2n}\times S^1}\left(\frac{d}{dt}\text{pr}\left(\Lambda^{\mathbb{R}^{2n+1}}_{\widetilde{\phi_k}^t}\left(\overline{\psi}(p,z)\right)\right)\right)\\
&=\text{pr}^*\alpha_{\mathbb{R}^{2n}\times S^1}\left(\frac{d}{dt}\Theta\left(\text{gr}_{\alpha_{st}}(\widetilde{\phi_k^t})\left(\overline{\psi}(p,z)\right)\right)\right)\\
&=\Theta^*\alpha_{\mathbb{R}^{2n+1}}\left(\frac{d}{dt}\text{gr}_{\alpha_{st}}(\widetilde{\phi_k^t})(\overline{\psi}(p,z))\right)\\
&=(\alpha^2_{st}-e^\theta\alpha^1_{st})\left(\frac{d}{dt}\left(\overline{\psi}(p,z),\widetilde{\phi_k^t}(\overline{\psi}(p,z)),\widetilde{g_k^t}(\overline{\psi}(p,z))\right)\right)\\
&=\alpha_{st}^2\left(\frac{d}{dt}\widetilde{\phi_k^t}(\overline{\psi}(p,z))\right)-e^\theta\alpha_{st}^1\left(\frac{d}{dt}\overline{\psi}(p,z)\right)=\alpha_{st}\left(\frac{d}{dt}\widetilde{\phi_k^t}(\overline{\psi}(p,z))\right)\\
&=k^t(\phi_k^t(\overline{\psi}(p,[z]))).
\end{aligned}\]
Moreover for all $[z]\in S^1$ and for all $t\in[0,1]$ a direct computation shows that \begin{equation}\label{equation5}
\alpha_{S^{2n}\times S^1}\left(\frac{d}{dt}\Phi^t(\Lambda_{\Id}(p_0,[z]))\right)=0\ ,
\end{equation}
since $\Lambda_{\phi_k^t}(p_0,[z])=((p_0,[z]),0)\subset J^1(S^{2n}\times S^1)$ is independent of time. So by Lemma \ref{zapolski} and equalities \eqref{equation4}, \eqref{equation5} we have
\[\begin{aligned}
c(\phi_k^1)=c(\Lambda_{\phi_k^1},u_0)&\leq \int_0^1\underset{(p,z)\in S^{2n}\times S^1}\max\alpha_{S^{2n}\times S^1} \left(\frac{d}{dt}\Phi^t(\Lambda_{\Id}(p,z))\right)dt\\
&=\int_0^1\max\left\{\underset{S^{2n}\setminus\{p_0\}\times S^1}\max k^t(\phi_k^t(\overline{\psi}(p,z))),0\right\}dt\\
&=\int_0^1\max k^tdt\ ,
\end{aligned}\]
which proves the first inequality of Proposition \ref{inégalité géodésique shelukhin}. Using the fact that $(\phi_k^1)^{-1}$ can be generated by the Hamiltonian function
\[ [0,1]\times\mathbb{R}^{2n}\times S^1\to\mathbb{R}\ \ \ \ (t,p,z)\mapsto -k^{1-t}(p,z)\] we deduce the second inequality of Proposition \ref{inégalité géodésique shelukhin} exactly in the same way.\end{proof}
\vspace{2.ex}
\noindent
We deduce the following corollary.\\
\begin{corollaire}\label{corollaire shelukhin}
Let $k :[0,1]\times\mathbb{R}^{2n}\times S^1\to\mathbb{R}$ be a compactly supported Hamiltonian function. Then
\[\begin{aligned}
\max\left\{c(\phi_k^1),c\left((\phi_k^1)^{-1}\right)\right\}&\leq\nu_S^{\alpha_{st}}\left(\phi_k^1\right)\leq\widetilde{\nu_S^{\alpha_{st}}}\left([\phi_k]\right)\ \text{ and }\\
\max\left\{\left\lceil c(\phi_k^1)\right\rceil,\left\lceil c\left((\phi_k^1)^{-1}\right)\right\rceil\right\}&\leq \nu_{FPR}^{\alpha_{st}}\left(\phi_k^1\right)\leq\widetilde{\nu_{FPR}^{\alpha_{st}}}\left([\phi_k]\right).
\end{aligned}\]
\end{corollaire}
\vspace{2.ex}
\begin{proof}
By Proposition \ref{inégalité géodésique shelukhin},
\[\max\left\{c(\phi_k^1),c\left((\phi_k^1)^{-1}\right)\right\}\leq\max\left\{\int_0^1\max k^tdt,-\int_0^1\min k^tdt\right\}.\]
By definition of the Shelukhin length functional we have
\[\max\left\{\int_0^1\max k^tdt,-\int_0^1\min k^tdt\right\}\leq \mathcal{L}_S^{\alpha_{st}}(\phi_k)\]
for any compactly supported Hamiltonian function $k :[0,1]\times\mathbb{R}^{2n}\times S^1\to\mathbb{R}$. By taking the infimum of the length over all paths that represent $\phi_k^1$ (resp.\ $[\{\phi_k^t\}]$) we deduce the first line of inequalities of Corollary \ref{corollaire shelukhin}. The second line of inequalities can be proved exactly in the same way and is left to the reader.\end{proof}
\vspace{2.ex}
\noindent
\subsection{Computation of the Shelukhin length and proof of Theorem \ref{geodesique shelukhin/fpr}}
The Shelukhin length of a path of contactomorphisms $\{\phi^t\}\subset\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ generated by a time-independent compactly supported Hamiltonian function $h:\mathbb{R}^{2n}\times S^1\to\mathbb{R}$ is given by $\mathcal{L}_S^{\alpha_{st}}(\{\phi^t\})=\max\{\max h,-\min h\}$. Using Corollary \ref{corollaire shelukhin} we then deduce that the inequalities of Proposition \ref{inégalité géodésique shelukhin} are equalities in the case when $\{\phi^t\}$ is generated by a Hamiltonian function satisfying the hypotheses of Theorem \ref{geodesique shelukhin/fpr}, and hence that it is a geodesic for the Shelukhin norm. A similar argument allows us to compute the FPR norm of $\{\phi^t\}$ in this case and to conclude the proof of Theorem \ref{geodesique shelukhin/fpr}.
\begin{proof}[Proof of Theorem \ref{geodesique shelukhin/fpr}]
Thanks to Corollary \ref{corollaire shelukhin} and Proposition \ref{calcul explicite} we deduce that
\[\max\left\{\max h,-\min h\right\}\leq \nu_S^{\alpha_{st}}(\phi_h^1)\leq\widetilde{\nu_S^{\alpha_{st}}}([\phi_h])\ .\]
On the other hand, by definition we have \[\nu_S^{\alpha_{st}}\left(\phi_h^1\right)\leq\widetilde{\nu_S^{\alpha_{st}}}\left([\phi_h]\right)\leq\mathcal{L}_S^{\alpha_{st}}\left(\phi_h\right)=\int_0^1\underset{(p,z)}\max|h(p,z)|dt=\max\{\max h,-\min h\} .\]
So all these inequalities are in fact equalities
\[\mathcal{L}_S^{\alpha_{st}}\left(\phi_h\right)=\widetilde{\nu_S^{\alpha_{st}}}\left([\phi_h]\right)=\nu_S^{\alpha_{st}}\left(\phi_h^1\right)=\max\{\max h,-\min h\}\ .\]
In the same way, using Corollary \ref{corollaire shelukhin} and Proposition \ref{calcul explicite} we have
\[\max\left\{\left\lceil \max h\right\rceil, \left\lceil -\min h\right\rceil\right\}\leq\nu_{FPR}^{\alpha_{st}}\left(\phi_h^1\right)\leq\widetilde{\nu_{FPR}^{\alpha_{st}}}\left([\phi_h]\right)\ ,\]
and by definition of the FPR norm we have
\[\nu^{\alpha_{st}}_{FPR}\left(\phi_h^1\right)\leq\widetilde{\nu^{\alpha_{st}}_{FPR}}\left([\phi_h]\right)\leq \max\{\left\lceil\max h\right\rceil,\left\lceil-\min h\right\rceil\}\ .\]
Again we deduce that all these inequalities are in fact equalities. Finally, using the fact that the FPR norm is conjugation invariant we get the desired result.
\end{proof}
\vspace{2.ex}
The proof of the results for the discriminant and oscillation norms follows the same lines: we have to show that the translation selector is a lower bound for these norms, and that the discriminant/oscillation length of the considered path realizes this lower bound.
\subsection{A lower bound for the discriminant and oscillation norms}
In the next proposition we formulate precisely how the translation selector gives a lower bound for the discriminant and oscillation norms.\\
\begin{proposition}\label{crucial}
For any element $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ that is not the identity we have
\[\begin{aligned}
\max\left\{\left\lfloor c(\phi)\right\rfloor+1,\left\lfloor c\left(\phi^{-1}\right)\right\rfloor+1\right\}&\leq \nu_d(\phi) \text{ and }\\
\max\left\{\left\lfloor c(\phi)\right\rfloor+1,\left\lfloor c\left(\phi^{-1}\right)\right\rfloor+1\right\}&\leq \nu_{osc}(\phi).
\end{aligned}\]
\end{proposition}
\begin{proof}
\vspace{2.ex}
Let us first show that if $\{\phi^t\}$ is a smooth path of compactly supported contactomorphisms starting at the identity in $\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ such that $\mathcal{L}_d\left(\{\phi^t\}\right)=1$ then $0\leq c(\phi^t)<1$ for all $t\in [0,1]$. Indeed, since the map $t\in[0,1]\mapsto c(\phi^t)$ is continuous and $c(\phi^0)=0$, if there exists $t\in]0,1]$ such that $c(\phi^t)\geq 1$ then there exists $t_0\in]0,t]$ such that $c(\phi^{t_0})=1$. This means that $\phi^{t_0}$ has a $1$-translated point, hence a discriminant point (see Remark \ref{remarque2}), which contradicts the fact that $\mathcal{L}_d\left(\{\phi^t\}\right)=1$. So $c(\phi^t)<1$ for all $t\in[0,1]$ whenever $\mathcal{L}_d\left(\{\phi^t\}\right)=1$ and $\phi^0=\Id$. \\
\noindent
Let us consider an element $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})\setminus\{\Id\}$. Denote by $k=\nu_d(\phi)\in\mathbb{N}_{>0}$ its discriminant norm. Using the word metric definition of the discriminant norm, this means that there exist $k$ smooth paths of compactly supported contactomorphisms $\left(\{\phi_i^t\}_{t\in[0,1]}\right)_{i\in[1,k]\cap\mathbb{N}}$ such that
\begin{enumerate}
\item each of them starts at the identity and is of discriminant length $1$
\item $\phi=\prod\limits_{i=1}^k\phi_i^1$.
\end{enumerate}
The triangle inequality satisfied by the translation selector (see Theorem \ref{translation selector}) and the previous estimate of the translation selector on paths of discriminant length equal to $1$ allow us to deduce that
\[\left\lceil c(\phi)\right\rceil\leq\sum\limits_{i=1}^k \left\lceil c(\phi_i^1)\right\rceil\leq k=\nu_d(\phi)\ ,\]
and so $\nu_d(\phi)\geq \left\lceil c(\phi)\right\rceil $. Since $\nu_d(\phi^{-1})=\nu_d(\phi)$ we deduce that
\[\nu_d(\phi)\geq\max\left\{\left\lceil c(\phi)\right\rceil,\left\lceil c(\phi^{-1})\right\rceil\right\} .\]
So if $\max\{c(\phi),c(\phi^{-1})\}\notin\mathbb{N}$ we proved the desired first inequality \[\nu_d(\phi)\geq\max\{\left\lfloor c(\phi)\right\rfloor+1,\left\lfloor c(\phi^{-1})\right\rfloor+1 \}.\]
\noindent
It remains to treat the case when $\max\{c(\phi),c(\phi^{-1})\}\in\mathbb{N}$. To fix ideas, let us assume that $\max\{c(\phi),c(\phi^{-1})\}=c(\phi)$; the case where $\max\{c(\phi),c(\phi^{-1})\}=c(\phi^{-1})$ leads to the same conclusion by the same arguments. We already proved that $k=\nu_d(\phi)\geq c(\phi)$, and we want to show that $\nu_d(\phi)=k>c(\phi)$. If $c(\phi)=1$ then using the argument of the first paragraph of this proof we know that $\nu_d(\phi)\geq 2>c(\phi)$. So it remains to treat the cases where $c(\phi)\geq 2$. Suppose by contradiction that $c(\phi)=c(\prod\limits_{i=1}^k\phi_i^1)=k$. Then $\lceil c(\phi_i^1)\rceil =1$ for all $i\in[1,k]$, in particular $c(\phi_i^1)\in ]0,1[$. Thanks to Theorem \ref{translation selector} we have $\lceil c(\prod\limits_{i=2}^{k}\phi_i^1)\rceil \geq \lceil c(\prod\limits_{i=1}^k\phi_i^1)-c(\phi_1^1)\rceil $. Since $c(\phi_1^1)\in]0,1[$ and $c(\Pi_{i=1}^k\phi_i^1)\in\mathbb{N}$ we deduce that
\[\left\lceil c\left(\prod\limits_{i=1}^k\phi_i^1\right)-c(\phi_1^1) \right\rceil > c\left(\prod\limits_{i=1}^k\phi_i^1\right)-\left\lceil c(\phi_1^1) \right\rceil\ \text{ so }\ c\left(\prod\limits_{i=1}^k\phi_i^1\right)<\left\lceil c(\phi_1^1) \right\rceil+\left\lceil c\left(\prod\limits_{i=2}^{k}\phi_i^1\right)\right\rceil\ .\] This leads us to the following contradiction
\[k=c(\phi)=c\left(\prod\limits_{i=1}^k\phi_i^1\right)<\left\lceil c(\phi_1^1)\right\rceil +\left\lceil c\left(\prod\limits_{i=2}^{k}\phi_i^1\right)\right\rceil \leq \sum\limits_{i=1}^k\left\lceil c\left(\phi_i^1\right)\right\rceil = k\ ,\]
which concludes the proof of the first inequality of the proposition.\\
The proof for the second inequality concerning the oscillation norm goes in the same way but will use one more argument based on the fourth point of Theorem \ref{translation selector}. Let $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})\setminus\{\Id\}$. By definition there exists $[\{\phi^t\}]\in\widetilde{\Cont_0^c}(\mathbb{R}^{2n}\times S^1,\xi_{st})$ such that $\phi^1=\phi$ and \[\nu_{osc}\left(\phi\right)=\widetilde{\nu_{osc}}\left([\{\phi^t\}]\right)=\max\left\{\widetilde{\nu_{+}^{osc}}\left([\{\phi^t\}]\right),-\widetilde{\nu_{-}^{osc}}\left([\{\phi^t\}]\right)\right\}\ .\]
To fix ideas we will assume that $\nu_{osc}(\phi)=\widetilde{\nu_{osc}}\left([\{\phi^t\}]\right)=\widetilde{\nu_{+}^{osc}}\left([\{\phi^t\}]\right)$; the case where $\widetilde{\nu_{osc}}\left([\{\phi^t\}]\right)=-\widetilde{\nu_{-}^{osc}}\left([\{\phi^t\}]\right)$ leads to the same conclusion and is left to the reader.\\
\noindent
We will prove that $\widetilde{\nu_{+}^{osc}}\left([\{\phi^t\}]\right)\geq \left\lfloor c(\phi)\right\rfloor+1$. The proof that $-\widetilde{\nu_{-}^{osc}}\left([\{\phi^t\}]\right)\geq \left\lfloor c(\phi^{-1})\right\rfloor+1$ goes in the same way and is left to the reader.\\
\noindent
Let us denote by $k:=\widetilde{\nu_{+}^{osc}}\left([\{\phi^t\}]\right)$. By definition this means that there exist $N\in\mathbb{N}^*$ smooth paths $\left(\{\phi_i^t\}\right)_{i\in[1,N]\cap\mathbb{N}}$ such that
\begin{enumerate}
\item each of them is of discriminant length $1$, $k$ of them are positive and $N-k$ of them are negative
\item $\phi=\prod\limits_{i=1}^N\phi_i^1$.
\end{enumerate}
If a path $\{\varphi^t\}$ is non-positive then by Theorem \ref{translation selector} we have $c(\varphi^1)=0$. So using the triangle inequality we have again that $\left\lceil c(\phi)\right\rceil \leq k=\widetilde{\nu_+^{osc}}([\{\phi^t\}])$. \\
\noindent
If we assume that $c(\phi)\notin\mathbb{N}$ then $\nu_{osc}(\phi)\geq \left\lfloor c(\phi)\right\rfloor +1$ and we have the second inequality of the proposition. \\
\noindent
Finally suppose that $c(\phi)\in\mathbb{N}$. If $c(\phi)=1$ the argument from the first paragraph of this proof allows us to show that $\nu_{osc}(\phi)\geq 2=\lfloor c(\phi)\rfloor+1$ which proves again the second inequality of the proposition. So it remains to show the case where $c(\phi)$ is an integer greater than $1$. Let $j=\min\{i\in[1,N]\ |\ \{\phi_i^t\} \text{ is positive}\}$. Then thanks to Theorem \ref{translation selector} we have
\begin{equation}\label{mega relou}
\left\lceil c\left(\prod\limits_{i=j+1}^N\phi_i^1\right)\right\rceil \geq \left\lceil c\left(\prod\limits_{i=j}^N\phi_i^1\right)-c(\phi_j^1)\right\rceil\ .
\end{equation}
\noindent
Let us assume by contradiction that $c(\phi)=k$. As in the discriminant case, this implies that for any $i\in[1,N]$ such that $\{\phi_i^t\}$ is positive we must have $c(\phi_i^1)\in]0,1[$. By the fourth point of Theorem \ref{translation selector} we have $k=c(\phi)\leq c(\prod\limits_{i=j}^N\phi_i^1)$ and by the triangle inequality we have $c(\prod\limits_{i=j}^N\phi_i^1)\leq k$. So we deduce that $c(\prod\limits_{i=j}^N\phi_i^1)=k$. Plugging this in the inequality \eqref{mega relou} we obtain the following contradiction:
\[k=c\left(\prod\limits_{i=j}^N\phi_i^1\right)<\left\lceil c(\phi_j^1)\right\rceil +\left\lceil c\left(\prod\limits_{i=j+1}^N\phi_i^1\right)\right\rceil \leq \sum\limits_{i=j}^N\lceil c(\phi_i^1)\rceil=k.\]\end{proof}
\vspace{2.ex}
\noindent
\subsection{Computation of the discriminant and oscillation lengths and proof of Theorem \ref{geodesique discriminante/oscillation}}
The next lemma will allow us to compute the discriminant and oscillation lengths of paths generated by time independent Hamiltonian functions and prove Theorem \ref{geodesique discriminante/oscillation}.\\
\begin{lemme}\label{longueur autonome}
Let $h:\mathbb{R}^{2n}\times S^1\to\mathbb{R}$ be a smooth compactly supported Hamiltonian function that generates the path of compactly supported contactomorphisms $\phi_h$. Then
\[\mathcal{L}_d\left(\phi_h\right)=\left\lfloor \frac{1}{t_0}\right\rfloor+1\ \text{ where }\ t_0=\inf\left\{t>0\ |\ \Interior(\Supp(h))\cap DP(\phi_h^t)\ne\emptyset \right\} .\]
If moreover we suppose that $h:\mathbb{R}^{2n}\times S^1\to\mathbb{R}$ is non-negative (resp.\ non-positive), then
\[\mathcal{L}_d\left(\phi_h\right)=\mathcal{L}_{osc}\left(\phi_h\right)=\left\lfloor\frac{1}{t_0}\right\rfloor+1,\]
where by convention we set $\frac{1}{0}=+\infty$.\\
\end{lemme}
\begin{proof}
Recall that \[\Supp(\phi_h)=\overline{\underset{t\in[0,1]}{\bigcup}\{x\in \mathbb{R}^{2n}\times S^1\ |\ \phi_h^t(x)\ne x\}}\ \text{ and }\ \Supp(h)=\overline{\{x\in \mathbb{R}^{2n}\times S^1 \ |\ h(x)\ne 0\}}.\]
Straightforward arguments show that
\[\Supp(\phi_h)=\Supp(h)\ ,\]
so $t_0=\inf\left\{t>0\ |\ \Interior(\Supp(\phi_h))\cap DP(\phi_h^t)\ne\emptyset \right\}$.\\
\noindent
Suppose there exist $0\leq t<s\leq 1 $ such that $DP((\phi_h^t)^{-1}\phi_h^s)\cap\Interior(\Supp(\phi_h))\ne\emptyset$. Since the Hamiltonian function $h$ does not depend on time this implies that $(\phi_h^t)^{-1}\phi_h^s=\phi_h^{s-t}$ and so $DP(\phi_h^{s-t})\cap\Interior(\Supp(\phi_h))\ne\emptyset$. So by definition of $t_0$ we have that $s-t\geq t_0$.\\
\noindent
So if $t_0>0$ and $\frac{1}{t_0}$ is not an integer, by cutting the interval $[0,1]$ into $\left\lceil\frac{1}{t_0}\right\rceil$ intervals of the same length $1/\left\lceil\frac{1}{t_0}\right\rceil$, the discriminant length of $\phi_h$ restricted to any of these intervals is equal to one, i.e.\
\[\mathcal{L}_d\left(\{\phi_h^t\}_{t\in[i/\lceil\frac{1}{t_0}\rceil; (i+1)/\lceil\frac{1}{t_0}\rceil]}\right)=1,\ \forall i\in \left[0,\left\lceil \frac{1}{t_0}\right\rceil-1\right]\ .\]
Therefore $\mathcal{L}_d(\phi_h)\leq \left\lceil \frac{1}{t_0}\right\rceil$. Moreover if we cut $[0,1]$ into strictly fewer than $\left\lceil \frac{1}{t_0}\right\rceil$ intervals, then there exists at least one interval $I\subset [0,1]$ with length greater than $t_0$ and so $\mathcal{L}_d\left(\{\phi_h^t\}_{t\in I}\right)\geq 2$. We then deduce that when $1>t_0>0$ and $\frac{1}{t_0}$ is not an integer we have
\[ \mathcal{L}_d(\phi_h)=\left\lceil \frac{1}{t_0}\right\rceil=\left\lfloor\frac{1}{t_0}\right\rfloor+1 \ .\]
\noindent
Now let us consider the case when $t_0>0$ and $\frac{1}{t_0}$ is an integer. First we can see that if we cut $[0,1]$ into $\frac{1}{t_0}$ intervals then at least one of them will be of length greater than or equal to $t_0$, and so the discriminant length of $\phi_h$ restricted to this interval will be greater than or equal to $2$. So we deduce that $\mathcal{L}_d(\phi_h)\geq \frac{1}{t_0}+1$. However if we cut $[0,1]$ into $\frac{1}{t_0}+1$ pieces such that each piece is of length $1/(\frac{1}{t_0}+1)$ then the same argument as before shows that the discriminant length of $\phi_h$ restricted to any of these intervals is equal to one. So we conclude that in this case again
\[\mathcal{L}_d(\phi_h)=\left\lfloor \frac{1}{t_0}\right\rfloor +1.\]
\noindent
Finally in the case $t_0=0$ it is obvious that $\mathcal{L}_d(\phi_h)=+\infty$. \\
\end{proof}
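\noindent
As a quick sanity check of the formula with hypothetical values: if $t_0=\frac{2}{5}$ then $\frac{1}{t_0}=\frac{5}{2}\notin\mathbb{N}$ and the lemma gives $\mathcal{L}_d(\phi_h)=\left\lfloor \frac{5}{2}\right\rfloor+1=3$; indeed three intervals of length $\frac{1}{3}<t_0$ each carry discriminant length one, while any cut into two intervals leaves one of length at least $\frac{1}{2}>t_0$. If instead $t_0=\frac{1}{2}$, the integer case gives $\mathcal{L}_d(\phi_h)=2+1=3$, again realised by three intervals of length $\frac{1}{3}<t_0$.\\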
\noindent
We are now ready to prove Theorem \ref{geodesique discriminante/oscillation}.\\
\begin{proof}[Proof of Theorem \ref{geodesique discriminante/oscillation}]
Let us first compute the $t_0$ coming from Lemma~\ref{longueur autonome}:
\[t_0=\inf\left\{t>0\ |\ \Interior\left({\Supp(h)}\right)\cap DP(\phi_h^t)\ne\emptyset\right\}\ .\]
For all $0<t\leq 1$, $(p,z)\in DP(\phi_h^t)$ if and only if $(p,z)$ is a $k$-translated point for $\phi_h^t$ for some integer $k$. Using Lemmas \ref{relevé de contact} and \ref{petite hessienne}, we know that $(p,z)$ is a $k$-translated point of $\phi_h^t$, for some $t\in ]0,1]$, if and only if $p$ is a critical point of $H$ and $tH(p)=k$ (see the proof of Lemma \ref{calcul explicite}). So
\[t_0=\inf\left\{t>0 \ |\ \exists (p,z)\in\Interior\left({\Supp(h)}\right),\ d_pH=0\ \text{ and }\ tH(p)\in\mathbb{Z}\right\}.\]
Since $\Supp(h)=\Supp(H)\times S^1$ and since $0$ is a regular value of $H$ inside of its support, we deduce that
\[\begin{aligned}
t_0&=\min\left\{t>0\ |\ \exists p\in \Interior\left(\Supp(H)\right),\ d_p H=0\ \text{ and } tH(p)\in\mathbb{Z}\setminus\{0\}\right\}\\
&=\min\left\{t>0 \ |\ \exists p\in \Interior\left(\Supp(H)\right),\ d_p H=0\ \text{ and } t|H(p)|=1\right\}\\
&=\min\left\{t>0 \ |\ \exists p\in \Interior\left(\Supp(H)\right),\ d_p H=0\ \text{ and } t=\frac{1}{|H(p)|}\right\}\\
&=\frac{1}{\max\{\max H,-\min H\}}=\frac{1}{\max\{\max h,-\min h\}}.
\end{aligned}\]
So by Lemma \ref{longueur autonome} we have \[\mathcal{L}_d(\phi_h)=\max\{\lfloor \max h \rfloor +1,\lfloor -\min h\rfloor+1\}.\]
On the other hand, Proposition \ref{crucial} and Lemma \ref{calcul explicite} allow us to say that
\[\nu_d\left(\phi_h^1\right)\geq \max\left\{\lfloor c(\phi_h^1)\rfloor+1, \left\lfloor c\left((\phi_h^1)^{-1}\right)\right\rfloor+1\right\}=\max\left\{\lfloor \max h \rfloor+1, \lfloor -\min h\rfloor+1 \right\}.\]
Since by definition $\mathcal{L}_d(\phi_h)\geq\nu_d(\phi_h^1)$, all these inequalities are equalities:
\[\max\{\lfloor \max h \rfloor+1 ,\lfloor -\min h \rfloor+1\}=\mathcal{L}_d(\phi_h)\geq\nu_d(\phi_h^1)\geq \max\{\lfloor \max h \rfloor+1 ,\lfloor -\min h \rfloor+1\}.\]
Finally, because the discriminant length and the discriminant norm are conjugation invariant we deduce Theorem \ref{geodesique discriminante/oscillation}. \\
\noindent
The proof for the oscillation norm goes exactly the same way. \end{proof}
\vspace{2.ex}
\subsection{Proof of Proposition \ref{capacite-energie}}
Let us recall the statement of Proposition \ref{capacite-energie}.
\begin{proposition}
Let $\phi\in\Cont_0^c(\mathbb{R}^{2n}\times S^1,\xi_{st})$ be a contactomorphism that displaces $U\subset\mathbb{R}^{2n}\times S^1$, i.e.\ $\phi(U)\cap U=\emptyset$. Then $\left\lceil \nu(\phi)\right\rceil \geq\frac{1}{2}\left\lceil c(U)\right\rceil $ where $\nu$ denotes the FPR, discriminant, oscillation or Shelukhin norm.\\
\end{proposition}
The proof of Proposition \ref{capacite-energie} is an immediate consequence of Corollary \ref{corollaire shelukhin} and Proposition \ref{crucial}, together with the inequality \eqref{sandoninegalite} of the introduction.
\begin{proof}[Proof of Proposition \ref{capacite-energie}]
From Corollary \ref{corollaire shelukhin} and Proposition \ref{crucial} we deduce that
$2\left\lceil \nu(\phi)\right\rceil\geq \left\lceil c(\phi)\right\rceil+\left\lceil c(\phi^{-1})\right\rceil$, where $\nu$ stands for any of the norms $\nu_d,\ \nu_{osc},\ \nu^{\alpha_{st}}_{FPR}$ or $\nu^{\alpha_{st}}_S$. Combining this with inequality \eqref{sandoninegalite} of the introduction yields $2\left\lceil \nu(\phi)\right\rceil\geq\left\lceil c(U)\right\rceil$, which concludes the proof of Proposition \ref{capacite-energie}.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
The mystery surrounding dark energy and dark matter, as well as the expected detection of gravitational radiation in the near future, and consequent observation of strong gravity dynamics, are powerful motivations to explore gravitational theories beyond Einstein's general relativity. In the most optimistic scenarios some distinct phenomenological signature of Einstein's gravity or of some alternative/extension thereof could be identified. Moreover, such studies commonly shed light on general relativity itself. An example is the extension of general relativity to higher dimensions, from which one understands how special the uniqueness theorems for black hole solutions of general relativity are. Another example is higher curvature gravity, from which one understands how special it is that the equations of motion of general relativity are only second order in derivatives.
In this paper we further explore an extension of general relativity recently proposed by some of us \cite{Herdeiro:2011km}. This gravitational theory was motivated by conformal invariance and the realisation that a Dirac-Born-Infeld (DBI) type conformal scalar theory has a phenomenologically interesting dynamics when its degree of freedom is interpreted as the conformal factor of a conformally flat universe. Such cosmological model can yield non-eternal (primordial) inflation spontaneously, smoothly connected to a radiation and matter era and a subsequent (present) accelerated expansion. The two distinct accelerating periods, with two distinct effective cosmological constants, are a manifestation that the cosmological constant can vary in this theory. Moreover, a large hierarchy between these two cosmological constants can be naturally achieved, if the naive cosmological constant appearing in a weak curvature expansion of the theory is associated to the TeV scale, suggesting also a new mechanism to address the cosmological constant problem.
The extension of this model to include all gravitational field degrees of freedom (rather than just an overall conformal factor) suggested the introduction of an everywhere time-like unit vector field ${\bf n}$, coupled to the gravitational sector (but not necessarily to the matter sector) of the theory. We therefore dub this theory `$n$-DBI gravity'. Such coupling leads to the breakdown of Lorentz invariance, but since ${\bf n}$ decouples from the gravitational dynamics in the weak curvature limit, Lorentz invariance is restored and Einstein gravity is recovered in this limit. This breakdown of Lorentz invariance at strong curvature and restoring at weak curvature is somewhat reminiscent of what was proposed in Horava-Lifschitz gravity \cite{Horava:2009uw}; in our theory, however, it is explicit.
Mathematically, the introduction of ${\bf n}$ allows $n$-DBI gravity - which has an infinite power series in the Ricci curvature - to yield equations of motion which are \textit{at most} second order in time, albeit higher order in spatial derivatives. This is in principle a desirable property, so as to avoid ghosts in the quantum theory, a property normally associated only with Lovelock gravity \cite{Lovelock}, of which Einstein gravity is a particular example. Our model thus illustrates another path to achieve this property. Also, the existence of this time-like vector field naturally induces a foliation of space-time. The allowed diffeomorphisms of the theory should preserve ${\bf n}$ and hence this foliation, and will therefore form only a subgroup of general coordinate transformations. Physically this means that there are quantities, defined on each leaf, which have no invariant meaning in Einstein gravity but become invariant under this symmetry group; two that shall be discussed are the shear and expansion of the congruence tangent to ${\bf n}$.
Remarkably, we shall show that many solutions of Einstein's gravity are also solutions of $n$-DBI gravity, despite the fact that the latter has an infinite power series in scalar curvature. Moreover, this is not only the case for vacuum solutions, or even Einstein spaces. We shall actually explicitly show that the Reissner-Nordstr\"om black hole with (or without) a cosmological constant solves the field equations of $n$-DBI gravity. In our theory, however, this is a larger family of solutions than in Einstein gravity, not only parameterised by total mass, charge and cosmological constant, but also by the \textit{slicing}. This is a novel type of non-uniqueness, which, albeit trivial from the geometric viewpoint, has physical significance in this framework. Also, in agreement with the aforementioned, the cosmological constant for these solutions will arise as an integration constant and can therefore take different values.
This paper is organised as follows. Section 2 defines $n$-DBI gravity, introducing further details as compared to those presented in \cite{Herdeiro:2011km}. Section 3 addresses spherically symmetric solutions, keeping some technicalities to Appendix A. We draw conclusions in Section 4.
\section{Formulation of $n$-DBI gravity}\label{model}
$n$-DBI gravity \cite{Herdeiro:2011km} is defined by the action
\begin{align}
S=-{1\over 16\pi G_N}\int d^4x\left\{\frac{12\lambda}{G_N}\sqrt{-g}\biggl[\sqrt{1+{G_N\over 6\lambda}\left(\mbox{}^{\mbox{}^{(4)}\!}\! R+{\cal K}\right)}-q\biggr]+16\pi G_N\mathcal{L}_{\rm matter}\right\}\ ,\label{DBIgravity}
\end{align}
where $G_N$ is Newton's constant, $\mbox{}^{\mbox{}^{(4)}\!}\! R$ is the four dimensional Ricci scalar constructed in the standard way from the space-time metric $g_{\mu\nu}$ and $\mathcal{L}_{\rm matter}$ is the matter Lagrangian density. The theory includes two dimensionless constants: $\lambda$ and $q$. We have introduced a scalar quantity closely related to the Gibbons-Hawking-York boundary term \cite{Gibbons:1976ue, York:1972sj}:
\begin{equation}
{\cal K}:=-{2\over \sqrt{h}}n^{\sigma}\partial_{\sigma}\left(\sqrt{h}K\right)
=-2\left(K^2+n^{\sigma}\partial_{\sigma}K\right)\ .
\end{equation}
This quantity is specified by the choice of a unit time-like vector ${\bf n}$, and the consequent first and second fundamental forms (induced metric and extrinsic curvature)
\begin{align}
h_{\mu\nu}\equiv g_{\mu\nu}+n_{\mu}n_{\nu}\ ,\qquad\qquad
K_{\mu\nu}\equiv {1\over 2} n^{\sigma}\partial_{\sigma}h_{\mu\nu}\ ,
\end{align}
from which $K=K_{\mu\nu}h^{\mu\nu}$, the trace of the extrinsic curvature, is constructed.
We assume the time-like vector ${\bf n}$ defines a foliation of space-time, $\mathcal{F}$, by space-like leaves. Being hyper-surface orthogonal, it is natural to adopt the ADM decomposition of space-time \cite{Arnowitt:1960es}
\begin{equation}
ds^2=-N^2 dt^2+h_{ij}\left(dx^i+N^i dt\right)\left(dx^j+N^j dt\right)\ .\label{ADM}
\end{equation}
Standard nomenclature is that $N$ and $N^i$ are the \textit{lapse} and \textit{shift}. Then, as a 1-form, ${\bf n}=-Ndt$ and therefore ${\bf n}\wedge d{\bf n}=0$, in agreement with Frobenius' theorem. In terms of the ADM decomposition, the normal derivative is expressed as
\begin{equation}
n^{\sigma}\partial_{\sigma}=N^{-1}\left(\partial_t-\pounds_N\right) \ ,
\end{equation}
using the Lie derivative $\pounds_N$ for the vector field $N^i$; also, the extrinsic curvature reads
\begin{equation}
K_{ij}={1\over 2N}\left(\dot{h}_{ij}-\nabla_iN_j-\nabla_jN_i\right)\ .
\end{equation}
Moreover, to reexpress the theory in terms of the ADM variables $N$, $N^i$ and $h_{ij}$ note that \cite{York:2004gb}:
\begin{equation}
\mbox{}^{\mbox{}^{(4)}\!}\! R+{\cal K}=R+K_{ij}K^{ij}-K^2-2N^{-1}\Delta N\equiv \mathcal{R}\ , \label{defr}
\end{equation}
where $R$ is the Ricci scalar of $h_{ij}$.
Then, varying the action with respect to $N$, $N^i$ and $h_{ij}$ one obtains, respectively, the Hamiltonian constraint
\begin{align}
\hspace{-.3cm}
{1+{G_N\over 6\lambda}\left(R-N^{-1}\Delta N\right)\over
\sqrt{1+{G_N\over 6\lambda}{\cal R}}}-q
-\frac{G_N}{6\lambda}\Delta\left(1+{G_N\over 6\lambda}{\cal R}\right)^{-1/2}=-\frac{4\pi G_N^2}{3\lambda \sqrt{h}}\frac{\delta \mathcal{L}_{\rm matter}}{\delta N} \ ,\label{Nvariation}
\end{align}
the momentum constraints
\begin{equation}
\nabla^j\left({K_{ij}-h_{ij}K \over \sqrt{1+{G_N\over 6\lambda}{\cal R}}}\right)=-\frac{8\pi G_N}{\sqrt{h}}\frac{\delta \mathcal{L}_{\rm matter}}{\delta N^i} \ ,
\end{equation}
and the $h_{ij}$ evolution equation
\begin{align}
\hspace{-.3cm}
\frac{1}{N}(\partial_t-\pounds_N)\left({K^{ij}-h^{ij}K \over \sqrt{1+{G_N\over 6\lambda}{\cal R}}}\right)=(\nabla^i\nabla^j-h^{ij}(\nabla_l \ln N)\nabla^l)\left(1+{G_N\over 6\lambda}{\cal R}\right)^{-1/2}\nonumber \\
+\frac{-R^{ij}+KK^{ij}-2K^{il}K^j_{\ l}+N^{-1}\nabla^i\nabla^jN+h^{ij}(K_{mn}K^{mn}-N^{-1}\Delta N)}{ \sqrt{1+{G_N\over 6\lambda}{\cal R}}}+\frac{16\pi G_N}{N\sqrt{h}}\frac{\delta \mathcal{L}_{\rm matter}}{\delta h_{ij}} \ . \label{gammavariation}
\end{align}
In $n$-DBI gravity, the space-time manifold is a differentiable manifold $\mathcal{M}$, with an additional structure: an everywhere time-like vector field ${\bf n}$, defining a co-dimension one foliation $\mathcal{F}$, which should be regarded as defining part of the topology of $\mathcal{M}$. The leaves of $\mathcal{F}$ are hypersurfaces of constant time. The allowed diffeomorphisms in $n$-DBI gravity must preserve $\mathcal{F}$. They are dubbed \textit{foliation preserving diffeomorphisms} (FPDs) \cite{Horava:2009uw} and are denoted by Diff$_{\mathcal{F}}(\mathcal{M})$. In the ADM coordinates, FPDs are generated by
\begin{equation}
t\rightarrow t +\xi^0(t) \ , \qquad x^i\rightarrow x^i+\xi^i(t,x^j) \ .
\end{equation}
It follows that under FPDs the ADM fields transform as
\begin{align}
\delta N & = N\dot{\xi}^0+\xi^i\partial_i N+\xi^0\dot{N} \ , \nonumber \\
\delta N_i & = N_j\partial_i\xi^j+\xi^j\partial_jN_i+\dot{\xi}^jh_{ij}+\dot{\xi}^0N_i+\xi^0\dot{N}_i \ , \nonumber \\
\delta h_{ij} & =\xi^0\dot{h}_{ij}+\xi^k\partial_k h_{ij}+h_{ki}\partial_j\xi^k+h_{kj}\partial_i\xi^k \ .
\end{align}
The second fundamental form $K_{ij}$ transforms covariantly under Diff$_{\mathcal{F}}(\mathcal{M})$. Consequently, scalar quantities such as $K$ or $K_{ij}K^{ij}$ will be non-vanishing in all allowed frames if non-vanishing in a particular frame. A convenient way to think about these invariants is to regard the traceless part of $K_{ij}$ and its trace $K$ as the shear, $\sigma_{ij}$, and expansion, $\theta$, of the congruence of time-like curves with tangent ${\bf n}$ (there is no rotation since ${\bf n}$ is hypersurface orthogonal):
\begin{equation}
\sigma_{ij}=K_{ij}-\frac{1}{3}Kh_{ij} \ , \qquad \theta=K \ . \end{equation}
To exemplify the loss of symmetry, as compared to Einstein gravity, consider the \textit{non}-FPD
\begin{equation}
t\rightarrow \tilde{t}=t+f(r) \ , \label{nonfpd}
\end{equation}
acting on Minkowski space-time in spherical coordinates: $ds^2=-dt^2+dr^2+r^2d\Omega_2$ (which is of course a solution of vacuum $n$-DBI gravity). If one requires a vanishing trace for the extrinsic curvature of the resulting foliation, $K=0$, the line element becomes completely determined up to an integration constant $C_3$:
\begin{align}
ds^2= & -\left(1+\frac{C_3}{r^4}\right)dt^2 +\left(\frac{dr}{\sqrt{1+\frac{C_3}{r^4}}}+\sqrt{\frac{C_3}{r^4}}dt\right)^2+r^2d\Omega_2 \ . \label{c3}
\end{align}
The fact that the constant $C_3$ cannot be gauged away in $n$-DBI gravity (i.e.\ that (\ref{nonfpd}) is a prohibited transformation) is now manifest in the fact that $K_{ij}K^{ij}=\sigma_{ij}\sigma^{ij}=6C_3/r^6$ is non-vanishing. Thus, for $C_3\neq 0$, $r=0$ is a \textit{shearing singularity}, which is a physical singularity, since the shear cannot be gauged away by transformations of Diff$_{\mathcal{F}}(\mathcal{M})$. As we shall see below this foliation is actually a solution of vacuum $n$-DBI gravity, which is not the case for the foliation obtained by (\ref{nonfpd}) with a generic $f(r)$. In other words: non-FPDs generically map solutions to geometrically equivalent space-times which are nevertheless {\it not} solutions of the theory. In special cases, however, as illustrated here, a non-FPD maps a solution to another solution, albeit a physically inequivalent one. In spirit, such non-FPDs are analogous to an electromagnetic duality transformation, or to the more general duality transformations found, for instance, in string theory.
\subsection{Einstein gravity limit}
Taking the Einstein gravity limit, $\lambda\rightarrow \infty$ and $q\to 1$ with $\lambda(1-q)$ fixed, the action (\ref{DBIgravity}) becomes
\begin{align}
S=-{1\over 16\pi G_N}\int_{\mathcal{M}} d^4x\left\{\sqrt{-g}\biggl[\mbox{}^{\mbox{}^{(4)}\!}\! R-2G_N\Lambda_{\rm Einstein}\biggr]+16\pi G_N\mathcal{L}_{\rm matter}\right\}+\frac{1}{8\pi G_N}\int_{\partial M} d^3x \sqrt{h} K\ ,\label{Einsteingravity}
\end{align}
where the effective cosmological constant is
\begin{equation}
\Lambda_{\rm Einstein}= \frac{6\lambda(q-1)}{G_N^2} \ , \end{equation}
and the Gibbons-Hawking-York term is taken in a hypersurface orthogonal to ${\bf n}$. Equivalently,
(\ref{Nvariation})-(\ref{gammavariation}) reduce to the corresponding equations of Einstein gravity with a cosmological constant (see eg. \cite{Gourgoulhon:2007ue}):
\begin{equation}
R+K^2-K_{ij}K^{ij}=2G_N\Lambda_{\rm Einstein}
\ ,\label{E1}\end{equation}
\begin{equation}
\nabla^j\left({K_{ij}-h_{ij}K }\right)=0 \ ,
\end{equation}
\begin{equation}
\frac{1}{N}(\partial_t-\pounds_N)K_{ij}=N^{-1}\nabla_i\nabla_j N-R_{ij}-KK_{ij}+2K_{il}K_j^{\ l} \ , \label{E3}
\end{equation}
where we have neglected the matter terms, for simplicity. As can be seen in both the action and in the equations of motion, in this limit, there is no coupling between the gravitational dynamics and ${\bf n}$.
Thus, covariance under the {\it full} diffeomorphisms ${\rm Diff}({\cal M})$ is restored and so is Lorentz invariance. Even without taking the limit, whenever the curvature ${\cal R}$ is very small, the equations (\ref{Nvariation})-(\ref{gammavariation}) are well approximated by these Einstein equations. This means that Lorentz invariance is restored to a very good accuracy in weakly curved space-times of $n$-DBI gravity.
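The normalisation can be checked directly by expanding the square root in (\ref{DBIgravity}) for small ${\cal R}=\mbox{}^{\mbox{}^{(4)}\!}\! R+{\cal K}$:
\[
\frac{12\lambda}{G_N}\left[\sqrt{1+{G_N\over 6\lambda}{\cal R}}-q\right]={\cal R}-\frac{12\lambda(q-1)}{G_N}+\mathcal{O}\left(\lambda^{-1}\right)={\cal R}-2G_N\Lambda_{\rm Einstein}+\mathcal{O}\left(\lambda^{-1}\right)\ ,
\]
which reproduces the bulk terms in (\ref{Einsteingravity}), the ${\cal K}$ piece accounting for the Gibbons-Hawking-York boundary term.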
\subsection{Solutions with constant $\mathcal{R}$}
Taking $\mathcal{R}$ to be constant, and dubbing $\sqrt{1+G_N\mathcal{R}/(6\lambda)}\equiv C$, the equations of motion (\ref{Nvariation})-(\ref{gammavariation}) reduce to
\begin{align}
\hspace{-.3cm}
R-N^{-1}\Delta N+\frac{6\lambda}{G_N}(1-qC)=-\frac{8\pi G_NC}{ \sqrt{h}}\frac{\delta \mathcal{L}_{\rm matter}}{\delta N} \ ,\label{NvariationR}
\end{align}
\begin{equation}
\nabla^j\left({K_{ij}-h_{ij}K }\right)=- \frac{8\pi G_NC}{\sqrt{h}}\frac{\delta \mathcal{L}_{\rm matter}}{\delta N^i} \ ,
\end{equation}
\begin{align}
\hspace{-.3cm}
\frac{1}{N}(\partial_t-\pounds_N)\left({K^{ij}-h^{ij}K}\right)=h^{ij}(K_{mn}K^{mn}-N^{-1}\Delta N)\nonumber \\
{-R^{ij}+KK^{ij}-2K^{il}K^j_{\ l}+N^{-1}\nabla^i\nabla^jN}+\frac{16\pi G_NC}{N\sqrt{h}}\frac{\delta \mathcal{L}_{\rm matter}}{\delta h_{ij}} \ . \label{gammavariationR}
\end{align}
The momentum constraints and the dynamical equations are equivalent to those of Einstein gravity, but with a renormalisation of the matter terms by a factor of $C$. The Hamiltonian constraint is also equivalent to that of Einstein gravity with, besides the renormalisation by $C$ of the matter term, a cosmological constant
\begin{equation}
\Lambda_{C}=\frac{3\lambda}{G_N^2}(2qC-1-C^2) \ . \label{lambdac}
\end{equation}
As discussed in Section 3, this cosmological constant can be positive, negative or zero, depending on the choice of $C$ (and the value of $q$).
We are thus led to the following theorem and corollary:
\bigskip
{\bf Theorem:} \textit{Any solution of Einstein gravity with a cosmological constant plus some matter Lagrangian admitting a foliation with constant $\mathcal{R}$, as defined in (\ref{defr}), is a solution of $n$-DBI gravity (with an appropriate renormalisation of the solution parameters).}
\bigskip
{\bf Corollary:} \textit{Any Einstein space (hence solution of the Einstein equations with a cosmological constant) admitting a foliation such that $R-N^{-1}\Delta N$ is constant - where $R$ and $\Delta$ are the Ricci scalar and the Laplacian of the 3-metric $h_{ij}$ and $N$ is the lapse in the ADM decomposition - is a solution of $n$-DBI gravity (with an appropriate renormalisation of the solution parameters).}
\section{Spherically symmetric solutions of $n$-DBI gravity}\label{solutions}
The most generic spherically symmetric line element reads
\begin{equation}
ds^2=-g_{TT}(R,T)dT^2+g_{RR}(R,T)dR^2+2g_{TR}(T,R)dTdR+g_{\theta\theta}(R,T)d\Omega_2 \ .
\end{equation}
Defining a new radial coordinate $r^2=g_{\theta\theta}(R,T)$ and a new time coordinate $t=t(R,T)$ it is possible to transform this line element into a standard diagonal form, with only two unknown functions: $g_{tt}(t,r)$ and $g_{rr}(t,r)$. Then, the vacuum Einstein equations yield, as the only solution, the Schwarzschild black hole, and as a corollary Birkhoff's theorem, namely that spherical symmetry implies staticity. In $n$-DBI gravity, however, only FPDs are allowed. Thus, only $t=t(T)$ is allowed. As a consequence, the most general line element compatible with spherical symmetry is:
\begin{equation}
ds^2=-N^2(t,r)dt^2+e^{2f(t,r)}\left(dr+e^{2g(t,r)}dt\right)^2+r^2d\Omega_2 \ . \label{ansatz}
\end{equation}
Herein we consider the case with only $r$ dependence for the three metric functions. To include the possibility of charge, we take
\begin{equation}
\mathcal{L}_{\rm matter}= -\frac{1}{16\pi}\sqrt{-g}F_{\mu\nu}F^{\mu\nu} \ , \end{equation}
where ${\bf F}=d{\bf A}$ is the Maxwell 2-form, and to address the purely electric case we take the ansatz:
\begin{equation}
{\bf A}=A(r)dt \qquad \Rightarrow \qquad \mathcal{L}_{\rm matter}= \frac{r^2\sin\theta e^{-f}}{8\pi N}(A')^2\ .\end{equation}
To directly solve the equations of motion (\ref{Nvariation})-(\ref{gammavariation}) is quite tedious. It proves more convenient to consider the reduced system obtained by specialising the action (\ref{DBIgravity}) to our ansatz. One obtains the effective Lagrangian
\begin{equation}
\mathcal{L}_{eff}=\lambda r^2e^{f}N\left[\sqrt{1+\frac{G_N}{6\lambda}\mathcal{R}}-q\right] +\frac{G_N^2}{6N}r^2e^{-f}(A')^2\ , \label{efelag}
\end{equation}
whose equations of motion are a subset of the full set of constraints (\ref{Nvariation})-(\ref{gammavariation}).\footnote{One should check that the final solution satisfies the equations of motion (\ref{Nvariation})-(\ref{gammavariation}), which is indeed the case.} These equations of motion can be solved with full generality (see Appendix A), but it turns out that the most interesting solutions are the subset with $\mathcal{R}$ constant. These are given by
\begin{align}
ds^2= & -\left(1-\frac{2G_N M_1}{r}+{CQ^2\over r^2}+\frac{C_3}{r^4}-\frac{\Lambda_1 r^2}{3}\right)dt^2\nonumber \\
& +\left(\frac{dr}{\sqrt{1-\frac{2G_NM_1}{r}+{CQ^2\over r^2}+\frac{C_3}{r^4}-\frac{\Lambda_1 r^2}{3}}}+\sqrt{\frac{2G_NM_2}{r}+\frac{C_3}{r^4}+\frac{\Lambda_2r^2}{3}}dt\right)^2+r^2d\Omega_2 \ , \label{dRNdS}
\end{align}
where
\begin{equation}
\Lambda_1\equiv \frac{2\lambda}{G_N}\left({qC}-1\right) \ , \qquad
\Lambda_2 \equiv \frac{\lambda}{G_N}\left({4qC}-1-3C^2\right) \ ,\end{equation}
and $Q$, $C$, $M_1$, $M_2$, $C_3$ are integration constants. Moreover, as expected,
\begin{equation}
\Lambda_1+\Lambda_2=\Lambda_C \ ,
\end{equation}
where $\Lambda_C$ is defined in (\ref{lambdac}). This family of solutions is therefore characterised by these five integration constants and the two dimensional constants of the theory $(\lambda,q)$.
\subsection{Analysis of the solutions}
\subsubsection*{Isometry to Reissner-Nordstr\"om-(A)dS}
Performing a coordinate transformation $t\rightarrow T=T(t,r)$,
\begin{equation}
dT=dt-\frac{1}{1-\frac{2G_NM}{r}+{CQ^2\over r^2}-\frac{\Lambda_C r^2}{3}}\sqrt{\frac{\frac{2G_NM_2}{r}+\frac{C_3}{r^4}+\frac{\Lambda_2r^2}{3}}{1-\frac{2G_NM_1}{r}+{CQ^2\over r^2}+\frac{C_3}{r^4}-\frac{\Lambda_1 r^2}{3}}}dr \ , \label{ct}
\end{equation}
where $M\equiv M_1+M_2$, the line element (\ref{dRNdS}) becomes recognisable as the Reissner-Nordstr\"om-(Anti)-de-Sitter solution with mass $M$, charge $\sqrt{C}Q$ and cosmological constant $\Lambda_C$:
\begin{align}
ds^2= -\left(1-\frac{2G_NM}{r}+{CQ^2\over r^2}-\frac{\Lambda_C r^2}{3}\right)dT^2 +\frac{dr^2}{1-\frac{2G_NM}{r}+{CQ^2\over r^2}-\frac{\Lambda_C r^2}{3}}+r^2d\Omega_2 \ . \label{RNdS}
\end{align}
Observe the renormalisation of the charge, as anticipated in Section 2.
Geometrically, the solution we have found is nothing but this standard solution of Einstein gravity, written in an unusual set of coordinates that can be thought of as a superposition of Gullstrand-Painlev\'e and Schwarzschild coordinates. The coordinate transformation (\ref{ct}) is not, however, a foliation preserving diffeomorphism. Thus, in $n$-DBI gravity, (\ref{dRNdS}) and (\ref{RNdS}) describe the same solution if and only if $M_2=C_3=\Lambda_2=0$. Otherwise they are two distinct solutions with different physical invariants (discussed below) which {\it happen to} be mapped by a non-FPD.
\medskip
Notice that there is, for example, no Reissner-Nordstr\"om (A)dS solution in Gullstrand-Painlev\'e coordinates. If the symmetry group of $n$-DBI gravity were the set of general coordinate transformations, we would have found such a solution. In other words, the breakdown of the symmetry to FPDs is explicitly reflected in the form of the solutions (\ref{dRNdS}).
\subsubsection*{Expansion and shear}
As discussed above the shear and expansion transform covariantly under Diff$_{\mathcal{F}}(\mathcal{M})$. For the solution (\ref{dRNdS}) the scalar invariants read
\begin{equation}
\theta =-\frac{3 \left(G_NM_2+\Lambda_2r^3 \right)}{\sqrt{C_3+2 G_NM_2 r^3+ \Lambda_2r^6}}\ ,\qquad
\sigma_{ij}\sigma^{ij}=6\left[\frac{C_3+G_NM_2 r^3}{r^3 \sqrt{C_3+2 G_NM_2 r^3+\Lambda_2r^6}}\right]^2 \ ;
\end{equation}
we also have
\begin{equation}
K_{ij}K^{ij}-K^2=6\left(\frac{C_3}{r^6}-\Lambda_2\right)\ .
\end{equation}
It is manifest that $M_2$, $\Lambda_2$ and $C_3$ enter these scalar invariants and therefore have physical meaning in defining the solution. As usual in Einstein gravity, one can invoke smoothness to rule out some solutions as unphysical. For instance, smoothness of the constant (Riemann) curvature spaces ($M_1=M_2=Q=0$) requires $C_3=0$ to avoid the shearing singularity at $r=0$.
\subsubsection*{Asymptotic behaviour}
Asymptotically ($r\rightarrow \infty$) the solution (\ref{dRNdS}) becomes a constant curvature space
\begin{equation}
R_{\mu\nu\alpha \beta}=\frac{\Lambda_C}{3}\left(g_{\mu\alpha}g_{\nu\beta}-g_{\mu\beta}g_{\nu\alpha}\right) \ . \label{cc}
\end{equation}
With appropriate choices of $C$ one may set either $\Lambda_1=0$ or $\Lambda_2=0$, keeping the other non-vanishing. In both cases one recognises de Sitter space: either written in Painlev\'e-Gullstrand coordinates (with cosmological constant $\Lambda_2$), or written in static coordinates (with cosmological constant $\Lambda_1$). In the latter case, Anti-de-Sitter space may also occur, written in global coordinates. Keeping both $\Lambda_1$ and $\Lambda_2$ non-vanishing one has an unusual slicing of constant curvature spaces. This can represent de Sitter space-time, Anti-de Sitter space-time or Minkowski space, depending on the sign of the total cosmological constant $\Lambda_1+\Lambda_2$. Indeed, the integration constant $C$ controls the magnitude of the cosmological constant:
\begin{align}
C & \ \in \ \ ]q-\sqrt{q^2-1}, q+\sqrt{q^2-1}[ \ , \ \ \ & {\rm de \ Sitter}\nonumber \\
C & \ = q\pm \sqrt{q^2-1} \ , & {\rm Minkowski} \nonumber \\
C & \ {\rm otherwise} & {\rm AdS} \ . \nonumber
\end{align}
dS and Minkowski space solutions can only exist if $q\ge 1$.
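The case distinction is just the sign of the quadratic in (\ref{lambdac}): completing the square gives $2qC-1-C^2=(q^2-1)-(C-q)^2$, so $\Lambda_C>0$ precisely when $|C-q|<\sqrt{q^2-1}$, $\Lambda_C=0$ at $C=q\pm\sqrt{q^2-1}$, and $\Lambda_C<0$ otherwise; real roots, and hence dS or Minkowski asymptotics, indeed require $q\geq 1$.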
\section{Conclusions and Discussion}\label{conclusion}
In this paper we have explored some further properties of $n$-DBI gravity, beyond those studied in \cite{Herdeiro:2011km}, which focused on cosmology.
A crucial property of the theory is the existence of an everywhere time-like vector field ${\bf n}$. We have assumed it to be hyper-surface orthogonal - which is expressed by the relation we have chosen between ${\bf n}$ and the ADM quantities - albeit this condition could be dropped and an even more general framework considered. The existence and role of ${\bf n}$ are reminiscent of Einstein-Aether theory (see \cite{Jacobson:2008aj} for a review).
It follows that the symmetry group of the theory is that which preserves ${\bf n}$ and therefore the foliation defined by it, $\mathcal{F}$. This group, denoted by Diff$_{\mathcal{F}}(\mathcal{M})$, is the group of FPDs, and it is therefore smaller than general coordinate transformations; it is the group that leaves invariant the equations of motion. This means that a non-FPD applied to a solution of $n$-DBI gravity is \textit{not}, in general, a solution of $n$-DBI gravity. Exceptionally, however, a non-FPD may map one solution to another. These should be regarded as physically distinct solutions, perhaps in the same orbit of a larger symmetry group, in the same spirit as many duality symmetries or solution generating techniques that have been considered in the context of supergravity or string theory. In $n$-DBI gravity it is unclear, at the moment, if such a larger symmetry group exists, but an explicit example of a non-FPD mapping (inequivalent) solutions was provided by (\ref{ct}). The solutions are, of course, isometric; in this particular example they are the standard Reissner-Nordstr\"om-(A)dS geometry in two different coordinate systems. Observe, however, the non-trivial dynamics of the theory, where the mass and the cosmological constant can in effect be split between two slicings but not the charge.
The fact that the spherically symmetric solutions of $n$-DBI gravity minimally coupled to a Maxwell field contain precisely the Reissner-Nordstr\"om geometry (with or without a cosmological constant) is remarkable and, as far as we are aware, unparalleled within theories of gravity with higher curvature terms. This leads to the natural question of how generic it is that Einstein gravity solutions are solutions of $n$-DBI gravity (with the same matter content). Following the theorem and corollary presented in Section 2 this question can be recast very objectively as the existence of a foliation with a specific property. How generically can such a foliation be found? Can it be found for the Kerr solution?
Finally, as discussed at the beginning of Section 3, the ansatz compatible with spherical symmetry in $n$-DBI gravity has more degrees of freedom than in Einstein gravity. It will be quite interesting to see if, even in vacuum, such an ansatz can accommodate a non-trivial time dependence, prohibited in Einstein gravity by Birkhoff's theorem, which would manifest the existence of a scalar graviton mode.
\section*{Acknowledgement}
S. H. would like to thank Masaki Shigemori for useful conversations. Y. S. would like to thank the Niels Bohr Institute for their warm hospitality.
This work was partially supported by the Grant-in-Aid for Nagoya
University Global COE Program (G07) and by FCT (Portugal) through the project CERN/FP/116341/2010.
\section{Introduction}
Data dependence analysis answers questions about which memory accesses
in which loop iterations access the same memory location thus creating
a partial ordering (or dependence) between loop iterations.
Determining this information enables
iteration space slicing~\cite{Pugh97},
provides input to race detection, makes automatic parallelization
and associated optimizations such as tiling or communication/computation
overlap possible,
and enables more precise data-flow analysis, or abstract interpretation.
A data dependence exists between two array accesses when (1) they access the same array element, with at least one of the accesses being a write, and (2) each access occurs within the loop bounds of its enclosing statement.
These conditions for a data dependence have been posed
as a constraint-based decision problem~\cite{PW93}, a data-flow
analysis with polyhedral set information~\cite{Feau91}, and
linear memory access descriptors~\cite{Paek2002}.
However, such approaches require a runtime component when analyzing
codes with indirect memory accesses (e.g., A[B[i]], B being an index array)
such as those that occur in sparse matrix codes.
In this paper, we present an approach to improve the precision of
compile-time dependence analysis for sparse matrix codes
and simplification techniques for decreasing the complexity of
any remaining runtime checks.
Sparse matrix computations occur in many codes, such as graph analysis,
partial differential equation solvers, and molecular dynamics
solvers. Sparse matrices save on storage and computation by only
storing the nonzero values in a matrix.
Figure~\ref{fig:csr}
illustrates one example of how the sparse matrix vector multiplication
($\vec{y} = A\vec{x}$) can be written to use a common sparse matrix format
called compressed sparse row (CSR). In CSR, the nonzeros are organized by row,
and for each nonzero, the column for that nonzero is explicitly stored.
Since the percentage of nonzeros in a matrix can be less than 1\%,
sparse matrix formats are critical for many computations to fit
into memory and execute in a reasonable amount of time.
Unfortunately, the advantage sparse matrices have in saving storage compared
to dense matrices comes with the cost of complicating program analysis.
Compile-time only approaches resort to conservative
approximation~\cite{barthou97fuzzy,Paek2002}.
Some approaches use runtime dependence tests to complement compile time
analysis and obtain more precise answers~\cite{PughWonn95,Rus2003,Oancea2012}.
Runtime dependence information is also used to detect task-graph
or wavefront parallelism that arises due to the
sparsity of dependences~\cite{Saltz91,Rauchwerger95, Streit2015}.
\begin{figure}
\centering
\includegraphics[width=0.55\columnwidth]{csr.pdf}
\caption{Compressed Sparse Row (CSR) sparse matrix format.
The {\tt val} array stores the nonzeros by packing each row in contiguous
locations. The {\tt rowptr} array points to the start of each row in the {\tt
val} array. The {\tt col} array is parallel to {\tt val} and maps each nonzero
to the corresponding column. }
\label{fig:csr}
\end{figure}
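Since Figure~\ref{fig:csr} is reproduced as an image, the loop structure it depicts can be sketched as follows (a minimal C version of CSR sparse matrix vector multiplication; variable names follow the figure, and {\tt N} denotes the number of rows):
\begin{lstlisting}[frame=single, columns=flexible]
// y = A*x with A stored in CSR format.
for(i=0; i<N; i++) {
  y[i] = 0.0;
  for(k=rowptr[i]; k<rowptr[i+1]; k++) { // nonzeros of row i
    y[i] += val[k]*x[col[k]];            // col[k] is the column of nonzero k
  }
}
\end{lstlisting}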
The data dependence analysis approach presented here is constraint-based.
Some constraint-based data dependence analysis
approaches~\cite{Nonlinear94,PughWonn95,pugh98constraintbased,Venkat:2016}
represent the index arrays in sparse matrix computations as uninterpreted functions
in the data dependence relations. For example, the loop bounds for the {\tt k} loop in
Figure~\ref{fig:csr} can be represented as $rowptr(i) \leq k < rowptr(i+1)$.
Previous work by others generates simplified constraints at compile time
that can then be checked at runtime
with the goal of finding fully parallel loops~\cite{PughWonn95}.
We build on the previous work of~\cite{Venkat:2016}.
In that work, dependences in a couple of sparse matrix codes were
determined unsatisfiable manually, simplified using equalities found through
a partial ordering of uninterpreted function terms, or approximated by
removing enough constraints to ensure a reasonable runtime analysis complexity.
In this paper, we automate the determination of unsatisfiable data dependences,
find equalities using the Integer Set Library (ISL)~\cite{isl10},
develop ways to detect data dependence subsets for simplifying runtime analysis,
and perform an evaluation with eight popular sparse kernels.
\emph{In this paper, we have two main goals:
(1) prove as many data dependences as possible
to be \textit{unsatisfiable}, thus reducing the number of dependences
that require runtime tests; and
(2) simplify the satisfiable data dependences so that a
runtime inspector for them has complexity less than or equal
to the original code}.
Figure~\ref{fig:overview} shows an overview of our approach.
We use the ISL library~\cite{isl2018} like an SMT solver to determine which
data dependences are unsatisfiable. Next, we manipulate any
remaining dependence
relations using IEGenLib~\cite{Strout16} and ISL libraries to discover equalities
that lead to simplification.
Fortunately, much is known about the index arrays that
represent sparse matrices as well as the assumptions made by
the numerical algorithms that operate on those matrices.
For example, in the CSR representation shown in Figure~\ref{fig:csr},
the {\tt rowptr} index array is strictly monotonically increasing.
In Section~\ref{unSatTheory} we explain how such information
can be used to add more inequality and equality constraints
to a dependence relation. The newly added constraints in some cases cause conflicts, and hence we can detect that those relations are unsatisfiable.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{new_overview_tikz.pdf}
\caption{This figure shows the overview of our approach
to eliminate or simplify dependences from sparse computation
utilizing domain information.
}
\label{fig:overview}
\end{figure}
The dependences that cannot be shown to be unsatisfiable at compile time still require runtime tests. For those dependences, the goal is to simplify the constraints and reduce the overhead of any runtime test. Sometimes index array properties can reduce the complexity of the runtime inspector by exposing equalities that are implied by the dependence constraints in combination with assertions about the index arrays. An equality such as $i = col(m')$ can remove a loop level from the inspector ($i$ in this example), since it allows the value of one inspector iterator to be deduced from another; e.g., we can deduce the $i$ values from the $m'$ values using $i = col(m')$.
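As an illustration of this loop-level removal, consider the following inspector sketch in C (the loop structure and the {\tt record\_dep} callback are hypothetical, chosen only to exhibit the transformation; {\tt nnz} denotes the number of nonzeros):
\begin{lstlisting}[frame=single, columns=flexible]
// Before: enumerate i and m' separately; O(n*nnz) iterations.
for(i=0; i<n; i++)
  for(mp=0; mp<nnz; mp++)
    if(i == col[mp] /* && remaining dependence constraints */)
      record_dep(i, mp);

// After: the equality i = col(m') determines i; O(nnz) iterations.
for(mp=0; mp<nnz; mp++) {
  i = col[mp];  // i is deduced from m', so its loop disappears
  /* check remaining dependence constraints, then */
  record_dep(i, mp);
}
\end{lstlisting}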
Another simplification involves determining when data dependence relations
for a code are involved in a subset relationship. When this occurs,
runtime analysis need only occur for the superset.
This paper makes the following contributions:
\begin{enumerate}
\item An automated approach for the determination of unsatisfiable
dependences in sparse codes.
\item An implementation of an instantiation-based decision procedure that discovers
equality relationships in satisfiable dependences.
\item An approach that discovers
subset relationships in satisfiable dependences thus reducing run-time
analysis complexity further.
\item A description of common properties of index arrays arising in sparse
matrix computations, expressed as universally quantified constraints.
\item Evaluation of the utility of these properties for determining
unsatisfiability or simplifying dependences from a
suite of real-world sparse matrix codes.
\end{enumerate}
\section{Background: Data Dependence Analysis}
\label{sec:background}
Data dependence analysis of a loop nest is a common code analysis that is used
in different applications, such as automatic parallelization~\cite{Brandes88}
and data race detection~\cite{AtzeniGRALSLPM16}.
This section reviews the data dependence analysis process and how that process
differs when analyzing sparse matrix codes. Then, we review some of the applications
of data dependence analysis including an example of its use for
finding wavefront parallelism in sparse codes.
\subsection{Data Dependence Analysis}
\label{sec:datadep}
A data dependence occurs between two
iterations of a loop when both of the iterations access the same memory
location and at least one of the accesses is a write. Data dependence
constraints are of the following form:
\[
Dep : (\exists \vec{I},\vec{I'})(\vec{I} \prec \vec{I'} \wedge F(\vec{I})=G(\vec{I'})
\wedge Bounds(\vec{I}) \wedge Bounds(\vec{I'})),
\]
where $\vec{I}$ and $\vec{I'}$ are iteration vector instances from the same loop
nest, $F$ and $G$ are array index expressions to the same array
with at least one of the accesses being a write, and $Bounds(\vec{I})$ expands
to the loop nest bounds for the $\vec{I}$ iteration vectors.
In this paper, the term \emph{dependence relation} is
used interchangeably with dependence constraints by viewing them as a relation
between $\vec{I}$ and $\vec{I'}$.
\begin{figure}
\begin{lstlisting}[columns=flexible,frame=single,keepspaces=true]
// forward solve assuming a lower triangular matrix.
for(i=0; i<N; i++) {
tmp = f[i];
for(j=0; j<i; j++) {
S1: tmp -= A[i][j]*u[j];
}
S2:u[i] = tmp / A[i][i];
}
\end{lstlisting}
\caption{Forward solve for a dense matrix.}
\label{lst:fsdense}
\end{figure}
For example, consider the dense matrix implementation for forward solve
in Figure~\ref{lst:fsdense}. Forward solve solves for the vector
$\vec{u}$ in the equation $A\vec{u}=\vec{f}$ assuming that the matrix is lower
triangular (i.e., nonzeros appear only on the diagonal or below, as shown in the
example in Figure~\ref{fig:csr}). The dense forward solve code
has the following dependences for the outermost loop {\tt i}:
\begin{itemize}
\item A loop-carried dependence due to the scalar {\tt tmp}
variable. However, since {\tt tmp} is written before being read in
each iteration of the {\tt i} loop, it is privatizable,
which means each processor in a potential parallelization of the {\tt i}
loop can be given its own private copy of {\tt tmp}.
\item A loop-carried dependence between the write {\tt u[i]}
in Statement {\tt S2} and the read {\tt u[j]} in Statement {\tt S1}
with constraints
\begin{align*}
(\exists i,j, i',j')(&i<i' \wedge i=j' \wedge 0 \leq i,i' <N \wedge
0 \leq j <i \wedge 0 \leq j' <i').
\end{align*}
\end{itemize}
The iterators $i'$ and $j'$ are different instances of $i$ and $j$.
This dependence due to the writes and reads to array {\tt u} is satisfiable
because the computation for any row $i'$ depends on all previous rows $i< i'$.
This means that the outer loop in dense forward solve is fully ordered
due to data dependences
and therefore not parallelizable.
\subsection{Sparse Codes and Runtime Parallelism}
\label{sec:wavefront}
For sparse codes, after compile time dependence analysis, some remaining
dependences may involve index arrays as subscript expressions.
The data dependence constraints can use uninterpreted functions to
represent the index arrays at compile time.
Because the values of the index arrays are unknown until run time,
dependences that cannot be disproven at compile time may require runtime dependence testing.
However, even when dependences arise at runtime, it still may be possible to implement
a sparse parallelization called wavefront parallelization.
Identifying wavefront parallelizable loops combines
compile time and runtime analyses. The compiler generates inspector
code to find the data dependence graph at runtime.
We now consider the sparse forward solve with Compressed Sparse Row
CSR format in Figure~\ref{lst:FsCSR}. We are interested in detecting
loop-carried dependences of the outermost loop. There are two pairs of
accesses on array {\tt u} in {\tt S1} and {\tt S2} that can potentially cause
loop-carried dependences: {\tt u[col[k]]} (read), {\tt u[i]} (write); and {\tt
u[i]} (write), {\tt u[i]} (write). The constraints for these dependence tests are shown in the following; each pair of accesses gives rise to two dependence relations, one for each ordering of the iterations.
\begin{figure}
\begin{lstlisting}[frame=single, columns=flexible]
// Forward solve assuming a lower triangular matrix.
for(i=0; i<N; i++) {
tmp = f[i];
for(k=rowptr[i]; k<rowptr[i+1]-1; k++) {
S1: tmp -= val[k]*u[col[k]];
}
S2:u[i] = tmp / val[rowptr[i+1]-1];
}
\end{lstlisting}
\caption{Forward solve for a sparse matrix in compressed
sparse row (CSR).}
\label{lst:FsCSR}
\end{figure}
Dependences for the write/write {\tt u[i]} in {\tt S2}:
\begin{gather*}
(1) \;\; i = i' \wedge i < i' \wedge 0 \leq i < N \wedge 0 \leq i' < N \; \\
\wedge rowptr(i) \leq k < rowptr(i+1)
\wedge rowptr(i') \leq k' < rowptr(i'+1)
\end{gather*}
\begin{gather*}
(2)\;\; i = i' \wedge i' < i \wedge 0 \leq i < N \wedge 0 \leq i' < N \; \\
\wedge rowptr(i) \leq k < rowptr(i+1)
\wedge rowptr(i') \leq k' < rowptr(i'+1)
\end{gather*}
Dependences for read {\tt u[col[k]]} and write {\tt u[i]} in {\tt S1}, and {\tt S2}:
\begin{gather*}
(3)\;\; i = col(k') \wedge i < i' \wedge 0 \leq i < N \wedge 0 \leq i' < N \;
\wedge \\ rowptr(i') \leq k' < rowptr(i'+1)
\end{gather*}
\begin{gather*}
(4)\;\; i = col(k') \wedge i' < i \wedge 0 \leq i < N \wedge 0 \leq i' < N \;
\wedge \\ rowptr(i') \leq k' < rowptr(i'+1)
\end{gather*}
These dependences can be tested at runtime when concrete interpretations for
the index arrays (contents of arrays {\tt rowptr} and {\tt col}) are available.
The runtime dependence analyzers, called \textit{inspectors}~\cite{Saltz91},
may be generated from the dependence constraints~\cite{Venkat:2016}.
Suppose the matrix
in the forward solve code in Figure~\ref{lst:FsCSR}
had the nonzero pattern as in Figure~\ref{fig:csr}. The runtime check
would create the dependence graph for this example based on the
four dependences above as shown in Figure~\ref{fig:dep_graph}.
Once the dependence graph is constructed, a breadth-first traversal of the graph can derive sets of iterations that may be safely scheduled in parallel without dependence violations, with each level set being called a wavefront, as shown in Figure~\ref{fig:dep_graph}.
\begin{figure}
\centering
\includegraphics[width=.5\columnwidth]{dataDepGraph.pdf}
\caption{Dependence graph for forward solve for sparse matrix
in Figure~\ref{fig:csr}.}
\label{fig:dep_graph}
\end{figure}
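Dependences (1) and (2) need no runtime test, since each contains both $i=i'$ and a strict inequality between $i$ and $i'$; only (3) and (4) require inspection. An inspector for them, combined with the wavefront computation, can be sketched as follows (a minimal C version; the {\tt wave} array and the single-pass level computation are illustrative choices that exploit the lower triangular structure, where every dependence source {\tt col[k]} of row {\tt i} satisfies {\tt col[k]} $<$ {\tt i}):
\begin{lstlisting}[frame=single, columns=flexible]
// wave[i]: wavefront (level set) of outer-loop iteration i.
for(i=0; i<N; i++) {
  level = 0;
  for(k=rowptr[i]; k<rowptr[i+1]-1; k++) { // off-diagonal nonzeros
    if(wave[col[k]]+1 > level)
      level = wave[col[k]]+1;  // i must run after its source col[k]
  }
  wave[i] = level;
}
\end{lstlisting}
All iterations with the same {\tt wave} value may then be executed in parallel, one wavefront at a time.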
\subsection{Applications of the Sparse Data Dependence Analysis}
Besides wavefront parallelism, there are many other uses for
sparse data dependence analysis. Any application of sparse data dependence
analysis would benefit from a reduction in the number of data dependences
that need to be inspected at runtime and from any complexity reduction
of data dependences that do require runtime inspection.
Here we summarize some of those
applications.
{\bf Race detection:}
Dynamic race detection is an essential prerequisite to the
parallelization of existing sequential codes.
While the front-end static analysis methods employed
in these checkers~\cite{Atzeni2016} can often suppress race
checks on provably race-free loops, they fail
to do so when presented with non-affine access patterns
that occur in sparse matrix codes.
In addition to significantly increasing runtimes,
the shadow memory cells employed by dynamic race
checkers also increase memory pressure, often by a factor of four.
The techniques presented in this paper can help suppress
race checks when we can prove the independence of loop iterations.
{\bf Dynamic program slicing:}
Pugh and Rosser introduced the concept of iteration space slicing where
program slicing is done on a loop iteration basis using
Presburger representations~\cite{Pugh97}.
Similar dynamic approaches for tiling across loops in sparse codes
were presented by various groups~\cite{dimeEtna00,StroutIJHPCA}.
All of these techniques would require runtime data dependence
analysis, thus disproving dependences or reducing the complexity
of inspecting dependences at runtime would be applicable.
{\bf High-level synthesis:}
Optimizations in high-level synthesis (HLS) also use runtime dependence checks.
In HLS, it is important to pipeline the innermost loops to get
efficient hardware. Alle et al. have proposed using runtime dependence checks
to dynamically check if an iteration is in conflict with those currently in the
pipeline, and add delays only when necessary~\cite{Alle2013}.
{\bf Distributed memory parallelization:}
Another possible application of our work can be found in the work by~\cite{Ravishankar2015}.
The authors produce distributed parallel code that uses MPI for loops that may have indirect loop bounds and/or indirect array accesses.
Where indirect accesses are involved, the read and write sets of each process are computed via an inspector to determine whether each process reads or writes data that is owned by other processes.
Basumallik and Eigenmann use run-time inspection of data dependences
to determine how to reorder a loop to perform computation and communication
overlap~\cite{Basumallik06}.
\section{Automating (UN)Satisfiability Analysis for Sparse Data Dependences}
For any application of data dependence analysis for sparse codes,
the best outcome is to determine that a potential data dependence
is unsatisfiable. Any dependence that is unsatisfiable does not need to be checked at runtime.
Previous work used domain-specific knowledge about the index
arrays used to represent sparse matrices to guide manual determination
of unsatisfiable data dependences~\cite{Venkat:2016}.
In this paper, we show how to automate this process
by specifying the domain-specific knowledge as universally
quantified constraints on uninterpreted functions and then
using instantiation methods similar to those used by
SMT solvers to produce more constraints that can cause
conflicts.
\subsection{Detecting Unsatisfiable Dependences Using Domain Information}
\label{unSatTheory}
As an example of how domain information can be used to
show dependences are unsatisfiable, consider the following
constraints from a dependence relation:
$$i' < i\; \wedge\; k = m' \wedge\; 0 \le i,i' <n \wedge\;
rowptr(i) \leq k < rowptr(i+1) \wedge\; rowptr(i'-1) \leq m' < rowptr(i').$$
Some relevant domain information is that
the {\tt rowptr} index array
is strictly monotonically increasing:
$$(\forall x_1,x_2)(x_1 < x_2 \implies rowptr(x_1) < rowptr(x_2)).$$
Since the dependence relation in question contains the constraint
$i' < i$, instantiating the above strict monotonicity assertion
adds the constraint $rowptr(i') < rowptr(i)$.
However, combining the constraints $k = m'$, $rowptr(i) \leq k$, and
$m' < rowptr(i')$, we know that $rowptr(i) < rowptr(i')$.
This leads to a conflict,
$$rowptr(i) < rowptr(i') \; \wedge \; rowptr(i') < rowptr(i).$$
This conflict would indicate the dependence in question was
unsatisfiable and therefore does not require any runtime analysis.
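
To make the example concrete, the following is a minimal sketch of this conflict check, assuming the Z3 Python bindings (\texttt{z3-solver}); \texttt{ip} and \texttt{mp} are hypothetical stand-ins for $i'$ and $m'$.
\begin{lstlisting}[columns=flexible,frame=tlrb,language=Python]{Name}
# Sketch using the Z3 Python API (pip package: z3-solver).
from z3 import Ints, Function, IntSort, Solver, unsat

i, ip, k, mp, n = Ints("i ip k mp n")   # ip and mp stand for i' and m'
rowptr = Function("rowptr", IntSort(), IntSort())

s = Solver()
# Constraints of the dependence relation above.
s.add(ip < i, k == mp, 0 <= i, i < n, 0 <= ip, ip < n,
      rowptr(i) <= k, k < rowptr(i + 1),
      rowptr(ip - 1) <= mp, mp < rowptr(ip))
# One instance of strict monotonicity, with x1 = i' and x2 = i;
# the antecedent i' < i already holds in the relation.
s.add(rowptr(ip) < rowptr(i))
print(s.check())   # expected: unsat, so the dependence cannot occur
\end{lstlisting}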
\subsection{Universally Quantified Assertions about Index Arrays}
Even if a formula that includes uninterpreted function calls is satisfiable in
its original form, additional constraints about the uninterpreted functions may
make it unsatisfiable. This has been exploited abundantly in the
program verification community to obtain more precise results~\cite{Bradley2006,
Habermehl2008, Ge2009}. A common approach to express such additional
constraints is to formulate them as universally quantified assertions.
For instance, \cite{Bradley2006} use the following assertion to indicate
that array $a$ is sorted within a certain domain:
$$(\forall x_1, x_2)(i < x_1 \le x_2 < j \implies a(x_1) \le a(x_2) ).$$
There are several methods that SMT solvers use to reason about quantified
formulas, the most common one being quantifier
instantiation~\cite{Bradley2006,Ge2009,moura2007,
reynolds2014finding,reynolds2015counterexample,Loding2017}.
In quantifier instantiation, instances of universally quantified assertions,
where the universally quantified variables are replaced with ground terms,
are added to the original formula.
Any of the added constraints might contradict constraint(s) in the formula
that would show the original formula is unsatisfiable.
For the general case of quantified first order logic, there is no complete
instantiation procedure; the instantiation can go on forever
without ever establishing whether the formula is satisfiable or unsatisfiable.
In some limited cases, the quantified assertions can be completely
replaced by a set of quantifier instances to construct an equisatisfiable
quantifier-free formula~\cite{Bradley2006, Ge2009}.
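
For illustration, a solver such as Z3 accepts the quantified assertion directly and applies its own instantiation heuristics; a minimal sketch, again assuming the Z3 Python bindings:
\begin{lstlisting}[columns=flexible,frame=tlrb,language=Python]{Name}
# Sketch using the Z3 Python API; Z3 instantiates the quantifier itself.
from z3 import Ints, Function, IntSort, Solver, ForAll, Implies, And

x1, x2 = Ints("x1 x2")
i, j = Ints("i j")
a = Function("a", IntSort(), IntSort())

s = Solver()
# The sortedness assertion above, in quantified form.
s.add(ForAll([x1, x2],
             Implies(And(i < x1, x1 <= x2, x2 < j), a(x1) <= a(x2))))
# Ground facts contradicting sortedness between positions 5 and 6.
s.add(i < 5, 6 < j, a(5) > a(6))
print(s.check())   # expected: unsat, found by quantifier instantiation
\end{lstlisting}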
Combining the constraints from dependences with arbitrary universally
quantified assertions would create a first order
logic theory that in general is undecidable. Undecidability would
imply that we cannot implement an algorithm for deciding the formulas
that would always terminate.
Numerous works such as~\cite{Bradley2006,Habermehl2008,Ge2009} present
different decidable fragments of first order logic.
The approach that these works use to obtain decidable fragments is to
put restrictions on the types of universally quantified assertions
that can be used.
The restrictions are usually on the syntax of the allowed
assertions~\cite{Bradley2006,Habermehl2008},
and sometimes on specific properties that an instantiation
procedure for the assertions must have~\cite{Ge2009}.
We perform a terminating instantiation that is sound but incomplete.
In other words, the dependences we determine unsatisfiable
are in fact unsatisfiable, but we may characterize some unsatisfiable
constraints as maybe satisfiable.
\subsection{Domain Information about Index Arrays}
\label{sec:domain-assertions}
We represent domain information about index arrays as universally
quantified assertions. In this section, we illustrate
some assertions relevant to numerical benchmarks
and relate the corresponding assertions to the existing
theory fragments.
Table~\ref{tab:diffInfo} in Section~\ref{sec:eval} lists all the
assertions we use in the evaluation. Below are some example properties.
For the forward solve with compressed sparse
row (CSR) code in
Figure~\ref{lst:FsCSR}, we know the following:
\begin{itemize}
\item \textbf{Monotonic index arrays}:
The row index array values increase monotonically.
This property of index arrays can be expressed with an
assertion about the uninterpreted function symbol
that represents the index array.
For instance, in the example the $rowptr()$ function is monotonically increasing.
If we assume that all the sparse matrix rows have at least one nonzero,
then $rowptr()$ is strictly monotonically increasing.
This assertion can be encoded as follows:
$$(\forall x_1,x_2)(x_1<x_2 \iff rowptr(x_1)<rowptr(x_2)).$$
\item \textbf{Lower Triangular Matrix}:
The forward solve algorithm shown in Figure~\ref{lst:FsCSR}
operates on lower triangular matrices.
For the CSR format that leads to the following domain-specific
assertion:
$$(\forall x_1,x_2)(x_1<rowptr(x_2) \implies col(x_1)<x_2 ).$$
This indicates that nonzeros of rows before row $i$ have columns less than $i$;
both assertions are encoded in the sketch following this list.
\end{itemize}
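
The following is a minimal sketch encoding these two assertions with the Z3 Python bindings (\texttt{z3-solver}); the function names mirror the index arrays above:
\begin{lstlisting}[columns=flexible,frame=tlrb,language=Python]{Name}
# Sketch: encoding the two assertions with the Z3 Python API.
from z3 import Ints, Function, IntSort, ForAll, Implies

x1, x2 = Ints("x1 x2")
rowptr = Function("rowptr", IntSort(), IntSort())
col    = Function("col",    IntSort(), IntSort())

# Strict monotonicity of rowptr (an iff, assuming no empty rows).
mono = ForAll([x1, x2], (x1 < x2) == (rowptr(x1) < rowptr(x2)))

# Lower triangular: entries stored before the start of row x2
# have column indices smaller than x2.
tri = ForAll([x1, x2], Implies(x1 < rowptr(x2), col(x1) < x2))
# mono and tri can now be added to a solver alongside a dependence.
\end{lstlisting}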
The domain information in Table~\ref{tab:diffInfo} in Section~\ref{sec:eval}
can be represented with following
general forms:
$$
1. \;\; (\forall x_1, x_2)(x_1+c_1 \le x_2 \implies f(x_1)+c_2 \le f(x_2))
$$
$$
2. \;\; (\forall x_1, x_2)(x_1+c_1 \le x_2 \implies f(x_1)+c_2 \le g(x_2))
$$
$$
3. \;\; (\forall x_1, x_2)(x_1+c_1 \le f(x_2) \implies g(x_1)+c_2 \le x_2)
$$
where $c_1$ and $c_2$ can be 0 or 1.
The first and second assertions fit the decidable LIA fragment that is
presented by \cite{Habermehl2008}. However, to the best of
our knowledge the third assertion form does not fit any previously presented
decidable fragment, and its decidability remains open.
Modern SMT solvers are equipped with heuristic-based quantifier instantiations
to reason about quantified formulas. Existing techniques for quantifier
instantiation can construct the set of instantiations for deciding some of our
assertions, e.g., non-strict monotonicity, but not for all of them. For
unsatisfiable formulas with universal quantifiers where the solver only needs a
small set of relevant instances to find contradicting constraints, the existing
heuristics can work well. For all our examples, both Z3 and CVC4 were able to
identify all unsatisfiable dependences. The solvers also time out for
satisfiable ones given a small timeout (5 seconds). This is as expected, since
specific instances of universally quantified formulas usually do not help in
proving that the quantified formula is satisfiable.
Nonetheless, we cannot just use a conventional SMT solver like Z3 in
our context.
The key reason is that we are not just interested in satisfiability of the dependence
constraints. If unsatisfiability cannot be proven statically, runtime checks
will be generated. It is important for these runtime checks to be as fast as
possible, and hence we are also interested in using the assertions to decrease
the cost of runtime checks.
For example, additional equalities mean the data dependence inspector
iteration space
has lower dimensionality, thus reducing the algorithmic complexity of
runtime checks. We illustrate the complexity reduction through instantiation
of assertions with two examples in Section~\ref{sec:simplifying-equality}.
\subsection{Detecting Unsatisfiable Sparse Dependences}
\label{sec:unsat-procedure}
Instantiation-based quantifier elimination is a natural choice for our context,
since we seek to either prove unsatisfiability or find additional constraints
that simplify runtime checks. Unfortunately, our assertions are not fully
covered by decidable fragments~\cite{Bradley2006,Ge2009} where equisatisfiable
quantifier-free formulas can always be obtained. Nonetheless, using
inspiration from the decidable fragments~\cite{Bradley2006,Ge2009} we have a
procedure that detects all unsatisfiable examples from our benchmark suite that
represent a wide range of numerical analysis codes.
Note that we can write our general assertions (1), (2), and (3) from
Section~\ref{sec:domain-assertions} in the form:
$$\forall \vec{x},\; \varphi_{I}(\vec{x}) \implies \varphi_{V}(\vec{x})$$
where $\vec{x}$ denotes the vector of quantified variables,
$\varphi_{I}(\vec{x})$ denotes the antecedent of the assertion,
and $\varphi_{V}(\vec{x})$ denotes the consequent of the assertion.
The following definitions describe our procedure to instantiate the
quantified variables
and then use an SMT solver to detect unsatisfiability.
\vspace{.1in}
\noindent
{\bf Definition 1 (E)}
We define E to be the set of expressions used as arguments to
all uninterpreted function calls in the original set of constraints.
We use this set to instantiate quantified assertions.
\vspace{.1in}
\noindent
{\bf Definition 2 ($UNSAT_\psi$)}
\begin{enumerate}
\itemsep2pt
\item The inference rule for turning a universally
quantified predicate into a quantifier-free predicate is as follows:\\
\vspace{1pt}
\inference[\textsc{forall}]
{
\psi [ \forall \vec{x},\; \varphi_{I}(\vec{x}) \implies \varphi_{V}(\vec{x}) ]
}
{
\psi [ \bigwedge_{\vec{x} \in E^n} (\varphi_{I}(\vec{x}) \implies \varphi_{V}(\vec{x})) ]
}\\
\vspace{1pt}
where $E^n$ is the set of vectors of size $n=|\vec{x}|$ produced as Cartesian product of $E$.
\item Solve the quantifier-free formula $\psi$ produced by step 1 with an SMT solver that
decides the union of the quantifier-free theories of uninterpreted functions with equality
and Presburger arithmetic.
\end{enumerate}
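
A minimal sketch of the $UNSAT_\psi$ procedure, assuming Z3 as the back-end quantifier-free solver; the function \texttt{unsat\_psi}, the ground-expression list, and the lambda-encoded assertion are illustrative names, not part of our implementation:
\begin{lstlisting}[columns=flexible,frame=tlrb,language=Python]{Name}
from itertools import product
from z3 import Ints, Function, IntSort, Solver, Implies, unsat

def unsat_psi(constraints, exprs, assertions, arity=2):
    """Instantiate each (antecedent, consequent) assertion over E^arity,
    conjoin all instances, and check the quantifier-free formula."""
    s = Solver()
    s.add(*constraints)
    for ante, cons in assertions:
        for xs in product(exprs, repeat=arity):
            s.add(Implies(ante(*xs), cons(*xs)))
    return s.check() == unsat

# Example: the earlier rowptr dependence with strict monotonicity.
i, ip, k, mp = Ints("i ip k mp")
rowptr = Function("rowptr", IntSort(), IntSort())
deps = [ip < i, k == mp, rowptr(i) <= k, k < rowptr(i + 1),
        rowptr(ip - 1) <= mp, mp < rowptr(ip)]
E = [i, i + 1, ip, ip - 1]   # argument expressions of rowptr calls
mono = (lambda u, v: u < v, lambda u, v: rowptr(u) < rowptr(v))
print(unsat_psi(deps, E, [mono]))   # expected: True
\end{lstlisting}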
\textbf{Correctness:}
Although the above procedure is incomplete, we do have soundness.
This means if a dependence is determined unsatisfiable, it in fact
is not a dependence. However, if a dependence is determined
satisfiable at compile time, it could be that at runtime the actual values of index
arrays lead to the dependence not being satisfiable.
Since our procedure is conservatively correct, it is sound.
To show that the decision procedure $UNSAT_\psi$ is sound,
we need to show that if the original formula $\psi$ is
satisfiable, then so is the unquantified formula $\psi'$,
$$\psi \in SAT \implies \psi' \in SAT.$$
This is equivalent to
$$ \psi' \notin SAT \implies \psi \notin SAT.$$
Since universal quantification is being replaced with
specific expression instantiations to create $\psi'$,
$\psi'$ is a potentially weaker set of constraints
than $\psi$.
This means that $\psi'$ is a conservative approximation
of $\psi$. As such, if $\psi'$ is not satisfiable,
then $\psi$ is not satisfiable.
\section{Simplifying the Dependences Utilizing Equalities}\label{sec:simplifying-equality}
The finite instantiation proposed in Section~\ref{sec:unsat-procedure} can
prove many of the dependence relations to be unsatisfiable. However, some of
the relations remain satisfiable, thus requiring runtime checks. It is then
important to minimize the runtime cost by simplifying the dependence relations
as much as possible. In this section, we discuss one such simplification,
which utilizes additional equalities discovered after finite instantiation.
\subsection{Discovering New Equality Constraints and Their Usefulness}
Sometimes index array properties
can help reduce the complexity of runtime inspectors
through introducing equalities to the dependence's constraints.
The new equalities are discoverable after instantiating the
universally quantified assertions and combining those with other inequality
and equality relationships.
For instance, consider the following set of constraints;
it is a satisfiable dependence that needs a runtime inspector with
complexity of $O(n^2)$ to traverse the space of values for $i$ and $i'$:
$$(i \le i') \wedge (f(i') \le f(i)) \wedge (0 \le i,i' <n).$$
Assume we also know the following universally quantified rule
about the uninterpreted function $f$ (strict monotonicity):
$$(\forall x_1,x_2)(x_1 < x_2 \implies f(x_1) < f(x_2)).$$
With any universally quantified implication,
if the left side of the implication is true, then the right side must
be true to satisfy the assertion (i.e., $p \implies q$). It is also
the case that the contrapositive is true (i.e., $\neg q \implies \neg p$).
For this example, the negation of the right-hand side of the implication is
$f(x_2) \le f(x_1)$, which matches one of the constraints in the dependence.
Thus the negation of the left-hand side must be true and therefore
$x_2 \le x_1$. With $x_1$ matching $i$ and $x_2$ matching $i'$,
we find $i' \le i$. Thus an equality has been found:
$$ (i \le i' \wedge i' \le i) \implies i = i' $$
Using this equality we can iterate over either $i$ or $i'$
in the inspector and calculate the other by taking advantage of
the equality. The runtime inspection would then
have a complexity of only $O(n)$.
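
This derivation can be checked mechanically: the constraints, together with a single instantiated assertion, imply the equality exactly when adding $i \neq i'$ makes the system unsatisfiable. A minimal sketch assuming the Z3 Python bindings:
\begin{lstlisting}[columns=flexible,frame=tlrb,language=Python]{Name}
# Sketch: verify the implied equality with the Z3 Python API.
from z3 import Ints, Function, IntSort, Solver, Implies, unsat

i, ip, n = Ints("i ip n")    # ip stands for i'
f = Function("f", IntSort(), IntSort())

s = Solver()
s.add(i <= ip, f(ip) <= f(i), 0 <= i, i < n, 0 <= ip, ip < n)
# One instance of strict monotonicity, with x1 = i and x2 = i'.
s.add(Implies(i < ip, f(i) < f(ip)))
s.add(i != ip)               # negate the candidate equality
print(s.check())             # expected: unsat, hence i = i' is implied
\end{lstlisting}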
\begin{figure*}
\begin{lstlisting}[firstnumber=1,frame=tlrb,escapechar=|,
mathescape=true, numbers=left, columns=flexible ]{Name}
for(int colNo = 0; colNo < n; ++colNo) {
std::fill_n(f,n,0); //Zero initialization
for(int nzNo = c[colNo]; nzNo < c[colNo + 1]; ++nzNo)
f[r[nzNo]] = values[nzNo];
for(int i = prunePtr[colNo]; i < prunePtr[colNo + 1]; ++i){
  bool sw = false;
  for (int l = lC[pruneSet[i]]; l < lC[pruneSet[i] + 1]; ++l){
if (lR[l] == colNo && !sw) {
tmp = lValues[l];
sw=true;
}
if(sw){
S1: f[lR[l]] -= lValues[l] * tmp;
}
}
}
if (f[colNo] <= 0) return false; //The matrix is not SPD
lValues[lC[colNo]] = sqrt(f[colNo]);
for(int j = lC[colNo] + 1; j < lC[colNo + 1]; ++j)
S2: lValues[j] = f[lR[j]] / sqrt(f[colNo]);
}
\end{lstlisting}
\caption{Static Left Cholesky code, which is a modified version of
Left Cholesky code~\cite{cheshmi2017sympiler}. }
\label{fig:lChol}
\end{figure*}
\subsection{Finding Equalities Example: Left Cholesky}
\label{sec:example}
For a more realistic example from one of the benchmarks used
in the evaluation, consider a maybe satisfiable dependence from
our Static Left Cholesky code shown in Figure~\ref{fig:lChol}. The following
dependence comes from a read in S1 (\code{lValues[l]}) and a
write in S2 (\code{lValues[j]}):
\begin{gather*}
\{ [colNo] \Rightarrow [colNo']\; : \; \exists j, i', l', (j = l') \wedge (colNo < colNo') \\
\wedge (0 \le colNo < n) \wedge (0 \le colNo' < n)
\wedge (lcolptr(pruneSet(i')) \le l' < lcolptr(pruneSet(i')+1)) \\
\wedge (prunePtr(colNo') \le i' < prunePtr(colNo'+1))
\wedge (lcolptr(colNo) < j < lcolptr(colNo+1)) \}
\end{gather*}
An inspector for this dependence is
shown in Figure~\ref{leftChInsBefore}.
We do not need loops for $j$ and $l'$
in the inspector, because they can be projected out.
Note that the index array \code{prunePtr} points to nonzeros in
the sparse matrix, ranging from 0 to the number of nonzeros, $nnz$,
and $n$ denotes the number of columns.
The two loops, \code{colNop} and \code{ip}, combined are traversing
all the nonzero values and hence have a combined complexity of $nnz$,
followed by the \code{colNo} loop traversing over columns, $n$.
Consequently, the complexity of this inspector is $n(nnz)$.
The equality $colNo = pruneSet(i')$ is found using the additional knowledge
that {\tt lcolptr} is strictly monotonically increasing as demonstrated in the
following.
We have the following constraints in the original dependence:
\begin{align*}
&lcolptr(pruneSet(i')) \le l' < lcolptr(pruneSet(i')+1)\\
&\wedge \;\; j = l' \wedge lcolptr(colNo) < j < lcolptr(colNo+1),
\end{align*}
which gives the following through transitivity:
\begin{align*}
& lcolptr(pruneSet(i')) < lcolptr(colNo+1)\;\; \\
&\wedge\;\; lcolptr(colNo) < lcolptr(pruneSet(i')+1).
\end{align*}
We have the following assertion:
$$(\forall x_1,x_2)(lcolptr(x_1) < lcolptr(x_2) \implies x_1 < x_2)$$
where two instances $x_1=pruneSet(i'),x_2=colNo+1$ and $x_1=colNo,x_2=pruneSet(i')+1$ give new constraints:
\begin{align*}
&pruneSet(i')<colNo+1 \wedge colNo<pruneSet(i')+1\\
\Rightarrow \;\; &pruneSet(i')\le colNo \wedge colNo\le pruneSet(i')\\
\Rightarrow \;\; &colNo = pruneSet(i')
\end{align*}
The optimized inspector based on new discoveries
is shown in Figure ~\ref{leftChInsAfter}.
We do not need loop for $colNo$, since we can get its values from
$pruneSet(i')$ based on
$colNo = pruneSet(i')$.
This simplified inspector
has a complexity of $(nnz)$,
which is significantly better than the original $n(nnz)$.
\begin{figure}
\noindent\begin{subfigure}{.48\textwidth}
\begin{lstlisting}[firstnumber=1,frame=tlrb,escapechar=|,
mathescape=true, numbers=left ]{Name}
for(colNop = 0; colNop <n ; colNop++)
for(ip = prunePtr(colNop);
ip < prunePtr(colNop+1); ip++) {
for(colNo=0; colNo<n; colNo++) {
if(lcolptr(colNo) < lcolptr(colNo+1) && ...)
// Add a flow dependence between colNo and ColNop
}
}
\end{lstlisting}
\caption{Inspector with the original dependence constraints.}
\label{leftChInsBefore}
\end{subfigure}\hfill
\begin{subfigure}{.48\textwidth}
\begin{lstlisting}[frame=tlrb,
numbers=left,
escapechar=|,
mathescape=true,
firstnumber=1]{Name}
for(colNop = 0; colNop <n ; colNop++)
for(ip = prunePtr(colNop);
ip < prunePtr(colNop+1); ip++) {
colNo = pruneSet(ip);
if(lcolptr(colNo) < lcolptr(colNo+1) && ...)
// Add a flow dependence between colNo and ColNop
}
\end{lstlisting}
\caption{Inspector with an additional equality: $colNo = pruneSet(i')$.}
\label{leftChInsAfter}
\end{subfigure}
\caption{Inspector pseudo-code for dependence constraints
in Section~\ref{sec:example}, before and after utilizing index array properties
to add new equalities. We obtain the equality $colNo = pruneSet(i')$
using the properties as described in Section~\ref{sec:example}. Notice how this
equality is used to remove the loop iterating over {\tt colNo} in Figure~\ref{leftChInsBefore}.}
\label{fig:lCholins}
\end{figure}
\section{Simplifying the Dependences Utilizing Superset Relationship}
\label{sec:simplifying-superset}
Another way to deal with data dependence relations that cause complex
runtime analysis is to remove them from consideration by determining that they are
subsets of less expensive relations.
Consider two dependence relations $R1$ and $R2$, and two iterations of the
outermost loop $i$ and $i'$. If we can show that for all $i$ and $i'$ that are
dependent according to $R2$, the same pairs of $i$ and $i'$ are also dependent
according to $R1$, then it is sufficient to only test $R1$. We say that $R1$ is
a superset of $R2$, written $R1 \supseteq R2$, in such cases, and remove $R2$
from runtime check. Note that in the above definition, $R1$ may have more pairs
of outermost iterators that are dependent than $R2$.
Taking advantage of this redundancy can result in lower complexity runtime analysis.
As an example, consider the Incomplete Cholesky code shown in
Figure~\ref{fig:IC0_CSC}. In this section, we refer to an array access $A$ at statement $S$ as
$A@S$ for brevity. One of the dependence tests is between the write
\code{val[k]@S3} and the read \code{val[m]@S3}. This test is
redundant with the test between the write \code{val[k]@S3} and the read
\code{val[m]@S2}. This is because an iteration of the \code{i} loop
that make the read from \code{val[m]} in \code{S3} is guaranteed to access the same
memory location while executing the loop surrounding \code{S2}. Thus, the more
expensive check between accesses in \code{S3} can be removed.
\begin{figure}
\begin{lstlisting}[columns=flexible,firstnumber=1,frame=tlrb,escapechar=|,
mathescape=true, numbers=left ]{Name}
for (i = 0; i < n; i++) {
S1: val[colPtr[i]] = sqrt(val[colPtr[i]]);
for (m = colPtr[i] + 1; m < colPtr[i+1]; m++)
S2: val[m] = val[m] / val[colPtr[i]];
for (m = colPtr[i] + 1; m < colPtr[i+1]; m++)
for (k = colPtr[rowIdx[m]] ; k < colPtr[rowIdx[m]+1]; k++)
for ( l = m; l < colPtr[i+1] ; l++)
if (rowIdx[l] == rowIdx[k] && rowIdx[l+1] <= rowIdx[k])
S3: val[k] -= val[m]* val[l];
}
\end{lstlisting}
\vspace{-0.3cm}
\caption{\label{fig:IC0_CSC} Incomplete Cholesky0 code from
SparseLib++~\cite{pozo1996sparselib++}. Some variable names have been changed.
The function symbols \code{col} and \code{row} used in the dependence relations represent the \code{colPtr} and \code{rowIdx} index arrays of the CSC format.}
\vspace{-0.3cm}
\end{figure}
In this section, we describe our approach to identify redundant dependence
relations. The key challenge is to determine superset relations between two
dependence tests involving uninterpreted functions. We present two approaches
that cover important cases, and discuss possible extensions.
\subsection{Trivial Superset Relations}
Given a polyhedral dependence relation, it is easy to characterize the pairs of
loop iterations that are dependent. All the indices that do not correspond to
the loop iterators in question can be projected out to obtain the set of
dependent iterations. These sets can be compared to determine if a dependence
test is subsumed by another. In principle, this is what we do to check if a
dependence relation is redundant with another. However, dependence relations
from sparse codes
may have variables passed as parameters to uninterpreted functions.
Such variables cannot be projected out.
Thus, we employ
an approach based on similarities in the constraint systems. The trivial case
is when the dependence relation $R1$ is expressed with a subset of constraints
in another relation $R2$. If this is the case, then $R1$ can be said to be
superset equal to $R2$.
We illustrate this approach with the earlier example from Incomplete Cholesky.
We take two dependence relations, $R1$ between \code{val[k]@S3}
and \code{val[m]@S2}, and $R2$ between \code{val[k]@S3} and
\code{val[m]@S3}. The relations---omitting the obviously common constraints
for \code{val[k]@S3}---are:
\begin{gather*}
R1 = \{ [i,m,k,l] \rightarrow [i',m'] : k = m' \wedge i < i' \wedge 0 \le i' < n \wedge \; col(i')+1 \le m' < col(i'+1) \}\\
R2 = \{ [i,m,k,l] \rightarrow [i',m',k',l'] : k = m' \wedge i < i' \wedge 0 \le i' < n \wedge \; col(i')+1 \le m' < col(i'+1) \\
\wedge \; col(row(m'))\le k' < col(row(m')+1) \wedge m'\le l' < col(i'+1)\\
\wedge \; row(l')=row(k') \wedge row(l'+1)\le row(k') \}
\end{gather*}
Since $R1$ is expressed with a subset of constraints in $R2$,
we may conclude that $R1 \supseteq R2$.
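
A minimal sketch of this trivial check, representing each relation as a set of normalized constraint strings; this representation is an illustrative assumption, not the actual IEGenLib encoding:
\begin{lstlisting}[columns=flexible,frame=tlrb,language=Python]{Name}
def trivially_superset(r1, r2):
    """R1 is superset equal to R2 if every constraint of R1 also
    appears among the constraints of R2 (a purely syntactic check)."""
    return set(r1) <= set(r2)

R1 = {"k = m'", "i < i'", "0 <= i'", "i' < n",
      "col(i')+1 <= m'", "m' < col(i'+1)"}
R2 = R1 | {"col(row(m')) <= k'", "k' < col(row(m')+1)",
           "m' <= l'", "l' < col(i'+1)",
           "row(l') = row(k')", "row(l'+1) <= row(k')"}
print(trivially_superset(R1, R2))   # True: R1 is a superset of R2
\end{lstlisting}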
\subsection{Superset Relation due to Overlapped Accesses}
\label{sec:OverlappedSuperset}
The trivial check is sufficient for many pairs of relations. However, some
relations require a more involved process. We use a different dependence
relation from Incomplete Cholesky (Figure~\ref{fig:IC0_CSC}) as an example of
such cases. We consider the dependence relation $R3$ between \code{val[k]@S3}
and \code{val[l]@S3} that is redundant with $R1$. This can be intuitively
observed from the code structure: the set of memory locations that may be
accessed by the read of \code{val[l]} when $l=m$, i.e., the first iteration of
the $l$ loop, is exactly the same as the reads by \code{val[m]@S2}. This
guarantees that even if the guard on \code{S3} always evaluated to true, the
dependence between iterations of the \code{i} loop would be redundant with that
imposed by \code{S2}.
The constraints for $R3$ (omitting those for \code{val[k]@S3}) are as follows:
\begin{gather*}
R3 = \{ [i,m,k,l] \rightarrow [i',m',k',l'] : k = l' \wedge i < i' \wedge 0 \le i' < n \wedge \; col(i')+1 \le m' < col(i'+1)\\
\wedge \; col(row(m'))\le k' < col(row(m')+1) \wedge m'\le l' < col(i'+1)\\
\wedge \; row(l')=row(k') \wedge row(l'+1)\le row(k') \}
\end{gather*}
\begin{enumerate}
\item We first identify that $k=m'$ in $R1$ is not a constraint in $R3$.
\item The equality $k=m'$ has a similar (one side of the equality is the same) equation $k=l'$ in $R3$.
\item The bounds on $m'$ and $l'$ are collected from the respective constraints.
\item Because the bound on $m'$ subsumes that of $l'$, and since $k=m'$ was the
only constraint that was not in $R3$, we may conclude that $R1 \supseteq R3$.
\end{enumerate}
It is important to note that the bounds on $l'$---the set of values accessed in
the subset relation---can be conservative, i.e., \emph{may} accesses, but the
bounds on $m'$---the set of values accessed in the superset relation---must be
exact. If both bounds represent may accesses, then the superset relation does
not hold. This is important for situations as illustrated in the example above,
where statements have data-dependent guards.
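
The bound subsumption underlying this argument can also be checked mechanically; the following minimal sketch (assuming the Z3 Python bindings) verifies that every value of $l'$ lies within the bounds of $m'$:
\begin{lstlisting}[columns=flexible,frame=tlrb,language=Python]{Name}
from z3 import Ints, Function, IntSort, Solver, Not, And, unsat

ip, mp, lp = Ints("ip mp lp")      # stand-ins for i', m', l'
col = Function("col", IntSort(), IntSort())

s = Solver()
s.add(col(ip) + 1 <= mp, mp < col(ip + 1))   # bounds on m'
s.add(mp <= lp, lp < col(ip + 1))            # bounds on l'
# Does every l' fall within the bounds of m'?
s.add(Not(And(col(ip) + 1 <= lp, lp < col(ip + 1))))
print(s.check())   # expected: unsat, so the m' bounds subsume l'
\end{lstlisting}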
\subsection{Limitations and Extensions}
Although the approach presented above was able to cover all the important cases
we encountered, it is by no means complete. The difficulty of manipulating
integer sets with uninterpreted function symbols has led us to work directly
on the constraints.
relations, since the same relation can be expressed in many different ways. Adding some
form of normalization to the constraint system will help us avoid such pitfalls.
The overlapped iterator approach to finding a superset in
Section~\ref{sec:OverlappedSuperset} was developed specifically for the problematic
data dependence relation R3. Future work includes developing a more general
simplification approach based on this overlapping iterator concept.
In terms of scaling, there is potentially a
problem of selecting the pairs of dependence relations to
test for redundancy. We currently try all possible candidate pairs, which does
not pose a problem since a large number of dependence relations are filtered
out through the unsatisfiability test described in
Section~\ref{sec:unsat-procedure}. However, selecting promising pairs to limit
the number of tests would be a useful extension.
\section{Implementation}
The data dependence analysis and simplification have been automated
except for the superset simplification.
This section summarizes the software packages the implementation relies on,
discusses an important optimization to make our implementation scalable,
and compares the ISL-based implementation with an SMT-solver-based alternative.
\subsection{Software Description}
The artifact for reproducing the results presented in this article
can be found at the following public GitHub repository:
\url{https://github.com/CompOpt4Apps/Artifact-datadepsimplify-arXiv-July2018}
We use three software packages to automate applying methods described
in this paper:
IEGenLib library~\cite{Strout16},
ISL library~\cite{isl2018},
and CHILL compiler framework~\cite{chill:ctop}.
CHiLL is a source-to-source compiler framework for composing and applying
high level loop transformations to improve the performance of nested loops
written in C. We utilize CHiLL to extract the dependence relations
from the benchmarks.
ISL is a library for manipulating integer sets and relations that
only contain affine constraints.
It can act as a constraint solver by testing the
emptiness of integer sets. It is also equipped with a function for detecting
equalities in sets and relations. ISL does not support uninterpreted
functions, and thus cannot directly represent the dependence constraints in
sparse matrix code.
IEGenLib is a set manipulation library that can manipulate
integer sets/relations that contain uninterpreted function symbols.
The IEGenLib library utilizes ISL for some of its fundamental functionalities.
We have implemented detecting unsatisfiable dependences and finding
the equalities utilizing the IEGenLib and ISL libraries.
The following briefly describes how the automation works.
First, we extract the dependence relations utilizing CHILL, and
store them in IEGenLib. The user defined index array properties
are also stored in IEGenLib.
Next, the instantiation procedure is carried out in IEGenLib.
Then inside IEGenLib, the uninterpreted functions are removed by replacing each
call with a fresh variable, and adding constraints that encode functional
consistency~\cite[Chapter 4]{Kroening2016}.
Next, ISL can be utilized by IEGenLib to
find the empty sets, i.e., unsatisfiable relations.
Additionally, equality detection is available
as one of the many operations supported by ISL.
The finite instantiations described in
Section~\ref{sec:unsat-procedure}
are intersections of the assertions with the dependence
relation.
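
The replacement of uninterpreted functions follows the standard Ackermann-style reduction; the following is a minimal sketch under the assumption that each call site of a function is identified by its argument expression (the helper name is illustrative):
\begin{lstlisting}[columns=flexible,frame=tlrb,language=Python]{Name}
# Sketch of the Ackermann-style reduction (names are illustrative).
from itertools import combinations
from z3 import Int, Implies

def ackermannize(func_name, arg_exprs):
    """Replace the i-th call f(e_i) with a fresh variable v_i and emit
    functional consistency: e_i = e_j implies v_i = v_j."""
    fresh = [Int("%s_%d" % (func_name, i)) for i in range(len(arg_exprs))]
    consistency = [Implies(arg_exprs[i] == arg_exprs[j],
                           fresh[i] == fresh[j])
                   for i, j in combinations(range(len(arg_exprs)), 2)]
    return fresh, consistency

# e.g., for the calls rowptr(i), rowptr(i+1), rowptr(ip):
#   fresh, cc = ackermannize("rowptr", [i, i + 1, ip])
\end{lstlisting}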
\subsection{Optimization}
A straightforward approach to implementing the procedure in
Section~\ref{sec:unsat-procedure} would be to take the quantifier-free
formula resulting from instantiation,
replace the uninterpreted functions, and directly pass it to ISL.
However, this approach does not scale to large numbers of instantiations.
An instantiated assertion is encoded as a union of two constraints (${\neg}p
\vee q$). Given $n$ instantiations, this approach introduces $2^n$
disjunctions to the original relation, although many of the clauses may be
empty. In some of our dependence relations, the value of $n$ may exceed
$1000$, resulting in a prohibitively high number of disjunctions. We have
observed that having more than 100 instantiations causes ISL to start
having scalability problems.
We apply an optimization to avoid introducing disjunctions when possible.
Given a set of instantiations, the optimization adds the instantiations to
the dependence relation in two phases.
The first phase only instantiates those that do not introduce disjunctions
to the dependence relation.
During this phase, we check if the antecedent is already part of
the dependence constraint, and thus is always true.
If this is the case, then $q$ can be directly added to the dependence relation.
We also perform the same for ${\neg}q \implies {\neg}p$
and add ${\neg}p$ to the dependence relation if ${\neg}q$ is always true.
The second phase adds the remaining instantiations that introduce disjunctions.
This optimization helps reduce the cost of dependence testing in two ways:
(1) if the relation is unsatisfiable after the first phase, disjunctions are completely avoided;
and
(2) the second phase only instantiates the remainder, reducing the number of disjunctions.
If the dependence relation remains non-empty after the second phase, then the
relation is checked at runtime. All equalities in a relation are made explicit
before inspector code generation with ISL so that the code generator can take
advantage of the equalities to simplify the generated code.
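
A minimal sketch of this two-phase strategy; the helper names are illustrative, and the sketch uses a solver query where our implementation checks membership in the dependence constraints:
\begin{lstlisting}[columns=flexible,frame=tlrb,language=Python]{Name}
# Sketch of the two-phase instantiation (assumes the Z3 Python API).
from z3 import Solver, Not, Or, unsat

def implied(constraints, lit):
    """True if lit holds in every model of the constraints."""
    s = Solver()
    s.add(*constraints)
    s.add(Not(lit))
    return s.check() == unsat

def add_instances(constraints, instances):
    deferred = []
    for p, q in instances:                  # each instance is p implies q
        if implied(constraints, p):
            constraints.append(q)           # antecedent always true
        elif implied(constraints, Not(q)):
            constraints.append(Not(p))      # contrapositive case
        else:
            deferred.append(Or(Not(p), q))  # would add a disjunction
    constraints.extend(deferred)            # phase 2: the remainder
    return constraints
\end{lstlisting}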
\subsection{Contrasting SMT with ISL}
SMT solvers are specialized for solving satisfiability problems expressed as a
combination of background theories. ISL is a library for manipulating integer
sets, and is specialized for the theory of Presburger arithmetic over integers.
The finite instantiation in Section~\ref{sec:unsat-procedure} is well-suited
for SMT solvers. In fact, SMT solvers are equipped with their own instantiation
algorithms that also work well for our dependence relations. However, SMT
solvers do not provide any equality relationships they might derive while answering
the satisfiability question.
Although it is possible to use SMT solvers to test if an equation is true for a
set of constraints, we cannot search for an equation given the constraints
(unless all candidates are enumerated---but there are infinite candidates in general).
For our implementation, the choice was between adding finite instantiation to
ISL or adding equality detection to SMT solvers. We have chosen the former
option as it seemed simpler to do, and also because we are more familiar with
ISL.
\section{Evaluation of Unsatisfiability and Simplification Approaches}
\label{sec:eval}
In this section, we study the impact of our approach of utilizing domain information
about index arrays on the data dependence analysis of eight sparse kernels.
Our approach may help data dependence analysis in three ways:
(1) The runtime check can be completely removed
if the dependences are proven unsatisfiable;
(2) Deriving equalities from instantiated universally quantified
assertions about domain information
can simplify dependences and reduce the complexity of the respective runtime checks; and
(3) Reducing all maybe satisfiable relations of a given code to
a set of dependence relations that encompass all potential dependences.
We do this by finding relations that are superset equal to other relations.
This can discard even more dependence relations that potentially
might need expensive runtime checks.
We first describe the suite of numerical kernels that we have compiled to
evaluate our approach. Then we evaluate the impact of each step in our
approach, from the relevance of the index property assertions to the simplification
using superset relations. Finally, we report the complexity of inspectors
with and without our proposed simplifications.
\subsection{Numerical Algorithms in Benchmark}
\label{sec:benchmark}
We have included some of the most popular sparse kernels
in a benchmark suite: (1) The Cholesky factorization, Incomplete LU0 and
Incomplete Cholesky0, and the sparse triangular solver, which are commonly used
in direct solvers and as preconditioners in iterative solvers; (2) sparse
matrix vector multiplication, and
Gauss-Seidel methods, which are often used in iterative solvers.
Table~\ref{tab:suite} summarizes the benchmarks indicating which library each
algorithm came from and how the benchmark compares with the implementations in
existing libraries.
\begin{table}
\begin{center}
\begin{threeparttable}
\caption{The benchmark suite used in this paper. The suite includes the
fundamental blocks in several applications. The suite is also selected to cover
both static index arrays, such as Gauss-Seidel, and dynamic index arrays, such
as Left Cholesky. The modification column shows the type of modification
applied to the original code.}
\begin{tabular}{| l | c | c | p{.35in} |}
\hline
Algorithm name & Format & Library source & Mod.\\ \hline
Gauss-Seidel solver & CSR & Intel MKL~\cite{wang2014intel} & None \\ \hline
Gauss-Seidel solver & BCSR & Intel MKL~\cite{wang2014intel} & None \\ \hline
Incomplete LU & CSR & Intel MKL~\cite{wang2014intel} & None \\ \hline
Incomplete Cholesky & CSC and CSR & SparseLib++~\cite{pozo1996sparselib++} & None \\ \hline
Forward solve & CSC & Sympiler~\cite{cheshmi2017sympiler} & None \\ \hline
Forward solve & CSR & \cite{vuduc2002automatic} & None \\ \hline
Sparse MV Multiply & CSR & Common & None \\ \hline
Static Left Cholesky & CSC & Sympiler~\cite{cheshmi2017sympiler} & P\tnote{a}\, + R\tnote{b} \\ \hline
\end{tabular}%
\label{tab:suite}
\begin{tablenotes}\footnotesize
\item [a] Privatization of temporary arrays
\item [b] Removal of dynamic index array updates
\end{tablenotes}
\end{threeparttable}
\end{center}
\end{table}
We modified one of the benchmarks, Left Cholesky, to make temporary arrays
privatizable and to remove dynamic index array updates so that the compiler can
analyze the sparse code.
\textbf{Left Cholesky}:
This code has the following changes compared to
a more common implementation in CSparse~\cite{davis2006direct}:
\textit{(i) Privatization of temporary arrays:} We analyzed dependences between
reads and writes to temporary arrays to detect privatizable arrays. This
can be challenging for a compiler to do with
sparse codes since accesses to these arrays are irregular.
We set the values of these arrays
to zero at the beginning of each loop so
a compiler could identify them as privatizable.
\textit{(ii) Removal of dynamic index array updates: }
Previous data dependence analysis work
focuses on cases where index arrays are not updated.
However, in some numerical codes, updating
index arrays is a common optimization.
We refer to this as \emph{dynamic index array updates},
and it usually occurs
when the nonzero structure of an output matrix
is modified in the sparse code during the computation.
This would make
dependence analysis very complicated for the compiler.
We removed dynamic index arrays by partially decoupling symbolic
analysis from the numerical code in these benchmarks.
\emph{Symbolic analysis} here refers to terminology used in the numerical
computing community. Symbolic analysis uses the nonzero pattern of the matrix
to determine computation patterns and the sparsity of the resulting data.
To remove dynamic index array updates, we decouple symbolic analysis from
the code similar to the approach used by
~\cite{cheshmi2017sympiler}.
{\bf Performance Impact:}
The changes made to Left Cholesky do not have a
noticeable effect on the code performance. Based on our
experiments using five matrices\footnote{Problem1, rdb450l,
wang2, ex29, Chebyshev2} from the Florida Sparse
Matrix Collection~\cite{davis2011university}
the performance cost of these modifications is on average less
than 10\% relative to the original code.
\subsection{Relevance of Index Array Properties}
We have extracted the constraints to test for dependences that are carried by
the outermost loop for the sparse matrix codes in Table~\ref{tab:suite}. A
total of 124 data dependences relations were collected from the benchmarks.
Of those 124, only 83 were unique, the repetition coming from accesses with the
same access indices in the same statements, or other such situations.
Table~\ref{tab:diffInfo} summarizes the index array assertions relevant to the
benchmarks.
\begin{table}
\begin{center}
\caption{Categorization of index array properties in our evaluation of their
utility in detecting unsatisfiability.}
\label{table:properties}
\begin{tabular}{| p{1.25in} | p{2.5in} | p{1.25in} |}
\hline
\makecell{Array property} & Formulation with examples from Left Cholesky code
& Codes where found \\ \hline
Monotonicity &
($ x_1 < x_2 \Leftrightarrow lcolptr(x_1) < lcolptr(x_2)$). & All \\ \hline
Correlated &
($x_1 = x_2 \Rightarrow rowPtr(x_1) \le diagPtr(x_2) $). & Incomplete LU0, \\
Monotonicity & ($ x_1 < x_2 \Rightarrow diagPtr(x_2) < rowPtr(x_1)$). &
\\ \hline
Triangular & ($lcolptr(x_1) < x_2 \Rightarrow x_1 < lrow(x_2) $). & Cholesky's,\\
Matrix & ($x_1 < prunePtr(x_2) \Rightarrow pruneSet(x_1) < x_2 $). & Forward Solves \\
\hline
\end{tabular}%
\label{tab:diffInfo}
\end{center}
\vspace*{-.2in}
\end{table}
We are not claiming to have found all the array
properties that exist, either in our example suite or in general.
Also, we only consider dependence relations
for outermost loops; however, dependence relations can be extracted for other
loop levels in a loop nest and can be used
for vectorization
and in other applications of dependence analysis.
\begin{figure}[ht]
\centering
\includegraphics[width=.75\columnwidth]{figure9.pdf}
\caption{Reduction in the number of different inspectors' complexities after adding
array properties individually and in combination.
Please note that $nnz$ is the number of non-zeros,
and $n$ is the number of
columns or rows in a matrix.
The array properties discussed in the paper help us detect 45
relations as unsatisfiable out of 71 baseline relations.
Note that the number of unsatisfiable relations detected with the
combination of information is not the accumulation
of all the others; sometimes only the combination of information
detects unsatisfiability.}
\label{fig:indCom}
\end{figure}
\subsection{Detecting Unsatisfiability}
\label{detectUnsat}
In this section, we show the impact of using index array properties
to detect unsatisfiability for the relations collected
from dependences from our benchmark suite.
To not conflate the impact of the index
array properties that we are evaluating with what
traditional methods are capable of, we first apply functional consistency
in the theory of Presburger arithmetic combined with
uninterpreted functions~\cite{Shostak79}.
This detects 12 dependences as unsatisfiable.
Nevertheless, we must note that all 12 of these dependences have inconsistencies in
their affine parts, and functional consistency does not help detect
any more unsatisfiable relations; examples are the first two
dependences from the Forward Solve CSR example in
Section~\ref{sec:wavefront}. After detecting 12 out of 83
dependences as unsatisfiable, we are left with 71 dependences to use in our
evaluation. We call these 71 dependences, which are satisfiable just by looking
at their affine constraints, our baseline.
Figure~\ref{fig:indCom} categorizes the complexity of an
inspector for each dependence into 7 different classes in total.
In this figure, $nnz$ is number of non-zeros, and for simplicity
$n$ denotes the number of rows or columns of the input matrix.
The black bar, ``baseline'', in each class
shows the baseline number of relations with that complexity in our suite.
The bars show how many dependences would remain
after we instantiate certain index array properties.
The last bar in each class, the red bar,
shows the effect of adding all the information in combination.
The main observations from analyzing Figure~\ref{fig:indCom} are as
follows:
(1) Combining the array properties and non-domain information has
the biggest impact and helps detect significantly more unsatisfiable
dependences than any single property.
Combining all the index array properties helped us detect 45 out of 71 relations
as unsatisfiable, with 26 remaining as maybe satisfiable.
(2) Monotonicity has the highest impact on detecting unsatisfiable relations
when array properties are applied independently.
(3) The Triangular Matrix property helped detect 3 relations
when applied independently and 11 more in combination
with Monotonicity (not obvious in the figure).
This property helped us detect unsatisfiability in cases
where Monotonicity was completely handicapped; see the
$nnz$ and $nnz*n$ classes in Figure~\ref{fig:indCom}.
\subsection{Simplifying Inspector Complexity Utilizing Equalities}
\label{sec:evalEq}
As stated in the previous section,
instantiating index array properties results in
45 out of 71 dependence relations being detected as unsatisfiable.
At this point, without any further simplification,
to perform a partial parallelism transformation,
inspectors are needed for the remaining 26 dependences.
One question we can ask about those 26 inspectors is whether
their complexity is even reasonable. We consider a runtime dependence
analysis complexity reasonable if it is bounded by the complexity of the
original computation.
The computation certainly performs
many more operations than the analysis, since
numerical algorithms
usually invoke these computations several times for the same sparse
matrix nonzero structure.
Thus runtime data dependence analysis is reasonable
if it has the same complexity as the original computation.
Nonetheless, for numerical algorithms,
it is common to aim for a runtime data dependence analysis
that is of $O(nnz)$,
where $nnz$ is the number of nonzeros in the input.
By instantiating index array properties with expressions
from the data dependences, it is also possible to derive
equalities between some of the iterators in the dependence.
These new useful equalities can be used to eliminate extra loops in
the runtime inspector.
Table~\ref{tab:insReduction} shows that the additional equalities increase
the number of dependence relations with reasonable complexities ($\le
\mathrm{kernel}$). For instance, the Left Cholesky code has 4 high
complexity dependence relations left. As illustrated in
Section~\ref{sec:example}, the additional equalities can be used to reduce
the complexity of all those relations. Finding equalities also helps reduce the
complexity of 4 dependences for Incomplete Cholesky0 and 2 dependences of
Incomplete LU0 to reasonable levels.
We should also mention that in addition to these 10 complexity reductions,
the complexity of another 4 dependence relations was reduced. However,
the complexity after simplification is still higher than the kernel, and hence
these simplifications are not visible in Table~\ref{tab:insReduction}.
\begin{table*}
\begin{center}
\caption{Effect of simplifications based on
additional equalities (Section~\ref{sec:simplifying-equality}) and redundancy
elimination (Section~\ref{sec:simplifying-superset}) on the remaining 26 maybe
satisfiable dependences for each code in the benchmark. The Total
columns show the number of dependence relations that need to be checked at
runtime. The $\le \mathrm{kernel}$ columns show the number of such tests that
have the same or lower complexity than the kernel. Equality Impact is the
numbers after using additional equalities, reducing the number of high complexity checks.
Superset Impact is the composed effect of using superset relations after
adding equalities, reducing the total number of checks.
}
\begin{tabular}{| p {3.4 cm} || p {1.4 cm} | p {1.4 cm} || p {1.3 cm} | p {1.3 cm} || p {1.3 cm} | p {1.3 cm} |}
\hline
Kernel name
& \multicolumn{2}{|c||}{Remaining satisfiables}
& \multicolumn{2}{|c||}{Equality Impact} & \multicolumn{2}{|c|}{Superset Impact}
\\ \hline
& \textcolor{green}{ $\le$ kernel } & Total
& \textcolor{green}{ $\le$ kernel } & Total
& \textcolor{green}{ $\le$ kernel } & Total
\\ \hline
Gauss-Seidel CSR & 2 & 2 & 2 & 2 & 2 & 2 \\ \hline
Gauss-Seidel BCSR & 4 & 4 & 4 & 4 & 2 & 2 \\ \hline
Incomplete LU & 0 & 4 & 2 & 4 & 2 & 4 \\ \hline
\textbf{Incomplete Cholesky} & \textbf{1} & \textbf{9} & \textbf{5} & \textbf{9} & \textbf{2} & \textbf{2} \\ \hline
Forward solve CSR & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline
Forward solve CSC & 2 & 2 & 2 & 2 & 1 & 1 \\ \hline
Sparse MV Mul. & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline
\textbf{Left Cholesky} & \textbf{0} & \textbf{4} & \textbf{4} & \textbf{4} & \textbf{2} & \textbf{2} \\ \hline
\end{tabular}%
\label{tab:insReduction}
\end{center}
\end{table*}
\subsection{Impact of Utilizing Superset Relationship}
\label{eval:sup}
The superset relations we identify uncover dependence relations that are
redundant. We can discard the dependence relations that are found to be
subsets of others and only generate runtime inspectors for the remaining
relations. As shown in Table~\ref{tab:insReduction}, this results in
fewer dependence relations to be checked at runtime. Most notably, the
number of runtime checks was reduced from 4 to 2 for Left Cholesky,
and both of those dependences are less complex than the original algorithm.
As discussed in Section~\ref{sec:simplifying-superset}, the superset relation
may reveal that a relation is redundant, being a subset of another relation with
lower complexity. The Incomplete Cholesky kernel was left with 4
expensive relations even after adding equalities.
As can be seen in Table~\ref{tab:insReduction}, these relations are removed
from the runtime checks by identifying the superset relations.
For the Incomplete Cholesky kernel, we have found 2 relations with less than
original algorithm complexity to be supersets of all the dependences that
we need to have a runtime check for. The composed effect of our proposed
technique reduces the inspector cost to 2 or fewer inexpensive tests for all of
our kernels, except for the Incomplete LU.
\subsection{Putting It All Together}
We have presented a series of techniques to simplify dependence relations with
the main motivation being automatic generation of efficient inspector code.
Our approach aims to simplify the dependence relations starting from array
properties that can be succinctly specified by the experts. We show that the
array properties can be used to automatically disprove a large number of
potential dependences, as well as reduce the complexity of remaining
dependences. Combined with a method for detecting redundancies in dependence
tests, we are able to generate efficient inspectors.
Table~\ref{tab:suiteIns} summarizes the impact of our proposed
approach on inspector complexity. It is interesting to note that
Incomplete LU0 is the only kernel left with an expensive inspector
(more complex than the kernel). This case is discussed further in Section~\ref{sec:limit}.
\begin{table*}
\begin{center}
\caption{The impact of our simplifications on inspector complexity.
The baseline inspector complexity is when all possible dependences are tested at runtime, without
using any of the simplifications proposed in this paper. The simplified
inspector complexity reports the final cost of inspection generated by our approach.
The overall complexity of inspectors decreases considerably.
The complexity of the kernels are included for comparison; k and K denote
constant factors, with K signaling a bigger number.}
\begin{tabular}{| p {3 cm} | p {3.7 cm} | p {3 cm} || p {2.7 cm} |}
\hline
Kernel name & Inspector complexity & Simplified inspector & Kernel complexity\\ \hline
Gauss-Seidel CSR & $(n) + 2 (nnz)$ & $2 (nnz)$ & $k (nnz)$ \\ \hline
Gauss-Seidel BCSR & $4 (n) + 4 (nnz)$ & $2 (nnz)$ & $k (nnz)$ \\ \hline
Incomplete LU CSR
& $4 (nnz \times (nnz/n)) + (n^2) + 2 (n \times nnz) + 2 (nnz^2) + 2 (nnz^2 \times (nnz/n)^2) + 2 (nnz^2 \times (nnz/n)^3)$
& $2 (nnz \times (nnz/n)^2) + 2 (nnz \times (nnz/n)^4)$ & $K (nnz \times (nnz/n)^2)$ \\ \hline
Incomplete Cholesky CSR
& $10 (n^2) + 8 (nnz^2) + 6 (nnz^2 \times (nnz/n)) + 4 (nnz^2 \times (nnz/n)^3)$
& $(nnz \times (nnz/n)) + (nnz \times (nnz/n)^2) $ & $K (nnz \times (nnz/n)^2)$ \\\hline
Forward solve CSC & $3 (n) + 4 (nnz)$ & $nnz$ & $k (nnz)$ \\ \hline
Forward solve CSR & $(n) + 2 (nnz) $ & $nnz$ & $k (nnz)$ \\ \hline
Sparse MV Mul. CSR & $3 (n)$ & $0$ & $k (nnz \times (nnz/n))$ \\ \hline
Left Cholesky CSC & $8 (n \times nnz) + 4 (n^2)$ & $2(nnz)$ & $K (nnz \times (nnz/n))$ \\ \hline
\end{tabular}%
\label{tab:suiteIns}
\end{center}
\end{table*}
\subsection{Discussion: Limitations}
\label{sec:limit}
Table~\ref{tab:suiteIns} demonstrates that
our method significantly reduces both the number of runtime checks and their complexity.
Nonetheless, our approach is not free of limitations, which are discussed in this section.
Two of the original kernels include dynamic index arrays and temporary arrays that
require privatization. As discussed in Section~\ref{sec:benchmark},
these kernels can be preprocessed such that they can be accepted by our compiler. This preprocessing
is currently done manually.
Using the associativity of reductions is important for Forward Solve CSC
and Incomplete Cholesky0. We do not automate the reduction detection in this
paper, as it is a complex task on its own. It is common for compilers and
programming models, such as openMP, to provide pragma interfaces for
programmers to signal which update should be considered as a reduction. We have
followed the same approach.
Incomplete LU0 has two dependence relations that have higher complexity
than the kernel, even with domain information. Related work by~\cite{Venkat:2016} presents
approximation techniques that reduce the inspector complexity for these high
complexity relations to $nnz \times (nnz/n)$. Such approximation can
potentially result in loss of some parallelism. Nevertheless,
~\cite{Venkat:2016} show that the approximation of dependences does not
significantly affect the performance of the partial parallelism for this code.
We have not used approximations in our work, but it would be interesting to see
how the two approaches can be combined.
\section{Related Work}
Array data dependence analysis has been used for a variety of applications,
including automatic parallelization~\cite{Paek2002}, locality optimization~\cite{Wolfe:1989},
communication generation, program slicing~\cite{Pugh97}, detecting race
conditions~\cite{Zheng:2015},
and high-level synthesis~\cite{Alle2013}.
For sparse matrix codes, this analysis is made more difficult
due to indirection through index arrays, such that the source and sink
of dependences cannot be resolved until their values are available at runtime.
For these and other situations where dependences arise that cannot
be resolved until runtime, a number of techniques for compile time and
runtime dependence analysis have been developed.
\subsection{User-Provided Assertions}
\cite{McKinley} exploit user assertions about index arrays
to increase the precision
of dependence testing. The assertions certify common
properties of index arrays, e.g., that an index array can be a permutation
array, monotonically increasing, or monotonically
decreasing.
\cite{Lin:2000:CAI} present a compile time analysis
for determining index array properties, such as monotonicity.
They use the analysis results for parallelization of sparse matrix
computations.
Our approach also uses these assertions, but in
addition we use more domain-specific assertions and provide
a way to automate the general use of such assertions.
In this paper, the idea of applying instantiation of universally
quantified constraints, as is done in SMT solvers, to find unsatisfiable
dependences is novel, and the assertions about index arrays we use
are more general.
\subsection{Proving Index Arrays Satisfy the Assertions}
In this work, we assume that the assertions provided by the programmer are
correct. It is useful to verify the user-provided assertions by analyzing the
code that constructs the sparse matrix data structures. There is a
large body of work in abstract interpretation that addresses this problem.
The major challenge in verifying the assertion about programs that manipulate
arrays is the trade-off between scalability and precision. When there is a
large number of updates to an array, keeping track of individual elements does
not scale, but approximating the whole array as a single summary significantly
degrades the precision. Many techniques to verify/infer important properties
about array contents from programs have been developed, e.g.,
\cite{cousot2011parametric,gopan2005framework,halbwachs2008}.
In the work by \cite{kovacs-et-al-vmcai-2010}, the authors present an approach
for inferring shape invariants for matrices.
While their work does not deal with sparse matrices and index arrays,
it may help generate domain-specific assertions that we could
employ to show that the data dependences are unsatisfiable.
The main subject of our work, dependence tests, does not involve array
updates, since the index arrays, which alter the control-flow and indexing
of the data arrays, are not updated. This makes the verification of the
assertions a closely related but orthogonal problem, which we do not address in
this paper.
\subsection{More General Quantifier Elimination Techniques}
The area of SMT-solving is advancing at a significant pace; the webpage for
SMT-COMP\footnote{\url{http://smtcomp.sourceforge.net/2017/}} provides a list
of virtually all actively developed solvers, and how they fared in each theory
category.
As these solvers are moving into a variety of domains,
quantifier instantiation and elimination has become a topic of
central interest.
Some of the recent work in this area includes E-matching~\cite{moura2007},
Model-Based~\cite{Ge2009}, Conflict-Based~\cite{reynolds2014finding}, and
Counter-Example Guided~\cite{reynolds2015counterexample}.
These efforts make it clear that
quantifier instantiation is challenging, and is an area of active development.
SMT solvers often rely on
heuristic-driven instantiations to show unsat for difficult problems.
In this context,
our work can be viewed as heuristic instantiation where
the heuristic is inspired by decidable fragments of the array theory.
Dependence constraints with universally quantified assertions
are related to the first order theory fragments
described by~\cite{Bradley2006} as undecidable extensions
to their array theory fragment.
However, \cite{Loding2017} claim that the proofs for undecidability
of the extension theories by~\cite{Bradley2006} are incorrect,
and declare their decidability status an open problem.
Regardless of whether the theory fragment that encompasses our
dependence constraints is decidable or not, the following holds: if we
soundly prove that a relation is unsatisfiable with compile time
information alone, the unsatisfiability applies in general, and having
runtime information would not change anything. However, if a dependence is
determined to be satisfiable with compile time information alone, we need
runtime tests to see if it is actually satisfiable given runtime
information, and even if it is, the runtime tests determine
for which values the dependence holds.
\subsection{Dependence Analysis for Full Parallelization}
Some compilation techniques have been developed to extend the dependence analysis to
sparse, or non-affine programs~\cite{Poly2010}. These
techniques extend to non-affine programs of various forms: while loops,
polynomial expressions, function calls, data-dependent conditions, and
indirection arrays. The outcome of such analysis is often an approximation,
which is quite pessimistic for sparse computations involving indirection
arrays. The focus of our work is not to identify (approximated) dependences,
but to reduce the cost of runtime dependence checks by disproving potential
dependences as much as possible at compile-time.
The work by \cite{pugh98constraintbased} also formulates the
problem in the theory of Presburger sets with uninterpreted functions.
However, they only allow affine expressions of unquantified variables as
indexing expressions to the function symbols, excluding some of the examples in
this paper. They propose an analysis to identify conditions for a dependence
to exist through the use of the gist operator that simplifies the constraint system
given its context. The result of this analysis may involve uninterpreted
functions, and can be used to query the programmer for their domain knowledge.
This is an interesting direction of interaction that complements our work.
Several runtime approaches focus on identifying loops
we denote \textit{fully parallel}, whose iterations are
independent and can safely execute in
parallel~\cite{barthou97fuzzy,pugh98constraintbased,Moon:1999}
or speculatively execute in parallel while testing safety~\cite{Rauchwerger:1999}.
\subsection{Dependence Analysis for Wavefront Parallelization}
For sparse codes, even when loops carry dependences, the dependences
themselves may be sparse, and it may be possible to execute some iterations of the
loop in parallel (previously denoted \textit{partially parallel}).
The parallelism is captured in a task graph, and typically executed as a parallel wavefront.
A number of prior works write specialized code to derive this task graph as part
of their application~\cite{Saltz91,Rauchwerger95,Zhuang09,Bell:SpMV:SC:2009,Park2014,Jongsoo14SC}
or with kernel-specific code generators~\cite{pOski12}.
For example, Saltz and Rothberg worked on manual parallelization of
sparse triangular solver codes in the 1990s~\cite{Saltz90,Rothberg92}.
There is also more recent work on optimizing sparse triangular solvers for
NVIDIA GPUs and Intel's multi-core CPUs~\cite{rennich2016accelerating, wang2014intel}.
Even though these manual optimizations have been successful
at achieving high performance in some cases, significant programmer
effort has to be invested for each of these codes and automating
these parallelization strategies can significantly reduce this effort.
Other approaches automate the generation of inspectors that
find task-graph, wavefront or partial parallelism. \cite{rauchwerger95scalable}
and others~\cite{HuangJBJA13} have developed efficient and parallel inspectors
that maintain lists of iterations that read and write each memory location.
By increasing the number of dependences found unsatisfiable,
the approach presented in this paper reduces
the number of memory accesses that would need to be tracked.
For satisfiable dependences, there is a tradeoff between inspecting
iteration space dependences versus maintaining data for each memory
access. That choice could be made at runtime. There are also
other approaches for automatic generation of inspectors that
have looked at simplifying the inspector by finding equalities,
using approximation, parallelizing the inspector, and applying
point-to-point synchronization to the executor~\cite{Venkat:2016}.
\subsection{Algorithm-Specific Data Dependence Analysis}
An algorithm-specific approach to represent data dependences and optimize
memory usage of sparse factorization algorithms such as
Cholesky~\cite{pothen2004elimination} uses an \textit{elimination tree}, but to the
best of our knowledge, this structure is not derived automatically from source code.
When factorizing a column of a sparse
matrix, in addition to nonzero elements of the input matrix new nonzero
elements, called fill-in, might be created. Since the sparse matrices
are compressed for efficiency, the additional fills during
factorization make memory allocation ahead of factorization difficult.
The elimination tree is used to predict the sparsity pattern of the L
factor ahead of factorization so the size of the factor can be computed~\cite{coleman1986predicting} or predicted~\cite{gilbert1994predicting,gilbert1993predicting},
and captures a potential parallel schedule of the tasks.
Prior work has investigated the applicability of the elimination
tree for dependence analysis for parallel implementation
~\cite{george1989communication,gilbert1992highly,pothen1993mapping,
karypis1995high,schenk2002two,henon2002pastix,hogg2010design,
zheng2015gpu,rennich2016accelerating}.
Some techniques such
as~\cite{pothen1993mapping,george1989communication,henon2002pastix}
use the elimination tree for static scheduling while
others use it for runtime scheduling.
\section{Conclusion}
In this paper, we present an automated approach for
showing that sparse code data dependences are unsatisfiable or,
when that is not possible, for reducing the complexity of later runtime analysis.
Refuting a data dependence brings benefits to many areas of sparse matrix code
analysis, including verification and loop optimizations such as parallelization, pipelining,
or tiling by
completely eliminating the high runtime costs of
deploying runtime dependence checking.
Additionally, when a dependence remains satisfiable, our
approach of performing constraint instantiation within
the context of the Integer Set Library (ISL)
enables equalities and subset relationships to be
derived that simplify the runtime complexity
of inspectors for a case study with wavefront parallelism.
Parallelization of these sparse numerical methods is
an active research area today, but one where most current approaches
require {\em manual parallelization}.
It is also worth noting that without inspector complexity reduction, most
inspectors would time out, underscoring the pivotal role of the work in
this paper in enabling parallelization and optimization of sparse codes.
Our results are established over
71 dependences extracted from 8 sparse numerical methods.
\section{Introduction}
\input{intro}
\section{Reconfigurable Petri Nets}
\label{s.recPN}
\input{reconPN}
\section{Review of $\M$ adhesive HLR Systems}
\label{s.madHLR}
\input{madHLR}
\section{Inhibitor Arcs}
\label{s.inhib}
\input{inhib}
\section{Transition Priorities}
\label{s.prior}
\input{prior}
\section{Conclusion}
\label{s.conc}
\input{conc}
\bibliographystyle{splncs}
\subsection{Decorated Place/Transition Nets}
A decorated place/transition net is a marked P/T net $N=(P,T,pre,post,M)$ together with names and labels. A capacity is merely a function $cap :P\to \N^\omega_+$. Based on name spaces $A_P$, $A_T$ with $pname:P \to A_P$ and $tname:T \to A_T$ we have explicit names for places and transitions. Moreover, transitions are equipped with labels that may change when the transition fires. This
feature is given by a mapping of transitions to functions.
For example the net $N_2$ in Fig. \ref{f.pnvf2_ex1} yields the marking $3p_a+p_b+2p_c$
after firing transitions $t_b$ and $t_d$ in parallel. Furthermore, this parallel firing yields the new transition labels $2$ for transition $t_b$ and $false$ for transition $t_d$. So, we compute the follower label $tlb \fire{t_b+t_d} tlb'$, where $tlb, tlb': T\to W$ are label functions with
$tlb'(t_b) = inc(tlb(t_b)) = inc(1) = 2$, where the renew function $inc:\N\to \N$ increases the label by one and
$tlb'(t_d) = not(tlb(t_d)) = not(true) = false$. For more details see \cite{Pad12}.
\begin{definition}[Decorated place/transition net]
A decorated place/transition net is a marked place/transition net $N=(P,T,pre,post,M)$ together with
\begin{itemize}
\item a capacity as a function $cap :P\to \N^\omega_+$
\item name spaces $A_P$, $A_T$ with
$pname:P \to A_P$ and $tname:T \to A_T$
\item the function $tlb: T \to W$ mapping transitions to transition labels $W$ and
\item the function
$rnw: T \to END$ where $END$ is a set containing some endomorphisms on
$W$, so that
$rnw(t): W \to W$ is the function that renews the transition label.
\end{itemize}
\end{definition}
The firing of these nets is the usual one for place/transition nets, except that the transition labels change.
Moreover, this extension works for parallel firing as well.
\begin{definition}[Changing Labels by Parallel Firing]
Given a transition vector $v= \sum_{t \in T} k_t \cdot t$
then the label is renewed by firing $tlb \fire{v} tlb'$ and for each $t \in T$ the transition label
$tlb':T \to W$ is defined by:
$$tlb'(t) = rnw(t)^{k_t} \circ tlb (t)$$
\end{definition}
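The label update under parallel firing is easily mechanized; the following is
a small Python sketch (the net data is a hand transcription of the $N_2$
example above, with renew functions $inc$ and $not$).
\begin{verbatim}
# Sketch of label renewal under parallel firing:
# tlb'(t) = rnw(t)^{k_t} (tlb(t)).

def fire_labels(tlb, rnw, v):
    """Apply each transition's renew function k_t times."""
    new = dict(tlb)
    for t, k in v.items():
        for _ in range(k):
            new[t] = rnw[t](new[t])
    return new

tlb = {"t_b": 1, "t_d": True}            # current labels
rnw = {"t_b": lambda w: w + 1,           # inc: increase by one
       "t_d": lambda w: not w}           # not: negate the boolean
v = {"t_b": 1, "t_d": 1}                 # parallel firing of t_b + t_d

print(fire_labels(tlb, rnw, v))          # {'t_b': 2, 't_d': False}
\end{verbatim}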
\subsection{Transformations of Decorated Nets}
For decorated place/transition nets as given above, we obtain, with the following notion of morphisms, an $\M$-adhesive HLR category (see \cite{Pad12}).
$\M$-adhesive HLR systems can be considered as a unifying framework for graph and Petri net transformations,
providing enough structure
that most notions and results from algebraic graph transformation systems are available, such as results on parallelism and concurrency of rules and transformations, results on negative application conditions and constraints, and so on (e.g. in \cite{FAGT,EGHLO12}).
Net morphisms map places to places and transitions to transitions. They are given as a pair of mappings for the places and the transitions, so that the structure and the decoration is preserved and the marking may be mapped strictly.
\begin{definition}[Morphisms between decorated place/transition nets \cite{Pad12}]\label{def.morphism}
A net morphism $f:N_1 \to N_2$ between two decorated place/transition nets $N_i =( P_i,T_i, pre_i, post_i,M_i,cap_i, pname_i,tname_i, tlb_i, rnw_i)$ for $i\in\{1,2\}$ is given by $f=(f_P:P_1 \to P_2,f_T:T_1 \to T_2)$, so that the following equations hold:
\begin{enumerate}
\item \label{i} $pre_2 \circ f_T = f_P^\oplus \circ pre_1$ and $post_2 \circ f_T = f_P^\oplus \circ post_1$
\item \label{ii} $cap_1 = cap_2 \circ f_P$
\item \label{iii} $pname_1 = pname_2 \circ f_P$
\item \label{iv} $tname_1 = tname_2 \circ f_T$ and $tlb_1 = tlb_2 \circ f_T$ and $rnw_1 = rnw_2 \circ f_T$
\item \label{v} $M_1(p) \le M_2(f_P(p))$ for all $p \in P_1$
\end{enumerate}
Moreover, the morphism $f$ is called strict
\begin{enumerate}
\setcounter{enumi}{5}
\item \label{vi} if both $f_P$ and $f_T$ are injective and
$ M_1(p) =M_2(f_P(p))$ holds for all $p \in P_1$.
\end{enumerate}
\end{definition}
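Since conditions \ref{i}--\ref{v} are purely equational, they can be checked
mechanically for finite nets; the following Python sketch assumes nets and
morphisms are given as dictionaries, with $pre$ and $post$ represented as
multisets of places.
\begin{verbatim}
from collections import Counter

def f_oplus(f_P, m):
    """Extend a place map to multisets of places."""
    out = Counter()
    for p, k in m.items():
        out[f_P[p]] += k
    return out

def is_net_morphism(N1, N2, f_P, f_T):
    """Check conditions (1)-(5); nets are dicts of finite maps."""
    ok = all(N2["pre"][f_T[t]] == f_oplus(f_P, N1["pre"][t]) and
             N2["post"][f_T[t]] == f_oplus(f_P, N1["post"][t])
             for t in N1["T"])                                      # (1)
    ok &= all(N1["cap"][p] == N2["cap"][f_P[p]] for p in N1["P"])   # (2)
    ok &= all(N1["pname"][p] == N2["pname"][f_P[p]]
              for p in N1["P"])                                     # (3)
    ok &= all(N1["tname"][t] == N2["tname"][f_T[t]] and
              N1["tlb"][t] == N2["tlb"][f_T[t]] and
              N1["rnw"][t] == N2["rnw"][f_T[t]] for t in N1["T"])   # (4)
    ok &= all(N1["M"][p] <= N2["M"][f_P[p]] for p in N1["P"])       # (5)
    return ok
\end{verbatim}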
A rule in the DPO approach is given by three nets
called left hand side $L$, interface $K$ and right hand side $R$, respectively, and a span of two strict net morphisms $K \to L$ and $K\to R$.\\
Additionally, a match morphism $m: L\to N$ is required that identifies the
relevant parts of the left hand side in the given net $N$. Then a transformation step
$N \deriv{(r,m)}M$ via rule $r$ can be constructed in two steps.
Given a rule with a match $m:L\to N$, the gluing condition has to be satisfied in order to apply the rule at this match. This condition ensures that the result is again a well-defined net.
It is a sufficient condition for the existence and uniqueness of the so-called pushout complement, which is needed for the first step in a transformation.
In this case, we obtain a net $M$ leading to a direct transformation $N \deriv{(r,m)} M$ consisting of the following pushouts (1) and (2) in Fig. \ref{dpo}.
\begin{wrapfigure}[7]{r}{.4\textwidth}
$
\xymatrix{ L \ar[d]|m \ar@{}[dr]|{\bf (1)}
& K \ar[l] \ar[r] \ar[d] \ar@{}[dr]|{\bf (2)}
& R \ar[d]\\
N
& D \ar[l] \ar[r]
& M
}
$
\caption{Transformation of a net}
\label{dpo}
\end{wrapfigure}
Next we show that
decorated place/transition nets yield an $\M$-adhesive HLR category for $\M$ being the class of strict morphisms. Hence we obtain all the well-known results, such as transformation, local confluence and parallelism, application conditions, amalgamation, and so on.
\begin{lemma}[see \cite{Pad12}]
\label{l.mahlr}
The category $\cdPT$ of decorated place/transition nets is an $\M$-adhesive HLR category.
\end{lemma}
This construction, as well as a wealth of notions and results, is available since
decorated place/transition nets can be proven to form an $\M$-adhesive HLR category.
Hence we can combine one net together with a set of rules
leading to reconfigurable place/transition nets.
\begin{definition}[Reconfigurable Nets]
A reconfigurable decorated place/transition net $RN=(N,\R)$ is given by a decorated place/transition net $N$
and a set of rules $\R$.
\end{definition}
\section{Introduction}
Ladder approximations have been one of the most basic attempts to
simplify and truncate Dyson--Schwinger equations in field theory in
a still meaningful way \cite{george}. From a mathematical viewpoint
they simplify the combinatorics of the forest formula considerably,
and are solvable by a scaling Ansatz for sufficiently simple
kinematics.
Here, we discuss such a scenario, but iterate one- and two-loop
skeletons jointly, combining some analytic progress with a thorough
discussion of the underlying algebraic properties.
\subsection{Purpose of this paper}
The main purpose is to sum an infinite series of graphs based on the
iteration of two underlying skeleton graphs. We progress in a manner
such that our methods can be generalized to any countable number of
skeletons. We restrict to linear Dyson Schwinger equations, a case
relevant for theories at a fixpoint of the renormalization group. We
proceed using one-dimensional Mellin transforms, a privilege of
linearity of which we make full use. See \cite{BKDSE,dktor,KY,KY2} for the general approach.
\section{The Dyson--Schwinger Equation}
\subsection{The integral equation}
The equation which we consider is in massless Yukawa theory in four-dimensional Minkowski space $\mathbb M$, for
pedagogical purposes. We define a renormalized Green function describing the coupling of a scalar particle to
a fermion line by
\begin{eqnarray} G_R(a,\ln (-q^2/\mu^2)) & = &
1-a\int_{\mathbb M} \frac{d^4k}{i \pi^2}
\left\{ \frac{1}{k\!\!/}G_R(a,\ln (-k^2/\mu^2))\frac{1}{k\!\!/}\frac{1}{(k-q)^2}\right\}_- \nonumber\\
& & + a^2 \int_{\mathbb{M}} \frac{d^4k}{i \pi^2} \int_{\mathbb{M}} \frac{d^4l}{i \pi^2}
\left\{\frac{l\!\!/(l\!\!/+k\!\!/)G_R(a,\ln (-(k+l)^2/\mu^2))(l\!\!/+k\!\!/)(k\!\!/+q\!\!/)}{[(k+l)^2]^{2}l^2(k+q)^2k^2(l-q)^2}\right\}_-
, \label{DSEint}
\end{eqnarray}
where $\{\}_-$ indicates subtraction at $\mu^2=-q^2$,
so that $G_R(a,0)=1$:
\begin{eqnarray}
\left\{ G_R(a,\ln (-k^2/\mu^2))\right\}_{-}
= G_R(a,\ln (-k^2/\mu^2))-G_R(a,\ln (-k^2/-q^2)).
\end{eqnarray}
The kinematics are such that the fermion has momentum $q$ and the external scalar particle carries zero momentum.
The equation can graphically be represented as
\\
\\
{\epsfxsize=160mm\epsfbox{BKW0.eps}}
\\
\\
where the blob represents the unknown Green function $G_R(a,\ln (-q^2/\mu^2))$.
This linear Dyson--Schwinger equation can be solved by a
scaling solution, $L=\ln (-q^2/\mu^2)$, \begin{equation} G_R(a,L)=\exp{\{-\gamma_G(a)L\}}.\end{equation} Indeed, this
satisfies the desired normalization and leads to the equation \begin{equation}
\exp{\{-\gamma_G(a)L\}}=1+
\left(\exp{\{-\gamma_G(a)L\}}-1\right)\left[aF_1(\gamma_G)+a^2F_2(\gamma_G)\right],\end{equation}
where the two Mellin transforms are the functions determined by
\begin{equation}
F_1: \rho\to
- \int \frac{d^4k}{i \pi^2} \frac{1}{k^2(k-q)^2} \left( \frac{k^2}{q^2} \right)^{-\rho}
\end{equation} and
similarly
\begin{equation}
F_2: \rho\to
\int_{\mathbb{M}} \frac{d^4k}{i \pi^2} \int_{\mathbb{M}} \frac{d^4l}{i \pi^2}
\frac{l\!\!/(k\!\!/+q\!\!/)}{(k+l)^2 l^2(k+q)^2k^2(l-q)^2} \left[ \frac{(l+k)^2}{q^2} \right]^{-\rho}.
\end{equation}
Clearing the factor
$[\exp\{-\gamma_G(a)L\}-1]$ in this equation gives \begin{equation}
1=aF_1(\gamma_G)+a^2F_2(\gamma_G).\end{equation} It remains to determine
$F_1,F_2$ explicitly and solve this implicit equation for
$\gamma_G$ in terms of $a$. We do so in the next sections but first
discuss the perturbative structure behind this solution.
\subsection{The algebraic structure}
We can identify any graph in this resummation with a word in two
letters $u,v$ say, for example:
\begin{equation}\;\raisebox{-8mm}{\epsfysize=24mm\epsfbox{BKW1_mod.eps}}\;\end{equation}
We have renormalized Feynman rules $\phi_R$ such that \begin{equation}\phi_R(u)(L)=-L\lim_{\rho\to 0}\rho F_1(\rho),\end{equation}
and \begin{equation}\phi_R(v)(L)=-L\lim_{\rho\to 0}\rho F_2(\rho).\end{equation}
The Green function $G_R(a,L)$ is obtained as the
evaluation by $\phi_R$ of the fixpoint of the combinatorial Dyson--Schwinger equation
\begin{equation} X(a)=\mathbb{I}+aB_+^u(X(a))+a^2B_+^v(X(a)).\end{equation}
We have
\begin{equation}
X(a)=1+au+a^2(uu+v)+\ldots=\exp_{\curlyvee}{[au+a^2v]},\end{equation}
where $\curlyvee$ is the shuffle \begin{equation} H_{\rm lin}\times H_{\rm
lin}\to H_{\rm lin},\end{equation} \begin{equation} B_+^i(w_1)\curlyvee B_+^j(w_2)=B_+^i(w_1\curlyvee B_+^j(w_2))+
B_+^j(w_2\curlyvee B_+^i(w_1)),\end{equation}
$\forall i,j\in\{u,v\}$. Note that for example $u\curlyvee u=2 uu$.
The two maps $B_+^i$ are Hochschild one-cocycles, and $X(a)$ is group-like: \begin{equation} \Delta X(a)=X(a)\otimes X(a).\end{equation}
Correspondingly, decomposing $X(a)=1+\sum_{k\geq 1}a^k c_k$, we have
\begin{equation} \Delta c_k=\sum_{j=0}^k c_j\otimes c_{k-j},\end{equation}
with $c_0=1$.
This is a decorated version of the Hopf algebra of undecorated ladder trees $t_n$ with coproduct $\Delta t_n=\sum_{j=0}^n t_j\otimes t_{n-j}$.
Feynman rules become iterated integrals as
\begin{equation} \phi_R(B_+^i(w))(L)=\int \left\{\phi(w)(\ln k^2/\mu^2)d\mu_i(k)\right\}_-,\end{equation}
where $d\mu_i$ is the obvious integral kernel for $i\in u,v$, cf.\ (\ref{DSEint}).
Apart from the shuffle product,
we have disjoint union as a product which makes the Feynman rules into a character
\begin{equation} \phi(w_1\cdot w_2)=\phi(w_1)\phi(w_2).\end{equation}
These two commutative products $\curlyvee,\cdot$ allow to express the primitive elements associated with shuffles of letters $u,v$, see for example \cite{Bigrad}:
\begin{thm}
The primitive elements are given by polarization of the primitive elements $p_n$ of the undecorated ladder trees $t_n$.
These are given by $p_n=\frac{1}{n}[S\star Y](t_n)$.
\end{thm}
Here, $Y$ denotes the grading operator, defined by $Y(t_k) = k t_k$ and the star product is defined as usual
by $O_1 \star O_2 = \cdot \circ ( O_1 \otimes O_2 ) \circ \Delta$.
Polarization of the undecorated primitive elements $p_n$ means that we decorate each vertex of $p_n$ with $u+v$.
The set ${\mathcal P}(u,v)$ of primitive elements is hence spanned by elements $p_{i_u,i_v}$, where the integers $i_u,i_v$ count the number of letters $u$ and $v$ in the polarization of $t_{i_u+i_v}$.
For example the primitive element corresponding to the undecorated ladder tree $t_2$ is $p_2=t_2-\frac{1}{2}t_1t_1$.
Polarization yields
\begin{eqnarray}
& & p_{2,0} = \frac{1}{2} ( u \curlyvee u - u \cdot u ) = u u - \frac{1}{2} u \cdot u,
\;\;\;
p_{0,2} = \frac{1}{2} ( v \curlyvee v - v \cdot v ) = v v - \frac{1}{2} v \cdot v,
\\
& & p_{1,1} = u \curlyvee v - u \cdot v = u v + v u - u \cdot v.
\nonumber
\end{eqnarray}
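Since the coproduct of the ladder trees is completely explicit, the primitive
elements $p_n=\frac{1}{n}[S\star Y](t_n)$ can be computed mechanically. The
following small Python sketch (monomials in the $t_k$ are represented as
sorted tuples of tree sizes) reproduces $p_2=t_2-\frac{1}{2}t_1\cdot t_1$.
\begin{verbatim}
from fractions import Fraction
from collections import defaultdict

def mul(m1, m2):            # product of two monomials in the t_k
    return tuple(sorted(m1 + m2))

def S(n, cache={0: {(): Fraction(1)}}):
    """Antipode on t_n via S(t_0)=1 and sum_j S(t_j) t_{n-j} = 0."""
    if n not in cache:
        acc = defaultdict(Fraction)
        for j in range(n):  # S(t_n) = - sum_{j<n} S(t_j) t_{n-j}
            for mono, c in S(j).items():
                acc[mul(mono, (n - j,))] -= c
        cache[n] = dict(acc)
    return cache[n]

def primitive(n):
    """p_n = (1/n)[S * Y](t_n) = (1/n) sum_j S(t_j) (n-j) t_{n-j}."""
    acc = defaultdict(Fraction)
    for j in range(n):      # the term j = n drops out since Y(t_0)=0
        for mono, c in S(j).items():
            acc[mul(mono, (n - j,))] += Fraction(n - j, n) * c
    return {m: c for m, c in acc.items() if c}

print(primitive(2))  # {(2,): 1, (1, 1): -1/2}: t_2 - (1/2) t_1 t_1
\end{verbatim}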
\section[Mellin]{The Mellin transforms}
The general structure of the Mellin transform can be obtained from
quite general considerations. The crucial input comes from
powercounting and conformal symmetry. \begin{thm} The Mellin transforms above are invariant
under the transformation $\rho\to 1-\rho$.
\end{thm}
Proof: Explicit computation. We give it here for $F_1$. We assume
$\Re\rho>0$ so that $F_i$ is well defined as a function. Then, the
conformal inversion $k_\mu\to k^\prime_\mu=k_\mu/k^2$ gives explicitly
\begin{equation}
- \int \frac{d^4k^\prime}{i \pi^2}
\frac{1}{{k^\prime}^2(k^\prime-q)^2} \left( \frac{{k^\prime}^2}{q^2} \right)^{-1+\rho}
\end{equation}
for $F_1$. $F_2$ can be treated similarly by conformal inversion in both Minkowski spaces.\hfill $\Box$
\subsection{The Mellin transform of the one-loop kernel}
This Mellin transform is readily integrated to deliver \begin{equation} F_1(\rho)=
-\frac{1}{\rho(1-\rho)},\end{equation} exhibiting the expected conformal symmetry.
\subsection{The Mellin transform of the two-loop kernel}
Determining this Mellin transform is the core part of this paper. We
proceed by making use of the advantage that we remain in four
dimensions, and use results of \cite{BGK}.
We are interested in the integral
\begin{equation}
F_2(\rho) =
(-q^2)^\rho \int_{\mathbb{M}} \frac{d^4k}{i \pi^2} \int_{\mathbb{M}} \frac{d^4l}{i \pi^2}
\frac{l\!\!/(k\!\!/+q\!\!/) [-(l+k)^2]^{-\rho}}{(k+l)^2 l^2(k+q)^2k^2(l-q)^2}.
\end{equation}
Integration is over the eight dimensional cartesian product of two Minkowski spaces furnished with a quadratic form \begin{equation} a^2=a_0^2-a_1^2-a_2^2-a_3^2.\end{equation}
A simple tensor calculus delivers
\begin{equation}
F_2(\rho)=\frac{1}{2}\left\{-2G_4(1,1+\rho)G_4(1,1+\rho)+I_6(1,1,\rho,1,1,2-\rho)+I_6(1,1,1+\rho,1,1,1-\rho)\right\},
\end{equation}
where \begin{equation} G_D(a,b)=\frac{\Gamma(a+b-D/2)\Gamma(D/2-a)\Gamma(D/2-b)}{\Gamma(a)\Gamma(b)\Gamma(D-a-b)},\end{equation}
so that $G_4(1,1+\rho)=\frac{1}{\rho(1-\rho)}$. We use the notation of \cite{BGK} for $I_6$.
In this notation, we have $I_6=\overline{I_6}$.
Setting $u\to 0$ and $v=-\rho$ or $v=1-\rho$ we can determine the two $I_6$ integrals as a limit $u\to 0$ in Eq.(19)(op.cit.)
as
\begin{equation} I_6(1,1,1-v,1,1,1+v)=8\sum_{n=1}^{\infty}n\zeta_{2n+1}(1-2^{-2n})v^{2n-2},\end{equation}
and similarly for $v=1-\rho$.
We hence find the above DSE in the form
\begin{equation}
1=-a\frac{1}{\gamma_G(1-\gamma_G)}-a^2\left\{ \frac{1}{\gamma_G^2(1-\gamma_G)^2}-4\sum_{n=1}^\infty n\zeta_{2n+1}(1-2^{-2n})\left[\gamma_G^{2n-2}+(1-\gamma_G)^{2n-2}\right]\right\}.\end{equation}
\section{The solution}
We can solve for $\gamma_G$ in the above in two different ways, expressing the solution as an infinite product or via the logarithmic derivative of the $\Gamma$ function.
\subsection{Solution as an infinite product}
We have:
\begin{equation} G_R(a,L)=\exp{\left\{\sum_{p\in {\mathcal P}(u,v)}a^{|p|}\phi_R(p)(L)\right\}}.\end{equation}
Here, the sum is over all primitives $p\in {\mathcal P}(u,v)$, where ${\mathcal P}$ is the set of primitives assigned to any tree $t_n$ decorated arbitrarily by letters
in the alphabet $u,v$, as described above. The proof is an elementary exercise in the Taylor expansion of the two Mellin transforms.
Note that $\phi_R(p)(L)$ is linear in $L$ for primitive $p$, $\partial^2_L \phi_R(p)(L)=0$. We hence find for $\gamma_G(a)$
\begin{equation} \gamma_G(a)=-\frac{\partial \ln G}{\partial L}|_{L=0}=-\sum_p a^{|p|}\phi_R(p)/L.\end{equation}
Convergence of the sum is covered by the implicit function theorem, which provides for $\gamma_G$ through the two Mellin transforms in the DSE. We hence proceed to the second way of expressing the solution.
\subsection{Solution via the $\psi$-function}
We can express the DSE using the logarithmic derivative of the $\Gamma$ function and we obtain
\begin{eqnarray}
1 & = & -a\frac{1}{\gamma_G(1-\gamma_G)}
-a^2\left\{
\frac{1}{\gamma_G^2(1-\gamma_G)^2}
\right. \\
& & \left.
+ \frac{1}{\gamma_G} \left[ \psi'\left(1+\gamma_G\right) - \psi'\left(1-\gamma_G\right) \right]
+ \frac{1}{1-\gamma_G} \left[ \psi'\left(2-\gamma_G\right) - \psi'\left(\gamma_G\right) \right]
\right. \nonumber \\
& & \left.
- \frac{1}{2\gamma_G} \left[ \psi'\left(1+\frac{\gamma_G}{2}\right) - \psi'\left(1-\frac{\gamma_G}{2}\right) \right]
- \frac{1}{2(1-\gamma_G)} \left[ \psi'\left(\frac{3-\gamma_G}{2}\right) - \psi'\left(\frac{1+\gamma_G}{2}\right) \right]
\right\}.
\nonumber
\end{eqnarray}
Here
\begin{equation}
\psi'(x) = \frac{d^2}{dx^2} \ln \Gamma(x).
\end{equation}
Again, the two-loop solution shows explicitly the conformal symmetry $\gamma_G \rightarrow 1-\gamma_G$.
Note that the apparent second order poles at $\gamma_G=0$ and $\gamma_G=1$ on the rhs are only first order poles upon using standard properties
of the logarithmic derivative of the $\Gamma$ function, as it has to be.
This provides an implicit equation for $\gamma_G$, which can be solved numerically.
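As an illustration, the following is a minimal numerical sketch (SciPy
assumed) which solves this implicit equation for $\gamma_G$ at a given
coupling $a$; the reflection formula is used so that the trigamma function
can also be evaluated at negative arguments, and the bracket for the root
search is an assumption guided by the one-loop solution
$\gamma_G=(1-\sqrt{1+4a})/2<0$.
\begin{verbatim}
import numpy as np
from scipy.special import polygamma
from scipy.optimize import brentq

def trigamma(x):
    """psi'(x); for x<0 use psi'(x)+psi'(1-x)=pi^2/sin^2(pi x)."""
    if x > 0:
        return polygamma(1, x)
    return np.pi**2 / np.sin(np.pi * x)**2 - polygamma(1, 1.0 - x)

def residual(g, a):
    """(rhs of the implicit equation) - 1, as a function of gamma_G."""
    rhs = -a / (g * (1 - g)) - a**2 * (
        1 / (g**2 * (1 - g)**2)
        + (trigamma(1 + g) - trigamma(1 - g)) / g
        + (trigamma(2 - g) - trigamma(g)) / (1 - g)
        - (trigamma(1 + g / 2) - trigamma(1 - g / 2)) / (2 * g)
        - (trigamma((3 - g) / 2) - trigamma((1 + g) / 2))
          / (2 * (1 - g)))
    return rhs - 1.0

a = 0.1
print(brentq(residual, -0.5, -0.02, args=(a,)))
\end{verbatim}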
\section{Conclusions}
We determined the Mellin transform of the two-loop massless vertex
in Yukawa theory. We used it to resum a linear Dyson--Schwinger
equation. Following \cite{BKDSE,dktor,KY}, more complete
applications to non-linear Dyson--Schwinger equations will be
provided elsewhere. Techniques to deal with non-linearity have
indeed been developed recently \cite{BKDSE,dktor,KY}, and involve
transcendental functions even upon resummation of terms from the
first Mellin transform \cite{BKDSE}. In the non-linear case one gets
indeed results very different from scaling, as has been demonstrated
early on in field theory \cite{Roman}. Finally, we note that the
same two-loop Mellin transform also appears in setting up the full
DSE in other renormalizable theories \cite{KY2}.
\section*{Acknowledgments} We thank Roman Jackiw for pointing his thesis \cite{Roman} out
to us.
\section{\uppercase{Introduction}}
\label{sec:introduction}
Mathematical models of human posture control are used in the analysis of experiments as well as in the control of humanoid robots. For this reason, system identification techniques have been developed for the identification of human balance as a dynamic system with feedback control \cite{vanderKooij2007,van2005comparison,van2006disentangling,goodworth2018identifying,mergner2010,engelhart2014impaired,pasma2014impaired,jeka2010dynamics,boonstra2014balance}. Most of the studies performed on human posture control exploit linear models such as the \textit{independent channel} model \cite{peterka2002sensorimotor}, and in general assume a linear and time-invariant behavior for human posture control \cite{engelhart2016comparison}. Linear models have the advantage of being simple to analyze and relatively easy to fit to the data. However, experiments reveal that human posture control exhibits nonlinearities such as dead-bands and gain nonlinearities. Nonlinear models are more complex to fit to human data and, in the general case, expensive iterative procedures must be used. In this work we propose a deep learning system to identify the parameters of a nonlinear bio-inspired posture control system, the DEC (\textit{Disturbance Estimation and Compensation}). The obtained set of parameters represents a concise and expressive representation of the outcome of a posture control experiment that can be used for scientific studies and as a basis of future diagnostic tools for clinicians. The proposed technique is based on \textit{Convolutional Neural Networks}, CNN. Such deep learning systems have recently been applied with promising results to human movement analysis, e.g. in \cite{icinco19}, but so far they have not been tools typically used in posture control analysis.
\section{\uppercase{Methods}}
\subsection{Posture Control Scenario: Support Surface tilt}
The scenario considered here models a human (or humanoid) balancing on a tilting support surface. The support surface tilt $\alpha_{FS}$ represents the input of the system and is the same for all the simulations. The profile of the tilt of the support surface is the \textit{pseudo-random ternary sequence}, PRTS, shown in Fig. 1 (C). Such a stimulus is used in human experiments because, thanks to its pseudo-random nature, it is not predictable for the subject~\cite{peterka2002sensorimotor}. Furthermore, it is composed of a sequence of velocity steps suitable to excite the dynamics of the system over several frequencies. The output of the system is the sway of the COM $\alpha_{BS}$.
\subsection{Human and Humanoid Posture Control: The DEC model}
\begin{figure}[t]
\centering
\includegraphics[width=1.00\columnwidth]{Figures/FIG1ABC.pdf} \\
\caption{A scheme of a controller based on the DEC concept. (A) the sensory inputs are used to reconstruct the physical disturbances acting on the body. Such disturbances are compensated feeding them in the form of an \textit{angle equivalent} as input to the servo controller. (B) The single inverted pendulum model used to simulate human posture control. The kinematics are described by the COM sway angle $\alpha_{BS}$ and by the support surface tilt $\alpha_{FS}$. (C) The \textit{Pseudo-Random Ternary Signal}, PRTS, used as reference for the support surface tilt.}
\label{DEC}
\end{figure}
The DEC concept provides a model of the human postural control mechanisms \cite{mergner2010,lippi2017human}. This approach has been applied to multiple DoF robots \cite{lippi2017human,zebenay2015human,ott2016good,lippi2018prediction,Hettich2013,hettich2015human}. In this work the implications of a modular control architecture will not be covered because the proposed single inverted pendulum (SIP) model consists of only one control module. A general scheme describing the DEC control is shown in Fig. \ref{DEC}.
In general a posture control system based on the DEC concept is implemented as:
(1) A servo loop, implemented as a PD controller (neural controller in Fig. \ref{DEC}). In the presented 1DoF case the controlled variable is the sway of the body center of mass with respect to the gravitational vertical passing through the ankle joint, $\alpha_{BS}$, where $BS$ stands for \textit{Body in Space}. (2) The disturbance estimators, in general, identify support surface rotation and \textit{translation}, external contact forces, and field forces such as gravity. The sensory channels are shown in Fig. \ref{DEC} as \textit{Vestibular}, \textit{Proprioceptive}, and \textit{Force}. The disturbance estimates are fed into the servo so that the joint torque on-line compensates for the disturbances while executing the desired movements. The \textit{lumped delay} in Fig. \ref{DEC} represents all the delay effects that in humans (but also in real-world humanoids) are distributed \cite{antritter2014stability,G.Hettich2014}.
The model used in this work considers gravity and support surface tilt as disturbances. Specifically the estimators are defined as follows:\\
\underline{Gravity estimator}
\begin{equation}
T_G = G_g \alpha_{BS}^{vest}
\end{equation}
where $G_g$ is a gain associated with the estimator. In the framework of the DEC control the disturbances are represented by an angle equivalent, i.e. the body lean that would produce, in a linear approximation, the disturbance torque as gravitational torque. This makes all the values that are summed to obtain the input of the neural controller (error and disturbances) expressed in radians. In the specific case of the gravitational torque the equivalent angle is the body lean $\alpha_{BS}^{vest}$. With an ideal $G_g$ of $1$ and a proportional gain $K_p = mgh$, where $m$ is body mass, $g$ gravity acceleration and $h$ is the height of the COM respect to the ankle joint, the gravity would be exactly compensated. When fitting the model to human behavior the gravity appears to be slightly under-compensated \cite{G.Hettich2014,asslander2015visual}. In this work $G_g$ will be set to 1, and hence the gravity compensation gain will be determined by $K_p$.
The signal $\alpha_{BS}^{vest}$ comes from the vestibular system and it is affected by a \textit{red noise} $\nu(N_V)$ with frequency power density $N_v^2/f$, where $N_v$ is a parameter of the system.\\
\underline{Support surface tilt estimator}
\begin{equation}
\alpha_{FS}=\int_0^t f_{\theta} \left( \frac{d}{dt} \alpha_{BS}^{vest} + \frac{d}{dt} \alpha_{BF}^{prop} \right)
\label{fs}
\end{equation}
where $\alpha_{BF}^{prop}$ is the ankle joint angle signal from proprioception. $BF$ stands for \textit{Body-to-Foot}. In some implementations of the DEC concept the integral in eq. \ref{fs} is implemented as a leaky integrator \cite{lippi2017human}, in this work it is set to zero at the beginning of the simulation. The function $f_{\theta}$ is a dead-band threshold defined as
\begin{equation}
f_{\theta}(\alpha) = \left\{
\begin{array}{llc}
\alpha + \theta & if & \alpha< -\theta \\
0 & if & -\theta< \alpha< \theta \\
\alpha - \theta & if & \alpha> \theta \\
\end{array}
\right.
\end{equation}
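For concreteness, the dead-band admits a direct transcription in code; the following minimal Python sketch mirrors the definition above.
\begin{verbatim}
def f_theta(alpha, theta):
    """Dead-band: zero on [-theta, theta], shifted identity outside."""
    if alpha < -theta:
        return alpha + theta
    if alpha > theta:
        return alpha - theta
    return 0.0
\end{verbatim}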
The threshold function is added to reproduce the behavior observed in humans \cite{T.Mergner2009,Mergner2003}. The reconstructed body-in-space variable used for the servo controller is then
\begin{equation}
\alpha_{BS}^{servo}=\alpha_{FS}-\alpha_{BF}^{prop}
\end{equation}
affected by the non-linearity introduced by $f_{\theta}$. Considering that in the present scenario the simulated agent aims to maintain the upright stance, i.e. the reference signal is zero, the total torque commanded by the servo controller is:
\begin{equation}
\tau_{active}=-e^{-s\Delta } (K_p+s K_d)\left( T_g+ \alpha_{BS}^{servo}\right)
\end{equation}
where $K_d$ is the derivative coefficient for the PD controller (for the sake of brevity in this equation and the following the derivatives, the integrators and the delay are represented using Laplace transform variable $s$ so that they can be conveniently expressed as a multiplicative operator, although the rest of the formula refers to operations in time domain). $\Delta$ is the lump delay. Notice that the derivative component is acting also on gravity compensation, representing a sort of anticipation of the disturbance. There is also a passive torque acting on the ankle joint defined as:
\begin{equation}
\tau_{passive}=-(K_p^{pass}+s K_d^{pass})\left(\alpha_{BF}^{prop}\right)
\end{equation}
In order to show the role of all the parameters (listed in Table \ref{tab:parameters}, here highlighted in blue) the total torque can be written as:
\begin{equation}
\begin{array}{l}
\tau_{ankle}= \tau_{active}+\tau_{passive} \\ \\
=-e^{-s\color{blue}{\Delta} }({\color{blue}{K_p}}+s {\color{blue}{K_d}}) \\ \\
\left(\alpha_{BS}^{vest} + \alpha_{FS} -\frac{1}{s} f_{\color{blue}{\theta}} \left( s (\alpha_{BS}^{vest} +\nu({\color{blue}{N_v}})) + s \alpha_{BF}^{prop} \right) \right) \\ \\
-({\color{blue}{K_p^{pass}}}+s {\color{blue}{K_d^{pass}}})\left(\alpha_{BF}^{prop} \right)
\end{array}
\end{equation}
\subsection{The Training set}
The training and the validation set for the neural network have been generated with random parameters from uniform distributions (the ranges are shown in Table~\ref{tab:parameters}). A set of parameters is used as a sample only if the behavior it produces is stable: simulations with $\alpha_{BS}$ amplitude larger than $5^{\circ}$ are not considered realistic balancing scenarios and are discarded. Most of the stable simulations obtained with such random sampling were associated with a relatively small COM sway; the amplitude distribution is shown in Fig. \ref{fig:HIST} in blue. In order to obtain a data-set including larger oscillations, representative of a \textit{relaxed} human behavior, the data-set was enriched with samples produced by repeating the simulations with larger outputs ($> 0.05\ rad$) with parameters subject to a relatively small modification ($\approx 10\%$ of the range). The resulting enriched data-set is shown in Fig. \ref{fig:HIST} in orange. The performance of the neural network on the two sets was almost the same and hence only the enriched data-set is considered. The distribution of realistic human data is discussed in more detail in Section \ref{sec:conclusion}. The resulting data-set included $12766$ samples. Half of the samples are used for training, half as validation set.
The neural network is trained to identify the simulation parameters on the basis of body sway profiles. The \textit{Target} for the training is represented by the vector of parameters, centered with respect to the mean and normalized by the standard deviation (both computed on the training set). The \textit{Input} is a convenient representation of the output. The simulation was performed with a fixed integration step of $1 \; ms$ and produced $12100$ $\alpha_{BS}$ samples with a resolution of $10 \; ms$. In order to adapt the signal to the convolutional network used, the input was transformed into a two-channel image, with the channels representing, respectively, the modulus and the phase of the FFT of the signal computed on non-overlapping time windows. Empirical tests have shown that the best performance was achieved with a time window of $110$ samples, resulting in a square $110 \times 110$ two-channel image (Fig. \ref{fig:Neuralarchitecture} above).
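The conversion of a sway recording into this two-channel image can be
sketched with NumPy as follows; the orientation of the axes and the absence
of any channel normalization are assumptions.
\begin{verbatim}
import numpy as np

def sway_to_image(alpha_bs, win=110):
    """12100-sample sway signal -> (110, 110, 2) image:
    channel 0 = |FFT|, channel 1 = phase, per time window."""
    segs = alpha_bs.reshape(-1, win)    # 110 windows of 110 samples
    spec = np.fft.fft(segs, axis=1)     # FFT of each window
    img = np.stack([np.abs(spec), np.angle(spec)], axis=-1)
    return img.transpose(1, 0, 2)       # frequency x time x channels

x = np.random.randn(12100)              # placeholder sway trace
print(sway_to_image(x).shape)           # (110, 110, 2)
\end{verbatim}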
\begin{table*}[t!]
\center
\renewcommand{\arraystretch}{2}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
Parameter & Symbol & min & max & mean & std & unit \\ \hline
\rowcolor{Gray}
Active proportional gain & $K_p$ & 503.3943 & 1258.4857 & 811.2951 & 338.0956 & $\frac{N \cdot m}{rad}$ \\
Active derivative gain & $K_d$ & 125.8486 & 377.5457 & 284.5640 & 122.7999 & $\frac{N \cdot m \cdot s}{rad}$ \\
\rowcolor{Gray}
Passive stiffness & $K_{p_{pass}}$ & 62.9243 & 377.5457 & 312.2075 & 102.1054 & $\frac{N \cdot m}{rad}$ \\
Passive damping & $K_{d_{pass}}$ & 62.9243 & 188.7729 & 174.3144 & 68.5447 & $\frac{N \cdot m \cdot s}{rad}$ \\
\rowcolor{Gray}
Vestibular noise gain & $N_V$ & 0 & 1.0000 & 0.4695 & 0.2928 & $1$ \\
Foot rotation velocity threshold & $\theta_{vfs}$ & 0 & 0.0052 & 0.0003 & 0.0124 & $rad / s$ \\
\rowcolor{Gray}
Lumped delay & $\Delta$ & 0 & 0.2400 & 0.1210 & 0.0672 & $s$ \\ \hline
\end{tabular}
\caption{Simulation parameters with an overview of their distributions in the examples used as training and validation set. \textit{Min} and \textit{Max} represent the minimum and the maximum values of the uniform distributions used to generate the samples. \textit{Mean} and \textit{std} (standard deviation) are computed on the the selected simulations that resulted in a stable behavior (i.e. maximum $\alpha_{BS}$ oscillation under $5^{\circ}$) and included the \textit{enrichment} (see text).}
\label{tab:parameters}
\end{table*}
\begin{figure}[ht]
\centering
\includegraphics[width=1.00\columnwidth]{HIST.pdf}
\caption{Peak-to-peak body sway amplitude distribution}
\label{fig:HIST}
\end{figure}
\subsection{The Neural Network}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.00\columnwidth]{Figures/Neuralarchitecture.pdf}
\caption{(Top) Example of the appearance of an input sample. The image has two channels, red and green, associated to the modulus the phase of the FFT of the body sway over a time window, respectively. (Bottom) Neural network architecture (below).}
\label{fig:Neuralarchitecture}
\end{figure}
The neural network architecture is shown in Fig.~\ref{fig:Neuralarchitecture}, where the layers are listed. The network has been implemented with Deep Learning Toolbox\texttrademark. Such a network is not designed for 1-D convolution and hence the input is transformed into an image. The filters of the convolutional layers apply the same weights to different parts of the input. The axes of the input image can be seen as \textit{time} and \textit{frequency}. This means that the convolutional filters allow the network to recognize patterns translated in time (horizontal) and in frequency (vertical). While the former invariance has the understandable meaning that the network is able to recognize the same movement patterns appearing at different times, as expected from a 1-D CNN applied to time series, the latter has no straightforward physical explanation. An example of the activation of the filters in the first layer is shown in Fig. \ref{fig:Activation}.
\begin{figure}
\centering
\includegraphics[width=1.00\columnwidth]{Figures/Activation.png}
\caption{Graphic rendering of the first convolutional layer of the network. Each image in the $8 \times 8$ grid represent the response of one of the 64 filters to the input image. Lighter pixels are associate with a larger activation than darker pixels.}
\label{fig:Activation}
\end{figure}
The network has been trained using \textit{stochastic gradient descent with momentum} as policy. The training was set to a limit of 100 epochs. The data-set is divided into two equally sized subsets of $6383$ samples used as training set and validation set, respectively. The loss function is the \textit{Mean Squared Error}, MSE, consistent with the regression task. The evolution of the MSE through the training iterations is shown in Fig. \ref{fig:TrainingLoss}.
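A re-implementation of this training setup (SGD with momentum, MSE loss,
100 epochs) might look as follows in PyTorch; apart from the 2-channel
$110 \times 110$ input, the 64-filter first convolution and the 7 regression
outputs, the intermediate layers are placeholders, since the exact
architecture is only given in Fig.~\ref{fig:Neuralarchitecture}.
\begin{verbatim}
import torch
import torch.nn as nn

model = nn.Sequential(                   # placeholder architecture
    nn.Conv2d(2, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(32 * 4 * 4, 7),            # 7 normalized parameters
)
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.MSELoss()                   # regression loss

def train(loader, epochs=100):           # loader yields (image, target)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
\end{verbatim}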
\begin{figure}[ht]
\centering
\includegraphics[width=1.00\columnwidth]{TrainingLoss.pdf}
\caption{Training and validation loss through the iterations}
\label{fig:TrainingLoss}
\end{figure}
\section{Results}
\subsection{Validation Set}
The validation set loss, MSE, is $0.2851$. It is comparable to the one obtained for the training set, which was $0.2250$. Figure \ref{fig:TrainingLoss} shows that the loss function is stable for several iterations on both the training set and the validation set. Thanks to the availability of a large enough number of samples, the system does not show signs of over-fitting. An example of the output obtained using a specific sample is shown in Fig. \ref{Res}.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.00\columnwidth]{Figures/TestValidation.pdf}
\caption{The CNN applied to a specific sample: (Top) network output and target sample, and (Bottom) the associated $\alpha_{BS}$.}
\label{Res}
\end{figure}
\subsection{Double inverted pendulum case}
The system is now applied to DIP data produced with a simulation with default parameters~\cite{Hettich2013}.
In Fig. \ref{Dip2Sip} the control parameters for the ankle module used in the simulation are compared with the ones identified by the CNN. The accuracy of the result is, as expected, slightly worse than the one obtained using the validation set (with an SE (Squared Error) for the normalized parameters of $10.4783$, as compared to the smaller validation set MSE of $0.2851$).
\begin{figure}[htbp]
\centering
\includegraphics[width=1.00\columnwidth]{Figures/DIP2SIP.pdf}
\caption{The CNN applied to a sample produced with a double inverted pendulum (DIP) model: (Top) network output compared with the parameters applied to the control of the ankle in the DIP model. The hip has also its set of parameters, not displayed. In (Bottom) the $\alpha_{BS}$ trajectory produced by the DIP is compared with the one of a SIP using the parameters predicted by the CNN.}
\label{Dip2Sip}
\end{figure}
\subsection{Identification of Human posture control parameters}
The CNN is here applied to human data. A single trial from one subject is used. As in the previous example, the experiment does not include any device to block the hip, so that the center of mass sway is influenced by the ankle-hip coordination. The identified parameters and the simulated body sway are shown in Fig. \ref{fig:ResultHuman}. The peak-to-peak sway amplitude exhibited by the human subject was $2.8533^{\circ}$. This example suggests that it is beneficial to include larger body sway examples in the training-set. The result in Fig. \ref{fig:ResultHuman} shows a good, although not perfect, similarity between the simulation and the original data.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.00\columnwidth]{Figures/ResultHuman.pdf}
\caption{The CNN applied to data from a human experiment. (Top) The identified parameters are shown. (Bottom) The original input is compared with the input produced by the simulation using the parameters identified by the CNN.}
\label{fig:ResultHuman}
\end{figure}
\section{\uppercase{Conclusions and Future Work}}
\label{sec:conclusion}
In this work we presented a method for posture control parameter identification based on CNN. The system provides an efficient way to fit a model of the non-linear bio-inspired control system DEC to experimental data. This represents an advantage with respect to previous solutions relying on iterative methods. Since the training set is noisy by design and the noise power is one of the parameters, the system provides an average error that is non-zero. The obtained parameters can be used as a description of the analyzed data, as a guideline for more accurate fitting procedures, and as features for further analysis algorithms; for example, a diagnostic tool could be implemented using an SVM trained with the parameter vector as input.
The training set is produced with parameters from uniform distributions, filtered with the empirically checked constraint that they produce a stable simulation. In order to obtain more \textit{human-like} examples, the data-set has been enriched with samples producing larger body sways. Future work may introduce preliminary studies on the distribution of human data, generating some training samples from samples obtained by fitting the model on the experimental data. The CNN can also be tested \textit{a posteriori} by comparing the distribution of the parameters it produces on the validation set with the expected distribution on real data. This can help the process of choosing between different possible network hyperparameter sets, as shown in \cite{sforza2011rejection,sforza2013support}.
The SIP model used in this work proved to be suitable to describe the analyzed posture control scenario, even in the sub-optimal case of identifying the control parameters of the ankle joint in a DIP model. Future work will aim at the design of a solution that also identifies the parameters controlling the hip joint, which is known to have a relevant function in balancing \cite{G.Hettich2014,horak1986central,park2004postural}.
The modeling and analysis of human posture control and balance both inform and draw inspiration from the study of humanoid robot control, e.g. \cite{icinco07,icinco12,zebenay2015human,10.3389/fnbot.2018.00021}, and can be used to improve the design of assistive systems and devices \cite{icincoChugo.2019,Mergner2019}. The proposed deep-learning-based tool will also be published as a tool to benchmark humanoids and wearable devices \cite{torricelli2020benchmarking}, within the framework of the COMTEST project \cite{Lippi2019}, which aims to make a posture control testbed available for the humanoid robotics community.
\section*{\uppercase{Acknowledgements}}
\setlength{\intextsep}{-1pt}%
\setlength{\columnsep}{15pt}%
\begin{wrapfigure}{l}{0.08\columnwidth}
{\includegraphics[width=0.12\columnwidth]{Logo2.pdf}}
\label{LOGO}
\end{wrapfigure}
\noindent This work is supported by the project COMTEST, a sub-project of EUROBENCH (European Robotic Framework for Bipedal Locomotion Benchmarking, www.eurobench2020.eu) funded by H2020 Topic ICT 27-2017 under grant agreement number 779963.
\bibliographystyle{apalike}
{\small
\section{Introduction}
\ifx0\undefined
\else
\thispagestyle{fancy}
\fi
Symbolic controller synthesis is attracting
considerable attention in the last two decades,
see
\cite{Tabuada09,
ReissigWeberRungger17,
KhaledZamani19,
MacoveiciucReissig19}
and the many references in these works.
This synthesis scheme takes
a plant and a control specification formulation
as input and attempts to solve
the resulting control problem algorithmically
without an intermediate intervention
of the engineer.
In case the process terminates successfully,
a controller is returned possessing
the formal guarantee that
the resulting closed loop
meets the given specification.
A subsequent verification
routine is not required.
Processable plants are sampled-data control systems
whose underlying continuous-time dynamics are given by
nonlinear differential equations/inclusions.
Specifications can be in principle quite arbitrary.
However, efficient algorithms have been presented
only for safety, reach-avoid \cite{MalerPnueliSifakis95}
and GR(1)-specifications \cite{BloemJobstmanPitermanPnueliSaar12}.
(Applications of these algorithms in
symbolic control can be also found in
\cite{Girard11,ReissigWeberRungger17,HsuMajumdarMallikSchmuck18}.)
In addition, the basic theory has been recently extended
to optimal control problems so that
near-optimal controllers with respect
to a given non-negative cost function can be synthesized \cite{ReissigRungger13,ReissigRungger18}.
Though making most of
the empirical synthesis techniques obsolete in principle,
symbolic controller synthesis
has not become a standard technique until now.
The main problem is the ``curse of dimensionality",
from which this approach suffers.
I.e. memory consumption and runtime
are growing exponentially
with increasing state space dimension
of the given plant.
The runtime is due to, among other things,
the solution of initial value problems,
typically millions
during the first out of
two steps of the synthesis scheme.
Saving data generated from these solutions causes
the huge memory consumption.
The data is then used in the second step, where aforementioned
algorithms may be used to solve an auxiliary discrete problem.
The latter steps classically execute sequentially,
both in terms of theory and software implementation.
To raise runtime performance methods for parallel execution
(in theory \cite{RunggerStursberg12,HsuMajumdarMallikSchmuck18,MacoveiciucReissig19} and
in implementation \cite{KhaledZamani19})
have been presented recently.
More concretely,
the pioneering work \cite{KhaledZamani19} indicates
the potential boost that can be achieved
by utilizing high-performance computing platforms and
\cite{RunggerStursberg12,HsuMajumdarMallikSchmuck18,MacoveiciucReissig19}
present theories for concurrent execution
of the two steps.
Against this backdrop,
the contribution of this paper is twofold.
Firstly,
we extend the class of solvable
optimal control problems to problems, whose
cost functions take negative values.
Moreover, the presented algorithm
allows for efficient implementation by parallelizing both the execution of
the algorithm itself and the two steps of the synthesis scheme described above.
In fact, we present a version of the well-known
Bellman-Ford algorithm \cite{Bellman58,Ford56,Moore59} for directed hypergraphs.
The Bellman-Ford algorithm (on ordinary directed graphs)
not only applies to negative edge weights;
in contrast to the Dijkstra algorithm \cite{Dijkstra59}
it also allows parallelization
relatively easily \cite{BusatoBombieri16}.
As we will show, the latter properties pass over to our novel variant.
Moreover, we present a method to regulate the memory consumption during execution, in the sense that processing speed can be traded for memory.
We particularly show that our algorithm outperforms
in a concrete example
the memory-efficient Dijkstra-like algorithm
recently proposed in \cite{MacoveiciucReissig19}.
We will motivate the requirement of handling negative edge weights, which arise from arbitrary cost functions, by a practical example: automated aerial firefighting with an unmanned aerial vehicle.
The firefighting aircraft shall not only reach
the hot spot as fast as possible
but shall be rewarded for flying over it as long as necessary
in order to release its firefighting water.
The existing theory on reach-avoid specifications
is sufficient to reach the target while optimizing a non-negative cost function but insufficient to formulate a reward mechanism.
We will set up a detailed scenario of aerial firefighting
and thereby show the applicability of our theoretical results and the strong performance of our algorithm.\looseness=-1
The rest of the paper includes
the notation used for presenting our theory
in Section \ref{s:notation}.
In Section~\ref{s:optimalcontrol}
we summarize the existing theory about symbolic optimal control
such that our main contributions,
which are included in Section~\ref{s:algorithm},
can be presented rigorously.
The application of our results to sampled-data systems are discussed in Section \ref{s:application} and
two numerical examples are included in Section~\ref{s:example}.
A conclusion is given in Section \ref{s:conclusion}.
\section{Notation}
\label{s:notation}
The symbols
$\mathbb{R}$, $\mathbb{Z}$ and $\mathbb{R}_+$, $\mathbb{Z}_+$ stand for the set of real numbers, integers and non-negative reals and integers, respectively.
The symbol $\emptyset$ denotes the empty set.
For a map $f \colon A \to \mathbb{R}$ and $c \in \mathbb{R}$ the relations $\leq$, $\geq$ hold point-wise, e.g. $f \geq c$ iff $f(a) \geq c$ for all $a \in A$.
For $f$ and another map $g \colon A \to \mathbb{R}$ we write $f \geq g$ iff $f(a) \geq g(a)$ for all $a\in A$.
The derivative of a map $f$ with respect to
the first argument is denoted by $D_1f$.
For sets $A$ and $B$ we denote
the set of all functions $A \to B$ by $B^A$.
A map with domain $A$ and taking values in the powerset of $B$ is denoted by $A \rightrightarrows B$.
A set-valued map $f \colon A \rightrightarrows B$ is \begriff{strict} iff
$f(a) \neq \emptyset$ for all $a \in A$.
The cardinality of the set $A$ is denoted by $|A|$.
\section{The Notion of System and Optimal Control}
\label{s:optimalcontrol}
The class of optimal control problems
that is considered in this work will
be formalized in this section.
To this end,
we first introduce
the notions of system,
closed loop and
controller. Then we define the notion of
optimal control problem. We use herein the concepts introduced in\cite{ReissigWeberRungger17,ReissigRungger18}.
To summarize in advance,
we investigate subsequently
a closed loop scheme as depicted in Fig.~\ref{fig:closedloop}:
The primary controller is to be synthesized such that
the total cost of the evolution of the closed loop is minimized.
The total cost is obtained
from accumulating running costs and adding
a terminal cost instantaneously
when the primary controller hands over
the control to a secondary controller.
The hand-over at some finite time is
mandatory.
(This scenario was rigorously
defined in \cite{ReissigRungger13}.)
\input{figures/closedloop.tikz}
\subsection{System and Behavior}
In this paper
we use the following notion of system,
which is frequently considered in literature
in the following or similar variants,
e.g.
\cite{Girard10,RunggerZamani16,ReissigWeberRungger17}.
\begin{definition}
A \begriff{system} is a triple
\begin{equation}
\label{e:def:system}
(X, U, F),
\end{equation}
where $X$ and
$U$ are non-empty sets and
$F \colon X \times U \rightrightarrows X$
is strict.
\end{definition}
The first two components of a system $S$ in \eqref{e:def:system}
are called \begriff{state} and \begriff{input space}, respectively.
The third component,
the \begriff{transition function},
defines a dynamic evolution for the system %
through the difference inclusion
\begin{equation}
\label{e:dynamics:discrete}
x(t+1) \in F(x(t),u(t)).
\end{equation}
For example, a dynamical system obtained by discretizing an ordinary differential equation may be formulated as a system \eqref{e:def:system} with time-discrete dynamics \eqref{e:dynamics:discrete}. (See Section \ref{ss:sampledsystem} for the details.)
The \begriff{behavior of $S$ initialized at $p \in X$},
which results from the imposed dynamics, is the set
\begin{equation}
\label{e:behaviour}
\{ (u,x) \in (U\times X)^{\mathbb{Z}_+} \mid p=x(0) \ \wedge \ \forall_{t\in \mathbb{Z}_+}\!\!: \eqref{e:dynamics:discrete} \text{ holds} \}.
\end{equation}
Loosely speaking, the behavior is the set of all input-output signal pairs that can be measured on the lines of the system. Subsequently, we denote \eqref{e:behaviour} by $\mathcal{B}_p(S)$.
\subsection{Controller and Cost functional}
We investigate
the problem of synthesizing
an optimal controller with respect to costs as we detail below.
To begin with, by a \begriff{controller} for a system $S$ of the form \eqref{e:def:system} we mean a strict set-valued map
\[\mu \colon X \rightrightarrows U.\]
Therefore, controllers in this work are static,
do not block, and
do not use information from the past.
By concept, a controller shall restrict
the behavior of the plant to be controlled.
This property is
reflected in our formalism as follows.
We define the \begriff{closed-loop behavior}
of the controller
$\mu$ interconnected with $S$ and
initialized at $p \in X$ by
\begin{equation}
\mathcal{B}_p^\mu(S) = \{ (u,x) \in \mathcal{B}_p(S) \mid \forall_{t\in\mathbb{Z}_+} u(t) \in \mu(x(t))\}.
\end{equation}
Obviously, $\mathcal{B}^\mu_p(S) \subseteq \mathcal{B}_p(S)$, so this formalism is indeed compliant with intuition about controllers.
The objective that we consider
is to minimize
the cost for operating the closed loop.
Specifically, given a
\begriff{terminal} and
\begriff{running cost function} of the form
\begin{subequations}
\label{e:cost}
\begin{align}
G &\colon X \to \mathbb{R} \cup \{ \infty\}, \ \text{ and } \\
g &\colon X \times X \times U \to \mathbb{R} \cup \{\infty\},
\end{align}
\end{subequations}
respectively,
the controller shall minimize the cost functional
\begin{equation*}
J \colon (U \times \{0,1\} \times X)^{\mathbb{Z}_+} \to \mathbb{R} \cup \{-\infty,\infty\}
\end{equation*}
defined as follows:
\begin{equation}
\label{e:costs}
J(u,v,x) = G(x(T)) + \sum_{t = 0}^{T-1} g(x(t),x(t+1),u(t))
\end{equation}
if
$T := \inf v^{-1}(1) < \infty$ and
$J(u,v,x) = \infty$ if $v = 0$.
In words, $v$ is a boolean-valued signal
whose first edge from $0$ to $1$
defines the termination time $T$
of the (primary) controller.
We would like to illustrate the notion of cost
by the following example.
It will be also continued later.
\begin{example}
\label{ex:costs}
Let $(X,U,F)$ be a system with seven states and two inputs,
where $F$ is defined graphically
in Fig.~\ref{fig:example}. To be specific,
$X = \{1,2,\ldots,7\}$, $U = \{ \mathrm{b}, \mathrm{g}\}$ and
the \underline{b}lack edges and the \underline{g}ray dashed edges
define the image of $F$ for the inputs $\mathrm{b}$ and $\mathrm{g}$, respectively.
E.g., $F(1,\mathrm{b}) = \{1\}$, $F(4,\mathrm{g}) = \{1,2\}$.
The running cost function $g$ is also defined graphically by the label of each edge. E.g., $g(1,1,\mathrm{b}) = 1$, $g(2,3,\mathrm{g})=-4$.
The terminal cost function $G$ is defined by $G(x) = x$ for all $x \in X$, i.e. the label of a state in Fig.~\ref{fig:example}
equals exactly the value of the terminal cost of the state.\\
We consider the state and input signal
$x = (7,4,3,3,\ldots)$ and
$u = (\mathrm{b},\mathrm{b},\ldots)$, and the termination signals
$v_0 := (1,\ldots)$,
$v_1 := (0,1,\ldots)$ and
$v_2 := (0,0,1,\ldots)$.
Then, $J(u,v_0,x) = G(7) = 7$, $J(u,v_1,x) = G(4) + g(7,4,\mathrm{b}) = 4 + 1 = 5$ and
$J(u,v_2,x) = G(3) + g(7,4,\mathrm{b}) + g(4,3,\mathrm{b}) = 3 + 1 - 2 = 2$.
Analogously, for $y=(7,5,1,1,\ldots)$ it holds that
$J(u,v_1,y) = 6$ and $J(u,v_2,y)=1$.
\end{example}
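For concreteness, the following self-contained Python sketch (our own illustration; the dictionary encoding of $g$ and $G$ is ours) reproduces the numbers of Example \ref{ex:costs}:
\begin{verbatim}
import math

G = {x: x for x in range(1, 8)}                  # terminal cost G(x) = x
g = {(7,4,'b'): 1, (7,5,'b'): 1, (4,3,'b'): -2,  # edge labels of Fig. 1
     (5,1,'b'): -1, (1,1,'b'): 1, (3,3,'b'): 1}

def J(u, v, x):
    # termination time T = first t with v(t) = 1; J = infinity if v = 0
    T = next((t for t, vt in enumerate(v) if vt == 1), None)
    if T is None:
        return math.inf
    return G[x[T]] + sum(g[(x[t], x[t+1], u[t])] for t in range(T))

x = [7, 4, 3, 3]; u = ['b', 'b', 'b']
print(J(u, [0, 0, 1], x))                        # -> 2, as in the example
\end{verbatim}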
\begin{figure}
\centering
{
\newcommand{2.5}{2.5}
\begin{tikzpicture}[>=latex,scale=1.3]
\node[draw=black,circle,minimum size=.6cm] (p1) at (0,.5) {$1$};
\node[draw=black,circle,minimum size=.6cm] (p2) at (0,-.5) {$2$};
\node[] (p2a) at (0+.05,-.65) {};
\node[] (p2b) at (0-.05,-.65) {};
\node[draw=black,circle,minimum size=.6cm] (p3) at (0,-1.5) {$3$};
\node[] (p3a) at (0+.05,-1.35) {};
\node[] (p3b) at (0-.05,-1.35) {};
\node[darkgray] (p3c) at (0+.2,-1.08) {$5$};
\node[darkgray] (p3d) at (0-.3,-.9) {$-4$};
\node[draw=black,fill=white,circle,minimum size=.6cm] (p4) at (-2.5,.5) {$4$};
\node[draw=black,fill=white,circle,minimum size=.6cm] (p5) at (-2.5,-.5) {$5$};
\node[draw=black,fill=white,circle,minimum size=.6cm] (p6) at (-2.5,-1.5) {$6$};
\node[draw=black,fill=white,circle,minimum size=.6cm] (p7) at (-2.5-2.5,-.5) {$7$};
\draw[->,thick] (p7) to node[above] {\small $1$} (p4) ;
\draw[->,thick] (p7) to node[above] {\small $1$} (p5) ;
\draw[->,thick,darkgray,thick,densely dashed] (p7) to node[below] {\small $1$} (p6) ;
\draw[->,thick] (p4) to node[left] {\small $-2$} (p3) ;
\draw[->,thick,darkgray,densely dashed] (p4) to node[above] {\small $-1$} (p1) ;
\draw[->,thick,darkgray,densely dashed] (p4) to node[below] {\small $-1$} (p2) ;
\draw[->,thick] (p5) to node[above] {\small $-1$} (p1) ;
\draw[->,thick,darkgray,densely dashed] (p5) to node[below] {\small $-4$} (p3) ;
\draw[->,thick] (p6) to node[left] {\small $0$} (p5) ;
\draw[->,thick,darkgray,densely dashed] (p6) to node[below] {\small $-1$} (p3) ;
\draw[->,thick,darkgray,densely dashed] (p1) to node[right] {\small $2$} (p2) ;
\draw[->,thick,darkgray,densely dashed] (p3a) -- (p2a) ;
\draw[->,thick,darkgray,densely dashed] (p2b) -- (p3b) ;
\path[]
(p1) edge [loop right,thick] node[right] {$1$} (p1)
(p2) edge [loop right,thick] node[right] {$1$} (p2)
(p3) edge [loop right,thick] node[right] {$1$} (p3);
\end{tikzpicture}
\vspace*{-.5\baselineskip}
}%
\caption{\label{fig:example}System and cost functions in Examples \ref{ex:costs}, \ref{ex:closedloopperformance} and \ref{ex:algorithm}.}
\vspace*{-1\baselineskip}
\end{figure}
To a controller $\mu$ we associate a
\begriff{closed-loop performance initialized at $p \in X$},
which is the value
\begin{equation}
\label{e:closedloopperformance}
L(p,\mu) = \sup_{(u,x) \in \mathcal{B}_p^\mu(S)}\inf\{ J(u,v,x) \mid v \in \{0,1\}^{\mathbb{Z}_+}\}.
\end{equation}
Roughly speaking, this quantity is the \emph{worst-case} cost for the evolution of the closed loop with controller $\mu$ for the \emph{best} possible hand-over time $T = \inf v^{-1}(1)$.
We would like to illustrate the closed-loop performance of a controller by continuing Example \ref{ex:costs}.
\begin{example}
\label{ex:closedloopperformance}
Let the system $S:=(X,U,F)$, the
cost functions $G$, $g$ and $u$, $v_2$, $x$, $y$
be as in Example \ref{ex:costs}.
We consider the controller
$\mu \colon X \rightrightarrows U$
defined by $\mu(x) = \{\mathrm{b}\}$ for all $x \in X$.
Then $\mathcal{B}_7^\mu(S) = \{(u,x),(u,y)\}$.
Hence, using the results in Example \ref{ex:costs}
we conclude that
the closed-loop performance initialized at $7$
satisfies $L(7,\mu) = \max\{J(u,v_2,x),J(u,v_2,y)\} = \max\{2,1\} = 2$.
Note that a termination time greater than $2$ increases the cost by $1$ per additional time step since $g(1,1,\mathrm{b}) = g(3,3,\mathrm{b}) = 1$.
\end{example}
\subsection{Optimal Control Problem}
The previously defined objects in \eqref{e:def:system} and \eqref{e:cost}
can be grouped together in a
compact form \cite{ReissigRungger13}.
\begin{definition}
\label{def:ocp}
Let $S$ be a system of the form \eqref{e:def:system}.
An \begriff{optimal control problem} (on $S$)
is a 5-tuple
\begin{equation}
\label{e:def:ocp}
(X,U,F,G,g)
\end{equation}
where $G$ and $g$ are as in \eqref{e:cost}.
\end{definition}
Finally, we relate the solution of an optimal control problem $\Pi$ of the form \eqref{e:def:ocp} to the so-called \begriff{value function}.
The latter is the map
$V \colon X \to \mathbb{R} \cup \{-\infty,\infty\}$ defined
by
\begin{equation}
\label{e:valuefunction}
V(p) = \inf \{ L(p,\nu) \mid \nu \colon X \rightrightarrows U \text{ strict} \}.
\end{equation}
We say that
$\mu \colon X \rightrightarrows U$ \begriff{realizes} $V$
if $V = L(\cdot, \mu)$. If additionally $V(p)$ is finite for all $p \in A$, where $A \subseteq X$,
then $\mu$ is \begriff{optimal} for $\Pi$ \begriff{on} $A$.
(Here, optimal controller and optimal termination
are formally separated, see \eqref{e:closedloopperformance}. However, as we will see later, both naturally go hand in hand.)
The focus of this work is on how to solve
optimal control problems of the form \eqref{e:def:ocp}
algorithmically.
Therefore, we shall review some important
results on optimal control problems.
In fact, we will use \cite[Th.~IV.2]{Reissig16}
in the proofs of our main results later.
It states that the value function is the maximal fixed point of the \begriff{dynamic programming operator}
$P \colon \intcc{-\infty,\infty}^X \to \intcc{-\infty,\infty}^X$ defined by
\begin{equation}
\label{e:dynamicprogramming}
P(W)(x) = \min \left \{ G(x), \inf_{u \in U} \sup_{y \in F(x,u)} \big( g(x,y,u) + W(y) \big) \right \}.
\end{equation}
The precise statement is as follows.
\begin{theorem}
\label{t:dynamicprogramming}
Let $\Pi$ be an optimal control problem of the form \eqref{e:def:ocp} and
let $V$ be the value function of $\Pi$ defined in \eqref{e:valuefunction}.
Then $V$ is the maximal fixed point of the functional defined in \eqref{e:dynamicprogramming}, i.e. $P(V) = V$ and if $W\leq P(W)$ then $W\leq V$ for every $W \in \intcc{-\infty,\infty}^X$.
\end{theorem}
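For later reference, \eqref{e:dynamicprogramming} admits a direct, unoptimized transcription. The following Python sketch (our illustration) assumes that $F$, $g$ and $G$ are given as dictionaries and that $F(x,u) \neq \emptyset$ for all $(x,u)$:
\begin{verbatim}
import math

def P(W, X, U, F, G, g):
    # stop (pay G(x)) or continue (worst successor under the best input)
    def relax(x):
        cont = min(max(g[(x, y, u)] + W[y] for y in F[(x, u)])
                   for u in U)
        return min(G[x], cont)
    return {x: relax(x) for x in X}
\end{verbatim}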
We would like to emphasize
that the previous setup can be
easily rephrased to
the terminology of directed hypergraphs and
the search for optimal (hyper-)paths.
In fact, we may identify a system (with the input space being a singleton)
with a directed hypergraph,
where every hyperarc points to possibly several vertices \cite{AusielloDAtriSacca86}.
This is the very reason that
graph-theoretical algorithms can be used
as a computational means to obtain controllers.
Subsequently, we present such an algorithm.
\section{Main Contributions}
\label{s:algorithm}
In this section, we present our main contribution,
which is an algorithm that determines the value function and
a realizing controller under weaker assumptions
on the given optimal control problem than previously known.
Specifically,
for the special case
that terminal and running cost functions
satisfy $G,g\geq 0$ a generalized
Dijkstra algorithm was presented in \cite{ReissigRungger18}
to solve the optimal control problem.
(Various versions of the Dijkstra algorithm were used also in other works like \cite{GrueneJunge08,RunggerStursberg12,Reissig11,WeberReissig13}.)
Besides the restriction to non-negative cost functions, the Dijkstra algorithm has another disadvantage according to the prevailing opinion in the literature:
it cannot be conveniently parallelized due to the priority queue involved.
The novel algorithm we present
below requires neither the non-negativity of $G$ nor of $g$, and
it can be easily executed in parallel.
In the special case of ordinary,
directed graphs
the novel algorithm reduces to
the classical Bellman-Ford algorithm \cite{Bellman58}
combined with ideas of Yen \cite{Yen70} and
Cormen et al. \cite{CormenLeisersonRivestStein09}.
Using the techniques of the latter works,
a memory- and time-efficient implementation can be realized.
The classical Bellman-Ford algorithm
can be executed with a high degree of
parallelism \cite{DavidsonBaxterGarlandOwens14,BusatoBombieri16},
and our algorithm inherits this property.
We discuss implementation details in Section \ref{ss:algorithm:implementation}.
In Section \ref{ss:algorithm},
we present the algorithm and its properties.
\subsection{Algorithm}
\label{ss:algorithm}
In the statement of Algorithm \ref{alg:BellmanFord} the set
\begin{equation}
\label{e:pred}
\operatorname{pred}(x,u) = \{ y \in X \mid x \in F(y,u)\}
\end{equation}
is used, which may be seen as the preimages of $F$ or,
equivalently, as the predecessors in the hypergraph defined by $F$.
(In \eqref{e:pred}, $F$ and $X$ are as in Algorithm \ref{alg:BellmanFord}.)
\begin{algorithm}
\caption{\label{alg:BellmanFord}Generalized Bellman-Ford-Yen Algorithm}
\begin{algorithmic}[1]
\Input{Optimal control problem $(X,U,F,G,g)$}
\State{$\mathcal{F}_1 \gets \emptyset$\hfill{}{// ``Active" Frontier \cite{CormenLeisersonRivestStein09}}}
\State{$\mathcal{F}_2 \gets \emptyset$\hfill{}{// ``Upcoming" Frontier \cite{CormenLeisersonRivestStein09}}}
\ForAll{$x \in X$}
\State{\label{alg:BF:init:V}$W(x) \gets G(x)$}
\State{$\mu(x) \gets U$}
\If{$G(x) < \infty$}
\State{$\mathcal{F}_1 \gets \mathcal{F}_1 \cup ( \cup_{u \in U } \operatorname{pred}(x,u) )$}
\EndIf{}
\EndFor{\label{alg:init:endfor}}
\State{$i \gets 0$}
\While{\label{alg:BF:while}$\mathcal{F}_1 \neq \emptyset$ \textbf{and} $i < |X|$}
\For {\label{alg:BF:for}$(x,u) \in \mathcal{F}_1 \times U$}
\State{\label{alg:BF:sup}$d \gets \sup_{y \in F(x,u)} g(x,y,u) + W(y) $}
\If{$d < W(x)$}
\State{\label{alg:BF:assign}$W(x) \gets d$}
\State{\label{alg:BF:control}$\mu(x) \gets \{u\}$}
\State{\label{alg:BF:F2}$\mathcal{F}_2 \gets \mathcal{F}_2 \cup ( \cup_{\tilde u \in U } \operatorname{pred}(x,\tilde u) )$}
\EndIf{}
\EndFor{\label{alg:BF:endfor}}
\State {$\mathcal{F}_1 \gets \mathcal{F}_2$\hfill{}{// Swap frontiers}}
\State{$\mathcal{F}_2 \gets \emptyset $}
\State{$i \gets i + 1$}
\EndWhile{}
\State{\Return{$W$, $\mu$}}
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:BellmanFord} basically implements a fixed-point iteration
according to \eqref{e:dynamicprogramming} using additionally some heuristics to improve efficiency, which we adopt from improvements on
the classical Bellman-Ford algorithm \cite{BannisterEppstein12}.
Firstly, Yen \cite{Yen70} observed that only a certain subset of the predecessors
needs to be processed iteratively in the while-loop -- and not all elements of $X$.
Secondly,
in \cite{CormenLeisersonRivestStein09} the two sets $\mathcal{F}_1$ and
$\mathcal{F}_2$, which are called \begriff{frontiers},
have been introduced
replacing the queue proposed in \cite{Yen70}.
Lastly, the second condition in line \ref{alg:BF:while}
implements negative cycle detection.
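In the same dictionary encoding as before, a compact Python transcription of Algorithm \ref{alg:BellmanFord} may look as follows (a sketch; the C implementation evaluated in Section \ref{s:example} is organized differently):
\begin{verbatim}
import math

def bellman_ford_yen(X, U, F, G, g, pred):
    W  = {x: G[x] for x in X}
    mu = {x: set(U) for x in X}
    F1 = {q for x in X if G[x] < math.inf
            for u in U for q in pred[(x, u)]}
    F2, i = set(), 0
    while F1 and i < len(X):
        for x in F1:
            for u in U:
                d = max(g[(x, y, u)] + W[y] for y in F[(x, u)])
                if d < W[x]:
                    W[x], mu[x] = d, {u}
                    F2 |= {q for uu in U for q in pred[(x, uu)]}
        F1, F2, i = F2, set(), i + 1
    # a nonempty F1 at this point signals a suspected negative cycle
    return W, mu
\end{verbatim}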
\begin{theorem}
\label{t:BellmanFord}
Let $S$ be a system of the form \eqref{e:def:system},
where $X$ and $U$ are finite. Let $\Pi$ be an optimal control problem of the form \eqref{e:def:ocp} on $S$. If Algorithm \ref{alg:BellmanFord} terminates with $\mathcal{F}_1 = \emptyset$ then
$W$ equals the value function of $\Pi$ and $\mu$ realizes $W$.
\end{theorem}
Before we give the proof of the theorem and discuss the time complexity of the algorithm, we would like to briefly illustrate its execution in a simple example.
\begin{example}
\label{ex:algorithm}
Let $(X,U,F,G,g)$ be the optimal control problem
defined by means of Example \ref{ex:costs}.
We apply Algorithm \ref{alg:BellmanFord} to it.
After initialization (line \ref{alg:init:endfor})
the frontier $\mathcal{F}_1$
equals $X$ as every state is a predecessor
of some state.
Starting the for-loop
in line \ref{alg:BF:for} with
$(x,u)=(6,\mathrm{b})$,
lines \ref{alg:BF:assign} and \ref{alg:BF:control}
yield
$W(6)=0 + W(5) = 5$ and
$\mu(6) = \{\mathrm{b}\}$.
(Note that the processing order for $\mathcal{F}_1$ is irrelevant.)
Then $\mathcal{F}_2 = \operatorname{pred}(6,\mathrm{g}) = \{7\}$
in line \ref{alg:BF:F2}.
The next iteration with
$(x,u) = (6,\mathrm{g})$ results in the changes
$W(6) = -1 + W(3) = 2$ and
$\mu(6) = \{\mathrm{g}\}$.
Executing the for-loop in line \ref{alg:BF:for} again for
$(x,u)=(4,\mathrm{g})$ yields
$W(4) = -1 + \max\{ W(1) , W(2) \} = -1 + \max\{1,2\} = 1$ as
$F(4,\mathrm{g}) = \{1,2\}$.
Analogously, we update $W(5) = W(2) = -1$ and
when exiting the for-loop (line \ref{alg:BF:endfor})
all but the state $5$ need to be processed again.
In fact, $\mathcal{F}_2 = X \setminus \{5\}$
as the $W$-values of
$\{1\} = F(5,\mathrm{b})$ and
$\{3\} = F(5,\mathrm{g})$ do not change.
The following execution of the while-loop (line \ref{alg:BF:while})
is therefore done with $\mathcal{F}_1 = X \setminus \{5\}$.
Continuing the execution as illustrated until $\mathcal{F}_1 = \emptyset$ we finally obtain that
the optimal controller $\mu$ satisfies $\mu(6)=\{\mathrm{b}\}$ and $\mu(s)=\{\mathrm{g}\}$ for $s \in \{2, 4, 5, 7\}$.
For states $1$ and $3$ the image of $\mu$ is $U$, which may be interpreted as the command to hand over control.
\end{example}
\begin{proof}[Proof of Theorem \ref{t:BellmanFord}]
Denote by $V$ the value function of $\Pi$ and let $P$ be as in \eqref{e:dynamicprogramming}.
We shall prove $W = V$.
Assume that the inequality
$\tilde V(x) > P(\tilde V)(x)$
holds for some $x \in X$, where $\tilde V$ denotes the intermediate value of $W$
at the end of the for-loop in line \ref{alg:BF:endfor} at some iteration.
Then there exists $u\in U$ such that
$\tilde V(y) < \infty$ for all $y \in F(x,u)$.
So $x \in \mathcal{F}_2$, which implies $\tilde V(x) = P(\tilde V)(x)$ after the next iteration.
Hence, on termination with $\mathcal{F}_1 = \emptyset$, we have $W \leq P(W)$ and therefore $W \leq V$ by Theorem \ref{t:dynamicprogramming}.
Since $W \geq P(W) \geq P(V) = V$ by lines \ref{alg:BF:init:V}, \ref{alg:BF:assign}
and Theorem \ref{t:dynamicprogramming}
we conclude $W \geq V$ and therefore $W = V$.
The claim on $\mu$ is obvious.
\end{proof}
\begin{theorem}
Algorithm \ref{alg:BellmanFord}
can be implemented
to run with time complexity $\mathcal{O}(nm)$,
where $n = |X|$ and
$m = \sum_{(x,u) \in X \times U} | F(x,u) |$.
\end{theorem}
\begin{proof}
The while-loop in Algorithm \ref{alg:BellmanFord} is executed at most $n$ times and the nested for-loop at most $m$ times, which proves the claim.
\end{proof}
\subsection{Implementation technique}
\label{ss:algorithm:implementation}
Section \ref{s:algorithm} is concluded with some notes
on how to implement Algorithm \ref{alg:BellmanFord} efficiently.
The frontiers $\mathcal{F}_1$ and $\mathcal{F}_2$
can be realized as FIFO queues
such that their lengths never exceed $|X|$.
As for parallelization,
it is readily seen that the for-loop in lines \ref{alg:BF:for}-\ref{alg:BF:endfor} can be executed in parallel where only the reading (resp. writing) operation on the array $W$ in line \ref{alg:BF:sup} (resp. line \ref{alg:BF:assign}) needs to be thread-safe.
In this case, every thread uses its own local
frontier $\mathcal{F}_2$ to avoid further communication among the threads.
All local frontiers are finally merged to obtain $\mathcal{F}_1$.
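A minimal sketch of one such parallel pass over $\mathcal{F}_1$ is given below (our illustration; it relies on the atomicity of single dictionary operations in CPython rather than explicit locks, and each state is assigned to exactly one worker so that every entry of $W$ has a single writer):
\begin{verbatim}
from concurrent.futures import ThreadPoolExecutor

def parallel_sweep(F1, W, mu, U, F, g, pred, workers=4):
    chunks = [list(F1)[k::workers] for k in range(workers)]
    def work(chunk):
        local = set()                    # thread-local upcoming frontier
        for x in chunk:
            for u in U:
                d = max(g[(x, y, u)] + W[y] for y in F[(x, u)])
                if d < W[x]:
                    W[x], mu[x] = d, {u}
                    local |= {q for uu in U for q in pred[(x, uu)]}
        return local
    with ThreadPoolExecutor(workers) as ex:
        return set().union(*ex.map(work, chunks))  # merged frontier
\end{verbatim}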
Memory consumption can be controlled quite easily.
Firstly, only
$\mathcal{F}_1$,
$\mathcal{F}_2$,
$W$ and
$\mu$ need
to reside in memory throughout execution.
Keeping the images of
$F$ and $\operatorname{pred}$ in memory
throughout execution normally results
in increased processing speed.
Reading the data out of memory
is typically faster than computing the images.
On the other hand,
these data contribute significantly to the memory consumption.
Therefore, if memory runs short,
recomputing the images of $F$ and $\operatorname{pred}$ on demand
allows the execution to continue.
Needless to say,
the computation method for $F$ and $\operatorname{pred}$ depends
on the representation of the input data.
\section{Application to Sampled Systems}
\label{s:application}
Within the framework of symbolic optimal control
Algorithm \ref{alg:BellmanFord} can be used
for synthesizing near-optimal controllers for
sampled-data systems with continuous state space.
Specifically, the sampled version of a dynamical system
with continuous-time continuous-state dynamics
\begin{equation}
\label{e:ode}
\dot x(t) = f(x(t),u(t))
\end{equation}
can be considered.
In \eqref{e:ode},
$f$ is a function
$\mathbb{R}^n \times \bar U \to \mathbb{R}^n$, where
$\bar U \subseteq \mathbb{R}^m$,
$n,m \in \mathbb{N}$.
A brief overview of the synthesis method
is given below
in preparation
for our experimental
results in Section \ref{s:example}.
More concretely, in Section \ref{ss:sampledsystem}
\begriff{sampled systems} are formalized and then
\begriff{discrete abstractions}
are introduced.
The technique to implement a synthesis algorithm
for control problems on sampled systems
is outlined in Section \ref{ss:implementation}.
A comprehensive discussion of symbolic control
is beyond the scope
of this paper.
The interested reader may refer to
\cite{ReissigWeberRungger17} for further details.
\subsection{Symbolic near-optimal control of sampled systems}
\label{ss:sampledsystem}
In this work, we assume in \eqref{e:ode} that $f(\cdot,\bar u)$
is locally Lipschitz-continuous
for all $\bar u \in \bar U$ and
$\operatorname{dom}\varphi = \mathbb{R}_+ \times \mathbb{R}^n \times \bar U$.
Here and subsequently,
the symbol $\varphi$ is used to denote
the general solution of \eqref{e:ode}, i.e.
$\varphi(0,x,u) = x$ and
$D_1\varphi(t,x,u) = f( \varphi(t,x,u), u)$
for all $(t,x,u) \in \operatorname{dom}\varphi$.
We may formulate the discretization of \eqref{e:ode}
with respect to a chosen sampling time
$\tau$ as below \cite{ReissigWeberRungger17}.
\begin{definition}
\label{def:sampledsystem}
Let $\tau > 0$.
A system $S$ of the form \eqref{e:def:system} with
$X = \mathbb{R}^n$,
$U = \bar U$
is called \begriff{sampled system} associated with $f$ and
$\tau$ if
$F(x,u) = \{ \varphi(\tau,x,u) \}$
for all $(x, u) \in X \times U$.
\end{definition}
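Numerically, evaluating the transition function of a sampled system amounts to solving one initial value problem per state-input pair, e.g. with SciPy (a sketch that ignores the integration error; the rigorous enclosures referenced in Section \ref{ss:implementation} account for it):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def sampled_F(f, tau):
    # F(x,u) = { phi(tau,x,u) }, cf. Definition 3
    def F(x, u):
        sol = solve_ivp(lambda t, s: f(s, u), (0.0, tau),
                        np.asarray(x, dtype=float))
        return {tuple(sol.y[:, -1])}
    return F
\end{verbatim}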
Given an optimal control problem $\Pi$ of the form \eqref{e:def:ocp},
where $S=(X,U,F)$ is a sampled system,
the theory developed in \cite{ReissigRungger18} yields
a method, outlined below,
to compute near-optimal controllers.
Here, we say a controller $\mu$ for $\Pi$
is \begriff{near-optimal on $A \subseteq X$},
if its closed-loop performance
\eqref{e:closedloopperformance} is finite on $A$,
i.e. $L(p,\mu) < \infty$ for all $p \in A$.
The term ``near-optimal" can be indeed
justified since according to the theory of
symbolic optimal control \cite{ReissigRungger18}
the considered synthesis method implies
a sequence of functions
converging to the
value function of the problem.
It is beyond the scope of this paper
to investigate the convergence properties
in detail.
In the first step of the aforementioned method
a certain auxiliary optimal control problem
$\Pi' = (X',U',F',G',g')$ is defined.
By virtue of its special properties,
which we explain later,
the theory ensures that
a near-optimal controller
for the actual control problem is found
\emph{if} $\Pi'$ can be solved.
The latter statement is the
key point of this synthesis method:
$\Pi'$ is chosen having
\emph{discrete} problem data and is solved in the second step.
So, algorithms such as the one we have presented
can be used to eventually obtain a controller
for the actual control problem.
The structure of the
controller for the actual problem
is explained after the discussion of $\Pi'$.
The key object in $\Pi'$ is
the so-called discrete abstraction.
A discrete abstraction is, roughly speaking,
an approximation of the system dynamics:
The continuous state space of the given sampled system
is quantized by means of a cover,
a subset of the input space is picked and
transitions between the elements of the cover
for the few chosen inputs ``cover" the transition
function of the sampled system.
The precise definition of a discrete abstraction is the following \cite{ReissigWeberRungger17}.
\begin{definition}
\label{def:abstraction}
Let $S$ be a sampled system and
let the system $S' = (X',U',F')$ satisfy:
\begin{enumerate}
\item $X'$ is a cover of $X$ by non-empty sets;
\item $U' \subseteq U$;
\item
\label{def:abstraction:frr}
Let $\Omega_i \in X'$, $i \in \{1,2\}$, and $u \in U'$.
If $\Omega_2 \cap F(\Omega_1,u) \neq \emptyset$ then $\Omega_2 \in F'(\Omega_1,u)$.
\end{enumerate}
Then $S'$ is called a \begriff{discrete abstraction} of $S$.
\end{definition}
The correctness of the previous method is based on the following ``overapproximation" properties of $\Pi'$:
\begin{enumerate}
\item $G(x) \leq G'(\Omega)$, whenever $x \in \Omega \in X'$;
\item $g(x,x',u) \leq g'(\Omega,\Omega',u)$, whenever $u \in U'$, $x \in \Omega \in X'$ and $x' \in \Omega' \in X'$;
\item The system $(X',U',F')$ is a discrete abstraction of $S$.
\end{enumerate}
The structure of the controller $\mu$ for $S$ is simple:
It is composed of the optimal controller for $\Pi'$ and
the quantizer induced by the cover $X'$.
See Fig.~\ref{fig:controller}.
\begin{figure}
\centering
\input{figures/controller.tikz}
\caption{\label{fig:controller}Structure of the controller $X \rightrightarrows U$
for optimal control problem $\Pi$ in Section~\ref{ss:sampledsystem}. The controller is the composition of $\mu$ and $Q$, where
$\mu$ is an optimal controller for the auxiliary problem $\Pi'$ and the quantizer $Q$ is given through the property $\Omega \in Q(x) \Leftrightarrow x \in \Omega$.}
\end{figure}
\subsection{Implementation of Symbolic Optimal Control}
\label{ss:implementation}
In order to solve optimal control problems in practice,
sophisticated choices for the ingredients
of the discrete abstraction are required.
In this paper, we implement
the method of \cite{ReissigWeberRungger17,Weber18}
in the following sense.
Let $S$ be the sampled system associated with $f$ and $\tau$.
A discrete abstraction $(X',U',F')$ of $S$
is constructed as follows.
The set $X'$
consists of a finite number of compact hyper-rectangles and
a finite number of unbounded sets such that $X'$ is a cover of $X$.
The compact sets in $X'$ are translated copies of
\begin{equation*}
\intcc{0,\eta_1} \times \dots \times \intcc{0,\eta_n}
\end{equation*}
for some parameter $\eta \in \mathbb{R}^n_+$.
Typically,
these compact elements cover the ``operating range"
of the controller to be synthesized, whereas the other elements
catch ``overflows".
Some finite subset $U' \subseteq U$ is chosen.
Condition \ref{def:abstraction:frr}) in Definition \ref{def:abstraction}
is realized by overapproximating the attainable sets
$\varphi(\tau,\Omega, \bar u)$ for
$(\Omega,\bar u) \in X'\times U'$
by hyper-rectangles \cite{KapelaZgliczynski09}.
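Given a hyper-rectangular enclosure $\intcc{a,b} \supseteq \varphi(\tau,\Omega,\bar u)$, the abstract transition $F'(\Omega,\bar u)$ then consists of all cells meeting this rectangle. A sketch of the required bookkeeping, assuming the grid of cells is anchored at the origin (function names are ours):
\begin{verbatim}
import numpy as np

def cells_intersecting(a, b, eta):
    # indices of all cells (translated copies of [0,eta_1] x ... x
    # [0,eta_n]) that meet the hyper-rectangle [a, b]
    lo = np.floor(np.asarray(a) / np.asarray(eta)).astype(int)
    hi = np.floor(np.asarray(b) / np.asarray(eta)).astype(int)
    axes = [np.arange(l, h + 1) for l, h in zip(lo, hi)]
    grid = np.stack(np.meshgrid(*axes, indexing='ij'), -1)
    return {tuple(k) for k in grid.reshape(-1, len(eta))}
\end{verbatim}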
Applying Algorithm \ref{alg:BellmanFord}
to discrete abstractions is straightforward
by observing that
the algorithm remains correct if
$\operatorname{pred}$
is replaced by a superset of it.
Such a set can be obtained for a sampled system
by integrating the time-reverse dynamics
\begin{equation}
\dot x(t) = -f(x(t),u(t))
\end{equation}
and
overapproximating attainable sets accordingly \cite{MacoveiciucReissig19}.
\section{Examples}
\label{s:example}
We discuss a detailed problem about aerial firefighting
in Section \ref{ss:firefighting} and compare the performance
of our algorithm to the Dijkstra-like algorithm of \cite{MacoveiciucReissig19} in Section \ref{ss:landing}.
\subsection{Automated aerial firefighting}
\label{ss:firefighting}
Until now, only a few scientific works have discussed automated aerial firefighting.
The works \cite{ImdoukhEtAl17, QinEtAl16} focus on quadcopters and
their properties for firefighting.
The work \cite{HarikumarSenthilnathSundaram18} discusses
a leader-follower firefighting strategy based on engineering experience.
At the same time,
automated aerial firefighting would drastically increase the efficiency
of the firefighting task
while reducing the risk for humans.
The reasons are diverse: One main difficulty
in extinguishing wildfires
is that aerial firefighting
in darkness is typically not possible,
for reasons of pilot safety.
So a considerable amount of time is lost.
In any case,
firefighting pilots take a huge risk
in those operations due to turbulence,
smoke, other participating vehicles and the like.
Therefore, firefighting unmanned aerial vehicles
in combination with
a sophisticated firefighting strategy
would reduce risks for humans.
As a next step towards automated
aerial firefighting,
we apply in this section
our novel results of Section \ref{s:algorithm}
to a simplified scenario of a wildfire:
An aircraft fights the fire
by releasing its water tank over the hot spot.
It is not our purpose
to present a fully realistic scenario
that includes real problem data.
Nevertheless, the presented scenario is scalable to real data.
Moreover, the full potential of symbolic controller synthesis
is not exploited here, for the sake of a clear presentation.
For example,
uncertainties due to wind or
sensor noise could be
easily taken into account
\cite[Sect.~VI-B]{ReissigWeberRungger17}
but would add
some extra notation.
\subsubsection{Problem definition}
We consider a fixed-wing aircraft,
where we model
a) the planar motion of the aircraft,
b) instantaneous weight loss due to water release.
In fact, we consider equations of motion given by \eqref{e:ode}
with $f_1$ (``empty water tank"), respectively $f_2$ (``filled water tank"),
in place of $f$.
Here,
$f_\sigma \colon \mathbb{R}^4 \times \bar U \to \mathbb{R}^4$ with
$\sigma \in \{1, 2\}$ is given by
\begin{equation}
\label{e:aircraft}
f_\sigma(x,u) = \begin{pmatrix}
x_4 \cdot \cos(x_3) \\
x_4 \cdot \sin(x_3) \\
m_\sigma^{-1} \cdot p_L \cdot x_4 \cdot \sin(u_2) \\
m_\sigma^{-1}(u_1 - p_D \cdot x_4^2)
\end{pmatrix},
\end{equation}
where $\bar U$ and the symbols in \eqref{e:aircraft} are explained in Tab.~\ref{tab:aircraft}.
This point-mass model for a fixed-wing aircraft is
widely used in the literature, e.g. \cite[Sect.~3]{GloverLygeros04}.
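In code, the right-hand side of \eqref{e:aircraft} reads as follows (our transcription, with the numerical values of Tab.~\ref{tab:aircraft} filled in):
\begin{verbatim}
import numpy as np

def f_sigma(x, u, m):
    # m = m_1 = 4250 (empty tank) or m = m_2 = 6250 (filled tank)
    p_L, p_D = 85.0, 1.8
    return np.array([x[3] * np.cos(x[2]),
                     x[3] * np.sin(x[2]),
                     p_L * x[3] * np.sin(u[1]) / m,
                     (u[0] - p_D * x[3] ** 2) / m])
\end{verbatim}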
\begin{figure*}
\centering
\input{figures/scenario}
\vspace*{-3.5em}
\caption{\label{fig:scenario}Aerial firefighting scenario from Section \ref{ss:firefighting}. A closed-loop trajectory is illustrated, which starts at
$p_1 = (840,140,0^\circ,53)$
and ends at
$p_3 \approx (319, 130, 2.43^\circ, 50.4)$.
At the point $p_2$ ($\odot$) the controller for $\Pi_2$ hands over the control to the controller for $\Pi_1$.
The trajectory segment indicated by the triangles belongs to a trajectory that would result if for both $\Pi_1$ and $\Pi_2$ the running cost were equal to $g_1$, i.e. without the reward mechanism.
}
\end{figure*}%
\begin{table}
\centering
\begin{tabular}{|cl|}
\hline
Symbol & Meaning and numerical values where applicable \\
\hline
\hline
$(x_1,x_2)$ & Planar position of aircraft \\
$x_3$ & Heading of aircraft \\
$x_4$ & Velocity of aircraft \\
\hline
$u_1$ & Thrust of aircraft \\
$u_2$ & Bank angle of aircraft \\
$\bar U$ & Admissible range of thrust and bank angle; \\
& $\bar U = \intcc{0, 18\!\cdot\!10^3} \times \intcc{-40^\circ , 40^\circ}$ \\
\hline
$m_1$ $[m_2]$ & Mass of aircraft without [with] its payload; \\
& $m_1 = 4250$, $m_2 = 6250$ \\
$p_D$ & Coefficient (rel. to drag) in aircraft dynamics; $p_D = 1.8$\\
$p_L$ & Coefficient (rel. to lift) in aircraft dynamics; $p_L = 85$\\
\hline
\end{tabular}
\caption{\label{tab:aircraft} Symbols used in \eqref{e:aircraft}. }
\vspace*{-1.5\baselineskip}
\end{table}%
The specification is to fly the aircraft to the fire
after departure from the airfield,
fly over the fire for some ``optimal" time and
fly back to the airfield.
Obstacles in the area must be avoided and
the aircraft must be flown within allowed limits.
The relevant sets of the scenario
are given in Tab.~\ref{tab:specification}.
See also Fig.~\ref{fig:scenario}.
\begin{table}
\centering
\begin{tabular}{|lll|}
\hline
Symbol & Value & Meaning\\
\hline \hline
$X_\textrm{scen}$ & $\intcc{0,2500} \times \intcc{0,800}$ & Spatial mission area \\
$\bar X$ & $X_\textrm{scen} \times \mathbb{R} \times \intcc{50,85}$ & Operating range of controllers \\ \hline
$A_\textrm{rwy}$ & $\intcc{300,900} \times\intcc{100,180}$ & Runway of the airfield \\
$A_\textrm{land}$ & $\intcc{-10^\circ,10^\circ} \times \intcc{50,55}$ & Admissible heading and ve- \\
& & locity of aircraft for landing \\
$A_\textrm{fire}$ & $\subseteq \mathbb{R}^2$, see Fig.~\ref{fig:scenario} & Water release region (fire) \\
$A_\textrm{drop}$ & $\mathbb{R} \times \intcc{53,56}$ & Admissible heading and veloc- \\
& & ity of aircraft for water release \\
\hline
$A_\textrm{nofly}$ & $\intcc{320,880} \times\intcc{120,160}$ & Illegal aircraft states \\ & $\times \intcc{12^\circ, 348^\circ} \times \mathbb{R}$ & over runway\\
$A_\textrm{hill}$ & $\subseteq \mathbb{R}^2$, see Fig.~\ref{fig:scenario} & Spatial obstacle set \\
$A_\mathrm{a}$ & $A_\textrm{nofly} \cup (A_\textrm{hill} \times \mathbb{R}^2)$ & Overall obstacle set \\
\hline
\end{tabular}
\caption{\label{tab:specification} Sets defining the scenario. }
\vspace*{-2\baselineskip}
\end{table}%
We formalize this mission by two optimal control problems
$\Pi_i = (X,U,F_i,G_i,g_i)$, $i \in \{1,2\}$,
which we are going to solve in succession.
The optimal control problem $\Pi_1$
is the control task to fly from the fire back to
the airfield with empty water tank.
In particular, $(X, U, F_1)$ is the sampled system
associated with $f_1$ and sampling time $\tau = 0.45$, and
\begin{align*}
g_1(x,y,u) &= \begin{cases}
\infty , & \textrm{if } y \in (\mathbb{R}^4 \setminus \bar X ) \cup A_\textrm{a} \\
\tau + u_2^2, & \text{otherwise}
\end{cases} \\
G_1(x) &= \begin{cases}
\infty, & \textrm{if } x \notin A_\textrm{rwy} \times A_\textrm{land} \\
0, & \textrm{otherwise}
\end{cases}
\end{align*}
Loosely speaking, the optimal controller for $\Pi_1$
minimizes the time to arrive at the airfield
but additionally favors small bank angles\footnote{In the definition of $g_1$, angles are taken in radians in the range $\intcc{-\pi,\pi}$.} and avoids obstacles.
The controller terminates its action
only if the aircraft is on ``final approach", which is enforced by $G_1$.
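As an illustration, $g_1$ could be encoded as follows (a sketch; \texttt{in\_obstacle} is a hypothetical membership test for the obstacle set $A_\mathrm{a}$, which is only partially specified in Tab.~\ref{tab:specification}):
\begin{verbatim}
import math

def make_g1(in_obstacle, tau=0.45):
    def g1(x, y, u):
        in_Xbar = (0 <= y[0] <= 2500 and 0 <= y[1] <= 800
                   and 50 <= y[3] <= 85)
        if not in_Xbar or in_obstacle(y):
            return math.inf
        return tau + u[1] ** 2   # penalize time and bank angle (radians)
    return g1
\end{verbatim}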
Optimal control problem $\Pi_2$
formalizes the control task
to fly to the hot spot with filled water tank.
In particular,
$(X, U, F_2)$ is the sampled system associated with $f_2$ and $\tau$,
\begin{equation*}
g_2(x,y,u) = \begin{cases}
-5\tau, & \textrm{if } y \in A_\textrm{fire} \times A_\textrm{drop} \\
g_1(x,y,u), & \textrm{otherwise}
\end{cases}
\end{equation*}
and $G_2 = V_1$, where $V_1$ is the value function of $\Pi_1$.
The motivation for $g_2$ is to reward the aircraft for flying over the fire. The controller terminates its action at a state that is beneficial for flying back to the airfield. The latter behavior is due to the definition of $G_2$.
\subsubsection{Auxiliary optimal control problems}
To solve $\Pi_1$ and $\Pi_2$
two auxiliary problems
$\Pi_i' = (X',U',F'_i,G_i',g_i')$,
$i \in \{1,2\}$ are defined
in accordance with Section \ref{s:application},
where the ingredients are as follows.
The ``abstract" state and input space
satisfy
$|X'| \approx 141.7 \cdot 10^6 $ and
$|U'| = 35$, respectively.
They are defined by subdividing the
components of $X_\textrm{scen} \times \intcc{-\pi,\pi} \times \intcc{50,85}$ and $\bar U$
into $200\!\cdot\!70\!\cdot\!75\!\cdot\!135$ and $4\cdot6$
compact hyper-rectangles, respectively.
The running costs
are defined by
\begin{align*}
g_1'(\Omega,\Omega',u) &=
\begin{cases}
\infty, & \textrm{if } \Omega' \cap ( (\mathbb{R}^4 \setminus \bar X ) \cup A_\mathrm{a} ) \neq \emptyset \\
\tau + u_2^2, & \textrm{otherwise}
\end{cases} \\
g_2'(\Omega,\Omega',u) &=
\begin{cases}
-5\tau, & \textrm{if } \Omega' \subseteq A_\textrm{fire} \times A_\textrm{drop} \\
g_1'(\Omega,\Omega',u), & \textrm{otherwise}
\end{cases}
\end{align*}
and the terminal costs by $G'_2 = V'_1$,
\begin{align*}
G_1'(\Omega) &=
\begin{cases}
0, & \textrm{if } \Omega \subseteq A_\textrm{rwy}\times A_\textrm{land} \\
\infty, & \textrm{otherwise}
\end{cases}
\end{align*}
where $V'_1$ is the value function of $\Pi_1'$.
\subsubsection{Experimental results}
\label{ss:experimentalresults}
Both $\Pi_1'$ and $\Pi_2'$ can be solved
utilizing Algorithm \ref{alg:BellmanFord}
and near-optimal controllers for $\Pi_1$ on
$A_\textrm{fire}\times A_\textrm{drop}$ and
for $\Pi_2$ on
$A_\textrm{rwy} \times A_\textrm{land}$ are found.
A trajectory of the resulting closed loop is illustrated in Fig.~\ref{fig:scenario}.
It is also important to note that the reward mechanism implemented by means of $g_2$ is indeed essential for flying the aircraft over the fire for a longer time.
Disabling this mechanism, i.e.
defining $\Pi_2$ with $g_1$ in place of $g_2$, results
in the trajectory outlined by the triangles
in Fig.~\ref{fig:scenario}. In this case, the hand-over command follows
immediately after reaching $A_\textrm{fire}$.
The performance of Algorithm \ref{alg:BellmanFord}
shall be discussed by means of Fig.~\ref{fig:experimentalresults}.
The corresponding implementation is written in C and compiled for Linux.
Firstly, Fig.~\ref{fig:experimentalresults}a indicates that the load
of the random access memory can be regulated almost without restrictions.
Specifically,
the used implementation stores
all computed images of $F$ and $\operatorname{pred}$
until $68\%$ of the RAM is consumed (a user-defined threshold).
Beyond that point, after every iteration (line \ref{alg:BF:endfor} in Algorithm \ref{alg:BellmanFord})
all transitions are deleted from memory
except those required in the next iteration.
A small temporary loss of processing speed
can be detected, but the computation proceeds efficiently.
Fig.~\ref{fig:experimentalresults}b illustrates
the great scaling property of the algorithm (and its implementation)
with respect to parallelization.
\subsection{Aircraft landing maneuver}
\label{ss:landing}
We would like to compare the performance of
Algorithm~\ref{alg:BellmanFord} with the one of a recent,
memory-efficient Dijkstra-like algorithm.
To be specific, we apply our algorithm
to the example considered in \cite[Sect.~V.B]{MacoveiciucReissig19},
which is a control problem about landing an aircraft DC-9
with 3-dimensional dynamics.
Algorithm~\ref{alg:BellmanFord} of this paper
needs 122 MB memory, which is 61\% less than the consumption
reported for \mbox{``Algorithm 2"} of \cite{MacoveiciucReissig19} (317 MB).
With one thread (resp. two threads) the computation terminates successfully in
321 (resp. 255) seconds\footnote{Implementation and hardware as in Section \ref{ss:firefighting}.}. The authors of \cite{MacoveiciucReissig19}
report 320 seconds runtime, where only sequential computation is feasible.
\section{Conclusions}
\label{s:conclusion}
Our experimental results confirm the conclusions
in \cite{KhaledZamani19} that
the efficiency of symbolic controller synthesis
can be drastically increased by utilizing algorithms
that can be executed in parallel.
In addition, our second numerical example
reveals that, though Dijkstra-like algorithms
have a smaller time complexity
than our algorithm,
the ability to parallelize and to limit RAM usage
makes our algorithm more suitable for solving
complex control problems.
The aerial firefighting problem in Section \ref{ss:firefighting}
could not be solved, or could not be solved in a
reasonable time, without the said properties.
In future work further techniques for
an even higher degree of parallelization
will be investigated,
e.g. using graphics
processing units.
On such an architecture
thousands of elementary operations
can be executed in parallel.
The challenge is to organize shared memory operations properly
since they take a significant amount
of runtime.
\begin{figure*}
\centering
\begin{minipage}{.64\textwidth}
\centering
\input{figures/runtime_vs_mem_vs_perf}
\\
(a)
\end{minipage}%
\begin{minipage}{.35\textwidth}
\centering
\input{figures/runtime_vs_threads}\\
(b)
\end{minipage}
\caption{\label{fig:experimentalresults}Performance analysis of Algorithm \ref{alg:BellmanFord} and its implementation applied to the optimal control problems in Section \ref{ss:firefighting}. All computations ran on up to 24 CPUs of type Intel Xeon E5-2697 v3 (2.60GHz) sharing 64 GB RAM. (a) Relation between memory use ($\bullet$) and processing speed ({\tiny{$\blacksquare$}}) when solving $\Pi'_2$ using 24 threads.
The quantities are measured whenever the for-loop in Algorithm \ref{alg:BellmanFord} is exited (line \ref{alg:BF:endfor}). For RAM usage the system function
\texttt{getrusage} \cite{LinuxProgrammersManual} is used.
(b) Runtime in dependence of the number of threads that are used to solve problem
$\Pi_1'$ ($\times$) and $\Pi_2'$ ($+$), respectively.}
\vspace*{-.8\baselineskip}
\end{figure*}
\section*{Acknowledgment}
The authors gratefully acknowledge the compute resources provided by the Leibniz Supercomputing Centre.
\chapter{Introduction}
One of the most interesting recent developments is the connection between
classical brane dynamics and quantum field theories.
This work was initiated by Witten [\Witten] who studied a configuration
of NS-fivebranes and D-fourbranes in type IIA string theory,
for which the low energy effective
action is a four-dimensional $N=2$ Yang-Mills theory.
From the M theory point of view this configuration is a single
M-fivebrane with a complicated set of self-intersections and it was argued
in [\Witten] that it is in fact wrapped on a
Riemann surface $\Sigma$. This identification
provides a systematic explanation of the origin of
the auxiliary curve in the Seiberg-Witten solution of $N=2$ Yang-Mills [\SW]
for various gauge groups and matter content and also illuminates many
qualitative features of quantum Yang-Mills theories. A similar role for the
Riemann surface also appeared in [\KLVW] when considering Calabi-Yau
compactifications of type II strings.
In [\three]
the worldvolume soliton solution for the intersection of two M-fivebranes
(or self-intersection of a single M-fivebrane) was found.
The Riemann surface then appears
naturally as a consequence of the Bogomoln'yi condition. It was further
shown in [\HLW,\LW] that the classical effective action for this soliton
could be calculated from the M-fivebrane's equations of motion and
the lowest order terms are precisely the full Seiberg-Witten effective
action for quantum $N=2$ Yang-Mills theory.
Thus the classical M-fivebrane dynamics contains the entire quantum effects
of low energy $N=2$ $SU(2)$ Yang-Mills theory.
This correspondence not only predicts the correct known
perturbative contributions to the $\beta$-function [\pert] but also an
infinite number of instanton corrections, of which only
the first two have been found by explicit calculation [\inst].
In this paper we will restrict our attention to
the case of two threebranes whose centre of mass is fixed.
However our work can be readily extended to the more general case.
In the corresponding type IIA picture one sees two NS-fivebranes with two
D-fourbranes suspended between them so that the low energy
effective action is
$N=2$ $SU(2)$ Yang-Mills [\Witten].
We provide a complete $N=2$ superspace
proof of the equivalence between the low energy
motion of threebranes in the M-fivebrane and the low energy Seiberg-Witten
effective action for quantum $N=2$ $SU(2)$
super-Yang-Mills, extending the results
of [\HLW,\LW] to include the fermionic zero modes. However our primary
motivation for this work is to see what insights and simplifications
can be gained into
the M-fivebrane/Seiberg-Witten correspondence from $N=2$ superspace, which
provides a geometrical unification of all of the zero modes.
Indeed many of the subtleties which appear in the purely bosonic formulations
can be clearly observed and understood in $N=2$ superspace.
\chapter{$N=2$ and $N=1$ Superfields}
Let us first recall how the $N=2$ chiral effective action decomposes
in terms of $N=1$ superfields. In this paper $N=2$ superfields are
denoted
by calligraphic or bold faced letters, $N=1$ superfields by
upper case letters and $N=0$ fields by lower case letters.
$N=2$ Yang-Mills theory is described by a chiral superfield ${\cal A}$ [\GSW]
$$
{\bar D}_{\dot B}^i {\cal A}=0\ ,\
\eqn\chrialdef
$$
which also satisfies the constraint
$$
D^{ij} {\cal A} = {\bar D}^{ij}{{\bar {\cal A}}}\ ,
\eqn\constraint
$$
where $D^{ij} = D^{Ai}D_{A}^{\ j}$ and ${\bar D}^{ij} =
{\bar D}^{{\dot A}i}{\bar D}_{\dot A}^{\ j}$.
This constraint then ensures that the vector
component obeys the Bianchi identity and that the auxiliary field is real.
The chiral effective action may then be written as
$$
S = {\rm Im}\int d^4x d^4\theta F({\cal A}) \ .
\eqn\YMaction
$$
The solution to the above constraints in the Abelian case is of the
form [\M]
$$
{\cal A} = {\bar D}^4D^{ij}{\cal V}_{ij}\ ,
\eqn\Vdef
$$
where ${\cal V}_{ij}$ is an unconstrained superfield of mass dimension $-2$.
Varying the action \YMaction\ with respect to ${\cal V}_{ij}$ yields
$$
{\rm Im}\int d^4 x d^4\theta {\bar D}^4 D^{ij}\delta {\cal V}_{ij}
{dF({\cal A})\over d{\cal A}}
= 16{\rm Im} \int d^4 x d^8\theta
\delta {\cal V}_{ij}D^{ij}\left({dF({\cal A})\over d{\cal A}}\right)\ ,
\eqn\variation
$$
and hence we find the equation of motion to be
$$
D^{ij}{dF\over d{\cal A}} = {\bar D}^{ij} {d {\bar F}\over d {\bar {\cal A}}}\ .
\eqn\YMeqofm
$$
We observe that if we write ${\cal A}_D = dF/d{\cal A}$ and ${\cal A}$ as a doublet then the
constraint \constraint\ and the equations of motion \YMeqofm\ can be
written as
$$
D^{ij}\left(\matrix{{\cal A}_D\cr {\cal A} \cr}\right)
= {\bar D}^{ij}\left(\matrix{{{\bar {\cal A}}}_D\cr {{\bar {\cal A}}} \cr}\right)\ .
\eqn\system
$$
Clearly this system has a set of equations invariant under
$$
\left(\matrix{{\cal A}_D\cr {\cal A} \cr}\right)\rightarrow
\Omega \left(\matrix{{\cal A}_D\cr {\cal A} \cr}\right)\ ,
\eqn\sym
$$
where $\Omega \in SL(2,{\bf Z})$. Thus an $SL(2,{\bf Z})$ symmetry is
naturally realised in $N=2$ superspace [\vP].
For a free theory $F= i{\cal A}^2$, and we find that the equation of motion and constraint
imply $D^{ij} {\cal A} = {\bar D}^{ij} {\bar {\cal A}}= 0$. At $\theta^{Ai}=0$ this equation
sets the auxiliary field to zero and ensures the correct equation of motion
for the Yang-Mills theory.
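To see this explicitly, note that ${\cal A}_D = dF/d{\cal A} = 2i{\cal A}$ and
$d{\bar F}/d{\bar {\cal A}} = -2i{\bar {\cal A}}$, so that \YMeqofm\ and \constraint\ read
$$
D^{ij}{\cal A} = -{\bar D}^{ij}{\bar {\cal A}}\ ,\qquad
D^{ij}{\cal A} = {\bar D}^{ij}{\bar {\cal A}}\ ,
$$
whose sum and difference give $D^{ij}{\cal A}={\bar D}^{ij}{\bar {\cal A}}=0$.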
We now solve the $N=2$ superspace constraints in terms of $N=1$ superfields.
Let us label the two superspace Grassmann odd coordinates which occur in the
$N=2$ superspace as $\theta^{A1}=\theta^A$ and $\theta^{A2}=\eta^{A}$ and
similarly for their conjugates. We also introduce the derivatives
$D_A =D_A^{\ 1}$ and $\nabla_A=D_{A}^{\ 2}$ for which $D^2=D^AD_A$,
${\bar D}^2={\bar D}^{\dot A}{\bar D}_{\dot A}$
and similarly for $\nabla^2$ and ${\bar \nabla}^2$.
We associate the coordinates $\theta^A$
and ${\bar \theta}^{\dot A}$ with those of the $N=1$ superspace, which we will
keep manifest. The $N=2$ superfield ${\cal A}$ can be decomposed into the
$N=1$ superfields $A={\cal A}\mid_{\eta=0}$,
$W_A = \nabla_A{\cal A}\mid_{\eta=0}$ and $G = -{1\over2}\nabla^2{\cal A}\mid_{\eta=0}$.
These $N=1$ superfields are also $N=1$ chiral since ${\cal A}$ satisfies \chrialdef;
$$
{\bar D}_{\dot B}A = 0\ ,\quad {\bar D}_{\dot B}W_C = 0\ .
\eqn\onechiral
$$
It remains to solve the constraint \constraint .
Taking $i=1,\ j=2$ we find
the constraint $D^BW_B= {\bar D}^{\dot B} {\bar W}_{\dot B}$.
Taking $i=j=2$ we find that
$$
\nabla^2 {\cal A} = {\bar D}^2{{\bar {\cal A}}}\ ,
\eqn\twochiral
$$
which implies $G=-{1\over 2}{\bar D}^2 {\bar A}$ and so $G$
is not an independent superfield.
We can now evaluate the action \YMaction\ in terms of $N=1$ superfields
[\STG]
$$\eqalign{
S & =
{\rm Im}\int d^4 x d^2 \theta d^2\eta F \ ,\cr
&= {\rm Im}\int d^4 x d^2 \theta\left\{-{\nabla^2{{\cal A}}\over 4}
{dF\over d{\cal A}}
- {1\over4} {\nabla}^{ B} {\cal A} {\nabla}_{B} {\cal A}
{d^2F \over d{\cal A}^2}\right\}_{\eta=0}\ ,\cr
&= {\rm Im}\int d^4 x d^2 \theta\left\{-{\bar D^2{\bar A}\over 4}
{dF\over dA}
- {1\over4}{d^2F\over dA^2} {W}^{B}{W}_{B}\right\}\ ,\cr
&= {\rm Im}\left\{\int d^4 x d^4\theta{\bar A}{dF\over dA}
- {1\over4}\int d^4 x d^2\theta {d^2F\over dA^2}{W}^{B}{W}_{B}\right\} \ .\cr}
\eqn\oneaction
$$
This is easily recognised as the standard form
for the action of an $N=2$
Abelian Yang-Mills multiplet written in $N=1$ superfields.
At this point we remark on an important notational oddity. Consider the
coefficient of
$D_AW_B|_{\theta=0}$. Following standard conventions
the generators of the four-dimensional Lorentz group $\sigma^{\mu\nu}$
satisfy
$\sigma^{\mu\nu} = -{i\over 2}\epsilon^{\mu\nu\rho\lambda}
\sigma_{\rho\lambda}$. Therefore
$D_AW_B |_{\theta=0}= \sigma_{AB}^{\mu\nu}(F_{\mu\nu} -i\star F_{\mu\nu})$,
where
$F_{\mu\nu}$ is the curl of a four-dimensional gauge field.
Now the M-fivebrane field content contains a self-dual three form
$H$ and one can check that
$F_{\mu\nu} -i\star F_{\mu\nu}$ must come
from the $H_{\mu\nu{\bar z}}$ component of $H$ (this was also shown in [\LW]).
Thus one sees that the lowest component of $A$ must be ${\bar a}({\bar z})$.
In this paper then unbarred superfields depend upon
${\bar z}$ and barred superfields depend on $z$.
This problem has its origins
in the two uses for the bar symbol; as complex conjugation on the Riemann
surface, or as Hermitian conjugation in $N=2$ superspace. We are choosing
the latter definition to have precedence.
\chapter{The M-Fivebrane and Seiberg-Witten}
We refer the reader to [\HLW,\LW] for a detailed
discussion of the threebrane and its
zero modes. The Riemann surface $\Sigma$ for two threebranes is given by
$t^2 - 2(z^2 - u)t + \Lambda^4 = 0$, where $t = e^{-s}$, $z$ is a coordinate
of the surface and $u$ is a modulus
related to the relative separation of the two threebranes.
From a supersymmetric point of view it is natural to take the
complex scalar ${\bar s}({\bar z})$ to be the lowest component of a chiral
$N=2$ superfield ${\cal S}$ with independent $N=1$ components
$$
{\cal S}\mid_{\eta=0} = S\ , \quad \nabla_A {\cal S}\mid_{\eta=0} = H_{A {\bar z}}\ ,
\eqn\Scomponents
$$
with $S\mid_{\theta =0} = {\bar s}$ and
$D_AH_{B{\bar z}}|_{\theta=0}=\sigma_{AB}^{\mu\nu}H_{\mu\nu{\bar z}}$.
Similarly we promote the moduli ${\bar u}$ of the Riemann surface to an
$N=2$ superfield ${\cal U}$ with the independent $N=1$ components
$$
{\cal U}\mid_{\eta=0} = U\ , \quad \nabla_A {\cal U}\mid_{\eta=0} = T_{A}\ ,
\eqn\Ucomponents
$$
and $U\mid_{\theta =0} = {\bar u}$.
We can then define the $N=2$ superfields
$$
{\cal A} = \oint_{A}{\cal S} d{\bar z}\ ,\quad {\cal A}_D = \oint_{B}{\cal S} d{\bar z}\ ,
\eqn\AADdef
$$
where $A$ and $B$ are a basis of one cycles of $\Sigma$.
Given these definitions one can see that the $\eta^A$ components of ${\cal A}$
and ${\cal A}_D$ are
$$\eqalign{
W_{A} &=\nabla_A \oint_{A}{\cal S} d{\bar z}
= \oint_{A} {\Lambda (U)} T_A
= {dA\over dU} T_A\ ,\cr
W^D_{A} &=\nabla_A \oint_{B}{\cal S} d{\bar z}
= \oint_{B} {\Lambda (U)} T_A
= {dA_D\over dU} T_A\ .\cr
}
\eqn\WWD
$$
Here $\Lambda = {dS\over dU} d{\bar z}$ is an $N=1$ superfield whose lowest
component is the anti-holomorphic one form ${\bar \lambda}$
of the Riemann surface. Furthermore it follows from
$$
H_{A{\bar z}} = \nabla_A{\cal S}|_{\eta=0}
= {dS\over dU}\nabla_A{\cal U}|_{\eta=0}
= {dS\over dU}T_A\ ,
\eqn\Teval
$$
and \WWD\ that
$$
H_{\mu\nu{\bar z}}
= \left({d{S}\over d{A}}\right)(F_{\mu\nu}-i\star F_{\mu\nu})
= \left({dA\over dU}\right)^{-1}
(F_{\mu\nu}-i\star F_{\mu\nu})\Lambda_{{\bar z}}\ ,
\eqn\ansatztwo
$$
which agrees with the ansatz used in [\LW]
for the vector zero modes.
Now we wish to obtain a manifestly $N=2$ formulation of the equations of
motion for the threebranes of the M-fivebrane [\LW].
To this end we postulate the following equation
$$
{\cal E} \equiv D^{ij}{\cal S} - R^2\Lambda^4\partial_{{\bar z}}
\left({D^{Ai}{\cal S} D_A^{\ \ j}{\cal S}\partial_z{\bar {\cal S}}\over
1+R^2\Lambda^4|{\partial_{{\bar z}}}{\cal S}|^2 }
- {{\bar D}^{{\dot A}i}{\bar {\cal S}} {\bar D}_{\dot A}^{\ \ j}{\bar {\cal S}}\partial_{{\bar z}}{\cal S}\over
1+R^2\Lambda^4|\partial_{{\bar z}}{\cal S}|^2}\right)=0\ .
\eqn\susyeq
$$
First we take the $i=j=1$ component of \susyeq. This gives at $\eta=0$
$$
D^2 S - R^2\Lambda^4 \partial_{{\bar z}}\left(
{D^{A}S D_A S\partial_z{\bar S}\over
1+R^2\Lambda^4|{\partial_{{\bar z}}}S|^2 }
- {{\bar T}^{\dot A} {\bar T}_{\dot A}\partial_{{\bar z}}S\over
1+R^2\Lambda^4|\partial_{{\bar z}}S|^2}
\right)=0 \ .
\eqn\ijone
$$
Next we act on \ijone\ with ${\bar D}^2$ and set the fermions to zero to
obtain
$$
{\bar D}^2D^2 S + 2R^2\Lambda^4 \partial_{{\bar z}}\left(
{{\bar D}^{\dot B}D^{A}S {\bar D}_{\dot B}D_A S\partial_z{\bar S}\over
1+R^2\Lambda^4|{\partial_{{\bar z}}}S|^2}
- {{\bar D}^{\dot B}{\bar T}^{\dot A} {\bar D}_{\dot B}{\bar T}_{\dot A}
\partial_{{\bar z}}S\over 1+R^2\Lambda^4|\partial_{{\bar z}}S|^2}
\right)=0 \ .
\eqn\ijtwo
$$
This equation can then be evaluated to give
$$
\partial_{\mu}\partial^{\mu}{\bar s}
- R^2\Lambda^4\partial_{{\bar z}}\left(
{\partial_{\mu}{\bar s}\partial^{\mu}{\bar s}\partial_z s\over
1+R^2\Lambda^4|{\partial_{z}}s|^2}
+ H_{\mu\nu z}H^{\mu\nu}_{\ \ z}
{\partial_{{\bar z}}{\bar s}\over 1+R^2\Lambda^4|{\partial_{z}}s|^2}\right)=0\ ,
\eqn\ijthree
$$
which is precisely the equation for the scalar zero modes obtained in [\LW],
provided that we rescale $H_{\mu\nu z}\rightarrow 4H_{\mu\nu z}$.
Now we take the $i=1$, $j=2$ component of \susyeq. At $\eta=0$ we find
$$
D^AT_A + R^2\Lambda^4\partial_{{\bar z}}\left(
{D^A S T_A\partial_z{\bar S}\over 1+R^2\Lambda^4|\partial_{{\bar z}}S|^2}
+ {{\bar T}^{\dot A}{\bar D}_{\dot A}{\bar S} \partial_{{\bar z}}S
\over 1+R^2\Lambda^4|{\partial_{{\bar z}}}S|^2} \right)=0 \ .
\eqn\ijfour
$$
Next we act with $D_C{\bar D}_{\dot B}$ on \ijfour\ and set
the fermions to zero to obtain
$$
D_C{\bar D}_{\dot B}D^AT_A
-R^2\Lambda^4\partial_{{\bar z}}\left(
{\partial_z {\bar S} {\bar D}_{\dot B}D^A S D_C T_A
\over 1+R^2\Lambda^4|\partial_{z}{\bar S}|^2}
- {\partial_{{\bar z}}SD_C{\bar D}^{\dot A}{\bar S}
{\bar D}_{\dot B}{\bar T}_{\dot A}
\over 1+R^2\Lambda^4|\partial_{z}{\bar S}|^2}
\right)=0\ .
\eqn\ijfive
$$
Evaluating \ijfive\ we then arrive at the equation for the vector
zero modes
$$
\partial^{\nu}H_{\mu\nu{\bar z}} - R^2\Lambda^4\partial_{{\bar z}}\left(
{\partial_z s\partial^{\nu}{\bar s} H_{\mu\nu{\bar z}}
\over 1+R^2\Lambda^4|{\partial_{z}}s|^2}
-{\partial_{{\bar z}}{\bar s} \partial^{\nu}s H_{\mu\nu z}
\over 1+R^2\Lambda^4|{\partial_{z}}s|^2}\right)=0\ ,
\eqn\ijsix
$$
which is again precisely the same equation that was found in [\LW].
Since the scalar and vector component equations agree with the correct
equations of motion, it follows that \susyeq\ describes all the zero modes
of a threebrane soliton in the M-fivebrane worldvolume.
Now that we have shown that \susyeq\ reproduces the M-fivebrane equations
of motion we can determine the equations of motion for the massless modes
of the threebrane solitons by reducing ${\cal E}=0$
over the Riemann surface.
Thus we consider
$$
0 = \int_{\Sigma} {\cal E}d{\bar z}\wedge {\bar {\bf \Lambda}}\ ,
\eqn\reduce
$$
where, according to our conventions, ${\bar {\bf \Lambda}} =
{d{{\bar {\cal S}}}\over d{{\bar {\cal U}}}}dz$ is an $N=2$ superfield whose lowest component is
the holomorphic one form $\lambda$.
Expanding out the terms in \susyeq\ one finds
$$
0 = D^{ij}{\cal U} I
+ D^{Ai}{\cal U} D_{A}^{\ \ j}{\cal U} {dI\over d{\cal U}}
- D^{Ai}{\cal U} D_{A}^{\ \ j}{\cal U} J
+ {\bar D}^{{\dot A}i}{\bar {\cal U}} {\bar D}_{\dot A}^{\ \ j}{\bar {\cal U}} K\ ,
\eqn\eqofmo
$$
where the $I,J$ and $K$ integrals
have appeared before in [\LW], where their
values were also deduced;
$$\eqalign{
I &\equiv \int_{\Sigma}{{\bf \Lambda}}\wedge{\bar {\bf \Lambda}}
= {d{\cal A}\over d {\cal U}}{d{\bar {\cal A}}\over d{\bar {\cal U}}}({\tau} - {\bar \tau})\ ,\cr
J&\equiv R^2\Lambda^4\int_{\Sigma}\partial_{{\bar z}}\left(
{{\bf \Lambda}_{{\bar z}}^2\partial_{z}{\bar {\cal S}}\over 1+R^2\Lambda^4\partial_{{\bar z}}
{\cal S}\partial_{z}{\bar {\cal S}}}\right)d{\bar z}\wedge{\bar {\bf \Lambda}} = 0\ ,\cr
K &\equiv R^2\Lambda^4 \int_{\Sigma}\partial_{{\bar z}}\left(
{{\bar {\bf \Lambda}}_{z}^2\partial_{{\bar z}}{\cal S}\over 1+R^2\Lambda^4\partial_{{\bar z}} {\cal S}
\partial_{z}{\bar {\cal S}}} \right)d{\bar z}\wedge {\bar {\bf \Lambda}}
= -{d{\bar \tau}\over d{{\bar {\cal U}}}}\left({d{{\bar {\cal A}}}\over d{{\bar {\cal U}}}}\right)^2
\ ,\cr}
\eqn\IJK
$$
where $\tau({\cal U}) = {d{{\cal A}}_D\over d{\cal A}}$.
The first integral is easily evaluated using the Riemann bilinear
identity; however, in [\LW] the two non-holomorphic integrals required a
rather indirect
method to evaluate them. In the appendix to this paper we provide a
direct proof of the above expressions for $J$ and $K$.
Multiplying by $({d{\bar {\cal A}}\over d{\bar {\cal U}}})^{-1}$ one can rewrite \eqofmo\ in the
simple form
$$
D^{ij}{{\cal A}}_D-{\bar D}^{ij}{\bar {\cal A}}_D-{\bar \tau}(D^{ij}{\cal A} -{\bar D}^{ij}{\bar {\cal A}}) = 0\ .
\eqn\eqofmotwo
$$
The real and imaginary parts of this equation are equivalent to
$$
D^{ij}{\cal A} = {\bar D}^{ij}{\bar {\cal A}}\ , \quad
D^{ij}{{\cal A}}_D = {\bar D}^{ij}{{\bar {\cal A}}}_D\ ,
\eqn\eqofmori
$$
respectively. Therefore if we introduce a function $F({\cal A})$ defined so that
${\cal A}_D = {\partial F\over \partial{\cal A}}$ then we see
from \AADdef\
that these equations are precisely those of the Seiberg-Witten effective
theory for $N=2$ $SU(2)$ Yang-Mills with $N=2$ superspace action \YMaction.
Finally let us consider the following $N=2$
superspace generalisation of the Seiberg-Witten differential,
${\bf \Lambda}_{SW}= {\cal S} d{\bar z}$. The lowest component of ${\bf \Lambda}_{SW}$
is the Seiberg-Witten
differential ${\bar \lambda}_{SW}$ and its $\eta^A$ component is the form
$H_{A{\bar z}}d{\bar z}$. First we note that for either the $A$ or $B$ cycle
$$
\oint ({\cal E}d{\bar z} - {\bar {\cal E}}dz)
= \oint (D^{ij}{\bf \Lambda}_{SW}-{\bar D}^{ij}{\bar {\bf \Lambda}}_{SW})\ ,
\eqn\cycle
$$
since the non-holomorphic terms in $\cal E$ collect into the form
$d(f-{\bar f})$. We can discard the integral of $d(f-{\bar f})$ over the $A$
or $B$ cycles as the function
$f({\cal S},{\bar {\cal S}})$ is non-singular in a neighbourhood of these cycles, which are
therefore closed curves on $\Sigma$.
A direct derivation of the Seiberg-Witten effective
equations of motion comes from evaluating
$$\eqalign{
0 & = \oint_A \left( {\cal E}d{\bar z} - {\bar {\cal E}}dz\right)
= \oint_A\left(
D^{ij}{\bf \Lambda}_{SW}-{\bar D}^{ij}{\bar {\bf \Lambda}}_{SW}\right)\ ,\cr
0 & = \oint_B \left( {\cal E}d{\bar z} - {\bar {\cal E}}dz\right)
= \oint_B\left(
D^{ij}{\bf \Lambda}_{SW}-{\bar D}^{ij}{\bar {\bf \Lambda}}_{SW}\right)\ ,\cr}
\eqn\quick
$$
which yields
$$
D^{ij}\left(\matrix{\oint_{A}{{\bf \Lambda}_{SW}}\cr \oint_{B}{\bf \Lambda}_{SW} \cr}
\right) = {\bar D}^{ij}\left(
\matrix{\oint_{A}{\bar {\bf \Lambda}}_{SW}\cr \oint_{B}{\bar {\bf \Lambda}}_{SW} \cr}
\right)\ .
\eqn\eqofm
$$
Clearly an $SL(2,{\bf Z})$ transformation on the $(A,B)$
cycles generates the $SL(2,{\bf Z})$ transformation on $(A,A_D)$ discussed
in the previous section. We note that the condition
$D^{ij}{\bf \Lambda}_{SW} = {\bar D}^{ij}{\bar {\bf \Lambda}}_{SW}$ is simply the
constraint \constraint\ applied to ${\bf \Lambda}_{SW}$.
Thus the Seiberg-Witten equations of motion can be obtained
by imposing the $N=2$
superfield constraint \constraint\ on the generalised Seiberg-Witten
differential ${\bf \Lambda}_{SW}$ and then integrating it over the
cycles of the Riemann surface.
To make contact with the previous discussion we note that
because $dz\wedge {\bar {\bf \Lambda}}=0$, \reduce\ can be
rewritten as
$$\eqalign{
0 &= \int_{\Sigma} \left( {\cal E}d{\bar z} - {\bar {\cal E}}dz\right)
\wedge {\bar {\bf \Lambda}} \cr
&= \oint_B\left(
D^{ij}{\bf \Lambda}_{SW}-{\bar D}^{ij}{\bar {\bf \Lambda}}_{SW}\right)
\oint_A {\bar {\bf \Lambda}}
- \oint_A\left(
D^{ij}{\bf \Lambda}_{SW}-{\bar D}^{ij}{\bar {\bf \Lambda}}_{SW}\right)
\oint_B {\bar {\bf \Lambda}} \cr
&=(D^{ij}{{\cal A}}_D - {\bar D}^{ij}{{\bar {\cal A}}}_D){d{\bar {\cal A}}\over d{\bar {\cal U}}}
- (D^{ij}{\cal A} - {\bar D}^{ij}{\bar {\cal A}}){d{{\bar {\cal A}}}_D\over d{\bar {\cal U}}}\ ,}
\eqn\redconst
$$
where we have applied the Riemann bilinear relation and again dropped the
total derivative terms. In this way we arrive immediately at equation
\eqofmotwo, whose real and imaginary parts are \eqofm.
\noindent{\bf Acknowledgements:} We would like to thank A. Pressley for discussions on Riemann surfaces.

\noindent{\bf Note Added:}
From the above superfield construction one sees a potential obstruction
to obtaining the Seiberg-Witten effective action from a
six-dimensional action. The chiral nature of $N=2$ superspace requires
that such an action could depend only upon ${\bf \Lambda}_{SW}={\cal S} d{\bar z}$ and not
${\bar {\bf \Lambda}}_{SW} = {\bar {\cal S}} dz$. However to obtain a covariant action
in six dimensions requires both $dz$ and $d{\bar z}$ to appear.
\section{Introduction}\label{intro}
In many applications of reinforcement learning, such as playing chess and Go, the underlying model is known and so the main challenge is in solving the associated dynamic programming problem in an efficient manner. Policy iteration and variants of policy iteration \cite{bertsekas2019reinforcement,Bertsekas2011ApproximatePI,bertsekastsitsiklis} that solve dynamic programming problems rely on computations that are infeasible due to the sizes of the state and action spaces in modern reinforcement learning problems.
As a remedy to this ``curse of dimensionality,'' several state-of-the-art algorithms \cite{silver2017shoji, silver2017mastering, DBLP:journals/corr/MnihBMGLHSK16} employ lookahead for policy evaluation and policy improvement, function approximation, and gradient descent to compute the function approximation (see Section \ref{section2} and \cite{bertsekas2019reinforcement} for definitions of these terms).
The recent work in \cite{efroni2019combine} considers a variant of policy iteration that utilizes lookahead and approximate policy evaluation using an $m$-step return (see Section \ref{section2} for definitions of these terms). As stated in the motivation in \cite{efroni2019combine}, we note that lookahead can be approximated well in practice using Monte Carlo Tree Search (MCTS) \cite{kocisszepesvari, browne} even though in theory, it has exponential complexity \cite{shah2020nonasymptotic}. Motivated by policy iteration, the algorithm in \cite{efroni2019combine} estimates the value function associated with a policy and aims to improve the policy at each step. Policy improvement is achieved by obtaining the ``greedy'' policy in the case of policy iteration or a lookahead policy in the work of \cite{efroni2019combine}, which involves applying the Bellman operator several times to the current iterate before obtaining the greedy policy. The idea is that the application of the Bellman operator several times gives a more accurate estimate of the optimal value function. Then, similar to policy iteration, the algorithm in \cite{efroni2019combine} aims to evaluate the new policy. The algorithm in \cite{efroni2019combine} uses an $m$-step return to compute the value function associated with a policy, i.e., it applies the Bellman operator associated with the policy $m$ times.
The work of \cite{efroni2019combine} establishes that lookahead can significantly improve the rate of convergence if one uses the value function computed via lookahead in the approximate policy evaluation step. Our main interest is in understanding how these convergence results change when the state space is very large and one has to resort to function approximation of the value function.
Our contributions are as follows:
(1) We examine the impact of lookahead and $m$-step return on approximate policy iteration with linear function approximation. As is common in practice, we assume that we evaluate an approximate value function only at some states at each iteration. Since we use function approximation, we need different proof techniques than in \cite{efroni2019combine}, and one consequence of this is that the performance bounds that we obtain for the algorithm require that the sum of the lookahead and the number of steps in the $m$-step return is sufficiently large. We demonstrate through an extension of a counterexample in \cite{Tsitsiklis94feature-basedmethods} that such a condition is, in general, necessary for convergence with function approximation, unlike in the tabular setting of \cite{efroni2019combine}. See Appendix \ref{appendix:counterexAppendix} for our counterexample.
(2) For ease of exposition, we first present the case where one solves a least-squares problem at each iteration to obtain the weights associated with the feature vectors in the function approximation of the value function. We then consider a more practical and widely-used scheme where one step of gradient descent is used to update the weights of the value function approximation at each iteration. Obtaining
performance bounds for the gradient descent algorithm is more challenging, and these bounds can be found in Section \ref{SectionGD}. Our results are presented in the limit as the number of iterations goes to infinity, but finite-time bounds can be easily extracted from the intermediate steps of the proofs.
(3) Our results show that the sufficient condition on the minimum amount of lookahead and return for convergence does not depend on the size of the state space but depends on the feature vectors used for function approximation.
(4) We complement our theoretical results with experiments on the same grid world problem as in \cite{efroni2018} and \cite{efroni2019combine}. These experiments are presented in Appendix \ref{appendix:numerical}.
In addition to the work of \cite{efroni2019combine}, there is a long history of other work on approximate policy iteration. We will compare our results to some of these prior works in a later section.
\section{Preliminaries} \label{section2}
We consider a Markov Decision Process (MDP), which is defined to be a 5-tuple $(\scriptS, \scriptA, P, R, \alpha)$. The finite set of states of the MDP is $\scriptS$ and the finite set of actions is $\scriptA$. Let $P_{ij}(a)$ be the probability of transitioning from state $i$ to state $j$ when taking action $a \in \scriptA$. We denote by $s_k$ the state of the MDP and by $a_k$ the corresponding action at time $k$. We associate with state $s_k$ and action $a_k$ a non-deterministic reward $r(s_k, a_k) \in [0, 1]$ for all $s_k \in \scriptS, a_k \in \scriptA$, so the rewards are uniformly bounded. Our objective is to maximize the expected cumulative discounted reward $\sum_{k=0}^\infty \alpha^k r(s_k, a_k)$, where $\alpha \in (0, 1)$ is the discount factor.
Towards this end, we associate with each state $s\in \scriptS$ a deterministic policy $\mu(s) \in \scriptA$ which prescribes an action to take. For every policy $\mu$ and every state $s \in \scriptS$ we define $J^{\mu}(s)$ as follows:
\begin{align*}
J^{\mu}(s) := E\left[\sum_{k=0}^\infty \alpha^k r(s_k, \mu(s_k))\Big|s_0=s\right].
\end{align*}
We define the optimal reward-to-go $J^*$ as
$J^*(s) := \underset{\mu}\max\, J^\mu(s).$ The objective is to find a policy $\mu$ that maximizes $J^\mu(s)$ for all $s \in \scriptS$. Towards this objective, we associate with each policy $\mu$ a function $T_\mu: \mathbb{R}^{|\scriptS|} \to \mathbb{R}^{|\scriptS|}$ where for $J \in \mathbb{R}^{|\scriptS|},$ the $s$th component of $T_{\mu}J$ is
\begin{align*}
(T_\mu J)(s) = r(s, \mu(s)) + \alpha \sum_{j=1}^{|\scriptS|} p_{sj}(\mu(s)) J(j),
\end{align*} for all $s \in \scriptS$. If the function $T_{\mu}$ is applied $m$ times to a vector $J \in \mathbb{R}^{|\scriptS|},$ we call the result $T^m_\mu J.$ We say that $T^m_\mu J$ is the $m$-step return corresponding to $J$, or simply the ``return'' when $J$ and $m$ are understood.
Similarly, we define the Bellman operator $T: \mathbb{R}^{|\scriptS|} \to \mathbb{R}^{|\scriptS|}$ with the $s$th component of $TJ$ being
\begin{align}
(TJ)(s) = \underset{a}\max \Bigg \{ r(s, a) + \alpha \sum_{j=1}^{|\scriptS|} p_{sj}(a)J(j) \Bigg \}. \label{T}
\end{align}
The policy corresponding to the $T$ operator is defined as the \textit{greedy} policy. If the operator $T$ is applied $H$ times to a vector $J \in \mathbb{R}^{|\scriptS|},$ we call the result $T^H J$ the $H$-step ``lookahead'' corresponding to $J$. The greedy policy corresponding to $T^H J$ is called the $H$-step lookahead policy, or the lookahead policy when $H$ is understood. More precisely, given an estimate $J$ of the value function, the lookahead policy is the policy $\mu$ such that $T_\mu(T^{H-1} J)=T(T^{H-1} J).$
It is well known that each time the Bellman operator is applied to a vector $J$ to obtain $TJ,$ the following holds:
\begin{align*}
\norm{TJ-J^*}_\infty\leq \alpha\norm{J-J^*}_\infty.
\end{align*} Thus, applying $T$ to obtain $TJ$ gives a better estimate of the value function than $J.$
The Bellman equations state that the vector $J^\mu$ is the unique solution to the linear equation
\begin{align}
J^\mu = T_\mu J^\mu. \label{bellman}
\end{align}
Additionally, we have that $J^*$ is a solution to
\begin{align*}
J^* = TJ^*.
\end{align*}
Note that every greedy policy w.r.t. $J^*$ is optimal and vice versa \cite{bertsekastsitsiklis}.
We will now state several useful properties of the operators $T$ and $T_\mu$. Consider the vector $e \in \mathbb{R}^{|\scriptS|}$ with $e(i) = 1$ for all $i \in \{1, 2, \ldots, |\scriptS|\}.$ We have:
\begin{equation}
T(J + ce) = TJ + \alpha ce, \quad T_\mu(J + ce) = T_\mu J + \alpha ce. \label{eq:usefulproperties}
\end{equation}
Operators $T$ and $T_\mu$ are also monotone:
\begin{align}
J \leq J' \implies TJ \leq TJ', \quad T_\mu J \leq T_\mu J'. \label{monotonicityproperty}
\end{align}
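To make these definitions concrete, the following is a minimal NumPy sketch of $T$ and $T_\mu$ on a randomly generated toy MDP; the instance, the variable names, and the tolerance in the final check are ours and purely illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, A, alpha = 4, 2, 0.9               # |S|, |A|, discount factor
P = rng.random((A, n, n))
P /= P.sum(axis=2, keepdims=True)     # P[a] is row-stochastic
r = rng.random((A, n))                # r[a][s] = r(s, a) in [0, 1]

def T_mu(J, mu):
    # One application of the policy Bellman operator T_mu.
    return np.array([r[mu[s], s] + alpha * P[mu[s], s] @ J
                     for s in range(n)])

def T(J):
    # One application of the optimality Bellman operator T.
    return np.max([r[a] + alpha * P[a] @ J for a in range(A)], axis=0)

# Contraction check: ||TJ - TJ'||_inf <= alpha * ||J - J'||_inf.
J1, J2 = rng.random(n), rng.random(n)
assert np.max(np.abs(T(J1) - T(J2))) \
       <= alpha * np.max(np.abs(J1 - J2)) + 1e-12
\end{verbatim}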
\section{Least Squares Function Approximation Algorithm}
Our algorithm is described in Algorithm \ref{alg:LSalg}. The algorithm is an approximation to policy iteration with lookahead. At each iteration index, say, $k$, we have an estimate of the value function, which we denote by $J_k$. To obtain $J_{k+1}$, we perform a lookahead to improve the value function estimate at a subset of states (denoted by $\scriptD_k$), which can vary from iteration to iteration. For example, $\scriptD_k$ could be chosen as the states visited when performing a tree search to approximate the lookahead process. During the lookahead process, we note that we will also obtain an $H$-step lookahead policy, which we denote by $\mu_{k+1}$. As noted in the Introduction, the computation of $T^{H-1}(J_k)(i)$ for $i \in \scriptD_k$ in Step \ref{step 3 alg} of Algorithm \ref{alg:LSalg} may be computationally infeasible; however, as noted in \cite{efroni2019combine}, techniques such as Monte Carlo tree search (MCTS) are employed in practice to approximately estimate $T^{H-1}(J_k)(i).$
We obtain estimates of $J^{\mu_{k+1}}(i)$ for $i \in \scriptD_k$, which we call $\hat{J}^{\mu_{k+1}}(i)$. To obtain the estimate of the $J^{\mu_{k+1}}(i)$, we perform a $m$-step return with policy $\mu_{k+1}$, and obtain the estimate of $T^m_{\mu_{k+1}}T^{H-1}J_k(i)$ for $i \in \scriptD_k.$ We model the approximation errors in lookahead and return by adding noise to the output of these steps.
In order to estimate the value function for states not in $\scriptD_k$, we associate with each state $i \in \scriptS$ a feature vector $\phi(i)\in \mathbb{R}^d$ where typically $d \ll |\scriptS|$. The matrix comprised of the feature vectors as rows is denoted by $\Phi$. We obtain estimates of $T_{\mu}^m T^{H-1} J_k$ for states in $\scriptD_k$ and use those estimates to find the best-fitting $\theta \in \mathbb{R}^d$, i.e.,
\begin{align*}
\min_\theta \sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2.
\end{align*}
Our algorithm then computes $\theta_{k+1}$ and uses it to obtain $J_{k+1} = \Phi \theta_{k+1}$. The process then repeats. To estimate the value function associated with policy $\mu_{k+1},$ we compute $T_{\mu_{k+1}}^m T^{H-1} J_k (i)$ for $i\in \scriptD_k.$ Another alternative is to instead compute $T_{\mu_{k+1}}^m J_k (i).$ It was shown in \cite{efroni2019combine} that the former option is preferable because it has a certain contraction property. Thus, we have chosen to use this computation in our algorithm as well. However, we show in the arXiv version of this paper \cite{anna} that the algorithm also converges with the second option if $m$ is chosen to be sufficiently large.
\begin{algorithm}[tb]
\caption{Least Squares Function Approximation Algorithm}
\label{alg:LSalg}
\textbf{Input}: $J_0, m,$ $H,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d$ and subsets $\scriptD_k \subseteq \scriptS, k = 0, 1, \ldots.$ Here $\scriptD_k$ is the set of states at which we evaluate the current policy at iteration $k.$\\
\begin{algorithmic}[1]
\STATE Let $k=0$.
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps$.\\\label{step 2 alg}
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\ \label{step 3 alg}
\STATE Choose $\theta_{k+1}$ to solve
\begin{align}
\min_\theta \sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2, \label{step 4 alg}
\end{align} \\
where $\Phi$ is a matrix whose rows are the feature vectors.
\STATE $J_{k+1} = \Phi \theta_{k+1}.$
\STATE Set $k \leftarrow k+1.$ Go to 2.
\end{algorithmic}
\end{algorithm}
Under Assumption \ref{assume 1} below, the least-squares problem in \eqref{step 4 alg} has the closed-form solution
\begin{align*}
\theta_{k+1} &= \left(\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}\right)^{-1}\Phi_{\scriptD_k}^\top \scriptP_{k}(T_{\mu_{k+1}}^m T^{H-1}J_k+w_{k+1}).
\end{align*}
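For concreteness, the following sketch (continuing the toy NumPy setup of Section \ref{section2}; the feature matrix \texttt{Phi} and the evaluation set \texttt{D} are illustrative choices, and we take $\eps = 0$ and $w_{k+1} = 0$) implements one iteration of Algorithm \ref{alg:LSalg}.
\begin{verbatim}
Phi = rng.random((n, 2))        # rows are the feature vectors phi(i)
D = np.arange(n)                # D_k: states evaluated this iteration

def lookahead_policy(J, H):
    # Greedy policy w.r.t. T^{H-1} J (Step 2 with eps = 0).
    JH = J.copy()
    for _ in range(H - 1):
        JH = T(JH)
    Q = np.stack([r[a] + alpha * P[a] @ JH for a in range(A)])
    return np.argmax(Q, axis=0), JH

def ls_iteration(J, m, H):
    # One pass of Steps 2-5 with exact, noiseless evaluations.
    mu, JH = lookahead_policy(J, H)
    Jhat = JH.copy()
    for _ in range(m):          # m-step return T_mu^m T^{H-1} J
        Jhat = T_mu(Jhat, mu)
    theta, *_ = np.linalg.lstsq(Phi[D], Jhat[D], rcond=None)
    return Phi @ theta, mu      # J_{k+1} = Phi theta_{k+1}
\end{verbatim}
Iterating \texttt{ls\_iteration} starting from \texttt{J = np.zeros(n)} runs the algorithm with exact, noiseless policy evaluations.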
To analyze Algorithm \ref{alg:LSalg}, we make the following assumption which states that we explore a sufficient number of states during the policy evaluation phase at each iteration.
\begin{assumption}\label{assume 1}
For each $k \geq 0, \text{ rank }\{ \phi(i)\}_{i \in \scriptD_k} = d$.
\end{assumption}
We assume that the noise, $w_k,$ is bounded.
\begin{assumption}\label{assume 2}
For some $\eps' >0,$ $\norm{w_k}_\infty \leq \eps'$ for all $k$.
\end{assumption}
We also assume that the rewards are bounded.
\begin{assumption}\label{assume 3}
$r(i,u) \in[0,1]$ $\forall i,u.$
\end{assumption}
Using Assumption~\ref{assume 1}, $J_{k+1}$ can be written as
\begin{align}
J_{k+1} &= \Phi \theta_{k+1} =\underbrace{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_{=: \scriptM_{k+1}} \hat{J}^{\mu_{k+1}},\label{defMk}\end{align}
where $\Phi_{\scriptD_{k}}$ is a matrix whose rows are the feature vectors of the states in $\scriptD_{k}$ and $\scriptP_k$ is a matrix of zeros and ones such that $\scriptP_k\hat{J}^{\mu_{k+1}}$ is a vector whose elements are a subset of the elements of $\hat{J}^{\mu_{k+1}}$ corresponding to $\scriptD_k$.
Written concisely, our algorithm is as follows:
\begin{equation}
J_{k+1} = \scriptM_{k+1} (T_{\mu_{k+1}}^m T^{H-1} J_k+w_{k+1}), \label{eq:iterateAPI}
\end{equation}
where $\mu_{k+1}$ is defined in step 2 of the algorithm.
Note that $w_{k+1}(i)$ for $i\notin\scriptD_k$ does not affect the algorithm, so for convenience we can define $w_{k+1}(i)=0$ for $i\notin\scriptD_k.$
Now we will state our theorem which characterizes the role of lookahead ($H$) and return ($m$)
in the convergence of approximate policy iteration with function approximation.
\begin{theorem}\label{mainAPI}
Suppose that $m$ and $H$ satisfy $m + H -1>\log (\delta_1)/\log (1/\alpha),$ where
$$\delta_1 := \sup_k \norm{\scriptM_k}_\infty = \sup_k\norm{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_\infty.$$
Then
\begin{align}
\norm{J^{\mu_{k}}-J^*}_\infty &\leq \alpha^{(H-1)k} \norm{J^{\mu_0}-J^*}_\infty + \frac{ c'_{m, H}}{(1-\alpha^{H-1})(1-\alpha)},
\end{align} where $c'_{m, H} := 2\alpha^H \Bigg(\frac{\frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha} \delta_1 + \delta_2 + \delta_1 \eps' }{1-\alpha^{m+H-1}\delta_1} + \frac{1}{1-\alpha}+\norm{J_0}_\infty\Bigg)+\eps$
and
$
\delta_2 := \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty.
$
\end{theorem}
The proof of Theorem \ref{mainAPI} can be found in Appendix \ref{appendix:thm1}. We now make several comments about the implications of Theorem \ref{mainAPI}:
\begin{enumerate}
\item The constant terms in Theorem \ref{mainAPI} can be interpreted as follows: $\eps$ represents the tree search (lookahead) error, $\eps'$ represents the policy evaluation error, $\delta_2$ represents the function approximation error, and $\delta_1$ is a parameter determined by the feature vectors and the sets $\scriptD_k$.
\item Using this notation, we can see that, apart from the constant term $ \frac{ c'_{m, H}}{(1-\alpha^{(H-1)})(1-\alpha)},$ the error $\norm{J^{\mu_k}-J^*}_\infty$ decays geometrically, with contraction factor $\alpha^{H-1}$ per iteration.
\item We can show that in the limit, our bounds can be further tightened and the following Proposition holds:
\begin{proposition}
\begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_k}-J^*}_\infty \leq \frac{c_{m, H}}{(1-\alpha)(1-\alpha^{H-1})},
\end{align*}
where
\begin{align*}
c_{m, H} &:= 2\alpha^H \Big( \frac{\frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_1 + \delta_2+ \delta_1 \eps'}{1-\alpha^{m+H-1} \delta_1}\Big)+\eps.
\end{align*}
\end{proposition}
The above bound consists of two terms: the first term of Theorem \ref{mainAPI} vanishes as $k \to \infty$, and what remains is the asymptotic error above. Note also that $c_{m, H}$ is a function of the function approximation error $\delta_2$, the feature-vector parameter $\delta_1$, and the evaluation error $\eps'$, but the effect of all of these diminishes as $m+H$ grows; only the lookahead error $\eps$ cannot be reduced in this way. Finite-time bounds for the gradient descent algorithm of Section \ref{SectionGD} can be extracted analogously from the proof of Theorem \ref{theorem3}. A numerical illustration of the lookahead condition in Theorem \ref{mainAPI} is given immediately after this list.
\end{enumerate}
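As an illustration of the condition of Theorem \ref{mainAPI}, the snippet below (reusing \texttt{Phi}, \texttt{D}, and \texttt{alpha} from the earlier sketches, and assuming for simplicity that $\scriptD_k$ is the same set at every iteration, so that the supremum over $k$ reduces to a single term) computes $\delta_1$ and the smallest $m+H-1$ satisfying the sufficient condition.
\begin{verbatim}
def delta1(Phi, D):
    # inf-norm of M = Phi (Phi_D^T Phi_D)^{-1} Phi_D^T P_k; the
    # selector P_k only inserts zero columns, so it can be dropped.
    PhiD = Phi[D]
    M = Phi @ np.linalg.solve(PhiD.T @ PhiD, PhiD.T)
    return np.abs(M).sum(axis=1).max()

d1 = delta1(Phi, D)
# Theorem 1 requires alpha^(m+H-1) * d1 < 1; the condition is
# vacuous when d1 <= 1.
if d1 > 1:
    min_mH1 = int(np.floor(np.log(d1) / np.log(1 / alpha))) + 1
    print("need m + H - 1 >=", min_mH1)
\end{verbatim}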
The proof of Theorem \ref{mainAPI} is closely related to the proof of Theorem \ref{theorem3}. The proof of Theorem~\ref{theorem3} is presented in the next section while we defer the proof of Theorem \ref{mainAPI} to Appendix \ref{appendix:thm1}. It should be noted that, while the bounds given in Theorem \ref{mainAPI} and Theorem \ref{theorem3} are asymptotic, finite-time bounds can be obtained as a by-product of our proofs. We note that the above result is fundamentally different from the conclusion from Theorem 3 in \cite{efroni2019combine} where one requires a condition on $m$ and $H$ for convergence when one uses $J_k$ instead of using $T^{H-1}J_k$ in Step 2 of the algorithm. Here, we have shown that even when one uses $T^{H-1}J_k,$ one may need large $m+H$ for convergence due to the use of function approximation.
We additionally characterize the approximation error of our iterates, $J_k$, by computing bounds on the asymptotic error $\limsup_{k \to \infty} \norm{J_k - J^*}_\infty.$ The bounds along with their derivations can be found in Appendix \ref{appendix:prop1}. The corresponding finite-time bounds can be easily obtained from the proof of Proposition \ref{IterAPITheorem} in Appendix \ref{appendix:prop1}.
It is important to note that the upper bounds on $\norm{J^{\mu_k}-J^*}_\infty$ and $\norm{J_k - J^*}_\infty$ illustrate that the convergence rate of $\norm{J^{\mu_k}-J^*}_\infty$ is much faster than the convergence rate of $\norm{J_k - J^*}_\infty$. Thus, algorithms need not wait for the value function estimates to converge before the corresponding greedy policy reaches near optimality.
In \cite{bertsekas2021lessons}, it is noted that, in reinforcement learning to play computer games or board games, it is not uncommon during training to get a relatively crude estimate of the value function, which is improved by lookahead and $m$-step return during actual game play. Our analysis would also apply to this situation -- we have not explicitly differentiated between training and game play in our analysis.
Theorem \ref{mainAPI} can be used to make the following observation: how close $J^{\mu_k}$ is to $J^*$ depends on four factors -- the representation power of the feature vectors and the feature vectors themselves ($\delta_2, \delta_1$), the amount of lookahead ($H$), the extent of the return ($m$) and the approximation in the policy determination and policy evaluation steps ($\eps$ and $\eps'$). Further, by examining $c_{m, H}$ one can see that lookahead and return help mitigate the effect of feature vectors and their ability to represent the value functions. We note that \cite{efroni2019combine} also consider a version of the algorithm where in Step~\ref{step 3 alg}, $T_{\mu_k}^m T^{H-1}J_k$ is replaced by $T_{\mu_k}^m J_k.$ We obtain a performance bound for this algorithm with function approximation in Appendix~\ref{appendix:thm2} under the assumption that $m$ is large.
\section{Gradient Descent Algorithm} \label{SectionGD}
Solving the least-squares problem in Algorithm~\ref{alg:LSalg} involves a matrix inversion, which can be computationally difficult if the dimension of the feature vectors is large. So we propose an alternative algorithm which performs one step of gradient descent at each iteration, where the gradient refers to the gradient of the least-squares objective in (\ref{step 4 alg}). The gradient descent-based algorithm is presented in Algorithm~\ref{alg:GDalg}.
\begin{algorithm}[tb]
\caption{Gradient Descent Algorithm}
\label{alg:GDalg}
\textbf{Input}: $\theta_0, m,$ $H,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d,$ and $\scriptD_k,$ which is the set of states for which we evaluate the current policy at iteration $k.$\\
\begin{algorithmic}[1]
\STATE $k=0, J_0 = \Phi \theta_0$. \\
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps$. \\
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} J_k(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\
\STATE
\begin{align}
\theta_{k+1} &= \theta_k - \gamma \nabla_\theta c(\theta;\hat{J}^{\mu_{k+1}}) \label{eq:iterthetaGD}
\end{align} where
\begin{align*}
c(\theta;\hat{J}^{\mu_{k+1}}) := \frac{1}{2}\sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2.
\end{align*}
\STATE $J_{k+1} = \Phi \theta_{k+1}.$
\STATE Set $k \leftarrow k+1.$ Go to 2.
\end{algorithmic}
\end{algorithm}
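Continuing the earlier sketches, one iteration of Algorithm \ref{alg:GDalg} simply replaces the least-squares solve by a single gradient step; the step size \texttt{gamma} is an assumed input, and we again take exact, noiseless evaluations.
\begin{verbatim}
def gd_iteration(theta, m, H, gamma):
    # One pass of Algorithm 2 with eps = 0 and w = 0.
    J = Phi @ theta
    mu, JH = lookahead_policy(J, H)
    Jhat = JH.copy()
    for _ in range(m):
        Jhat = T_mu(Jhat, mu)
    PhiD = Phi[D]
    grad = PhiD.T @ (PhiD @ theta - Jhat[D])  # gradient of c(theta)
    return theta - gamma * grad, mu
\end{verbatim}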
In order to present our main result for the gradient descent version of our algorithm, we define $\theta^{\mu_k}$ for any policy $\mu_k$ as follows:
\begin{align}\label{eq: theta_mu_k}
\theta^{\mu_k} &:= \arg\min_\theta \frac{1}{2} \norm{\Phi_{\scriptD_k} \theta - \scriptP_{k}J^{\mu_{k}}}_2^2.
\end{align}
In other words, $\theta^{\mu_k}$ represents the function approximation of $J^{\mu_k}.$ We also define another quantity $\tilde{\theta}^{\mu_k}$ which will be used in the proof of the theorem:
\begin{align*}
\tilde{\theta}^{\mu_k} &:= \arg\min_\theta \frac{1}{2}\norm{\Phi_{\scriptD_k} \theta - \scriptP_{k}(T_{\mu_{k}}^m T^{H-1} J_{k-1}+w_k)}_2^2
\\&=\arg\min_\theta \frac{1}{2}\norm{\Phi_{\scriptD_k} \theta - \scriptP_{k}(T_{\mu_{k}}^m T^{H-1} \Phi \theta_{k-1}+w_k)}_2^2.
\end{align*}
Thus, $\tilde{\theta}^{\mu_k}$ represents the function approximation of the estimate of $J^{\mu_k}$ obtained from the $m$-step rollout.
We now present our main result:
\begin{theorem}\label{theorem3}
Suppose that Assumptions \ref{assume 1}-\ref{assume 3} hold and further, $\gamma, m,$ and $H$ satisfy $$\beta := \alpha' + (1+\alpha') \alpha^{m+H-1} \delta_1' <1,$$
where
$\alpha' :=\sup_k \norm{I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_2$ and $\delta_1' := \sup_k \norm{\Phi}_\infty \norm{\Big( \Phi_{\scriptD_k}^\top
\Phi_{\scriptD_k}\Big)^{-1} \Phi_{\scriptD_k}^\top \scriptP_{k}}_\infty.$ Then
\begin{align}
\limsup_{k \to \infty}\norm{J^{\mu_{k}} - J^*}_\infty \leq \frac{2\alpha^H(\norm{\Phi}_\infty \frac{\tau}{1-\beta} + \delta_2') + \eps}{(1-\alpha^{H-1})(1-\alpha)},
\end{align}
where
$$
\tau := \alpha' C +(1+\alpha')\Big[\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Big( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Big)+\eps'\Big]
$$ and
$C := \frac{\sigma_{\min, \Phi}}{1-\alpha}+2\sigma_{\min, \Phi} \delta'_2,$ where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi$ and $\delta_2' := \sup_{k, \mu_k}\norm{\Phi \theta^{\mu_k} - J^{\mu_k}}_\infty$ is the function approximation error.
\end{theorem}
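The quantity $\alpha'$ can be computed directly from the feature vectors; the sketch below assumes, as before, a single fixed evaluation set so that the supremum over $k$ is one term.
\begin{verbatim}
def alpha_prime(Phi, D, gamma):
    # ||I - gamma Phi_D^T Phi_D||_2 via the eigenvalues of the
    # symmetric matrix Phi_D^T Phi_D (positive definite under
    # Assumption 1).
    lam = np.linalg.eigvalsh(Phi[D].T @ Phi[D])
    return max(abs(1 - gamma * lam.min()),
               abs(1 - gamma * lam.max()))
\end{verbatim}
In particular, $\alpha' < 1$ for any $0 < \gamma < 2/\lambda_{\max}(\Phi_{\scriptD}^\top\Phi_{\scriptD})$, and the classical choice $\gamma = 2/(\lambda_{\min} + \lambda_{\max})$ minimizes $\alpha'$.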
\proof{Proof of Theorem \ref{theorem3}}
The proof of Theorem \ref{theorem3} is somewhat involved, so we break it up into steps.
\noindent\textit{Step 1:}
In this step, since $\theta_{k+1}$ is obtained by taking a step of gradient descent towards $\tilde{\theta}^{\mu_{k+1}}$ beginning from $\theta_k$, we show that the following holds:
\begin{align*}
\norm{\theta_{k+1} - \tilde{\theta}^{\mu_{k+1}}}_\infty \leq \alpha' \norm{\theta_k - \tilde{\theta}^{\mu_{k+1}}}_\infty.
\end{align*}
\textit{Proof of Step 1:} Recall that the iterates in Equation \eqref{eq:iterthetaGD} can be written as follows:
\begin{align*}
\theta_{k+1} &= \theta_k - \gamma \nabla_\theta c(\theta;\hat{J}^{\mu_{k+1}}) = \theta_k - \gamma \Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k} \theta_k - \Phi_{\scriptD_k}^\top \scriptP_{k}(T_{\mu_{k+1}}^m T^{H-1}J_k+w_{k+1})\Big).
\end{align*}
Since
\begin{align*}0 &= \nabla_\theta c(\theta;\hat{J}^{\mu_{k+1}})|_{\tilde{\theta}^{\mu_{k+1}}}= \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k} \tilde{\theta}^{\mu_{k+1}} - \Phi_{\scriptD_k}^\top \scriptP_{k}(T_{\mu_{k+1}}^m T^{H-1}J_k+w_{k+1}),
\end{align*}
we have the following:
\begin{equation*}
\begin{array}{lll}
\theta_{k+1} &=& \theta_k - \gamma \Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k} \theta_k - \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k} \tilde{\theta}^{\mu_{k+1}} - \Phi_{\scriptD_k}^\top \scriptP_{k}(T_{\mu_{k+1}}^m T^{H-1}J_k+w_{k+1}) + \Phi_{\scriptD_k}^\top \scriptP_{k}(T_{\mu_{k+1}}^m T^{H-1}J_k+w_{k+1})\Big) \\
&=& \theta_k - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k} (\theta_k - \tilde{\theta}^{\mu_{k+1}}).
\end{array}
\end{equation*}
Subtracting $\tilde{\theta}^{\mu_{k+1}}$ from both sides gives:
\begin{align*}
\theta_{k+1} - \tilde{\theta}^{\mu_{k+1}} &= \theta_k - \tilde{\theta}^{\mu_{k+1}} - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k} (\theta_k - \tilde{\theta}^{\mu_{k+1}})= (I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}) (\theta_k - \tilde{\theta}^{\mu_{k+1}}).
\end{align*}
Thus,
\begin{align*}
\nonumber\norm{\theta_{k+1} - \tilde{\theta}^{\mu_{k+1}}}_\infty \nonumber &= \norm{(I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}) (\theta_k - \tilde{\theta}^{\mu_{k+1}})}_\infty \leq \norm{I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty \norm{\theta_k - \tilde{\theta}^{\mu_{k+1}}}_\infty \\ \nonumber
&\leq \norm{I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_2 \norm{\theta_k - \tilde{\theta}^{\mu_{k+1}}}_\infty\\& \leq \underbrace{\sup_k\norm{I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_2}_{=: \alpha'} \norm{\theta_k - \tilde{\theta}^{\mu_{k+1}}}_\infty.
\end{align*}
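Step 1 is stated in the sup norm; in the Euclidean norm the same one-step contraction with factor $\alpha'$ is immediate from the identity $\theta_{k+1} - \tilde{\theta}^{\mu_{k+1}} = (I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})(\theta_k - \tilde{\theta}^{\mu_{k+1}})$, and the following snippet (random data, reusing the setup and \texttt{alpha\_prime} from above) verifies it numerically.
\begin{verbatim}
gamma = 1.0 / np.linalg.eigvalsh(Phi[D].T @ Phi[D]).max()
target = rng.random(len(D))      # stands in for the rollout values
theta = rng.random(Phi.shape[1])
theta_tilde, *_ = np.linalg.lstsq(Phi[D], target, rcond=None)
theta_next = theta - gamma * Phi[D].T @ (Phi[D] @ theta - target)
ap = alpha_prime(Phi, D, gamma)
assert np.linalg.norm(theta_next - theta_tilde) \
       <= ap * np.linalg.norm(theta - theta_tilde) + 1e-10
\end{verbatim}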
\noindent\textit{Step 2:} Using the previous step in conjunction with contraction properties of the Bellman operators, we will show that the following recursion holds:
\begin{align*}
\norm{\theta_k - \theta^{\mu_k}}_\infty
&\leq \Big( \alpha' + (1+\alpha') \alpha^{m+H-1} \delta_1' \Big) \norm{\theta_{k-1} - \theta^{\mu_{k-1}}}_\infty + \alpha' C \\& + (1+\alpha')\Big[\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Bigg( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Bigg)+\eps'\Big],
\end{align*}
for a constant $C$ defined below.
We will then iterate over $k$ to get the following:
\begin{align}
\norm{\theta_k - \theta^{\mu_k}}_\infty
\leq\beta^k \norm{\theta_{0} - \theta^{\mu_{0}}}_\infty + \frac{\tau}{1-\beta},
\end{align}
for some $\tau$ and $\beta$ defined below.
\textit{Proof of Step 2:} We first note the following:
\begin{align}
\nonumber \norm{\theta_k - \theta^{\mu_k}}_\infty &\leq \norm{\theta_k - \tilde{\theta}^{\mu_k}}_\infty + \norm{\tilde{\theta}^{\mu_k} - \theta^{\mu_k}}_\infty \nonumber
\leq \alpha' \norm{\theta_{k-1} - \theta^{\mu_k}}_\infty + \alpha'\norm{ \tilde{\theta}^{\mu_k} - \theta^{\mu_k}}_\infty \nonumber + \norm{\tilde{\theta}^{\mu_k} - \theta^{\mu_k}}_\infty \nonumber
\\&\leq \alpha' \norm{\theta_{k-1} - \theta^{\mu_{k-1}}}_\infty + \alpha' \norm{\theta^{\mu_k} - \theta^{\mu_{k-1}}}_\infty
+ (1+\alpha')\norm{\tilde{\theta}^{\mu_k} - \theta^{\mu_k}}_\infty.\label{finthetakthetamuk}
\end{align}
In order to further bound (\ref{finthetakthetamuk}), we introduce the following lemmas:
\begin{lemma}\label{cthetalemma}
$ \norm{ \theta^{\mu_{k-1}} - \theta^{\mu_{k}}}_\infty \leq \underbrace{\frac{\sigma_{\min, \Phi}}{1-\alpha}+2\sigma_{\min, \Phi} \delta'_2}_{=: C},$
where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi.$ \end{lemma}
\begin{lemma}\label{lemmathetatilde}
\begin{align*}
\norm{\theta^{\mu_k} - \tilde{\theta}^{\mu_k}}_\infty &\leq \frac{\delta_1' \eps'}{\norm{\Phi}_\infty}+\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Bigg( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Bigg) + \alpha^{m+H-1} \delta_1' \norm{ \theta^{\mu_{k-1}} - \theta_{k-1} }_\infty.
\end{align*}
\end{lemma}
The proofs of these lemmas can be found in Appendix \ref{appendix:lem2} and Appendix \ref{appendix:lem1}.
Using Lemmas \ref{cthetalemma} and \ref{lemmathetatilde} in (\ref{finthetakthetamuk}), we have:
\begin{align}
\begin{split}\nonumber
\norm{\theta_k - \theta^{\mu_k}}_\infty \leq{}& \alpha' \norm{\theta_{k-1} - \theta^{\mu_{k-1}}}_\infty + \alpha' C\\
& + (1+\alpha') \Bigg[\frac{\delta_1' \eps'}{\norm{\Phi}_\infty}\nonumber+ \alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Bigg( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Bigg) \\&+ \alpha^{m+H-1} \delta_1' \norm{ \theta^{\mu_{k-1}} - \theta_{k-1} }_\infty \Bigg]
\end{split}\\
\begin{split}\label{eq:boundb4labels}
\leq{}& \Big( \alpha' + (1+\alpha') \alpha^{m+H-1} \delta_1' \Big) \norm{\theta_{k-1} - \theta^{\mu_{k-1}}}_\infty + \alpha' C\\
& + (1+\alpha')\Big[\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Big( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Big)+\eps'\Big].
\end{split}
\end{align}
Now suppose that we define $\beta \in (0, 1)$ as follows:
\begin{align}
\beta := \alpha' + (1+\alpha') \alpha^{m+H-1} \delta_1' , \label{eq:defbetaGD}
\end{align} where we note from our assumption in Theorem \ref{theorem3} that $\alpha' + (1+\alpha') \alpha^{m+H-1} \delta_1' <1.$
Furthermore, we denote $\tau$ as follows:
\begin{align}
\tau := &\alpha' C +(1+\alpha')\Big[\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Big( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Big)+\eps'\Big]. \label{eq:defdelta3GD}
\end{align}
Then, our bound in the inequality in \eqref{eq:boundb4labels} can be rewritten as the following:
\begin{align}
\nonumber &\norm{\theta_k - \theta^{\mu_k}}_\infty\nonumber \leq \beta \norm{\theta_{k-1} - \theta^{\mu_{k-1}}}_\infty + \tau.
\end{align}
Iterating over $k$, we get that for any $k$,
\begin{align}
\norm{\theta_k - \theta^{\mu_k}}_\infty \leq \beta^k \norm{\theta_{0} - \theta^{\mu_{0}}}_\infty + \sum_{i=0}^{k-1} \beta^i \tau \label{eq:firstfinite}
\leq\beta^k \norm{\theta_{0} - \theta^{\mu_{0}}}_\infty + \frac{\tau}{1-\beta}.
\end{align}
\noindent\textit{Step 3:}
Since $J_k = \Phi \theta_k$ and $\Phi \theta^{\mu_k}$ is the best approximation of $J^{\mu_k}$ in the span of the feature vectors, we will show the following bound:
\begin{align}
\nonumber \norm{J^{\mu_k}-J_k}_\infty
&\leq \norm{\Phi}_\infty \norm{\theta_k - \theta^{\mu_k}}_\infty + \delta_2'.
\end{align} Then, using the previous step, we will get a bound on $ \norm{J^{\mu_k}-J_k}_\infty$.
\textit{Proof of Step 3:} %
\begin{align}
\nonumber \norm{J^{\mu_k}-J_k}_\infty \nonumber &= \norm{\Phi \theta_k - J^{\mu_k}}_\infty
\leq \norm{\Phi \theta_k - \Phi \theta^{\mu_k}}_\infty + \norm{\Phi \theta^{\mu_k} - J^{\mu_k}}_\infty \\
&\leq \norm{\Phi}_\infty \norm{\theta_k - \theta^{\mu_k}}_\infty + \delta_2'.
\label{eq:finjmukjk}
\end{align}
Using \eqref{eq:finjmukjk}, we will obtain a bound on $\norm{J^{\mu_k}-J_k}_\infty$ as follows:
\begin{align}
\nonumber \norm{J^{\mu_k}-J_k}_\infty \nonumber
&\leq \norm{\Phi}_\infty \norm{\theta_k - \theta^{\mu_k}}_\infty + \delta_2' \\
&\leq \norm{\Phi}_\infty (\beta^k \norm{\theta_{0} - \theta^{\mu_{0}}}_\infty + \frac{\tau}{1-\beta}) + \delta_2' \nonumber\\
&= \beta^k \underbrace{\norm{\Phi}_\infty \norm{\theta_{0} - \theta^{\mu_{0}}}_\infty}_{=: \tau_2} +\underbrace{\norm{\Phi}_\infty \frac{\tau}{1-\beta} + \delta_2'}_{=: \tau_1}. \label{eq: tau1 tau2 or}
\end{align}
\noindent\textit{Step 4:}
We will establish the following bound on $T_{\mu_{k+1}}T^{H-1}J^{\mu_k}$ using the contraction property of the Bellman operators and the properties in \eqref{eq:usefulproperties}:
\begin{align*}
T_{\mu_{k+1}}T^{H-1}J^{\mu_k}
&\geq T^{H-1}J^{\mu_k} - 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - \eps e. \nonumber
\end{align*}
Using the properties in \eqref{eq:usefulproperties} and monotonicity, we will repeatedly apply $T_{\mu_{k+1}}$ to both sides and take limits to obtain the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H-1}J^{\mu_k} \geq -\frac{2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps }{1-\alpha}e.
\end{align*}
\textit{Proof of Step 4:} We begin by noting that
\begin{align}
&-T_{\mu_{k+1}}T^{H-1}J^{\mu_k} = -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k \nonumber+ T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps e \nonumber\\
&= \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + T^H J^{\mu_k}\nonumber - T^H J^{\mu_k} + \eps e \\&\leq 2 \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J^{\mu_k} + \eps e \nonumber\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H-1}J^{\mu_k} + \eps e \label{eq:defcdmhge}\\& \leq - T^{H-1}J^{\mu_k} + 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e + \eps e. \end{align}
The rest of the proof is similar to the arguments in \cite{bertsekastsitsiklis}, \cite{bertsekas2019reinforcement}; however, we have to explicitly incorporate the role of lookahead ($H$) in the remaining steps of the proof. Suppose that we apply the $T_{\mu_{k+1}}$ operator $\ell-1$ times to both sides. Then, due to monotonicity and the fact $T_\mu(J+ce)=T_\mu(J)+\alpha ce,$ for any policy $\mu,$ we have the following:
\begin{align*}
T_{\mu_{k+1}}^\ell T^{H-1}J^{\mu_k} \leq \alpha^{\ell } (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps )e + T_{\mu_{k+1}}^{\ell+1} T^{H-1} J^{\mu_k}.
\end{align*}
Using a telescoping sum, we get the following inequality:
\begin{align*}
T_{\mu_{k+1}}^j T^{H-1} J^{\mu_k} - T^{H-1}J^{\mu_k}
&\geq - \sum_{\ell = 1}^{j} \alpha^{\ell -1} (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps )e.
\end{align*}
The rest of the proof is straightforward.
Taking the limit as $j \to \infty$, subtracting the result from $J^*$, and using the contraction property of $T$, we get
\begin{align*}
&\alpha^{H-1} \norm{J^* - J^{\mu_k}}_\infty e + \frac{2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps}{1-\alpha} e \\&\geq J^* - T^{H-1}J^{\mu_k} + \frac{2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps}{1-\alpha} e
\\& \geq J^* - J^{\mu_{k+1}}\\
&\geq 0,
\end{align*}
which implies that
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H-1} \norm{J^{\mu_k}-J^*}_\infty + \frac{2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps}{1-\alpha}, \label{eq: ** 4}
\end{align} since $J^* \geq J^{\mu}$ for all policies $\mu$.
We now plug in for $\norm{J^{\mu_k}-J_k}_\infty$ in \eqref{eq: tau1 tau2 or} to get the following:
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty \nonumber&\leq \alpha^{H-1} \norm{J^{\mu_k}-J^*}_\infty + \frac{2\alpha^H (\beta^k \tau_2 + \tau_1) + \eps}{1-\alpha} \nonumber\\\nonumber
&= \alpha^{H-1} \norm{J^{\mu_k}-J^*}_\infty + \beta^k\frac{2\alpha^H \tau_2 }{1-\alpha} + \frac{2\alpha^H\tau_1 + \eps}{1-\alpha}.
\end{align}
We now iterate over $k > 0$ to get the following:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty &\leq \alpha^{k(H-1)} \norm{J^{\mu_0}-J^*}_\infty + \sum_{\ell = 0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \Big[\beta^\ell\frac{2\alpha^H \tau_2 }{1-\alpha} + \frac{2\alpha^H\tau_1 + \eps}{1-\alpha}\Big].\label{eq: fin bound jmuk jstar or}
\end{align}
We take limits as $k \to \infty$ to obtain asymptotic limits on $\norm{J^{\mu_{k}} - J^*}_\infty.$ To do so, we introduce the following lemma:
\begin{lemma} \label{conv lemma 1 or}
$\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta^\ell \to 0$ as $k\to \infty$.
\end{lemma}
\proof{Proof of Lemma \ref{conv lemma 1 or}}
First, notice that $\alpha^{H-1}<1$ and $\beta<1$ from our assumption in Theorem \ref{theorem3}.
Now suppose that $\alpha^{H-1} \leq \beta.$ Then, we get that
\begin{align*}
\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta^\ell &\leq \sum_{\ell=0}^{k-1} \beta^{k-\ell-1} \beta^\ell \\
&\leq \sum_{\ell=0}^{k-1} \beta^{k-1}\\
&= k\beta^{k-1}.
\end{align*} Taking limits as $k \to \infty$ on both sides, we have that $\lim_{k\to \infty}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta^\ell \to 0.$
Now suppose that $\beta \leq \alpha^{H-1}.$ Then,
\begin{align*}
\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta^\ell &\leq \sum_{\ell=0}^{k-1} \alpha^{(H-1)(k-\ell-1)} \alpha^{(H-1)\ell} \\
&\leq \sum_{\ell=0}^{k-1} \alpha^{k-1}\\
&= k\alpha^{k-1}.
\end{align*}
Taking limits as $k \to \infty$ on both sides, we get that $\lim_{k\to \infty}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta^\ell \to 0.$
\Halmos
\endproof
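A quick numerical illustration of Lemma \ref{conv lemma 1 or}, with arbitrary values standing in for $\alpha^{H-1}$ and $\beta$:
\begin{verbatim}
a, b = 0.9 ** 2, 0.8            # alpha^{H-1} and beta, say
for k in (10, 100, 1000):
    s = sum(a ** (k - l - 1) * b ** l for l in range(k))
    print(k, s)  # decays like k * max(a, b)^(k-1), as in the proof
\end{verbatim}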
Taking limits on both sides of \eqref{eq: fin bound jmuk jstar or} and using Lemma \ref{conv lemma 1 or}, we have the following:
\begin{align}
\limsup_{k \to \infty}\norm{J^{\mu_{k}} - J^*}_\infty \leq \frac{2\alpha^H\tau_1 + \eps}{(1-\alpha^{H-1})(1-\alpha)}.
\end{align}
Plugging in for $\tau_1$ in the above bound gives the following:
\begin{align}
\limsup_{k \to \infty}\norm{J^{\mu_{k}} - J^*}_\infty \leq \frac{2\alpha^H(\norm{\Phi}_\infty \frac{\tau}{1-\beta} + \delta_2') + \eps}{(1-\alpha^{H-1})(1-\alpha)},
\end{align}
where
$$ \beta := \alpha' + (1+\alpha') \alpha^{m+H-1} \delta_1'$$ and
$$
\tau := \alpha' C +(1+\alpha')\Big[\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Big( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Big)+\eps'\Big],
$$
and we get Theorem \ref{theorem3}.
\Halmos \endproof
\section{Related Work}
In the introduction, we compared our results to those in \cite{efroni2019combine}, which builds on a lot of ideas in \cite{efroni2018}. Now, we compare our work to other papers in the literature. The role of lookahead and return in improving the performance of RL algorithms has also been studied in a large number of papers including \cite{moerland2020framework, efroni2020online, tomar2020multistep, efroni2018multiplestep, springenberg2020local, 9407870}. The works of \cite{baxter, veness, lanctot2014monte} explore the role of tree search in RL algorithms. However, to the best of our knowledge, the amount of lookahead and return needed as a function of the feature vectors has not been quantified in prior works.
Approximate policy iteration is a well studied topic, see \cite{bertsekastsitsiklis, bertsekas2019reinforcement, Puterman1978ModifiedPI, scherrer}, for example. However, since their models do not involve a scheme for approximating the value function at each iteration, the role of the depth of lookahead ($H$) and return ($m$) cannot be quantified using their approaches.
It is important to note that our work is different from least squares policy iteration (LSPI), which is a common means of obtaining function approximation parameters from value function estimates. While one of our algorithms uses least-squares estimates, our work more importantly analyzes the use of lookahead and $m$-step return, which are employed prior to the least-squares step of the algorithm.
The works of \cite{Bertsekas2011ApproximatePI} and \cite{bertsekas2019reinforcement} also study a variant of policy iteration wherein a greedy policy is evaluated approximately using feature vectors at each iteration. These papers also provide rates of convergence as well as a bound on the approximation error. However, our main goal is to understand the relations between function approximation and lookahead/return which are not considered in these other works.
\section{Conclusion}
We have shown that a minimum threshold on the lookahead and the corresponding policy return may be necessary for approximate policy iteration with function approximation to converge. In particular, if one uses an $m$-step return of an $H$-step lookahead policy, then $m+H$ must be sufficiently large. Possible directions for future work include the following:
\begin{itemize}
\item For problems with a terminal state, it would be interesting to consider cases where the value function of a given policy is estimated using a full rollout which provides an unbiased estimate as in \cite{tsitsiklis2002convergence}.
\item In game playing applications, gradient descent is commonly used to estimate the value function, but temporal-difference learning is used in other applications. It would be interesting to extend our results to the case of TD learning-based policy evaluation.
\item While neural networks are not linear function approximators, recent results on the NTK analysis of neural networks suggest that they can be approximated as linear combinations of basis functions \cite{jacot2018neural,du2018gradient,arora2019fine,ji2019polylogarithmic, cao2019generalization}. Thus, to the extent that the NTK approximation is reasonable, our results can potentially shed light on why the combination of the representation capability of neural networks and tree-search methods work well in practice, although further work is necessary to make this connection precise.
\end{itemize}
\begin{APPENDICES}
\section{Proof of Theorem \ref{mainAPI}}
\label{appendix:thm1}
\begin{remark}
The proof of Theorem \ref{mainAPI} is somewhat simpler than that of Theorem \ref{theorem3} but uses many of the same ideas.
The proof of Theorem \ref{mainAPI} skips Steps 1 and 2 in the proof of Theorem \ref{theorem3} and instead uses techniques in Lemma \ref{mainAPILemma} below to obtain a result analogous to Step 3. The remaining steps of the proof of Theorem \ref{theorem3} also apply to the proof of Theorem \ref{mainAPI}, except with $p_{m,H}+\eps''$ (with $p_{m, H}$ defined in the proof of Theorem \ref{mainAPI}) in place of $p_{m, H, \gamma, \eps''}$ as defined in Step 3 of the proof of Theorem \ref{theorem3}.
\end{remark}
\proof{Proof of Theorem \ref{mainAPI}}
Consider policies $\mu_k$ and $\mu_{k+1},$ where $\mu_0$ can be taken to be any arbitrary policy. We have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H-1}J^{\mu_k} &= -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\&+ T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps e \nonumber\\
&= \alpha^H \norm{J^{\mu_k}-J_k}_\infty e \\\nonumber&- T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps e \nonumber\\
&\overset{(b)}\leq 2 \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J^{\mu_k} + \eps e \nonumber\\
&\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H-1}J^{\mu_k} + \eps e, \label{eq: *}
\end{align}
where $e$ is the vector of all $1$s, (a) and (b) follow from the contraction property of $T^H$ and $T_{\mu_{k+1}}$ and the last inequality follows from standard arguments using the monotonicity properties and the definition of $T$: specifically, note that
$$
TJ^\mu = \max_{\mu'} T_{\mu'} J^\mu \geq T_\mu J^\mu = J^{\mu},
$$
and repeatedly apply $T$ to both sides of the inequality and use monotonicity to obtain
$T^\ell J^\mu \geq T^{\ell-1} J^\mu$ for all $\ell$ and all policies $\mu.$
We can further bound $\norm{J^{\mu_k}-J_k}_\infty$ as follows:
\begin{lemma}
\begin{align*}
\norm{J^{\mu_k}-J_k}_\infty
&\leq \alpha^{m+H-1} \delta_1 \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_1 + \delta_2+ \delta_1 \eps'.
\end{align*} \label{mainAPILemma}
\end{lemma}
\proof{Proof of Lemma \ref{mainAPILemma}}
\begin{align*}
\norm{J^{\mu_k}-J_k}_\infty &= \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty \\
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
&\leq\norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k}_\infty \norm{w_k}_\infty \\ \allowdisplaybreaks
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \delta_1 \eps' \\ \allowdisplaybreaks
&= \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1} - \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_1 \eps'\\
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_1 \eps'\\
&\leq \norm{\scriptM_k}_\infty \norm{ T_{\mu_k}^m T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_1 \eps'\\
&\leq \alpha^m \norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_1 \eps'\\
&\leq \alpha^m\norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^* + J^* - J^{\mu_k}}_\infty +\delta_2 + \delta_1 \eps'\\
&\leq \alpha^m\norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^*}_\infty + \alpha^m\norm{\scriptM_k}_\infty \norm{J^* - J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps' \\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} - J^*}_\infty + \frac{\alpha^m}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_2 + \delta_1 \eps'\\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} -J^{\mu_{k-1}} + J^{\mu_{k-1}}- J^*}_\infty + \frac{\alpha^m}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_2 + \delta_1 \eps'\\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_2 + \delta_1 \eps'\\
&\leq \alpha^{m+H-1} \delta_1 \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_1 + \delta_2+ \delta_1 \eps'.
\end{align*} \Halmos
\endproof
Now suppose that we define $\beta' \in (0, 1)$ as follows:
\begin{align}
\beta' := \alpha^{m+H-1} \delta_1, \label{eq:defbeta}
\end{align} where we note from our assumption in Theorem \ref{mainAPI} that $\alpha^{m+H-1} \delta_1<1.$ Furthermore, we denote $\tau'$ as follows:
\begin{align}
\tau' := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_1 + \delta_2+ \delta_1 \eps'. \label{eq:defdelta3}
\end{align}
Then, our bound in Lemma \ref{mainAPILemma} can be rewritten as the following:
\begin{align*}
\norm{J^{\mu_k}-J_k}_\infty
&\leq \beta' \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \tau'.
\end{align*}
Iterating over $k$, we get that for any $k$,
\begin{align}
\norm{J^{\mu_k}-J_k}_\infty \leq \underbrace{(\beta')^k \norm{J^{\mu_0}-J_0}_\infty + \sum_{i=0}^{k-1} \beta'^i \tau'}_{=: f_k}. \label{eq: pound2}
\end{align}
Putting (\ref{eq: *}) and (\ref{eq: pound2}) together, we have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H-1}J^{\mu_k}
&\leq 2\alpha^H f_k e - T^{H-1}J^{\mu_k} + \eps e.
\end{align}
The rest of the proof is similar to the arguments in \cite{bertsekastsitsiklis}, \cite{bertsekas2019reinforcement}; however, we have to explicitly incorporate the role of lookahead ($H$) in the remaining steps of the proof.
Suppose that we apply the $T_{\mu_{k+1}}$ operator $\ell-1$ times. Then, due to monotonicity and the fact that $T_\mu(J+ce)=T_\mu(J)+\alpha ce,$ for any policy $\mu,$ we have the following:
\begin{align*}
-T_{\mu_{k+1}}^{\ell+1} T^{H-1}J^{\mu_k}
&\leq \alpha^\ell (2\alpha^{H} f_k +\eps)e - T_{\mu_{k+1}}^{\ell}T^{H-1}J^{\mu_k}.
\end{align*}
Using a telescoping sum, we get the following inequality:
\begin{align*}
T_{\mu_{k+1}}^j T^{H-1} J^{\mu_k} - T^{H-1}J^{\mu_k}
&\geq -\sum_{\ell = 1}^{j} \alpha^{\ell-1} (2\alpha^{H} f_k +\eps)e.
\end{align*}
Taking the limit as $j\rightarrow\infty$ on both sides, we have the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H-1}J^{\mu_k} \geq -\frac{2\alpha^{H} f_k +\eps}{1-\alpha}e.
\end{align*}
Rearranging terms and subtracting $J^*$ from both sides, we get the following:
\begin{align*}
\alpha^{H-1} \norm{J^* - J^{\mu_k}}_\infty e + \frac{2\alpha^{H} f_k +\eps}{1-\alpha}e \geq J^* - T^{H-1}J^{\mu_k} + \frac{2\alpha^{H} f_k +\eps}{1-\alpha}e \geq J^* - J^{\mu_{k+1}} \geq 0,
\end{align*}
which implies that
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H-1} \norm{J^{\mu_k}-J^*}_\infty + \frac{2\alpha^{H} f_k +\eps}{1-\alpha},
\end{align} since $J^* \geq J^{\mu}$ for all policies $\mu$.
We now iterate over $k>0$ to get the following:
\begin{align*}
\norm{J^{\mu_{k}} - J^*}_\infty &\leq \alpha^{k(H-1)} \norm{J^{\mu_0}-J^*}_\infty + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)}\frac{2\alpha^{H} f_\ell +\eps}{1-\alpha}\\
&\leq \frac{\alpha^{k(H-1)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)}\frac{2\alpha^{H} f_\ell +\eps}{1-\alpha}.
\end{align*}
where $$f_\ell := (\beta')^\ell \norm{J^{\mu_0}-J_0}_\infty + \sum_{i=0}^{\ell-1} \beta'^i \tau',$$ and $\beta':= \alpha^{m+H-1} \delta_1$ and $\tau' := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_1 + \delta_2+ \delta_1 \eps'$.
We will now obtain asymptotic limits of the above results. First, we provide an upper bound for $f_\ell,$ which we call $\tilde{f}_\ell$ and define as follows:
\begin{align*}
\tilde{f}_\ell := (\beta')^\ell \norm{J^{\mu_0}-J_0}_\infty + \frac{\tau'}{1-\beta'},
\end{align*} noting that $\beta' < 1$ from our assumption in Theorem \ref{mainAPI}. It is easy to see that $f_\ell \leq \tilde{f}_\ell$ for all $\ell$.
Thus, the following holds:
\begin{align*}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \frac{\alpha^{k(H-1)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)}\frac{2\alpha^{H} f_\ell +\eps}{1-\alpha} \\
&\leq \frac{\alpha^{k(H-1)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)}\frac{2\alpha^{H} \tilde{f}_\ell +\eps}{1-\alpha} \\
&\leq \frac{\alpha^{k(H-1)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)}\frac{2\alpha^{H} \Big(\beta'^\ell \norm{J^{\mu_0}-J_0}_\infty + \frac{\tau'}{1-\beta'}\Big) +\eps}{1-\alpha} \\
&\leq \frac{\alpha^{k(H-1)} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell +\frac{2\alpha^H \frac{\tau'}{1-\beta'} +\eps}{(1-\alpha^{H-1})(1-\alpha)}.
\end{align*}
Since $\beta'=\alpha^{m+H-1} \delta_1<1$ from our assumption in Theorem \ref{mainAPI}, we will now show in the following lemma that $\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell \to 0$ as $k \to \infty$.
\begin{lemma} \label{conv lemma or}
$\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell \to 0$ as $k\to \infty$.
\end{lemma}
\proof{Proof of Lemma \ref{conv lemma or}}
First, notice that $\alpha^{H-1}<1$ and $\beta'<1$ from our assumption in Theorem \ref{mainAPI}.
Now suppose that $\alpha^{H-1} \leq \beta'.$ Then, we get that
\begin{align*}
\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell &\leq \sum_{\ell=0}^{k-1} \beta'^{k-\ell-1} \beta'^\ell \\
&\leq \sum_{\ell=0}^{k-1} \beta'^{k-1}\\
&= k\beta'^{k-1}.
\end{align*} Taking limits as $k \to \infty$ on both sides, we have that $\lim_{k\to \infty}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell \to 0.$
Now suppose that $\beta' \leq \alpha^{H-1}.$ Then,
\begin{align*}
\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell &\leq \sum_{\ell=0}^{k-1} \alpha^{(H-1)(k-\ell-1)} \alpha^{(H-1)\ell} \\
&\leq \sum_{\ell=0}^{k-1} \alpha^{k-1}\\
&= k\alpha^{k-1}.
\end{align*}
Taking limits as $k \to \infty$ on both sides, we get that $\lim_{k\to \infty}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell \to 0.$
\Halmos
\endproof
Thus, we get that \begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \frac{2\alpha^H \frac{\tau'}{1-\beta'} +\eps}{(1-\alpha^{H-1})(1-\alpha)}.
\end{align*}
Equivalently, our bound can be written as follows:
\begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_k}-J^*}_\infty &\leq \frac{ 2\alpha^H \Big( \frac{\tau'}{1-\beta'}\Big)+\eps}{(1-\alpha)(1-\alpha^{H-1})} = \frac{c_{m, H}}{(1-\alpha)(1-\alpha^{H-1})},
\end{align*}
where $c_{m, H} := 2\alpha^H \Big( \frac{\tau'}{1-\beta'}\Big)+\eps = 2\alpha^H \Big( \frac{\frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_1 + \delta_2+ \delta_1 \eps'}{1-\alpha^{m+H-1} \delta_1}\Big)+\eps,$ where expressions for $\beta'$ and $\tau'$ in \eqref{eq:defbeta} and \eqref{eq:defdelta3}, respectively, were used to obtain the last equality.
\Halmos \endproof
\section{A Modified Least Squares Algorithm}\label{appendix:thm2}
Suppose Step \ref{step 3 alg} of Algorithm \ref{alg:LSalg} is changed to $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k$. Then, it is still possible to get bounds on the performance of the algorithm when $m$ is sufficiently large. With this modification to the algorithm, when Assumptions \ref{assume 1}, \ref{assume 2}, and \ref{assume 3} hold, we have the following:
\begin{theorem}\label{mainAPISecond}
Suppose that $m$ satisfies $m >\log (\delta_1)/\log (1/\alpha),$ where
$$\delta_1 := \sup_k \norm{\scriptM_k}_\infty = \sup_k\norm{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_\infty.$$
Then
\begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_k}-J^*}_\infty \leq \frac{\tilde{c}_{m, H}}{(1-\alpha)(1-\alpha^{H-1})},
\end{align*}
where $$\tilde{c}_{m, H} := 2\alpha^H \Bigg(\frac{\frac{\alpha^m}{1-\alpha} \delta_1 + \delta_2+ \delta_1 \eps' }{1-\alpha^{m}\delta_1} + \frac{1}{1-\alpha}+\norm{J_0}_\infty \Bigg)+\eps$$ and
$
\delta_2 := \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty.
$
\end{theorem}
\proof{Proof of Theorem \ref{mainAPISecond}}
Consider policies $\mu_k$ and $\mu_{k+1},$ where $\mu_0$ can be taken to be any arbitrary policy. We have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H-1}J^{\mu_k} &= -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k + T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps e \nonumber\\
&= \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps e \nonumber\\
&\overset{(b)}\leq 2 \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J^{\mu_k} + \eps e \nonumber\\
&\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H-1}J^{\mu_k} + \eps e, \label{eq: * 2}
\end{align}
where $e$ is the vector of all $1$s, (a) and (b) follow from the contraction property of $T^H$ and $T_{\mu_{k+1}}$ and the last inequality follows from standard arguments using the monotonicity properties and the definition of $T$: specifically, note that
$$
TJ^\mu = \max_{\mu'} T_{\mu'} J^\mu \geq T_\mu J^\mu = J^{\mu},
$$
and repeatedly apply $T$ to both sides of the inequality and use monotonicity to obtain
$T^\ell J^\mu \geq T^{\ell-1} J^\mu$ for all $\ell$ and all policies $\mu.$
We can further bound $\norm{J^{\mu_k}-J_k}_\infty$ as follows:
\begin{align*}
\norm{J^{\mu_k}-J_k}_\infty &= \norm{\scriptM_k (T_{\mu_k}^m J_{k-1}+w_k)- J^{\mu_k}}_\infty
\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
&\leq\norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k}_\infty \norm{w_k}_\infty
\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \delta_1 \eps' \\
&= \norm{\scriptM_k T_{\mu_k}^m J_{k-1} - \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_1 \eps'\\
&\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_1 \eps'\\
&\leq \norm{\scriptM_k}_\infty \norm{ T_{\mu_k}^m J_{k-1} - J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_1 \eps'\\
&\leq \delta_1 \norm{ T_{\mu_k}^m J_{k-1} - J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_1 \eps'\\
&\leq \alpha^m \delta_1 \norm{J_{k-1} - J^{\mu_k}}_\infty + \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_1 \eps'\\
&\leq \alpha^m \delta_1 \norm{J_{k-1} - J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps'\\
&\leq \alpha^m \delta_1 \norm{J_{k-1} - J^{\mu_{k-1}} + J^{\mu_{k-1}} - J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps'\\
&\leq \alpha^m\delta_1 \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \alpha^m \delta_1 \norm{J^{\mu_{k-1}} - J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps'\\
&\leq \alpha^m \delta_1 \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \frac{\alpha^m \delta_1}{1-\alpha} + \delta_2+ \delta_1 \eps'.
\end{align*}
Iterating over $k$, we get that for all $k$,
\begin{align}
\norm{J^{\mu_k}-J_k}_\infty \leq \underbrace{\frac{\frac{\alpha^m}{1-\alpha} \delta_1 + \delta_2+ \delta_1 \eps' }{1-\alpha^{m}\delta_1} + \frac{1}{1-\alpha}+\norm{J_0}_\infty}_{=: p_{m}}, \label{eq: pound 2}
\end{align}
where we have used the assumption $\alpha^{m} \delta_1 < 1$ and the fact that $\norm{J^{\mu_0}}_\infty\leq 1/(1-\alpha)$ due to Assumption~\ref{assume 3}.
Putting (\ref{eq: * 2}) and (\ref{eq: pound 2}) together, we have the following:
\begin{align*}
-T_{\mu_{k+1}}T^{H-1}J^{\mu_k} \leq \underbrace{\Bigg[2\alpha^H p_{m} + \eps\Bigg]}_{=: \tilde{c}_{m, H}} e -T^{H-1}J^{\mu_k}.
\end{align*}
The rest of the proof follows from the proof of Theorem \ref{mainAPI} with $\tilde{c}_{m, H}$ instead of $c_{m, H}$ and we get Theorem \ref{mainAPISecond}.
\Halmos \endproof
Analogously to the inequality \eqref{eq: ***} in Appendix \ref{appendix:thm1}, the proof of Theorem \ref{mainAPISecond} gives us the following:
\begin{align}
\norm{J^{\mu_{k}}-J^*}_\infty &\leq \alpha^{(H-1)k} \norm{J^{\mu_0}-J^*}_\infty + \sum_{i=0}^{k-1} \alpha^{(H-1)i} \frac{ \tilde{c}_{m, H}}{1-\alpha},
\label{eq: *** 8}
\end{align} for $k >0.$
Note that the inequality \eqref{eq: *** 8} provides finite-time performance bounds for the modified least squares algorithm, in addition to the asymptotic result stated in Theorem \ref{mainAPISecond}.
\section{Bounds on Iterates In Algorithm \ref{alg:LSalg}}\label{appendix:prop1}
In the following proposition, we present a bound on the difference between $J_k$ and $J^*.$
\begin{proposition}\label{IterAPITheorem}
When $\alpha^{m+H-1} \delta_1 <1,$
$\limsup_{k \to \infty} \norm{J_k - J^*}_\infty \leq \frac{r_{m, H}}{1-q_{m, H}},$
where $q_{m, H} := \delta_1 \alpha^{m+H-1} + \big( 1+\delta_1 \alpha^m \big)\alpha^{H-1}$ and $r_{m, H} := \big( 1+\delta_1 \alpha^m \big)\Big(\alpha^{H-1} p_{m, H} + \frac{ c_{m, H}}{1-\alpha} \Big) + \delta_2+\delta_1 \eps',$ with $c_{m, H}$ defined in \eqref{eq:defcdmh} and $p_{m, H}$ defined in \eqref{eq: pound}.
\end{proposition}
The proof is as follows.
\proof{Proof of Proposition \ref{IterAPITheorem}}
\begin{align*}
&\norm{J_{k+1} - J^*}_\infty \\&= \norm{J_{k+1} -J^{\mu_{k+1}} + J^{\mu_{k+1}}- J^*}_\infty \\
&\leq \norm{J_{k+1} -J^{\mu_{k+1}}}_\infty + \norm{J^{\mu_{k+1}}- J^*}_\infty \\
&\leq \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -J^{\mu_{k+1}}}_\infty +\delta_1 \eps' +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&= \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -\scriptM_{k+1} J^{\mu_{k+1}} + \scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&\leq \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -\scriptM_{k+1} J^{\mu_{k+1}}}_\infty + \norm{\scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&\leq \norm{\scriptM_{k+1}}_\infty \norm{T_{\mu_{k+1}}^m T^{H-1} J_k - J^{\mu_{k+1}}}_\infty + \norm{\scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&\leq \delta_1 \alpha^m \norm{T^{H-1} J_k - J^{\mu_{k+1}}}_\infty + \delta_2 +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&= \delta_1 \alpha^m \norm{T^{H-1} J_k - J^* + J^* - J^{\mu_{k+1}}}_\infty + \delta_2 +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&\leq \delta_1 \alpha^m \norm{T^{H-1} J_k - J^*}_\infty + \delta_1 \alpha^m \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_2 + \norm{J^{\mu_{k+1}}- J^*}_\infty+\delta_1 \eps'\\
&\leq \delta_1 \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \delta_1 \alpha^m \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_2 + \norm{J^{\mu_{k+1}}- J^*}_\infty+\delta_1 \eps'\\
&= \delta_1 \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big) \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_2+\delta_1 \eps' \allowdisplaybreaks\\
&\overset{(c)}\leq \delta_1 \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big) \Bigg(\alpha^{H-1} \norm{J^{\mu_k}-J^*}_\infty + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2 +\delta_1 \eps'\\
&\leq \delta_1 \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big) \Bigg(\alpha^{H-1} (\norm{J^{\mu_k}-J_k}_\infty + \norm{J_k-J^*}_\infty) + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2 +\delta_1 \eps'\allowdisplaybreaks\\
&= \Bigg(\delta_1 \alpha^{m+H-1} + \big( 1+\delta_1 \alpha^m \big)\alpha^{H-1} \Bigg) \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big)\Bigg(\alpha^{H-1} \norm{J^{\mu_k}-J_k}_\infty + \frac{ c_{m, H}}{1-\alpha} \Bigg)\\& + \delta_2 +\delta_1 \eps'\allowdisplaybreaks\\
&\overset{(d)}\leq \Bigg(\delta_1 \alpha^{m+H-1} + \big( 1+\delta_1 \alpha^m \big)\alpha^{H-1} \Bigg) \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big)\Bigg(\alpha^{H-1} p_{m, H} + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2 +\delta_1 \eps'\\
&= q_{m, H} \norm{J_k - J^*}_\infty + r_{m, H},
\end{align*}
where $q_{m, H} := \delta_1 \alpha^{m+H-1} + \big( 1+\delta_1 \alpha^m \big)\alpha^{H-1}$ and $r_{m, H} := \big( 1+\delta_1 \alpha^m \big)\Bigg(\alpha^{H-1} p_{m, H} + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2+\delta_1 \eps'.$ Note that $p_{m, H}$ is defined in $(\ref{eq: pound})$ and $c_{m, H}$ is defined in $(\ref{eq:defcdmh}).$ Additionally, $(c)$ follows from $(\ref{eq:cprop1})$ and $(d)$ follows from $(\ref{eq: pound})$, noting that the inequalities in $(\ref{eq:cprop1})$ and $(\ref{eq: pound})$ hold for all $\eps''>0$.
Iterating over $k$, we get Proposition \ref{IterAPITheorem}.
We obtain the following inequality in much a similar way to inequality \eqref{eq: ***} in the proof of Theorem \ref{mainAPI}:
\begin{align}
\norm{J_{k} - J^*}_\infty \leq q_{m, H}^k \norm{J_0 - J^*}_\infty + \sum_{i=0}^{k-1} q_{m, H}^i r_{m, H}, \text{ for $k >0.$ } \label{eq:*** 3}
\end{align} \Halmos
\endproof
Note that the inequality \eqref{eq:*** 3} provides finite-time performance bounds, in addition to the asymptotic result stated in Proposition \ref{IterAPITheorem}.
\end{comment}
\section{Counter-Example} \label{appendix:counterexAppendix}
Even though, in practice, $J^{\mu_k}$ is what we are interested in, the values $J_k$ computed as part of our algorithm should not go to $\infty$, since the algorithm would otherwise be numerically unstable. In Appendix \ref{appendix:prop1}, we provide a bound on $\norm{J_k-J^*}_\infty$ when $m+H-1$ is sufficiently large, as in Theorem~\ref{mainAPI}. In this appendix, we show that, when this condition is not satisfied, $J_k$ can become unbounded.
\begin{figure}
\centering
\subfloat[\centering $\mu^a$]{\includegraphics[width=2.5cm]{Counter1.drawio-1} }%
\qquad
\subfloat[\centering $\mu^b$]{\includegraphics[width=2.5cm]{Counter2.drawio-1} }%
\caption{An example illustrating the necessity of the condition in Theorem~\ref{mainAPI}}%
\label{fig:TsitVanRoyIm}%
\end{figure}
The example we use is depicted in Figure \ref{fig:TsitVanRoyIm}.
There are two policies, $\mu^a$ and $\mu^b$, and the transitions are deterministic under both policies. The rewards are deterministic and depend only on the states; the rewards associated with the states are denoted by $r(x_1)$ and $r(x_2),$
with $r(x_1)>r(x_2)$. Thus, the optimal policy is $\mu^a$. We assume scalar features $\phi(x_1)=1$ and $\phi(x_2)=2.$
We fix $H=1$.
The lookahead policy is $\mu^a$ when
\begin{align*}
&J_k(x_1) > J_k(x_2), \quad \text{i.e., when } \theta_k > 2\theta_k,
\end{align*}
which holds only if $\theta_k < 0.$ Thus, as long as $\theta_k>0,$ the lookahead policy will be $\mu^b.$
We will now show that $\theta_k$ increases at each iteration when $\delta_1 \alpha^{m+H-1}>1.$ We assume that $\theta_0>0$ and $\scriptD_k = \{1, 2\}$ for all $k.$ A straightforward computation shows that $\delta_1=\frac{6}{5}.$
At iteration $k+1,$ since $\theta_k > 0$ implies $\mu_{k+1}=\mu^b,$ the values $\hat{J}^{\mu_{k+1}}(i)$ for $i = 1, 2$ are as follows:
\begin{align*}
\hat{J}^{\mu_{k+1}}(1) =\sum_{i=0}^{m-1} \alpha^i r(x_1) + 2 \alpha^m \theta_k, \quad
\hat{J}^{\mu_{k+1}}(2) = \sum_{i=0}^{m-1} \alpha^i r(x_2) + 2 \alpha^m \theta_k.
\end{align*}
Thus, from Step \ref{step 4 alg} of Algorithm \ref{alg:LSalg}:
\begin{align*}
&\theta_{k+1} = \arg \min_\theta \sum_{i =1}^2 \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2 \\
&\implies \theta_{k+1} = \frac{\sum_{i=0}^{m-1} \alpha^i r(x_1)}{5} + \frac{2 \sum_{i=0}^{m-1} \alpha^i r(x_2)}{5} + \frac{6 \alpha^m \theta_k}{5}\\
&\implies \theta_{k+1}> \frac{6}{5} \alpha^{m}\theta_k,
\end{align*}
where the last step uses the fact that the rewards are nonnegative and $r(x_1)>0.$ Thus, since $\theta_0 > 0$ and $H=1$, when $\frac{6}{5} \alpha^{m+H-1} =\delta_1 \alpha^{m+H-1} >1,$ $\theta_{k}$ goes to $\infty.$
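To make the divergence concrete, here is a minimal Python sketch of the recursion above; the reward values, the initial $\theta_0$, and the number of iterations are illustrative choices, not taken from the analysis:
\begin{verbatim}
alpha, m = 0.9, 1          # discount factor and rollout length (H = 1)
r1, r2 = 1.0, 0.5          # deterministic rewards with r(x_1) > r(x_2)
theta = 1.0                # theta_0 > 0, so the lookahead policy is mu^b

g = (1 - alpha**m) / (1 - alpha)   # geometric sum: sum_{i=0}^{m-1} alpha^i

for k in range(30):
    # least-squares fit with features phi(x_1) = 1, phi(x_2) = 2:
    # theta_{k+1} = (J_hat(1) + 2 * J_hat(2)) / 5
    theta = (g * r1 + 2 * g * r2 + 6 * alpha**m * theta) / 5
    print(k + 1, theta)    # grows without bound: (6/5) * alpha**m = 1.08 > 1
\end{verbatim}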
\section{Proof of Lemma \ref{cthetalemma}}\label{appendix:lem2}
\proof{Proof of Lemma \ref{cthetalemma}}
\begin{align*}
\frac{1}{1-\alpha} \nonumber&\geq \norm{J^{\mu_k}-J^{\mu_{k-1}}}_\infty\\
&=\norm{J^{\mu_k} - \Phi \theta^{\mu_k} - \Big(J^{\mu_{k-1}} - \Phi \theta^{\mu_{k-1}} + \Phi \theta^{\mu_{k-1}} - \Phi \theta^{\mu_{k}} \Big)}_\infty \\\nonumber&\geq \norm{J^{\mu_{k-1}} - \Phi \theta^{\mu_{k-1}} + \Phi \theta^{\mu_{k-1}} - \Phi \theta^{\mu_{k}}}_\infty - \delta'_2 \\ \nonumber
& \geq\norm{\Phi \theta^{\mu_{k-1}} - \Phi \theta^{\mu_{k}}}_\infty - 2\delta'_2 \\ \nonumber& \geq \frac{1}{\sigma_{\min, \Phi}} \norm{ \theta^{\mu_{k-1}} - \theta^{\mu_{k}}}_\infty - 2\delta'_2 \nonumber
\end{align*}
\begin{align*}
\implies \underbrace{\frac{\sigma_{\min, \Phi}}{1-\alpha}+2\sigma_{\min, \Phi} \delta'_2}_{=: C} \nonumber&\geq \norm{ \theta^{\mu_{k-1}} - \theta^{\mu_{k}}}_\infty,
\end{align*}
where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi.$
\Halmos
\endproof
\section{Proof of Lemma \ref{lemmathetatilde}}\label{appendix:lem1}
\proof{Proof of Lemma \ref{lemmathetatilde}}
From Assumption \ref{assume 1}, we have that $\Phi_{\scriptD_k}$ is of rank $d$. Thus, we have the following:
\begin{align*}
\theta^{\mu_k} - \tilde{\theta}^{\mu_k} &= \Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}\Big)^{-1} \Phi_{\scriptD_k}^\top (\scriptP_{k}J^{\mu_{k}} - \scriptP_{k}(T_{\mu_{k}}^m T^{H-1} J_{k-1}+w_k)) \\
&= \Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}\Big)^{-1} \Phi_{\scriptD_k}^\top \scriptP_{k} (J^{\mu_{k}} - (T_{\mu_{k}}^m T^{H-1} \Phi \theta_{k-1}+w_k))
\end{align*}
\begin{align}
\implies \norm{\theta^{\mu_k} - \tilde{\theta}^{\mu_k}}_\infty \nonumber&\leq \norm{\Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}\Big)^{-1} \Phi_{\scriptD_k}^\top \scriptP_{k}}_\infty \norm{J^{\mu_{k}} - (T_{\mu_{k}}^m T^{H-1} \Phi \theta_{k-1}+w_k) }_\infty \\ \nonumber
&\leq \norm{\Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}\Big)^{-1} \Phi_{\scriptD_k}^\top \scriptP_{k}}_\infty \norm{J^{\mu_{k}} - T_{\mu_{k}}^m T^{H-1} \Phi \theta_{k-1}}_\infty + \frac{\delta_1' \eps'}{\norm{\Phi}_\infty}\\ &\leq
\alpha^m \sup_k \norm{\Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}\Big)^{-1} \Phi_{\scriptD_k}^\top \scriptP_{k}}_\infty \norm{J^{\mu_{k}} - T^{H-1} \Phi \theta_{k-1} }_\infty + \frac{\delta_1' \eps'}{\norm{\Phi}_\infty}\label{finalthetacomp}
\end{align}
We now obtain a bound for $\norm{J^{\mu_{k}} - T^{H-1} \Phi \theta_{k-1} }_\infty$ as follows:
\begin{align}
\norm{J^{\mu_{k}} - T^{H-1} \Phi \theta_{k-1} }_\infty \nonumber&= \norm{J^{\mu_{k}}-J^* + J^* - T^{H-1} \Phi \theta_{k-1} }_\infty \\ \nonumber
&\leq \norm{J^{\mu_{k}}-J^*}_\infty + \norm{J^* - T^{H-1} \Phi \theta_{k-1} }_\infty \\ \nonumber
&\leq \norm{J^{\mu_{k}}-J^*}_\infty + \alpha^{H-1}\norm{J^* - \Phi \theta_{k-1} }_\infty \\ \nonumber
&\leq \frac{1}{1-\alpha} + \alpha^{H-1}\norm{J^* - \Phi \theta_{k-1} }_\infty \\ \nonumber
&\leq \frac{1}{1-\alpha} + \alpha^{H-1}\norm{J^*-J^{\mu_{k-1}}+J^{\mu_{k-1}} - \Phi \theta_{k-1} }_\infty \\ \nonumber
&\leq \frac{1}{1-\alpha} + \alpha^{H-1}\Big(\norm{J^*-J^{\mu_{k-1}}}_\infty+\norm{J^{\mu_{k-1}} - \Phi \theta_{k-1} }_\infty\Big) \\ \nonumber
&\leq \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\norm{J^{\mu_{k-1}} - \Phi \theta_{k-1} }_\infty \\ \nonumber
&= \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\norm{J^{\mu_{k-1}} -\Phi \theta^{\mu_{k-1}} + \Phi \theta^{\mu_{k-1}} - \Phi \theta_{k-1} }_\infty \\ \nonumber
&\leq \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\norm{J^{\mu_{k-1}} -\Phi \theta^{\mu_{k-1}}}_\infty + \alpha^{H-1}\norm{\Phi \theta^{\mu_{k-1}} - \Phi \theta_{k-1} }_\infty \\
&\leq \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' + \alpha^{H-1}\norm{\Phi}_\infty \norm{ \theta^{\mu_{k-1}} - \theta_{k-1} }_\infty. \label{compjmukthetak}
\end{align}
Putting \eqref{finalthetacomp} and \eqref{compjmukthetak} together gives the following:
\begin{align*}
\norm{\theta^{\mu_k} - \tilde{\theta}^{\mu_k}}_\infty &\leq \frac{\delta_1' \eps'}{\norm{\Phi}_\infty}+\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Bigg( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Bigg) + \alpha^{m+H-1} \delta_1' \norm{ \theta^{\mu_{k-1}} - \theta_{k-1} }_\infty.
\end{align*}
\Halmos \endproof
\begin{comment}
\section{TD-Learning Approximation} \label{appendix:TD1}
We consider the variant of approximate policy iteration studied in \cite{efroni2019combine}. We define the following operator with parameter $\lambda \in (0, 1)$:
\begin{align}
T_{\lambda}^\mu J \nonumber&= (1-\lambda) \sum_{j=0}^\infty \lambda^j (T_{\mu})^{j+1} J \\
&= J + (I - \alpha \lambda P^{\mu} )^{-1} (T_{\mu} J - J), \label{TDlearnop}
\end{align} where $P^\mu$ is the state transition matrix corresponding to policy $\mu.$ This operator is an estimator of the TD($\lambda$) operator in \cite{SuttonBarto}.
The full return ($m=\infty$) is often desired in practice but is impossible to obtain for some MDPs. It is possible to instead obtain an estimate of a full return for any policy $\mu_k$ with the operator in equation (\ref{TDlearnop}) $T_{\lambda}^{\mu_{k}}$ parameterized by $\lambda \in (0, 1)$. Using the $T_{\lambda}^{\mu_k}$ operator to obtain an estimate for $J^{\mu_k}$, our modified algorithm is given in Algorithm \ref{alg:TDalg}.
\begin{algorithm}[tb]
\caption{TD-Learning Approximation Algorithm}
\label{alg:TDalg}
\textbf{Input}: $J_0, m,$ $H, \lambda,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d$ and subsets $\scriptD_k \subseteq \scriptS, k = 0, 1, \ldots.$ Here $\scriptD_k$ is the set of states at which we evaluate the current policy at iteration $k.$ \\
\begin{algorithmic}[1]
\STATE Let $k=0$.
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps$.\\\label{step 2 alg 2}
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\
\STATE Choose $\theta_{k+1}$ to solve
\begin{align*}
\min_\theta \sum_{i \in D_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2,
\end{align*} \\
where $\Phi$ is a matrix whose rows are the feature vectors.
\STATE $J_{k+1} = \Phi \theta_{k+1}.$
\STATE Set $k \leftarrow k+1.$ Go to 2. \end{algorithmic}
\end{algorithm}
As was the case with our main algorithm, we make Assumption \ref{assume 1}, Assumption \ref{assume 2} and Assumption \ref{assume 3}. Using Assumption \ref{assume 1}, we can succinctly write our iterates as follows:
\begin{equation}
J_{k+1} = \scriptM_{k+1} (T_\lambda^{\mu_{k+1}} T^{H-1} J_k + w_{k+1}). \label{eq:iterateAPITD}
\end{equation}
We will now state a theorem that characterizes the role of $\lambda$ in the convergence of our algorithm:
\begin{theorem}
When $ \delta_1 c_\lambda \alpha^{H-1} <1$, where $\delta_1$ is defined in Theorem \ref{mainAPI} and $c_\lambda := \frac{1-\lambda}{\lambda(1-\lambda\alpha)},$
\begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_k}-J^*}_\infty \leq \frac{c_{m,H, \lambda}}{(1-\alpha)(1-\alpha^{H-1})},
\end{align*}
where $c_{m, H, \lambda} := 2\alpha^H \Big(\frac{\delta_1 c_\lambda\Big(\frac{1 + \alpha^{H-1}}{1-\alpha}\Big) + \delta_2+ \delta_1 \eps'}{1-\delta_1 c_\lambda \alpha^{H-1}} + \frac{1}{1-\alpha} +\norm{J_0}_\infty\Big)+\eps,$ and $\delta_2$ is defined in Theorem \ref{mainAPI}. \label{IterAPITheoremTD}
\end{theorem} The proof of Theorem \ref{IterAPITheoremTD} is as follows:
\proof
Consider the following sequences:
\begin{align*}
V_k^i &:= (1+\lambda+\ldots+\lambda^i)^{-1}\sum_{j=0}^i \lambda^j (T^{\mu_{k}})^{j+1} T^{H-1}J_{k-1} \\
z_k^i &:= V_k^i - J^{\mu_k}.
\end{align*}
First, we get bounds on $V_k^i.$ We have the following:
\begin{align*}
& J^{\mu_k} - \norm{T^{H-1}J_{k-1} - J^{\mu_k}}_\infty e \leq T^{H-1} J_{k-1} \leq J^{\mu_k} + \norm{T^{H-1}J_{k-1} - J^{\mu_k}}_\infty e \\
&\implies (1+\lambda+\ldots+\lambda^i)^{-1}\sum_{j=0}^i \lambda^j (T^{\mu_{k}})^{j+1} (J^{\mu_k} - \underbrace{\norm{T^{H-1}J_{k-1}-J^{\mu_k}}_\infty}_{:=\eps_\lambda} e) - J^{\mu_k} \\
&\leq z_k^i \leq (1+\lambda+\ldots+\lambda^i)^{-1}\sum_{j=0}^i \lambda^j (T^{\mu_{k}})^{j+1} (J^{\mu_k} + \norm{T^{H-1}J_{k-1}-J^{\mu_k}}_\infty e) - J^{\mu_k} \\
&\implies (1+\lambda+\ldots+\lambda^i)^{-1}\sum_{j=0}^i \lambda^j (T^{\mu_{k}})^{j+1} (J^{\mu_k} - \eps_\lambda e) - J^{\mu_k} \leq z_k^i
\\&\leq (1+\lambda+\ldots+\lambda^i)^{-1}\sum_{j=0}^i \lambda^j (T^{\mu_{k}})^{j+1} (J^{\mu_k} + \eps_\lambda e) - J^{\mu_k} \\
&\implies - (1+ \lambda+\ldots+\lambda^i)^{-1} \sum_{j=0}^i \lambda^j \alpha^{j+1}\eps_\lambda e \leq z_k^i \leq (1+ \lambda+\ldots+\lambda^i)^{-1} \sum_{j=0}^i \lambda^j \alpha^{j+1}\eps_\lambda e \\
&\implies \norm{z_k^i}_\infty \leq (1+ \lambda+\ldots+\lambda^i)^{-1} \sum_{j=0}^i \lambda^j \alpha^{j+1}\eps_\lambda.
\end{align*}
Since the norm is a continuous function, taking limits yields the following:
\begin{align*}
\norm{J_k - J^{\mu_k}}_\infty &= \norm{\scriptM_k (T_\lambda^{\mu_k} T^{H-1}J_{k-1} +w_k)- J^{\mu_k}}_\infty \\
&\leq \norm{\scriptM_k T_\lambda^{\mu_k} T^{H-1}J_{k-1} - J^{\mu_k}}_\infty + \delta_1 \eps'\\
&\leq \norm{\scriptM_k T_\lambda^{\mu_k} T^{H-1}J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k} -J^{\mu_k}}_\infty + \delta_1 \eps'\\
&\leq \norm{\scriptM_k}_\infty \norm{T_\lambda^{\mu_k} T^{H-1}J_{k-1} -J^{\mu_k}}_\infty +\delta_2+ \delta_1 \eps'\\
&\leq \delta_1 \norm{T_\lambda^{\mu_k} T^{H-1}J_{k-1} - J^{\mu_k}}_\infty +\delta_2 + \delta_1 \eps'\\
&= \delta_1 \lim_i \norm{V_k^i - J^{\mu_k}}_\infty +\delta_2 + \delta_1 \eps'\\&= \delta_1 \lim_i \norm{z_k^i}_\infty + \delta_2 + \delta_1 \eps'\\&\leq \delta_1 \frac{\eps_\lambda(1-\lambda)}{\lambda(1-\lambda\alpha)}+\delta_2 + \delta_1 \eps' \\&= \delta_1 \frac{\norm{T^{H-1}J_{k-1}-J^{\mu_k}}_\infty(1-\lambda)}{\lambda(1-\lambda\alpha)} + \delta_2+ \delta_1 \eps'.
\end{align*}
We now have the following:
\begin{align}
\norm{J_k - J^{\mu_k}}_\infty &\leq \delta_1 \norm{T^{H-1}J_{k-1}-J^{\mu_k}}_\infty \underbrace{\frac{1-\lambda}{\lambda(1-\lambda\alpha)}}_{=: c_\lambda} + \delta_2 + \delta_1 \eps'\label{eq:defclambda} \\
&= \delta_1 c_\lambda\norm{ T^{H-1} J_{k-1} -J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps' \\\nonumber
&= \delta_1 c_\lambda \norm{T^{H-1} J_{k-1} - J^* + J^* - J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps'\\\nonumber
&\leq \delta_1 c_\lambda\norm{T^{H-1} J_{k-1} - J^*}_\infty + \delta_1 c_\lambda\norm{J^* - J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps' \\\nonumber
&\leq \delta_1 c_\lambda \alpha^{H-1} \norm{J_{k-1} - J^*}_\infty + \frac{ \delta_1 c_\lambda}{1-\alpha} + \delta_2+ \delta_1 \eps'\\\nonumber
&= \delta_1 c_\lambda \alpha^{H-1} \norm{J_{k-1} -J^{\mu_{k-1}} + J^{\mu_{k-1}}- J^*}_\infty + \frac{\delta_1 c_\lambda}{1-\alpha} + \delta_2+ \delta_1 \eps'\\ \nonumber
&\leq \delta_1 c_\lambda \alpha^{H-1} \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +\delta_1 c_\lambda\Big(\frac{1 + \alpha^{H-1}}{1-\alpha}\Big) + \delta_2+ \delta_1 \eps'.
\end{align}
When we iterate over $k$, we get the following for all $k$:
\begin{align*}
\norm{J^{\mu_k} - J_k}_\infty \leq \underbrace{\frac{\delta_1 c_\lambda\Big(\frac{1 + \alpha^{H-1}}{1-\alpha}\Big) + \delta_2+ \delta_1 \eps'}{1-\delta_1 c_\lambda \alpha^{H-1}} + \frac{1}{1-\alpha} + \norm{J_0}_\infty}_{=: p_{m, H, \lambda}},
\end{align*} when $\delta_1 c_\lambda \alpha^{H-1} <1.$
The rest of the proof follows from the proof of Theorem \ref{mainAPI} with $p_{m, H, \lambda}$ instead of $p_{m, H}$ and we get Theorem \ref{IterAPITheoremTD}.
\Halmos \endproof
The corresponding finite-time bounds are as follows:
\begin{align}
\norm{J^{\mu_{k}}-J^*}_\infty &\leq \alpha^{(H-1)k} \norm{J^{\mu_0}-J^*}_\infty + \sum_{i=0}^{k-1} \alpha^{(H-1)i} \frac{c_{m, H, \lambda}}{1-\alpha},
\label{eq: *** 22}
\end{align} for $k >0.$
\color{black}
\section{TD-Learning Approximation - A Special Case}\label{appendix:TD2}
When the $\lambda$ in our TD-learning algorithm is very small, we may obtain better bounds using an alternative proof technique. Note that in this case, $T_\lambda^{\mu_{k+1}}$ can be seen as an approximation to the $T^{\mu_{k+1}}$ operator. We have the following result that is tailored towards small $\lambda:$
\begin{proposition}
When $z_\lambda \alpha^{H-1} <1,$ where $z_\lambda := (\delta_1 \delta_3 \alpha + \delta_1 \delta_3 + \alpha),$ $\delta_3 := \norm{(I - \alpha \lambda P^{\mu_{k}})^{-1} - I }_\infty,$ and $\delta_1, \delta_2$ are defined in Theorem \ref{mainAPI}:
\begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_k}-J^*}_\infty \leq \frac{\tilde{c}_{m,H,\lambda}}{(1-\alpha)(1-\alpha^{H-1})},
\end{align*}
where $\tilde{c}_{m,H,\lambda} := 2\alpha^H \Big(\frac{z_\lambda\Big(\frac{1 + \alpha^{H-1}}{1-\alpha}\Big) + \delta_2+\delta_1\eps'}{1-z_\lambda \alpha^{H-1}} + \frac{1}{1-\alpha}+\norm{J_0}_\infty\Big)+\eps.$ \label{iterAPITheoremTDAlter}
\end{proposition} The proof of Proposition \ref{iterAPITheoremTDAlter} is as follows:
\proof
Note that our iterates can be re-written as follows:
\begin{align*}
J_{k+1} = \scriptM_{k+1} \Bigg(T^{H-1} J_k + (I - \alpha \lambda P^{\mu_{k+1}})^{-1} (T^{\mu_{k+1}}T^{H-1} J_k - T^{H-1} J_k) + w_{k+1}\Bigg).
\end{align*}
We have the following bounds:
\begin{align}
\nonumber \norm{J^{\mu_k}-J_k}_\infty &= \norm{\scriptM_{k} \Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})+w_k\Bigg)- J^{\mu_{k}}}_\infty \\ \nonumber
&\leq \norm{\scriptM_{k} \Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg)- J^{\mu_{k}}}_\infty+\delta_1\eps' \\ \nonumber
&= \norm{\scriptM_{k} \Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg)- \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_{k}}}_\infty+\delta_1\eps' \\\nonumber
&\leq \norm{\scriptM_{k} \Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg)- \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_{k}}}_\infty +\delta_1\eps'\\\nonumber
&\leq \norm{\scriptM_{k} \Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg)- \scriptM_k J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&\leq \norm{\scriptM_{k}}_\infty \norm{\Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg)- J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&\leq \delta_1 \norm{\Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg)- J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&\leq \delta_1 \norm{\Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg) - T^{\mu_{k}}T^{H-1} J_{k-1}}_\infty \\\nonumber&+ \norm{T^{\mu_{k}}T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&=\delta_1 \norm{\Big((I - \alpha \lambda P^{\mu_{k}})^{-1} - I \Big) \Big[ T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1}\Big]}_\infty \\\nonumber&+ \norm{T^{\mu_{k}}T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&\leq \delta_1 \norm{(I - \alpha \lambda P^{\mu_{k}})^{-1} - I }_\infty \norm{ T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1}}_\infty \\\nonumber&+ \norm{T^{\mu_{k}}T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&= \delta_1 \underbrace{\norm{(I - \alpha \lambda P^{\mu_{k}})^{-1} - I }_\infty}_{=: \delta_3} \norm{ T^{\mu_{k}}T^{H-1} J_{k-1} -J^{\mu_k} + J^{\mu_k}- T^{H-1} J_{k-1}}_\infty \\&+ \norm{T^{\mu_{k}}T^{H-1} J_{k-1} - J^{\mu_k}}_\infty +\delta_2 +\delta_1\eps'\\\nonumber
\end{align}
\begin{align}
&\leq \delta_1 \delta_3 \norm{ T^{\mu_{k}}T^{H-1} J_{k-1} -J^{\mu_k}}_\infty + \delta_1 \delta_3 \norm{J^{\mu_k}- T^{H-1} J_{k-1}}_\infty + \norm{T^{\mu_{k}}T^{H-1} J_{k-1} - J^{\mu_k}}_\infty +\delta_2 +\delta_1\eps'\\\nonumber
&\leq \delta_1 \delta_3 \alpha \norm{ T^{H-1} J_{k-1} -J^{\mu_k}}_\infty + \delta_1 \delta_3 \norm{J^{\mu_k}- T^{H-1} J_{k-1}}_\infty + \alpha \norm{T^{H-1} J_{k-1} - J^{\mu_k}}_\infty +\delta_2+\delta_1\eps' \\\label{eq:defzdlambda}
&= \underbrace{(\delta_1 \delta_3 \alpha + \delta_1 \delta_3 + \alpha)}_{=: z_\lambda} \norm{ T^{H-1} J_{k-1} -J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&= z_\lambda \norm{T^{H-1} J_{k-1} - J^* + J^* - J^{\mu_k}}_\infty + \delta_2+\delta_1\eps'\\\nonumber
&\leq z_\lambda \norm{T^{H-1} J_{k-1} - J^*}_\infty + z_\lambda\norm{J^* - J^{\mu_k}}_\infty + \delta_2+\delta_1\eps' \\\nonumber
&\leq z_\lambda \alpha^{H-1} \norm{J_{k-1} - J^*}_\infty + \frac{z_\lambda}{1-\alpha} + \delta_2+\delta_1\eps'\\\nonumber
&= z_\lambda \alpha^{H-1} \norm{J_{k-1} -J^{\mu_{k-1}} + J^{\mu_{k-1}}- J^*}_\infty + \frac{z_\lambda}{1-\alpha} + \delta_2+\delta_1\eps'\\ \nonumber
&\leq z_\lambda \alpha^{H-1} \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + z_\lambda\Big(\frac{1 + \alpha^{H-1}}{1-\alpha}\Big) + \delta_2+\delta_1\eps'.
\end{align}
When we iterate over $k$, we get the following for all $k$:
\begin{align*}
\norm{J^{\mu_k} - J_k}_\infty \leq \underbrace{\frac{z_\lambda\Big(\frac{1 + \alpha^{H-1}}{1-\alpha}\Big) + \delta_2+\delta_1\eps'}{1-z_\lambda \alpha^{H-1}} + \frac{1}{1-\alpha}+\norm{J_0}_\infty}_{=: \tilde{p}_{m, H, \lambda}},
\end{align*} when $z_\lambda \alpha^{H-1} <1.$
The rest of the proof follows from the proof of Theorem \ref{mainAPI} with $\tilde{p}_{m, H, \lambda}$ in place of $p_{m, H}$ and we get Proposition \ref{iterAPITheoremTDAlter}.
\Halmos \endproof
The corresponding finite-time bounds are as follows:
\begin{align}
\norm{J^{\mu_{k}}-J^*}_\infty &\leq \alpha^{(H-1)k} \norm{J^{\mu_0}-J^*}_\infty + \sum_{i=0}^{k-1} \alpha^{(H-1)i} \frac{\tilde{c}_{m, H, \lambda}}{1-\alpha},
\label{eq: *** 2}
\end{align} for $k >0.$
\color{black}
\end{comment}
\section{Numerical Results} \label{appendix:numerical}
In this appendix, we test our algorithms on the same grid world problem used in \cite{efroni2018} and \cite{efroni2019combine}.
For our simulations, we assume a deterministic grid world problem played on an $N \times N$ grid. The states are the squares of the grid and the actions are $\{ \text{`up', `down', `right', `left', `stay'} \}$, which move the agent in the prescribed direction, if possible. In each experiment, a goal state is chosen uniformly at random to have a reward of 1, while each other state has a fixed reward drawn uniformly from $[-0.1, 0.1]$. Unless otherwise mentioned, for the duration of this section, $N=25$ and $\alpha = 0.9$.
In order to perform linear function approximation, we prescribe a feature vector for each state. In this section, we focus on three particular choices (a construction sketch in code follows the list):
\begin{enumerate}
\item Random feature vectors: each entry of the matrix $\Phi$ is an independent $\mathcal{N}(0, 1)$ random variable
\item Designed feature vectors: the feature vector for a state with coordinates $(x, y)$ is $[x, y, d, 1]^T$, where $d$ is the number of steps required to reach the goal from state $(x, y)$
\item Indicator vectors: the feature vector for each state $i$ is an $N^2$-dimensional indicator vector whose $i$-th entry is the only nonzero entry
\end{enumerate}
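The following Python sketch constructs the three feature matrices; it assumes a goal at the origin and uses the Manhattan distance for $d$, whereas the actual experiments place the goal uniformly at random:
\begin{verbatim}
import numpy as np

N = 25                      # grid size; the state space has N^2 states
S = N * N

# 1) random feature vectors: i.i.d. standard normal entries
Phi_random = np.random.randn(S, 4)

# 2) designed feature vectors: [x, y, d, 1] per state, with d the
#    number of steps to the goal (Manhattan distance on the grid)
gx, gy = 0, 0               # illustrative goal location
Phi_designed = np.array([[x, y, abs(x - gx) + abs(y - gy), 1.0]
                         for x in range(N) for y in range(N)])

# 3) indicator vectors: the tabular (one-hot) representation
Phi_indicator = np.eye(S)
\end{verbatim}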
\begin{figure}
\centering
\includegraphics[width=\linewidth]{h-m-error-plots.png}
\caption{(Top) For random feature vectors, as $m$ and $H$ increase, the value $J_k$ eventually stops diverging. (Bottom) For designed feature vectors, a smaller amount of lookahead and $m$-step return are needed to prevent $J_k$ from diverging. }
\label{fig:hm_plots}
\end{figure}
Recall that our theorems suggest that the amount of lookahead and return depends on the choice of the feature vectors. Our experiments support this observation as well. The amount of lookahead and $m$-step return required is high (often over $30$) for random feature vectors, but we are able to significantly reduce the amount required by using the designed feature vectors which better represent the states.
We test Algorithm \ref{alg:LSalg} in each of our experiments, using a starting state of $J_0 = \theta_0 = 0$. All plots in this section graph an average over 20 trials, where each trial has a fixed random choice of $\mathcal{D}_k$, the set of states used for policy evaluation. Error bars show the standard deviation of the mean. The code used to produce these graphs is included in the Supplementary Material.
\subsection{The effect of m and H on convergence}
In Figure \ref{fig:hm_plots}, we show how $H$ and $m$ affect convergence of the iterates $J_k$ to $J^*$. When $m$ and $H$ are small, the value of $J_k$ sometimes diverges. If the value diverges for even one trial, then the average over trials of $\|J_k - J^*\|_\infty$ also increases exponentially with $k$. However, if the algorithm converges for all trials, then the plot is relatively flat. The $m$ or $H$ required for convergence depends on the parameter $\delta_1$ defined in Theorem~\ref{mainAPI}. Over 20 trials, the average values of $\delta_1$ for our three choices of feature vectors are $30.22$, $16.29$, and $1.0$, respectively. As shown through a counterexample in the main body of the paper, in general, one needs $m + H - 1 > \log(\delta_1) / \log(1/\alpha)$ for convergence. However, in specific examples, convergence is possible for smaller values of $m+H.$ For example, in our grid world model, $\frac{\log(16.29)}{\log(1/0.9)} \approx 26.5$, but we will observe that such a large value of $m + H$ is not required for convergence.
From Figure \ref{fig:hm_plots}, it is difficult to see how $H$ and $m$ affect the probability of divergence as a function of the representative states chosen to be sampled. Therefore, we introduce Figure \ref{fig:probability_of_convergence}. These plots show the proportion of trials in which the distance $\|J_{k} - J^*\|_\infty$ exceeded $10^5$ after 30 iterations of our algorithm. As expected, the algorithm never diverges for indicator vectors, since our algorithm is then equivalent to the tabular setting. The designed feature vectors clearly require a much smaller amount of lookahead or $m$-step return, well below the amount predicted by the average $\delta_1$ of $16.29$. However, no matter the choice of feature vectors, a large enough value of $H + m$ eventually prevents our algorithm from diverging.
\begin{figure}
\centering
\subfloat[\centering Varying $H$]{\includegraphics[width=0.45\linewidth]{Images/prob_divergence_hs} }%
\qquad
\subfloat[\centering Varying $m$]{\includegraphics[width=0.45\linewidth]{Images/prob_divergence_ms} }%
\caption{We plot the probability that $\|J_k - J^*\|_\infty$ diverges as a function of $H$ and $m$. For the first plot, $m=3$ and for the second plot, $H=3$. In both cases, the algorithm never diverges once $H+m$ is large enough, though a smaller amount of lookahead or $m$-step return are needed for the designed feature vectors.}%
\label{fig:probability_of_convergence}%
\end{figure}
\begin{figure}
\centering
\subfloat[\centering Varying $H$]{\includegraphics[width=0.45\linewidth]{Images/final_policy_hs} }%
\qquad
\subfloat[\centering Varying $m$]{\includegraphics[width=0.45\linewidth]{Images/final_policy_ms} }%
\caption{We plot the final value of $\|J^{\mu_k} - J^*\|_\infty$ after 30 iterations. For the first plot, $m=3$ and for the second plot, $H=3$. As $H$ increases, the final policy improves. With large enough $H$, we obtain the optimal policy. However, past a certain point, increasing $m$ is not helpful for finding a better policy.}%
\label{fig:final_policy_value}%
\end{figure}
\subsection{Convergence to the optimal policy}
In Theorem~\ref{mainAPI}, we show that as $H$ increases, we converge to a policy $\mu_k$ that is closer to the optimal policy. In this section, we experimentally investigate the role of $m$ and $H$ on the final value of $\|J^{\mu_k} - J^*\|_\infty$. The results can be found in Figure \ref{fig:final_policy_value}. As predicted by theory, we do get closer to the optimal policy as $H$ increases. However, increasing $m$ does not help past a certain point, which is also consistent with the theory. Indeed, although $\mu_k$ is approaching the optimal policy $\mu^*$ as $H$ increases, the iterates $J_k$ are not converging to $J^*$ due to error induced by function approximation. Increasing $m$ improves the policy evaluation, but cannot correct for this inherent error from approximating the value function.
\section{Bounds on Iterates In Algorithm \ref{alg:LSalg}}\label{appendix:prop1}
In the following proposition, we present a bound on the difference between $J_k$ and $J^*.$
\begin{proposition} When $\alpha^{m+H-1} \delta_1 <1,$
$\limsup_{k \to \infty} \norm{J_k - J^*}_\infty \leq \frac{r_{m, H}}{1-q_{m, H}},$ \label{IterAPITheorem}
\end{proposition}
where $q_{m, H} := \delta_1 \alpha^{m+H-1} + \big( 1+\delta_1 \alpha^m \big)\alpha^{H-1}$ and $r_{m, H} := \big( 1+\delta_1 \alpha^m \big)\Bigg(\alpha^{H-1} p_{m, H} + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2+\delta_1 \eps',$ where $c_{m, H}$ is defined in \eqref{eq:defcdmh} and $p_{m, H}$ is defined in $(\ref{eq: pound})$. The proof is as follows.
\proof{Proof of Proposition \ref{IterAPITheorem}}
\begin{align*}
\norm{J_{k+1} - J^*}_\infty &= \norm{J_{k+1} -J^{\mu_{k+1}} + J^{\mu_{k+1}}- J^*}_\infty \\
&\leq \norm{J_{k+1} -J^{\mu_{k+1}}}_\infty + \norm{J^{\mu_{k+1}}- J^*}_\infty \\
&\leq \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -J^{\mu_{k+1}}}_\infty +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&= \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -\scriptM_{k+1} J^{\mu_{k+1}} + \scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&\leq \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -\scriptM_{k+1} J^{\mu_{k+1}}}_\infty + \norm{\scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&\leq \norm{\scriptM_{k+1}}_\infty \norm{T_{\mu_{k+1}}^m T^{H-1} J_k - J^{\mu_{k+1}}}_\infty + \norm{\scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&\leq \delta_1 \alpha^m \norm{T^{H-1} J_k - J^{\mu_{k+1}}}_\infty + \delta_2 \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&= \delta_1 \alpha^m \norm{T^{H-1} J_k - J^* + J^* - J^{\mu_{k+1}}}_\infty + \delta_2 +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&\leq \delta_1 \alpha^m \norm{T^{H-1} J_k - J^*}_\infty + \delta_1 \alpha^m \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_2 \\&+ \norm{J^{\mu_{k+1}}- J^*}_\infty+\delta_1 \eps'\\
&\leq \delta_1 \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \delta_1 \alpha^m \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_2 \\&+ \norm{J^{\mu_{k+1}}- J^*}_\infty+\delta_1 \eps'\\
&= \delta_1 \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big) \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_2+\delta_1 \eps' \allowdisplaybreaks\\
&\overset{(c)}\leq \delta_1 \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big) \Bigg(\alpha^{H-1} \norm{J^{\mu_k}-J^*}_\infty + \frac{ c_{m, H}}{1-\alpha} \Bigg) \\&+ \delta_2 +\delta_1 \eps'\\
&\leq \delta_1 \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big) \\&\Bigg(\alpha^{H-1} (\norm{J^{\mu_k}-J_k}_\infty + \norm{J_k-J^*}_\infty) + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2 +\delta_1 \eps'\allowdisplaybreaks\\
&= \Bigg(\delta_1 \alpha^{m+H-1} + \big( 1+\delta_1 \alpha^m \big)\alpha^{H-1} \Bigg) \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big)\\&\Bigg(\alpha^{H-1} \norm{J^{\mu_k}-J_k}_\infty + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2 +\delta_1 \eps'\allowdisplaybreaks\\
&\overset{(d)}\leq \Bigg(\delta_1 \alpha^{m+H-1} + \big( 1+\delta_1 \alpha^m \big)\alpha^{H-1} \Bigg) \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big)\\&\Bigg(\alpha^{H-1} p_{m, H} + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2 +\delta_1 \eps'\\
&= q_{m, H} \norm{J_k - J^*}_\infty + r_{m, H},
\end{align*}
where $q_{m, H} := \delta_1 \alpha^{m+H-1} + \big( 1+\delta_1 \alpha^m \big)\alpha^{H-1}$ and $r_{m, H} := \big( 1+\delta_1 \alpha^m \big)\Bigg(\alpha^{H-1} p_{m, H} + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2+\delta_1 \eps'.$ Note that $p_{m, H}$ is defined in $(\ref{eq: pound})$ and $c_{m, H}$ is defined in $(\ref{eq:defcdmh}).$ Additionally, $(c)$ follows from $(\ref{eq:cprop1})$ and $(d)$ follows from $(\ref{eq: pound})$, noting that the inequalities in $(\ref{eq:cprop1})$ and $(\ref{eq: pound})$ hold for all $\eps''>0$.
Iterating over $k$, we get Proposition \ref{IterAPITheorem}.
\Halmos \endproof
We obtain the following inequality in much the same way as inequality \eqref{eq: ***} in the proof of Theorem \ref{mainAPI}:
\begin{align}
\norm{J_{k} - J^*}_\infty \leq q_{m, H}^k \norm{J_0 - J^*}_\infty + \sum_{i=0}^{k-1} q_{m, H}^i r_{m, H}, \label{eq:*** 3}
\end{align} for $k >0.$
Note that the inequality \eqref{eq:*** 3} provides finite-time performance bounds, in addition to the asymptotic result stated in Proposition \ref{IterAPITheorem}.
\end{APPENDICES}
\ACKNOWLEDGMENT{The research presented here was supported in part by a grant from Sandia National Labs \footnote{Sandia National Laboratories is a multimission laboratory managed and operated by National Technology \& Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.} and the NSF Grants CCF 1934986, CCF 1704970, CNS 2106801, ONR Grant N00014-19-1-2566, NSF/USDA Grant AG 2018-67007-28379, and ARO Grant W911NF-19-1-0379.
}
\section{Introduction}\label{intro}
In many applications of reinforcement learning, such as playing chess and Go, the underlying model is known and so the main challenge is in solving the associated dynamic programming problem in an efficient manner. Policy iteration and variants of policy iteration \cite{bertsekas2019reinforcement,Bertsekas2011ApproximatePI,bertsekastsitsiklis} that solve dynamic programming problems rely on computations that are infeasible due to the sizes of the state and action spaces in modern reinforcement learning problems.
As a remedy to this ``curse of dimensionality,'' several state-of-the-art algorithms \cite{silver2017shoji, silver2017mastering, DBLP:journals/corr/MnihBMGLHSK16} employ function approximation, lookahead for policy improvement, $m$-step rollout for policy evaluation, and gradient descent to compute the function approximation, see Section \ref{section2} for a definition of these terms.
Our goal in this paper is to understand the role of multi-step lookahead for policy improvement (i.e., repeatedly applying the Bellman operator multiple times) and $m$-step rollout (a technique to approximately evaluate a policy by rolling out the dynamic programming tree for a certain number of steps $m$) on the accuracy of approximate policy iteration techniques. The algorithms we study in this paper are closely related to least-squares policy iteration (LSPI) \cite{parr} and approximate policy iteration (PI); see \cite{bertsekastsitsiklis, bertsekas2019reinforcement}. In the analysis of approximate PI, it is assumed that the policy evaluation and improvement steps have bounded errors, and using these, an error bound is obtained for the algorithm which repeatedly uses approximate policy evaluation and improvement. LSPI is an algorithm that builds on approximate PI where the policy evaluation step uses a least-squares algorithm to estimate the value function for the entire state space using the value function evaluated at a few states. However, the bounds presented in \cite{parr} are simply a special case of the bounds for generic approximate PI, and do not explicitly take into account the details of the implementation of least-squares-based policy evaluation. When such details are taken into account, it turns out that the roles of the depth of lookahead ($H$) and rollout ($m$) become important, and their impact on the error bounds for approximate policy iteration has not been characterized in prior work. In this paper, on the other hand, we assume that policies are evaluated at a few states using an $m$-step rollout and, as a result, convergence of the algorithm is not guaranteed in general. Additionally, we show that the effect of function approximation can be mitigated using lookahead in the policy improvement step. The use of a partial rollout in our algorithm also makes our work similar to modified policy iteration \cite{Puterman1978ModifiedPI}, which is also called optimistic policy iteration \cite{bertsekastsitsiklis}. To the best of our knowledge, none of these prior works consider the impact of using gradient descent to implement an approximate version of least-squares policy evaluation within approximate PI. Thus, our algorithm and analysis can be viewed as a detailed look at approximate PI and modified PI when linear function approximation, least-squares policy evaluation, and gradient descent are used to evaluate policies.
Our contributions are as follows:
\begin{itemize}
\item We examine the impact of lookahead and $m$-step rollout on approximate policy iteration with linear function approximation. As is common in practice, we assume that we evaluate an approximate value function only for some states at each iteration. We obtain performance bounds for our algorithm under the assumption that the sum of the lookahead and the number of steps in the $m$-step rollout is sufficiently large. We demonstrate through an extension of a counterexample in \cite{Tsitsiklis94feature-basedmethods} that such a condition is necessary, in general, for convergence with function approximation, unlike in the tabular setting of \cite{efroni2019combine}. See Appendix \ref{appendix:counterexAppendix} for our counterexample.
\item For ease of exposition, we first present the case where one solves a least-squares problem at each iteration to obtain the weights associated with the feature vectors in the function approximation of the value function. Our performance bounds in this case strictly generalize the bounds in \cite{parr} and \cite{bertsekas2019reinforcement} for approximate PI.
\item We then consider a more practical and widely-used scheme where several steps of gradient descent are used to update the weights of the value function approximation at each iteration. Obtaining performance bounds for the gradient descent algorithm is more challenging and these bounds can be found in Section \ref{SectionGD}.
\item Our results show that the sufficient conditions on the hyperparameters (such as the amount of lookahead, rollout, gradient descent parameters) of the algorithm required for convergence either do not depend on the size of the state space or depend only logarithmically on the size of the state space. \color{black}
\item From a theoretical perspective, our analysis shows that one can improve the upper bound on the error in approximate policy iteration from $1/(1-\alpha)^2$ (see \cite{parr}, \cite{bertsekas2019reinforcement}) to $1/\big((1-\alpha^{H})(1-\alpha)\big)$ by using lookahead, where $\alpha$ is the discount factor and $H$ is the amount of lookahead used.
\item In addition to asymptotic performance bounds, we also provide finite-time guarantees for our algorithm. Our finite-time bounds show that our algorithm converges exponentially fast in the case of least-squares as well as the case where a fixed number of gradient descent steps are performed in each iteration of the algorithm.
\end{itemize}
\subsection{Other Related Work}
The recent work in \cite{efroni2019combine} considers a variant of policy iteration that utilizes lookahead and approximate policy evaluation using an $m$-step rollout (see Section \ref{section2} for definitions of these terms). As stated in the motivation in \cite{efroni2019combine}, it is well known that Monte Carlo Tree Search (MCTS) \cite{kocisszepesvari, browne} works well in practice
even though the worst-case complexity can be exponential \cite{shah2020nonasymptotic}; see \cite{munosbook} for some analysis of MCTS in MDPs where the number of states that can be visited from a given state is bounded. Motivated by policy iteration, the algorithm in \cite{efroni2019combine} estimates the value function associated with a policy and aims to improve the policy at each step. Policy improvement is achieved by obtaining the ``greedy'' policy in the case of policy iteration or a lookahead policy in the work of \cite{efroni2019combine}, which involves applying the Bellman operator several times to the current iterate before obtaining the greedy policy. The idea is that the application of the Bellman operator several times gives a more accurate estimate of the optimal value function. Then, similarly to policy iteration, the algorithm in \cite{efroni2019combine} aims to evaluate the new policy. The algorithm in \cite{efroni2019combine} uses an $m$-step rollout to compute the value function associated with a policy, i.e., it applies the Bellman operator associated with the policy $m$ times.
The work of \cite{efroni2019combine} establishes that lookahead can significantly improve the rate of convergence if one uses the value function computed using lookahead in the approximate policy evaluation step. However, their paper does not study the use of function approximation, which is critical to handling large state spaces, nor does it quantify the effect of varying $m$ on the convergence of their algorithm.
Now, we compare our work to other papers in the literature. The role of lookahead and rollout in improving the performance of RL algorithms has also been studied in a large number of papers including \cite{shahxie, moerland2020framework, efroni2020online, tomar2020multistep, efroni2018multiplestep, springenberg2020local, 9407870}. The works of \cite{baxter, veness, lanctot2014monte} explore the role of tree search in RL algorithms. However, to the best of our knowledge, the amount of lookahead and rollout needed as a function of the feature vectors has not been quantified in prior works.
The works of \cite{Bertsekas2011ApproximatePI} and \cite{bertsekas2019reinforcement} also study a variant of policy iteration wherein a greedy policy is evaluated approximately using feature vectors at each iteration. These papers also provide rates of convergence as well as a bound on the approximation error. However, our main goal is to understand the relations between function approximation and lookahead/rollout which are not considered in these other works.
\section{Preliminaries} \label{section2}
We consider a Markov Decision Process (MDP), which is defined to be a 5-tuple $(\scriptS, \scriptA, P, r, \alpha)$. The finite set of states of the MDP is $\scriptS$, and the finite set of actions associated with the MDP is $\scriptA$. Let $P_{ij}(a)$ be the probability of transitioning from state $i$ to state $j$ when taking action $a \in \scriptA$. We denote by $s_k$ the state of the MDP and by $a_k$ the corresponding action at time $k$. We associate with state $s_k$ and action $a_k$ a non-deterministic reward $r(s_k, a_k) \in [0, 1]$ for all $s_k \in \scriptS, a_k \in \scriptA;$ in particular, the rewards are uniformly bounded. Our objective is to maximize the cumulative discounted reward with discount factor $\alpha \in (0, 1).$
Towards this end, we seek to find a deterministic policy $\mu$ which associates with each state $s\in \scriptS$ an action $\mu(s) \in \scriptA$. For every policy $\mu$ and every state $s \in \scriptS$ we define $J^{\mu}(s)$ as follows:
\begin{align*}
J^{\mu}(s) := E\Bigg[\sum_{k=0}^\infty \alpha^k r(s_k, \mu(s_k))\,\Bigg|\,s_0=s\Bigg].
\end{align*}
We define the optimal reward-to-go $J^*$ as
$J^*(s) := \underset{\mu}\max J^\mu(s).$ The objective is to find a policy $\mu$ that maximizes $J^\mu(s)$ for all $s \in \scriptS$. Towards the objective, we associate with each policy $\mu$ a function $T_\mu: \mathbb{R}^{|\scriptS|} \to \mathbb{R}^{|\scriptS|}$ where for $J \in \mathbb{R}^{|\scriptS|},$ the $s$th component of $T_{\mu}J$ is
\begin{align*}
(T_\mu J)(s) = r(s, \mu(s)) + \alpha \sum_{j=1}^{|\scriptS|} P_{sj}(\mu(s)) J(j),
\end{align*} for all $s \in \scriptS$. If the function $T_{\mu}$ is applied $m$ times to a vector $J \in \mathbb{R}^{|\scriptS|},$ then we say that we have performed an $m$-step rollout of the policy $\mu$, and the result of the rollout, $T^m_\mu J$, is called the return.
Similarly, we define the Bellman operator $T: \mathbb{R}^{|\scriptS|} \to \mathbb{R}^{|\scriptS|}$ with the $s$th component of $TJ$ being
\begin{align}
(TJ)(s) = \underset{a \in \scriptA}\max \Bigg \{ r(s, a) + \alpha \sum_{j=1}^{|\scriptS|} P_{sj}(a)J(j) \Bigg \}. \label{T}
\end{align}
The policy corresponding to the $T$ operator is defined as the \textit{greedy} policy. If the operator $T$ is applied $H$ times to a vector $J \in \mathbb{R}^{|\scriptS|},$ we call the result, $T^H J$, the $H$-step ``lookahead'' corresponding to $J$. The greedy policy corresponding to $T^H J$ is called the $H$-step lookahead policy, or the lookahead policy when $H$ is understood. More precisely, given an estimate $J$ of the value function, the lookahead policy is the policy $\mu$ such that $T_\mu(T^{H-1} J)=T(T^{H-1} J).$
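For concreteness, the following Python sketch implements these operators for a finite MDP with known transition matrices. The data layout (\texttt{P[a]} an $|\scriptS| \times |\scriptS|$ matrix per action, \texttt{r[s, a]} the expected rewards) and the function names are our illustrative choices:
\begin{verbatim}
import numpy as np

def T_mu(J, mu, P, r, alpha):
    """One application of the policy operator T_mu."""
    return np.array([r[s, mu[s]] + alpha * P[mu[s]][s] @ J
                     for s in range(len(J))])

def T(J, P, r, alpha):
    """One application of the Bellman operator T (max over actions)."""
    S, A = r.shape
    Q = np.array([[r[s, a] + alpha * P[a][s] @ J for a in range(A)]
                  for s in range(S)])
    return Q.max(axis=1)

def m_step_return(J, mu, P, r, alpha, m):
    """m-step rollout of policy mu: T_mu^m J."""
    for _ in range(m):
        J = T_mu(J, mu, P, r, alpha)
    return J

def lookahead_policy(J, P, r, alpha, H):
    """H-step lookahead policy: greedy with respect to T^{H-1} J."""
    for _ in range(H - 1):
        J = T(J, P, r, alpha)
    S, A = r.shape
    Q = np.array([[r[s, a] + alpha * P[a][s] @ J for a in range(A)]
                  for s in range(S)])
    return Q.argmax(axis=1), J    # returns mu and T^{H-1} J
\end{verbatim}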
It is well known that each time the Bellman operator is applied to a vector $J$ to obtain $TJ,$ the following holds:
\begin{align*}
\norm{TJ-J^*}_\infty\leq \alpha\norm{J-J^*}_\infty.
\end{align*} Thus, applying $T$ to obtain $TJ$ gives a better estimate of the value function than $J.$
The Bellman equations state that the vector $J^\mu$ is the unique solution to the linear equation
\begin{align}
J^\mu = T_\mu J^\mu. \label{bellman}
\end{align}
Additionally, we have that $J^*$ is a solution to
\begin{align*}
J^* = TJ^*.
\end{align*}
Note that every greedy policy w.r.t. $J^*$ is optimal and vice versa \cite{bertsekastsitsiklis}.
We will now state several useful properties of the operators $T$ and $T_\mu$. See \cite{bertsekastsitsiklis} for more on these properties. \color{black} Consider the vector $e \in \mathbb{R}^{|\scriptS|}$ with $e(i) = 1$ for all $i \in \{1, 2, \ldots, |\scriptS|\}.$ We have:
\begin{equation}
T(J + ce) = TJ + \alpha ce, \quad T_\mu(J + ce) = T_\mu J + \alpha ce. \label{eq:usefulproperties}
\end{equation}
Operators $T$ and $T_\mu$ are also monotone:
\begin{align}
J \leq J' \implies TJ \leq TJ', \quad T_\mu J \leq T_\mu J'. \label{monotonicityproperty}
\end{align}
\section{Least-Squares Function Approximation Algorithm}
\begin{algorithm}[tb]
\caption{Least-Squares Function Approximation Algorithm}
\label{alg:LSalg}
\textbf{Input}: $J_0, m,$ $H,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d$ and subsets $\scriptD_k \subseteq \scriptS, k = 0, 1, \ldots.$ Here $\scriptD_k$ is the set of states at which we evaluate the current policy at iteration $k.$
\begin{algorithmic}[1]
\STATE Let $k=0$.
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps_{LA}$.\\\label{step 2 alg}
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\ \label{step 3 alg}
\STATE Choose $\theta_{k+1}$ to solve
\begin{align}
\min_\theta \sum_{i \in D_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2, \label{step 4 alg}
\end{align} \\
where $\Phi$ is a matrix whose rows are the feature vectors.
\STATE $J_{k+1} = \Phi \theta_{k+1}.$
\STATE Set $k \leftarrow k+1.$ Go to 2.
\end{algorithmic}
\end{algorithm}
Our algorithm is described in Algorithm \ref{alg:LSalg}. We now explain our algorithm and the associated notation in detail. Due to the use of function approximation, our algorithm is an approximation to policy iteration with lookahead. At each iteration index, say, $k$, we have an estimate of the value function, which we denote by $J_k$. To obtain $J_{k+1}$, we perform a lookahead to improve the value function estimate at a certain number of states (denoted by $\scriptD_k$) which can vary with each iteration. For example, $\scriptD_k$ could be chosen as the states visited when performing a tree search to approximate the lookahead process. During the lookahead process, we note that we will also obtain an $H$-step lookahead policy, which we denote by $\mu_{k+1}$. As noted in the Introduction, the computation of $T^{H-1}(J_k)(i)$ for $i \in \scriptD_k$ in Step \ref{step 3 alg} of Algorithm \ref{alg:LSalg} may be computationally infeasible; however, as noted in \cite{efroni2019combine}, techniques such as Monte Carlo tree search (MCTS) are employed in practice to approximately estimate $T^{H-1}(J_k)(i).$ \color{black} In this paper, we model the fact that lookahead cannot be performed exactly due to the associated computational complexity by allowing an error in the lookahead process which we denote by $\eps_{LA}$ in Step~\ref{step 2 alg} of the algorithm.
We obtain estimates of $J^{\mu_{k+1}}(i)$ for $i \in \scriptD_k$, which we call $\hat{J}^{\mu_{k+1}}(i)$. To obtain an estimate of $J^{\mu_{k+1}}(i)$, we perform an $m$-step rollout with policy $\mu_{k+1}$, and obtain a noisy version of $T^m_{\mu_{k+1}}T^{H-1}J_k(i)$ for $i \in \scriptD_k.$ We also model the approximation error in the rollout by adding noise (denoted by $w_{k+1}(i)$ in Step~\ref{step 3 alg} of the algorithm) to the return (the result of the rollout; see Section \ref{section2}) computed at the end of this step. In order to estimate the value function for states not in $\scriptD_k$, we associate with each state $i \in \scriptS$ a feature vector $\phi(i)\in \mathbb{R}^d$ where typically $d \ll |\scriptS|$. The matrix comprised of the feature vectors as rows is denoted by $\Phi$. We use those estimates to find the best fitting $\theta \in \mathbb{R}^d$, i.e.,
\begin{align*}
\min_\theta \sum_{i \in D_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2.
\end{align*}
The solution to the above minimization problem is denoted by $\theta_{k+1}$. The algorithm then uses $\theta_{k+1}$ to obtain $J_{k+1} = \Phi \theta_{k+1}$. The process then repeats. Note that to compute $\hat{J}^{\mu_{k+1}}(i),$ we obtain noisy estimates of $T_{\mu_{k+1}}^m T^{H-1} J_k (i)$ for $i\in \scriptD_k.$ Another alternative is to instead obtain noisy estimates of $T_{\mu_{k+1}}^m J_k (i)$ for $i\in \scriptD_k.$ It was shown in \cite{efroni2019combine} that the former option is preferable because it has a certain contraction property. Thus, we have chosen to use this computation in our algorithm as well. However, we have shown in the longer version of this paper that the algorithm also has bounded error which becomes small if $m$ is chosen to be sufficiently large \cite{annaor}.
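Putting the pieces together, one iteration of Algorithm \ref{alg:LSalg} can be sketched in Python as follows, reusing the operator sketches from Section \ref{section2}; the rollout noise $w_{k+1}$ and the lookahead error $\eps_{LA}$ are omitted for simplicity, and the function name is ours:
\begin{verbatim}
import numpy as np

def ls_api_iteration(J_k, Phi, D_k, P, r, alpha, m, H):
    # Step 2: H-step lookahead policy mu_{k+1} and the vector T^{H-1} J_k
    mu, J_H = lookahead_policy(J_k, P, r, alpha, H)
    # Step 3: m-step rollout T_mu^m T^{H-1} J_k, needed only at states in D_k
    J_hat = m_step_return(J_H, mu, P, r, alpha, m)
    # Step 4: least-squares fit of the sampled returns
    Phi_D = Phi[D_k]
    theta, *_ = np.linalg.lstsq(Phi_D, J_hat[D_k], rcond=None)
    # Step 5: J_{k+1} = Phi theta_{k+1}
    return Phi @ theta, mu
\end{verbatim}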
\begin{remark}
We note that $\mu_{k+1}(i)$ in Step \ref{step 2 alg} of Algorithm \ref{alg:LSalg} does not have to be computed for all states $i\in \scriptS.$ The actions $\mu_{k+1}(i)$ have to be computed only for those $i\in\scriptS$ that are encountered in the rollout step of the algorithm (Step \ref{step 3 alg}).
\end{remark}
To analyze Algorithm \ref{alg:LSalg}, we make the following assumption which states that we explore a sufficient number of states during the policy evaluation phase at each iteration.
\begin{assumption}\label{assume 1 or}
For each $k \geq 0,$ $\text{rank}\, \{ \phi(i)\}_{i \in \scriptD_k} = d$.
\end{assumption}
We assume that the noise $w_k$ is bounded.
\begin{assumption}
For some $\eps_{PE} >0,$ the noise in policy evaluation satisfies $\norm{w_k}_\infty \leq \eps_{PE}$ for all $k$. \label{assume 2 or or}
\end{assumption}
We also assume that the rewards are bounded.
\begin{assumption}\label{assume 3 or}
$r(s,u) \in[0,1]$ for all $s \in \scriptS, u\in\scriptA.$
\end{assumption}
Using Assumption~\ref{assume 1 or}, $J_{k+1}$ can be written as
\begin{align}
J_{k+1} &= \Phi \theta_{k+1} =\underbrace{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_{=: \scriptM_{k+1}} \hat{J}^{\mu_{k+1}},\label{defMk}\end{align}
where $\Phi_{\scriptD_{k}}$ is a matrix whose rows are the feature vectors of the states in $\scriptD_{k}$ and $\scriptP_k$ is a matrix of zeros and ones such that $\scriptP_k\hat{J}^{\mu_{k+1}}$ is a vector whose elements are a subset of the elements of $\hat{J}^{\mu_{k+1}}$ corresponding to $\scriptD_k$. Note that $\hat{J}^{\mu_{k+1}}(i)$ for $i\notin\scriptD_k$ does not affect the algorithm, so we can define $\hat{J}^{\mu_{k+1}}(i)=T_{\mu_{k+1}}^m T^{H-1} J_k(i)$ for $i\notin\scriptD_k.$
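A short sketch of how $\scriptM_{k+1}$ and its induced $\infty$-norm can be computed from the features (this norm is the quantity $\delta_{FV}$ appearing in Theorem \ref{mainAPI} below; the function names are ours):
\begin{verbatim}
import numpy as np

def M_matrix(Phi, D_k):
    """M_{k+1} = Phi (Phi_D^T Phi_D)^{-1} Phi_D^T P_k, as in (defMk)."""
    S = Phi.shape[0]
    P_k = np.zeros((len(D_k), S))
    P_k[np.arange(len(D_k)), D_k] = 1.0   # row-selection matrix for D_k
    Phi_D = Phi[D_k]
    return Phi @ np.linalg.inv(Phi_D.T @ Phi_D) @ Phi_D.T @ P_k

def inf_norm(M):
    """Induced infinity-norm: maximum absolute row sum."""
    return np.abs(M).sum(axis=1).max()
\end{verbatim}
As a sanity check, for the two-state counterexample in Appendix \ref{appendix:counterexAppendix} ($\Phi = [1, 2]^\top$ with both states sampled), this norm evaluates to $6/5$, matching the value computed there.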
Written concisely, our algorithm is as follows:
\begin{equation}
J_{k+1} = \scriptM_{k+1} (T_{\mu_{k+1}}^m T^{H-1} J_k+w_{k+1}), \label{eq:iterateAPI}
\end{equation}
where $\mu_{k+1}$ is defined in Step 2 of the algorithm.
Since $w_{k+1}(i)$ for $i\notin\scriptD_k$ does not affect the algorithm, we define $w_{k+1}(i)=0$ for $i\notin\scriptD_k.$
Now we will state our theorem which characterizes the role of lookahead ($H$) and return ($m$)
on the convergence of approximate policy iteration with function approximation.
\begin{theorem}\label{mainAPI}
Suppose that $m$ and $H$ satisfy $m + H -1>\log (\delta_{FV})/\log (1/\alpha),$ where
$$\delta_{FV} := \sup_k \norm{\scriptM_k}_\infty = \sup_k\norm{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_\infty.$$
Then, under Assumptions \ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \underbrace{\frac{\alpha^{kH} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}k\max(\alpha^{H},\beta)^{k-1}}_{ \text{ finite-time component }} +\underbrace{\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}}_{ \text{ asymptotic component }}.
\label{eq:mainAPI bound}
\end{align}
where $$\tau:= \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE},$$
$$\beta:=\alpha^{m+H-1} \delta_{FV},$$
and
$$
\delta_{app} := \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty.
$$
\end{theorem}
The proof of Theorem \ref{mainAPI} can be found in Appendix \ref{appendix:thm1}. We now make several comments about the implications of Theorem \ref{mainAPI}:
\begin{enumerate}
\item In conjunction with the counterexample in Appendix \ref{appendix:counterexAppendix}, Theorem \ref{mainAPI} shows that while $\norm{J^{\mu_k}-J^*}_\infty$ depends on the function approximation error ($\delta_{app}$) and the feature vectors ($\delta_{FV}$), the effect of these terms diminishes exponentially as $H$ increases; the only error that does not diminish with $H$ is the tree search error ($\eps_{LA}$).
\item It is useful to compare our asymptotic bound with the asymptotic bound for approximate PI in \cite{bertsekas2019reinforcement}. There, it is assumed that $m=\infty$, $\eps_{PE}=0$ and $H=1,$ in which case our bound is identical to the one in \cite{bertsekas2019reinforcement}. However, when $H>1,$ our asymptotic error is proportional to $1/((1-\alpha)(1-\alpha^{H}))$, which is much better than the $1/(1-\alpha)^2$ bound for approximate policy iteration \cite{bertsekas2019reinforcement}. When $\alpha\rightarrow 1,$ the expected discounted reward is of the order of $1/(1-\alpha),$ thus the $1/(1-\alpha)^2$ bound on the error is generally considered to be very loose. Our result shows that the use of lookahead significantly improves this error bound; a numerical illustration of this improvement is given in the sketch following this list. Additionally, our bound is able to capture situations where a full rollout (i.e., $m=\infty$) is impossible to perform.
\end{enumerate}
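As a numerical illustration of the comparison in item 2 above, the following sketch (with arbitrary, hypothetical values for the error constants) evaluates the asymptotic component of the bound in \eqref{eq:mainAPI bound} for increasing lookahead $H$:
\begin{verbatim}
import numpy as np

alpha, m = 0.9, 5
delta_FV, delta_app = 1.2, 0.1    # hypothetical error constants
eps_PE, eps_LA = 0.01, 0.01

for H in [1, 2, 5, 10]:
    beta = alpha**(m + H - 1) * delta_FV
    assert beta < 1    # i.e., m + H - 1 > log(delta_FV) / log(1/alpha)
    tau = (alpha**m + alpha**(m + H - 1)) / (1 - alpha) * delta_FV \
          + delta_app + delta_FV * eps_PE
    asym = (2 * alpha**H * tau / (1 - beta) + eps_LA) \
           / ((1 - alpha**H) * (1 - alpha))
    print(H, asym)   # the asymptotic component shrinks as H grows
\end{verbatim}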
The proof of Theorem \ref{mainAPI} is closely related to the proofs of Theorems \ref{theorem 2 or}-\ref{theorem 3 or}; the latter are presented in the next section, while we defer the proof of Theorem \ref{mainAPI} to Appendix \ref{appendix:thm1}. We note that the above result is fundamentally different from the conclusion of Theorem 4 in \cite{efroni2019combine}, where a condition on $m$ and $H$ is required for convergence when one uses $J_k$ instead of $T^{H-1}J_k$ in Step 2 of the algorithm. Here, we have shown that, even when one uses $T^{H-1}J_k,$ a large $m+H$ may be needed for convergence due to the use of function approximation.
We can additionally characterize the approximation error of our iterates, $J_k$, by computing bounds on the asymptotic error $\limsup_{k \to \infty} \norm{J_k - J^*}_\infty.$ The bounds along with their derivations can be found in the longer version of this paper, \cite{annaor}.
It is important to note that the upper bounds on $\norm{J^{\mu_k}-J^*}_\infty$ and $\norm{J_k - J^*}_\infty$ illustrate that $J^{\mu_k}$ approximates $J^*$ much better than $J_k$ does. Thus, algorithms need not wait for the value function estimates to converge before the corresponding policies reach near optimality.
In \cite{bertsekas2021lessons}, it is noted that, in reinforcement learning to play computer games or board games, it is not uncommon during training to get a relatively crude estimate of the value function, which is improved by lookahead and $m$-step return during actual game play. Our analysis would also apply to this situation -- we have not explicitly differentiated between training and game play in our analysis.
Theorem \ref{mainAPI} can be used to make the following observation: how close $J^{\mu_k}$ is to $J^*$ depends on four factors -- the representation power of the feature vectors and the feature vectors themselves ($\delta_{app}, \delta_{FV}$), the amount of lookahead ($H$), the extent of the rollout ($m$) and the approximation in the policy determination and policy evaluation steps ($\eps_{LA}$ and $\eps_{PE}$). Further, it is easy to see that lookahead and rollout help mitigate the effect of feature vectors and their ability to represent the value functions.
\section{Gradient Descent Algorithm} \label{SectionGD}
Solving the least-squares problem in Algorithm~\ref{alg:LSalg} involves a matrix inversion, which can be computationally difficult. So we propose an alternative algorithm which performs $\eta_k$ steps of gradient descent with stepsize $\gamma$ at each iteration $k$, where the gradient refers to the gradient of the least-squares objective in Step \ref{step 4 alg} of Algorithm~\ref{alg:LSalg}.
The gradient descent-based algorithm is presented in Algorithm~\ref{alg:GDalg}.
\begin{algorithm}[tb]
\caption{Gradient Descent Algorithm}
\label{alg:GDalg}
\textbf{Input}: $\theta_0, m,$ $H,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d,$ and $\scriptD_k,$ which is the set of states for which we evaluate the current policy at iteration $k.$
\begin{algorithmic}[1]
\STATE $k=0, J_0 = \Phi \theta_0$. \\
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps_{LA}$. \\
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} J_k(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\
\STATE \label{step 4 unbiased} $\theta_{k+1, 0} := \theta_k.$ For $\ell = 1, 2, \ldots, \eta_{k+1},$ iteratively compute the following:
\begin{align}
\theta_{k+1, \ell} &= \theta_{k+1,\ell-1} - \gamma \nabla_\theta c(\theta;\hat{J}^{\mu_{k+1}})|_{\theta_{k+1,\ell-1}}, \label{eq:iterthetaGD}
\end{align} where
\begin{align*}
c(\theta;\hat{J}^{\mu_{k+1}}) := \frac{1}{2}\sum_{i \in \scriptD_k } \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2,
\end{align*} \\
and $\Phi$ is a matrix whose rows are the feature vectors.\\
\STATE Define
\begin{align*}
\theta_{k+1} &= \theta_{k+1,\eta_{k+1}},
\end{align*} and set $$J_{k+1} = \Phi \theta_{k+1}.$$
\STATE Set $k \leftarrow k+1.$ Go to Step 2.
\end{algorithmic}
\end{algorithm}
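A minimal Python sketch of the gradient descent inner loop in Step \ref{step 4 unbiased} is given below (a sketch only, assuming that the targets $\hat{J}^{\mu_{k+1}}(i)$ for $i \in \scriptD_k$ have already been computed, e.g., via rollouts):
\begin{verbatim}
import numpy as np

def gd_policy_evaluation(theta, Phi, D_k, targets, gamma, eta):
    """Performs eta steps of gradient descent on the least-squares
    objective c(theta); targets[j] plays the role of the estimate
    J-hat^{mu_{k+1}}(i) for the j-th state i in D_k."""
    Phi_D = Phi[D_k, :]
    y = np.asarray(targets)
    for _ in range(eta):
        grad = Phi_D.T @ (Phi_D @ theta - y)  # gradient of c at theta
        theta = theta - gamma * grad
    return theta
\end{verbatim}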
In order to present our main result for the gradient descent version of our algorithm, we define $\tilde{\theta}^{\mu_k}$ for any policy $\mu_k$ which will be used in the proof of the theorem:
\begin{align}
\tilde{\theta}^{\mu_k} \nonumber&:= \arg\min_\theta \frac{1}{2}\norm{\Phi_{\scriptD_{k-1}} \theta - \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1} J_{k-1}+w_k)}_2^2.
\end{align}
Note that
\begin{align}
\Phi\tilde{\theta}^{\mu_k}= \scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k), \label{eq: theta tilde muk or}
\end{align}
where $\scriptM_k$ is defined in \eqref{defMk}.
Thus, $\tilde{\theta}^{\mu_k}$ represents the function approximation of the estimate of $J^{\mu_k}$ obtained from the $m$-step return.
We now present two propositions which will be used in the subsequent theorems to obtain bounds
on the convergence of approximate policy iteration with function approximation when gradient descent is employed to approximately minimize the least-squares objective at each iteration.
\begin{proposition}\label{prop1}
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H} \norm{J^{\mu_k}-J^*}_\infty + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha},\label{eq: jmu-jk or}
\end{align}
where
\begin{align}
\norm{J^{\mu_{k}}-J_k}_\infty \leq
a_k \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +b_k,\label{eq: ineq 2 or}
\end{align}
$$a_k := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_k := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}},
$$
where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi$ and $\alpha_{GD, \gamma}:=\sup_k \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})|,$ where $\lambda_i$ denotes the $i$-th largest eigenvalue of a matrix.
\end{proposition}
The proof of Proposition \ref{prop1} is presented later in this section. Using Proposition \ref{prop1} and iterating in the special case where $a_k$ is upper bounded by a constant $a$ with $0<a<1$ and $b_k$ is upper bounded by a constant $b>0$, we get the following:
\begin{proposition}\label{prop2}
When $a_k \leq a$ for all $k$ for some constant $0<a<1,$ and $b_k \leq b$ for all $k$ for some constant $b>0$, the following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty\nonumber &\leq \frac{\alpha^{k(H)}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}k \max(\alpha^{H},a)^{k-1}\norm{J^{\mu_0}-J_0}_\infty +\frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}.
\end{align}
\end{proposition}
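The bound in Proposition \ref{prop2} rests on the linear recursion $x_k \leq a\,x_{k-1} + b$; a minimal numerical sketch (with arbitrary, hypothetical constants) illustrates the geometric approach of such a recursion to its asymptote $b/(1-a)$:
\begin{verbatim}
a, b = 0.8, 0.05    # hypothetical constants with 0 < a < 1 and b > 0
x = 2.0             # plays the role of ||J^{mu_0} - J_0||_inf
for k in range(100):
    x = a * x + b
print(x, b / (1 - a))   # x has essentially reached b/(1-a) = 0.25
\end{verbatim}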
Using Proposition \ref{prop2}, we can get Theorems \ref{theorem 2 or} and \ref{theorem 3 or} which give us finite-time and asymptotic bounds for the cases where $\eta_k$ is constant and $\eta_k$ is increasing to infinity, respectively:
\begin{theorem}\label{theorem 2 or}
Suppose that $\gamma, m,$ and $H$ satisfy
\begin{align}
\gamma < \frac{1}{d\sup_k \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2}, \label{eq: gamma assumption or}
\end{align}
and
\begin{align}
m + H >1+\log (2\delta_{FV})/\log (1/\alpha).\label{eq: m H assumption or}
\end{align}
Furthermore, consider the case where $\eta_k$ is a constant, which we call $\eta,$ where $$\eta>\log (\frac {3\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}})/\log (1/\alpha_{GD, \gamma}).$$
Then, under Assumptions~\ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \underbrace{\frac{\alpha^{k(H)}}{1-\alpha} + \frac{2\alpha^H}{(1-\alpha)^2}k \max(\alpha^{H},a_\eta)^{k-1}}_{ \text{ finite-time component }} + \underbrace{\frac{2\alpha^H \frac{b_\eta}{1-a_\eta}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}}_{ \text{ asymptotic component }}, \label{eq: case a bound}
\end{align}
where
$$a_\eta := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_\eta := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}.
$$
Taking limits on both sides as $k \to \infty,$ we have the following asymptotic bound:
\begin{align*}
\limsup_{k\to \infty} \norm{J^{\mu_k}-J^*}_\infty \leq \frac{2\alpha^H \frac{b_\eta}{1-a_\eta}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}.
\end{align*}
\end{theorem}
\begin{theorem}\label{theorem 3 or}
Consider $\eta_k$ where $\eta_k$ is increasing and $\eta_k \to \infty.$ Suppose that
$\gamma, m,$ and $H$ satisfy \eqref{eq: gamma assumption or} and \eqref{eq: m H assumption or}. Then, under Assumptions~\ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align}
\limsup_{k\to \infty} \norm{J^{\mu_k}-J^*}_\infty \leq \frac{2\alpha^H \frac{b^*}{1-a^*}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}, \label{eq: recover ls}
\end{align}
where
$$a^*:= \alpha^{m+H-1} \delta_{FV}$$ and
$$
b^* :=\frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}.
$$
\end{theorem}
Before we present the proofs of Propositions \ref{prop1}-\ref{prop2} and Theorems \ref{theorem 2 or}-\ref{theorem 3 or}, we make the following remarks:
\begin{itemize}
\item In the case where $\eta_k$ is constant, i.e., $\eta_k = \eta$, when $\gamma$ is sufficiently small and $\eta$ is sufficiently large, we have an exponential rate of convergence to the asymptotic error, provided that $m$ and $H$ are sufficiently large.
\item As $\eta$ increases, the asymptotic error becomes smaller until it reaches the asymptotic error of the least-squares algorithm; i.e., as $\eta \rightarrow \infty$, we recover the asymptotic error of Algorithm \ref{alg:LSalg}.
\item If it is difficult to ascertain whether $\eta$ is sufficiently large, one can instead use an increasing sequence $\eta_k$ with $\eta_k \to \infty.$ For such a sequence, the asymptotic error term is the same as that of the least-squares algorithm.
\end{itemize}
\proof{Proof of Proposition \ref{prop1}}
We break the proof of the proposition into three steps.
\noindent\textit{Step 1:}
In this step, we use the fact that $\theta_{k}$ is obtained by taking $\eta_{k}$ steps of gradient descent toward $\tilde{\theta}^{\mu_{k}}$, beginning from $\theta_{k-1}$, to show that the following holds:
\begin{align*}
\norm{\theta_{k} - \tilde{\theta}^{\mu_{k}}}_2
\leq \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k-1} - \tilde{\theta}^{\mu_{k}}}_2,
\end{align*}
where $\alpha_{GD, \gamma}:=\sup_k \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})|,$ where $\lambda_i$ denotes the $i$-th largest eigenvalue of a matrix.
We note that since $$0 < \lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}) \leq \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_2^2 \leq d \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2 \leq d \sup_k \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2,$$
$\alpha_{GD, \gamma}<1$ when $\gamma < \frac{1}{d\sup_k \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2}$.
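For intuition, the contraction factor $\alpha_{GD, \gamma}$ can be computed directly from the eigenvalues of $\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}$, as in the following sketch (the matrix $\Phi_{\scriptD_k}$ and the step size are hypothetical):
\begin{verbatim}
import numpy as np

Phi_D = np.array([[1.0, 0.0],   # hypothetical Phi_{D_k}
                  [1.0, 1.0],
                  [2.0, 1.0]])
A = Phi_D.T @ Phi_D
gamma = 0.9 / np.linalg.eigvalsh(A).max()  # below 1/lambda_max(A)
alpha_GD = np.abs(1 - gamma * np.linalg.eigvalsh(A)).max()
print(alpha_GD)   # < 1: the iterates contract toward theta-tilde
\end{verbatim}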
\textit{Proof of Step 1:} Recall that the iterates in Equation \eqref{eq:iterthetaGD} can be written as follows:
\begin{align*}
\theta_{k,\ell} &= \theta_{k,\ell-1} - \gamma \nabla_\theta c(\theta;\hat{J}^{\mu_{k}})|_{\theta_{k,\ell-1}} =\theta_{k,\ell-1} - \gamma \Big( \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} \theta_{k,\ell-1} - \Phi_{\scriptD_{k-1}}^\top \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k})\Big).
\end{align*}
Since
\begin{align*}0 &= \nabla_\theta c(\theta;\hat{J}^{\mu_{k}})|_{\tilde{\theta}^{\mu_{k}}}= \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} \tilde{\theta}^{\mu_{k}} - \Phi_{\scriptD_{k-1}}^\top \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k}),
\end{align*}
we have the following:
\begin{equation*}
\begin{array}{lll}
\theta_{k,\ell} &=& \theta_{k,\ell-1} - \gamma \Big( \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} \theta_{k,\ell-1} - \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} \tilde{\theta}^{\mu_{k}} - \Phi_{\scriptD_{k-1}}^\top \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k}) \\&+& \Phi_{\scriptD_{k-1}}^\top \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k})\Big) \\
&=& \theta_{k,\ell-1} - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}).
\end{array}
\end{equation*}
Subtracting $\tilde{\theta}^{\mu_{k}}$ from both sides gives:
\begin{align*}
\theta_{k,\ell} - \tilde{\theta}^{\mu_{k}} &= \theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}} - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}})\\&= (I - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}}) (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}).
\end{align*}
Thus,
\begin{align*}
\norm{\theta_{k,\ell} - \tilde{\theta}^{\mu_{k}}}_2&= \norm{(I - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}}) (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}})}_2\\&\leq \norm{I - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}}}_2 \norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2\\
&\leq \max_i |\lambda_i (I - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}})|\norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2\\
&\leq \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}})|\norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2 \\
&\leq \underbrace{\sup_k \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})|}_{=: \alpha_{GD, \gamma}}\norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2,
\end{align*} where $\lambda_i$ denotes the $i$-th largest eigenvalue of a matrix.
Iterating over $\ell,$ the following holds:
\begin{align*}
\norm{\theta_{k} - \tilde{\theta}^{\mu_{k}}}_2 &=\norm{\theta_{k,\eta_{k}} - \tilde{\theta}^{\mu_{k}}}_2\\
&\leq \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k,0} - \tilde{\theta}^{\mu_{k}}}_2\\
&= \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k-1} - \tilde{\theta}^{\mu_{k}}}_2.
\end{align*}
\textit{Step 2}: Using Step 1 and matrix norm properties, we obtain the following bound on $\norm{J^{\mu_k}-J_k}_\infty:$
\begin{align}
\norm{J^{\mu_{k}}-J_k}_\infty \leq
a_k \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +b_k,\label{eq: first ineq or}
\end{align}
$$a_k := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_k := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}.
$$
\textit{Proof of Step 2:}
Using equivalence and sub-multiplicative properties of matrix norms, we have the following:
\begin{alignat*}{2}
&\frac{1}{\norm{\Phi}_\infty} \norm{\Phi \theta_k-\Phi \tilde{\theta}^{\mu_{k}}}_\infty &&\leq \norm{\theta_k - \tilde{\theta}^{\mu_{k}}}_\infty
\\& &&\leq \norm{\theta_k - \tilde{\theta}^{\mu_{k}}}_2
\\& &&\leq \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k-1} - \tilde{\theta}^{\mu_{k}}}_2
\\& &&\leq \frac{1}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}} \norm{\Phi\theta_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_2
\\& &&\leq \frac{\sqrt{|S|}}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}} \norm{\Phi\theta_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty\\
&\implies \norm{J_k-\Phi \tilde{\theta}^{\mu_{k}}}_\infty&&\leq \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}} \norm{J_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty ,
\end{alignat*}
where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi$ and the last line follows from the fact that $J_k := \Phi \theta_k.$
The above implies the following:
\begin{align}
\norm{J^{\mu_{k}}-J_k}_\infty &\leq \norm{\Phi\tilde{\theta}^{\mu_{k}}-J^{\mu_k}}_\infty +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}\norm{J_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty\nonumber\\
&= \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)-J^{\mu_k}}_\infty +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}\norm{J_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty,\label{eq: label 1 or}
\end{align}
where the equality follows from \eqref{eq: theta tilde muk or}.
Now we bound $\norm{J_{k-1}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty$ as follows:
\begin{align}
\norm{J_{k-1}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty
\nonumber&\leq \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \norm{ J^{\mu_{k-1}}-J^{\mu_{k}}}_\infty + \norm{J^{\mu_{k}}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty \\
\nonumber&\leq \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \frac{1}{1-\alpha} + \norm{J^{\mu_{k}}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty
\\
&\leq \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \frac{1}{1-\alpha} + \norm{J^{\mu_{k}}-\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)}_\infty, \label{eq: label 2 or}
\end{align}
where the last line follows from \eqref{eq: theta tilde muk or}. We introduce Lemma \ref{mainGDLemma} to upper bound $\norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty$ as follows:
\begin{lemma}
\begin{align*}
\norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty
&\leq \alpha^{m+H-1} \delta_{FV} \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}.
\end{align*} \label{mainGDLemma}
\end{lemma}
The proof of Lemma \ref{mainGDLemma} is in Appendix \ref{appendix:mainGDLemmaProof}.
Putting \eqref{eq: label 1 or}, \eqref{eq: label 2 or}, and Lemma \ref{mainGDLemma} together, we get the following:
\begin{align*}
\norm{J^{\mu_{k}}-J_k}_\infty \leq
a_k \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +b_k,
\end{align*}
\begin{align*}a_k := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}( \alpha^{m+H-1} \delta_{FV}+1) \end{align*} and
\begin{align*}
b_k := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}.
\end{align*}
\noindent\textit{Step 3:}
We will establish the following bound on $T_{\mu_{k+1}}T^{H}J^{\mu_k}$ using the contraction property of the Bellman operators and the properties in \eqref{eq:usefulproperties}:
\begin{align*}
-T_{\mu_{k+1}}T^{H}J^{\mu_k}
&\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H}J^{\mu_k} + \eps_{LA} e.
\end{align*}
Using properties in \eqref{eq:usefulproperties} and monotonicity, we will repeatedly apply $T_{\mu_{k+1}}$ to both sides and take limits to obtain the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H}J^{\mu_k}
&\geq - \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e.
\end{align*}
\textit{Proof of Step 3:} We begin by noting that
\begin{align}
-T_{\mu_{k+1}}T^{H}J^{\mu_k} &\leq -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k \nonumber+ T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps_{LA} e \nonumber\\
&= \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps_{LA} e \nonumber\\
&\overset{(b)}\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H}J^{\mu_k} + \eps_{LA} e, \nonumber
\end{align}
where $e$ is the vector of all $1$s, and (a) and (b) follow from the contraction properties of $T_{\mu_{k+1}}T^{H-1}$ and $T^H$, respectively.
The rest of the proof is similar to the arguments in \cite{bertsekastsitsiklis}, \cite{bertsekas2019reinforcement}; however, we have to explicitly incorporate the role of lookahead ($H$) in the remaining steps of the proof. Suppose that we apply the $T_{\mu_{k+1}}$ operator $\ell$ times to both sides. Then, due to monotonicity and the fact that $T_\mu(J+ce)=T_\mu(J)+\alpha ce$ for any policy $\mu,$ we have the following:
\begin{align*}
T_{\mu_{k+1}}^\ell T^{H}J^{\mu_k} \leq \alpha^{\ell } (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} )e + T_{\mu_{k+1}}^{\ell+1} T^{H} J^{\mu_k}.
\end{align*}
Using a telescoping sum, we get the following inequality:
\begin{align*}
T_{\mu_{k+1}}^j T^{H} J^{\mu_k} - T^{H}J^{\mu_k}
&\geq - \sum_{\ell = 1}^{j} \alpha^{\ell -1} (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} )e.
\end{align*}
Taking limits as $j \to \infty,$ we obtain the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H}J^{\mu_k}
&\geq - \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e.
\end{align*}
The rest of the proof is straightforward.
Subtracting $J^*$ from both sides of the previous inequality and using the contraction property of $T$, we get:
\begin{align*}
&\alpha^{H} \norm{J^* - J^{\mu_k}}_\infty e + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e \\&\geq J^* - T^{H}J^{\mu_k} + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e
\\& \geq J^* - J^{\mu_{k}}\\
&\geq 0,
\end{align*}
which implies that
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H} \norm{J^{\mu_k}-J^*}_\infty + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha},\label{eq: ** 4}
\end{align} since $J^* \geq J^{\mu}$ for all policies $\mu$. The above, together with the inequality in \eqref{eq: first ineq or} gives us Proposition \ref{prop1}.
\Halmos
\endproof
We now prove Proposition \ref{prop2} by iterating over $k$ using Proposition \ref{prop1}.
\proof{Proof of Proposition \ref{prop2}}
First, noting the assumptions of Proposition \ref{prop2} that $a_k \leq a$ for all $k$ with $0<a<1,$ and $b_k \leq b$ for all $k$ with $b>0$, we iterate over $k$ using \eqref{eq: ineq 2 or} to obtain a bound on $\norm{J^{\mu_k}-J_k}_\infty$ as follows:
\begin{align}
\norm{J^{\mu_{k}}-J_{k}}_\infty \leq a^k \norm{J_0-J^{\mu_0}}_\infty + b\sum_{j=0}^{k-1} a^j. \label{eq: ineq 1 last or}
\end{align}
Now, we iterate over $k$ in \eqref{eq: jmu-jk or} to get the following:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty \nonumber&\leq \alpha^{k(H)} \norm{J^{\mu_0}-J^*}_\infty + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)(H)} \norm{J^{\mu_{\ell}}-J_\ell}_\infty + \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}\\
&\leq \frac{\alpha^{k(H)}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)(H)} \norm{J^{\mu_{\ell}}-J_\ell}_\infty + \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}.\label{eq: ineq 2 last or}
\end{align}
Combining the inequalities in \eqref{eq: ineq 1 last or} and \eqref{eq: ineq 2 last or} gives us the following:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty &\leq \frac{\alpha^{k(H)}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)(H)} \Big[a^\ell \norm{J_0-J^{\mu_0}}_\infty + b\sum_{j=0}^{\ell-1} a^j \Big] \nonumber\\&+ \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&\leq \frac{\alpha^{k(H)}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)(H)} \Big[ a^\ell \norm{J_0-J^{\mu_0}}_\infty + \frac{b}{1-a} \Big] + \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&\leq \frac{\alpha^{k(H)}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\norm{J_0 - J^{\mu_0}}_\infty \sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)(H)} a^\ell \nonumber+ \frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&\leq \frac{\alpha^{k(H)}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\norm{J_0 - J^{\mu_0}}_\infty \sum_{\ell = 0}^{k-1}\max(\alpha^{H},a)^{k-1} \nonumber+ \frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&= \frac{\alpha^{k(H)}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}k \max(\alpha^{H},a)^{k-1}\norm{J^{\mu_0}-J_0}_\infty +\frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})},\nonumber
\end{align}
where we have used $\sum_{j=0}^{\ell-1} a^j \leq \frac{1}{1-a}$ (which holds since $0<a<1$) and the bound $\alpha^{(k-\ell-1)(H)} a^\ell \leq \max(\alpha^{H},a)^{k-1}$.
\Halmos
\endproof
We now use Proposition \ref{prop2} to prove Theorems \ref{theorem 2 or}-\ref{theorem 3 or}.
\proof{Proof of Theorem \ref{theorem 2 or}}
When $\eta_k=\eta,$ the following holds:
$$a_k = a_\eta := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_k = b_\eta := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}.
$$
Note that from our assumptions on $m, \gamma,$ and $H$, $0<a_\eta<1$ and $0< b_\eta$.
To see that $a_\eta < 1$, observe from our assumptions in Theorem \ref{theorem 2 or} that \begin{align}\alpha^{m+H-1} \delta_{FV} < \frac{1}{2}\label{eq: ** or}\end{align} and $$\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1) < \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}\frac{3}{2}< \frac{1}{2}.$$
Putting the above two lines together gives us
\begin{align*}
\alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1)< 1.
\end{align*}
So, we can directly use Proposition \ref{prop2} to obtain \eqref{eq: case a bound}.
\Halmos
\endproof
\proof{Proof of Theorem \ref{theorem 3 or}}
Take any constant $0 <c^*< 1- \alpha^{m+H-1} \delta_{FV},$ where $c^*$ serves as a margin of error. Define $k(c^*)$ to be the smallest value of $k$ such that:
\begin{align} \eta_{k(c^*)} >\log\Big(\frac{1}{c^*}\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}}( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}+\frac{1}{1-\alpha})\Big)/\log(1/\alpha_{GD, \gamma}). \label{eq: c* assume or}
\end{align}
We know such a $k(c^*)$ exists since $\eta_k \to \infty$.
We define $a_{c^*}$ and $b_{c^*}$ as follows:
$$a_{c^*} := \alpha^{m+H-1} \delta_{FV} + c^*,$$
$$b_{c^*} := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE} + c^*.$$
It is easy to see from \eqref{eq: c* assume or} and the definitions of $a_k$ and $b_k$ in Proposition \ref{prop1} that $a_k \leq a_{c^*}$ and $b_k \leq b_{c^*}$ when $k > k(c^*),$ since $0<\alpha_{GD, \gamma}<1$ from our assumption on $\gamma$ in Theorem \ref{theorem 3 or}.
From our assumptions on $c^*$, $m, \gamma,$ and $H$, $0<a_{c^*}<1$ and $0< b_{c^*}$, where we use a similar technique as in the proof of Theorem \ref{theorem 2 or} with $\alpha^{m+H-1}\delta_{FV}< 1-c^*$ in place of \eqref{eq: ** or} to show that $a_{c^*}<1$.
So, we can use Proposition \ref{prop2} and begin iterating at $k(c^*)$ to obtain finite time bounds on $\norm{J^{\mu_{k}} - J^*}_\infty$ as follows:
\begin{align}
&\norm{J^{\mu_{k}} - J^*}_\infty
\nonumber \\&\leq \underbrace{\frac{\alpha^{(k-k(c^*))(H)}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}(k-k(c^*))\max(a_{c^*}, \alpha^{H})^{k-k(c^*)-1}\norm{J^{\mu_{k(c^*)}}-J_{k(c^*)}}_\infty}_{ \text{ finite-time component }}+ \underbrace{\frac{2\alpha^H \frac{b_{c^*}}{1-a_{c^*}}+\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}}_{ \text{ asymptotic component }}. \label{eq: fin time jmu nk}
\end{align}
Using Proposition \ref{prop1}, we can upper bound $\norm{J^{\mu_{k(c^*)}}-J_{k(c^*)}}_\infty$ as follows:
$$\norm{J^{\mu_{k(c^*)}}-J_{k(c^*)}}_\infty \leq \prod_{i=1}^{k(c^*)} a_{i}\norm{J_0-J^{\mu_0}}_\infty + \sum_{j=1}^{k(c^*)} b_j \prod_{i=j+1}^{k(c^*)} a_i,$$ where $a_i$ and $b_j$ are defined in Proposition \ref{prop1}.
Taking limits on both sides of \eqref{eq: fin time jmu nk}, we get the following:
\begin{align}
\limsup_{k\to \infty} \norm{J^{\mu_{k}} - J^*}_\infty
\nonumber\leq \frac{2\alpha^H \frac{b_{c^*}}{1-a_{c^*}}+\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}.
\end{align}
Since the above holds for all sufficiently small $c^* > 0$, letting $c^* \to 0$ gives Theorem \ref{theorem 3 or}.
\Halmos\endproof
Looking through the steps of our proof, one can obtain finite-time bounds for the case where $\eta_k$ is increasing. However, the algorithm consists of two loops: one corresponding to policy iteration and the other corresponding to gradient descent within each policy iteration step. It is hard to compare the relative complexities of each step within these loops. Therefore, a finite-time analysis does not shed much light into the amount of computations needed to execute the algorithm with an increasing sequence $\eta_k.$ However, it is interesting to note that for all $k>k(c^*),$ the algorithm converges exponentially fast in the number of policy iteration steps although the number of gradient descent steps within each policy iteration step is increasing.
\section{Conclusion}
Practical RL algorithms that deal with large state spaces implement some form of approximate policy iteration. In traditional analyses of approximate policy iteration, for example in \cite{bertsekas2019reinforcement}, it is assumed that there is an error in the policy evaluation step and an error in the policy improvement step. In this paper, we seek to understand the role of function approximation in the policy evaluation step and the associated changes that one has to make to the approximate policy iteration algorithm (such as lookahead) to counteract the effect of function approximation. Our main conclusion is that lookahead provides two benefits: (i) it mitigates the effects of function approximation, rollout and the choice of specific feature vectors, and (ii) from a theoretical perspective, it improves the upper bound on the error in approximate policy iteration from $1/(1-\alpha)^2$ to $1/((1-\alpha^{H})(1-\alpha)).$
Possible directions for future work include the following:
\begin{itemize}
\item For problems with a terminal state, it would be interesting to consider cases where the value function of a given policy is estimated using a full rollout which provides an unbiased estimate as in \cite{tsitsiklis2002convergence}.
\item In game playing applications, gradient descent is commonly used to estimate the value function, but temporal-difference learning is used in other applications. It would be interesting to extend our results to the case of TD learning-based policy evaluation.
\item While neural networks are not linear function approximators, recent results on the NTK analysis of neural networks suggest that they can be approximated as linear combinations of basis functions \cite{jacot2018neural,du2018gradient,arora2019fine,ji2019polylogarithmic, cao2019generalization}. Thus, to the extent that the NTK approximation is reasonable, our results can potentially shed light on why the combination of the representation capability of neural networks and tree-search methods work well in practice, although further work is necessary to make this connection precise.
\end{itemize}
\begin{APPENDICES}
\section{Proof of Lemma \ref{mainGDLemma}} \label{appendix:mainGDLemmaProof}
\proof{Proof of Lemma \ref{mainGDLemma}}
\begin{align*}
& \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty \\
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
&\leq\norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k}_\infty \norm{w_k}_\infty \\ \allowdisplaybreaks
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE} \\ \allowdisplaybreaks
&= \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1} - \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_k}_\infty \norm{ T_{\mu_k}^m T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m\norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^* + J^* - J^{\mu_k}}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m\norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^*}_\infty + \alpha^m\norm{\scriptM_k}_\infty \norm{J^* - J^{\mu_k}}_\infty + \delta_{app}+ \delta_{FV} \eps_{PE} \\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} - J^*}_\infty + \frac{\alpha^m}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} -J^{\mu_{k-1}} + J^{\mu_{k-1}}- J^*}_\infty + \frac{\alpha^m}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^{m+H-1} \delta_{FV} \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}.
\end{align*} \Halmos
\endproof
\section{Proof of Theorem \ref{mainAPI}}
\label{appendix:thm1}
\begin{remark}
The proof of Theorem \ref{mainAPI} is somewhat simpler than that of Theorems \ref{theorem 2 or}-\ref{theorem 3 or} but uses many of the same ideas.
The proof of Theorem \ref{mainAPI} skips Steps 1 and 2 in the proof of Proposition \ref{prop1} and instead uses Lemma \ref{mainGDLemma} to obtain an analogous result to Step 2. The rest of the proof of Proposition \ref{prop1} also applies to the proof of Theorem \ref{mainAPI}.
\end{remark}
\proof{Proof of Theorem \ref{mainAPI}}
Consider policies $\mu_k$ and $\mu_{k+1},$ where $\mu_0$ can be taken to be any arbitrary policy. We have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H}J^{\mu_k} &\leq -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k \nonumber+ T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps_{LA} e \nonumber\\
&= \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps_{LA} e \nonumber\\
&\overset{(b)}\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H}J^{\mu_k} + \eps_{LA} e, \label{eq: *}
\end{align}
where $e$ is the vector of all $1$s, and (a) and (b) follow from the contraction properties of $T_{\mu_{k+1}}T^{H-1}$ and $T^H$, respectively.
Since $\norm{J^{\mu_k}-J_k}_\infty= \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty$ from \eqref{eq:iterateAPI}, we can further bound $\norm{J^{\mu_k}-J_k}_\infty$ using Lemma \ref{mainGDLemma}. To rewrite the resulting recursion compactly, define $\beta \in (0, 1)$ as follows:
\begin{align}
\beta := \alpha^{m+H-1} \delta_{FV}, \label{eq:defbeta}
\end{align} where we note from our assumption in Theorem \ref{mainAPI} that $\alpha^{m+H-1} \delta_{FV}<1.$ Furthermore, we denote $\tau$ as follows:
\begin{align}
\tau := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}. \label{eq:defdelta3}
\end{align}
Then, our bound from Lemma \ref{mainGDLemma} can be rewritten as the following:
\begin{align*}
\norm{J^{\mu_k}-J_k}_\infty
&\leq \beta \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \tau.
\end{align*}
Iterating over $k$, we get that for any $k$,
\begin{align}
\norm{J^{\mu_k}-J_k}_\infty \leq \underbrace{(\beta)^k \norm{J^{\mu_0}-J_0}_\infty + \sum_{i=0}^{k-1} \beta^i \tau}_{=: f_k}. \label{eq: pound2}
\end{align}
Putting (\ref{eq: *}) and (\ref{eq: pound2}) together, we have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H}J^{\mu_k}
&\leq 2\alpha^H f_k e - T^{H}J^{\mu_k} + \eps_{LA} e.
\end{align}
The rest of the proof is similar to the arguments in \cite{bertsekastsitsiklis}, \cite{bertsekas2019reinforcement}; however, we have to explicitly incorporate the role of lookahead ($H$) in the remaining steps of the proof.
Suppose that we apply the $T_{\mu_{k+1}}$ operator $\ell$ times. Then, due to monotonicity and the fact that $T_\mu(J+ce)=T_\mu(J)+\alpha ce$ for any policy $\mu,$ we have the following:
\begin{align*}
-T_{\mu_{k+1}}^{\ell+1} T^{H}J^{\mu_k}
&\leq \alpha^\ell (2\alpha^{H} f_k +\eps_{LA})e - T_{\mu_{k+1}}^{\ell}T^{H}J^{\mu_k}.
\end{align*}
Using a telescoping sum, we get the following inequality:
\begin{align*}
T_{\mu_{k+1}}^j T^{H} J^{\mu_k} - T^{H}J^{\mu_k}
&\geq -\sum_{\ell = 1}^{j} \alpha^{\ell-1} (2\alpha^{H} f_k +\eps_{LA})e.
\end{align*}
Taking the limit as $j\rightarrow\infty$ on both sides, we have the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H}J^{\mu_k} \geq -\frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha}e.
\end{align*}
Rearranging terms and subtracting $J^*$ from both sides, we get the following:
\begin{align*}
\alpha^{H} \norm{J^* - J^{\mu_k}}_\infty e + \frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha}e \geq J^* - T^{H}J^{\mu_k} + \frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha}e \geq J^* - J^{\mu_{k+1}} \geq 0,
\end{align*}
which implies that
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H} \norm{J^{\mu_k}-J^*}_\infty + \frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha},
\end{align} since $J^* \geq J^{\mu}$ for all policies $\mu$.
We iterate over $k>0$ to get the following:
\begin{align*}
\norm{J^{\mu_{k}} - J^*}_\infty &\leq \alpha^{k(H)} \norm{J^{\mu_0}-J^*}_\infty + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H)}\frac{2\alpha^{H} f_\ell +\eps_{LA}}{1-\alpha}\\
&\leq \frac{\alpha^{k(H)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H)}\frac{2\alpha^{H} f_\ell +\eps_{LA}}{1-\alpha},
\end{align*}
where $$f_\ell := (\beta)^\ell \norm{J^{\mu_0}-J_0}_\infty + \sum_{i=0}^{\ell-1} \beta^i \tau,$$ with $\beta:= \alpha^{m+H-1} \delta_{FV}$ and $\tau := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}$ as defined in \eqref{eq:defbeta} and \eqref{eq:defdelta3}.
We will now obtain finite-time bounds of $\norm{J^{\mu_{k}} - J^*}_\infty$ using the above results. The following holds:
\begin{align*}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \frac{\alpha^{k(H)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H)}\frac{2\alpha^{H} f_\ell +\eps_{LA}}{1-\alpha} \\
&\leq \frac{\alpha^{k(H)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H)}\frac{2\alpha^{H} \Big(\beta^\ell \norm{J^{\mu_0}-J_0}_\infty + \frac{\tau}{1-\beta}\Big) +\eps_{LA}}{1-\alpha} \\
&\leq \frac{\alpha^{k(H)} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H)} \beta^\ell +\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}\\
&\leq \frac{\alpha^{k(H)} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}\sum_{\ell = 0}^{k-1}\max(\alpha^{H},\beta)^{k-1} +\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}\\
&\leq \frac{\alpha^{k(H)} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}k\max(\alpha^{H},\beta)^{k-1} +\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)},
\end{align*}
where we have used $\beta < 1$, which holds due to the assumption in Theorem \ref{mainAPI}.
\Halmos \endproof
\section{Counterexample} \label{appendix:counterexAppendix}
Even though, in practice, $J^{\mu_k}$ is what we are interested in, the values $J_k$ computed as part of our algorithm should not go to $\infty$, since the algorithm would otherwise be numerically unstable. In the longer version of this work, \cite{annaor}, we provide a bound on $\norm{J_k-J^*}_\infty$ when $m+H-1$ is sufficiently large, as in Theorem~\ref{mainAPI}. In this appendix, we show that, when this condition is not satisfied, $J_k$ can become unbounded.
The example we use is depicted in Figure \ref{fig:TsitVanRoyIm}.
\begin{figure}
\centering
\subfloat[\centering $\mu^a$]{\includegraphics[width=2.5cm]{Role of Lookahead+FA/Images/counter1.png} }%
\qquad
\subfloat[\centering $\mu^b$]{\includegraphics[width=2.5cm]{Role of Lookahead+FA/Images/counter2.png} }%
\caption{An example illustrating the necessity of the condition in Theorem~\ref{mainAPI}}%
\label{fig:TsitVanRoyIm}%
\end{figure}
There are two policies, $\mu^a$ and $\mu^b$, and the transitions are deterministic under both policies. The rewards are deterministic and depend only on the states. The rewards associated with the states are denoted by $r(x_1)$ and $r(x_2),$
with $r(x_1)>r(x_2)$. Thus, the optimal policy is $\mu^a$. We assume scalar features $\phi(x_1)=1$ and $\phi(x_2)=2.$
We fix $H=1$.
The lookahead policy is $\mu^a$ only when
\begin{align*}
&J_k(x_1) > J_k(x_2), \quad \text{i.e., only when } \theta_k > 2\theta_k,
\end{align*} which is impossible for $\theta_k > 0.$ Thus, as long as $\theta_k>0,$ the lookahead policy will be $\mu^b.$
We will now show that $\theta_k$ increases at each iteration when $\delta_{FV} \alpha^{m+H-1}>1.$ We assume that $\theta_0>0$ and $\scriptD_k = \{x_1, x_2\}$ for all $k.$ A straightforward computation shows that $\delta_{FV}=\frac{6}{5}.$
At iteration $k+1,$ suppose $\mu_{k+1}=\mu^b;$ then the targets $\hat{J}^{\mu_{k+1}}(i)$ for $i = 1, 2$ are as follows:
\begin{align*}
\hat{J}^{\mu_{k+1}}(1) =r(x_1)+\sum_{i=1}^{m-1} r(x_1) \alpha^i + 2 \alpha^m \theta_k, \quad
\hat{J}^{\mu_{k+1}}(2) = r(x_2) +\sum_{i=1}^{m-1} r(x_2)\alpha^i + 2 \alpha^m \theta_k.
\end{align*}
Thus, from Step \ref{step 4 alg} of Algorithm \ref{alg:LSalg}:
\begin{align*}
&\theta_{k+1} = \arg \min_\theta \sum_{i =1}^2 \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2 \\
&\implies \theta_{k+1} = \frac{\sum_{i=0}^{m-1} \alpha^i r(x_1)}{5} + \frac{2 \sum_{i=0}^{m-1} \alpha^i r(x_2)}{5} + \frac{6 \alpha^m \theta_k}{5}\\
&\implies \theta_{k+1}> \frac{6}{5} \alpha^{m}\theta_k.
\end{align*}
Thus, since $\theta_0 > 0$ and $H=1$, when $ \frac{6}{5} \alpha^{m+H-1} =\delta_{FV} \alpha^{m+H-1} >1,$ $\theta_{k}$ goes to $\infty.$
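The divergence can be verified numerically; the following sketch (with hypothetical values of $\alpha$, $m$, and the rewards, chosen so that $\frac{6}{5}\alpha^{m+H-1} > 1$ with $H=1$) iterates the recursion for $\theta_k$ derived above:
\begin{verbatim}
alpha, m = 0.95, 1         # chosen so that (6/5) * alpha^m > 1
r1, r2 = 1.0, 0.5          # r(x_1) > r(x_2), both in [0, 1]
geo = sum(alpha**i for i in range(m))   # sum_{i=0}^{m-1} alpha^i

theta = 1.0                # theta_0 > 0
for k in range(50):
    # least-squares fit of (theta, 2*theta) to the two targets
    theta = (geo * r1 + 2 * geo * r2) / 5 + (6 / 5) * alpha**m * theta
print(theta)               # grows without bound as k increases
\end{verbatim}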
\end{APPENDICES}
\ACKNOWLEDGMENT{The research presented here was supported in part by a grant from Sandia National Labs and the NSF Grants CCF 1934986, CCF 2207547, CNS 2106801, ONR Grant N00014-19-1-2566, and ARO Grant W911NF-19-1-0379. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology \& Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
}
\section{Introduction}\label{intro}
In many applications of reinforcement learning, such as playing chess and Go, the underlying model is known and so the main challenge is in solving the associated dynamic programming problem in an efficient manner. Policy iteration and variants of policy iteration \cite{bertsekas2019reinforcement,Bertsekas2011ApproximatePI,bertsekastsitsiklis} that solve dynamic programming problems rely on computations that are infeasible due to the sizes of the state and action spaces in modern reinforcement learning problems.
As a remedy to this ``curse of dimensionality,'' several state-of-the-art algorithms \cite{silver2017shoji, silver2017mastering, DBLP:journals/corr/MnihBMGLHSK16} employ lookahead for policy evaluation and policy improvement, function approximation, and gradient descent to compute the function approximation (see Section \ref{section2} and \cite{bertsekas2019reinforcement} for definitions of these terms).
The recent work in \cite{efroni2019combine} considers a variant of policy iteration that utilizes lookahead and approximate policy evaluation using an $m$-step return (see Section \ref{section2} for definitions of these terms). As noted in the motivation of \cite{efroni2019combine}, lookahead can be approximated well in practice using Monte Carlo Tree Search (MCTS) \cite{kocisszepesvari, browne}, even though, in theory, it has exponential complexity \cite{shah2020nonasymptotic}. Motivated by policy iteration, the algorithm in \cite{efroni2019combine} estimates the value function associated with a policy and aims to improve the policy at each step. Policy improvement is achieved by obtaining the ``greedy'' policy in the case of policy iteration, or a lookahead policy in the work of \cite{efroni2019combine}, which involves applying the Bellman operator several times to the current iterate before obtaining the greedy policy; the idea is that applying the Bellman operator several times gives a more accurate estimate of the optimal value function. Then, similar to policy iteration, the algorithm in \cite{efroni2019combine} evaluates the new policy using an $m$-step return, i.e., it applies the Bellman operator associated with the policy $m$ times.
The work of \cite{efroni2019combine} establishes that a lookahead can significantly improve the rate of convergence if one uses the value function computed using lookahead in the approximate policy evaluation step. Our main interest is understanding how these convergence results change when the state-space is very large and one has to resort to function approximation of the value function.
Our contributions are as follows:
(1) We examine the impact of lookahead and $m$-step return on approximate policy iteration with linear function approximation. As is common in practice, we assume that we evaluate an approximate value only for some states at each iteration. Since we use function approximation, we need different proof techniques than in \cite{efroni2019combine}, and one consequence of this is that the performance bounds we obtain for the algorithm require that the sum of the lookahead and the number of steps in the $m$-step return is sufficiently large. We demonstrate through an extension of a counterexample in \cite{Tsitsiklis94feature-basedmethods} that such a condition is necessary, in general, for convergence with function approximation, unlike in the tabular setting of \cite{efroni2019combine}. See Appendix \ref{appendix:counterexAppendix} for our counterexample.
(2) For ease of exposition, we first present the case where one solves a least-squares problem at each iteration to obtain the weights associated with the feature vectors in the function approximation of the value function. We then consider a more practical and widely used scheme where one step of gradient descent is used to update the weights of the value function approximation at each iteration. Obtaining performance bounds for the gradient descent algorithm is more challenging; these bounds can be found in Section \ref{SectionGD}. Our results are presented in the limit as the number of iterations goes to infinity, but finite-time bounds can be easily extracted from the intermediate steps of the proofs.
(3) Our results show that the sufficient condition on the minimum amount of lookahead and return for convergence does not depend on the size of the state space but depends on the feature vectors used for function approximation.
(4) We complement our theoretical results with experiments on the same grid world problem as in \cite{efroni2018} and \cite{efroni2019combine}. These experiments are presented in Appendix \ref{appendix:numerical}.
In addition to the work of \cite{efroni2019combine}, there is a long history of other work on approximate policy iteration. We will compare our results to some of these prior works in a later section.
\section{Preliminaries} \label{section2}
We consider a Markov Decision Process (MDP), defined as a 5-tuple $(\scriptS, \scriptA, P, R, \alpha)$. The finite set of states of the MDP is $\scriptS$, and the finite set of actions is $\scriptA$. Let $P_{ij}(a)$ be the probability of transitioning from state $i$ to state $j$ when taking action $a \in \scriptA$. We denote by $s_k$ the state of the MDP and by $a_k$ the corresponding action at time $k$. We associate with state $s_k$ and action $a_k$ a (possibly random) reward $r(s_k, a_k) \in [0, 1]$ for all $s_k \in \scriptS, a_k \in \scriptA$; in particular, the rewards are uniformly bounded. Our objective is to maximize the cumulative discounted reward $\sum_{k=0}^\infty \alpha^k r(s_k, a_k)$ with discount factor $\alpha \in (0, 1)$.
Towards this end, we associate with each state $s\in \scriptS$ a deterministic policy $\mu(s) \in \scriptA$ which prescribes an action to take. For every policy $\mu$ and every state $s \in \scriptS$ we define $J^{\mu}(s)$ as follows:
\begin{align*}
J^{\mu}(s) := E\Big[\sum_{k=0}^\infty \alpha^k r(s_k, \mu(s_k))\,\Big|\,s_0=s\Big].
\end{align*}
We define the optimal reward-to-go $J^*$ as
$J^*(s) := \underset{\mu}\max\, J^\mu(s).$ The objective is to find a policy $\mu$ that maximizes $J^\mu(s)$ for all $s \in \scriptS$. Towards the objective, we associate with each policy $\mu$ a function $T_\mu: \mathbb{R}^{|\scriptS|} \to \mathbb{R}^{|\scriptS|}$ where, for $J \in \mathbb{R}^{|\scriptS|},$ the $s$th component of $T_{\mu}J$ is
\begin{align*}
(T_\mu J)(s) = r(s, \mu(s)) + \alpha \sum_{j=1}^{|\scriptS|} p_{sj}(\mu(s)) J(j),
\end{align*} for all $s \in \scriptS$. If the function $T_{\mu}$ is applied $m$ times to a vector $J \in \mathbb{R}^{|\scriptS|},$ we call the result $T^m_\mu J.$ We say that $T^m_\mu J$ is the $m$-step return corresponding to $J$, or the ``return'' when $J$ and $m$ are understood.
Similarly, we define the Bellman operator $T: \mathbb{R}^{|\scriptS|} \to \mathbb{R}^{|\scriptS|}$ with the $s$th component of $TJ$ being
\begin{align}
(TJ)(s) = \underset{a}\max \Bigg \{ r(s, a) + \alpha \sum_{j=1}^{|\scriptS|} p_{sj}(a)J(j) \Bigg \}. \label{T}
\end{align}
The policy corresponding to the $T$ operator is defined as the \textit{greedy} policy. If the operator $T$ is applied $H$ times to a vector $J \in \mathbb{R}^{|\scriptS|},$ we call the result, $T^H J,$ the $H$-step ``lookahead'' corresponding to $J$. The greedy policy corresponding to $T^H J$ is called the $H$-step lookahead policy, or the lookahead policy when $H$ is understood. More precisely, given an estimate $J$ of the value function, the lookahead policy is the policy $\mu$ such that $T_\mu(T^{H-1} J)=T(T^{H-1} J).$
It is well known that each time the Bellman operator is applied to a vector $J$ to obtain $TJ,$ the following holds:
\begin{align*}
\norm{TJ-J^*}_\infty\leq \alpha\norm{J-J^*}_\infty.
\end{align*} Thus, applying $T$ to obtain $TJ$ gives a better estimate of the value function than $J.$
The Bellman equations state that the vector $J^\mu$ is the unique solution to the linear equation
\begin{align}
J^\mu = T_\mu J^\mu. \label{bellman}
\end{align}
Additionally, we have that $J^*$ is a solution to
\begin{align*}
J^* = TJ^*.
\end{align*}
Note that every greedy policy w.r.t. $J^*$ is optimal and vice versa \cite{bertsekastsitsiklis}.
We will now state several useful properties of the operators $T$ and $T_\mu$. Consider the vector $e \in \mathbb{R}^{|\scriptS|}$ with $e(i) = 1$ for all $i \in \{1, 2, \ldots, |\scriptS|\}.$ We have:
\begin{equation}
T(J + ce) = TJ + \alpha ce, \quad T_\mu(J + ce) = T_\mu J + \alpha ce. \label{eq:usefulproperties}
\end{equation}
Operators $T$ and $T_\mu$ are also monotone:
\begin{align}
J \leq J' \implies TJ \leq TJ', \quad T_\mu J \leq T_\mu J'. \label{monotonicityproperty}
\end{align}
\section{Least Squares Function Approximation Algorithm}
Our algorithm is described in Algorithm \ref{alg:LSalg}. The algorithm is an approximation to policy iteration with lookahead. At each iteration index, say, $k$, we have an estimate of the value function, which we denote by $J_k$. To obtain $J_{k+1}$, we perform a lookahead to improve the value function estimate at a certain number of states (denoted by $\scriptD_k$), which can vary with each iteration. For example, $\scriptD_k$ could be chosen as the states visited when performing a tree search to approximate the lookahead process. During the lookahead process, we also obtain an $H$-step lookahead policy, which we denote by $\mu_{k+1}$. As noted in the Introduction, the computation of $T^{H-1}(J_k)(i)$ for $i \in \scriptD_k$ in Step \ref{step 3 alg} of Algorithm \ref{alg:LSalg} may be computationally infeasible; however, as noted in \cite{efroni2019combine}, techniques such as Monte Carlo tree search (MCTS) are employed in practice to approximately estimate $T^{H-1}(J_k)(i).$
We obtain estimates of $J^{\mu_{k+1}}(i)$ for $i \in \scriptD_k$, which we call $\hat{J}^{\mu_{k+1}}(i)$. To obtain the estimate of $J^{\mu_{k+1}}(i)$, we perform an $m$-step return with policy $\mu_{k+1}$, i.e., we obtain the estimate of $T^m_{\mu_{k+1}}T^{H-1}J_k(i)$ for $i \in \scriptD_k.$ We model the approximation errors in the lookahead and the return by adding noise to the output of these steps.
In order to estimate the value function for states not in $\scriptD_k$, we associate with each state $i \in \scriptS$ a feature vector $\phi(i)\in \mathbb{R}^d$ where typically $d \ll |\scriptS|$. The matrix comprised of the feature vectors as rows is denoted by $\Phi$. We obtain estimates of $T_{\mu}^m T^{H-1} J_k$ for states in $\scriptD_k$ and use those estimates to find the best-fitting $\theta \in \mathbb{R}^d$, i.e.,
\begin{align*}
\min_\theta \sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2.
\end{align*}
Our algorithm then computes $\theta_{k+1}$ and uses it to obtain $J_{k+1} = \Phi \theta_{k+1}$. The process then repeats. To estimate the value function associated with policy $\mu_{k+1},$ we compute $T_{\mu_{k+1}}^m T^{H-1} J_k (i)$ for $i\in \scriptD_k.$ An alternative is to instead compute $T_{\mu_{k+1}}^m J_k (i).$ It was shown in \cite{efroni2019combine} that the former option is preferable because it has a certain contraction property; thus, we have chosen to use this computation in our algorithm as well. However, we show in the arXiv version of this paper \cite{anna} that the algorithm also converges with the second option if $m$ is chosen to be sufficiently large.
\begin{algorithm}[tb]
\caption{Least Squares Function Approximation Algorithm}
\label{alg:LSalg}
\textbf{Input}: $J_0, m,$ $H,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d$ and subsets $\scriptD_k \subseteq \scriptS, k = 0, 1, \ldots.$ Here $\scriptD_k$ is the set of states at which we evaluate the current policy at iteration $k.$\\
\begin{algorithmic}[1]
\STATE Let $k=0$.
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps$.\\\label{step 2 alg}
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\ \label{step 3 alg}
\STATE Choose $\theta_{k+1}$ to solve
\begin{align}
\min_\theta \sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2, \label{step 4 alg}
\end{align} \\
where $\Phi$ is a matrix whose rows are the feature vectors.
\STATE $J_{k+1} = \Phi \theta_{k+1}.$
\STATE Set $k \leftarrow k+1.$ Go to 2.
\end{algorithmic}
\end{algorithm}
Under Assumption \ref{assume 1} below, the least-squares problem in \eqref{step 4 alg} has the unique solution
\begin{align*}
\theta_{k+1} &= (\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}})^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_{k}(T_{\mu_{k+1}}^m T^{H-1}J_k+w_k),
\end{align*}
where $\Phi_{\scriptD_{k}}$ and $\scriptP_k$ are defined below.
To analyze Algorithm \ref{alg:LSalg}, we make the following assumption which states that we explore a sufficient number of states during the policy evaluation phase at each iteration.
\begin{assumption}\label{assume 1}
For each $k \geq 0, \text{ rank }\{ \phi(i)\}_{i \in \scriptD_k} = d$.
\end{assumption}
We assume that the noise $w_k$ is bounded.
\begin{assumption}
For some $\eps' >0,$ $\norm{w_k}_\infty \leq \eps'$ for all $k$. \label{assume 2}
\end{assumption}
We also assume that the rewards are bounded.
\begin{assumption}\label{assume 3}
$r(i,u) \in[0,1]$ $\forall i,u.$
\end{assumption}
Using Assumption~\ref{assume 1}, $J_{k+1}$ can be written as
\begin{align}
J_{k+1} &= \Phi \theta_{k+1} =\underbrace{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_{=: \scriptM_{k+1}} \hat{J}^{\mu_{k+1}},\label{defMk}\end{align}
where $\Phi_{\scriptD_{k}}$ is a matrix whose rows are the feature vectors of the states in $\scriptD_{k}$ and $\scriptP_k$ is a matrix of zeros and ones such that $\scriptP_k\hat{J}^{\mu_{k+1}}$ is a vector whose elements are a subset of the elements of $\hat{J}^{\mu_{k+1}}$ corresponding to $\scriptD_k$.
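For a concrete sense of the matrix $\scriptM_{k+1}$ in \eqref{defMk} and of the quantity $\delta_1$ used below, here is a small NumPy sketch of our own (the function names are illustrative, not from any library) that builds the matrix from $\Phi$ and an index set $\scriptD_k$ and evaluates its $\ell_\infty$-norm as the maximum absolute row sum.
\begin{verbatim}
import numpy as np

def projection_matrix(Phi, D_k):
    # M = Phi (Phi_D' Phi_D)^{-1} Phi_D' P_k;
    # D_k is an integer array of states evaluated at iteration k
    S, d = Phi.shape
    Phi_D = Phi[D_k]
    P_k = np.zeros((len(D_k), S))
    P_k[np.arange(len(D_k)), D_k] = 1.0   # row selector for D_k
    return Phi @ np.linalg.inv(Phi_D.T @ Phi_D) @ Phi_D.T @ P_k

def inf_norm(M):
    # ||M||_inf = maximum absolute row sum
    return np.abs(M).sum(axis=1).max()
\end{verbatim}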
Written concisely, our algorithm is as follows:
\begin{equation}
J_{k+1} = \scriptM_{k+1} (T_{\mu_{k+1}}^m T^{H-1} J_k+w_k), \label{eq:iterateAPI}
\end{equation}
where $\mu_{k+1}$ is defined in step 2 of the algorithm.
Note that $w_k(i)$ for $i\notin\scriptD_k$ does not affect the algorithm, so for convenience we can define $w_k(i)=0$ for $i\notin\scriptD_k.$
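Putting the pieces together, one iteration of Algorithm \ref{alg:LSalg} can be sketched as follows, reusing the operator sketches from Section \ref{section2}; the noise magnitude and all names here are illustrative assumptions rather than prescribed choices.
\begin{verbatim}
import numpy as np

def ls_api_iteration(theta, Phi, D_k, P, r, alpha, m, H, rng,
                     noise=1e-3):
    J = Phi @ theta                            # J_k = Phi theta_k
    mu = lookahead_policy(J, P, r, alpha, H)   # mu_{k+1} (Step 2)
    J_H = J
    for _ in range(H - 1):                     # T^{H-1} J_k
        J_H = bellman_T(J_H, P, r, alpha)
    J_hat = m_step_return(J_H, mu, P, r, alpha, m)       # Step 3
    J_hat = J_hat + noise * rng.uniform(-1, 1, size=len(J_hat))
    Phi_D = Phi[D_k]                           # least squares (Step 4)
    theta_next, *_ = np.linalg.lstsq(Phi_D, J_hat[D_k], rcond=None)
    return theta_next, mu
\end{verbatim}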
We now state our theorem, which characterizes the role of the lookahead ($H$) and the return ($m$) in the convergence of approximate policy iteration with function approximation.
\begin{theorem}\label{mainAPI}
Suppose that $m$ and $H$ satisfy $m + H -1>\log (\delta_1)/\log (1/\alpha),$ where
$$\delta_1 := \sup_k \norm{\scriptM_k}_\infty = \sup_k\norm{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_\infty.$$
Then
\begin{align}
\norm{J^{\mu_{k}}-J^*}_\infty &\leq \alpha^{(H-1)k} \norm{J^{\mu_0}-J^*}_\infty + \frac{ c'_{m, H}}{(1-\alpha^{H-1})(1-\alpha)},
\end{align} where $c'_{m, H} := 2\alpha^H \Bigg(\frac{\frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha} \delta_1 + \delta_2 + \delta_1 \eps' }{1-\alpha^{m+H-1}\delta_1} + \frac{1}{1-\alpha}+\norm{J_0}_\infty\Bigg)+\eps$
and
$
\delta_2 := \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty.
$
\end{theorem}
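To get a rough sense of the threshold in Theorem \ref{mainAPI} (with illustrative numbers of our own rather than values from any particular MDP): if $\alpha = 0.9$ and the feature vectors give $\delta_1 = 1.2$, the condition reads $m + H - 1 > \log(1.2)/\log(1/0.9) \approx 1.73$, so any combination with $m + H \geq 3$, e.g., $H = 2$ and $m = 1$, suffices.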
The proof of Theorem \ref{mainAPI} can be found in Appendix \ref{appendix:thm1}. We now make several comments about the implications of Theorem \ref{mainAPI}:
\begin{enumerate}
\item The constant terms in Theorem \ref{mainAPI} can be interpreted as follows: $\eps$ represents the tree-search error, $\delta_2$ the function approximation error, $\delta_1$ the feature-vector parameter, and $\eps'$ the policy evaluation error.
\item With this notation, we see that, up to the constant term $ \frac{ c'_{m, H}}{(1-\alpha^{(H-1)})(1-\alpha)},$ $J^{\mu_k}$ converges to $J^*$ geometrically at rate $\alpha^{H-1}.$
\item We can show that in the limit, our bounds can be further tightened and the following Proposition holds:
\begin{proposition}
\begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_k}-J^*}_\infty \leq \frac{c_{m, H}}{(1-\alpha)(1-\alpha^{H-1})},
\end{align*}
where
\begin{align*}
c_{m, H} &:= 2\alpha^H \Big( \frac{\frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_1 + \delta_2+ \delta_1 \eps'}{1-\alpha^{m+H-1} \delta_1}\Big)+\eps.
\end{align*}
\end{proposition}
The above shows that the bound in Theorem \ref{mainAPI} consists of two terms: the first is a transient that vanishes as $k \to \infty$, while the second is the asymptotic error. Note also that $c'_{m, H}$ depends on the function approximation error $\delta_2$ and the feature-vector parameter $\delta_1$, but the effect of these quantities diminishes as $m+H$ gets large; the exception is the tree-search error $\eps$, which cannot be reduced by increasing the lookahead or the return. Finite-time bounds for the gradient descent algorithm can similarly be extracted from the proof of Theorem \ref{theorem3}.
\end{enumerate}
The proof of Theorem \ref{mainAPI} is closely related to the proof of Theorem \ref{theorem3}. The proof of Theorem~\ref{theorem3} is presented in the next section while we defer the proof of Theorem \ref{mainAPI} to Appendix \ref{appendix:thm1}. It should be noted that, while the bounds given in Theorem \ref{mainAPI} and Theorem \ref{theorem3} are asymptotic, finite-time bounds can be obtained as a by-product of our proofs. We note that the above result is fundamentally different from the conclusion of Theorem 3 in \cite{efroni2019combine}, where a condition on $m$ and $H$ is required for convergence when one uses $J_k$ instead of $T^{H-1}J_k$ in Step 2 of the algorithm. Here, we have shown that even when one uses $T^{H-1}J_k,$ one may need large $m+H$ for convergence due to the use of function approximation.
We additionally characterize the approximation error of our iterates, $J_k$, by computing bounds on the asymptotic error $\limsup_{k \to \infty} \norm{J_k - J^*}_\infty.$ The bounds along with their derivations can be found in Appendix \ref{appendix:prop1}. The corresponding finite-time bounds can be easily obtained from the proof of Proposition \ref{IterAPITheorem} in Appendix \ref{appendix:prop1}.
It is important to note that the upper bounds on $\norm{J^{\mu_k}-J^*}_\infty$ and $\norm{J_k - J^*}_\infty$ indicate that $\norm{J^{\mu_k}-J^*}_\infty$ converges much faster than $\norm{J_k - J^*}_\infty$. Thus, algorithms need not wait for the value function estimates to converge before the corresponding greedy policy becomes near-optimal.
In \cite{bertsekas2021lessons}, it is noted that, in reinforcement learning to play computer games or board games, it is not uncommon during training to get a relatively crude estimate of the value function, which is improved by lookahead and $m$-step return during actual game play. Our analysis would also apply to this situation -- we have not explicitly differentiated between training and game play in our analysis.
Theorem \ref{mainAPI} can be used to make the following observation: how close $J^{\mu_k}$ is to $J^*$ depends on four factors -- the representation power of the feature vectors and the feature vectors themselves ($\delta_2, \delta_1$), the amount of lookahead ($H$), the extent of the return ($m$) and the approximation in the policy determination and policy evaluation steps ($\eps$ and $\eps'$). Further, by examining $c_{m, H}$ one can see that lookahead and return help mitigate the effect of feature vectors and their ability to represent the value functions. We note that the authors of \cite{efroni2019combine} also consider a version of the algorithm where in Step~\ref{step 3 alg}, $T_{\mu_k}^m T^{H-1}J_k$ is replaced by $T_{\mu_k}^m J_k.$ We obtain a performance bound for this algorithm with function approximation in Appendix~\ref{appendix:thm2} under the assumption that $m$ is large.
\section{Gradient Descent Algorithm} \label{SectionGD}
Solving the least-squares problem in Algorithm~\ref{alg:LSalg} involves a matrix inversion, which can be computationally difficult if the dimension of the feature vectors is large. So we propose an alternative algorithm which performs one step of gradient descent at each iteration, where the gradient refers to the gradient of the least-squares objective in (\ref{step 4 alg}). The gradient descent-based algorithm is presented in Algorithm~\ref{alg:GDalg}.
\begin{algorithm}[tb]
\caption{Gradient Descent Algorithm}
\label{alg:GDalg}
\textbf{Input}: $\theta_0, m,$ $H,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d,$ and $\scriptD_k,$ which is the set of states for which we evaluate the current policy at iteration $k.$\\
\begin{algorithmic}[1]
\STATE $k=0, J_0 = \Phi \theta_0$. \\
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps$. \\
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} J_k(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\
\STATE
\begin{align}
\theta_{k+1} &= \theta_k - \gamma \nabla_\theta c(\theta;\hat{J}^{\mu_{k+1}}) \label{eq:iterthetaGD}
\end{align} where
\begin{align*}
c(\theta;\hat{J}^{\mu_{k+1}}) := \frac{1}{2}\sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2.
\end{align*}
\STATE $J_{k+1} = \Phi \theta_{k+1}.$
\STATE Set $k \leftarrow k+1.$ Go to 2.
\end{algorithmic}
\end{algorithm}
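For concreteness, the gradient step \eqref{eq:iterthetaGD} admits the following minimal sketch (the names and calling convention are illustrative assumptions of ours); it replaces the matrix inversion implicit in Algorithm \ref{alg:LSalg} with a single matrix-vector update.
\begin{verbatim}
import numpy as np

def gd_step(theta, Phi, D_k, J_hat, gamma):
    # theta_{k+1} = theta_k - gamma * grad c(theta; J_hat)
    #             = theta_k - gamma * Phi_D' (Phi_D theta - J_hat|_D)
    Phi_D = Phi[D_k]
    grad = Phi_D.T @ (Phi_D @ theta - J_hat[D_k])
    return theta - gamma * grad
\end{verbatim}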
In order to present our main result for the gradient descent version of our algorithm, we define $\theta^{\mu_k}$ for any policy $\mu_k$ as follows:
\begin{align}\label{eq: theta_mu_k}
\theta^{\mu_k} &:= \arg\min_\theta \frac{1}{2} \norm{\Phi_{\scriptD_k} \theta - \scriptP_{k}J^{\mu_{k}}}_2^2.
\end{align}
In other words, $\theta^{\mu_k}$ represents the function approximation of $J^{\mu_k}.$ We also define another quantity $\tilde{\theta}^{\mu_k}$ which will be used in the proof of the theorem:
\begin{align*}
\tilde{\theta}^{\mu_k} &:= \arg\min_\theta \frac{1}{2}\norm{\Phi_{\scriptD_k} \theta - \scriptP_{k}(T_{\mu_{k}}^m T^{H-1} J_{k-1}+w_k)}_2^2
\\&=\arg\min_\theta \frac{1}{2}\norm{\Phi_{\scriptD_k} \theta - \scriptP_{k}(T_{\mu_{k}}^m T^{H-1} \Phi \theta_{k-1}+w_k)}_2^2.
\end{align*}
Thus, $\tilde{\theta}^{\mu_k}$ represents the function approximation of the estimate of $J^{\mu_k}$ obtained from the $m$-step rollout.
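For later reference, note that under Assumption \ref{assume 1} both minimizers are unique and are given explicitly by the normal equations:
\begin{align*}
\theta^{\mu_k} = (\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})^{-1}\Phi_{\scriptD_k}^\top \scriptP_{k} J^{\mu_{k}}, \qquad
\tilde{\theta}^{\mu_k} = (\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})^{-1}\Phi_{\scriptD_k}^\top \scriptP_{k}(T_{\mu_{k}}^m T^{H-1} \Phi \theta_{k-1}+w_k).
\end{align*}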
We now present our main result:
\begin{theorem}\label{theorem3}
Suppose that Assumptions \ref{assume 1}-\ref{assume 3} hold and further, $\gamma, m,$ and $H$ satisfy $$\beta := \alpha' + (1+\alpha') \alpha^{m+H-1} \delta_1' <1,$$
where
$\alpha' :=\sup_k \norm{I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_2$ and $\delta_1' := \sup_k \norm{\Phi}_\infty \norm{\Big( \Phi_{\scriptD_k}^\top
\Phi_{\scriptD_k}\Big)^{-1} \Phi_{\scriptD_k}^\top \scriptP_{k}}_\infty.$ Then
\begin{align}
\limsup_{k \to \infty} \norm{J^{\mu_{k}} - J^*}_\infty \leq \frac{2\alpha^H(\norm{\Phi}_\infty \frac{\tau}{1-\beta} + \delta_2') + \eps}{(1-\alpha^{H-1})(1-\alpha)},
\end{align}
where
$$
\tau := \alpha' C +(1+\alpha')\Big[\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Big( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Big)+\eps'\Big]
$$ and
$C := \frac{\sigma_{\min, \Phi}}{1-\alpha}+2\sigma_{\min, \Phi} \delta'_2,$ where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi$ and $\delta_2' := \sup_{k}\norm{\Phi \theta^{\mu_k} - J^{\mu_k}}_\infty$ is the function approximation error, with $\theta^{\mu_k}$ as defined in \eqref{eq: theta_mu_k}.
\end{theorem}
\proof{Proof of Theorem \ref{theorem3}}
The proof of Theorem \ref{theorem3} is somewhat involved, so we break it up into steps.
\noindent\textit{Step 1:}
In this step, since $\theta_{k+1}$ is obtained by taking a step of gradient descent towards $\tilde{\theta}^{\mu_{k+1}}$ beginning from $\theta_k$, we show that the following holds:
\begin{align*}
\norm{\theta_{k+1} - \tilde{\theta}^{\mu_{k+1}}}_\infty \leq \alpha' \norm{\theta_k - \tilde{\theta}^{\mu_{k+1}}}_\infty.
\end{align*}
\textit{Proof of Step 1:} Recall that the iterates in Equation \eqref{eq:iterthetaGD} can be written as follows:
\begin{align*}
\theta_{k+1} &= \theta_k - \gamma \nabla_\theta c(\theta;\hat{J}^{\mu_{k+1}}) = \theta_k - \gamma \Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k} \theta_k - \Phi_{\scriptD_k}^\top \scriptP_{k}(T_{\mu_{k+1}}^m T^{H-1}J_k+w_k)\Big).
\end{align*}
Since
\begin{align*}0 &= \nabla_\theta c(\theta;\hat{J}^{\mu_{k+1}})|_{\tilde{\theta}^{\mu_{k+1}}}= \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k} \tilde{\theta}^{\mu_{k+1}} - \Phi_{\scriptD_k}^\top \scriptP_{k}(T_{\mu_{k+1}}^m T^{H-1}J_k+w_k),
\end{align*}
we have the following:
\begin{equation*}
\begin{array}{lll}
\theta_{k+1} &=& \theta_k - \gamma \Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k} \theta_k - \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k} \tilde{\theta}^{\mu_{k+1}} - \Phi_{\scriptD_k}^\top \scriptP_{k}(T_{\mu_{k+1}}^m T^{H-1}J_k+w_k) + \Phi_{\scriptD_k}^\top \scriptP_{k}(T_{\mu_{k+1}}^m T^{H-1}J_k+w_k)\Big) \\
&=& \theta_k - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k} (\theta_k - \tilde{\theta}^{\mu_{k+1}}).
\end{array}
\end{equation*}
Subtracting $\tilde{\theta}^{\mu_{k+1}}$ from both sides gives:
\begin{align*}
\theta_{k+1} - \tilde{\theta}^{\mu_{k+1}} &= \theta_k - \tilde{\theta}^{\mu_{k+1}} - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k} (\theta_k - \tilde{\theta}^{\mu_{k+1}})= (I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}) (\theta_k - \tilde{\theta}^{\mu_{k+1}}).
\end{align*}
Thus,
\begin{align*}
\nonumber\norm{\theta_{k+1} - \tilde{\theta}^{\mu_{k+1}}}_\infty \nonumber &= \norm{(I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}) (\theta_k - \tilde{\theta}^{\mu_{k+1}})}_\infty \leq \norm{I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty \norm{\theta_k - \tilde{\theta}^{\mu_{k+1}}}_\infty \\ \nonumber
&\leq \norm{I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_2 \norm{\theta_k - \tilde{\theta}^{\mu_{k+1}}}_\infty\\& \leq \underbrace{\sup_k\norm{I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_2}_{=: \alpha'} \norm{\theta_k - \tilde{\theta}^{\mu_{k+1}}}_\infty.
\end{align*}
\noindent\textit{Step 2:} Using the previous step in conjunction with contraction properties of the Bellman operators, we will show that the following recursion holds:
\begin{align*}
\norm{\theta_k - \theta^{\mu_k}}_\infty
&\leq \Big( \alpha' + (1+\alpha') \alpha^{m+H-1} \delta_1' \Big) \norm{\theta_{k-1} - \theta^{\mu_{k-1}}}_\infty + \alpha' C \\& + (1+\alpha')\Big[\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Bigg( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Bigg)+\eps'\Big],
\end{align*}
for a constant $C$ defined below.
We will then iterate over $k$ to get the following:
\begin{align}
\norm{\theta_k - \theta^{\mu_k}}_\infty
\leq\beta^k \norm{\theta_{0} - \theta^{\mu_{0}}}_\infty + \frac{\tau}{1-\beta},
\end{align}
for some $\tau$ and $\beta$ defined below.
\textit{Proof of Step 2:} We first note the following:
\begin{align}
\nonumber \norm{\theta_k - \theta^{\mu_k}}_\infty &\leq \norm{\theta_k - \tilde{\theta}^{\mu_k}}_\infty + \norm{\tilde{\theta}^{\mu_k} - \theta^{\mu_k}}_\infty \nonumber
\leq \alpha' \norm{\theta_{k-1} - \theta^{\mu_k}}_\infty + \alpha'\norm{ \tilde{\theta}^{\mu_k} - \theta^{\mu_k}}_\infty \nonumber + \norm{\tilde{\theta}^{\mu_k} - \theta^{\mu_k}}_\infty \nonumber
\\&\leq \alpha' \norm{\theta_{k-1} - \theta^{\mu_{k-1}}}_\infty + \alpha' \norm{\theta^{\mu_k} - \theta^{\mu_{k-1}}}_\infty
+ (1+\alpha')\norm{\tilde{\theta}^{\mu_k} - \theta^{\mu_k}}_\infty.\label{finthetakthetamuk}
\end{align}
In order to further bound (\ref{finthetakthetamuk}), we introduce the following lemmas:
\begin{lemma}\label{cthetalemma}
$ \norm{ \theta^{\mu_{k-1}} - \theta^{\mu_{k}}}_\infty \leq \underbrace{\frac{\sigma_{\min, \Phi}}{1-\alpha}+2\sigma_{\min, \Phi} \delta'_2}_{=: C},$
where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi.$ \end{lemma}
\begin{lemma}\label{lemmathetatilde}
\begin{align*}
\norm{\theta^{\mu_k} - \tilde{\theta}^{\mu_k}}_\infty &\leq \frac{\delta_1' \eps'}{\norm{\Phi}_\infty}+\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Bigg( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Bigg) + \alpha^{m+H-1} \delta_1' \norm{ \theta^{\mu_{k-1}} - \theta_{k-1} }_\infty.
\end{align*}
\end{lemma}
The proofs of these lemmas can be found in Appendix \ref{appendix:lem2} and Appendix \ref{appendix:lem1}.
Using Lemmas \ref{cthetalemma} and \ref{lemmathetatilde} in (\ref{finthetakthetamuk}), we have:
\begin{align}
\begin{split}\nonumber
\norm{\theta_k - \theta^{\mu_k}}_\infty \leq{}& \alpha' \norm{\theta_{k-1} - \theta^{\mu_{k-1}}}_\infty + \alpha' C\\
& + (1+\alpha') \Bigg[\frac{\delta_1' \eps'}{\norm{\Phi}_\infty}\nonumber+ \alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Bigg( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Bigg) \\&+ \alpha^{m+H-1} \delta_1' \norm{ \theta^{\mu_{k-1}} - \theta_{k-1} }_\infty \Bigg]
\end{split}\\
\begin{split}\label{eq:boundb4labels}
\leq{}& \Big( \alpha' + (1+\alpha') \alpha^{m+H-1} \delta_1' \Big) \norm{\theta_{k-1} - \theta^{\mu_{k-1}}}_\infty + \alpha' C\\
& + (1+\alpha')\Big[\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Big( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Big)+\eps'\Big].
\end{split}
\end{align}
Now suppose that we define $\beta \in (0, 1)$ as follows:
\begin{align}
\beta := \alpha' + (1+\alpha') \alpha^{m+H-1} \delta_1' , \label{eq:defbetaGD}
\end{align} where we note from our assumption in Theorem \ref{theorem3} that $\alpha' + (1+\alpha') \alpha^{m+H-1} \delta_1' <1.$
Furthermore, we denote $\tau$ as follows:
\begin{align}
\tau := &\alpha' C +(1+\alpha')\Big[\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Big( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Big)+\eps'\Big]. \label{eq:defdelta3GD}
\end{align}
Then, our bound in the inequality in \eqref{eq:boundb4labels} can be rewritten as the following:
\begin{align}
\nonumber &\norm{\theta_k - \theta^{\mu_k}}_\infty\nonumber \leq \beta \norm{\theta_{k-1} - \theta^{\mu_{k-1}}}_\infty + \tau.
\end{align}
Iterating over $k$, we get that for any $k$,
\begin{align}
\norm{\theta_k - \theta^{\mu_k}}_\infty \leq \beta^k \norm{\theta_{0} - \theta^{\mu_{0}}}_\infty + \sum_{i=0}^{k-1} \beta^i \tau \label{eq:firstfinite}
\leq\beta^k \norm{\theta_{0} - \theta^{\mu_{0}}}_\infty + \frac{\tau}{1-\beta}.
\end{align}
\noindent\textit{Step 3:}
Since $J_k = \Phi \theta_k$ and $\Phi \theta^{\mu_k}$ is the best approximation of $J^{\mu_k}$ in the span of the feature vectors, we will show the following bound:
\begin{align}
\nonumber \norm{J^{\mu_k}-J_k}_\infty
&\leq \norm{\Phi}_\infty \norm{\theta_k - \theta^{\mu_k}}_\infty + \delta_2'.
\end{align} Then, using the previous step, we will get a bound on $ \norm{J^{\mu_k}-J_k}_\infty$.
\textit{Proof of Step 3:} %
\begin{align}
\nonumber \norm{J^{\mu_k}-J_k}_\infty \nonumber &= \norm{\Phi \theta_k - J^{\mu_k}}_\infty
\leq \norm{\Phi \theta_k - \Phi \theta^{\mu_k}}_\infty + \norm{\Phi \theta^{\mu_k} - J^{\mu_k}}_\infty \\
&\leq \norm{\Phi}_\infty \norm{\theta_k - \theta^{\mu_k}}_\infty + \delta_2'.
\label{eq:finjmukjk}
\end{align}
Using \eqref{eq:finjmukjk}, we will obtain a bound on $\norm{J^{\mu_k}-J_k}_\infty$ as follows:
\begin{align}
\nonumber \norm{J^{\mu_k}-J_k}_\infty \nonumber
&\leq \norm{\Phi}_\infty \norm{\theta_k - \theta^{\mu_k}}_\infty + \delta_2' \\
&\leq \norm{\Phi}_\infty (\beta^k \norm{\theta_{0} - \theta^{\mu_{0}}}_\infty + \frac{\tau}{1-\beta}) + \delta_2' \nonumber\\
&= \beta^k \underbrace{\norm{\Phi}_\infty \norm{\theta_{0} - \theta^{\mu_{0}}}_\infty}_{=: \tau_2} +\underbrace{\norm{\Phi}_\infty \frac{\tau}{1-\beta} + \delta_2'}_{=: \tau_1}. \label{eq: tau1 tau2 or}
\end{align}
\noindent\textit{Step 4:}
We will establish the following bound on $T_{\mu_{k+1}}T^{H-1}J^{\mu_k}$ using the contraction property of the Bellman operators and the property in \eqref{eq:usefulproperties}:
\begin{align*}
-T_{\mu_{k+1}}T^{H-1}J^{\mu_k}
&\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H-1}J^{\mu_k} + \eps e.
\end{align*}
Using properties in \eqref{eq:usefulproperties} and monotonicity, we will repeatedly apply $T_{\mu_{k+1}}$ to both sides and take limits to obtain the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H-1}J^{\mu_k} \geq -\frac{2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps }{1-\alpha}e.
\end{align*}
\textit{Proof of Step 4:} We begin by noting that
\begin{align}
-T_{\mu_{k+1}}T^{H-1}J^{\mu_k} &= -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k + T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps e \nonumber\\
&= \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + T^H J^{\mu_k} - T^H J^{\mu_k} + \eps e \nonumber\\
&\leq 2 \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J^{\mu_k} + \eps e \nonumber\\
&\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H-1}J^{\mu_k} + \eps e. \label{eq:defcdmhge}
\end{align}
The rest of the proof is similar to the arguments in \cite{bertsekastsitsiklis}, \cite{bertsekas2019reinforcement}; however, we have to explicitly incorporate the role of lookahead ($H$) in the remaining steps of the proof. Suppose that we apply the $T_{\mu_{k+1}}$ operator $\ell-1$ times to both sides. Then, due to monotonicity and the fact $T_\mu(J+ce)=T_\mu(J)+\alpha ce,$ for any policy $\mu,$ we have the following:
\begin{align*}
T_{\mu_{k+1}}^\ell T^{H-1}J^{\mu_k} \leq \alpha^{\ell } (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps )e + T_{\mu_{k+1}}^{\ell+1} T^{H-1} J^{\mu_k}.
\end{align*}
Using a telescoping sum, we get the following inequality:
\begin{align*}
T_{\mu_{k+1}}^j T^{H-1} J^{\mu_k} - T^{H-1}J^{\mu_k}
&\geq - \sum_{\ell = 1}^{j} \alpha^{\ell -1} (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps )e.
\end{align*}
The rest of the proof is straightforward.
Subtracting $J^*$ from both sides of the previous inequality and using the contraction property of $T$, we get
\begin{align*}
&\alpha^{H-1} \norm{J^* - J^{\mu_k}}_\infty e + \sum_{\ell = 1}^{j} \alpha^{\ell -1} (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps )e \\&\geq J^* - T^{H-1}J^{\mu_k} + \sum_{\ell = 1}^{j} \alpha^{\ell -1} (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps )e
\\& \geq J^* - J^{\mu_{k+1}}\\
&\geq 0,
\end{align*}
which implies that
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H-1} \norm{J^{\mu_k}-J^*}_\infty + \sum_{\ell = 1}^{j} \alpha^{\ell -1} (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps ), \label{eq: ** 4}
\end{align} since $J^* \geq J^{\mu}$ for all policies $\mu$.
We now plug in for $\norm{J^{\mu_k}-J_k}_\infty$ in \eqref{eq: tau1 tau2 or} to get the following:
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty \nonumber&\leq \alpha^{H-1} \norm{J^{\mu_k}-J^*}_\infty + \sum_{\ell = 1}^{j} \alpha^{\ell -1} (2\alpha^H (\beta^k \tau_2 + \tau_1) + \eps ) \nonumber\\\nonumber
&\leq \alpha^{H-1} \norm{J^{\mu_k}-J^*}_\infty + \beta^k\frac{2\alpha^H \tau_2 }{1-\alpha} + \frac{2\alpha^H\tau_1 + \eps}{1-\alpha}.
\end{align}
We now iterate over $k > 0$ to get the following:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty &\leq \alpha^{k(H-1)} \norm{J^{\mu_0}-J^*}_\infty + \sum_{\ell = 0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \Big[\beta^\ell\frac{2\alpha^H \tau_2 }{1-\alpha} + \frac{2\alpha^H\tau_1 + \eps}{1-\alpha}\Big].\label{eq: fin bound jmuk jstar or}
\end{align}
We take limits as $k \to \infty$ to obtain asymptotic limits on $\norm{J^{\mu_{k}} - J^*}_\infty.$ To do so, we introduce the following lemma:
\begin{lemma} \label{conv lemma 1 or}
$\lim_{k\to \infty}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta^\ell = 0.$
\end{lemma}
\proof{Proof of Lemma \ref{conv lemma 1 or}}
First, notice that $\alpha^{H-1}<1$ and $\beta<1$ from our assumption in Theorem \ref{theorem3}.
Now suppose that $\alpha^{H-1} \leq \beta.$ Then, we get that
\begin{align*}
\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta^\ell &\leq \sum_{\ell=0}^{k-1} \beta^{k-\ell-1} \beta^\ell \\
&\leq \sum_{\ell=0}^{k-1} \beta^{k-1}\\
&= k\beta^{k-1}.
\end{align*} Taking limits as $k \to \infty$ on both sides, we conclude that $\lim_{k\to \infty}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta^\ell = 0.$
Now suppose that $\beta \leq \alpha^{H-1}.$ Then,
\begin{align*}
\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta^\ell &\leq \sum_{\ell=0}^{k-1} \alpha^{(H-1)(k-\ell-1)} \alpha^{(H-1)\ell} \\
&\leq \sum_{\ell=0}^{k-1} \alpha^{k-1}\\
&= k\alpha^{k-1}.
\end{align*}
Taking limits as $k \to \infty$ on both sides, we get that $\lim_{k\to \infty}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta^\ell = 0.$
\Halmos
\endproof
Taking limits on both sides of \eqref{eq: fin bound jmuk jstar or} and using Lemma \ref{conv lemma 1 or}, we have the following:
\begin{align}
\limsup_{k \to \infty} \norm{J^{\mu_{k}} - J^*}_\infty \leq \frac{2\alpha^H\tau_1 + \eps}{(1-\alpha^{H-1})(1-\alpha)}.
\end{align}
Plugging in for $\tau_1$ in the above bound gives the following:
\begin{align}
\limsup_{k \to \infty} \norm{J^{\mu_{k}} - J^*}_\infty \leq \frac{2\alpha^H(\norm{\Phi}_\infty \frac{\tau}{1-\beta} + \delta_2') + \eps}{(1-\alpha^{H-1})(1-\alpha)},
\end{align}
where
$$ \beta := \alpha' + (1+\alpha') \alpha^{m+H-1} \delta_1'$$ and
$$
\tau := \alpha' C +(1+\alpha')\Big[\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Big( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Big)+\eps'\Big],
$$
and we get Theorem \ref{theorem3}.
\Halmos \endproof
\section{Related Work}
In the introduction, we compared our results to those in \cite{efroni2019combine}, which builds on ideas from \cite{efroni2018}. Now, we compare our work to other papers in the literature. The role of lookahead and return in improving the performance of RL algorithms has also been studied in a large number of papers including \cite{moerland2020framework, efroni2020online, tomar2020multistep, efroni2018multiplestep, springenberg2020local, 9407870}. The works of \cite{baxter, veness, lanctot2014monte} explore the role of tree search in RL algorithms. However, to the best of our knowledge, the amount of lookahead and return needed as a function of the feature vectors has not been quantified in prior works.
Approximate policy iteration is a well studied topic, see \cite{bertsekastsitsiklis, bertsekas2019reinforcement, Puterman1978ModifiedPI, scherrer}, for example. However, since their models do not involve a scheme for approximating the value function at each iteration, the role of the depth of lookahead ($H$) and return ($m$) cannot be quantified using their approaches.
It is important to note that our work is different from least squares policy iteration (LSPI), which is a common means of obtaining function approximation parameters from value function estimates. While one of our algorithms uses least-squares estimates, our work more importantly analyzes the use of lookahead and the $m$-step return, which are employed prior to the least-squares step of the algorithm.
The works of \cite{Bertsekas2011ApproximatePI} and \cite{bertsekas2019reinforcement} also study a variant of policy iteration wherein a greedy policy is evaluated approximately using feature vectors at each iteration. These papers also provide rates of convergence as well as a bound on the approximation error. However, our main goal is to understand the relations between function approximation and lookahead/return which are not considered in these other works.
\section{Conclusion}
We show that a minimum threshold on the lookahead and the corresponding policy return may be necessary for approximate policy iteration with function approximation to converge. In particular, we show that if one uses an $m$-step return of an $H$-step lookahead policy, then $m+H$ must be sufficiently large. Possible directions for future work include the following:
\begin{itemize}
\item For problems with a terminal state, it would be interesting to consider cases where the value function of a given policy is estimated using a full rollout which provides an unbiased estimate as in \cite{tsitsiklis2002convergence}.
\item In game playing applications, gradient descent is commonly used to estimate the value function, but temporal-difference learning is used in other applications. It would be interesting to extend our results to the case of TD learning-based policy evaluation.
\item While neural networks are not linear function approximators, recent results on the NTK analysis of neural networks suggest that they can be approximated as linear combinations of basis functions \cite{jacot2018neural,du2018gradient,arora2019fine,ji2019polylogarithmic, cao2019generalization}. Thus, to the extent that the NTK approximation is reasonable, our results can potentially shed light on why the combination of the representation capability of neural networks and tree-search methods work well in practice, although further work is necessary to make this connection precise.
\end{itemize}
\begin{APPENDICES}
\section{Proof of Theorem \ref{mainAPI}}
\label{appendix:thm1}
\begin{remark}
The proof of Theorem \ref{mainAPI} is somewhat simpler than that of Theorem \ref{theorem3} but uses many of the same ideas.
It skips Steps 1 and 2 in the proof of Theorem \ref{theorem3} and instead uses Lemma \ref{mainAPILemma} below to obtain a result analogous to Step 3; the remainder of the argument parallels Step 4 of the proof of Theorem \ref{theorem3}.
\end{remark}
\proof{Proof of Theorem \ref{mainAPI}}
Consider policies $\mu_k$ and $\mu_{k+1},$ where $\mu_0$ can be taken to be any arbitrary policy. We have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H-1}J^{\mu_k} &= -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\&+ T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps e \nonumber\\
&= \alpha^H \norm{J^{\mu_k}-J_k}_\infty e \\\nonumber&- T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps e \nonumber\\
&\overset{(b)}\leq 2 \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J^{\mu_k} + \eps e \nonumber\\
&\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H-1}J^{\mu_k} + \eps e, \label{eq: *}
\end{align}
where $e$ is the vector of all $1$s, (a) and (b) follow from the contraction property of $T^H$ and $T_{\mu_{k+1}}$ and the last inequality follows from standard arguments using the monotonicity properties and the definition of $T$: specifically, note that
$$
TJ^\mu = \max_{\mu'} T_{\mu'} J^\mu \geq T_\mu J^\mu = J^{\mu},
$$
and repeatedly apply $T$ to both sides of the inequality and use monotonicity to obtain
$T^\ell J^\mu \geq T^{\ell-1} J^\mu$ for all $\ell$ and all policies $\mu.$
We can further bound $\norm{J^{\mu_k}-J_k}_\infty$ as follows:
\begin{lemma}
\begin{align*}
\norm{J^{\mu_k}-J_k}_\infty
&\leq \alpha^{m+H-1} \delta_1 \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_1 + \delta_2+ \delta_1 \eps'.
\end{align*} \label{mainAPILemma}
\end{lemma}
\proof{Proof of Lemma \ref{mainAPILemma}}
\begin{align*}
\norm{J^{\mu_k}-J_k}_\infty &= \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty \\
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
&\leq\norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k}_\infty \norm{w_k}_\infty \\ \allowdisplaybreaks
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \delta_1 \eps' \\ \allowdisplaybreaks
&= \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1} - \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_1 \eps'\\
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_1 \eps'\\
&\leq \norm{\scriptM_k}_\infty \norm{ T_{\mu_k}^m T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_1 \eps'\\
&\leq \alpha^m \norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_1 \eps'\\
&\leq \alpha^m\norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^* + J^* - J^{\mu_k}}_\infty +\delta_2 + \delta_1 \eps'\\
&\leq \alpha^m\norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^*}_\infty + \alpha^m\norm{\scriptM_k}_\infty \norm{J^* - J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps' \\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} - J^*}_\infty + \frac{\alpha^m}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_2 + \delta_1 \eps'\\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} -J^{\mu_{k-1}} + J^{\mu_{k-1}}- J^*}_\infty + \frac{\alpha^m}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_2 + \delta_1 \eps'\\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_2 + \delta_1 \eps'\\
&\leq \alpha^{m+H-1} \delta_1 \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_1 + \delta_2+ \delta_1 \eps'.
\end{align*} \Halmos
\endproof
Now suppose that we define $\beta' \in (0, 1)$ as follows:
\begin{align}
\beta' := \alpha^{m+H-1} \delta_1, \label{eq:defbeta}
\end{align} where we note from our assumption in Theorem \ref{mainAPI} that $\alpha^{m+H-1} \delta_1<1.$ Furthermore, we denote $\tau'$ as follows:
\begin{align}
\tau' := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_1 + \delta_2+ \delta_1 \eps'. \label{eq:defdelta3}
\end{align}
Then, our bound in Lemma \ref{mainAPILemma} can be rewritten as the following:
\begin{align*}
\norm{J^{\mu_k}-J_k}_\infty
&\leq \beta' \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \tau'.
\end{align*}
Iterating over $k$, we get that for any $k$,
\begin{align}
\norm{J^{\mu_k}-J_k}_\infty \leq \underbrace{(\beta')^k \norm{J^{\mu_0}-J_0}_\infty + \sum_{i=0}^{k-1} \beta'^i \tau'}_{=: f_k}. \label{eq: pound2}
\end{align}
Putting (\ref{eq: *}) and (\ref{eq: pound2}) together, we have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H-1}J^{\mu_k}
&\leq 2\alpha^H f_k e - T^{H-1}J^{\mu_k} + \eps e.
\end{align}
The rest of the proof is similar to the arguments in \cite{bertsekastsitsiklis}, \cite{bertsekas2019reinforcement}; however, we have to explicitly incorporate the role of lookahead ($H$) in the remaining steps of the proof.
Suppose that we apply the $T_{\mu_{k+1}}$ operator $\ell-1$ times. Then, due to monotonicity and the fact that $T_\mu(J+ce)=T_\mu(J)+\alpha ce,$ for any policy $\mu,$ we have the following:
\begin{align*}
-T_{\mu_{k+1}}^{\ell+1} T^{H-1}J^{\mu_k}
&\leq \alpha^\ell (2\alpha^{H} f_k +\eps)e - T_{\mu_{k+1}}^{\ell}T^{H-1}J^{\mu_k}.
\end{align*}
Using a telescoping sum, we get the following inequality:
\begin{align*}
T_{\mu_{k+1}}^j T^{H-1} J^{\mu_k} - T^{H-1}J^{\mu_k}
&\geq -\sum_{\ell = 1}^{j} \alpha^{\ell-1} (2\alpha^{H} f_k +\eps)e.
\end{align*}
Taking the limit as $j\rightarrow\infty$ on both sides, we have the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H-1}J^{\mu_k} \geq -\frac{2\alpha^{H} f_k +\eps}{1-\alpha}e.
\end{align*}
Rearranging terms and subtracting $J^*$ from both sides, we get the following:
\begin{align*}
\alpha^{H-1} \norm{J^* - J^{\mu_k}}_\infty e + \frac{2\alpha^{H} f_k +\eps}{1-\alpha}e \geq J^* - T^{H-1}J^{\mu_k} + \frac{2\alpha^{H} f_k +\eps}{1-\alpha}e \geq J^* - J^{\mu_{k+1}} \geq 0,
\end{align*}
which implies that
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H-1} \norm{J^{\mu_k}-J^*}_\infty + \frac{2\alpha^{H} f_k +\eps}{1-\alpha},
\end{align} since $J^* \geq J^{\mu}$ for all policies $\mu$.
We now iterate over $k>0$ to get the following:
\begin{align*}
\norm{J^{\mu_{k}} - J^*}_\infty &\leq \alpha^{k(H-1)} \norm{J^{\mu_0}-J^*}_\infty + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)}\frac{2\alpha^{H} f_\ell +\eps}{1-\alpha}\\
&\leq \frac{\alpha^{k(H-1)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)}\frac{2\alpha^{H} f_\ell +\eps}{1-\alpha}.
\end{align*}
where $$f_\ell := (\beta')^\ell \norm{J^{\mu_0}-J_0}_\infty + \sum_{i=0}^{\ell-1} \beta'^i \tau',$$ and $\beta':= \alpha^{m+H-1} \delta_1$ and $\tau' := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_1 + \delta_2+ \delta_1 \eps'$.
We will now obtain asymptotic limits of the above results. First, we provide an upper bound for $f_\ell,$ which we call $\tilde{f}_\ell$ and define as follows:
\begin{align*}
\tilde{f}_\ell := (\beta')^\ell \norm{J^{\mu_0}-J_0}_\infty + \frac{\tau'}{1-\beta'},
\end{align*} noting that $\beta' < 1$ from our assumption in Theorem \ref{mainAPI}. It is easy to see that $f_\ell \leq \tilde{f}_\ell$ for all $\ell$.
Thus, the following holds:
\begin{align*}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \frac{\alpha^{k(H-1)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)}\frac{2\alpha^{H} f_\ell +\eps}{1-\alpha} \\
&\leq \frac{\alpha^{k(H-1)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)}\frac{2\alpha^{H} \tilde{f}_\ell +\eps}{1-\alpha} \\
&\leq \frac{\alpha^{k(H-1)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)}\frac{2\alpha^{H} \Big(\beta'^\ell \norm{J^{\mu_0}-J_0}_\infty + \frac{\tau'}{1-\beta'}\Big) +\eps}{1-\alpha} \\
&\leq \frac{\alpha^{k(H-1)} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell +\frac{2\alpha^H \frac{\tau'}{1-\beta'} +\eps}{(1-\alpha^{H-1})(1-\alpha)}.
\end{align*}
Since $\beta'=\alpha^{m+H-1} \delta_1<1$ from our assumption in Theorem \ref{mainAPI}, we will now show in the following lemma that $\lim_{k\to \infty}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell = 0.$
\begin{lemma} \label{conv lemma or}
$\lim_{k\to \infty}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell = 0.$
\end{lemma}
\proof{Proof of Lemma \ref{conv lemma or}}
First, notice that $\alpha^{H-1}<1$ and $\beta'<1$ from our assumption in Theorem \ref{mainAPI}.
Now suppose that $\alpha^{H-1} \leq \beta'.$ Then, we get that
\begin{align*}
\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell &\leq \sum_{\ell=0}^{k-1} \beta'^{k-\ell-1} \beta'^\ell \\
&\leq \sum_{\ell=0}^{k-1} \beta'^{k-1}\\
&= k\beta'^{k-1}.
\end{align*} Taking limits as $k \to \infty$ on both sides, we conclude that $\lim_{k\to \infty}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell = 0.$
Now suppose that $\beta' \leq \alpha^{H-1}.$ Then,
\begin{align*}
\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell &\leq \sum_{\ell=0}^{k-1} \alpha^{(H-1)(k-\ell-1)} \alpha^{(H-1)\ell} \\
&\leq \sum_{\ell=0}^{k-1} \alpha^{k-1}\\
&= k\alpha^{k-1}.
\end{align*}
Taking limits as $k \to \infty$ on both sides, we get that $\lim_{k\to \infty}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H-1)} \beta'^\ell = 0.$
\Halmos
\endproof
Thus, letting $k \to \infty$ and applying Lemma \ref{conv lemma or}, the transient terms vanish and we get
\begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \frac{2\alpha^H \frac{\tau'}{1-\beta'} +\eps}{(1-\alpha^{H-1})(1-\alpha)}.
\end{align*}
Rewriting this bound, we obtain the following:
\begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_k}-J^*}_\infty &\leq \frac{ 2\alpha^H \Big( \frac{\tau'}{1-\beta'}\Big)+\eps}{(1-\alpha)(1-\alpha^{H-1})} = \frac{c_{m, H}}{(1-\alpha)(1-\alpha^{H-1})},
\end{align*}
where $c_{m, H} := 2\alpha^H \Big( \frac{\tau'}{1-\beta'}\Big)+\eps = 2\alpha^H \Big( \frac{\frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_1 + \delta_2+ \delta_1 \eps'}{1-\alpha^{m+H-1} \delta_1}\Big)+\eps,$ where expressions for $\beta'$ and $\tau'$ in \eqref{eq:defbeta} and \eqref{eq:defdelta3}, respectively, were used to obtain the last equality.
\Halmos \endproof
\section{A Modified Least Squares Algorithm}\label{appendix:thm2}
Suppose Step \ref{step 3 alg} of Algorithm \ref{alg:LSalg} is changed to $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k$. Then, it is still possible to get bounds on the performance of the algorithm when $m$ is sufficiently large. With this modification to the algorithm, when Assumptions \ref{assume 1}, \ref{assume 2}, and \ref{assume 3} hold, we have the following:
\begin{theorem}\label{mainAPISecond}
Suppose that $m$ satisfies $m >\log (\delta_1)/\log (1/\alpha),$ where
$$\delta_1 := \sup_k \norm{\scriptM_k}_\infty = \sup_k\norm{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_\infty,$$
Then
\begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_k}-J^*}_\infty \leq \frac{\tilde{c}_{m, H}}{(1-\alpha)(1-\alpha^{H-1})},
\end{align*}
where $$\tilde{c}_{m, H} := 2\alpha^H \Bigg(\frac{\frac{\alpha^m}{1-\alpha} \delta_1 + \delta_2+ \delta_1 \eps' }{1-\alpha^{m}\delta_1} + \frac{1}{1-\alpha}+\norm{J_0}_\infty \Bigg)+\eps$$ and
$
\delta_2 := \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty.
$
\end{theorem}
\proof{Proof of Theorem \ref{mainAPISecond}}
Consider policies $\mu_k$ and $\mu_{k+1},$ where $\mu_0$ can be taken to be any arbitrary policy. We have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H-1}J^{\mu_k} &= -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k + T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps e \nonumber\\
&= \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps e \nonumber\\
&\overset{(b)}\leq 2 \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J^{\mu_k} + \eps e \nonumber\\
&\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H-1}J^{\mu_k} + \eps e, \label{eq: * 2}
\end{align}
where $e$ is the vector of all $1$s, (a) and (b) follow from the contraction property of $T^H$ and $T_{\mu_{k+1}}$ and the last inequality follows from standard arguments using the monotonicity properties and the definition of $T$: specifically, note that
$$
TJ^\mu = \max_{\mu'} T_{\mu'} J^\mu \geq T_\mu J^\mu = J^{\mu},
$$
and repeatedly apply $T$ to both sides of the inequality and use monotonicity to obtain
$T^\ell J^\mu \geq T^{\ell-1} J^\mu$ for all $\ell$ and all policies $\mu.$
We can further bound $\norm{J^{\mu_k}-J_k}_\infty$ as follows:
\begin{align*}
\norm{J^{\mu_k}-J_k}_\infty &= \norm{\scriptM_k (T_{\mu_k}^m J_{k-1}+w_k)- J^{\mu_k}}_\infty
\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
&\leq\norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k}_\infty \norm{w_k}_\infty
\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \delta_1 \eps' \\
&= \norm{\scriptM_k T_{\mu_k}^m J_{k-1} - \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_1 \eps'\\
&\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_1 \eps'\\
&\leq \norm{\scriptM_k}_\infty \norm{ T_{\mu_k}^m J_{k-1} - J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_1 \eps'\\
&\leq \delta_1 \norm{ T_{\mu_k}^m J_{k-1} - J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_1 \eps'\\
&\leq \alpha^m \delta_1 \norm{J_{k-1} - J^{\mu_k}}_\infty + \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_1 \eps'\\
&\leq \alpha^m \delta_1 \norm{J_{k-1} - J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps'\\
&\leq \alpha^m \delta_1 \norm{J_{k-1} - J^{\mu_{k-1}} + J^{\mu_{k-1}} - J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps'\\
&\leq \alpha^m\delta_1 \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \alpha^m \delta_1 \norm{J^{\mu_{k-1}} - J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps'\\
&\leq \alpha^m \delta_1 \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \frac{\alpha^m \delta_1}{1-\alpha} + \delta_2+ \delta_1 \eps'.
\end{align*}
Iterating over $k$ we get that for all $k$,
\begin{align}
\norm{J^{\mu_k}-J_k}_\infty \leq \underbrace{\frac{\frac{\alpha^m}{1-\alpha} \delta_1 + \delta_2+ \delta_1 \eps' }{1-\alpha^{m}\delta_1} + \frac{1}{1-\alpha}+\norm{J_0}_\infty}_{=: p_{m}}, \label{eq: pound 2}
\end{align}
where we have used the assumption $\alpha^{m} \delta_1 < 1$ and the fact that $\norm{J^{\mu_0}}_\infty\leq 1/(1-\alpha)$ due to Assumption~\ref{assume 3}.
Putting (\ref{eq: * 2}) and (\ref{eq: pound 2}) together, we have the following:
\begin{align*}
-T_{\mu_{k+1}}T^{H-1}J^{\mu_k} \leq \underbrace{\Bigg[2\alpha^H p_{m} + \eps\Bigg]}_{=: \tilde{c}_{m, H}} e -T^{H-1}J^{\mu_k}.
\end{align*}
The rest of the proof follows from the proof of Theorem \ref{mainAPI} with $\tilde{c}_{m, H}$ instead of $c_{m, H}$ and we get Theorem \ref{mainAPISecond}.
\Halmos \endproof
Analogously to the finite-time bound in Appendix \ref{appendix:thm1}, the proof of Theorem \ref{mainAPISecond} gives us the following:
\begin{align}
\norm{J^{\mu_{k}}-J^*}_\infty &\leq \alpha^{(H-1)k} \norm{J^{\mu_0}-J^*}_\infty + \sum_{i=0}^{k-1} \alpha^{(H-1)i} \frac{ \tilde{c}_{m, H}}{1-\alpha},
\label{eq: *** 8}
\end{align} for $k >0.$
Note that the inequality \eqref{eq: *** 8} provides finite-time performance bounds for the modified least squares algorithm, in addition to the asymptotic result stated in Theorem \ref{mainAPISecond}. \color{black}
\end{comment}
\section{Counter-Example} \label{appendix:counterexAppendix}
Even though, in practice, $J^{\mu_k}$ is what we are ultimately interested in, the values $J_k$ computed as part of our algorithm should not go to $\infty$, since the algorithm would otherwise be numerically unstable. In Appendix \ref{appendix:prop1}, we provide a bound on $\norm{J_k-J^*}_\infty$ when $m+H-1$ is sufficiently large, as in Theorem~\ref{mainAPI}. In this appendix, we show that, when this condition is not satisfied, $J_k$ can become unbounded.
\begin{figure}
\centering
\subfloat[\centering $\mu^a$]{\includegraphics[width=2.5cm]{Counter1.drawio-1} }%
\qquad
\subfloat[\centering $\mu^b$]{\includegraphics[width=2.5cm]{Counter2.drawio-1} }%
\caption{An example illustrating the necessity of the condition in Theorem~\ref{mainAPI}}%
\label{fig:TsitVanRoyIm}%
\end{figure}
The example we use is depicted in Figure \ref{fig:TsitVanRoyIm}.
There are two policies, $\mu^a$ and $\mu^b$, and the transitions are deterministic under both policies. The rewards are deterministic and depend only on the states. The rewards associated with the states are denoted by $r(x_1)$ and $r(x_2),$
with $r(x_1)>r(x_2)$. Thus, the optimal policy is $\mu^a$. We assume scalar features $\phi(x_1)=1$ and $\phi(x_2)=2.$
We fix $H=1$.
The lookahead policy is $\mu^a$ only when
\begin{align*}
J_k(x_1) > J_k(x_2), \quad \text{i.e., when} \quad \theta_k > 2\theta_k,
\end{align*}
which is impossible for $\theta_k > 0$. Thus, as long as $\theta_k>0,$ the lookahead policy will be $\mu^b.$
We will now show that $\theta_k$ increases at each iteration when $\delta_1 \alpha^{m+H-1}>1.$ We assume that $\theta_0>0$ and $\scriptD_k = \{1, 2\}$ for all $k.$ A straightforward computation shows that $\delta_1=\frac{6}{5}.$
At iteration $k+1,$ since $\mu_{k+1}=\mu^b,$ the estimates $\hat{J}^{\mu_{k+1}}(i)$ for $i = 1, 2$ are as follows:
\begin{align*}
\hat{J}^{\mu_{k+1}}(1) =r(x_1)+\sum_{i=1}^{m-1} r(x_1) \alpha^i + 2 \alpha^m \theta_k, \quad
\hat{J}^{\mu_{k+1}}(2) = r(x_2) +\sum_{i=1}^{m-1} r(x_2)\alpha^i + 2 \alpha^m \theta_k.
\end{align*}
Thus, from Step \ref{step 4 alg} of Algorithm \ref{alg:LSalg}:
\begin{align*}
&\theta_{k+1} = \arg \min_\theta \sum_{i =1}^2 \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2 \\
&\implies \theta_{k+1} = \frac{\sum_{i=0}^{m-1} \alpha^i r(x_1)}{5} + \frac{2 \sum_{i=0}^{m-1} \alpha^i r(x_2)}{5} + \frac{6 \alpha^m \theta_k}{5}\\
&\implies \theta_{k+1}> \frac{6}{5} \alpha^{m}\theta_k.
\end{align*}
Thus, since $\theta_0 > 0$ and $H=1$, $\theta_{k}$ goes to $\infty$ whenever $\frac{6}{5} \alpha^{m+H-1} = \delta_1 \alpha^{m+H-1} >1.$
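The divergence can also be checked numerically. The following sketch (our own illustration, with assumed values for $\alpha$, $m$, and the rewards) simulates the recursion for $\theta_{k+1}$ derived above:
\begin{verbatim}
# Minimal sketch of the theta-recursion derived above; alpha, m, and the
# rewards are assumed values chosen so that (6/5) * alpha^m > 1 (H = 1).
alpha, m = 0.99, 5
r1, r2 = 1.0, 0.5                            # r(x_1) > r(x_2)
S1 = sum(alpha**i * r1 for i in range(m))    # sum_{i=0}^{m-1} alpha^i r(x_1)
S2 = sum(alpha**i * r2 for i in range(m))

theta = 0.1                                  # theta_0 > 0
for k in range(200):
    theta = S1 / 5 + 2 * S2 / 5 + (6 / 5) * alpha**m * theta
print(theta)  # grows without bound since delta_1 * alpha^{m+H-1} > 1
\end{verbatim}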
\section{Proof of Lemma \ref{cthetalemma}}\label{appendix:lem2}
\proof{Proof of Lemma \ref{cthetalemma}}
\begin{align*}
\frac{1}{1-\alpha} \nonumber&\geq \norm{J^{\mu_{k-1}}-J^{\mu_{k}}}_\infty\\
&=\norm{J^{\mu_k} - \Phi \theta^{\mu_k} - \Big(J^{\mu_{k-1}} - \Phi \theta^{\mu_{k-1}} + \Phi \theta^{\mu_{k-1}} - \Phi \theta^{\mu_{k}} \Big)}_\infty \\\nonumber&\geq \norm{J^{\mu_{k-1}} - \Phi \theta^{\mu_{k-1}} + \Phi \theta^{\mu_{k-1}} - \Phi \theta^{\mu_{k}}}_\infty - \delta'_2 \\ \nonumber
& \geq\norm{\Phi \theta^{\mu_{k-1}} - \Phi \theta^{\mu_{k}}}_\infty - 2\delta'_2 \\ \nonumber& \geq \frac{1}{\sigma_{\min, \Phi}} \norm{ \theta^{\mu_{k-1}} - \theta^{\mu_{k}}}_\infty - 2\delta'_2 \nonumber
\end{align*}
\begin{align*}
\implies \underbrace{\frac{\sigma_{\min, \Phi}}{1-\alpha}+2\sigma_{\min, \Phi} \delta'_2}_{=: C} \nonumber&\geq \norm{ \theta^{\mu_{k-1}} - \theta^{\mu_{k}}}_\infty,
\end{align*}
where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi.$
\Halmos
\endproof
\section{Proof of Lemma \ref{lemmathetatilde}}\label{appendix:lem1}
\proof{Proof of Lemma \ref{lemmathetatilde}}
From Assumption \ref{assume 1}, we have that $\Phi_{\scriptD_k}$ is of rank $d$. Thus, we have the following:
\begin{align*}
\theta^{\mu_k} - \tilde{\theta}^{\mu_k} &= \Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}\Big)^{-1} \Phi_{\scriptD_k}^\top (\scriptP_{k}J^{\mu_{k}} - \scriptP_{k}(T_{\mu_{k}}^m T^{H-1} J_{k-1}+w_k)) \\
&= \Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}\Big)^{-1} \Phi_{\scriptD_k}^\top \scriptP_{k} (J^{\mu_{k}} - (T_{\mu_{k}}^m T^{H-1} \Phi \theta_{k-1}+w_k))
\end{align*}
\begin{align}
\implies \norm{\theta^{\mu_k} - \tilde{\theta}^{\mu_k}}_\infty \nonumber&\leq \norm{\Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}\Big)^{-1} \Phi_{\scriptD_k}^\top \scriptP_{k}}_\infty \norm{J^{\mu_{k}} - (T_{\mu_{k}}^m T^{H-1} \Phi \theta_{k-1}+w_k) }_\infty \\ \nonumber
&\leq \norm{\Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}\Big)^{-1} \Phi_{\scriptD_k}^\top \scriptP_{k}}_\infty \norm{J^{\mu_{k}} - T_{\mu_{k}}^m T^{H-1} \Phi \theta_{k-1}}_\infty + \frac{\delta_1' \eps'}{\norm{\Phi}_\infty}\\ &\leq
\alpha^m \sup_k \norm{\Big( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}\Big)^{-1} \Phi_{\scriptD_k}^\top \scriptP_{k}}_\infty \norm{J^{\mu_{k}} - T^{H-1} \Phi \theta_{k-1} }_\infty + \frac{\delta_1' \eps'}{\norm{\Phi}_\infty}\label{finalthetacomp}
\end{align}
We now obtain a bound for $\norm{J^{\mu_{k}} - T^{H-1} \Phi \theta_{k-1} }_\infty$ as follows:
\begin{align}
\norm{J^{\mu_{k}} - T^{H-1} \Phi \theta_{k-1} }_\infty \nonumber&= \norm{J^{\mu_{k}}-J^* + J^* - T^{H-1} \Phi \theta_{k-1} }_\infty \\ \nonumber
&\leq \norm{J^{\mu_{k}}-J^*}_\infty + \norm{J^* - T^{H-1} \Phi \theta_{k-1} }_\infty \\ \nonumber
&\leq \norm{J^{\mu_{k}}-J^*}_\infty + \alpha^{H-1}\norm{J^* - \Phi \theta_{k-1} }_\infty \\ \nonumber
&\leq \frac{1}{1-\alpha} + \alpha^{H-1}\norm{J^* - \Phi \theta_{k-1} }_\infty \\ \nonumber
&\leq \frac{1}{1-\alpha} + \alpha^{H-1}\norm{J^*-J^{\mu_{k-1}}+J^{\mu_{k-1}} - \Phi \theta_{k-1} }_\infty \\ \nonumber
&\leq \frac{1}{1-\alpha} + \alpha^{H-1}\Big(\norm{J^*-J^{\mu_{k-1}}}_\infty+\norm{J^{\mu_{k-1}} - \Phi \theta_{k-1} }_\infty\Big) \\ \nonumber
&\leq \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\norm{J^{\mu_{k-1}} - \Phi \theta_{k-1} }_\infty \\ \nonumber
&= \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\norm{J^{\mu_{k-1}} -\Phi \theta^{\mu_{k-1}} + \Phi \theta^{\mu_{k-1}} - \Phi \theta_{k-1} }_\infty \\ \nonumber
&\leq \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\norm{J^{\mu_{k-1}} -\Phi \theta^{\mu_{k-1}}}_\infty + \alpha^{H-1}\norm{\Phi \theta^{\mu_{k-1}} - \Phi \theta_{k-1} }_\infty \\
&\leq \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' + \alpha^{H-1}\norm{\Phi}_\infty \norm{ \theta^{\mu_{k-1}} - \theta_{k-1} }_\infty. \label{compjmukthetak}
\end{align}
Putting \eqref{finalthetacomp} and \eqref{compjmukthetak} together gives the following:
\begin{align*}
\norm{\theta^{\mu_k} - \tilde{\theta}^{\mu_k}}_\infty &\leq \frac{\delta_1' \eps'}{\norm{\Phi}_\infty}+\alpha^m \frac{\delta_1'}{\norm{\Phi}_\infty} \Bigg( \frac{1+\alpha^{H-1}}{1-\alpha} + \alpha^{H-1}\delta_2' \Bigg) + \alpha^{m+H-1} \delta_1' \norm{ \theta^{\mu_{k-1}} - \theta_{k-1} }_\infty.
\end{align*}
\Halmos \endproof
\begin{comment}
\section{TD-Learning Approximation} \label{appendix:TD1}
We consider the variant of approximate policy iteration studied in \cite{efroni2019combine}. We define the following operator with parameter $\lambda \in (0, 1)$:
\begin{align}
T_{\lambda}^\mu J \nonumber&= (1-\lambda) \sum_{j=0}^\infty \lambda^j (T_{\mu})^{j+1} J \\
&= J + (I-\alpha \lambda P^{\mu} )^{-1} (T_{\mu} J - J), \label{TDlearnop}
\end{align} where $P^\mu$ is the state transition matrix corresponding to policy $\mu.$ This operator corresponds to the TD($\lambda$) operator in \cite{SuttonBarto}.
The full return ($m=\infty$) is often desired in practice but is impossible to obtain for some MDPs. Instead, one can obtain an estimate of the full return for any policy $\mu_k$ using the operator $T_{\lambda}^{\mu_{k}}$ in \eqref{TDlearnop}, parameterized by $\lambda \in (0, 1)$. Using the $T_{\lambda}^{\mu_k}$ operator to obtain an estimate of $J^{\mu_k}$, our modified algorithm is given in Algorithm \ref{alg:TDalg}.
\begin{algorithm}[tb]
\caption{TD-Learning Approximation Algorithm}
\label{alg:TDalg}
\textbf{Input}: $J_0, m,$ $H, \lambda,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d$ and subsets $\scriptD_k \subseteq \scriptS, k = 0, 1, \ldots.$ Here $\scriptD_k$ is the set of states at which we evaluate the current policy at iteration $k.$ \\
\begin{algorithmic}[1]
\STATE Let $k=0$.
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps$.\\\label{step 2 alg 2}
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\
\STATE Choose $\theta_{k+1}$ to solve
\begin{align*}
\min_\theta \sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2,
\end{align*} \\
where $\Phi$ is a matrix whose rows are the feature vectors.
\STATE $J_{k+1} = \Phi \theta_{k+1}.$
\STATE Set $k \leftarrow k+1.$ Go to 2. \end{algorithmic}
\end{algorithm}
As was the case with our main algorithm, we make Assumption \ref{assume 1}, Assumption \ref{assume 2} and Assumption \ref{assume 3}. Using Assumption \ref{assume 1}, we can succinctly write our iterates as follows:
\begin{equation}
J_{k+1} = \scriptM_{k+1} (T_\lambda^{\mu_{k+1}} T^{H-1} J_k + w_{k+1}). \label{eq:iterateAPITD}
\end{equation}
We will now state a theorem that characterizes the role of $\lambda$ in the convergence of our algorithm:
\begin{theorem}
When $ \delta_1 c_\lambda \alpha^{H-1} <1$, where $\delta_1$ is defined in Theorem \ref{mainAPI} and $c_\lambda := \frac{1-\lambda}{\lambda(1-\lambda\alpha)},$
\begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_k}-J^*}_\infty \leq \frac{c_{m,H, \lambda}}{(1-\alpha)(1-\alpha^{H-1})},
\end{align*}
where $c_{m, H, \lambda} := 2\alpha^H \Big(\frac{\delta_1 c_\lambda\Big(\frac{1 + \alpha^{H-1}}{1-\alpha}\Big) + \delta_2+ \delta_1 \eps'}{1-\delta_1 c_\lambda \alpha^{H-1}} + \frac{1}{1-\alpha} +\norm{J_0}_\infty\Big)+\eps,$ and $\delta_2$ is defined in Theorem \ref{mainAPI}. \label{IterAPITheoremTD}
\end{theorem} The proof of Theorem \ref{IterAPITheoremTD} is as follows:
\proof
Consider the following sequences:
\begin{align*}
V_k^i &:= (1+\lambda+\ldots+\lambda^i)^{-1}\sum_{j=0}^i \lambda^j (T^{\mu_{k}})^{j+1} \big(T^{H-1}J_{k-1}\big) \\
z_k^i &:= V_k^i - J^{\mu_k}.
\end{align*}
First, we obtain bounds on $z_k^i.$ We have the following:
\begin{align*}
& J^{\mu_k} - \norm{T^{H-1}J_{k-1} - J^{\mu_k}}_\infty e \leq T^{H-1} J_{k-1} \leq J^{\mu_k} + \norm{T^{H-1}J_{k-1} - J^{\mu_k}}_\infty e \\
&\implies (1+\lambda+\ldots+\lambda^i)^{-1}\sum_{j=0}^i \lambda^j (T^{\mu_{k}})^{j+1} (J^{\mu_k} - \underbrace{\norm{T^{H-1}J_{k-1}-J^{\mu_k}}_\infty}_{:=\eps_\lambda} e) - J^{\mu_k} \\
&\leq z_k^i \leq (1+\lambda+\ldots+\lambda^i)^{-1}\sum_{j=0}^i \lambda^j (T^{\mu_{k}})^{j+1} (J^{\mu_k} + \norm{T^{H-1}J_{k-1}-J^{\mu_k}}_\infty e) - J^{\mu_k} \\
&\implies (1+\lambda+\ldots+\lambda^i)^{-1}\sum_{j=0}^i \lambda^j (T^{\mu_{k}})^{j+1} (J^{\mu_k} - \eps_\lambda e) - J^{\mu_k} \leq z_k^i
\\&\leq (1+\lambda+\ldots+\lambda^i)^{-1}\sum_{j=0}^i \lambda^j (T^{\mu_{k}})^{j+1} (J^{\mu_k} + \eps_\lambda e) - J^{\mu_k} \\
&\implies - (1+ \lambda+\ldots+\lambda^i)^{-1} \sum_{j=0}^i \lambda^j \alpha^{j+1}\eps_\lambda e \leq z_k^i \leq (1+ \lambda+\ldots+\lambda^i)^{-1} \sum_{j=0}^i \lambda^j \alpha^{j+1}\eps_\lambda e \\
&\implies \norm{z_k^i}_\infty \leq (1+ \lambda+\ldots+\lambda^i)^{-1} \sum_{j=0}^i \lambda^j \alpha^{j+1}\eps_\lambda.
\end{align*}
Since the norm is a continuous function, taking the limit $i \to \infty$ below, we have the following:
\begin{align*}
\norm{J_k - J^{\mu_k}}_\infty &= \norm{\scriptM_k (T_\lambda^{\mu_k} T^{H-1}J_{k-1} +w_k)- J^{\mu_k}}_\infty \\
&\leq \norm{\scriptM_k T_\lambda^{\mu_k} T^{H-1}J_{k-1} - J^{\mu_k}}_\infty + \delta_1 \eps'\\
&\leq \norm{\scriptM_k T_\lambda^{\mu_k} T^{H-1}J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k} -J^{\mu_k}}_\infty + \delta_1 \eps'\\
&\leq \norm{\scriptM_k}_\infty \norm{T_\lambda^{\mu_k} T^{H-1}J_{k-1} -J^{\mu_k}}_\infty +\delta_2+ \delta_1 \eps'\\
&\leq \delta_1 \norm{T_\lambda^{\mu_k} T^{H-1}J_{k-1} - J^{\mu_k}}_\infty +\delta_2 + \delta_1 \eps'\\
&= \delta_1 \lim_{i \to \infty} \norm{V_k^i - J^{\mu_k}}_\infty +\delta_2 + \delta_1 \eps'\\&= \delta_1 \lim_{i \to \infty} \norm{z_k^i}_\infty + \delta_2 + \delta_1 \eps'\\&\leq \delta_1 \frac{\eps_\lambda(1-\lambda)}{\lambda(1-\lambda\alpha)}+\delta_2 + \delta_1 \eps' \\&= \delta_1 \frac{\norm{T^{H-1}J_{k-1}-J^{\mu_k}}_\infty(1-\lambda)}{\lambda(1-\lambda\alpha)} + \delta_2+ \delta_1 \eps'.
\end{align*}
We now have the following:
\begin{align}
\norm{J_k - J^{\mu_k}}_\infty &\leq \delta_1 \norm{T^{H-1}J_{k-1}-J^{\mu_k}}_\infty \underbrace{\frac{1-\lambda}{\lambda(1-\lambda\alpha)}}_{=: c_\lambda} + \delta_2 + \delta_1 \eps'\label{eq:defclambda} \\
&= \delta_1 c_\lambda\norm{ T^{H-1} J_{k-1} -J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps' \\\nonumber
&= \delta_1 c_\lambda \norm{T^{H-1} J_{k-1} - J^* + J^* - J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps'\\\nonumber
&\leq \delta_1 c_\lambda\norm{T^{H-1} J_{k-1} - J^*}_\infty + \delta_1 c_\lambda\norm{J^* - J^{\mu_k}}_\infty + \delta_2+ \delta_1 \eps' \\\nonumber
&\leq \delta_1 c_\lambda \alpha^{H-1} \norm{J_{k-1} - J^*}_\infty + \frac{ \delta_1 c_\lambda}{1-\alpha} + \delta_2+ \delta_1 \eps'\\\nonumber
&= \delta_1 c_\lambda \alpha^{H-1} \norm{J_{k-1} -J^{\mu_{k-1}} + J^{\mu_{k-1}}- J^*}_\infty + \frac{\delta_1 c_\lambda}{1-\alpha} + \delta_2+ \delta_1 \eps'\\ \nonumber
&\leq \delta_1 c_\lambda \alpha^{H-1} \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +\delta_1 c_\lambda\Big(\frac{1 + \alpha^{H-1}}{1-\alpha}\Big) + \delta_2+ \delta_1 \eps'.
\end{align}
When we iterate over $k$, we get the following for all $k$:
\begin{align*}
\norm{J^{\mu_k} - J_k}_\infty \leq \underbrace{\frac{\delta_1 c_\lambda\Big(\frac{1 + \alpha^{H-1}}{1-\alpha}\Big) + \delta_2+ \delta_1 \eps'}{1-\delta_1 c_\lambda \alpha^{H-1}} + \frac{1}{1-\alpha} + \norm{J_0}_\infty}_{=: p_{m, H, \lambda}},
\end{align*} when $\delta_1 c_\lambda \alpha^{H-1} <1.$
The rest of the proof follows from the proof of Theorem \ref{mainAPI} with $p_{m, H, \lambda}$ instead of $p_{m, H}$ and we get Theorem \ref{IterAPITheoremTD}.
\Halmos \endproof
The corresponding finite-time bounds are as follows:
\begin{align}
\norm{J^{\mu_{k}}-J^*}_\infty &\leq \alpha^{(H-1)k} \norm{J^{\mu_0}-J^*}_\infty + \sum_{i=0}^{k-1} \alpha^{(H-1)i} \frac{c_{m, H, \lambda}}{1-\alpha},
\label{eq: *** 22}
\end{align} for $k >0.$
\color{black}
\section{TD-Learning Approximation - A Special Case}\label{appendix:TD2}
When the $\lambda$ in our TD-learning algorithm is very small, we may obtain better bounds using an alternative proof technique. Note that in this case, $T_\lambda^{\mu_{k+1}}$ can be seen as an approximation to the operator $T_{\mu_{k+1}}$. We have the following result that is tailored towards small $\lambda:$
\begin{proposition}
When $z_\lambda \alpha^{H-1} <1,$ where $z_\lambda := (\delta_1 \delta_3 \alpha + \delta_1 \delta_3 + \alpha),$ $\delta_3 := \sup_k \norm{(I - \alpha \lambda P^{\mu_{k}})^{-1} - I }_\infty,$ and $\delta_1, \delta_2$ are defined in Theorem \ref{mainAPI}:
\begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_k}-J^*}_\infty \leq \frac{\tilde{c}_{m,H,\lambda}}{(1-\alpha)(1-\alpha^{H-1})},
\end{align*}
where $\tilde{c}_{m,H,\lambda} := 2\alpha^H \Big(\frac{z_\lambda\Big(\frac{1 + \alpha^{H-1}}{1-\alpha}\Big) + \delta_2+\delta_1\eps'}{1-z_\lambda \alpha^{H-1}} + \frac{1}{1-\alpha}+\norm{J_0}_\infty\Big)+\eps.$ \label{iterAPITheoremTDAlter}
\end{proposition} The proof of Proposition \ref{iterAPITheoremTDAlter} is as follows:
\proof
Note that our iterates can be re-written as follows:
\begin{align*}
J_{k+1} = \scriptM_{k+1} \Bigg(T^{H-1} J_k + (I - \alpha \lambda P^{\mu_{k+1}})^{-1} (T^{\mu_{k+1}}T^{H-1} J_k - T^{H-1} J_k) + w_{k+1}\Bigg).
\end{align*}
We have the following bounds:
\begin{align}
\nonumber \norm{J^{\mu_k}-J_k}_\infty &= \norm{\scriptM_{k} \Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})+w_k\Bigg)- J^{\mu_{k}}}_\infty \\ \nonumber
&\leq \norm{\scriptM_{k} \Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg)- J^{\mu_{k}}}_\infty+\delta_1\eps' \\ \nonumber
&= \norm{\scriptM_{k} \Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg)- \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_{k}}}_\infty+\delta_1\eps' \\\nonumber
&\leq \norm{\scriptM_{k} \Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg)- \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_{k}}}_\infty +\delta_1\eps'\\\nonumber
&\leq \norm{\scriptM_{k} \Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg)- \scriptM_k J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&\leq \norm{\scriptM_{k}}_\infty \norm{\Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg)- J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&\leq \delta_1 \norm{\Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg)- J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&\leq \delta_1 \norm{\Bigg(T^{H-1} J_{k-1} + (I - \alpha \lambda P^{\mu_{k}})^{-1} (T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1})\Bigg) - T^{\mu_{k}}T^{H-1} J_{k-1}}_\infty \\\nonumber&+ \norm{T^{\mu_{k}}T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&=\delta_1 \norm{\Big((I - \alpha \lambda P^{\mu_{k}})^{-1} - I \Big) \Big[ T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1}\Big]}_\infty \\\nonumber&+ \norm{T^{\mu_{k}}T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&\leq \delta_1 \norm{(I - \alpha \lambda P^{\mu_{k}})^{-1} - I }_\infty \norm{ T^{\mu_{k}}T^{H-1} J_{k-1} - T^{H-1} J_{k-1}}_\infty \\\nonumber&+ \norm{T^{\mu_{k}}T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&= \delta_1 \underbrace{\norm{(I - \alpha \lambda P^{\mu_{k}})^{-1} - I }_\infty}_{=: \delta_3} \norm{ T^{\mu_{k}}T^{H-1} J_{k-1} -J^{\mu_k} + J^{\mu_k}- T^{H-1} J_{k-1}}_\infty \\&+ \norm{T^{\mu_{k}}T^{H-1} J_{k-1} - J^{\mu_k}}_\infty +\delta_2 +\delta_1\eps'
\end{align}
\begin{align}
&= \delta_1 \delta_3 \norm{ T^{\mu_{k}}T^{H-1} J_{k-1} -J^{\mu_k}}_\infty + \delta_1 \delta_3 \norm{J^{\mu_k}- T^{H-1} J_{k-1}}_\infty + \norm{T^{\mu_{k}}T^{H-1} J_{k-1} - J^{\mu_k}}_\infty +\delta_2 +\delta_1\eps'\\\nonumber
&\leq \delta_1 \delta_3 \alpha \norm{ T^{H-1} J_{k-1} -J^{\mu_k}}_\infty + \delta_1 \delta_3 \norm{J^{\mu_k}- T^{H-1} J_{k-1}}_\infty + \alpha \norm{T^{H-1} J_{k-1} - J^{\mu_k}}_\infty +\delta_2+\delta_1\eps' \\\label{eq:defzdlambda}
&= \underbrace{(\delta_1 \delta_3 \alpha + \delta_1 \delta_3 + \alpha)}_{=: z_\lambda} \norm{ T^{H-1} J_{k-1} -J^{\mu_k}}_\infty + \delta_2 +\delta_1\eps'\\\nonumber
&= z_\lambda \norm{T^{H-1} J_{k-1} - J^* + J^* - J^{\mu_k}}_\infty + \delta_2+\delta_1\eps'\\\nonumber
&\leq z_\lambda \norm{T^{H-1} J_{k-1} - J^*}_\infty + z_\lambda\norm{J^* - J^{\mu_k}}_\infty + \delta_2+\delta_1\eps' \\\nonumber
&\leq z_\lambda \alpha^{H-1} \norm{J_{k-1} - J^*}_\infty + \frac{z_\lambda}{1-\alpha} + \delta_2+\delta_1\eps'\\\nonumber
&= z_\lambda \alpha^{H-1} \norm{J_{k-1} -J^{\mu_{k-1}} + J^{\mu_{k-1}}- J^*}_\infty + \frac{z_\lambda}{1-\alpha} + \delta_2+\delta_1\eps'\\ \nonumber
&\leq z_\lambda \alpha^{H-1} \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + z_\lambda\Big(\frac{1 + \alpha^{H-1}}{1-\alpha}\Big) + \delta_2+\delta_1\eps'.
\end{align}
When we iterate over $k$, we get the following for all $k$:
\begin{align*}
\norm{J^{\mu_k} - J_k}_\infty \leq \underbrace{\frac{z_\lambda\Big(\frac{1 + \alpha^{H-1}}{1-\alpha}\Big) + \delta_2+\delta_1\eps'}{1-z_\lambda \alpha^{H-1}} + \frac{1}{1-\alpha}+\norm{J_0}_\infty}_{=: \tilde{p}_{m, H, \lambda}},
\end{align*} when $z_\lambda \alpha^{H-1} <1.$
The rest of the proof follows from the proof of Theorem \ref{mainAPI} with $\tilde{p}_{m, H, \lambda}$ in place of $p_{m, H}$ and we get Proposition \ref{iterAPITheoremTDAlter}.
\Halmos \endproof
The corresponding finite-time bounds are as follows:
\begin{align}
\norm{J^{\mu_{k}}-J^*}_\infty &\leq \alpha^{(H-1)k} \norm{J^{\mu_0}-J^*}_\infty + \sum_{i=0}^{k-1} \alpha^{(H-1)i} \frac{\tilde{c}_{m, H, \lambda}}{1-\alpha},
\label{eq: *** 2}
\end{align} for $k >0.$
\color{black}
\end{comment}
\section{Numerical Results} \label{appendix:numerical}
In this appendix, we test our algorithms on the grid world problem used in \cite{efroni2018} and \cite{efroni2019combine}.
For our simulations, we assume a deterministic grid world problem played on an $N \times N$ grid. The states are the squares of the grid and the actions are $\{\text{up}, \text{down}, \text{right}, \text{left}, \text{stay}\}$, which move the agent in the prescribed direction, if possible. In each experiment, a goal state is chosen uniformly at random to have a reward of 1, while each other state has a fixed reward drawn uniformly from $[-0.1, 0.1]$. Unless otherwise mentioned, throughout this section, $N=25$ and $\alpha = 0.9$.
In order to perform linear function approximation, we prescribe a feature vector for each state. In this section, we focus on three particular choices (a construction sketch in code follows the list):
\begin{enumerate}
\item Random feature vectors: each entry of the matrix $\Phi$ is an independent $\mathcal{N}(0, 1)$ random variable
\item Designed feature vectors: the feature vector for a state with coordinates $(x, y)$ is $[x, y, d, 1]^T$, where $d$ is the number of steps required to reach the goal from state $(x, y)$
\item Indicator vectors: the feature vector for each state $i$ is an $N^2$-dimensional indicator vector where only the $i$-th entry is nonzero
\end{enumerate}
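As noted above, the following is a minimal sketch of how the three feature matrices could be constructed (our own illustration, not the paper's exact code; the row ordering of states and the dimension $d=4$ for the random features are assumptions):
\begin{verbatim}
import numpy as np

N = 25
goal = (np.random.randint(N), np.random.randint(N))  # assumed goal square

# 1) Random features: i.i.d. N(0, 1) entries, with d = 4 columns assumed.
Phi_random = np.random.randn(N * N, 4)

# 2) Designed features [x, y, d, 1], where d is the number of steps to the
#    goal, i.e., the Manhattan distance in this deterministic grid world.
Phi_designed = np.array([[x, y, abs(x - goal[0]) + abs(y - goal[1]), 1.0]
                         for x in range(N) for y in range(N)])

# 3) Indicator features: the identity matrix (the tabular setting).
Phi_indicator = np.eye(N * N)
\end{verbatim}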
\begin{figure}
\centering
\includegraphics[width=\linewidth]{h-m-error-plots.png}
\caption{(Top) For random feature vectors, as $m$ and $H$ increase, the value $J_k$ eventually stops diverging. (Bottom) For designed feature vectors, a smaller amount of lookahead and $m$-step return are needed to prevent $J_k$ from diverging. }
\label{fig:hm_plots}
\end{figure}
Recall that our theorems suggest that the amount of lookahead and return depends on the choice of the feature vectors. Our experiments support this observation as well. The amount of lookahead and $m$-step return required is high (often over $30$) for random feature vectors, but we are able to significantly reduce the amount required by using the designed feature vectors which better represent the states.
We test Algorithm \ref{alg:LSalg} in each of our experiments, using the initial condition $J_0 = \theta_0 = 0$. All plots in this section graph an average over 20 trials, where each trial has a fixed random choice of $\scriptD_k$, the set of states used for policy evaluation. Error bars show the standard deviation of the mean. The code used to produce these graphs is included in the Supplementary Material.
\subsection{The effect of $m$ and $H$ on convergence}
In Figure \ref{fig:hm_plots}, we show how $H$ and $m$ affect convergence of the iterates $J_k$ to $J^*$. When $m$ and $H$ are small, the value of $J_k$ sometimes diverges. If the value diverges for even one trial, then the average over trials of $\|J_k - J^*\|_\infty$ also increases exponentially with $k$. However, if the algorithm converges for all trials, then the plot is relatively flat. The $m$ or $H$ required for convergence depends on the parameter $\delta_1$ defined in Theorem~\ref{mainAPI}. Over 20 trials, the average values of $\delta_1$ for our three choices of feature vectors are $30.22$, $16.29$, and $1.0$, respectively. As shown through the counterexample in Appendix \ref{appendix:counterexAppendix}, in general, one needs $m + H - 1 > \log(\delta_1) / \log(1/\alpha)$ for convergence. However, in specific examples, convergence is possible for smaller values of $m+H.$ For example, in our grid world model, $\frac{\log(16.29)}{\log(1/0.9)} \approx 26.5$, but we will observe that such a large value of $m + H$ is not required for convergence.
In Figure \ref{fig:hm_plots}, it is difficult to see how $H$ and $m$ affect the probability of divergence as a function of the representative states chosen to be sampled. Therefore, we introduce Figure \ref{fig:probability_of_convergence}. These plots show the proportion of trials in which the distance $\|J_{k} - J^*\|_\infty$ exceeded $10^5$ after 30 iterations of our algorithm. As expected, the algorithm never diverges for indicator vectors, since our algorithm is then equivalent to the tabular setting. The designed feature vectors clearly require a much smaller amount of lookahead or $m$-step return, well below the amount predicted by the average $\delta_1$ of $16.29$. However, no matter the choice of feature vectors, a large enough value of $H + m$ prevents our algorithm from diverging.
\begin{figure}
\centering
\subfloat[\centering Varying $H$]{\includegraphics[width=0.45\linewidth]{Images/prob_divergence_hs} }%
\qquad
\subfloat[\centering Varying $m$]{\includegraphics[width=0.45\linewidth]{Images/prob_divergence_ms} }%
\caption{We plot the probability that $\|J_k - J^*\|_\infty$ diverges as a function of $H$ and $m$. For the first plot, $m=3$ and for the second plot, $H=3$. In both cases, the algorithm never diverges once $H+m$ is large enough, though a smaller amount of lookahead or $m$-step return are needed for the designed feature vectors.}%
\label{fig:probability_of_convergence}%
\end{figure}
\begin{figure}
\centering
\subfloat[\centering Varying $H$]{\includegraphics[width=0.45\linewidth]{Images/final_policy_hs} }%
\qquad
\subfloat[\centering Varying $m$]{\includegraphics[width=0.45\linewidth]{Images/final_policy_ms} }%
\caption{We plot the final value of $\|J^{\mu_k} - J^*\|_\infty$ after 30 iterations. For the first plot, $m=3$ and for the second plot, $H=3$. As $H$ increases, the final policy improves. With large enough $H$, we obtain the optimal policy. However, past a certain point, increasing $m$ is not helpful for finding a better policy.}%
\label{fig:final_policy_value}%
\end{figure}
\subsection{Convergence to the optimal policy}
In Theorem~\ref{mainAPI}, we show that as $H$ increases, we converge to a policy $\mu_k$ that is closer to the optimal policy. In this section, we experimentally investigate the role of $m$ and $H$ on the final value of $\|J^{\mu_k} - J^*\|_\infty$. The results can be found in Figure \ref{fig:final_policy_value}. As predicted by theory, we do get closer to the optimal policy as $H$ increases. However, increasing $m$ does not help past a certain point, which is also consistent with the theory. Indeed, although $\mu_k$ is approaching the optimal policy $\mu^*$ as $H$ increases, the iterates $J_k$ are not converging to $J^*$ due to error induced by function approximation. Increasing $m$ improves the policy evaluation, but cannot correct for this inherent error from approximating the value function.
\begin{comment}
\begin{figure}[H]
\centering
\subfigure[\centering Varying $H$]{\includegraphics[width=0.45\linewidth]{Images/final_policy_hs} }%
\qquad
\subfigure[\centering Varying $m$]{\includegraphics[width=0.45\linewidth]{Images/final_policy_ms} }%
\caption{We plot the final value of $\|J^{\mu_k} - J^*\|_\infty$ after 30 iterations. For the first plot, $m=3$ and for the second plot, $H=3$. As $H$ increases, the final policy improves. With large enough $H$, we obtain the optimal policy. However, past a certain point, increasing $m$ is not helpful for finding a better policy.}%
\label{fig:final_policy_value}%
\end{figure}
\end{comment}
\section{Bounds on Iterates In Algorithm \ref{alg:LSalg}}\label{appendix:prop1}
In the following proposition, we present a bound on the difference between $J_k$ and $J^*.$
\begin{proposition} \label{IterAPITheorem}
When $\alpha^{m+H-1} \delta_1 <1,$
$$\limsup_{k \to \infty} \norm{J_k - J^*}_\infty \leq \frac{r_{m, H}}{1-q_{m, H}},$$
where $q_{m, H} := \delta_1 \alpha^{m+H-1} + \big( 1+\delta_1 \alpha^m \big)\alpha^{H-1}$ and $r_{m, H} := \big( 1+\delta_1 \alpha^m \big)\Bigg(\alpha^{H-1} p_{m, H} + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2+\delta_1 \eps',$ with $c_{m, H}$ defined in \eqref{eq:defcdmh} and $p_{m, H}$ defined in \eqref{eq: pound}.
\end{proposition}
The proof is as follows.
\begin{proof}
\begin{align*}
\norm{J_{k+1} - J^*}_\infty &= \norm{J_{k+1} -J^{\mu_{k+1}} + J^{\mu_{k+1}}- J^*}_\infty \\
&\leq \norm{J_{k+1} -J^{\mu_{k+1}}}_\infty + \norm{J^{\mu_{k+1}}- J^*}_\infty \\
&\leq \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&= \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -\scriptM_{k+1} J^{\mu_{k+1}} + \scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&\leq \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -\scriptM_{k+1} J^{\mu_{k+1}}}_\infty + \norm{\scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&\leq \norm{\scriptM_{k+1}}_\infty \norm{T_{\mu_{k+1}}^m T^{H-1} J_k - J^{\mu_{k+1}}}_\infty + \norm{\scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&\leq \delta_1 \alpha^m \norm{T^{H-1} J_k - J^{\mu_{k+1}}}_\infty + \delta_2 \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&= \delta_1 \alpha^m \norm{T^{H-1} J_k - J^* + J^* - J^{\mu_{k+1}}}_\infty + \delta_2 +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_1 \eps'\\
&\leq \delta_1 \alpha^m \norm{T^{H-1} J_k - J^*}_\infty + \delta_1 \alpha^m \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_2 \\&+ \norm{J^{\mu_{k+1}}- J^*}_\infty+\delta_1 \eps'\\
&\leq \delta_1 \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \delta_1 \alpha^m \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_2 \\&+ \norm{J^{\mu_{k+1}}- J^*}_\infty+\delta_1 \eps'\\
&= \delta_1 \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big) \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_2+\delta_1 \eps' \allowdisplaybreaks\\
&\overset{(c)}\leq \delta_1 \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big) \Bigg(\alpha^{H-1} \norm{J^{\mu_k}-J^*}_\infty + \frac{ c_{m, H}}{1-\alpha} \Bigg) \\&+ \delta_2 +\delta_1 \eps'\\
&\leq \delta_1 \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big) \\&\Bigg(\alpha^{H-1} (\norm{J^{\mu_k}-J_k}_\infty + \norm{J_k-J^*}_\infty) + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2 +\delta_1 \eps'\allowdisplaybreaks\\
&= \Bigg(\delta_1 \alpha^{m+H-1} + \big( 1+\delta_1 \alpha^m \big)\alpha^{H-1} \Bigg) \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big)\\&\Bigg(\alpha^{H-1} \norm{J^{\mu_k}-J_k}_\infty + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2 +\delta_1 \eps'\allowdisplaybreaks\\
&\overset{(d)}\leq \Bigg(\delta_1 \alpha^{m+H-1} + \big( 1+\delta_1 \alpha^m \big)\alpha^{H-1} \Bigg) \norm{J_k - J^*}_\infty + \big( 1+\delta_1 \alpha^m \big)\\&\Bigg(\alpha^{H-1} p_{m, H} + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_2 +\delta_1 \eps'\\
&= q_{m, H} \norm{J_k - J^*}_\infty + r_{m, H},
\end{align*}
where $q_{m, H}$ and $r_{m, H}$ are as defined in the statement of the proposition. Additionally, $(c)$ follows from \eqref{eq:cprop1} and $(d)$ follows from \eqref{eq: pound}, noting that the inequalities in \eqref{eq:cprop1} and \eqref{eq: pound} hold for all $\eps''>0$.
Iterating over $k$, we get Proposition \ref{IterAPITheorem}.
\end{proof}
We obtain the following inequality in much the same way as inequality \eqref{eq: ***} in the proof of Theorem \ref{mainAPI}:
\begin{align}
\norm{J_{k} - J^*}_\infty \leq q_{m, H}^k \norm{J_0 - J^*}_\infty + \sum_{i=0}^{k-1} q_{m, H}^i r_{m, H}, \label{eq:*** 3}
\end{align} for $k >0.$
Note that the inequality \eqref{eq:*** 3} provides finite-time performance bounds, in addition to the asymptotic result stated in Proposition \ref{IterAPITheorem}.
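As a sanity check, the asymptotic bound of Proposition \ref{IterAPITheorem} can be evaluated numerically. The following sketch uses assumed placeholder values for all constants, including $p_{m,H}$ and $c_{m,H}$, and prints the bound $r_{m,H}/(1-q_{m,H})$ only when $q_{m,H}<1$:
\begin{verbatim}
# Sketch: evaluate q_{m,H} and r_{m,H} / (1 - q_{m,H}) for assumed constants.
alpha, delta1, delta2, eps_p = 0.9, 1.5, 0.1, 0.01   # assumed values
p_mH, c_mH = 20.0, 0.5                               # placeholders

for m, H in [(5, 5), (10, 10), (20, 20)]:
    q = delta1 * alpha**(m + H - 1) + (1 + delta1 * alpha**m) * alpha**(H - 1)
    r = ((1 + delta1 * alpha**m)
         * (alpha**(H - 1) * p_mH + c_mH / (1 - alpha))
         + delta2 + delta1 * eps_p)
    if q < 1:  # the bound is meaningful only when q_{m,H} < 1
        print(m, H, r / (1 - q))
\end{verbatim}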
\end{APPENDICES}
\ACKNOWLEDGMENT{The research presented here was supported in part by a grant from Sandia National Labs \footnote{Sandia National Laboratories is a multimission laboratory managed and operated by National Technology \& Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.} and the NSF Grants CCF 1934986, CCF 1704970, CNS 2106801, ONR Grant N00014-19-1-2566, NSF/USDA Grant AG 2018-67007-28379, and ARO Grant W911NF-19-1-0379.
}
\section{Introduction}\label{intro}
In many applications of reinforcement learning, such as playing chess and Go, the underlying model is known and so the main challenge is in solving the associated dynamic programming problem in an efficient manner. Policy iteration and variants of policy iteration \cite{bertsekas2019reinforcement,Bertsekas2011ApproximatePI,bertsekastsitsiklis} that solve dynamic programming problems rely on computations that are infeasible due to the sizes of the state and action spaces in modern reinforcement learning problems.
As a remedy to this ``curse of dimensionality,'' several state-of-the-art algorithms \cite{silver2017shoji, silver2017mastering, DBLP:journals/corr/MnihBMGLHSK16} employ function approximation, lookahead for policy improvement, $m$-step rollout for policy evaluation, and gradient descent to compute the function approximation; see Section \ref{section2} for definitions of these terms.
Our goal in this paper is to understand the role of multi-step lookahead for policy improvement (i.e., repeatedly applying the Bellman operator multiple times) and $m$-step rollout (which is a technique to approximately evaluate a policy by rolling out the dynamic programming tree for a certain number of steps $m$) on the accuracy of approximate policy iteration techniques. The algorithms we study in this paper are closely related to least-squares policy iteration (LSPI) \cite{parr} and approximate policy iteration (PI); see \cite{bertsekastsitsiklis, bertsekas2019reinforcement}. In the analysis of approximate PI, it is assumed that the policy evaluation and improvement steps have bounded errors, and using these, an error bound is obtained for the algorithm which repeatedly uses approximate policy evaluation and improvement. LSPI is an algorithm that builds on approximate PI where the policy evaluation step uses a least-squares algorithm to estimate the value function for the entire state space using the value function evaluated at a few states. However, the bounds presented in \cite{parr} are simply a special case of the bounds for generic approximate PI, and do not explicitly take into account the details of the implementation of least-squares-based policy evaluation. When such details are taken into account, it turns out that the roles of the depth of lookahead ($H$) and rollout ($m$) become important, and their impact on the error bounds on the performance of approximate policy iteration has not been characterized in prior work. In this paper, on the other hand, we assume that policies are evaluated at a few states using an $m$-step rollout and, as a result, convergence of the algorithm is not guaranteed in general. Additionally, we show that the effect of function approximation can be mitigated using lookahead in the policy improvement step. The use of a partial rollout in our algorithm also makes our work similar to modified policy iteration \cite{Puterman1978ModifiedPI}, which is also called optimistic policy iteration \cite{bertsekastsitsiklis}. To the best of our knowledge, none of these prior works consider the impact of using gradient descent to implement an approximate version of least-squares policy evaluation within approximate PI. Thus, our algorithm and analysis can be viewed as a detailed look at approximate PI and modified PI when linear function approximation, least-squares policy evaluation, and gradient descent are used to evaluate policies.
Our contributions are as follows:
\begin{itemize}
\item We examine the impact of lookahead and $m$-step rollout on approximate policy iteration with linear function approximation. As is common in practice, we assume that we evaluate an approximate value function only for some states at each iteration. We obtain performance bounds for our algorithm under the assumption that the sum of the lookahead and the number of steps in the $m$-step rollout is sufficiently large. We demonstrate through an extension of a counterexample in \cite{Tsitsiklis94feature-basedmethods} that such a condition is necessary, in general, for convergence with function approximation, unlike the tabular setting in \cite{efroni2019combine}. See Appendix \ref{appendix:counterexAppendix} for our counterexample.
\item For ease of exposition, we first present the case where one solves a least-squares problem at each iteration to obtain the weights associated with the feature vectors in the function approximation of the value function. Our performance bounds in this case strictly generalize the bounds in \cite{parr} and \cite{bertsekas2019reinforcement} for approximate PI.
\item We then consider a more practical and widely-used scheme where several steps of gradient descent are used to update the weights of the value function approximation at each iteration. Obtaining performance bounds for the gradient descent algorithm is more challenging and these bounds can be found in Section \ref{SectionGD}.
\item Our results show that the sufficient conditions on the hyperparameters (such as the amount of lookahead, rollout, gradient descent parameters) of the algorithm required for convergence either do not depend on the size of the state space or depend only logarithmically on the size of the state space. \color{black}
\item From a theoretical perspective, our analysis shows that one can improve the upper bound on the error in approximate policy iteration from $1/(1-\alpha)^2$ (see \cite{parr}, \cite{bertsekas2019reinforcement}) to $1/(1-\alpha^{H})(1-\alpha)$ by using lookahead, where $\alpha$ is the discount factor and $H$ is the amount of lookahead used.
\item In addition to asymptotic performance bounds, we also provide finite-time guarantees for our algorithm. Our finite-time bounds show that our algorithm converges exponentially fast in the case of least-squares as well as the case where a fixed number of gradient descent steps are performed in each iteration of the algorithm.
\end{itemize}
\subsection{Other Related Work}
The recent work in \cite{efroni2019combine} considers a variant of policy iteration that utilizes lookahead and approximate policy evaluation using an $m$-step rollout (see Section \ref{section2} for definitions of these terms). As stated in the motivation in \cite{efroni2019combine}, it is well known that Monte Carlo Tree Search (MCTS) \cite{kocisszepesvari, browne} works well in practice
even though the worst-case complexity can be exponential \cite{shah2020nonasymptotic}; see \cite{munosbook} for some analysis of MCTS in MDPs where the number of states that can be visited from a given state is bounded. Motivated by policy iteration, the algorithm in \cite{efroni2019combine} estimates the value function associated with a policy and aims to improve the policy at each step. Policy improvement is achieved by obtaining the ``greedy'' policy in the case of policy iteration or a lookahead policy in the work of \cite{efroni2019combine}, which involves applying the Bellman operator several times to the current iterate before obtaining the greedy policy. The idea is that the application of the Bellman operator several times gives a more accurate estimate of the optimal value function. Then, similarly to policy iteration, the algorithm in \cite{efroni2019combine} aims to evaluate the new policy. The algorithm in \cite{efroni2019combine} uses an $m$-step rollout to compute the value function associated with a policy, i.e., it applies the Bellman operator associated with the policy $m$ times.
The work of \cite{efroni2019combine} establishes that a lookahead can significantly improve the rate of convergence if one uses the value function computed using lookahead in the approximate policy evaluation step. However, their paper does not study the use of function approximation which is critical to handling large state spaces, nor does it quantify the effects of varying $m$ in the convergence of their algorithm.
Now, we compare our work to other papers in the literature. The role of lookahead and rollout in improving the performance of RL algorithms has also been studied in a large number of papers including \cite{shahxie, moerland2020framework, efroni2020online, tomar2020multistep, efroni2018multiplestep, springenberg2020local, 9407870}. The works of \cite{baxter, veness, lanctot2014monte} explore the role of tree search in RL algorithms. However, to the best of our knowledge, the amount of lookahead and rollout needed as a function of the feature vectors has not been quantified in prior works.
The works of \cite{Bertsekas2011ApproximatePI} and \cite{bertsekas2019reinforcement} also study a variant of policy iteration wherein a greedy policy is evaluated approximately using feature vectors at each iteration. These papers also provide rates of convergence as well as a bound on the approximation error. However, our main goal is to understand the relations between function approximation and lookahead/rollout which are not considered in these other works.
\section{Preliminaries} \label{section2}
We consider a Markov Decision Process (MDP), which is defined to be a 5-tuple $(\scriptS, \scriptA, P, r, \alpha)$. The finite set of states of the MDP is $\scriptS$, and $\scriptA$ is the finite set of actions associated with the MDP. Let $P_{ij}(a)$ be the probability of transitioning from state $i$ to state $j$ when taking action $a \in \scriptA$. We denote by $s_k$ the state of the MDP and by $a_k$ the corresponding action at time $k$. We associate with state $s_k$ and action $a_k$ a non-deterministic reward $r(s_k, a_k) \in [0, 1]$ for all $s_k \in \scriptS, a_k \in \scriptA;$ in particular, the rewards are uniformly bounded. Our objective is to maximize the cumulative discounted reward with discount factor $\alpha \in (0, 1).$
Towards this end, we seek to find a deterministic policy $\mu$ which associates with each state $s\in \scriptS$ an action $\mu(s) \in \scriptA$. For every policy $\mu$ and every state $s \in \scriptS$ we define $J^{\mu}(s)$ as follows:
\begin{align*}
J^{\mu}(s) := E\Big[\sum_{k=0}^\infty \alpha^k r(s_k, \mu(s_k))\,\Big|\,s_0=s\Big].
\end{align*}
We define the optimal reward-to-go $J^*$ as
$J^*(s) := \underset{\mu}\max J^\mu(s).$ The objective is to find a policy $\mu$ that maximizes $J^\mu(s)$ for all $s \in \scriptS$. Towards the objective, we associate with each policy $\mu$ a function $T_\mu: \mathbb{R}^{|\scriptS|} \to \mathbb{R}^{|\scriptS|}$ where for $J \in \mathbb{R}^{|\scriptS|},$ the $s$th component of $T_{\mu}J$ is
\begin{align*}
(T_\mu J)(s) = r(s, \mu(s)) + \alpha \sum_{j=1}^{|\scriptS|} P_{sj}(\mu(s)) J(j),
\end{align*} for all $s \in \scriptS$. If the operator $T_{\mu}$ is applied $m$ times to a vector $J \in \mathbb{R}^{|\scriptS|},$ then we say that we have performed an $m$-step rollout of the policy $\mu,$ and the result $T^m_\mu J$ of the rollout is called the return.
Similarly, we define the Bellman operator $T: \mathbb{R}^{|\scriptS|} \to \mathbb{R}^{|\scriptS|}$ with the $s$th component of $TJ$ being
\begin{align}
(TJ)(s) = \underset{a \in \scriptA}\max \Bigg \{ r(s, a) + \alpha \sum_{j=1}^{|\scriptS|} P_{sj}(a)J(j) \Bigg \}. \label{T}
\end{align}
The policy corresponding to the $T$ operator is defined as the \textit{greedy} policy. If the operator $T$ is applied $H$ times to a vector $J \in \mathbb{R}^{|\scriptS|},$ we call the result, $T^H J,$ the $H$-step ``lookahead'' corresponding to $J$. The greedy policy corresponding to $T^H J$ is called the $H$-step lookahead policy, or the lookahead policy, when $H$ is understood. More precisely, given an estimate $J$ of the value function, the lookahead policy is the policy $\mu$ such that $T_\mu(T^{H-1} J)=T(T^{H-1} J).$
It is well known that each time the Bellman operator is applied to a vector $J$ to obtain $TJ,$ the following holds:
\begin{align*}
\norm{TJ-J^*}_\infty\leq \alpha\norm{J-J^*}_\infty.
\end{align*} Thus, applying $T$ to obtain $TJ$ gives a better estimate of the value function than $J.$
The Bellman equations state that the vector $J^\mu$ is the unique solution to the linear equation
\begin{align}
J^\mu = T_\mu J^\mu. \label{bellman}
\end{align}
Additionally, we have that $J^*$ is a solution to
\begin{align*}
J^* = TJ^*.
\end{align*}
Note that every greedy policy w.r.t. $J^*$ is optimal and vice versa \cite{bertsekastsitsiklis}.
We will now state several useful properties of the operators $T$ and $T_\mu$. See \cite{bertsekastsitsiklis} for more on these properties. \color{black} Consider the vector $e \in \mathbb{R}^{|\scriptS|}$ with $e(i) = 1$ for all $i \in \{1, 2, \ldots, |\scriptS|\}.$ We have:
\begin{equation}
T(J + ce) = TJ + \alpha ce, \quad T_\mu(J + ce) = T_\mu J + \alpha ce. \label{eq:usefulproperties}
\end{equation}
Operators $T$ and $T_\mu$ are also monotone:
\begin{align}
J \leq J' \implies TJ \leq TJ', \quad T_\mu J \leq T_\mu J'. \label{monotonicityproperty}
\end{align}
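For concreteness, the following is a minimal numpy sketch of the operators $T_\mu$ and $T$ and of the $H$-step lookahead policy (our own illustration; we assume $P$ is a list of $|\scriptS| \times |\scriptS|$ transition matrices indexed by action and $r$ is an $|\scriptS| \times |\scriptA|$ array of expected rewards):
\begin{verbatim}
import numpy as np

def T_mu(J, mu, P, r, alpha):
    # One application of T_mu: (T_mu J)(s) = r(s, mu(s)) + alpha * P_s(mu(s)) J.
    return np.array([r[s, mu[s]] + alpha * P[mu[s]][s] @ J
                     for s in range(len(J))])

def T(J, P, r, alpha, num_actions):
    # One application of the Bellman operator T: componentwise max over actions.
    Q = np.stack([r[:, a] + alpha * P[a] @ J for a in range(num_actions)], axis=1)
    return Q.max(axis=1)

def lookahead_policy(J, P, r, alpha, num_actions, H):
    # Greedy policy with respect to T^{H-1} J, i.e., the H-step lookahead policy.
    for _ in range(H - 1):
        J = T(J, P, r, alpha, num_actions)
    Q = np.stack([r[:, a] + alpha * P[a] @ J for a in range(num_actions)], axis=1)
    return Q.argmax(axis=1)
\end{verbatim}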
\section{Least-Squares Function Approximation Algorithm}
\begin{algorithm}[tb]
\caption{Least-Squares Function Approximation Algorithm}
\label{alg:LSalg}
\textbf{Input}: $J_0, m,$ $H,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d$ and subsets $\scriptD_k \subseteq \scriptS, k = 0, 1, \ldots.$ Here $\scriptD_k$ is the set of states at which we evaluate the current policy at iteration $k.$
\begin{algorithmic}[1]
\STATE Let $k=0$.
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps_{LA}$.\\\label{step 2 alg}
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\ \label{step 3 alg}
\STATE Choose $\theta_{k+1}$ to solve
\begin{align}
\min_\theta \sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2, \label{step 4 alg}
\end{align} \\
where $\Phi$ is a matrix whose rows are the feature vectors.
\STATE $J_{k+1} = \Phi \theta_{k+1}.$
\STATE Set $k \leftarrow k+1.$ Go to 2.
\end{algorithmic}
\end{algorithm}
Our algorithm is described in Algorithm \ref{alg:LSalg}. We now explain our algorithm and the associated notation in detail. Due to the use of function approximation, our algorithm is an approximation to policy iteration with lookahead. At each iteration index, say, $k$, we have an estimate of the value function, which we denote by $J_k$. To obtain $J_{k+1}$, we perform a lookahead to improve the value function estimate at a certain number of states (denoted by $\scriptD_k$) which can vary with each iteration. For example, $\scriptD_k$ could be chosen as the states visited when performing a tree search to approximate the lookahead process. During the lookahead process, we note that we will also obtain an $H$-step lookahead policy, which we denote by $\mu_{k+1}$. As noted in the Introduction, the computation of $T^{H-1}(J_k)(i)$ for $i \in \scriptD_k$ in Step \ref{step 3 alg} of Algorithm \ref{alg:LSalg} may be computationally infeasible; however, as noted in \cite{efroni2019combine}, techniques such as Monte Carlo tree search (MCTS) are employed in practice to approximately estimate $T^{H-1}(J_k)(i).$ \color{black} In this paper, we model the fact that lookahead cannot be performed exactly due to the associated computational complexity by allowing an error in the lookahead process which we denote by $\eps_{LA}$ in Step~\ref{step 2 alg} of the algorithm.
We obtain estimates of $J^{\mu_{k+1}}(i)$ for $i \in \scriptD_k$, which we call $\hat{J}^{\mu_{k+1}}(i)$. To obtain an estimate of $J^{\mu_{k+1}}(i)$, we perform an $m$-step rollout with policy $\mu_{k+1}$, and obtain a noisy version of $T^m_{\mu_{k+1}}T^{H-1}J_k(i)$ for $i \in \scriptD_k.$ We also model the approximation error in the rollout by adding noise (denoted by $w_{k+1}(i)$ in Step~\ref{step 3 alg} of the algorithm) to the return (the result of the rollout; see Section \ref{section2}) computed at the end of this step. In order to estimate the value function for states not in $\scriptD_k$, we associate with each state $i \in \scriptS$ a feature vector $\phi(i)\in \mathbb{R}^d$ where typically $d \ll |\scriptS|$. The matrix comprised of the feature vectors as rows is denoted by $\Phi$. We use those estimates to find the best fitting $\theta \in \mathbb{R}^d$, i.e.,
\begin{align*}
\min_\theta \sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2.
\end{align*}
The solution to the above minimization problem is denoted by $\theta_{k+1}$. The algorithm then uses $\theta_{k+1}$ to obtain $J_{k+1} = \Phi \theta_{k+1}$. The process then repeats. Note that to compute $\hat{J}^{\mu_{k+1}}(i),$ we obtain noisy estimates of $T_{\mu_{k+1}}^m T^{H-1} J_k (i)$ for $i\in \scriptD_k.$ Another alternative is to instead obtain noisy estimates of $T_{\mu_{k+1}}^m J_k (i)$ for $i\in \scriptD_k.$ It was shown in \cite{efroni2019combine} that the former option is preferable because it has a certain contraction property. Thus, we have chosen to use this computation in our algorithm as well. However, we show in Appendix \ref{appendix:thm2} that the latter alternative also yields a bounded error, which becomes small if $m$ is chosen to be sufficiently large.
\begin{remark}
We note that $\mu_{k+1}(i)$ in Step \ref{step 2 alg} of Algorithm \ref{alg:LSalg} does not have to be computed for all states $i\in \scriptS.$ The actions $\mu_{k+1}(i)$ have to be computed only for those $i\in\scriptS$ that are encountered in the rollout step of the algorithm (Step \ref{step 3 alg}).
\end{remark}
To analyze Algorithm \ref{alg:LSalg}, we make the following assumption which states that we explore a sufficient number of states during the policy evaluation phase at each iteration.
\begin{assumption}\label{assume 1 or}
For each $k \geq 0,$ $\operatorname{rank} \{ \phi(i)\}_{i \in \scriptD_k} = d$.
\end{assumption}
We assume that the noise $w_k$ is bounded.
\begin{assumption}
For some $\eps_{PE} >0,$ the noise in policy evaluation satisfies $\norm{w_k}_\infty \leq \eps_{PE}$ for all $k$. \label{assume 2 or}
\end{assumption}
We also assume that the rewards are bounded.
\begin{assumption}\label{assume 3 or}
$r(s,a) \in [0,1]$ for all $s \in \scriptS, a\in\scriptA.$
\end{assumption}
Using Assumption~\ref{assume 1 or}, $J_{k+1}$ can be written as
\begin{align}
J_{k+1} &= \Phi \theta_{k+1} =\underbrace{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_{=: \scriptM_{k+1}} \hat{J}^{\mu_{k+1}},\label{defMk}\end{align}
where $\Phi_{\scriptD_{k}}$ is a matrix whose rows are the feature vectors of the states in $\scriptD_{k}$ and $\scriptP_k$ is a matrix of zeros and ones such that $\scriptP_k\hat{J}^{\mu_{k+1}}$ is a vector whose elements are a subset of the elements of $\hat{J}^{\mu_{k+1}}$ corresponding to $\scriptD_k$. Note that $\hat{J}^{\mu_{k+1}}(i)$ for $i\notin\scriptD_k$ does not affect the algorithm, so we can define $\hat{J}^{\mu_{k+1}}(i)=T_{\mu_{k+1}}^m T^{H-1} J_k(i)$ for $i\notin\scriptD_k.$
Written concisely, our algorithm is as follows:
\begin{equation}
J_{k+1} = \scriptM_{k+1} (T_{\mu_{k+1}}^m T^{H-1} J_k+w_{k+1}), \label{eq:iterateAPI}
\end{equation}
where $\mu_{k+1}$ is defined in Step 2 of the algorithm.
Since $w_k(i)$ for $i\notin\scriptD_k$ does not affect the algorithm, we define $w_k(i)=0$ for $i\notin\scriptD_k.$
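To make \eqref{eq:iterateAPI} concrete, the following sketch implements one iteration of Algorithm \ref{alg:LSalg} with exact lookahead, reusing the operator sketch from Section \ref{section2} (the Gaussian noise model for $w_{k+1}$ and the representation of $\scriptD_k$ as an index list are our own assumptions for illustration):
\begin{verbatim}
def algorithm1_step(theta, Phi, D_k, P, r, alpha, num_actions, m, H,
                    noise_scale=0.0):
    J = Phi @ theta
    mu = lookahead_policy(J, P, r, alpha, num_actions, H)      # Step 2
    G = J.copy()
    for _ in range(H - 1):                                     # T^{H-1} J_k
        G = T(G, P, r, alpha, num_actions)
    for _ in range(m):                                         # T_mu^m T^{H-1} J_k
        G = T_mu(G, mu, P, r, alpha)
    J_hat = G[D_k] + noise_scale * np.random.randn(len(D_k))   # Step 3
    Phi_D = Phi[D_k]                                           # Step 4: least squares
    theta_next = np.linalg.solve(Phi_D.T @ Phi_D, Phi_D.T @ J_hat)
    return theta_next, mu
\end{verbatim}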
Now we will state our theorem which characterizes the role of lookahead ($H$) and return ($m$)
on the convergence of approximate policy iteration with function approximation.
\begin{theorem}\label{mainAPI}
Suppose that $m$ and $H$ satisfy $m + H -1>\log (\delta_{FV})/\log (1/\alpha),$ where
$$\delta_{FV} := \sup_k \norm{\scriptM_k}_\infty = \sup_k\norm{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_\infty.$$
Then, under Assumptions \ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \underbrace{\frac{\alpha^{kH} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}k\max(\alpha^{H},\beta)^{k-1}}_{ \text{ finite-time component }} +\underbrace{\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}}_{ \text{ asymptotic component }}.
\label{eq:mainAPI bound}
\end{align}
where $$\tau:= \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE},$$
$$\beta:=\alpha^{m+H-1} \delta_{FV},$$
and
$$
\delta_{app} := \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty.
$$
\end{theorem}
The proof of Theorem \ref{mainAPI} can be found in Appendix \ref{appendix:thm1}. We now make several comments about the implications of Theorem \ref{mainAPI}:
\begin{enumerate}
\item In conjunction with the counterexample in Appendix \ref{appendix:counterexAppendix}, Theorem \ref{mainAPI} shows that while $\norm{J^{\mu_k}-J^*}_\infty$ depends on the function approximation error ($\delta_{app}$) and the feature vectors ($\delta_{FV}$), the effect of these terms diminishes exponentially with increased $H$, with the exception of the tree search error ($\eps_{LA}$).
\item It is useful to compare our asymptotic bound with the asymptotic bound for approximate PI in \cite{bertsekas2019reinforcement}. There, it is assumed that $m=\infty$, $\eps_{PE}=0$ and $H=1,$ in which case our bound is identical to the one in \cite{bertsekas2019reinforcement}. However, when $H>1,$ our asymptotic error is proportional to $1/(1-\alpha)(1-\alpha^{H})$ which is much better than the $1/(1-\alpha)^2$ bound for approximate policy iteration \cite{bertsekas2019reinforcement}. When $\alpha\rightarrow 1,$ the expected discounted reward is of the order of $1/(1-\alpha),$ thus the $1/(1-\alpha)^2$ bound on the error is generally considered to be very loose. Our result shows that the use of lookahead significantly improves this error bound. Additionally, our bound is able to capture situations where a full rollout (i.e., $m=\infty$) is impossible to perform.
\end{enumerate}
The proof of Theorem \ref{mainAPI} is closely related to the proofs of Theorems \ref{theorem 2 or}-\ref{theorem 3 or}. The proofs of Theorems~\ref{theorem 2 or}-\ref{theorem 3 or} are presented in the next section while we defer the proof of Theorem \ref{mainAPI} to Appendix \ref{appendix:thm1}. We note that the above result is fundamentally different from the conclusion in Theorem 4 in \cite{efroni2019combine} where one requires a condition on $m$ and $H$ for convergence when one uses $J_k$ instead of using $T^{H-1}J_k$ in Step 2 of the algorithm. Here, we have shown that even when one uses $T^{H-1}J_k,$ one may need large $m+H$ for convergence due to the use of function approximation.
We can additionally characterize the approximation error of our iterates, $J_k$, by computing bounds on the asymptotic error $\limsup_{k \to \infty} \norm{J_k - J^*}_\infty.$ The bounds along with their derivations can be found in Appendix \ref{appendix:prop1}. The corresponding finite-time bounds can be easily obtained from the proof of Proposition \ref{IterAPITheorem} in Appendix \ref{appendix:prop1}.
It is important to note that the upper bounds on $\norm{J^{\mu_k}-J^*}_\infty$ and $\norm{J_k - J^*}_\infty$ illustrate that $J^{\mu_k}$ approximates $J^*$ much better than $J_k$ does. Thus, algorithms need not wait for the value function estimates to converge before the corresponding policies reach near optimality.
In \cite{bertsekas2021lessons}, it is noted that, in reinforcement learning to play computer games or board games, it is not uncommon during training to get a relatively crude estimate of the value function, which is improved by lookahead and $m$-step return during actual game play. Our analysis would also apply to this situation -- we have not explicitly differentiated between training and game play in our analysis.
Theorem \ref{mainAPI} can be used to make the following observation: how close $J^{\mu_k}$ is to $J^*$ depends on four factors: the representation power of the feature vectors and the feature vectors themselves ($\delta_{app}, \delta_{FV}$); the amount of lookahead ($H$); the extent of the rollout ($m$); and the approximation in the policy determination and policy evaluation steps ($\eps_{LA}$ and $\eps_{PE}$). Further, it is easy to see that lookahead and rollout help mitigate the effect of the feature vectors and their limited ability to represent the value functions.
\section{Gradient Descent Algorithm} \label{SectionGD}
Solving the least-squares problem in Algorithm~\ref{alg:LSalg} involves a matrix inversion, which can be computationally expensive. We therefore propose an alternative algorithm that performs $\eta_k$ steps of gradient descent with stepsize $\gamma$ at each iteration $k$, where the gradient is that of the least-squares objective in Step \ref{step 4 alg}.
The gradient descent-based algorithm is presented in Algorithm~\ref{alg:GDalg}.
\begin{algorithm}[tb]
\caption{Gradient Descent Algorithm}
\label{alg:GDalg}
\textbf{Input}: $\theta_0, m,$ $H,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d,$ and $\scriptD_k,$ which is the set of states for which we evaluate the current policy at iteration $k.$
\begin{algorithmic}[1]
\STATE $k=0, J_0 = \Phi \theta_0$. \\
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps_{LA}$. \\
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} J_k(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\
\STATE \label{step 4 unbiased} $\theta_{k+1, 0} := \theta_k.$ For $\ell = 1, 2, \ldots, \eta_{k+1},$ iteratively compute the following:
\begin{align}
\theta_{k+1, \ell} &= \theta_{k+1,\ell-1} - \gamma \nabla_\theta c(\theta;\hat{J}^{\mu_{k+1}})|_{\theta_{k+1,\ell-1}}, \label{eq:iterthetaGD}
\end{align} where
\begin{align*}
c(\theta;\hat{J}^{\mu_{k+1}}) := \frac{1}{2}\sum_{i \in \scriptD_k } \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2,
\end{align*} \\
and $\Phi$ is a matrix whose rows are the feature vectors.\\
\STATE Define
\begin{align*}
\theta_{k+1} &= \theta_{k+1,\eta_{k+1}},
\end{align*} and set $$J_{k+1} = \Phi \theta_{k+1}.$$
\STATE Set $k \leftarrow k+1.$ Go to Step 2.
\end{algorithmic}
\end{algorithm}
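As a minimal illustration of Step \ref{step 4 unbiased}, the following sketch performs the $\eta_{k+1}$ gradient steps on the least-squares objective. The array names and shapes are illustrative assumptions: \texttt{Phi\_D} holds the feature vectors of the states in $\scriptD_k$ and \texttt{J\_hat\_D} the corresponding targets $\hat{J}^{\mu_{k+1}}(i)$.
\begin{verbatim}
import numpy as np

def gradient_descent_step(theta, Phi_D, J_hat_D, gamma, eta):
    """Run eta gradient steps on c(theta) = 0.5*||Phi_D @ theta - J_hat_D||^2."""
    for _ in range(eta):
        grad = Phi_D.T @ (Phi_D @ theta - J_hat_D)  # gradient of the objective
        theta = theta - gamma * grad
    return theta
\end{verbatim}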
In order to present our main result for the gradient descent version of our algorithm, we define, for any policy $\mu_k$, the quantity $\tilde{\theta}^{\mu_k}$, which will be used in the proofs below:
\begin{align}
\tilde{\theta}^{\mu_k} \nonumber&:= \arg\min_\theta \frac{1}{2}\norm{\Phi_{\scriptD_k} \theta - \scriptP_{k}(T_{\mu_{k}}^m T^{H-1} J_{k-1}+w_k)}_2^2.
\end{align}
Note that
\begin{align}
\Phi\tilde{\theta}^{\mu_k}= \scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k), \label{eq: theta tilde muk or}
\end{align}
where $\scriptM_k$ is defined in \eqref{defMk}.
Thus, $\tilde{\theta}^{\mu_k}$ represents the function approximation of the estimate of $J^{\mu_k}$ obtained from the $m$-step return.
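For concreteness, the sketch below computes $\tilde{\theta}^{\mu_k}$ on synthetic placeholder data by solving the least-squares problem directly, numerically checks the identity \eqref{eq: theta tilde muk or}, and evaluates $\norm{\scriptM_k}_\infty$ (whose supremum over $k$ is $\delta_{FV}$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_states, d = 8, 3
Phi = rng.normal(size=(n_states, d))
D = np.array([0, 2, 5, 7])             # sampled states (illustrative)
target = rng.normal(size=n_states)     # stands in for T_mu^m T^{H-1} J_{k-1} + w_k

Phi_D = Phi[D, :]
theta_tilde, *_ = np.linalg.lstsq(Phi_D, target[D], rcond=None)

# Operator form: M_k = Phi (Phi_D^T Phi_D)^{-1} Phi_D^T, applied to the
# sampled entries of the target (P_k is the row selection onto D).
M = Phi @ np.linalg.inv(Phi_D.T @ Phi_D) @ Phi_D.T
assert np.allclose(Phi @ theta_tilde, M @ target[D])

delta_FV_k = np.abs(M).sum(axis=1).max()   # ||M_k||_inf for this sample set
\end{verbatim}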
We now present two propositions, which will be used in the subsequent theorems, that bound the convergence of approximate policy iteration with function approximation when gradient descent is used to approximately solve the least-squares problem at each iteration.
\begin{proposition}\label{prop1}
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H} \norm{J^{\mu_k}-J^*}_\infty + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha},\label{eq: jmu-jk or}
\end{align}
where
\begin{align}
\norm{J^{\mu_{k}}-J_k}_\infty \leq
a_k \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +b_k,\label{eq: ineq 2 or}
\end{align}
$$a_k := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_k := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}},
$$
where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi$ and $\alpha_{GD, \gamma}:=\sup_k \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})|,$ with $\lambda_i$ denoting the $i$-th largest eigenvalue of a matrix.
\end{proposition}
The proof of Proposition \ref{prop1} is presented later in this section. Using Proposition \ref{prop1} and iterating in the special case where $a_k \leq a$ for all $k$ with $0<a<1$ and $b_k \leq b$ for all $k$ with $b>0$, we get the following:
\begin{proposition}\label{prop2}
When $a_k \leq a$ for all $k$, where $0<a<1,$ and $b_k \leq b$ for all $k$, where $b>0$, the following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty\nonumber &\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}k \max(\alpha^{H},a)^{k-1}\norm{J^{\mu_0}-J_0}_\infty +\frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}.
\end{align}
\end{proposition}
Using Proposition \ref{prop2}, we can get Theorems \ref{theorem 2 or} and \ref{theorem 3 or} which give us finite-time and asymptotic bounds for the cases where $\eta_k$ is constant and $\eta_k$ is increasing to infinity, respectively:
\begin{theorem}\label{theorem 2 or}
Suppose that $\gamma, m,$ and $H$ satisfy
\begin{align}
\gamma < \frac{1}{d\sup_k \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2}, \label{eq: gamma assumption or}
\end{align}
and
\begin{align}
m + H >1+\log (2\delta_{FV})/\log (1/\alpha).\label{eq: m H assumption or}
\end{align}
Furthermore, consider the case where $\eta_k$ is a constant, which we call $\eta,$ where $$\eta>\log (\frac {3\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}})/\log (1/\alpha_{GD, \gamma}).$$
Then, under Assumptions~\ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \underbrace{\frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{(1-\alpha)^2}k \max(\alpha^{H},a_\eta)^{k-1}}_{ \text{ finite-time component }} + \underbrace{\frac{2\alpha^H \frac{b_\eta}{1-a_\eta}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}}_{ \text{ asymptotic component }}, \label{eq: case a bound}
\end{align}
where
$$a_\eta := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_\eta := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}.
$$
Taking limits on both sides as $k \to \infty,$ we have the following asymptotic bound:
\begin{align*}
\limsup_{k\to \infty} \norm{J^{\mu_k}-J^*}_\infty \leq \frac{2\alpha^H \frac{b_\eta}{1-a_\eta}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}.
\end{align*}
\end{theorem}
\begin{theorem}\label{theorem 3 or}
Consider $\eta_k$ where $\eta_k$ is increasing and $\eta_k \to \infty.$ Suppose that
$\gamma, m,$ and $H$ satisfy \eqref{eq: gamma assumption or} and \eqref{eq: m H assumption or}. Then, under Assumptions~\ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align}
\limsup_{k\to \infty} \norm{J^{\mu_k}-J^*}_\infty \leq \frac{2\alpha^H \frac{b^*}{1-a^*}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}, \label{eq: recover ls}
\end{align}
where
$$a^*:= \alpha^{m+H-1} \delta_{FV}$$ and
$$
b^* :=\frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}.
$$
\end{theorem}
Before we present the proofs of Propositions \ref{prop1}-\ref{prop2} and Theorems \ref{theorem 2 or}-\ref{theorem 3 or}, we make the following remarks:
In the case where $\eta_k$ is constant, i.e., $\eta_k = \eta$: when $\gamma$ is sufficiently small and $\eta$ is sufficiently large, we obtain an exponential rate of convergence to the asymptotic error, provided that $m$ and $H$ are sufficiently large.
When we increase $\eta$, our asymptotic error becomes smaller until it reaches the asymptotic error of the least-squares algorithm, i.e., when $\eta \rightarrow \infty$, we recover the asymptotic error of Algorithm \ref{alg:LSalg}.
If it is difficult to ascertain whether $\eta$ is sufficiently large, one can consider an increasing sequence $\eta_k$ such that $\eta_k \to \infty.$ For such a sequence, our asymptotic error term is the same as that of the least-squares problem.
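As an illustration of the threshold on $\eta$ in Theorem \ref{theorem 2 or}, the sketch below computes the smallest admissible constant $\eta$ and the resulting $a_\eta$ for hypothetical parameter values (all numbers are placeholders, not measured quantities):
\begin{verbatim}
import numpy as np

alpha, m, H = 0.9, 10, 5
delta_FV = 1.2
alpha_GD = 0.8                              # sup_k max_i |1 - gamma*lambda_i|
S, norm_Phi_inf, sigma_min = 100, 2.0, 0.5  # |S|, ||Phi||_inf, sigma_min(Phi)

kappa = np.sqrt(S) * norm_Phi_inf / sigma_min
threshold = np.log(3 * kappa) / np.log(1 / alpha_GD)
eta = int(np.floor(threshold)) + 1          # smallest integer strictly above it
a_eta = alpha ** (m + H - 1) * delta_FV \
    + kappa * alpha_GD ** eta * (alpha ** (m + H - 1) * delta_FV + 1)
assert alpha ** (m + H - 1) * delta_FV < 0.5   # from the condition on m and H
assert a_eta < 1                               # guaranteed by the two conditions
print(f"eta = {eta}, a_eta = {a_eta:.3f}")
\end{verbatim}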
\proof{Proof of Proposition \ref{prop1}}
We break the proof of the proposition into three steps.
\noindent\textit{Step 1:}
In this step, since $\theta_{k}$ is obtained by taking $\eta_{k}$ steps of gradient descent towards $\tilde{\theta}^{\mu_{k}}$ beginning from $\theta_{k-1}$, we show that the following holds:
\begin{align*}
\norm{\theta_{k} - \tilde{\theta}^{\mu_{k}}}_2
\leq \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k-1} - \tilde{\theta}^{\mu_{k}}}_2,
\end{align*}
where $\alpha_{GD, \gamma}:=\sup_k \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})|$ and $\lambda_i$ denotes the $i$-th largest eigenvalue of a matrix.
We note that since $$0 < \lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}) \leq \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_2^2 \leq d \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2 \leq d \sup_k \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2,$$
$\alpha_{GD, \gamma}<1$ when $\gamma < \frac{1}{d\sup_k \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2}$.
\textit{Proof of Step 1:} Recall that the iterates in Equation \eqref{eq:iterthetaGD} can be written as follows:
\begin{align*}
\theta_{k,\ell} &= \theta_{k,\ell-1} - \gamma \nabla_\theta c(\theta;\hat{J}^{\mu_{k}})|_{\theta_{k,\ell-1}} =\theta_{k,\ell-1} - \gamma \Big( \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} \theta_{k,\ell-1} - \Phi_{\scriptD_{k-1}}^\top \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k})\Big).
\end{align*}
Since
\begin{align*}0 &= \nabla_\theta c(\theta;\hat{J}^{\mu_{k}})|_{\tilde{\theta}^{\mu_{k}}}= \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} \tilde{\theta}^{\mu_{k}} - \Phi_{\scriptD_{k-1}}^\top \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k}),
\end{align*}
we have the following:
\begin{equation*}
\begin{array}{lll}
\theta_{k,\ell} &=& \theta_{k,\ell-1} - \gamma \Big( \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} \theta_{k,\ell-1} - \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} \tilde{\theta}^{\mu_{k}} - \Phi_{\scriptD_{k-1}}^\top \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k}) \\&+& \Phi_{\scriptD_{k-1}}^\top \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k})\Big) \\
&=& \theta_{k,\ell-1} - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}).
\end{array}
\end{equation*}
Subtracting $\tilde{\theta}^{\mu_{k}}$ from both sides gives:
\begin{align*}
\theta_{k,\ell} - \tilde{\theta}^{\mu_{k}} &= \theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}} - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}})\\&= (I - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}}) (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}).
\end{align*}
Thus,
\begin{align*}
\norm{\theta_{k,\ell} - \tilde{\theta}^{\mu_{k}}}_2&= \norm{(I - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}}) (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}})}_2\\&\leq \norm{I - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}}}_2 \norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2\\
&\leq \max_i |\lambda_i (I - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}})|\norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2\\
&\leq \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}})|\norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2 \\
&\leq \underbrace{\sup_k \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})|}_{=: \alpha_{GD, \gamma}}\norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2,
\end{align*} where $\lambda_i$ denotes the $i$-th largest eigenvalue of a matrix.
Iterating over $k,$ the following holds:
\begin{align*}
\norm{\theta_{k} - \tilde{\theta}^{\mu_{k}}}_2 &=\norm{\theta_{k,\eta_{k}} - \tilde{\theta}^{\mu_{k}}}_2\\
&\leq \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k,0} - \tilde{\theta}^{\mu_{k}}}_2\\
&= \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k-1} - \tilde{\theta}^{\mu_{k}}}_2.
\end{align*}
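As a numerical sanity check of Step 1, the following sketch runs the iteration \eqref{eq:iterthetaGD} on a randomly generated instance and verifies that each gradient step contracts the distance to the least-squares solution by at most $\max_i |1-\gamma\lambda_i(\Phi_{\scriptD}^\top \Phi_{\scriptD})|$; the data is synthetic and for illustration only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 3
Phi_D = rng.normal(size=(n, d))
J_hat = rng.normal(size=n)

A = Phi_D.T @ Phi_D
gamma = 0.9 / np.linalg.eigvalsh(A).max()     # makes all |1 - gamma*lam_i| < 1
rate = np.abs(1 - gamma * np.linalg.eigvalsh(A)).max()

theta_tilde, *_ = np.linalg.lstsq(Phi_D, J_hat, rcond=None)
theta = np.zeros(d)
for _ in range(5):
    err_before = np.linalg.norm(theta - theta_tilde)
    theta = theta - gamma * (A @ theta - Phi_D.T @ J_hat)  # one gradient step
    err_after = np.linalg.norm(theta - theta_tilde)
    assert err_after <= rate * err_before + 1e-9           # per-step contraction
\end{verbatim}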
\textit{Step 2}: Using Step 1 and matrix norm properties, we obtain the following bound on $\norm{J^{\mu_{k}}-J_k}_\infty:$
\begin{align}
\norm{J^{\mu_{k}}-J_k}_\infty \leq
a_k \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +b_k,\label{eq: first ineq or}
\end{align}
$$a_k := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_k := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}.
$$
\textit{Proof of Step 2:}
Using equivalence and sub-multiplicative properties of matrix norms, we have the following:
\begin{alignat*}{2}
&\frac{1}{\norm{\Phi}_\infty} \norm{\Phi \theta_k-\Phi \tilde{\theta}^{\mu_{k}}}_\infty &&\leq \norm{\theta_k - \tilde{\theta}^{\mu_{k}}}_\infty
\\& &&\leq \norm{\theta_k - \tilde{\theta}^{\mu_{k}}}_2
\\& &&\leq \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k-1} - \tilde{\theta}^{\mu_{k}}}_2
\\& &&\leq \frac{1}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}} \norm{\Phi\theta_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_2
\\& &&\leq \frac{\sqrt{|S|}}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}} \norm{\Phi\theta_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty\\
&\implies \norm{J_k-\Phi \tilde{\theta}^{\mu_{k}}}_\infty&&\leq \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}} \norm{J_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty ,
\end{alignat*}
where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi$ and the last line follows from the fact that $J_k := \Phi \theta_k.$
The above implies the following:
\begin{align}
\norm{J^{\mu_{k}}-J_k}_\infty &\leq \norm{\Phi\tilde{\theta}^{\mu_{k}}-J^{\mu_k}}_\infty +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}\norm{J_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty\nonumber\\
&= \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)-J^{\mu_k}}_\infty +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}\norm{J_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty,\label{eq: label 1 or}
\end{align}
where the equality follows from \eqref{eq: theta tilde muk or}.
Now we bound $\norm{J_{k-1}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty$ as follows:
\begin{align}
\norm{J_{k-1}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty
\nonumber&\leq \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \norm{ J^{\mu_{k-1}}-J^{\mu_{k}}}_\infty + \norm{J^{\mu_{k}}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty \\
\nonumber&\leq \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \frac{1}{1-\alpha} + \norm{J^{\mu_{k}}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty
\\
&\leq \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \frac{1}{1-\alpha} + \norm{J^{\mu_{k}}-\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)}_\infty, \label{eq: label 2 or}
\end{align}
where the last line follows from \eqref{eq: theta tilde muk or}. We introduce Lemma \ref{mainGDLemma} to upper bound $\norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty$ as follows:
\begin{lemma}
\begin{align*}
\norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty
&\leq \alpha^{m+H-1} \delta_{FV} \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}.
\end{align*} \label{mainGDLemma}
\end{lemma}
The proof of Lemma \ref{mainGDLemma} is in Appendix \ref{appendix:mainGDLemmaProof}.
Putting \eqref{eq: label 1 or}, \eqref{eq: label 2 or}, and Lemma \ref{mainGDLemma} together, we get the following:
\begin{align*}
\norm{J^{\mu_{k}}-J_k}_\infty \leq
a_k \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +b_k,
\end{align*}
\begin{align*}a_k := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}( \alpha^{m+H-1} \delta_{FV}+1) \end{align*} and
\begin{align*}
b_k := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}.
\end{align*}
\noindent\textit{Step 3:}
We will establish the following bound on $T_{\mu_{k+1}}T^{H}J^{\mu_k}$ using the contraction property of the Bellman operators and the properties in \eqref{eq:usefulproperties}:
\begin{align*}
-T_{\mu_{k+1}}T^{H}J^{\mu_k}
&\leq 2\alpha^H f_k e - T^{H}J^{\mu_k} + \eps_{LA} e. \nonumber
\end{align*}
Using properties in \eqref{eq:usefulproperties} and monotonicity, we will repeatedly apply $T_{\mu_{k+1}}$ to both sides and take limits to obtain the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H}J^{\mu_k}
&\geq - \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e.
\end{align*}
\textit{Proof of Step 3:} We begin by noting that
\begin{align}
-T_{\mu_{k+1}}T^{H}J^{\mu_k} &\leq -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k \nonumber+ T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps_{LA} e \nonumber\\
&= \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps_{LA} e \nonumber\\
&\overset{(b)}\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H}J^{\mu_k} + \eps_{LA} e, \nonumber
\end{align}
where $e$ is the vector of all $1$s, and (a) and (b) follow from monotonicity and the contraction properties of $T_{\mu_{k+1}}T^{H-1}$ and $T^{H}$, respectively.
The rest of the proof is similar to the arguments in \cite{bertsekastsitsiklis}, \cite{bertsekas2019reinforcement}; however, we have to explicitly incorporate the role of lookahead ($H$) in the remaining steps of the proof. Suppose that we apply the $T_{\mu_{k+1}}$ operator $\ell$ times to both sides. Then, due to monotonicity and the fact that $T_\mu(J+ce)=T_\mu(J)+\alpha ce$ for any policy $\mu,$ we have the following:
\begin{align*}
T_{\mu_{k+1}}^\ell T^{H}J^{\mu_k} \leq \alpha^{\ell } (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} )e + T_{\mu_{k+1}}^{\ell+1} T^{H} J^{\mu_k}.
\end{align*}
Using a telescoping sum, we get the following inequality:
\begin{align*}
T_{\mu_{k+1}}^j T^{H} J^{\mu_k} - T^{H}J^{\mu_k}
&\geq - \sum_{\ell = 1}^{j} \alpha^{\ell -1} (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} )e.
\end{align*}
Taking limits as $j \to \infty,$ we obtain the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H}J^{\mu_k}
&\geq - \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e.
\end{align*}
The rest of the proof is straightforward.
Subtracting $J^*$ from both sides of the previous inequality and using the contraction property of $T$, we get:
\begin{align*}
&\alpha^{H} \norm{J^* - J^{\mu_k}}_\infty e + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e \\&\geq J^* - T^{H}J^{\mu_k} + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e
\\& \geq J^* - J^{\mu_{k+1}}\\
&\geq 0,
\end{align*}
which implies that
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H} \norm{J^{\mu_k}-J^*}_\infty + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha},\label{eq: ** 4}
\end{align} since $J^* \geq J^{\mu}$ for all policies $\mu$. The above, together with the inequality in \eqref{eq: first ineq or}, gives us Proposition \ref{prop1}.
\Halmos
\endproof
We now prove Proposition \ref{prop2} by iterating over $k$ using Proposition \ref{prop1}.
\proof{Proof of Proposition \ref{prop2}}
First, noting the assumptions of Proposition \ref{prop2} that $a_k \leq a$ for all $k$, where $0<a<1,$ and $b_k \leq b$ for all $k$, where $b>0$, we iterate over $k$ using \eqref{eq: ineq 2 or} to obtain a bound on $\norm{J^{\mu_k}-J_k}_\infty$ as follows:
\begin{align}
\norm{J^{\mu_{k}}-J_{k}}_\infty \leq a^k \norm{J_0-J^{\mu_0}}_\infty + b\sum_{j=0}^{k-1} a^j. \label{eq: ineq 1 last or}
\end{align}
Now, we iterate over $k$ in \eqref{eq: jmu-jk or} to get the following:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty \nonumber&\leq \alpha^{kH} \norm{J^{\mu_0}-J^*}_\infty + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} \norm{J^{\mu_{\ell}}-J_\ell}_\infty + \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}\\
&\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} \norm{J^{\mu_{\ell}}-J_\ell}_\infty + \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}.\label{eq: ineq 2 last or}
\end{align}
Combining the inequalities in \eqref{eq: ineq 1 last or} and \eqref{eq: ineq 2 last or} gives us the following:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty &\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} \Big[a^\ell \norm{J_0-J^{\mu_0}}_\infty + b\sum_{j=0}^{\ell-1} a^j \Big] \nonumber\\&+ \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} \Big[ a^\ell \norm{J_0-J^{\mu_0}}_\infty + \frac{b}{1-a} \Big] + \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\norm{J_0 - J^{\mu_0}}_\infty \sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} a^\ell \nonumber+ \frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\norm{J_0 - J^{\mu_0}}_\infty \sum_{\ell = 0}^{k-1}\max(\alpha^{H},a)^{k-1} \nonumber+ \frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&= \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}k \max(\alpha^{H},a)^{k-1}\norm{J^{\mu_0}-J_0}_\infty +\frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})},\nonumber
\end{align}
where the second and second-to-last inequalities use $\sum_{j=0}^{\ell-1} a^j \leq \frac{1}{1-a}$ and $\alpha^{(k-\ell-1)H} a^\ell \leq \max(\alpha^{H},a)^{k-1}$, respectively, both of which follow from the assumption that $0<a<1.$
\Halmos
\endproof
We now use Proposition \ref{prop2} to prove Theorems \ref{theorem 2 or}-\ref{theorem 3 or}.
\proof{Proof of Theorem \ref{theorem 2 or}}
When $\eta_k=\eta,$ the following holds:
$$a_k = a_\eta := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_k = b_\eta := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}.
$$
Note that from our assumptions on $m,$ $H,$ $\gamma,$ and $\eta$, we have $0<a_\eta<1$ and $0< b_\eta$.
To see that $a_\eta < 1$, observe from our assumptions in Theorem \ref{theorem 2 or} that \begin{align}\alpha^{m+H-1} \delta_{FV} < \frac{1}{2}\label{eq: ** or}\end{align} and $$\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1) < \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}\frac{3}{2}< \frac{1}{2}.$$
Putting the above two lines together gives us
\begin{align*}
\alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1)< 1.
\end{align*}
So, we can directly use Proposition \ref{prop2} to obtain \eqref{eq: case a bound}.
\Halmos
\endproof
\proof{Proof of Theorem \ref{theorem 3 or}}
Take any constant $0 <c^*< 1- \alpha^{m+H-1} \delta_{FV},$ where $c^*$ is a margin of error. Define $k(c^*)$ to be the smallest value of $k$ such that:
\begin{align} \eta_{k(c^*)} >\log\Big(\frac{1}{c^*}\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}}( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}+\frac{1}{1-\alpha})\Big)/\log(1/\alpha_{GD, \gamma}). \label{eq: c* assume or}
\end{align}
We know such a $k(c^*)$ exists since $\eta_k \to \infty$.
We define $a_{c^*}$ and $b_{c^*}$ as follows:
$$a_{c^*} := \alpha^{m+H-1} \delta_{FV} + c^*,$$
$$b_{c^*} := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE} + c^*.$$
It is easy to see from \eqref{eq: c* assume or} and the definitions of $a_k$ and $b_k$ in Proposition \ref{prop1} that $a_k \leq a_{c^*}$ and $b_k \leq b_{c^*}$ when $k > k(c^*),$ since $0<\alpha_{GD, \gamma}<1$ from our assumption on $\gamma$ in Theorem \ref{theorem 3 or}.
From our assumptions on $c^*$, $m, \gamma,$ and $H$, $0<a_{c^*}<1$ and $0< b_{c^*}$, where we use a similar technique as in the proof of Theorem \ref{theorem 2 or} with $\alpha^{m+H-1}\delta_{FV}< 1-c^*$ in place of \eqref{eq: ** or} to show that $a_{c^*}<1$.
So, we can use Proposition \ref{prop2} and begin iterating at $k(c^*)$ to obtain finite time bounds on $\norm{J^{\mu_{k}} - J^*}_\infty$ as follows:
\begin{align}
&\norm{J^{\mu_{k}} - J^*}_\infty
\nonumber \\&\leq \underbrace{\frac{\alpha^{(k-k(c^*))H}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}(k-k(c^*))\max(a_{c^*}, \alpha^{H})^{k-k(c^*)-1}\norm{J^{\mu_{k(c^*)}}-J_{k(c^*)}}_\infty}_{ \text{ finite-time component }}+ \underbrace{\frac{2\alpha^H \frac{b_{c^*}}{1-a_{c^*}}+\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}}_{ \text{ asymptotic component }}. \label{eq: fin time jmu nk}
\end{align}
Using Proposition \ref{prop1}, we can upper bound $\norm{J^{\mu_{k(c^*)}}-J_{k(c^*)}}_\infty$ as follows:
$$\norm{J^{\mu_{k(c^*)}}-J_{k(c^*)}}_\infty \leq \prod_{i=1}^{k(c^*)} a_{i}\norm{J_0-J^{\mu_0}}_\infty + \sum_{j=1}^{k(c^*)} b_j \prod_{i=j+1}^{k(c^*)} a_i,$$ where $a_k$ and $b_k$ are defined in Proposition \ref{prop1}.
Taking limits on both sides of \eqref{eq: fin time jmu nk}, we get the following:
\begin{align}
\limsup_{k\to\infty} \norm{J^{\mu_{k}} - J^*}_\infty
\nonumber\leq \frac{2\alpha^H \frac{b_{c^*}}{1-a_{c^*}}+\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}.
\end{align}
Since the above holds for every valid $c^* > 0$, letting $c^* \to 0$ gives Theorem \ref{theorem 3 or}.
\endproof\Halmos
Looking through the steps of our proof, one can obtain finite-time bounds for the case where $\eta_k$ is increasing. However, the algorithm consists of two loops: one corresponding to policy iteration and the other corresponding to gradient descent within each policy iteration step. It is hard to compare the relative complexities of each step within these loops. Therefore, a finite-time analysis does not shed much light on the amount of computation needed to execute the algorithm with an increasing sequence $\eta_k.$ However, it is interesting to note that for all $k>k(c^*),$ the algorithm converges exponentially fast in the number of policy iteration steps although the number of gradient descent steps within each policy iteration step is increasing.
\section{Conclusion}
Practical RL algorithms that deal with large state spaces implement some form of approximate policy iteration. In traditional analyses of approximate policy iteration, for example in \cite{bertsekas2019reinforcement}, it is assumed that there is an error in the policy evaluation step and an error in the policy improvement step. In this paper, we seek to understand the role of function approximation in the policy evaluation step and the associated changes that one has to make to the approximate policy iteration algorithm (such as lookahead) to counteract the effect of function approximation. Our main conclusion is that lookahead provides two benefits: (i) it mitigates the effects of function approximation, rollout, and the choice of specific feature vectors, and (ii) from a theoretical perspective, it improves the upper bound on the error in approximate policy iteration from $1/(1-\alpha)^2$ to $1/\big((1-\alpha^{H})(1-\alpha)\big).$
Possible directions for future work include the following:
\begin{itemize}
\item For problems with a terminal state, it would be interesting to consider cases where the value function of a given policy is estimated using a full rollout which provides an unbiased estimate as in \cite{tsitsiklis2002convergence}.
\item In game playing applications, gradient descent is commonly used to estimate the value function, but temporal-difference learning is used in other applications. It would be interesting to extend our results to the case of TD learning-based policy evaluation.
\item While neural networks are not linear function approximators, recent results on the NTK analysis of neural networks suggest that they can be approximated as linear combinations of basis functions \cite{jacot2018neural,du2018gradient,arora2019fine,ji2019polylogarithmic, cao2019generalization}. Thus, to the extent that the NTK approximation is reasonable, our results can potentially shed light on why the combination of the representation capability of neural networks and tree-search methods work well in practice, although further work is necessary to make this connection precise.
\end{itemize}
\begin{APPENDICES}
\section{Proof of Lemma \ref{mainGDLemma}} \label{appendix:mainGDLemmaProof}
\proof{Proof of Lemma \ref{mainGDLemma}}
\begin{align*}
& \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty \\
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
&\leq\norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k}_\infty \norm{w_k}_\infty \\ \allowdisplaybreaks
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE} \\ \allowdisplaybreaks
&= \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1} - \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_k}_\infty \norm{ T_{\mu_k}^m T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m\norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^* + J^* - J^{\mu_k}}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m\norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^*}_\infty + \alpha^m\norm{\scriptM_k}_\infty \norm{J^* - J^{\mu_k}}_\infty + \delta_{app}+ \delta_{FV} \eps_{PE} \\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} - J^*}_\infty + \frac{\alpha^m}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} -J^{\mu_{k-1}} + J^{\mu_{k-1}}- J^*}_\infty + \frac{\alpha^m}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^{m+H-1} \delta_{FV} \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}.
\end{align*} \Halmos
\endproof
\section{Proof of Theorem \ref{mainAPI}}
\label{appendix:thm1}
\begin{remark}
The proof of Theorem \ref{mainAPI} is somewhat simpler than that of Theorems \ref{theorem 2 or}-\ref{theorem 3 or} but uses many of the same ideas.
The proof of Theorem \ref{mainAPI} skips Steps 1 and 2 in the proof of Proposition \ref{prop1} and instead uses Lemma \ref{mainGDLemma} to obtain an analogous result to Step 2. The rest of the proof of Proposition \ref{prop1} also applies to the proof of Theorem \ref{mainAPI}.
\end{remark}
\proof{Proof of Theorem \ref{mainAPI}}
Consider policies $\mu_k$ and $\mu_{k+1},$ where $\mu_0$ can be taken to be any arbitrary policy. We have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H}J^{\mu_k} &\leq -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k \nonumber+ T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps_{LA} e \nonumber\\
&= \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps_{LA} e \nonumber\\
&\overset{(b)}\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H}J^{\mu_k} + \eps_{LA} e, \label{eq: *}
\end{align}
where $e$ is the vector of all $1$s, (a) and (b) follow from the contraction property of $T^H$ and $T_{\mu_{k+1}}$.
Since $\norm{J^{\mu_k}-J_k}_\infty= \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty$ by \eqref{eq:iterateAPI}, we can further bound $\norm{J^{\mu_k}-J_k}_\infty$ using Lemma \ref{mainGDLemma}. We now rewrite the recursion from Lemma \ref{mainGDLemma} in a more compact form.
Define $\beta \in (0, 1)$ as follows:
\begin{align}
\beta := \alpha^{m+H-1} \delta_{FV}, \label{eq:defbeta}
\end{align} where we note from our assumption in Theorem \ref{mainAPI} that $\alpha^{m+H-1} \delta_{FV}<1.$ Furthermore, we denote $\tau$ as follows:
\begin{align}
\tau := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}. \label{eq:defdelta3}
\end{align}
Then, our bound from Lemma \ref{mainGDLemma} can be rewritten as the following:
\begin{align}
\norm{J^{\mu_k}-J_k}_\infty
&\leq \beta \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \tau.\label{eq: iteration or}
\end{align}
Iterating over $k$, we get that for any $k$,
\begin{align}
\norm{J^{\mu_k}-J_k}_\infty \leq \underbrace{\beta^k \norm{J^{\mu_0}-J_0}_\infty + \sum_{i=0}^{k-1} \beta^i \tau}_{=: f_k}. \label{eq: pound2}
\end{align}
Putting (\ref{eq: *}) and (\ref{eq: pound2}) together, we have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H}J^{\mu_k}
&\leq 2\alpha^H f_k e - T^{H}J^{\mu_k} + \eps_{LA} e.
\end{align}
The rest of the proof is similar to the arguments in \cite{bertsekastsitsiklis}, \cite{bertsekas2019reinforcement}; however, we have to explicitly incorporate the role of lookahead ($H$) in the remaining steps of the proof.
Suppose that we apply the $T_{\mu_{k+1}}$ operator $\ell$ times. Then, due to monotonicity and the fact that $T_\mu(J+ce)=T_\mu(J)+\alpha ce,$ for any policy $\mu,$ we have the following:
\begin{align*}
-T_{\mu_{k+1}}^{\ell+1} T^{H}J^{\mu_k}
&\leq \alpha^\ell (2\alpha^{H} f_k +\eps_{LA})e - T_{\mu_{k+1}}^{\ell}T^{H}J^{\mu_k}.
\end{align*}
Using a telescoping sum, we get the following inequality:
\begin{align*}
T_{\mu_{k+1}}^j T^{H} J^{\mu_k} - T^{H}J^{\mu_k}
&\geq -\sum_{\ell = 1}^{j} \alpha^{\ell-1} (2\alpha^{H} f_k +\eps_{LA})e.
\end{align*}
Taking the limit as $j\rightarrow\infty$ on both sides, we have the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H}J^{\mu_k} \geq -\frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha}e.
\end{align*}
Rearranging terms and subtracting $J^*$ from both sides, we get the following:
\begin{align*}
\alpha^{H} \norm{J^* - J^{\mu_k}}_\infty e + \frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha}e \geq J^* - T^{H}J^{\mu_k} + \frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha}e \geq J^* - J^{\mu_{k+1}} \geq 0,
\end{align*}
which implies that
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H} \norm{J^{\mu_k}-J^*}_\infty + \frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha},
\end{align} since $J^* \geq J^{\mu}$ for all policies $\mu$.
We iterate over $k>0$ to get the following:
\begin{align*}
\norm{J^{\mu_{k}} - J^*}_\infty &\leq \alpha^{kH} \norm{J^{\mu_0}-J^*}_\infty + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)H}\frac{2\alpha^{H} f_\ell +\eps_{LA}}{1-\alpha}\\
&\leq \frac{\alpha^{kH} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)H}\frac{2\alpha^{H} f_\ell +\eps_{LA}}{1-\alpha},
\end{align*}
where $$f_\ell := \beta^\ell \norm{J^{\mu_0}-J_0}_\infty + \sum_{i=0}^{\ell-1} \beta^i \tau,$$ and $\beta:= \alpha^{m+H-1} \delta_{FV}$ and $\tau := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}$.
We will now obtain finite-time bounds of $\norm{J^{\mu_{k}} - J^*}_\infty$ using the above results. The following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \frac{\alpha^{kH} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)H}\frac{2\alpha^{H} f_\ell +\eps_{LA}}{1-\alpha} \nonumber\\
&\leq \frac{\alpha^{kH} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)H}\frac{2\alpha^{H} \Big(\beta^\ell \norm{J^{\mu_0}-J_0}_\infty + \frac{\tau}{1-\beta}\Big) +\eps_{LA}}{1-\alpha} \nonumber\\
&\leq \frac{\alpha^{kH} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)H} \beta^\ell +\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}\nonumber\\
&\leq \frac{\alpha^{kH} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}\sum_{\ell = 0}^{k-1}\max(\alpha^{H},\beta)^{k-1} +\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}\nonumber\\
&\leq \frac{\alpha^{kH} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}k\max(\alpha^{H},\beta)^{k-1} +\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)} \label{eq: *** or},
\end{align}
where the second inequality uses $f_\ell \leq \beta^\ell \norm{J^{\mu_0}-J_0}_\infty + \frac{\tau}{1-\beta}$ and the second-to-last inequality holds due to the assumption in Theorem \ref{mainAPI}.
\Halmos \endproof
\section{A Modified Least Squares Algorithm}\label{appendix:thm2}
Suppose Step \ref{step 3 alg} of Algorithm \ref{alg:LSalg} is changed to $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k$. Then, it is still possible to get bounds on the performance of the algorithm when $m$ is sufficiently large. With this modification to the algorithm, we have the following:
\begin{proposition}\label{mainAPISecond}
Suppose that $m$ satisfies $m>\log (\delta_{FV})/\log (1/\alpha),$ where
$$\delta_{FV} := \sup_k \norm{\scriptM_k}_\infty = \sup_k\norm{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_\infty.$$
Then, under Assumptions \ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align*}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \underbrace{\frac{\alpha^{kH} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}k\max(\alpha^{H},\beta')^{k-1}}_{ \text{ finite-time component }} +\underbrace{\frac{2\alpha^H \frac{\tau'}{1-\beta'} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}}_{ \text{ asymptotic component }},
\end{align*}
where $$\beta':= \alpha^m \delta_{FV},$$
$$\tau':=\frac{\alpha^m \delta_{FV}}{1-\alpha} + \delta_{app} + \delta_{FV} \eps_{PE},$$
and
$$
\delta_{app} := \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty.
$$
\end{proposition}
\begin{proof}{Proof of Proposition \ref{mainAPISecond}}
The proof of Proposition \ref{mainAPISecond} is identical to the proof of Theorem \ref{mainAPI} except for the iteration in \eqref{eq: iteration or}. We thus give the following iteration, which can be substituted into our proof of Theorem \ref{mainAPI}:
\begin{align*}
\norm{J_k-J^{\mu_k}}_\infty &=\norm{\scriptM_k (T_{\mu_k}^m J_{k-1}+w_k)- J^{\mu_k}}_\infty \\
&\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
&\leq\norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k}_\infty \norm{w_k}_\infty \\ \allowdisplaybreaks
&\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE} \\ \allowdisplaybreaks
&= \norm{\scriptM_k T_{\mu_k}^m J_{k-1} - \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \sup_k \norm{\scriptM_k}_\infty \norm{ T_{\mu_k}^m J_{k-1} - J^{\mu_k}}_\infty + \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \delta_{FV} \norm{ J_{k-1} - J^{\mu_k}}_\infty + \delta_{app} + \delta_{FV} \eps_{PE}\\
&=\alpha^m \delta_{FV} \norm{ J_{k-1} -J^{\mu_{k-1}}+J^{\mu_{k-1}}- J^{\mu_k}}_\infty + \delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \delta_{FV} \norm{ J_{k-1} -J^{\mu_{k-1}}}_\infty+\alpha^m \delta_{FV}\norm{J^{\mu_{k-1}}- J^{\mu_k}}_\infty + \delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \delta_{FV} \norm{ J_{k-1} -J^{\mu_{k-1}}}_\infty+\frac{\alpha^m \delta_{FV}}{1-\alpha} + \delta_{app} + \delta_{FV} \eps_{PE}.
\end{align*}
Substituting $$\beta':= \alpha^m \delta_{FV}$$ and $$\tau' :=\frac{\alpha^m \delta_{FV}}{1-\alpha} + \delta_{app} + \delta_{FV} \eps_{PE},$$ in place of $\beta$ and $\tau$, respectively, in Theorem \ref{mainAPI} and the proof of Theorem \ref{mainAPI}, we obtain Proposition \ref{mainAPISecond}.
\end{proof}
\section{Bounds on Iterates In Algorithm \ref{alg:LSalg}}\label{appendix:prop1}
In the following proposition, we present a bound on the difference between $J_k$ and $J^*.$
\begin{proposition} \label{IterAPITheorem}When $\alpha^{m+H-1} \delta_{FV} <1,$
\begin{align*}
\limsup_{k\to\infty} \norm{J_{k} - J^*}_\infty &\leq \frac{ \big( 1+\delta_{FV} \alpha^m \big) \Big[\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)} \Big] + \delta_{app}+\delta_{FV} \eps_{PE}}{1-\delta_{FV} \alpha^{m+H-1}},
\end{align*}
where $\beta$ and $\tau$ are defined in Theorem \ref{mainAPI}.
\end{proposition}
The proof is as follows.
\begin{proof}{Proof of Proposition \ref{IterAPITheorem}}
\begin{align*}
\norm{J_{k+1} - J^*}_\infty &= \norm{J_{k+1} -J^{\mu_{k+1}} + J^{\mu_{k+1}}- J^*}_\infty \\
&\leq \norm{J_{k+1} -J^{\mu_{k+1}}}_\infty + \norm{J^{\mu_{k+1}}- J^*}_\infty \\
&\leq \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -J^{\mu_{k+1}}}_\infty +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&= \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -\scriptM_{k+1} J^{\mu_{k+1}} + \scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -\scriptM_{k+1} J^{\mu_{k+1}}}_\infty + \norm{\scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_{k+1}}_\infty \norm{T_{\mu_{k+1}}^m T^{H-1} J_k - J^{\mu_{k+1}}}_\infty + \norm{\scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&\leq \delta_{FV} \alpha^m \norm{T^{H-1} J_k - J^{\mu_{k+1}}}_\infty + \delta_{app} +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&= \delta_{FV} \alpha^m \norm{T^{H-1} J_k - J^* + J^* - J^{\mu_{k+1}}}_\infty + \delta_{app} +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&\leq \delta_{FV} \alpha^m \norm{T^{H-1} J_k - J^*}_\infty + \delta_{FV} \alpha^m \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_{app} \\&+ \norm{J^{\mu_{k+1}}- J^*}_\infty+\delta_{FV} \eps_{PE}\\
&\leq \delta_{FV} \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \delta_{FV} \alpha^m \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_{app} \\&+ \norm{J^{\mu_{k+1}}- J^*}_\infty+\delta_{FV} \eps_{PE}\\
&= \delta_{FV} \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_{FV} \alpha^m \big) \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_{app}+\delta_{FV} \eps_{PE}.
\end{align*}
From Theorem \ref{mainAPI}, we have that
\begin{align*}
\limsup_{k\to\infty} \norm{J^{\mu_k}-J^*}_\infty \leq \frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}.
\end{align*}
Thus, for every $\eps'>0,$ there exists a $k(\eps')$ such that for all $k>k(\eps')$,
\begin{align*}
\norm{J^{\mu_k}-J^*}_\infty \leq \frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}+\eps'.
\end{align*}
Thus, for all $k>k(\eps')$, we have:
\begin{align*}
\norm{J_{k+1} - J^*}_\infty &\leq \delta_{FV} \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_{FV} \alpha^m \big) \Big[\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}+\eps' \Big] + \delta_{app}+\delta_{FV} \eps_{PE}.
\end{align*}
Iterating over $k$ gives us:
\begin{align*}
\limsup_{k\to\infty} \norm{J_{k} - J^*}_\infty &\leq \frac{ \big( 1+\delta_{FV} \alpha^m \big) \Big[\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}+\eps' \Big] + \delta_{app}+\delta_{FV} \eps_{PE}}{1-\delta_{FV} \alpha^{m+H-1}}.
\end{align*}
Since the above holds for all $\eps'$:
\begin{align*}
\limsup_{k\to\infty} \norm{J_{k} - J^*}_\infty &\leq \frac{ \big( 1+\delta_{FV} \alpha^m \big) \Big[\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)} \Big] + \delta_{app}+\delta_{FV} \eps_{PE}}{1-\delta_{FV} \alpha^{m+H-1}}.
\end{align*}
\end{proof}
\section{Counterexample} \label{appendix:counterexAppendix}
Although, in practice, $J^{\mu_k}$ is the quantity we are interested in, the values $J_k$ computed as part of our algorithm should not go to $\infty$, since the algorithm would otherwise be numerically unstable. In Appendix \ref{appendix:prop1}, we provide a bound on $\norm{J_k-J^*}_\infty$ when $m+H-1$ is sufficiently large, as in Theorem~\ref{mainAPI}. In this appendix, we show that, when this condition is not satisfied, $J_k$ can become unbounded.
The example we use is depicted in Figure \ref{fig:TsitVanRoyIm}.
\begin{figure}
\centering
\subfloat[\centering $\mu^a$]{\includegraphics[width=2.5cm]{Role of Lookahead and FA/Images/counter1.png} }%
\qquad
\subfloat[\centering $\mu^b$]{\includegraphics[width=2.5cm]{Role of Lookahead and FA/Images/counter2.png} }%
\caption{An example illustrating the necessity of the condition in Theorem~\ref{mainAPI}}%
\label{fig:TsitVanRoyIm}%
\end{figure}
There are two policies, $\mu^a$ and $\mu^b$, and the transitions under both policies are deterministic. The rewards are deterministic and depend only on the states. The rewards associated with the states are denoted by $r(x_1)$ and $r(x_2),$
with $r(x_1)>r(x_2)$. Thus, the optimal policy is $\mu^a$. We assume scalar features $\phi(x_1)=1$ and $\phi(x_2)=2.$
We fix $H=1$.
The lookahead policy is $\mu^a$ when:
\begin{align*}
&J_k(x_1) > J_k(x_2), \quad \text{i.e., when } \theta_k > 2\theta_k.
\end{align*} Thus, as long as $\theta_k>0,$ the lookahead policy will be $\mu^b.$
We will now show that $\theta_k$ increases at each iteration when $\delta_{FV} \alpha^{m+H-1}>1.$ We assume that $\theta_0>0$ and $\scriptD_k = \{1, 2\}$ for all $k.$ A straightforward computation shows that $\delta_{FV}=\frac{6}{5}.$
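(Indeed, with $\Phi = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ and $\scriptP_k = I$, we have $\Phi(\Phi^\top \Phi)^{-1}\Phi^\top = \frac{1}{5}\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$, whose maximum absolute row sum, i.e., infinity norm, is $\frac{6}{5}$.)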
At iteration $k+1,$ since $\mu_{k+1}=\mu^b$ (as $\theta_k > 0$), the estimates $\hat{J}^{\mu_{k+1}}(i)$ for $i = 1, 2$ are as follows:
\begin{align*}
\hat{J}^{\mu_{k+1}}(1) =r(x_1)+\sum_{i=1}^{m-1} r(x_1) \alpha^i + 2 \alpha^m \theta_k, \quad
\hat{J}^{\mu_{k+1}}(2) = r(x_2) +\sum_{i=1}^{m-1} r(x_2)\alpha^i + 2 \alpha^m \theta_k.
\end{align*}
Thus, from Step \ref{step 4 alg} of Algorithm \ref{alg:LSalg}:
\begin{align*}
&\theta_{k+1} = \arg \min_\theta \sum_{i =1}^2 \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2 \\
&\implies \theta_{k+1} = \frac{\sum_{i=0}^{m-1} \alpha^i r(x_1)}{5} + \frac{2 \sum_{i=0}^{m-1} \alpha^i r(x_2)}{5} + \frac{6 \alpha^m \theta_k}{5}\\
&\implies \theta_{k+1}> \frac{6}{5} \alpha^{m}\theta_k.
\end{align*}
Thus, since $\theta_0 > 0$ and $H=1$, when $ \frac{6}{5} \alpha^{m+H-1} =\delta_{FV} \alpha^{m+H-1} >1,$ $\theta_{k}$ goes to $\infty.$
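The divergence is easy to reproduce numerically. The following Python sketch uses the illustrative values $r(x_1)=1$, $r(x_2)=0$, $\alpha=0.95$, and $m=1$ (so that $\delta_{FV}\,\alpha^{m+H-1} = \frac{6}{5}(0.95) = 1.14 > 1$); these values are our own choices and are not part of the example above.
\begin{verbatim}
# A sketch of the two-state counterexample with illustrative values
# r(x1)=1, r(x2)=0, alpha=0.95, m=1, H=1, so that
# delta_FV * alpha^{m+H-1} = (6/5)*0.95 = 1.14 > 1.
alpha, m = 0.95, 1
r1, r2 = 1.0, 0.0

def theta_next(theta):
    # Least-squares fit of phi = (1, 2) to the m-step returns:
    # Jhat(i) = sum_{j=0}^{m-1} alpha^j r(x_i) + 2 alpha^m theta,
    # theta'  = (Jhat(1) + 2*Jhat(2)) / 5.
    geom = sum(alpha**j for j in range(m))
    jhat1 = geom * r1 + 2 * alpha**m * theta
    jhat2 = geom * r2 + 2 * alpha**m * theta
    return (jhat1 + 2 * jhat2) / 5.0

theta = 1.0                 # theta_0 > 0
for _ in range(100):
    theta = theta_next(theta)
print(theta)                # grows without bound: (6/5)*alpha^m > 1
\end{verbatim}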
\section{Numerical Results} \label{appendix:numerical}
In this appendix, we test our algorithms on the same grid world problem used in \cite{efroni2018} and \cite{efroni2019combine}.
For our simulations, we assume a deterministic grid world problem played on an $N \times N$ grid. The states are the squares of the grid and the actions are $\{\text{`up', `down', `right', `left', `stay'}\}$, which move the agent in the prescribed direction, if possible. In each experiment, a goal state is chosen uniformly at random to have a reward of 1, while each other state has a fixed reward drawn uniformly from $[-0.1, 0.1]$. Unless otherwise mentioned, for the duration of this section, $N=25$ and $\alpha = 0.9$.
In order to perform linear function approximation, we prescribe a feature vector for each state. In this section, we focus on three particular choices; a construction sketch follows the list:
\begin{enumerate}
\item Random feature vectors: each entry of the matrix $\Phi$ is an independent $\mathcal{N}(0, 1)$ random variable;
\item Designed feature vectors: the feature vector for a state with coordinates $(x, y)$ is $[x, y, d, 1]^T$, where $d$ is the number of steps required to reach the goal from state $(x, y)$;
\item Indicator vectors: the feature vector for each state $i$ is an $N^2$-dimensional indicator vector in which only the $i$-th entry is nonzero.
\end{enumerate}
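The following Python sketch shows one way to build these three feature matrices. The feature dimension of $4$ for the random features is our illustrative choice (made to match the designed features), and $d$ is computed as the Manhattan distance, which equals the number of steps to the goal under the deterministic moves.
\begin{verbatim}
import numpy as np

# Sketch of the three feature constructions for an N x N grid world.
# States are indexed i = x*N + y; the goal is at (gx, gy).
def feature_matrices(N, gx, gy, seed=0):
    rng = np.random.default_rng(seed)
    S = N * N
    phi_random = rng.standard_normal((S, 4))   # i.i.d. N(0,1) entries
    phi_designed = np.zeros((S, 4))
    for x in range(N):
        for y in range(N):
            d = abs(x - gx) + abs(y - gy)      # steps to the goal
            phi_designed[x * N + y] = [x, y, d, 1.0]
    phi_indicator = np.eye(S)                  # tabular setting
    return phi_random, phi_designed, phi_indicator
\end{verbatim}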
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Role of Lookahead and FA/Images/h-m-error-plots.png}
\caption{(Top) For random feature vectors, as $m$ and $H$ increase, the value $J_k$ eventually stops diverging. (Bottom) For designed feature vectors, a smaller amount of lookahead and $m$-step return are needed to prevent $J_k$ from diverging. }
\label{fig:hm_plots}
\end{figure}
Recall that our theorems suggest that the amount of lookahead and return depends on the choice of the feature vectors. Our experiments support this observation as well. The amount of lookahead and $m$-step return required is high (often over $30$) for random feature vectors, but we are able to significantly reduce the amount required by using the designed feature vectors which better represent the states.
We test Algorithm \ref{alg:LSalg} in each of our experiments, initializing with $\theta_0 = 0$ (so that $J_0 = \Phi \theta_0 = 0$). All plots in this section show an average over 20 trials, where each trial has a fixed random choice of $\mathcal{D}_k$, the set of states used for policy evaluation. Error bars show the standard deviation of the mean. The code used to produce these graphs is included in the Supplementary Material.
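For reference, a single iteration of this experimental loop might be sketched as follows, assuming the grid-world arrays \texttt{P} (transition probabilities) and \texttt{R} (rewards), the feature matrix \texttt{Phi}, and the sampled state set \texttt{D} have been constructed, and assuming helper routines \texttt{lookahead\_policy} and \texttt{rollout} implementing the operators of Section \ref{section2} (a sketch of these helpers is given there).
\begin{verbatim}
import numpy as np

# One iteration of Algorithm 1 on the grid world (a sketch).
def api_iteration(J, P, R, Phi, D, alpha=0.9, m=3, H=3):
    mu, J_H = lookahead_policy(J, P, R, alpha, H)  # Step 2: policy, T^{H-1} J
    returns = rollout(J_H, mu, P, R, alpha, m)     # Step 3: m-step rollout
    theta, *_ = np.linalg.lstsq(Phi[D], returns[D], rcond=None)  # Step 4
    return Phi @ theta, mu                         # Step 5: J_{k+1}
\end{verbatim}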
\subsection{The effect of $m$ and $H$ on convergence}
In Figure \ref{fig:hm_plots}, we show how $H$ and $m$ affect convergence of the iterates $J_k$ to $J^*$. When $m$ and $H$ are small, the value of $J_k$ sometimes diverges. If the value diverges for even one trial, then the average over trials of $\|J_k - J^*\|_\infty$ also increases exponentially with $k$. However, if the value converges for all trials, then the plot is relatively flat. The $m$ or $H$ required for convergence depends on the parameter $\delta_{FV}$ defined in Theorem~\ref{mainAPI}. Over 20 trials, the average values of $\delta_{FV}$ for our three choices of feature vectors are $30.22$, $16.29$, and $1.0$, respectively. As shown through the counterexample in Appendix \ref{appendix:counterexAppendix}, in general one needs $m + H - 1 > \log(\delta_{FV}) / \log(1/\alpha)$ for convergence. However, in specific examples, it is possible for convergence to occur for smaller values of $m+H.$ For example, in our grid world model, $\frac{\log(16.29)}{\log(1/0.9)} \approx 26.5$, but we will observe that such a large value of $m + H$ is not required for convergence.
From Figure \ref{fig:hm_plots} alone, it is difficult to see how $H$ and $m$ affect the probability of divergence as a function of the representative states chosen to be sampled. Therefore, we introduce Figure \ref{fig:probability_of_convergence}. These plots show the proportion of trials in which the distance $\|J_{k} - J^*\|_\infty$ exceeded $10^5$ after 30 iterations of our algorithm. As expected, the algorithm never diverges for indicator vectors, since our algorithm is then equivalent to the tabular setting. The designed feature vectors clearly require a much smaller amount of lookahead or $m$-step return, well below the amount predicted by the average $\delta_{FV}$ of $16.29$. However, no matter the choice of feature vectors, a large enough value of $H + m$ eventually prevents the algorithm from diverging.
\begin{figure}
\centering
\subfloat[\centering Varying $H$]{\includegraphics[width=0.45\linewidth]{Images/prob_divergence_hs} }%
\qquad
\subfloat[\centering Varying $m$]{\includegraphics[width=0.45\linewidth]{Images/prob_divergence_ms} }%
\caption{We plot the probability that $\|J_k - J^*\|_\infty$ diverges as a function of $H$ and $m$. For the first plot, $m=3$ and for the second plot, $H=3$. In both cases, the algorithm never diverges once $H+m$ is large enough, though a smaller amount of lookahead or $m$-step return are needed for the designed feature vectors.}%
\label{fig:probability_of_convergence}%
\end{figure}
\begin{figure}
\centering
\subfloat[\centering Varying $H$]{\includegraphics[width=0.45\linewidth]{Images/final_policy_hs} }%
\qquad
\subfloat[\centering Varying $m$]{\includegraphics[width=0.45\linewidth]{Images/final_policy_ms} }%
\caption{We plot the final value of $\|J^{\mu_k} - J^*\|_\infty$ after 30 iterations. For the first plot, $m=3$ and for the second plot, $H=3$. As $H$ increases, the final policy improves. With large enough $H$, we obtain the optimal policy. However, past a certain point, increasing $m$ is not helpful for finding a better policy.}%
\label{fig:final_policy_value}%
\end{figure}
\subsection{Convergence to the optimal policy}
In Theorem~\ref{mainAPI}, we show that as $H$ increases, we converge to a policy $\mu_k$ that is closer to the optimal policy. In this section, we experimentally investigate the role of $m$ and $H$ on the final value of $\|J^{\mu_k} - J^*\|_\infty$. The results can be found in Figure \ref{fig:final_policy_value}. As predicted by theory, we do get closer to the optimal policy as $H$ increases. However, increasing $m$ does not help past a certain point, which is also consistent with the theory. Indeed, although $\mu_k$ is approaching the optimal policy $\mu^*$ as $H$ increases, the iterates $J_k$ are not converging to $J^*$ due to error induced by function approximation. Increasing $m$ improves the policy evaluation, but cannot correct for this inherent error from approximating the value function.
\end{APPENDICES}
\ACKNOWLEDGMENT{The research presented here was supported in part by a grant from Sandia National Labs and the NSF Grants CCF 1934986, CCF 2207547, CNS 2106801, ONR Grant N00014-19-1-2566, and ARO Grant W911NF-19-1-0379. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology \& Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
}
\section{Introduction}\label{intro}
In many applications of reinforcement learning, such as playing chess and Go, the underlying model is known and so the main challenge is in solving the associated dynamic programming problem in an efficient manner. Policy iteration and variants of policy iteration \cite{bertsekas2019reinforcement,Bertsekas2011ApproximatePI,bertsekastsitsiklis} that solve dynamic programming problems rely on computations that are infeasible due to the sizes of the state and action spaces in modern reinforcement learning problems.
As a remedy to this ``curse of dimensionality,'' several state-of-the-art algorithms \cite{silver2017shoji, silver2017mastering, DBLP:journals/corr/MnihBMGLHSK16} employ function approximation, lookahead for policy improvement, $m$-step rollout for policy evaluation, and gradient descent to compute the function approximation; see Section \ref{section2} for definitions of these terms.
Our goal in this paper is to understand the role of multi-step lookahead for policy improvement (i.e., repeatedly applying the Bellman operator multiple times) and $m$-step rollout (a technique to approximately evaluate a policy by rolling out the dynamic programming tree for a certain number of steps $m$) on the accuracy of approximate policy iteration techniques. The algorithms we study in this paper are closely related to least-squares policy iteration (LSPI) \cite{parr} and approximate policy iteration (PI); see \cite{bertsekastsitsiklis, bertsekas2019reinforcement}. In the analysis of approximate PI, it is assumed that the policy evaluation and improvement steps have bounded errors, and using these, an error bound is obtained for the algorithm which repeatedly uses approximate policy evaluation and improvement. LSPI is an algorithm that builds on approximate PI, where the policy evaluation step uses a least-squares algorithm to estimate the value function for the entire state space using the value function evaluated at a few states. However, the bounds presented in \cite{parr} are simply a special case of the bounds for generic approximate PI, and do not explicitly take into account the details of the implementation of least-squares-based policy evaluation. When such details are taken into account, it turns out that the roles of the depth of lookahead ($H$) and rollout ($m$) become important, and their impact on the error bounds for approximate policy iteration has not been characterized in prior work.

In this paper, on the other hand, we assume that policies are evaluated at a few states using an $m$-step rollout, and as a result, convergence of the algorithm is not guaranteed in general. Additionally, we show that the effect of function approximation can be mitigated using lookahead in the policy improvement step. The use of a partial rollout in our algorithm also makes our work similar to modified policy iteration \cite{Puterman1978ModifiedPI}, which is also called optimistic policy iteration \cite{bertsekastsitsiklis}. To the best of our knowledge, none of these prior works consider the impact of using gradient descent to implement an approximate version of least-squares policy evaluation within approximate PI. Thus, our algorithm and analysis can be viewed as a detailed look at approximate PI and modified PI when linear function approximation, least-squares policy evaluation, and gradient descent are used to evaluate policies.
Our contributions are as follows:
\begin{itemize}
\item We examine the impact of lookahead and $m$-step rollout on approximate policy iteration with linear function approximation. As is common in practice, we assume that we evaluate an approximate value function only for some states at each iteration. We obtain performance bounds for our algorithm under the assumption that the sum of the lookahead and the number of steps in the $m$-step rollout is sufficiently large. We demonstrate through an extension of a counterexample in \cite{Tsitsiklis94feature-basedmethods} that such a condition is necessary, in general, for convergence with function approximation, unlike in the tabular setting of \cite{efroni2019combine}. See Appendix \ref{appendix:counterexAppendix} for our counterexample.
\item For ease of exposition, we first present the case where one solves a least-squares problem at each iteration to obtain the weights associated with the feature vectors in the function approximation of the value function. Our performance bounds in this case strictly generalize the bounds in \cite{parr} and \cite{bertsekas2019reinforcement} for approximate PI.
\item We then consider a more practical and widely-used scheme where several steps of gradient descent are used to update the weights of the value function approximation at each iteration. Obtaining performance bounds for the gradient descent algorithm is more challenging and these bounds can be found in Section \ref{SectionGD}.
\item Our results show that the sufficient conditions on the hyperparameters (such as the amount of lookahead, rollout, and gradient descent parameters) of the algorithm required for convergence either do not depend on the size of the state space or depend only logarithmically on the size of the state space.
\item From a theoretical perspective, our analysis shows that one can improve the upper bound on the error in approximate policy iteration from $1/(1-\alpha)^2$ (see \cite{parr}, \cite{bertsekas2019reinforcement}) to $1/(1-\alpha^{H})(1-\alpha)$ by using lookahead, where $\alpha$ is the discount factor and $H$ is the amount of lookahead used.
\item In addition to asymptotic performance bounds, we also provide finite-time guarantees for our algorithm. Our finite-time bounds show that our algorithm converges exponentially fast in the case of least-squares as well as the case where a fixed number of gradient descent steps are performed in each iteration of the algorithm.
\end{itemize}
\subsection{Other Related Work}
The recent work in \cite{efroni2019combine} considers a variant of policy iteration that utilizes lookahead and approximate policy evaluation using an $m$-step rollout (see Section \ref{section2} for definitions of these terms). As stated in the motivation in \cite{efroni2019combine}, it is well known that Monte Carlo Tree Search (MCTS) \cite{kocisszepesvari, browne} works well in practice
even though the worst-case complexity can be exponential \cite{shah2020nonasymptotic}; see \cite{munosbook} for some analysis of MCTS in MDPs where the number of states that can be visited from a given state is bounded. Motivated by policy iteration, the algorithm in \cite{efroni2019combine} estimates the value function associated with a policy and aims to improve the policy at each step. Policy improvement is achieved by obtaining the ``greedy'' policy in the case of policy iteration or a lookahead policy in the work of \cite{efroni2019combine}, which involves applying the Bellman operator several times to the current iterate before obtaining the greedy policy. The idea is that the application of the Bellman operator several times gives a more accurate estimate of the optimal value function. Then, similarly to policy iteration, the algorithm in \cite{efroni2019combine} aims to evaluate the new policy. The algorithm in \cite{efroni2019combine} uses an $m$-step rollout to compute the value function associated with a policy, i.e., it applies the Bellman operator associated with the policy $m$ times.
The work of \cite{efroni2019combine} establishes that a lookahead can significantly improve the rate of convergence if one uses the value function computed using lookahead in the approximate policy evaluation step. However, their paper does not study the use of function approximation which is critical to handling large state spaces, nor does it quantify the effects of varying $m$ in the convergence of their algorithm.
Now, we compare our work to other papers in the literature. The role of lookahead and rollout in improving the performance of RL algorithms has also been studied in a large number of papers including \cite{shahxie, moerland2020framework, efroni2020online, tomar2020multistep, efroni2018multiplestep, springenberg2020local, 9407870}. The works of \cite{baxter, veness, lanctot2014monte} explore the role of tree search in RL algorithms. However, to the best of our knowledge, the amount of lookahead and rollout needed as a function of the feature vectors has not been quantified in prior works.
The works of \cite{Bertsekas2011ApproximatePI} and \cite{bertsekas2019reinforcement} also study a variant of policy iteration wherein a greedy policy is evaluated approximately using feature vectors at each iteration. These papers also provide rates of convergence as well as a bound on the approximation error. However, our main goal is to understand the relations between function approximation and lookahead/rollout which are not considered in these other works.
\section{Preliminaries} \label{section2}
We consider a Markov Decision Process (MDP), which is defined to be a 5-tuple $(\scriptS, \scriptA, P, r, \alpha)$. The finite set of states of the MDP is $\scriptS$ and the finite set of actions is $\scriptA$. Let $P_{ij}(a)$ be the probability of transitioning from state $i$ to state $j$ when taking action $a \in \scriptA$. We denote by $s_k$ the state of the MDP and by $a_k$ the corresponding action at time $k$. We associate with state $s_k$ and action $a_k$ a possibly random reward $r(s_k, a_k) \in [0, 1]$ for all $s_k \in \scriptS$, $a_k \in \scriptA$; in particular, the rewards are uniformly bounded. Our objective is to maximize the expected cumulative discounted reward with discount factor $\alpha \in (0, 1).$
Towards this end, we seek to find a deterministic policy $\mu$ which associates with each state $s\in \scriptS$ an action $\mu(s) \in \scriptA$. For every policy $\mu$ and every state $s \in \scriptS$ we define $J^{\mu}(s)$ as follows:
\begin{align*}
J^{\mu}(s) := E\Big[\sum_{k=0}^\infty \alpha^k r(s_k, \mu(s_k))\,\Big|\,s_0=s\Big].
\end{align*}
We define the optimal reward-to-go $J^*$ as
$J^*(s) := \underset{\mu}\max J^\mu(s).$ The objective is to find a policy $\mu$ that maximizes $J^\mu(s)$ for all $s \in \scriptS$. Towards the objective, we associate with each policy $\mu$ a function $T_\mu: \mathbb{R}^{|\scriptS|} \to \mathbb{R}^{|\scriptS|}$ where for $J \in \mathbb{R}^{|\scriptS|},$ the $s$th component of $T_{\mu}J$ is
\begin{align*}
(T_\mu J)(s) = r(s, \mu(s)) + \alpha \sum_{j=1}^{|\scriptS|} P_{sj}(\mu(s)) J(j),
\end{align*} for all $s \in \scriptS$. If the function $T_{\mu}$ is applied $m$ times to a vector $J \in \mathbb{R}^{|\scriptS|},$ then we say that we have performed an $m$-step rollout of the policy $\mu$, and the result $T^m_\mu J$ of the rollout is called the return.
Similarly, we define the Bellman operator $T: \mathbb{R}^{|\scriptS|} \to \mathbb{R}^{|\scriptS|}$ with the $s$th component of $TJ$ being
\begin{align}
(TJ)(s) = \underset{a \in \scriptA}\max \Bigg \{ r(s, a) + \alpha \sum_{j=1}^{|\scriptS|} P_{sj}(a)J(j) \Bigg \}. \label{T}
\end{align}
The policy corresponding to the $T$ operator is defined as the \textit{greedy} policy. If the operator $T$ is applied $H$ times to a vector $J \in \mathbb{R}^{|\scriptS|},$ we call the result, $T^H J$, the $H$-step ``lookahead'' corresponding to $J$. The greedy policy corresponding to $T^H J$ is called the $H$-step lookahead policy, or simply the lookahead policy when $H$ is understood. More precisely, given an estimate $J$ of the value function, the lookahead policy is the policy $\mu$ such that $T_\mu(T^{H-1} J)=T(T^{H-1} J).$
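To make these definitions concrete, a minimal Python sketch of the operators for a finite MDP is given below (an illustration only, assuming $P$ is stored as an $|\scriptA| \times |\scriptS| \times |\scriptS|$ array of transition probabilities and $R$ as an $|\scriptS| \times |\scriptA|$ array of expected rewards).
\begin{verbatim}
import numpy as np

# P[a, s, j]: probability of moving from s to j under action a.
# R[s, a]: expected reward; alpha: discount factor.
def T_mu(J, mu, P, R, alpha):
    # (T_mu J)(s) = r(s, mu(s)) + alpha * sum_j P_sj(mu(s)) J(j)
    S = len(J)
    return np.array([R[s, mu[s]] + alpha * P[mu[s], s] @ J
                     for s in range(S)])

def T(J, P, R, alpha):
    # (T J)(s) = max_a { r(s, a) + alpha * sum_j P_sj(a) J(j) }
    Q = R + alpha * np.einsum('asj,j->sa', P, J)
    return Q.max(axis=1)

def lookahead_policy(J, P, R, alpha, H):
    # H-step lookahead policy (greedy w.r.t. T^{H-1} J),
    # returned together with T^{H-1} J itself.
    for _ in range(H - 1):
        J = T(J, P, R, alpha)
    Q = R + alpha * np.einsum('asj,j->sa', P, J)
    return Q.argmax(axis=1), J

def rollout(J, mu, P, R, alpha, m):
    # m-step rollout: the return T_mu^m J.
    for _ in range(m):
        J = T_mu(J, mu, P, R, alpha)
    return J
\end{verbatim}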
It is well known that each time the Bellman operator is applied to a vector $J$ to obtain $TJ,$ the following holds:
\begin{align*}
\norm{TJ-J^*}_\infty\leq \alpha\norm{J-J^*}_\infty.
\end{align*} Thus, applying $T$ to obtain $TJ$ gives a better estimate of the value function than $J.$
The Bellman equations state that the vector $J^\mu$ is the unique solution to the linear equation
\begin{align}
J^\mu = T_\mu J^\mu. \label{bellman}
\end{align}
Additionally, we have that $J^*$ is a solution to
\begin{align*}
J^* = TJ^*.
\end{align*}
Note that every greedy policy w.r.t. $J^*$ is optimal and vice versa \cite{bertsekastsitsiklis}.
We will now state several useful properties of the operators $T$ and $T_\mu$; see \cite{bertsekastsitsiklis} for more on these properties. Consider the vector $e \in \mathbb{R}^{|\scriptS|}$ with $e(i) = 1$ for all $i \in \{1, 2, \ldots, |\scriptS|\}.$ For any scalar $c$, we have:
\begin{equation}
T(J + ce) = TJ + \alpha ce, \quad T_\mu(J + ce) = T_\mu J + \alpha ce. \label{eq:usefulproperties}
\end{equation}
Operators $T$ and $T_\mu$ are also monotone:
\begin{align}
J \leq J' \implies TJ \leq TJ', \quad T_\mu J \leq T_\mu J'. \label{monotonicityproperty}
\end{align}
\section{Least-Squares Function Approximation Algorithm}
\begin{algorithm}[tb]
\caption{Least-Squares Function Approximation Algorithm}
\label{alg:LSalg}
\textbf{Input}: $J_0, m,$ $H,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d$ and subsets $\scriptD_k \subseteq \scriptS, k = 0, 1, \ldots.$ Here $\scriptD_k$ is the set of states at which we evaluate the current policy at iteration $k.$
\begin{algorithmic}[1]
\STATE Let $k=0$.
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps_{LA}$.\\\label{step 2 alg}
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\ \label{step 3 alg}
\STATE Choose $\theta_{k+1}$ to solve
\begin{align}
\min_\theta \sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2, \label{step 4 alg}
\end{align} \\
where $\Phi$ is a matrix whose rows are the feature vectors.
\STATE $J_{k+1} = \Phi \theta_{k+1}.$
\STATE Set $k \leftarrow k+1.$ Go to 2.
\end{algorithmic}
\end{algorithm}
Our algorithm is described in Algorithm \ref{alg:LSalg}. We now explain our algorithm and the associated notation in detail. Due to the use of function approximation, our algorithm is an approximation to policy iteration with lookahead. At each iteration index, say $k$, we have an estimate of the value function, which we denote by $J_k$. To obtain $J_{k+1}$, we perform a lookahead to improve the value function estimate at a certain number of states (denoted by $\scriptD_k$), which can vary with each iteration. For example, $\scriptD_k$ could be chosen as the states visited when performing a tree search to approximate the lookahead process. During the lookahead process, we note that we will also obtain an $H$-step lookahead policy, which we denote by $\mu_{k+1}$. As noted in the Introduction, the computation of $T^{H-1}(J_k)(i)$ for $i \in \scriptD_k$ in Step \ref{step 3 alg} of Algorithm \ref{alg:LSalg} may be computationally infeasible; however, as noted in \cite{efroni2019combine}, techniques such as Monte Carlo tree search (MCTS) are employed in practice to approximately estimate $T^{H-1}(J_k)(i).$ In this paper, we model the fact that lookahead cannot be performed exactly due to the associated computational complexity by allowing an error in the lookahead process, which we denote by $\eps_{LA}$ in Step~\ref{step 2 alg} of the algorithm.
We obtain estimates of $J^{\mu_{k+1}}(i)$ for $i \in \scriptD_k$, which we call $\hat{J}^{\mu_{k+1}}(i)$. To obtain an estimate of $J^{\mu_{k+1}}(i)$, we perform an $m$-step rollout with policy $\mu_{k+1}$ and obtain a noisy version of $T^m_{\mu_{k+1}}T^{H-1}J_k(i)$ for $i \in \scriptD_k.$ We model the approximation error in the rollout by adding noise (denoted by $w_{k+1}(i)$ in Step~\ref{step 3 alg} of the algorithm) to the return (the result of the rollout; see Section \ref{section2}) computed at the end of this step. In order to estimate the value function for states not in $\scriptD_k$, we associate with each state $i \in \scriptS$ a feature vector $\phi(i)\in \mathbb{R}^d$ where typically $d \ll |\scriptS|$. The matrix comprised of the feature vectors as rows is denoted by $\Phi$. We use those estimates to find the best fitting $\theta \in \mathbb{R}^d$, i.e.,
\begin{align*}
\min_\theta \sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2.
\end{align*}
The solution to the above minimization problem is denoted by $\theta_{k+1}$. The algorithm then uses $\theta_{k+1}$ to obtain $J_{k+1} = \Phi \theta_{k+1}$. The process then repeats. Note that to compute $\hat{J}^{\mu_{k+1}}(i),$ we obtain noisy estimates of $T_{\mu_{k+1}}^m T^{H-1} J_k (i)$ for $i\in \scriptD_k.$ An alternative is to instead obtain noisy estimates of $T_{\mu_{k+1}}^m J_k (i)$ for $i\in \scriptD_k.$ It was shown in \cite{efroni2019combine} that the former option is preferable because it has a certain contraction property; thus, we have chosen to use this computation in our algorithm as well. However, we have shown in the longer version of this paper that the algorithm based on the latter computation also has a bounded error, which becomes small if $m$ is chosen to be sufficiently large \cite{annaor}.
\begin{remark}
We note that $\mu_{k+1}(i)$ in Step \ref{step 2 alg} of Algorithm \ref{alg:LSalg} does not have to be computed for all states $i\in \scriptS.$ The actions $\mu_{k+1}(i)$ have to be computed only for those $i\in\scriptS$ that are encountered in the rollout step of the algorithm (Step \ref{step 3 alg}).
\end{remark}
To analyze Algorithm \ref{alg:LSalg}, we make the following assumption which states that we explore a sufficient number of states during the policy evaluation phase at each iteration.
\begin{assumption}\label{assume 1 or}
For each $k \geq 0$, $\operatorname{rank} \{ \phi(i)\}_{i \in \scriptD_k} = d$.
\end{assumption}
We assume that the noise $w_k$ is bounded.
\begin{assumption}
For some $\eps_{PE} >0,$ the noise in policy evaluation satisfies $\norm{w_k}_\infty \leq \eps_{PE}$ for all $k$. \label{assume 2 or or}
\end{assumption}
We also assume that the rewards are bounded.
\begin{assumption}\label{assume 3 or}
$r(s,u) \in[0,1]$ for all $s \in \scriptS$, $u\in\scriptA.$
\end{assumption}
Using Assumption~\ref{assume 1 or}, $J_{k+1}$ can be written as
\begin{align}
J_{k+1} &= \Phi \theta_{k+1} =\underbrace{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_{=: \scriptM_{k+1}} \hat{J}^{\mu_{k+1}},\label{defMk}\end{align}
where $\Phi_{\scriptD_{k}}$ is a matrix whose rows are the feature vectors of the states in $\scriptD_{k}$ and $\scriptP_k$ is a matrix of zeros and ones such that $\scriptP_k\hat{J}^{\mu_{k+1}}$ is a vector whose elements are a subset of the elements of $\hat{J}^{\mu_{k+1}}$ corresponding to $\scriptD_k$. Note that $\hat{J}^{\mu_{k+1}}(i)$ for $i\notin\scriptD_k$ does not affect the algorithm, so we can define $\hat{J}^{\mu_{k+1}}(i)=T_{\mu_{k+1}}^m T^{H-1} J_k(i)$ for $i\notin\scriptD_k.$
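For illustration, the operator $\scriptM_{k+1}$ and the constant $\delta_{FV}$ appearing in Theorem \ref{mainAPI} below can be computed directly from \eqref{defMk}. The following Python sketch assumes Assumption \ref{assume 1 or} holds, so that the Gram matrix $\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}}$ is invertible.
\begin{verbatim}
import numpy as np

# Sketch of the operator M_{k+1} and of delta_FV. Phi is the |S| x d
# feature matrix; D is the list of sampled states (its feature rows
# must span R^d, per Assumption 1).
def M_operator(Phi, D):
    S = Phi.shape[0]
    Phi_D = Phi[D]
    P = np.zeros((len(D), S))
    P[np.arange(len(D)), D] = 1.0     # picks out the entries in D
    return Phi @ np.linalg.solve(Phi_D.T @ Phi_D, Phi_D.T @ P)

def delta_FV_single(Phi, D):
    # Infinity norm (max absolute row sum) of M for this one D;
    # delta_FV in the theorem is the supremum over all iterations.
    return np.abs(M_operator(Phi, D)).sum(axis=1).max()
\end{verbatim}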
Written concisely, our algorithm is as follows:
\begin{equation}
J_{k+1} = \scriptM_{k+1} (T_{\mu_{k+1}}^m T^{H-1} J_k+w_{k+1}), \label{eq:iterateAPI}
\end{equation}
where $\mu_{k+1}$ is defined in Step \ref{step 2 alg} of the algorithm.
Since $w_{k+1}(i)$ for $i\notin\scriptD_k$ does not affect the algorithm, we define $w_{k+1}(i)=0$ for $i\notin\scriptD_k.$
Now we will state our theorem which characterizes the role of lookahead ($H$) and return ($m$)
on the convergence of approximate policy iteration with function approximation.
\begin{theorem}\label{mainAPI}
Suppose that $m$ and $H$ satisfy $m + H -1>\log (\delta_{FV})/\log (1/\alpha),$ where
$$\delta_{FV} := \sup_k \norm{\scriptM_k}_\infty = \sup_k\norm{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_\infty.$$
Then, under Assumptions \ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \underbrace{\frac{\alpha^{kH} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}k\max(\alpha^{H},\beta)^{k-1}}_{ \text{ finite-time component }} +\underbrace{\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}}_{ \text{ asymptotic component }}.
\label{eq:mainAPI bound}
\end{align}
where $$\tau:= \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE},$$
$$\beta:=\alpha^{m+H-1} \delta_{FV},$$
and
$$
\delta_{app} := \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty.
$$
\end{theorem}
The proof of Theorem \ref{mainAPI} can be found in Appendix \ref{appendix:thm1}. We now make several comments about the implications of Theorem \ref{mainAPI}:
\begin{enumerate}
\item In conjunction with the counterexample in Appendix \ref{appendix:counterexAppendix}, Theorem \ref{mainAPI} shows that while $\norm{J^{\mu_k}-J^*}_\infty$ depends on the function approximation error ($\delta_{app}$) and the feature vectors ($\delta_{FV}$), the effect of these terms diminishes exponentially with increased $H$, with the exception of the tree search error ($\eps_{LA}$).
\item It is useful to compare our asymptotic bound with the asymptotic bound for approximate PI in \cite{bertsekas2019reinforcement}. There, it is assumed that $m=\infty$, $\eps_{PE}=0$, and $H=1,$ in which case our bound is identical to the one in \cite{bertsekas2019reinforcement}. However, when $H>1,$ our asymptotic error is proportional to $1/\big((1-\alpha)(1-\alpha^{H})\big)$, which is much better than the $1/(1-\alpha)^2$ bound for approximate policy iteration \cite{bertsekas2019reinforcement}; see the numerical illustration following this list. When $\alpha\rightarrow 1,$ the expected discounted reward is of the order of $1/(1-\alpha)$, and thus the $1/(1-\alpha)^2$ bound on the error is generally considered to be very loose. Our result shows that the use of lookahead significantly improves this error bound. Additionally, our bound is able to capture situations where a full rollout (i.e., $m=\infty$) is impossible to perform.
\end{enumerate}
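As a numerical illustration of the second remark above (with illustrative parameter values), take $\alpha = 0.9$ and $H = 5$: the classical coefficient is $1/(1-\alpha)^2 = 100$, whereas $1/\big((1-\alpha)(1-\alpha^{H})\big) = 1/(0.1 \times 0.40951) \approx 24.4$, so even a modest lookahead shrinks the error amplification by roughly a factor of four. The condition on $m$ and $H$ can be read similarly: in the two-state example of Appendix \ref{appendix:counterexAppendix}, $\delta_{FV} = \frac{6}{5}$, so with $\alpha = 0.9$ the theorem requires $m+H-1 > \log(1.2)/\log(1/0.9) \approx 1.73$, i.e., $m+H \geq 3$.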
The proof of Theorem \ref{mainAPI} is closely related to the proofs of Theorems \ref{theorem 2 or}-\ref{theorem 3 or}. The proofs of Theorems~\ref{theorem 2 or}-\ref{theorem 3 or} are presented in the next section while we defer the proof of Theorem \ref{mainAPI} to Appendix \ref{appendix:thm1}. We note that the above result is fundamentally different from the conclusion in Theorem 4 in \cite{efroni2019combine} where one requires a condition on $m$ and $H$ for convergence when one uses $J_k$ instead of using $T^{H-1}J_k$ in Step 2 of the algorithm. Here, we have shown that even when one uses $T^{H-1}J_k,$ one may need large $m+H$ for convergence due to the use of function approximation.
We can additionally characterize the approximation error of our iterates, $J_k$, by computing bounds on the asymptotic error $\limsup_{k \to \infty} \norm{J_k - J^*}_\infty.$ The bounds along with their derivations can be found in the longer version of this paper, \cite{annaor}.
It is important to note that the upper bounds on $\norm{J^{\mu_k}-J^*}_\infty$ and $\norm{J_k - J^*}_\infty$ illustrate that $J^{\mu_k}$ approximates $J^*$ much better than $J_k$ does. Thus, algorithms need not wait for the value function estimates to converge before the corresponding policies reach near optimality.
In \cite{bertsekas2021lessons}, it is noted that, in reinforcement learning to play computer games or board games, it is not uncommon during training to get a relatively crude estimate of the value function, which is improved by lookahead and $m$-step return during actual game play. Our analysis would also apply to this situation -- we have not explicitly differentiated between training and game play in our analysis.
Theorem \ref{mainAPI} can be used to make the following observation: how close $J^{\mu_k}$ is to $J^*$ depends on four factors: the representation power of the feature vectors and the feature vectors themselves ($\delta_{app}, \delta_{FV}$), the amount of lookahead ($H$), the extent of the rollout ($m$), and the approximation in the policy determination and policy evaluation steps ($\eps_{LA}$ and $\eps_{PE}$). Further, it is easy to see that lookahead and rollout help mitigate the effect of feature vectors and their ability to represent the value functions.
\section{Gradient Descent Algorithm} \label{SectionGD}
Solving the least-squares problem in Algorithm~\ref{alg:LSalg} involves a matrix inversion, which can be computationally difficult. We therefore propose an alternative algorithm which performs $\eta_k$ steps of gradient descent with stepsize $\gamma$ at each iteration $k$, where the gradient refers to the gradient of the least-squares objective in (\ref{step 4 alg}).
The gradient descent-based algorithm is presented in Algorithm~\ref{alg:GDalg}.
\begin{algorithm}[tb]
\caption{Gradient Descent Algorithm}
\label{alg:GDalg}
\textbf{Input}: $\theta_0, m,$ $H,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d,$ and $\scriptD_k,$ which is the set of states for which we evaluate the current policy at iteration $k.$
\begin{algorithmic}[1]
\STATE $k=0, J_0 = \Phi \theta_0$. \\
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps_{LA}$. \\
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} J_k(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\
\STATE \label{step 4 unbiased} $\theta_{k+1, 0} := \theta_k.$ For $\ell = 1, 2, \ldots, \eta_{k+1},$ iteratively compute the following:
\begin{align}
\theta_{k+1, \ell} &= \theta_{k+1,\ell-1} - \gamma \nabla_\theta c(\theta;\hat{J}^{\mu_{k+1}})|_{\theta_{k+1,\ell-1}}, \label{eq:iterthetaGD}
\end{align} where
\begin{align*}
c(\theta;\hat{J}^{\mu_{k+1}}) := \frac{1}{2}\sum_{i \in \scriptD_k } \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2,
\end{align*} \\
and $\Phi$ is a matrix whose rows are the feature vectors.\\
\STATE Define
\begin{align*}
\theta_{k+1} &= \theta_{k+1,\eta_{k+1}},
\end{align*} and set $$J_{k+1} = \Phi \theta_{k+1}.$$
\STATE Set $k \leftarrow k+1.$ Go to 2.
\end{algorithmic}
\end{algorithm}
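A minimal Python sketch of the inner loop in Step \ref{step 4 unbiased}, together with the contraction factor $\alpha_{GD,\gamma}$ that appears in Proposition \ref{prop1} below, is given here; the helper \texttt{alpha\_GD\_single} computes the factor for a single sampled set $\scriptD_k$ (the proposition takes a supremum over $k$).
\begin{verbatim}
import numpy as np

# Sketch of Step 4 of Algorithm 2: eta steps of gradient descent
# on c(theta) = 0.5 * || Phi_D theta - Jhat_D ||_2^2.
def gd_step4(theta, Phi_D, Jhat_D, gamma, eta):
    for _ in range(eta):
        grad = Phi_D.T @ (Phi_D @ theta - Jhat_D)
        theta = theta - gamma * grad
    return theta

def alpha_GD_single(Phi_D, gamma):
    # Per-step contraction factor
    # max_i |1 - gamma * lambda_i(Phi_D^T Phi_D)| for this one D.
    lam = np.linalg.eigvalsh(Phi_D.T @ Phi_D)
    return np.abs(1.0 - gamma * lam).max()
\end{verbatim}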
In order to present our main result for the gradient descent version of our algorithm, we define $\tilde{\theta}^{\mu_k}$ for any policy $\mu_k$, which will be used in the proofs below:
\begin{align}
\tilde{\theta}^{\mu_k} \nonumber&:= \arg\min_\theta \frac{1}{2}\norm{\Phi_{\scriptD_{k-1}} \theta - \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1} J_{k-1}+w_k)}_2^2.
\end{align}
Note that
\begin{align}
\Phi\tilde{\theta}^{\mu_k}= \scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k), \label{eq: theta tilde muk or}
\end{align}
where $\scriptM_k$ is defined in \eqref{defMk}.
Thus, $\tilde{\theta}^{\mu_k}$ represents the function approximation of the estimate of $J^{\mu_k}$ obtained from the $m$-step return.
We now present two propositions which will be used in the subsequent theorems to obtain bounds
on the convergence of approximate policy iteration with function approximation when gradient descent is employed to approximately solve the least-squares problem in each iteration.
\begin{proposition}\label{prop1}
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H} \norm{J^{\mu_k}-J^*}_\infty + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha},\label{eq: jmu-jk or}
\end{align}
where
\begin{align}
\norm{J^{\mu_{k}}-J_k}_\infty \leq
a_k \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +b_k,\label{eq: ineq 2 or}
\end{align}
$$a_k := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_k := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}},
$$
where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi$ and $\alpha_{GD, \gamma}:=\sup_k \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})|,$ where $\lambda_i$ denotes the $i$-th largest eigenvalue of a matrix.
\end{proposition}
The proof of Proposition \ref{prop1} is presented later in this section. Using Proposition \ref{prop1} and iterating in the special case where $a_k$ is constant or upper bounded by a constant $a$ with $0<a<1$, and $b_k$ is constant or upper bounded by a constant $b>0$, we get the following:
\begin{proposition}\label{prop2}
When $a_k \leq a$ for all $k$, where $0<a<1,$ and $b_k \leq b$ for all $k$, where $b>0$, the following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty\nonumber &\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}k \max(\alpha^{H},a)^{k-1}\norm{J^{\mu_0}-J_0}_\infty +\frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}.
\end{align}
\end{proposition}
Using Proposition \ref{prop2}, we can get Theorems \ref{theorem 2 or} and \ref{theorem 3 or} which give us finite-time and asymptotic bounds for the cases where $\eta_k$ is constant and $\eta_k$ is increasing to infinity, respectively:
\begin{theorem}\label{theorem 2 or}
Suppose that $\gamma, m,$ and $H$ satisfy
\begin{align}
\gamma < \frac{1}{d\sup_k \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2}, \label{eq: gamma assumption or}
\end{align}
and
\begin{align}
m + H >1+\log (2\delta_{FV})/\log (1/\alpha).\label{eq: m H assumption or}
\end{align}
Furthermore, consider the case where $\eta_k$ is a constant, which we call $\eta,$ where $$\eta>\log (\frac {3\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}})/\log (1/\alpha_{GD, \gamma}).$$
Then, under Assumptions~\ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \underbrace{\frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{(1-\alpha)^2}k \max(\alpha^{H},a_\eta)^{k-1}}_{ \text{ finite-time component }} + \underbrace{\frac{2\alpha^H \frac{b_\eta}{1-a_\eta}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}}_{ \text{ asymptotic component }}, \label{eq: case a bound}
\end{align}
where
$$a_\eta := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_\eta := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}.
$$
Taking limits on both sides as $k \to \infty,$ we have the following asymptotic bound:
\begin{align*}
\limsup_{k\to \infty} \norm{J^{\mu_k}-J^*}_\infty \leq \frac{2\alpha^H \frac{b_\eta}{1-a_\eta}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}.
\end{align*}
\end{theorem}
\begin{theorem}\label{theorem 3 or}
Consider $\eta_k$ where $\eta_k$ is increasing and $\eta_k \to \infty.$ Suppose that
$\gamma, m,$ and $H$ satisfy \eqref{eq: gamma assumption or} and \eqref{eq: m H assumption or}. Then, under Assumptions~\ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align}
\limsup_{k\to \infty} \norm{J^{\mu_k}-J^*}_\infty \leq \frac{2\alpha^H \frac{b^*}{1-a^*}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}, \label{eq: recover ls}
\end{align}
where
$$a^*:= \alpha^{m+H-1} \delta_{FV}$$ and
$$
b^* :=\frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}.
$$
\end{theorem}
Before we present the proofs of Propositions \ref{prop1}-\ref{prop2} and Theorems \ref{theorem 2 or}-\ref{theorem 3 or}, we make the following remarks:
In the case where $\eta_k$ is constant, i.e., $\eta_k = \eta$, when $\gamma$ is sufficiently small and $\eta$ is sufficiently large, we have an exponential rate of convergence to the asymptotic error, assuming that $m$ and $H$ are sufficiently large.
When we increase $\eta$, our asymptotic error becomes smaller until it reaches the asymptotic error of the least-squares algorithm, i.e., when $\eta \rightarrow \infty$, we recover the asymptotic error of Algorithm \ref{alg:LSalg}.
If it is difficult to ascertain whether $\eta$ is sufficiently large, one can consider an increasing sequence $\eta_k$ such that $\eta_k \to \infty.$ For such a sequence, our asymptotic error term is the same as that of the least-squares problem.
\proof{Proof of Proposition \ref{prop1}}
We break the proof of the proposition into three steps.
\noindent\textit{Step 1:}
In this step, since $\theta_{k}$ is obtained by taking $\eta_{k}$ steps of gradient descent towards $\tilde{\theta}^{\mu_{k}}$ beginning from $\theta_{k-1}$, we show that the following holds:
\begin{align*}
\norm{\theta_{k} - \tilde{\theta}^{\mu_{k}}}_2
\leq \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k-1} - \tilde{\theta}^{\mu_{k}}}_2,
\end{align*}
where $\alpha_{GD, \gamma}:=\sup_k \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})|,$ where $\lambda_i$ denotes the $i$-th largest eigenvalue of a matrix.
We note that since $$0 < \lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}) \leq \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_2^2 \leq d \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2 \leq d \sup_k \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2,$$
$\alpha_{GD, \gamma}<1$ when $\gamma < \frac{1}{d\sup_k \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2}$.
\textit{Proof of Step 1:} Recall that the iterates in Equation \eqref{eq:iterthetaGD} can be written as follows:
\begin{align*}
\theta_{k,\ell} &= \theta_{k,\ell-1} - \gamma \nabla_\theta c(\theta;\hat{J}^{\mu_{k}})|_{\theta_{k,\ell-1}} =\theta_{k,\ell-1} - \gamma \Big( \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} \theta_{k,\ell-1} - \Phi_{\scriptD_{k-1}}^\top \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k})\Big).
\end{align*}
Since
\begin{align*}0 &= \nabla_\theta c(\theta;\hat{J}^{\mu_{k}})|_{\tilde{\theta}^{\mu_{k}}}= \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} \tilde{\theta}^{\mu_{k}} - \Phi_{\scriptD_{k-1}}^\top \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k}),
\end{align*}
we have the following:
\begin{equation*}
\begin{array}{lll}
\theta_{k,\ell} &=& \theta_{k,\ell-1} - \gamma \Big( \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} \theta_{k,\ell-1} - \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} \tilde{\theta}^{\mu_{k}} - \Phi_{\scriptD_{k-1}}^\top \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k}) \\&+& \Phi_{\scriptD_{k-1}}^\top \scriptP_{k-1}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k})\Big) \\
&=& \theta_{k,\ell-1} - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}).
\end{array}
\end{equation*}
Subtracting $\tilde{\theta}^{\mu_{k}}$ from both sides gives:
\begin{align*}
\theta_{k,\ell} - \tilde{\theta}^{\mu_{k}} &= \theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}} - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}} (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}})\\&= (I - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}}) (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}).
\end{align*}
Thus,
\begin{align*}
\norm{\theta_{k,\ell} - \tilde{\theta}^{\mu_{k}}}_2&= \norm{(I - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}}) (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}})}_2\\&\leq \norm{I - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}}}_2 \norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2\\
&\leq \max_i |\lambda_i (I - \gamma \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}})|\norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2\\
&\leq \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_{k-1}}^\top \Phi_{\scriptD_{k-1}})|\norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2 \\
&\leq \underbrace{\sup_k \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})|}_{=: \alpha_{GD, \gamma}}\norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2,
\end{align*} where $\lambda_i$ denotes the $i$-th largest eigenvalue of a matrix.
Iterating over $\ell,$ the following holds:
\begin{align*}
\norm{\theta_{k} - \tilde{\theta}^{\mu_{k}}}_2 &=\norm{\theta_{k,\eta_{k}} - \tilde{\theta}^{\mu_{k}}}_2\\
&\leq \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k,0} - \tilde{\theta}^{\mu_{k}}}_2\\
&= \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k-1} - \tilde{\theta}^{\mu_{k}}}_2.
\end{align*}
\textit{Step 2}: Using Step 1 and matrix norm properties, we obtain the following bound on $\norm{J^{\mu_k}-J_k}_\infty:$
\begin{align}
\norm{J^{\mu_{k}}-J_k}_\infty \leq
a_k \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +b_k,\label{eq: first ineq or}
\end{align}
$$a_k := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_k := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}.
$$
\textit{Proof of Step 2:}
Using equivalence and sub-multiplicative properties of matrix norms, we have the following:
\begin{alignat*}{2}
&\frac{1}{\norm{\Phi}_\infty} \norm{\Phi \theta_k-\Phi \tilde{\theta}^{\mu_{k}}}_\infty &&\leq \norm{\theta_k - \tilde{\theta}^{\mu_{k}}}_\infty
\\& &&\leq \norm{\theta_k - \tilde{\theta}^{\mu_{k}}}_2
\\& &&\leq \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k-1} - \tilde{\theta}^{\mu_{k}}}_2
\\& &&\leq \frac{1}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}} \norm{\Phi\theta_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_2
\\& &&\leq \frac{\sqrt{|S|}}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}} \norm{\Phi\theta_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty\\
&\implies \norm{J_k-\Phi \tilde{\theta}^{\mu_{k}}}_\infty&&\leq \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}} \norm{J_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty ,
\end{alignat*}
where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi$ and the last line follows from the fact that $J_k := \Phi \theta_k.$
The above implies the following:
\begin{align}
\norm{J^{\mu_{k}}-J_k}_\infty &\leq \norm{\Phi\tilde{\theta}^{\mu_{k}}-J^{\mu_k}}_\infty +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}\norm{J_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty\nonumber\\
&= \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)-J^{\mu_k}}_\infty +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}\norm{J_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty,\label{eq: label 1 or}
\end{align}
where the equality follows from \eqref{eq: theta tilde muk or}.
Now we bound $\norm{J_{k-1}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty$ as follows:
\begin{align}
\norm{J_{k-1}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty
\nonumber&\leq \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \norm{ J^{\mu_{k-1}}-J^{\mu_{k}}}_\infty + \norm{J^{\mu_{k}}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty \\
\nonumber&\leq \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \frac{1}{1-\alpha} + \norm{J^{\mu_{k}}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty
\\
&\leq \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \frac{1}{1-\alpha} + \norm{J^{\mu_{k}}-\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)}_\infty, \label{eq: label 2 or}
\end{align}
where the second inequality uses the fact that $0 \leq J^{\mu}(s) \leq \frac{1}{1-\alpha}$ for every policy $\mu$ (by Assumption \ref{assume 3 or}), and the last line follows from \eqref{eq: theta tilde muk or}. We introduce Lemma \ref{mainGDLemma} to upper bound $\norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty$ as follows:
\begin{lemma}
\begin{align*}
\norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty
&\leq \alpha^{m+H-1} \delta_{FV} \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}.
\end{align*} \label{mainGDLemma}
\end{lemma}
The proof of Lemma \ref{mainGDLemma} is in Appendix \ref{appendix:mainGDLemmaProof}.
Putting \eqref{eq: label 1 or}, \eqref{eq: label 2 or}, and Lemma \ref{mainGDLemma} together, we get the following:
\begin{align*}
\norm{J^{\mu_{k}}-J_k}_\infty \leq
a_k \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +b_k,
\end{align*}
\begin{align*}a_k := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}( \alpha^{m+H-1} \delta_{FV}+1) \end{align*} and
\begin{align*}
b_k := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}.
\end{align*}
\noindent\textit{Step 3:}
We will establish the following bound on $T_{\mu_{k+1}}T^{H}J^{\mu_k}$ using the contraction property of the Bellman operators and the property in \eqref{eq:usefulproperties}:
\begin{align*}
-T_{\mu_{k+1}}T^{H}J^{\mu_k}
&\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H}J^{\mu_k} + \eps_{LA} e. \nonumber
\end{align*}
Using properties in \eqref{eq:usefulproperties} and monotonicity, we will repeatedly apply $T_{\mu_{k+1}}$ to both sides and take limits to obtain the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H}J^{\mu_k}
&\geq - \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e.
\end{align*}
\textit{Proof of Step 3:} We begin by noting that
\begin{align}
-T_{\mu_{k+1}}T^{H}J^{\mu_k} &\leq -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k \nonumber+ T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps_{LA} e \nonumber\\
&= \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps_{LA} e \nonumber\\
&\overset{(b)}\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H}J^{\mu_k} + \eps_{LA} e, \nonumber
\end{align}
where $e$ is the vector of all $1$s; the first inequality holds because $TJ^{\mu_k} \geq T_{\mu_k}J^{\mu_k} = J^{\mu_k}$ and monotonicity imply $T^{H}J^{\mu_k} \geq T^{H-1}J^{\mu_k}$; (a) and (b) follow from the contraction properties of $T_{\mu_{k+1}}T^{H-1}$ and $T^{H}$, respectively; and the third inequality follows from the definition of the lookahead policy $\mu_{k+1}$ (which satisfies $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps_{LA}$).
The rest of the proof is similar to the arguments in \cite{bertsekastsitsiklis}, \cite{bertsekas2019reinforcement}; however, we have to explicitly incorporate the role of lookahead ($H$) in the remaining steps of the proof. Suppose that we apply the $T_{\mu_{k+1}}$ operator $\ell$ times to both sides. Then, due to monotonicity and the fact that $T_\mu(J+ce)=T_\mu(J)+\alpha ce$ for any policy $\mu,$ we have the following:
\begin{align*}
T_{\mu_{k+1}}^\ell T^{H}J^{\mu_k} \leq \alpha^{\ell } (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} )e + T_{\mu_{k+1}}^{\ell+1} T^{H} J^{\mu_k}.
\end{align*}
Using a telescoping sum, we get the following inequality:
\begin{align*}
T_{\mu_{k+1}}^j T^{H} J^{\mu_k} - T^{H}J^{\mu_k}
&\geq - \sum_{\ell = 1}^{j} \alpha^{\ell -1} (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} )e.
\end{align*}
Taking limits as $j \to \infty,$ we obtain the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H}J^{\mu_k}
&\geq - \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e.
\end{align*}
The rest of the proof is straightforward.
Subtracting $J^*$ from both sides of the previous inequality and using the contraction property of $T$ (together with $J^* = T^H J^*$), we get:
\begin{align*}
&\alpha^{H} \norm{J^* - J^{\mu_k}}_\infty e + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e \\&\geq J^* - T^{H}J^{\mu_k} + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e
\\& \geq J^* - J^{\mu_{k+1}}\\
&\geq 0,
\end{align*}
which implies that
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H} \norm{J^{\mu_k}-J^*}_\infty + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha},\label{eq: ** 4}
\end{align} since $J^* \geq J^{\mu}$ for all policies $\mu$. The above, together with the inequality in \eqref{eq: first ineq or}, gives us Proposition \ref{prop1}.
\Halmos
\endproof
We now prove Proposition \ref{prop2} by iterating over $k$ using Proposition \ref{prop1}.
\proof{Proof of Proposition \ref{prop2}}
First, noting the assumptions of Proposition \ref{prop2} that $a_k \leq a$ for all $k$ with $0<a<1,$ and $b_k \leq b$ for all $k$ with $b>0$, we iterate over $k$ using \eqref{eq: ineq 2 or} to obtain a bound on $\norm{J^{\mu_k}-J_k}_\infty$ as follows:
\begin{align}
\norm{J^{\mu_{k}}-J_{k}}_\infty \leq a^k \norm{J_0-J^{\mu_0}}_\infty + b\sum_{j=0}^{k-1} a^{j}. \label{eq: ineq 1 last or}
\end{align}
Now, we iterate over $k$ in \eqref{eq: jmu-jk or} to get the following:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty \nonumber&\leq \alpha^{kH} \norm{J^{\mu_0}-J^*}_\infty + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} \norm{J^{\mu_{\ell}}-J_\ell}_\infty + \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}\\
&\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} \norm{J^{\mu_{\ell}}-J_\ell}_\infty + \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}.\label{eq: ineq 2 last or}
\end{align}
Combining the inequalities in \eqref{eq: ineq 1 last or} and \eqref{eq: ineq 2 last or} gives us the following:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty &\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} \Big[a^\ell \norm{J_0-J^{\mu_0}}_\infty + b\sum_{j=0}^{\ell-1} a^{j} \Big] \nonumber\\&+ \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} \Big[ a^\ell \norm{J_0-J^{\mu_0}}_\infty + \frac{b}{1-a} \Big] + \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\norm{J_0 - J^{\mu_0}}_\infty \sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} a^\ell \nonumber+ \frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\norm{J_0 - J^{\mu_0}}_\infty \sum_{\ell = 0}^{k-1}\max(\alpha^{H},a)^{k-1} \nonumber+ \frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&= \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}k \max(\alpha^{H},a)^{k-1}\norm{J^{\mu_0}-J_0}_\infty +\frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})},\nonumber
\end{align}
where the last inequality follows from the assumption that $0<a<1.$
\Halmos
\endproof
We now use Proposition \ref{prop2} to prove Theorems \ref{theorem 2 or}-\ref{theorem 3 or}.
\proof{Proof of Theorem \ref{theorem 2 or}}
When $\eta_k=\eta,$ the following holds:
$$a_k = a_\eta := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_k = b_\eta := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}.
$$
Note that from our assumptions on $m, \gamma,$ and $H$, $0<a_\eta<1$ and $0< b_\eta$.
To see that $a_\eta < 1$, observe from our assumptions in Theorem \ref{theorem 2 or} that \begin{align}\alpha^{m+H-1} \delta_{FV} < \frac{1}{2}\label{eq: ** or}\end{align} and $$\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1) < \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}\frac{3}{2}< \frac{1}{2}.$$
Putting the above two lines together gives us
\begin{align*}
\alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1)< 1.
\end{align*}
So, we can directly use Proposition \ref{prop2} to obtain \eqref{eq: case a bound}.
\Halmos
\endproof
\proof{Proof of Theorem \ref{theorem 3 or}}
Take any constant $0 <c^*< 1- \alpha^{m+H-1} \delta_{FV},$ where $c^*$ serves as a margin of error. Define $k(c^*)$ to be the smallest value of $k$ such that:
\begin{align} \eta_{k(c^*)} >\log\Big(\frac{1}{c^*}\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}}( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}+\frac{1}{1-\alpha})\Big)/\log(1/\alpha_{GD, \gamma}). \label{eq: c* assume or}
\end{align}
We know such a $k(c^*)$ exists since $\eta_k \to \infty$.
We define $a_{c^*}$ and $b_{c^*}$ as follows:
$$a_{c^*} := \alpha^{m+H-1} \delta_{FV} + c^*,$$
$$b_{c^*} := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE} + c^*.$$
It is easy to see from \eqref{eq: c* assume or} and the definitions of $a_k$ and $b_k$ in Proposition \ref{prop1} that $a_k \leq a_{c^*}$ and $b_k \leq b_{c^*}$ when $k > k(c^*),$ since $0<\alpha_{GD, \gamma}<1$ from our assumption on $\gamma$ in Theorem \ref{theorem 3 or}.
From our assumptions on $c^*$, $m, \gamma,$ and $H$, $0<a_{c^*}<1$ and $0< b_{c^*}$, where we use a similar technique as in the proof of Theorem \ref{theorem 2 or} with $\alpha^{m+H-1}\delta_{FV}< 1-c^*$ in place of \eqref{eq: ** or} to show that $a_{c^*}<1$.
So, we can use Proposition \ref{prop2} and begin iterating at $k(c^*)$ to obtain finite time bounds on $\norm{J^{\mu_{k}} - J^*}_\infty$ as follows:
\begin{align}
&\norm{J^{\mu_{k}} - J^*}_\infty
\nonumber \\&\leq \underbrace{\frac{\alpha^{(k-k(c^*))(H)}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}(k-k(c^*))\max(a_{c^*}, \alpha^{H})^{k-k(c^*)-1}\norm{J^{\mu_{k(c^*)}}-J_{k(c^*)}}_\infty}_{ \text{ finite-time component }}+ \underbrace{\frac{2\alpha^H \frac{b_{c^*}}{1-a_{c^*}}+\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}}_{ \text{ asymptotic component }}. \label{eq: fin time jmu nk}
\end{align}
Using Proposition \ref{prop1}, we can upper bound $\norm{J^{\mu_{k(c^*)}}-J_{k(c^*)}}_\infty$ as follows:
$$\norm{J^{\mu_{k(c^*)}}-J_{k(c^*)}}_\infty \leq \prod_{i=1}^{k(c^*)} a_{i}\norm{J_0-J^{\mu_0}}_\infty + \sum_{j=1}^{k(c^*)} b_j \prod_{i=j+1}^{k(c^*)} a_i,$$ where $a_k$ and $b_k$ are defined in Proposition \ref{prop1}.
Taking the limit superior as $k \to \infty$ on both sides of \eqref{eq: fin time jmu nk}, the finite-time component vanishes since $\max(a_{c^*}, \alpha^{H}) < 1$, and we get the following:
\begin{align}
\limsup_{k \to \infty} \norm{J^{\mu_{k}} - J^*}_\infty
\nonumber\leq \frac{2\alpha^H \frac{b_{c^*}}{1-a_{c^*}}+\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}.
\end{align}
Since the above holds for all sufficiently small $c^* > 0$, letting $c^* \to 0$ (so that $a_{c^*} \to a^*$ and $b_{c^*} \to b^*$) gives Theorem \ref{theorem 3 or}.
\Halmos
\endproof
Looking through the steps of our proof, one can obtain finite-time bounds for the case where $\eta_k$ is increasing. However, the algorithm consists of two loops: one corresponding to policy iteration and the other corresponding to gradient descent within each policy iteration step. It is hard to compare the relative complexities of each step within these loops. Therefore, a finite-time analysis does not shed much light on the amount of computation needed to execute the algorithm with an increasing sequence $\eta_k.$ However, it is interesting to note that for all $k>k(c^*),$ the algorithm converges exponentially fast in the number of policy iteration steps, although the number of gradient descent steps within each policy iteration step is increasing.
\section{Conclusion}
Practical RL algorithms that deal with large state spaces implement some form of approximate policy iteration. In traditional analyses of approximate policy iteration, for example in \cite{bertsekas2019reinforcement}, it is assumed that there is an error in the policy evaluation step and an error in the policy improvement step. In this paper, we seek to understand the role of function approximation in the policy evaluation step and the associated changes that one has to make to the approximate policy iteration algorithm (such as lookahead) to counteract the effect of function approximation. Our main conclusion is that lookahead provides two benefits: (i) it mitigates the effects of function approximation, rollout and the choice of specific feature vectors, and (ii) from a theoretical perspective, it improves the upper bound on the error in approximate policy iteration from $1/(1-\alpha)^2$ to $1/(1-\alpha^{H})(1-\alpha).$
Possible directions for future work include the following:
\begin{itemize}
\item For problems with a terminal state, it would be interesting to consider cases where the value function of a given policy is estimated using a full rollout which provides an unbiased estimate as in \cite{tsitsiklis2002convergence}.
\item In game playing applications, gradient descent is commonly used to estimate the value function, but temporal-difference learning is used in other applications. It would be interesting to extend our results to the case of TD learning-based policy evaluation.
\item While neural networks are not linear function approximators, recent results on the NTK analysis of neural networks suggest that they can be approximated as linear combinations of basis functions \cite{jacot2018neural,du2018gradient,arora2019fine,ji2019polylogarithmic, cao2019generalization}. Thus, to the extent that the NTK approximation is reasonable, our results can potentially shed light on why the combination of the representation capability of neural networks and tree-search methods work well in practice, although further work is necessary to make this connection precise.
\end{itemize}
\begin{APPENDICES}
\section{Proof of Lemma \ref{mainGDLemma}} \label{appendix:mainGDLemmaProof}
\proof{Proof of Lemma \ref{mainGDLemma}}
We bound $\norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty$ through the following chain of inequalities:
\begin{align*}
& \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty \\
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
&\leq\norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k}_\infty \norm{w_k}_\infty \\ \allowdisplaybreaks
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE} \\ \allowdisplaybreaks
&= \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1} - \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_k}_\infty \norm{ T_{\mu_k}^m T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m\norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^* + J^* - J^{\mu_k}}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m\norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^*}_\infty + \alpha^m\norm{\scriptM_k}_\infty \norm{J^* - J^{\mu_k}}_\infty + \delta_{app}+ \delta_{FV} \eps_{PE} \\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} - J^*}_\infty + \frac{\alpha^m}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} -J^{\mu_{k-1}} + J^{\mu_{k-1}}- J^*}_\infty + \frac{\alpha^m}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^{m+H-1} \delta_{FV} \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}.
\end{align*} \Halmos
\endproof
\section{Proof of Theorem \ref{mainAPI}}
\label{appendix:thm1}
\begin{remark}
The proof of Theorem \ref{mainAPI} is somewhat simpler than that of Theorems \ref{theorem 2 or}-\ref{theorem 3 or} but uses many of the same ideas.
The proof of Theorem \ref{mainAPI} skips Steps 1 and 2 in the proof of Proposition \ref{prop1} and instead uses Lemma \ref{mainGDLemma} to obtain an analogous result to Step 2. The rest of the proof of Proposition \ref{prop1} also applies to the proof of Theorem \ref{mainAPI}.
\end{remark}
\proof{Proof of Theorem \ref{mainAPI}}
Consider policies $\mu_k$ and $\mu_{k+1},$ where $\mu_0$ can be taken to be any arbitrary policy. We have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H}J^{\mu_k} &\leq -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k \nonumber+ T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps_{LA} e \nonumber\\
&= \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps_{LA} e \nonumber\\
&\overset{(b)}\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H}J^{\mu_k} + \eps_{LA} e, \label{eq: *}
\end{align}
where $e$ is the vector of all $1$s; the first inequality uses $T^{H}J^{\mu_k} \geq T^{H-1}J^{\mu_k}$ (which follows from $TJ^{\mu_k} \geq T_{\mu_k}J^{\mu_k} = J^{\mu_k}$ together with the monotonicity properties \eqref{monotonicityproperty}), and (a) and (b) follow from the contraction property of $T_{\mu_{k+1}}T^{H-1}$ and of $T^H$, respectively.
Since $\norm{J^{\mu_k}-J_k}_\infty= \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty$ from \eqref{eq:iterateAPI}, we can further bound $\norm{J^{\mu_k}-J_k}_\infty$ using Lemma \ref{mainGDLemma}. We now rewrite the recursion from Lemma \ref{mainGDLemma}.
Define $\beta \in (0, 1)$ as follows:
\begin{align}
\beta := \alpha^{m+H-1} \delta_{FV}, \label{eq:defbeta}
\end{align} where we note from our assumption in Theorem \ref{mainAPI} that $\alpha^{m+H-1} \delta_{FV}<1.$ Furthermore, we denote $\tau$ as follows:
\begin{align}
\tau := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}. \label{eq:defdelta3}
\end{align}
Then, our bound from Lemma \ref{mainGDLemma} can be rewritten as the following:
\begin{align*}
\norm{J^{\mu_k}-J_k}_\infty
&\leq \beta \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \tau.
\end{align*}
Iterating over $k$, we get that for any $k$,
\begin{align}
\norm{J^{\mu_k}-J_k}_\infty \leq \underbrace{\beta^k \norm{J^{\mu_0}-J_0}_\infty + \sum_{i=0}^{k-1} \beta^i \tau}_{=: f_k}. \label{eq: pound2}
\end{align}
Putting (\ref{eq: *}) and (\ref{eq: pound2}) together, we have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H}J^{\mu_k}
&\leq 2\alpha^H f_k e - T^{H}J^{\mu_k} + \eps_{LA} e.
\end{align}
The rest of the proof is similar to the arguments in \cite{bertsekastsitsiklis}, \cite{bertsekas2019reinforcement}; however, we have to explicitly incorporate the role of lookahead ($H$) in the remaining steps of the proof.
Suppose that we apply the $T_{\mu_{k+1}}$ operator $\ell$ times to both sides. Then, due to monotonicity \eqref{monotonicityproperty} and the fact that $T_\mu(J+ce)=T_\mu(J)+\alpha ce$ for any policy $\mu$ \eqref{eq:usefulproperties}, we have the following:
\begin{align*}
-T_{\mu_{k+1}}^{\ell+1} T^{H}J^{\mu_k}
&\leq \alpha^\ell (2\alpha^{H} f_k +\eps_{LA})e - T_{\mu_{k+1}}^{\ell}T^{H}J^{\mu_k}.
\end{align*}
Using a telescoping sum, we get the following inequality:
\begin{align*}
T_{\mu_{k+1}}^j T^{H} J^{\mu_k} - T^{H}J^{\mu_k}
&\geq -\sum_{\ell = 1}^{j} \alpha^{\ell-1} (2\alpha^{H} f_k +\eps_{LA})e.
\end{align*}
Taking the limit as $j\rightarrow\infty$ on both sides, we have the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H}J^{\mu_k} \geq -\frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha}e.
\end{align*}
Rearranging terms and subtracting $J^*$ from both sides, we get the following:
\begin{align*}
\alpha^{H} \norm{J^* - J^{\mu_k}}_\infty e + \frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha}e \geq J^* - T^{H}J^{\mu_k} + \frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha}e \geq J^* - J^{\mu_{k+1}} \geq 0,
\end{align*}
which implies that
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H} \norm{J^{\mu_k}-J^*}_\infty + \frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha},
\end{align} since $J^* \geq J^{\mu}$ for all policies $\mu$.
We iterate over $k>0$ to get the following:
\begin{align*}
\norm{J^{\mu_{k}} - J^*}_\infty &\leq \alpha^{k(H)} \norm{J^{\mu_0}-J^*}_\infty + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H)}\frac{2\alpha^{H} f_\ell +\eps_{LA}}{1-\alpha}\\
&\leq \frac{\alpha^{k(H)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H)}\frac{2\alpha^{H} f_\ell +\eps_{LA}}{1-\alpha},
\end{align*}
where $$f_\ell := \beta^\ell \norm{J^{\mu_0}-J_0}_\infty + \sum_{i=0}^{\ell-1} \beta^i \tau,$$ and $\beta:= \alpha^{m+H-1} \delta_{FV}$ and $\tau := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}$.
We will now obtain finite-time bounds of $\norm{J^{\mu_{k}} - J^*}_\infty$ using the above results. The following holds:
\begin{align*}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \frac{\alpha^{k(H)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H)}\frac{2\alpha^{H} f_\ell +\eps_{LA}}{1-\alpha} \\
&\leq \frac{\alpha^{k(H)} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H)}\frac{2\alpha^{H} \Big(\beta^\ell \norm{J^{\mu_0}-J_0}_\infty + \frac{\tau}{1-\beta}\Big) +\eps_{LA}}{1-\alpha} \\
&\leq \frac{\alpha^{k(H)} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)(H)} \beta^\ell +\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}\\
&\leq \frac{\alpha^{k(H)} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}\sum_{\ell = 0}^{k-1}\max(\alpha^{H},\beta)^{k-1} +\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}\\
&\leq \frac{\alpha^{k(H)} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}k\max(\alpha^{H},\beta)^{k-1} +\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)},
\end{align*}
where we have used $\beta < 1,$ which holds due to the assumption in Theorem \ref{mainAPI}.
\Halmos \endproof
\begin{comment}
\section{A Modified Least-Squares Algorithm}\label{appendix:thm2}
Suppose Step \ref{step 3 alg} of Algorithm \ref{alg:LSalg} is changed to $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k$. Then, it is still possible to get bounds on the performance of the algorithm when $m$ is sufficiently large. With this modification to the algorithm, when Assumptions \ref{assume 1 or}, \ref{assume 2 or}, and \ref{assume 3 or} hold, we have the following:
\begin{theorem}\label{mainAPISecond}
Suppose that $m$ satisfies $m >\log (\delta_{FV})/\log (1/\alpha),$ where
$$\delta_{FV} := \sup_k \norm{\scriptM_k}_\infty = \sup_k\norm{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_\infty,$$
then
\begin{align*}
\limsup_{k\to \infty}\norm{J^{\mu_k}-J^*}_\infty \leq \frac{\tilde{c}_{m, H}}{(1-\alpha)(1-\alpha^{H-1})},
\end{align*}
where $$\tilde{c}_{m, H} := 2\alpha^H \Bigg(\frac{\frac{\alpha^m}{1-\alpha} \delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE} }{1-\alpha^{m}\delta_{FV}} + \frac{1}{1-\alpha}+\norm{J_0}_\infty \Bigg)+\eps_{LA}$$ and
$
\delta_{app} := \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty.
$
\end{theorem}
\proof{Proof of Theorem \ref{mainAPISecond}}
Consider policies $\mu_k$ and $\mu_{k+1},$ where $\mu_0$ can be taken to be any arbitrary policy. We have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H-1}J^{\mu_k} &= -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k + T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps_{LA} e \nonumber\\
&= \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps_{LA} e \nonumber\\
&\overset{(b)}\leq 2 \alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^H J^{\mu_k} + \eps_{LA} e \nonumber\\
&\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H-1}J^{\mu_k} + \eps_{LA} e, \label{eq: * 2}
\end{align}
where $e$ is the vector of all $1$s, (a) and (b) follow from the contraction property of $T^H$ and $T_{\mu_{k+1}}$ and the last inequality follows from standard arguments using the monotonicity properties and the definition of $T$: specifically, note that
$$
TJ^\mu = \max_{\mu'} T_{\mu'} J^\mu \geq T_\mu J^\mu = J^{\mu},
$$
and repeatedly apply $T$ to both sides of the inequality and use monotonocity to obtain
$T^\ell J^\mu \geq T^{\ell-1} J^\mu$ for all $\ell$ and all policies $\mu.$
We can further bound $\norm{J^{\mu_k}-J_k}$ as follows:
\begin{align*}
\norm{J^{\mu_k}-J_k} &= \norm{\scriptM_k (T_{\mu_k}^m J_{k-1}+w_k)- J^{\mu_k}}_\infty
\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
&\leq\norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k}_\infty \norm{w_k}_\infty
\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE} \\
&= \norm{\scriptM_k T_{\mu_k}^m J_{k-1} - \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_k}_\infty \norm{ T_{\mu_k}^m J_{k-1} - J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \delta_{FV} \norm{ T_{\mu_k}^m J_{k-1} - J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \delta_{FV} \norm{J_{k-1} - J^{\mu_k}}_\infty + \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \delta_{FV} \norm{J_{k-1} - J^{\mu_k}}_\infty + \delta_{app}+ \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \delta_{FV} \norm{J_{k-1} - J^{\mu_{k-1}} + J^{\mu_{k-1}} - J^{\mu_k}}_\infty + \delta_{app}+ \delta_{FV} \eps_{PE}\\
&\leq \alpha^m\delta_{FV} \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \alpha^m \delta_{FV} \norm{J^{\mu_{k-1}} - J^{\mu_k}}_\infty + \delta_{app}+ \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \delta_{FV} \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \frac{\alpha^m \delta_{FV}}{1-\alpha} + \delta_{app}+ \delta_{FV} \eps_{PE}
\end{align*}
Iterating over $k$ we get that for all $k$,
\begin{align}
\norm{J^{\mu_k}-J_k}_\infty \leq \underbrace{\frac{\frac{\alpha^m}{1-\alpha} \delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE} }{1-\alpha^{m}\delta_{FV}} + \frac{1}{1-\alpha}+\norm{J_0}_\infty}_{=: p_{m}}, \label{eq: pound 2}
\end{align}
where we have used the assumption $\alpha^{m} \delta_{FV} < 1$ and the fact that $\norm{J^{\mu_0}}_\infty\leq 1/(1-\alpha)$ due to Assumption~\ref{assume 3 or}.
Putting (\ref{eq: * 2}) and (\ref{eq: pound 2}) together, we have the following:
\begin{align*}
-T_{\mu_{k+1}}T^{H-1}J^{\mu_k} \leq \underbrace{\Bigg[2\alpha^H p_{m} + \eps_{LA}\Bigg]}_{=: \tilde{c}_{m, H}} e -T^{H-1}J^{\mu_k}.
\end{align*}
The rest of the proof follows from the proof of Theorem \ref{mainAPI} with $\tilde{c}_{m, H}$ instead of $c_{m, H}$ and we get Theorem \ref{mainAPISecond}.
\Halmos \endproof
Analogously to the inequality \eqref{eq: ***} in Appendix \ref{appendix:thm1}, the proof of Theorem \ref{mainAPISecond} gives us the following:
\begin{align}
\norm{J^{\mu_{k}}-J^*}_\infty &\leq \alpha^{(H-1)k} \norm{J^{\mu_0}-J^*}_\infty + \sum_{i=0}^{k-1} \alpha^{(H-1)i} \frac{ \tilde{c}_{m, H}}{1-\alpha},
\label{eq: *** 8}
\end{align} for $k >0.$
Note that the inequality \eqref{eq: *** 8} provides finite-time performance bounds for the modified least-squares algorithm, in addition to the asymptotic result stated in Theorem \ref{mainAPISecond}. \color{black}
\section{Bounds on Iterates In Algorithm \ref{alg:LSalg}}\label{appendix:prop1}
In the following proposition, we present a bound on the difference between $J_k$ and $J^*.$
\begin{proposition} When $\alpha^{m+H-1} \delta_{FV} <1,$
$\limsup_{k \to \infty} \norm{J_k - J^*}_\infty \leq \frac{r_{m, H}}{1-q_{m, H}},$ \label{IterAPITheorem}
\end{proposition}
where $q_{m, H} := \delta_{FV} \alpha^{m+H-1} + \big( 1+\delta_{FV} \alpha^m \big)\alpha^{H-1}$ and $r_{m, H} := \big( 1+\delta_{FV} \alpha^m \big)\Bigg(\alpha^{H-1} p_{m, H} + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_{app}+\delta_{FV} \eps_{PE},$ where $c_{m, H}$ is defined in \eqref{eq:defcdmh} and $p_{m, H}$ is defined in $(\ref{eq: pound})$. The proof is as follows.
\proof{Proof of Proposition \ref{IterAPITheorem}}
\begin{align*}
&\norm{J_{k+1} - J^*}_\infty \\&= \norm{J_{k+1} -J^{\mu_{k+1}} + J^{\mu_{k+1}}- J^*}_\infty \\
&\leq \norm{J_{k+1} -J^{\mu_{k+1}}}_\infty + \norm{J^{\mu_{k+1}}- J^*}_\infty \\
&\leq \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -J^{\mu_{k+1}}}_\infty +\delta_{FV} \eps_{PE} +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&= \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -\scriptM_{k+1} J^{\mu_{k+1}} + \scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -\scriptM_{k+1} J^{\mu_{k+1}}}_\infty + \norm{\scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_{k+1}}_\infty \norm{T_{\mu_{k+1}}^m T^{H-1} J_k - J^{\mu_{k+1}}}_\infty + \norm{\scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&\leq \delta_{FV} \alpha^m \norm{T^{H-1} J_k - J^{\mu_{k+1}}}_\infty + \delta_{app} +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&= \delta_{FV} \alpha^m \norm{T^{H-1} J_k - J^* + J^* - J^{\mu_{k+1}}}_\infty + \delta_{app} +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&\leq \delta_{FV} \alpha^m \norm{T^{H-1} J_k - J^*}_\infty + \delta_{FV} \alpha^m \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_{app} + \norm{J^{\mu_{k+1}}- J^*}_\infty+\delta_{FV} \eps_{PE}\\
&\leq \delta_{FV} \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \delta_{FV} \alpha^m \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_{app} + \norm{J^{\mu_{k+1}}- J^*}_\infty+\delta_{FV} \eps_{PE}\\
&= \delta_{FV} \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_{FV} \alpha^m \big) \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_{app}+\delta_{FV} \eps_{PE} \allowdisplaybreaks\\
&\overset{(c)}\leq \delta_{FV} \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_{FV} \alpha^m \big) \Bigg(\alpha^{H-1} \norm{J^{\mu_k}-J^*}_\infty + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_{app} +\delta_{FV} \eps_{PE}\\
&\leq \delta_{FV} \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_{FV} \alpha^m \big) \Bigg(\alpha^{H-1} (\norm{J^{\mu_k}-J_k}_\infty + \norm{J_k-J^*}_\infty) + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_{app} +\delta_{FV} \eps_{PE}\allowdisplaybreaks\\
&= \Bigg(\delta_{FV} \alpha^{m+H-1} + \big( 1+\delta_{FV} \alpha^m \big)\alpha^{H-1} \Bigg) \norm{J_k - J^*}_\infty + \big( 1+\delta_{FV} \alpha^m \big)\Bigg(\alpha^{H-1} \norm{J^{\mu_k}-J_k}_\infty + \frac{ c_{m, H}}{1-\alpha} \Bigg)\\& + \delta_{app} +\delta_{FV} \eps_{PE}\allowdisplaybreaks\\
&\overset{(d)}\leq \Bigg(\delta_{FV} \alpha^{m+H-1} + \big( 1+\delta_{FV} \alpha^m \big)\alpha^{H-1} \Bigg) \norm{J_k - J^*}_\infty + \big( 1+\delta_{FV} \alpha^m \big)\Bigg(\alpha^{H-1} p_{m, H} + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_{app} +\delta_{FV} \eps_{PE}\\
&= q_{m, H} \norm{J_k - J^*}_\infty + r_{m, H},
\end{align*}
where $q_{m, H} := \delta_{FV} \alpha^{m+H-1} + \big( 1+\delta_{FV} \alpha^m \big)\alpha^{H-1}$ and $r_{m, H} := \big( 1+\delta_{FV} \alpha^m \big)\Bigg(\alpha^{H-1} p_{m, H} + \frac{ c_{m, H}}{1-\alpha} \Bigg) + \delta_{app}+\delta_{FV} \eps_{PE}.$ Note that $p_{m, H}$ is defined in $(\ref{eq: pound})$ and $c_{m, H}$ is defined in $(\ref{eq:defcdmh}).$ Additionally, $(c)$ follows from $(\ref{eq:cprop1})$ and $(d)$ follows from $(\ref{eq: pound})$, noting that the inequalities in $(\ref{eq:cprop1})$ and $(\ref{eq: pound})$ hold for all $\eps_{PE}'>0$.
Iterating over $k$, we get Proposition \ref{IterAPITheorem}.
We obtain the following inequality in much a similar way to inequality \eqref{eq: ***} in the proof of Theorem \ref{mainAPI}:
\begin{align}
\norm{J_{k} - J^*}_\infty \leq q_{m, H}^k \norm{J_0 - J^*}_\infty + \sum_{i=0}^{k-1} q_{m, H}^i r_{m, H}, \text{ for $k >0.$ } \label{eq:*** 3}
\end{align} \Halmos
\endproof
Note that the inequality \eqref{eq:*** 3} provides finite-time performance bounds, in addition to the asymptotic result stated in Proposition \ref{IterAPITheorem}.
\end{comment}
\section{Counterexample} \label{appendix:counterexAppendix}
Even though, in practice, $J^{\mu_k}$ is what we are interested in, the values $J_k$ computed as part of our algorithm should not go to $\infty$ since the algorithm would be numerically unstable otherwise. In the longer version of this work, \cite{annaor}, we provide a bound on $\norm{J_k-J^*}_\infty$ when $m+H-1$ is sufficiently large as in Theorem~\ref{mainAPI}. In this appendix, we show that, when this condition is not satisfied, $J_k$ can become unbounded.
The example we use is depicted in Figure \ref{fig:TsitVanRoyIm}.
\begin{figure}
\centering
\subfloat[\centering $\mu^a$]{\includegraphics[width=2.5cm]{Role of Lookahead+FA/Images/counter1.png} }%
\qquad
\subfloat[\centering $\mu^b$]{\includegraphics[width=2.5cm]{Role of Lookahead+FA/Images/counter2.png} }%
\caption{An example illustrating the necessity of the condition in Theorem~\ref{mainAPI}}%
\label{fig:TsitVanRoyIm}%
\end{figure}
There are two policies, $\mu^a$ and $\mu^b$, and the transitions are deterministic under both policies. The rewards are deterministic and only depend on the states. The rewards associated with states are denoted by $r(x_1)$ and $r(x_2),$
with $r(x_1)>r(x_2)$. Thus, the optimal policy is $\mu^a$. We assume scalar features $\phi(x_1)=1$ and $\phi(x_2)=2.$
We fix $H=1$.
The lookahead policy chooses $\mu^a$ when
\begin{align*}
&J_k(x_1) > J_k(x_2), \text{ i.e., when } \theta_k > 2\theta_k,
\end{align*} which is impossible for positive $\theta_k$. Thus, as long as $\theta_k>0,$ the lookahead policy will be $\mu^b.$
We will now show that $\theta_k$ increases at each iteration when $\delta_{FV} \alpha^{m+H-1}>1.$ We assume that $\theta_0>0$ and $\scriptD_k = \{1, 2\}$ $\forall k.$ A straightforward computation shows that $\delta_{FV}=\frac{6}{5}.$
At iteration $k+1,$ suppose $\mu_{k+1}=\mu^b$; then the estimates $\hat{J}^{\mu_{k+1}}(i)$ for $i = 1, 2$ are as follows:
\begin{align*}
\hat{J}^{\mu_{k+1}}(1) =r(x_1)+\sum_{i=1}^{m-1} r(x_1) \alpha^i + 2 \alpha^m \theta_k, \quad
\hat{J}^{\mu_{k+1}}(2) = r(x_2) +\sum_{i=1}^{m-1} r(x_2)\alpha^i + 2 \alpha^m \theta_k.
\end{align*}
Thus, from Step \ref{step 4 alg} of Algorithm \ref{alg:LSalg}:
\begin{align*}
&\theta_{k+1} = \arg \min_\theta \sum_{i =1}^2 \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2 \\
&\implies \theta_{k+1} = \frac{\sum_{i=0}^{m-1} \alpha^i r(x_1)}{5} + \frac{2 \sum_{i=0}^{m-1} \alpha^i r(x_2)}{5} + \frac{6 \alpha^m \theta_k}{5}\\
&\implies \theta_{k+1}> \frac{6}{5} \alpha^{m}\theta_k.
\end{align*}
Thus, since $\theta_0 > 0$ and $H=1$, when $\frac{6}{5} \alpha^{m+H-1} = \delta_{FV} \alpha^{m+H-1} >1,$ $\theta_{k}$ goes to $\infty.$
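The computations above are easy to verify numerically. The following illustrative script (ours, not part of the analysis) checks that $\delta_{FV}=6/5$ and simulates the divergence of $\theta_k$ for a choice of $\alpha$ and $m$ with $\frac{6}{5}\alpha^{m}>1$:
\begin{verbatim}
import numpy as np

# Sanity check of the counterexample: with Phi = (1, 2)^T and D_k = {1, 2},
# M = Phi (Phi^T Phi)^{-1} Phi^T has infinity norm 6/5, and theta_k diverges
# once delta_FV * alpha^{m+H-1} = (6/5) * alpha^m > 1 (recall H = 1).
Phi = np.array([[1.0], [2.0]])
M = Phi @ np.linalg.inv(Phi.T @ Phi) @ Phi.T
delta_FV = np.abs(M).sum(axis=1).max()
print(delta_FV)                          # 1.2 = 6/5

alpha, m = 0.99, 1                       # (6/5) * 0.99 > 1
r1, r2 = 1.0, 0.5                        # any rewards with r(x_1) > r(x_2)
geo = sum(alpha**i for i in range(m))    # sum_{i=0}^{m-1} alpha^i
theta = 1.0                              # theta_0 > 0
for _ in range(200):
    theta = geo * r1 / 5 + 2 * geo * r2 / 5 + (6 / 5) * alpha**m * theta
print(theta)                             # grows without bound as k increases
\end{verbatim}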
\end{APPENDICES}
\ACKNOWLEDGMENT{The research presented here was supported in part by a grant from Sandia National Labs and the NSF Grants CCF 1934986, CCF 2207547, CNS 2106801, ONR Grant N00014-19-1-2566, and ARO Grant W911NF-19-1-0379. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology \& Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
}
\section{Introduction}\label{intro}
In many applications of reinforcement learning, such as playing chess and Go, the underlying model is known and so the main challenge is in solving the associated dynamic programming problem in an efficient manner. Policy iteration and variants of policy iteration \cite{bertsekas2019reinforcement,Bertsekas2011ApproximatePI,bertsekastsitsiklis} that solve dynamic programming problems rely on computations that are infeasible due to the sizes of the state and action spaces in modern reinforcement learning problems.
As a remedy to this ``curse of dimensionality,'' several state-of-the-art algorithms \cite{silver2017shoji, silver2017mastering, DBLP:journals/corr/MnihBMGLHSK16} employ function approximation, lookahead for policy improvement, $m$-step rollout for policy evaluation, and gradient descent to compute the function approximation, see Section \ref{section2} for a definition of these terms.
Our goal in this paper is to understand the role of multi-step lookahead for policy improvement (i.e., applying the Bellman operator multiple times) and $m$-step rollout (which is a technique to approximately evaluate a policy by rolling out the dynamic programming tree for a certain number of steps $m$) on the accuracy of approximate policy iteration techniques. The algorithms we study in this paper are closely related to least-squares policy iteration (LSPI) \cite{parr} and approximate policy iteration (PI); see \cite{bertsekastsitsiklis, bertsekas2019reinforcement}. In the analysis of approximate PI, it is assumed that the policy evaluation and improvement steps have bounded errors, and using these, an error bound is obtained for the algorithm which repeatedly uses approximate policy evaluation and improvement. LSPI is an algorithm that builds on approximate PI where the policy evaluation step uses a least-squares algorithm to estimate the value function for the entire state space using the value function evaluated at a few states. However, the bounds presented in \cite{parr} are simply a special case of the bounds for generic approximate PI, and do not explicitly take into account the details of the implementation of least-squares-based policy evaluation. When such details are taken into account, it turns out that the roles of the depth of lookahead ($H$) and rollout ($m$) become important, and their impact on the error bounds on the performance of approximate policy iteration has not been characterized in prior work. In this paper, on the other hand, we assume that policies are evaluated at a few states using an $m$-step rollout and, as a result, convergence of the algorithm is not guaranteed in general. Additionally, we show that the effect of function approximation can be mitigated using lookahead in the policy improvement step. The use of a partial rollout in our algorithm also makes our work similar to modified policy iteration \cite{Puterman1978ModifiedPI}, which is also called optimistic policy iteration \cite{bertsekastsitsiklis}. To the best of our knowledge, none of these prior works consider the impact of using gradient descent to implement an approximate version of least-squares policy evaluation within approximate PI. Thus, our algorithm and analysis can be viewed as a detailed look at approximate PI and modified PI when linear function approximation, least-squares policy evaluation, and gradient descent are used to evaluate policies.
Our contributions are as follows:
\begin{itemize}
\item We examine the impact of lookahead and $m$-step rollout on approximate policy iteration with linear function approximation. As is common in practice, we assume that we evaluate an approximate value function only for some states at each iteration. We obtain performance bounds for our algorithm under the assumption that the sum of the lookahead and the number of steps in the $m$-step rollout is sufficiently large. We demonstrate through an extension of a counterexample in \cite{Tsitsiklis94feature-basedmethods} that such a condition is necessary, in general, for convergence with function approximation, unlike the tabular setting in \cite{efroni2019combine}. See Appendix \ref{appendix:counterexAppendix} for our counterexample.
\item For ease of exposition, we first present the case where one solves a least-squares problem at each iteration to obtain the weights associated with the feature vectors in the function approximation of the value function. Our performance bounds in this case strictly generalize the bounds in \cite{parr} and \cite{bertsekas2019reinforcement} for approximate PI.
\item We then consider a more practical and widely-used scheme where several steps of gradient descent are used to update the weights of the value function approximation at each iteration. Obtaining performance bounds for the gradient descent algorithm is more challenging and these bounds can be found in Section \ref{SectionGD}.
\item Our results show that the sufficient conditions on the hyperparameters (such as the amount of lookahead, rollout, gradient descent parameters) of the algorithm required for convergence either do not depend on the size of the state space or depend only logarithmically on the size of the state space. \color{black}
\item From a theoretical perspective, our analysis shows that one can improve the upper bound on the error in approximate policy iteration from $1/(1-\alpha)^2$ (see \cite{parr}, \cite{bertsekas2019reinforcement}) to $1/(1-\alpha^{H})(1-\alpha)$ by using lookahead, where $\alpha$ is the discount factor and $H$ is the amount of lookahead used.
\item In addition to asymptotic performance bounds, we also provide finite-time guarantees for our algorithm. Our finite-time bounds show that our algorithm converges exponentially fast in the case of least-squares as well as the case where a fixed number of gradient descent steps are performed in each iteration of the algorithm.
\end{itemize}
\subsection{Other Related Work}
The recent work in \cite{efroni2019combine} considers a variant of policy iteration that utilizes lookahead and approximate policy evaluation using an $m$-step rollout (see Section \ref{section2} for definitions of these terms). As stated in the motivation in \cite{efroni2019combine}, it is well known that Monte Carlo Tree Search (MCTS) \cite{kocisszepesvari, browne} works well in practice
even though the worst-case complexity can be exponential \cite{shah2020nonasymptotic}; see \cite{munosbook} for some analysis of MCTS in MDPs where the number of states that can be visited from a given state is bounded. Motivated by policy iteration, the algorithm in \cite{efroni2019combine} estimates the value function associated with a policy and aims to improve the policy at each step. Policy improvement is achieved by obtaining the ``greedy'' policy in the case of policy iteration or a lookahead policy in the work of \cite{efroni2019combine}, which involves applying the Bellman operator several times to the current iterate before obtaining the greedy policy. The idea is that the application of the Bellman operator several times gives a more accurate estimate of the optimal value function. Then, similarly to policy iteration, the algorithm in \cite{efroni2019combine} aims to evaluate the new policy. The algorithm in \cite{efroni2019combine} uses an $m$-step rollout to compute the value function associated with a policy, i.e., it applies the Bellman operator associated with the policy $m$ times.
The work of \cite{efroni2019combine} establishes that a lookahead can significantly improve the rate of convergence if one uses the value function computed using lookahead in the approximate policy evaluation step. However, their paper does not study the use of function approximation which is critical to handling large state spaces, nor does it quantify the effects of varying $m$ in the convergence of their algorithm.
Now, we compare our work to other papers in the literature. The role of lookahead and rollout in improving the performance of RL algorithms has also been studied in a large number of papers including \cite{shahxie, moerland2020framework, efroni2020online, tomar2020multistep, efroni2018multiplestep, springenberg2020local, 9407870}. The works of \cite{baxter, veness, lanctot2014monte} explore the role of tree search in RL algorithms. However, to the best of our knowledge, the amount of lookahead and rollout needed as a function of the feature vectors has not been quantified in prior works.
The works of \cite{Bertsekas2011ApproximatePI} and \cite{bertsekas2019reinforcement} also study a variant of policy iteration wherein a greedy policy is evaluated approximately using feature vectors at each iteration. These papers also provide rates of convergence as well as a bound on the approximation error. However, our main goal is to understand the relations between function approximation and lookahead/rollout which are not considered in these other works.
\section{Preliminaries} \label{section2}
We consider a Markov Decision Process (MDP), which is defined to be a 5-tuple $(\scriptS, \scriptA, P, r, \alpha)$. The finite set of states of the MDP is $\scriptS$, and $\scriptA$ is the finite set of actions associated with the MDP. Let $P_{ij}(a)$ be the probability of transitioning from state $i$ to state $j$ when taking action $a \in \scriptA$. We denote by $s_k$ the state of the MDP and by $a_k$ the corresponding action at time $k$. We associate with state $s_k$ and action $a_k$ a non-deterministic reward $r(s_k, a_k) \in [0, 1]$ for all $s_k \in \scriptS, a_k \in \scriptA;$ in particular, the rewards are uniformly bounded. Our objective is to maximize the cumulative discounted reward with discount factor $\alpha \in (0, 1).$
Towards this end, we seek to find a deterministic policy $\mu$ which associates with each state $s\in \scriptS$ an action $\mu(s) \in \scriptA$. For every policy $\mu$ and every state $s \in \scriptS$ we define $J^{\mu}(s)$ as follows:
\begin{align*}
J^{\mu}(s) := E[\sum_{k=0}^\infty \alpha^k r(s_k, \mu(s_k))|s_0=s].
\end{align*}
We define the optimal reward-to-go $J^*$ as
$J^*(s) := \underset{\mu}\max J^\mu(s).$ The objective is to find a policy $\mu$ that maximizes $J^\mu(s)$ for all $s \in \scriptS$. Towards the objective, we associate with each policy $\mu$ a function $T_\mu: \mathbb{R}^{|\scriptS|} \to \mathbb{R}^{|\scriptS|}$ where for $J \in \mathbb{R}^{|\scriptS|},$ the $s$th component of $T_{\mu}J$ is
\begin{align*}
(T_\mu J)(s) = r(s, \mu(s)) + \alpha \sum_{j=1}^{|\scriptS|} P_{sj}(\mu(s)) J(j),
\end{align*} for all $s \in \scriptS$. If the function $T_{\mu}$ is applied $m$ times to vector $J \in \mathbb{R}^{|\scriptS|},$ then we say that we have performed an $m$-step rollout of the policy $\mu$, and the result $T^m_\mu J$ of the rollout is called the return.
Similarly, we define the Bellman operator $T: \mathbb{R}^{|\scriptS|} \to \mathbb{R}^{|\scriptS|}$ with the $s$th component of $TJ$ being
\begin{align}
(TJ)(s) = \underset{a \in \scriptA}\max \Bigg \{ r(s, a) + \alpha \sum_{j=1}^{|\scriptS|} P_{sj}(a)J(j) \Bigg \}. \label{T}
\end{align}
The policy corresponding to the $T$ operator is defined as the \textit{greedy} policy. If operator $T$ is applied $H$ times to vector $J \in \mathbb{R}^{|\scriptS|},$ we call the result, $T^H J,$ the $H$-step ``lookahead'' corresponding to $J$. The greedy policy corresponding to $T^H J$ is called the $H$-step lookahead policy, or the lookahead policy, when $H$ is understood. More precisely, given an estimate $J$ of the value function, the lookahead policy is the policy $\mu$ such that $T_\mu(T^{H-1} J)=T(T^{H-1} J).$
It is well known that each time the Bellman operator is applied to a vector $J$ to obtain $TJ,$ the following holds:
\begin{align*}
\norm{TJ-J^*}_\infty\leq \alpha\norm{J-J^*}_\infty.
\end{align*} Thus, applying $T$ to obtain $TJ$ gives a better estimate of the value function than $J.$
The Bellman equations state that the vector $J^\mu$ is the unique solution to the linear equation
\begin{align}
J^\mu = T_\mu J^\mu. \label{bellman}
\end{align}
Additionally, we have that $J^*$ is a solution to
\begin{align*}
J^* = TJ^*.
\end{align*}
Note that every greedy policy w.r.t. $J^*$ is optimal and vice versa \cite{bertsekastsitsiklis}.
We will now state several useful properties of the operators $T$ and $T_\mu$; see \cite{bertsekastsitsiklis} for more on these properties. Consider the vector $e \in \mathbb{R}^{|\scriptS|}$ with $e(i) = 1$ for all $i \in \{1, 2, \ldots, |\scriptS|\}.$ We have:
\begin{equation}
T(J + ce) = TJ + \alpha ce, \quad T_\mu(J + ce) = T_\mu J + \alpha ce. \label{eq:usefulproperties}
\end{equation}
Operators $T$ and $T_\mu$ are also monotone:
\begin{align}
J \leq J' \implies TJ \leq TJ', \quad T_\mu J \leq T_\mu J'. \label{monotonicityproperty}
\end{align}
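To make these definitions concrete, the following is a minimal sketch, in Python/NumPy, of $T_\mu$, $T$, the $m$-step rollout, and the $H$-step lookahead policy for a tabular MDP. The array layout (a transition tensor \texttt{P} of shape $|\scriptS| \times |\scriptA| \times |\scriptS|$ and an expected-reward matrix \texttt{r} of shape $|\scriptS| \times |\scriptA|$) is an illustrative choice of ours, not part of the formal development:
\begin{verbatim}
import numpy as np

def T_mu(J, mu, P, r, alpha):
    # One application of the policy operator T_mu to a value vector J.
    S = len(J)
    return np.array([r[s, mu[s]] + alpha * P[s, mu[s]] @ J for s in range(S)])

def T(J, P, r, alpha):
    # One application of the Bellman operator T: maximize over actions.
    return np.max(r + alpha * P @ J, axis=1)

def m_step_rollout(J, mu, P, r, alpha, m):
    # m-step rollout of policy mu; the result T_mu^m J is the return.
    for _ in range(m):
        J = T_mu(J, mu, P, r, alpha)
    return J

def lookahead_policy(J, P, r, alpha, H):
    # Greedy policy w.r.t. T^{H-1} J, i.e., the H-step lookahead policy.
    for _ in range(H - 1):
        J = T(J, P, r, alpha)
    return np.argmax(r + alpha * P @ J, axis=1)
\end{verbatim}
Here \texttt{P @ J} contracts the last axis, so \texttt{r + alpha * P @ J} is the $|\scriptS| \times |\scriptA|$ matrix of one-step values that is maximized over actions in \eqref{T}.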
\section{Least-Squares Function Approximation Algorithm}
\begin{algorithm}[tb]
\caption{Least-Squares Function Approximation Algorithm}
\label{alg:LSalg}
\textbf{Input}: $J_0, m,$ $H,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d$ and subsets $\scriptD_k \subseteq \scriptS, k = 0, 1, \ldots.$ Here $\scriptD_k$ is the set of states at which we evaluate the current policy at iteration $k.$
\begin{algorithmic}[1]
\STATE Let $k=0$.
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps_{LA}$.\\\label{step 2 alg}
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\ \label{step 3 alg}
\STATE Choose $\theta_{k+1}$ to solve
\begin{align}
\min_\theta \sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2, \label{step 4 alg}
\end{align} \\
where $\Phi$ is a matrix whose rows are the feature vectors.
\STATE $J_{k+1} = \Phi \theta_{k+1}.$
\STATE Set $k \leftarrow k+1.$ Go to 2.
\end{algorithmic}
\end{algorithm}
Our algorithm is described in Algorithm \ref{alg:LSalg}. We now explain our algorithm and the associated notation in detail. Due to the use of function approximation, our algorithm is an approximation to policy iteration with lookahead. At each iteration index, say, $k$, we have an estimate of the value function, which we denote by $J_k$. To obtain $J_{k+1}$, we perform a lookahead to improve the value function estimate at a certain number of states (denoted by $\scriptD_k$) which can vary with each iteration. For example, $\scriptD_k$ could be chosen as the states visited when performing a tree search to approximate the lookahead process. During the lookahead process, we note that we will also obtain an $H$-step lookahead policy, which we denote by $\mu_{k+1}$. As noted in the Introduction, the computation of $T^{H-1}(J_k)(i)$ for $i \in \scriptD_k$ in Step \ref{step 3 alg} of Algorithm \ref{alg:LSalg} may be computationally infeasible; however, as noted in \cite{efroni2019combine}, techniques such as Monte Carlo tree search (MCTS) are employed in practice to approximately estimate $T^{H-1}(J_k)(i).$ In this paper, we model the fact that lookahead cannot be performed exactly due to the associated computational complexity by allowing an error in the lookahead process which we denote by $\eps_{LA}$ in Step~\ref{step 2 alg} of the algorithm.
We obtain estimates of $J^{\mu_{k+1}}(i)$ for $i \in \scriptD_k$, which we call $\hat{J}^{\mu_{k+1}}(i)$. To obtain an estimate of $J^{\mu_{k+1}}(i)$, we perform an $m$-step rollout with policy $\mu_{k+1}$, and obtain a noisy version of $T^m_{\mu_{k+1}}T^{H-1}J_k(i)$ for $i \in \scriptD_k.$ We also model the approximation error in the rollout by adding noise (denoted by $w_{k+1}(i)$ in Step~\ref{step 3 alg} of the algorithm) to the return (the result of the rollout; see Section \ref{section2}) computed at the end of this step. In order to estimate the value function for states not in $\scriptD_k$, we associate with each state $i \in \scriptS$ a feature vector $\phi(i)\in \mathbb{R}^d$ where typically $d \ll |\scriptS|$. The matrix whose rows are the feature vectors is denoted by $\Phi$. We use those estimates to find the best fitting $\theta \in \mathbb{R}^d$, i.e.,
\begin{align*}
\min_\theta \sum_{i \in \scriptD_k} \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2.
\end{align*}
The solution to the above minimization problem is denoted by $\theta_{k+1}$. The algorithm then uses $\theta_{k+1}$ to obtain $J_{k+1} = \Phi \theta_{k+1}$. The process then repeats. Note that to compute $\hat{J}^{\mu_{k+1}}(i),$ we obtain noisy estimates of $T_{\mu_{k+1}}^m T^{H-1} J_k (i)$ for $i\in \scriptD_k.$ Another alternative is to instead obtain noisy estimates of $T_{\mu_{k+1}}^m J_k (i)$ for $i\in \scriptD_k.$ It was shown in \cite{efroni2019combine} that the former option is preferable because it has a certain contraction property. Thus, we have chosen to use this computation in our algorithm as well. However, it can be shown that the alternative also has bounded error which becomes small if $m$ is chosen to be sufficiently large; see the longer version of this work \cite{annaor}.
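For concreteness, the following is a minimal Python sketch (ours) of one iteration of Algorithm \ref{alg:LSalg}, reusing \texttt{T}, \texttt{m\_step\_rollout}, and \texttt{lookahead\_policy} from the sketch in Section \ref{section2}; it computes the lookahead exactly (so $\eps_{LA}=0$) and omits the evaluation noise $w_{k+1}$, both simplifications relative to the algorithm as stated:
\begin{verbatim}
import numpy as np

def least_squares_api_step(J_k, Phi, D_k, P, r, alpha, m, H):
    # Step 2: H-step lookahead policy mu_{k+1} (computed exactly here).
    mu = lookahead_policy(J_k, P, r, alpha, H)
    # Step 3: m-step rollout of mu applied to T^{H-1} J_k, sampled on D_k.
    J_base = J_k
    for _ in range(H - 1):
        J_base = T(J_base, P, r, alpha)
    J_hat = m_step_rollout(J_base, mu, P, r, alpha, m)[D_k]
    # Step 4: least-squares fit of the weights theta_{k+1} on the samples.
    theta, *_ = np.linalg.lstsq(Phi[D_k], J_hat, rcond=None)
    # Step 5: J_{k+1} = Phi theta_{k+1}, defined over the whole state space.
    return Phi @ theta, mu
\end{verbatim}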
\begin{remark}
We note that $\mu_{k+1}(i)$ in Step \ref{step 2 alg} of Algorithm \ref{alg:LSalg} does not have to be computed for all states $i\in \scriptS.$ The actions $\mu_{k+1}(i)$ have to be computed only for those $i\in\scriptS$ that are encountered in the rollout step of the algorithm (Step \ref{step 3 alg}).
\end{remark}
To analyze Algorithm \ref{alg:LSalg}, we make the following assumption which states that we explore a sufficient number of states during the policy evaluation phase at each iteration.
\begin{assumption}\label{assume 1 or}
For each $k \geq 0,$ $\text{rank}\,\{ \phi(i)\}_{i \in \scriptD_k} = d$.
\end{assumption}
We assume that the noise $w_k$ is bounded.
\begin{assumption}
For some $\eps_{PE} >0,$ the noise in policy evaluation satisfies $\norm{w_k}_\infty \leq \eps_{PE}$ for all $k$. \label{assume 2 or}
\end{assumption}
We also assume that the rewards are bounded.
\begin{assumption}\label{assume 3 or}
$r(s,u) \in[0,1]$ $\forall s \in \scriptS,u\in\scriptA.$
\end{assumption}
Using Assumption~\ref{assume 1 or}, $J_{k+1}$ can be written as
\begin{align}
J_{k+1} &= \Phi \theta_{k+1} =\underbrace{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_{=: \scriptM_{k+1}} \hat{J}^{\mu_{k+1}},\label{defMk}\end{align}
where $\Phi_{\scriptD_{k}}$ is a matrix whose rows are the feature vectors of the states in $\scriptD_{k}$ and $\scriptP_k$ is a matrix of zeros and ones such that $\scriptP_k\hat{J}^{\mu_{k+1}}$ is a vector whose elements are a subset of the elements of $\hat{J}^{\mu_{k+1}}$ corresponding to $\scriptD_k$. Note that $\hat{J}^{\mu_{k+1}}(i)$ for $i\notin\scriptD_k$ does not affect the algorithm, so we can define $\hat{J}^{\mu_{k+1}}(i)=T_{\mu_{k+1}}^m T^{H-1} J_k(i)$ for $i\notin\scriptD_k.$
Written concisely, our algorithm is as follows:
\begin{equation}
J_{k+1} = \scriptM_{k+1} (T_{\mu_{k+1}}^m T^{H-1} J_k+w_{k+1}), \label{eq:iterateAPI}
\end{equation}
where $\mu_{k+1}$ is defined in Step 2 of the algorithm.
Since $w_{k+1}(i)$ for $i\notin\scriptD_k$ does not affect the algorithm, we define $w_{k+1}(i)=0$ for $i\notin\scriptD_k.$
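As an illustration, the operator $\scriptM_{k+1}$ in \eqref{defMk} and the constant $\delta_{FV}$ appearing in Theorem \ref{mainAPI} below can be computed directly from the feature matrix; the following sketch (ours) does so for a single sampled set $\scriptD_k$, with the supremum over $k$ omitted:
\begin{verbatim}
import numpy as np

def M_and_delta_FV(Phi, D_k):
    # Build M = Phi (Phi_D^T Phi_D)^{-1} Phi_D^T P_k for one iteration.
    S = Phi.shape[0]
    Phi_D = Phi[D_k]                          # feature rows on D_k
    P_k = np.zeros((len(D_k), S))             # 0/1 selection matrix
    P_k[np.arange(len(D_k)), D_k] = 1.0
    M = Phi @ np.linalg.inv(Phi_D.T @ Phi_D) @ Phi_D.T @ P_k
    delta_FV = np.abs(M).sum(axis=1).max()    # infinity norm of M
    return M, delta_FV
\end{verbatim}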
Now we will state our theorem, which characterizes the roles of lookahead ($H$) and rollout ($m$)
in the convergence of approximate policy iteration with function approximation.
\begin{theorem}\label{mainAPI}
Suppose that $m$ and $H$ satisfy $m + H -1>\log (\delta_{FV})/\log (1/\alpha),$ where
$$\delta_{FV} := \sup_k \norm{\scriptM_k}_\infty = \sup_k\norm{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_\infty.$$
Then, under Assumptions \ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \underbrace{\frac{\alpha^{k(H)} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}k\max(\alpha^{H},\beta)^{k-1}}_{ \text{ finite-time component }} +\underbrace{\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}}_{ \text{ asymptotic component }},
\label{eq:mainAPI bound}
\end{align}
where $$\tau:= \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE},$$
$$\beta:=\alpha^{m+H-1} \delta_{FV},$$
and
$$
\delta_{app} := \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty.
$$
\end{theorem}
The proof of Theorem \ref{mainAPI} can be found in Appendix \ref{appendix:thm1}. We now make several comments about the implications of Theorem \ref{mainAPI}:
\begin{enumerate}
\item In conjunction with the counterexample in Appendix \ref{appendix:counterexAppendix}, Theorem \ref{mainAPI} shows that while $\norm{J^{\mu_k}-J^*}_\infty$ depends on the function approximation error ($\delta_{app}$) and the feature vectors ($\delta_{FV}$), the effect of these terms diminishes exponentially as $H$ increases, with the exception of the tree search error ($\eps_{LA}$).
\item It is useful to compare our asymptotic bound with the asymptotic bound for approximate PI in \cite{bertsekas2019reinforcement}. There, it is assumed that $m=\infty$, $\eps_{PE}=0$, and $H=1,$ in which case our bound is identical to the one in \cite{bertsekas2019reinforcement}. However, when $H>1,$ our asymptotic error is proportional to $1/[(1-\alpha)(1-\alpha^{H})],$ which is much better than the $1/(1-\alpha)^2$ bound for approximate policy iteration \cite{bertsekas2019reinforcement}. When $\alpha\rightarrow 1,$ the expected discounted reward is of the order of $1/(1-\alpha);$ thus, the $1/(1-\alpha)^2$ bound on the error is generally considered to be very loose. Our result shows that the use of lookahead significantly improves this error bound. Additionally, our bound is able to capture situations where a full rollout (i.e., $m=\infty$) is impossible to perform.
\end{enumerate}
The proof of Theorem \ref{mainAPI} is closely related to the proofs of Theorems \ref{theorem 2 or}-\ref{theorem 3 or}. The proofs of Theorems~\ref{theorem 2 or}-\ref{theorem 3 or} are presented in the next section while we defer the proof of Theorem \ref{mainAPI} to Appendix \ref{appendix:thm1}. We note that the above result is fundamentally different from the conclusion in Theorem 4 in \cite{efroni2019combine} where one requires a condition on $m$ and $H$ for convergence when one uses $J_k$ instead of using $T^{H-1}J_k$ in Step 2 of the algorithm. Here, we have shown that even when one uses $T^{H-1}J_k,$ one may need large $m+H$ for convergence due to the use of function approximation.
We can additionally characterize the approximation error of our iterates, $J_k$, by computing bounds on the asymptotic error $\limsup_{k \to \infty} \norm{J_k - J^*}_\infty.$ These bounds, along with their derivations and the corresponding finite-time bounds, can be found in the longer version of this work \cite{annaor}.
It is important to note that the upper bounds on $\norm{J^{\mu_k}-J^*}_\infty$ and $\norm{J_k - J^*}_\infty$ illustrate that $J^{\mu_k}$ approximates $J^*$ much better than $J_k$ does. Thus, algorithms need not wait for the value function estimates to converge before the corresponding policies reach near optimality.
In \cite{bertsekas2021lessons}, it is noted that, in reinforcement learning to play computer games or board games, it is not uncommon during training to get a relatively crude estimate of the value function, which is improved by lookahead and $m$-step return during actual game play. Our analysis would also apply to this situation -- we have not explicitly differentiated between training and game play in our analysis.
Theorem \ref{mainAPI} can be used to make the following observation: how close $J^{\mu_k}$ is to $J^*$ depends on four factors -- the representation power of the feature vectors and the feature vectors themselves ($\delta_{app}, \delta_{FV}$), the amount of lookahead ($H$), the extent of the rollout ($m$), and the approximation in the policy determination and policy evaluation steps ($\eps_{LA}$ and $\eps_{PE}$). Further, it is easy to see that lookahead and rollout help mitigate the effect of the feature vectors' limited ability to represent the value functions.
\section{Gradient Descent Algorithm} \label{SectionGD}
Solving the least-squares problem in Algorithm~\ref{alg:LSalg} involves a matrix inversion, which can be computationally difficult. So we propose an alternative algorithm which performs $\eta_k$ steps of gradient descent with stepsize $\gamma$ at each iteration $k$, where the gradient refers to the gradient of the least-squares objective in (\ref{step 4 alg}).
The gradient descent-based algorithm is presented in Algorithm~\ref{alg:GDalg}.
\begin{algorithm}[tb]
\caption{Gradient Descent Algorithm}
\label{alg:GDalg}
\textbf{Input}: $\theta_0, m,$ $H,$ feature vectors $\{ \phi(i) \}_{i \in \scriptS}, \phi(i) \in \mathbb{R}^d,$ and $\scriptD_k,$ which is the set of states for which we evaluate the current policy at iteration $k.$
\begin{algorithmic}[1]
\STATE $k=0, J_0 = \Phi \theta_0$. \\
\STATE Let $\mu_{k+1}$ be such that $\norm{T^H J_k - T_{\mu_{k+1}} T^{H-1} J_k}_\infty \leq \eps_{LA}$. \\
\STATE Compute $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m T^{H-1} J_k(i)+w_{k+1}(i)$ for $i \in \scriptD_k.$ \\
\STATE \label{step 4 unbiased} $\theta_{k+1, 0} := \theta_k.$ For $\ell = 1, 2, \ldots, \eta_{k+1},$ iteratively compute the following:
\begin{align}
\theta_{k+1, \ell} &= \theta_{k+1,\ell-1} - \gamma \nabla_\theta c(\theta;\hat{J}^{\mu_{k+1}})|_{\theta_{k+1,\ell-1}}, \label{eq:iterthetaGD}
\end{align} where
\begin{align*}
c(\theta;\hat{J}^{\mu_{k+1}}) := \frac{1}{2}\sum_{i \in \scriptD } \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2,
\end{align*} \\
and $\Phi$ is a matrix whose rows are the feature vectors.\\
\STATE Define
\begin{align*}
\theta_{k+1} &= \theta_{k+1,\eta_{k+1}},
\end{align*} and set $$J_{k+1} = \Phi \theta_{k+1}.$$
\STATE Set $k \leftarrow k+1.$ Go to 2.
\end{algorithmic}
\end{algorithm}
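As an illustration, Step \ref{step 4 unbiased} of Algorithm \ref{alg:GDalg} amounts to the following sketch (ours), where \texttt{Phi\_D} denotes the rows of $\Phi$ corresponding to $\scriptD_k$ and \texttt{J\_hat} the vector of rollout targets $\scriptP_k(T^m_{\mu_{k+1}}T^{H-1}J_k + w_{k+1})$:
\begin{verbatim}
import numpy as np

def gd_policy_evaluation(theta, Phi_D, J_hat, gamma, eta):
    # eta gradient steps on c(theta) = (1/2) ||Phi_D theta - J_hat||^2,
    # warm-started from the previous iterate theta_k.
    for _ in range(eta):
        grad = Phi_D.T @ (Phi_D @ theta - J_hat)  # gradient of the objective
        theta = theta - gamma * grad
    return theta
\end{verbatim}
Each pass multiplies the error $\theta - \tilde{\theta}^{\mu_{k+1}}$ by $I - \gamma \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}$, which is where the contraction factor $\alpha_{GD, \gamma}$ in Proposition \ref{prop1} below comes from.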
In order to present our main result for the gradient descent version of our algorithm, we define $\tilde{\theta}^{\mu_k}$ for any policy $\mu_k$ which will be used in the proof of the theorem:
\begin{align}
\tilde{\theta}^{\mu_k} \nonumber&:= \arg\min_\theta \frac{1}{2}\norm{\Phi_{\scriptD_k} \theta - \scriptP_{k}(T_{\mu_{k}}^m T^{H-1} J_{k-1}+w_k)}_2^2.
\end{align}
Note that
\begin{align}
\Phi\tilde{\theta}^{\mu_k}= \scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k), \label{eq: theta tilde muk or}
\end{align}
where $\scriptM_k$ is defined in \eqref{defMk}.
Thus, $\tilde{\theta}^{\mu_k}$ represents the function approximation of the estimate of $J^{\mu_k}$ obtained from the $m$-step return.
We now present two propositions which will be used in the subsequent theorems to obtain bounds
on the convergence of approximate policy iteration with function approximation when gradient descent is employed to approximately minimize the least-squares objective in each iteration.
\begin{proposition}\label{prop1}
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H} \norm{J^{\mu_k}-J^*}_\infty + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha},\label{eq: jmu-jk or}
\end{align}
where
\begin{align}
\norm{J^{\mu_{k}}-J_k}_\infty \leq
a_k \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +b_k,\label{eq: ineq 2 or}
\end{align}
$$a_k := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_k := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}},
$$
where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi$ and $\alpha_{GD, \gamma}:=\sup_k \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})|,$ where $\lambda_i$ denotes the $i$-th largest eigenvalue of a matrix.
\end{proposition}
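The constants $\sigma_{\min, \Phi}$ and $\alpha_{GD, \gamma}$ appearing in Proposition~\ref{prop1} are directly computable from the feature matrix and the sampled state sets. The following sketch (with toy data of our choosing) computes them and checks that the chosen stepsize yields $\alpha_{GD, \gamma} < 1$:
\begin{verbatim}
import numpy as np

def gd_constants(Phi, D_sets, gamma):
    """sigma_{min,Phi} and alpha_{GD,gamma} over the given D_k's."""
    sigma_min = np.linalg.svd(Phi, compute_uv=False).min()
    alpha_gd = max(
        np.abs(1.0 - gamma * np.linalg.eigvalsh(Phi[D].T @ Phi[D])).max()
        for D in D_sets
    )
    return sigma_min, alpha_gd

rng = np.random.default_rng(1)
Phi = rng.standard_normal((10, 3))
D_sets = [np.array([0, 1, 4, 6]), np.array([2, 3, 5, 9])]
sigma_min, alpha_gd = gd_constants(Phi, D_sets, gamma=0.05)
print(sigma_min, alpha_gd, alpha_gd < 1)
\end{verbatim}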
The proof of Proposition \ref{prop1} is presented later in this section. Using Proposition \ref{prop1} and iterating in the special case where $a_k$ is constant or upper bounded by a constant $a$ with $0<a<1,$ and $b_k$ is constant or upper bounded by a constant $b>0,$ we get the following:
\begin{proposition}\label{prop2}
When $a_k \leq a$ for all $k$ where $0<a<1,$ and $b_k \leq b,$ for all $k$, where $0<b$, the following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty\nonumber &\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}k \max(\alpha^{H},a)^{k-1}\norm{J^{\mu_0}-J_0}_\infty +\frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}.
\end{align}
\end{proposition}
Using Proposition \ref{prop2}, we can get Theorems \ref{theorem 2 or} and \ref{theorem 3 or} which give us finite-time and asymptotic bounds for the cases where $\eta_k$ is constant and $\eta_k$ is increasing to infinity, respectively:
\begin{theorem}\label{theorem 2 or}
Suppose that $\gamma, m,$ and $H$ satisfy
\begin{align}
\gamma < \frac{1}{d\sup_k \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2}, \label{eq: gamma assumption or}
\end{align}
and
\begin{align}
m + H >1+\log (2\delta_{FV})/\log (1/\alpha).\label{eq: m H assumption or}
\end{align}
Furthermore, consider the case where $\eta_k$ is a constant, which we call $\eta,$ where $$\eta>\log (\frac {3\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}})/\log (1/\alpha_{GD, \gamma}).$$
Then, under Assumptions~\ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \underbrace{\frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{(1-\alpha)^2}k \max(\alpha^{H},a_\eta)^{k-1}}_{ \text{ finite-time component }} + \underbrace{\frac{2\alpha^H \frac{b_\eta}{1-a_\eta}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}}_{ \text{ asymptotic component }}, \label{eq: case a bound}
\end{align}
where
$$a_\eta := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_\eta := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}.
$$
Taking limits on both sides as $k \to \infty,$ we have the following asymptotic bound:
\begin{align*}
\limsup_{k\to \infty} \norm{J^{\mu_k}-J^*}_\infty \leq \frac{2\alpha^H \frac{b_\eta}{1-a_\eta}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}.
\end{align*}
\end{theorem}
\begin{theorem}\label{theorem 3 or}
Consider $\eta_k$ where $\eta_k$ is increasing and $\eta_k \to \infty.$ Suppose that
$\gamma, m,$ and $H$ satisfy \eqref{eq: gamma assumption or} and \eqref{eq: m H assumption or}. Then, under Assumptions~\ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align}
\limsup_{k\to \infty} \norm{J^{\mu_k}-J^*}_\infty \leq \frac{2\alpha^H \frac{b^*}{1-a^*}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}, \label{eq: recover ls}
\end{align}
where
$$a^*:= \alpha^{m+H-1} \delta_{FV}$$ and
$$
b^* :=\frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}.
$$
\end{theorem}
Before we present the proofs of Propositions \ref{prop1}-\ref{prop2} and Theorems \ref{theorem 2 or}-\ref{theorem 3 or}, we make the following remarks:
In the case where $\eta_k$ is constant, i.e., $\eta_k = \eta$: when $\gamma$ is sufficiently small and $\eta,$ $m,$ and $H$ are sufficiently large, the error converges at an exponential rate to the asymptotic error.
When we increase $\eta$, our asymptotic error becomes smaller until it reaches the asymptotic error of the least-squares algorithm, i.e., when $\eta \rightarrow \infty$, we recover the asymptotic error of Algorithm \ref{alg:LSalg}.
If it is difficult to ascertain whether $\eta$ is sufficiently large, one can consider an increasing sequence $\eta_k$ such that $\eta_k \to \infty.$ For such a sequence, our asymptotic error term is the same as that of the least-squares problem.
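To illustrate this tradeoff, the sketch below evaluates the asymptotic error term of Theorem~\ref{theorem 2 or} as a function of $\eta$; all constants are placeholder values of ours and do not come from any experiment:
\begin{verbatim}
alpha, m, H = 0.9, 5, 5
delta_FV, delta_app, eps_PE, eps_LA = 1.2, 0.1, 0.01, 0.01
C = 4.0          # stands for sqrt(|S|)*||Phi||_inf / sigma_{min,Phi}
alpha_gd = 0.5   # stands for alpha_{GD,gamma}

def asymptotic_error(eta):
    g = C * alpha_gd**eta
    a = alpha**(m + H - 1) * delta_FV + g * (alpha**(m + H - 1) * delta_FV + 1)
    tau = ((alpha**m + alpha**(m + H - 1)) / (1 - alpha) * delta_FV
           + delta_app + delta_FV * eps_PE)
    b = (1 + g) * tau + g / (1 - alpha)
    return (2 * alpha**H * b / (1 - a) + eps_LA) / ((1 - alpha) * (1 - alpha**H))

for eta in (5, 10, 20, 40):
    print(eta, asymptotic_error(eta))  # decreases toward the least-squares error
\end{verbatim}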
\proof{Proof of Proposition \ref{prop1}}
We break the proof of the proposition into three steps.
\noindent\textit{Step 1:}
In this step, we show that, since $\theta_{k}$ is obtained by taking $\eta_{k}$ gradient descent steps towards $\tilde{\theta}^{\mu_{k}}$ starting from $\theta_{k-1}$, the following holds:
\begin{align*}
\norm{\theta_{k} - \tilde{\theta}^{\mu_{k}}}_2
\leq \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k-1} - \tilde{\theta}^{\mu_{k}}}_2,
\end{align*}
where $\alpha_{GD, \gamma}:=\sup_k \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})|,$ where $\lambda_i$ denotes the $i$-th largest eigenvalue of a matrix.
We note that since $$0 < \lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}) \leq \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_2^2 \leq d \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2 \leq d \sup_k \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2,$$
$\alpha_{GD, \gamma}<1$ when $\gamma < \frac{1}{d\sup_k \norm{\Phi_{\scriptD_k}^\top \Phi_{\scriptD_k}}_\infty^2}$.
\textit{Proof of Step 1:} Recall that the iterates in Equation \eqref{eq:iterthetaGD}, written with the index shifted so that $\theta_k$ is the parameter fitted for policy $\mu_k$, take the following form:
\begin{align*}
\theta_{k,\ell} &= \theta_{k,\ell-1} - \gamma \nabla_\theta c(\theta;\hat{J}^{\mu_{k}})|_{\theta_{k,\ell-1}} =\theta_{k,\ell-1} - \gamma \Big( \Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} \theta_{k,\ell-1} - \Phi_{\scriptD_{k}}^\top \scriptP_{k}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k})\Big).
\end{align*}
Since
\begin{align*}0 &= \nabla_\theta c(\theta;\hat{J}^{\mu_{k}})|_{\tilde{\theta}^{\mu_{k}}}= \Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} \tilde{\theta}^{\mu_{k}} - \Phi_{\scriptD_{k}}^\top \scriptP_{k}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k}),
\end{align*}
we have the following:
\begin{equation*}
\begin{array}{lll}
\theta_{k,\ell} &=& \theta_{k,\ell-1} - \gamma \Big( \Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} \theta_{k,\ell-1} - \Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} \tilde{\theta}^{\mu_{k}} - \Phi_{\scriptD_{k}}^\top \scriptP_{k}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k}) \\&+& \Phi_{\scriptD_{k}}^\top \scriptP_{k}(T_{\mu_{k}}^m T^{H-1}J_{k-1}+w_{k})\Big) \\
&=& \theta_{k,\ell-1} - \gamma \Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}).
\end{array}
\end{equation*}
Subtracting $\tilde{\theta}^{\mu_{k}}$ from both sides gives:
\begin{align*}
\theta_{k,\ell} - \tilde{\theta}^{\mu_{k}} &= \theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}} - \gamma \Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}})\\&= (I - \gamma \Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}}) (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}).
\end{align*}
Thus,
\begin{align*}
\norm{\theta_{k,\ell} - \tilde{\theta}^{\mu_{k}}}_2&= \norm{(I - \gamma \Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}}) (\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}})}_2\\&\leq \norm{I - \gamma \Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}}}_2 \norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2\\
&\leq \max_i |\lambda_i (I - \gamma \Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}})|\norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2\\
&\leq \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}})|\norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2 \\
&\leq \underbrace{\sup_k \max_i |1-\gamma\lambda_i ( \Phi_{\scriptD_k}^\top \Phi_{\scriptD_k})|}_{=: \alpha_{GD, \gamma}}\norm{\theta_{k,\ell-1} - \tilde{\theta}^{\mu_{k}}}_2.
\end{align*}
Iterating over $\ell$ from $1$ to $\eta_k,$ the following holds:
\begin{align*}
\norm{\theta_{k} - \tilde{\theta}^{\mu_{k}}}_2 &=\norm{\theta_{k,\eta_{k}} - \tilde{\theta}^{\mu_{k}}}_2\\
&\leq \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k,0} - \tilde{\theta}^{\mu_{k}}}_2\\
&= \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k-1} - \tilde{\theta}^{\mu_{k}}}_2.
\end{align*}
\textit{Step 2}: Using Step 1 and matrix norm properties, we obtain the following bound on $\norm{J^{\mu_k}-J_k}_\infty:$
\begin{align}
\norm{J^{\mu_{k}}-J_k}_\infty \leq
a_k \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +b_k,\label{eq: first ineq or}
\end{align}
$$a_k := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_k := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}.
$$
\textit{Proof of Step 2:}
Using equivalence and sub-multiplicative properties of matrix norms, we have the following:
\begin{alignat*}{2}
&\frac{1}{\norm{\Phi}_\infty} \norm{\Phi \theta_k-\Phi \tilde{\theta}^{\mu_{k}}}_\infty &&\leq \norm{\theta_k - \tilde{\theta}^{\mu_{k}}}_\infty
\\& &&\leq \norm{\theta_k - \tilde{\theta}^{\mu_{k}}}_2
\\& &&\leq \alpha_{GD, \gamma}^{\eta_{k}} \norm{\theta_{k-1} - \tilde{\theta}^{\mu_{k}}}_2
\\& &&\leq \frac{1}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}} \norm{\Phi\theta_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_2
\\& &&\leq \frac{\sqrt{|S|}}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}} \norm{\Phi\theta_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty\\
&\implies \norm{J_k-\Phi \tilde{\theta}^{\mu_{k}}}_\infty&&\leq \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}} \norm{J_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty ,
\end{alignat*}
where $\sigma_{\min, \Phi}$ is the smallest singular value in the singular value decomposition of $\Phi$ and the last line follows from the fact that $J_k := \Phi \theta_k.$
The above implies the following:
\begin{align}
\norm{J^{\mu_{k}}-J_k}_\infty &\leq \norm{\Phi\tilde{\theta}^{\mu_{k}}-J^{\mu_k}}_\infty +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}\norm{J_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty\nonumber\\
&= \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)-J^{\mu_k}}_\infty +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}\norm{J_{k-1} - \Phi\tilde{\theta}^{\mu_{k}}}_\infty,\label{eq: label 1 or}
\end{align}
where the equality follows from \eqref{eq: theta tilde muk or}.
Now we bound $\norm{J_{k-1}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty$ as follows:
\begin{align}
\norm{J_{k-1}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty
\nonumber&\leq \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \norm{ J^{\mu_{k-1}}-J^{\mu_{k}}}_\infty + \norm{J^{\mu_{k}}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty \\
\nonumber&\leq \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \frac{1}{1-\alpha} + \norm{J^{\mu_{k}}-\Phi\tilde{\theta}^{\mu_{k}}}_\infty
\\
&\leq \norm{J_{k-1} - J^{\mu_{k-1}}}_\infty + \frac{1}{1-\alpha} + \norm{J^{\mu_{k}}-\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)}_\infty, \label{eq: label 2 or}
\end{align}
where the last line follows from \eqref{eq: theta tilde muk or}. We introduce Lemma \ref{mainGDLemma} to upper bound $\norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty$ as follows:
\begin{lemma}
\begin{align*}
\norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty
&\leq \alpha^{m+H-1} \delta_{FV} \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}.
\end{align*} \label{mainGDLemma}
\end{lemma}
The proof of Lemma \ref{mainGDLemma} is in Appendix \ref{appendix:mainGDLemmaProof}.
Putting \eqref{eq: label 1 or}, \eqref{eq: label 2 or}, and Lemma \ref{mainGDLemma} together, we get the following:
\begin{align*}
\norm{J^{\mu_{k}}-J_k}_\infty \leq
a_k \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty +b_k,
\end{align*}
\begin{align*}a_k := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}( \alpha^{m+H-1} \delta_{FV}+1) \end{align*} and
\begin{align*}
b_k := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta_{k}}.
\end{align*}
\noindent\textit{Step 3:}
We will establish the following bound on $-T_{\mu_{k+1}}T^{H}J^{\mu_k}$ using the contraction and monotonicity properties of the Bellman operators and the properties in \eqref{eq:usefulproperties}:
\begin{align*}
-T_{\mu_{k+1}}T^{H}J^{\mu_k}
&\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H}J^{\mu_k} + \eps_{LA} e. \nonumber
\end{align*}
Using properties in \eqref{eq:usefulproperties} and monotonicity, we will repeatedly apply $T_{\mu_{k+1}}$ to both sides and take limits to obtain the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H}J^{\mu_k}
&\geq - \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e.
\end{align*}
\textit{Proof of Step 3:} We begin by noting that
\begin{align}
-T_{\mu_{k+1}}T^{H}J^{\mu_k} &\leq -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k \nonumber+ T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps_{LA} e \nonumber\\
&= \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps_{LA} e \nonumber\\
&\overset{(b)}\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H}J^{\mu_k} + \eps_{LA} e, \nonumber
\end{align}
where $e$ is the vector of all $1$s, the first inequality follows from monotonicity (since $TJ^{\mu} \geq T_{\mu}J^{\mu} = J^{\mu}$ implies $T^{H}J^{\mu_k} \geq T^{H-1}J^{\mu_k}$), and (a) and (b) follow from the contraction property of $T^H$ and $T_{\mu_{k+1}}$.
The rest of the proof is similar to the arguments in \cite{bertsekastsitsiklis}, \cite{bertsekas2019reinforcement}; however, we have to explicitly incorporate the role of lookahead ($H$) in the remaining steps of the proof. Suppose that we apply the $T_{\mu_{k+1}}$ operator $\ell$ times to both sides. Then, due to monotonicity and the fact that $T_\mu(J+ce)=T_\mu(J)+\alpha ce$ for any policy $\mu,$ we have the following:
\begin{align*}
T_{\mu_{k+1}}^\ell T^{H}J^{\mu_k} \leq \alpha^{\ell } (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} )e + T_{\mu_{k+1}}^{\ell+1} T^{H} J^{\mu_k}.
\end{align*}
Using a telescoping sum, we get the following inequality:
\begin{align*}
T_{\mu_{k+1}}^j T^{H} J^{\mu_k} - T^{H}J^{\mu_k}
&\geq - \sum_{\ell = 1}^{j} \alpha^{\ell -1} (2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} )e.
\end{align*}
Taking limits as $j \to \infty,$ we obtain the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H}J^{\mu_k}
&\geq - \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e.
\end{align*}
The rest of the proof is straightforward.
Subtracting $J^*$ from both sides of the previous inequality and using the contraction property of $T$, we get:
\begin{align*}
&\alpha^{H} \norm{J^* - J^{\mu_k}}_\infty e + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e \\&\geq J^* - T^{H}J^{\mu_k} + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha}e
\\& \geq J^* - J^{\mu_{k+1}}\\
&\geq 0,
\end{align*}
which implies that
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H} \norm{J^{\mu_k}-J^*}_\infty + \frac{ 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty + \eps_{LA} }{1-\alpha},\label{eq: ** 4}
\end{align} since $J^* \geq J^{\mu}$ for all policies $\mu$. The above, together with the inequality in \eqref{eq: first ineq or} gives us Proposition \ref{prop1}.
\Halmos
\endproof
We now prove Proposition \ref{prop2} by iterating over $k$ using Proposition \ref{prop1}.
\proof{Proof of Proposition \ref{prop2}}
First, noting our assumptions in Proposition \ref{prop2} that $a_k \leq a$ for all $k$ where $0<a<1,$ and $b_k \leq b$ for all $k$, where $0<b$, we iterate over $k$ using \eqref{eq: ineq 2 or} to obtain a bound on $\norm{J^{\mu_k}-J_k}_\infty$ as follows:
\begin{align}
\norm{J^{\mu_{k}}-J_{k}}_\infty \leq a^k \norm{J_0-J^{\mu_0}}_\infty + b\sum_{j=0}^{k-1} a^{j}. \label{eq: ineq 1 last or}
\end{align}
Now, we iterate over $k$ in \eqref{eq: jmu-jk or} to get the following:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty \nonumber&\leq \alpha^{kH} \norm{J^{\mu_0}-J^*}_\infty + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} \norm{J^{\mu_{\ell}}-J_\ell}_\infty + \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}\\
&\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} \norm{J^{\mu_{\ell}}-J_\ell}_\infty + \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})}.\label{eq: ineq 2 last or}
\end{align}
Combining the inequalities in \eqref{eq: ineq 1 last or} and \eqref{eq: ineq 2 last or} gives us the following:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty &\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} \Big[a^\ell \norm{J_0-J^{\mu_0}}_\infty + b\sum_{j=0}^{\ell-1} a^{j} \Big] \nonumber\\&+ \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} \Big[ a^\ell \norm{J_0-J^{\mu_0}}_\infty + \frac{b}{1-a} \Big] + \frac{\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\norm{J_0 - J^{\mu_0}}_\infty \sum_{\ell = 0}^{k-1}\alpha^{(k-\ell-1)H} a^\ell \nonumber+ \frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&\leq \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}\norm{J_0 - J^{\mu_0}}_\infty \sum_{\ell = 0}^{k-1}\max(\alpha^{H},a)^{k-1} \nonumber+ \frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})} \nonumber\\
&= \frac{\alpha^{kH}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}k \max(\alpha^{H},a)^{k-1}\norm{J^{\mu_0}-J_0}_\infty +\frac{2\alpha^H\frac{b}{1-a}+\eps_{LA}}{(1-\alpha)(1-\alpha^{H})},\nonumber
\end{align}
where the second-to-last inequality uses $\alpha^{(k-\ell-1)H}a^{\ell} \leq \max(\alpha^{H},a)^{k-1},$ which holds since $0<a<1$ and $\alpha^{H}<1.$
\Halmos
\endproof
We now use Proposition \ref{prop2} to prove Theorems \ref{theorem 2 or}-\ref{theorem 3 or}.
\proof{Proof of Theorem \ref{theorem 2 or}}
When $\eta_k=\eta,$ the following holds:
$$a_k = a_\eta := \alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1)$$ and
$$
b_k = b_\eta := (1+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta})( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}) +\frac{\sqrt{|S|}\norm{\Phi}_\infty}{(1-\alpha)\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}.
$$
Note that from our assumptions on $m, \gamma,$ and $H$, $0<a_\eta<1$ and $0< b_\eta$.
To see that $a_\eta < 1$, observe from our assumptions in Theorem \ref{theorem 2 or} that \begin{align}\alpha^{m+H-1} \delta_{FV} < \frac{1}{2}\label{eq: ** or}\end{align} and $$\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1) < \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}\frac{3}{2}< \frac{1}{2}.$$
Putting the above two lines together gives us
\begin{align*}
\alpha^{m+H-1} \delta_{FV}+ \frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}} \alpha_{GD, \gamma}^{\eta}( \alpha^{m+H-1} \delta_{FV}+1)< 1.
\end{align*}
So, we can directly use Proposition \ref{prop2}, together with $\norm{J^{\mu_0}-J_0}_\infty \leq \frac{1}{1-\alpha}$, to obtain \eqref{eq: case a bound}.
\Halmos
\endproof
\proof{Proof of Theorem \ref{theorem 3 or}}
Take any constant $c^*$ with $0 <c^*< 1- \alpha^{m+H-1} \delta_{FV};$ $c^*$ plays the role of a margin of error. Define $k(c^*)$ to be the smallest $k$ such that:
\begin{align} \eta_{k(c^*)} >\log\Big(\frac{1}{c^*}\frac{\sqrt{|S|}\norm{\Phi}_\infty}{\sigma_{\min,\Phi}}( \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}+\frac{1}{1-\alpha})\Big)/\log(1/\alpha_{GD, \gamma}). \label{eq: c* assume or}
\end{align}
We know such a $k(c^*)$ exists since $\eta_k \to \infty$.
We define $a_{c^*}$ and $b_{c^*}$ as follows:
$$a_{c^*} := \alpha^{m+H-1} \delta_{FV} + c^*,$$
$$b_{c^*} := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE} + c^*.$$
It is easy to see from \eqref{eq: c* assume or} and the definitions of $a_k$ and $b_k$ in Proposition \ref{prop1} that $a_k \leq a_{c^*}$ and $b_k \leq b_{c^*}$ when $k > k(c^*),$ since $0<\alpha_{GD, \gamma}<1$ from our assumption on $\gamma$ in Theorem \ref{theorem 3 or}.
From our assumptions on $c^*$, $m, \gamma,$ and $H$, $0<a_{c^*}<1$ and $0< b_{c^*}$, where we use a similar technique as in the proof of Theorem \ref{theorem 2 or} with $\alpha^{m+H-1}\delta_{FV}< 1-c^*$ in place of \eqref{eq: ** or} to show that $a_{c^*}<1$.
So, we can use Proposition \ref{prop2} and begin iterating at $k(c^*)$ to obtain finite time bounds on $\norm{J^{\mu_{k}} - J^*}_\infty$ as follows:
\begin{align}
&\norm{J^{\mu_{k}} - J^*}_\infty
\nonumber \\&\leq \underbrace{\frac{\alpha^{(k-k(c^*))H}}{1-\alpha} + \frac{2\alpha^H}{1-\alpha}(k-k(c^*))\max(a_{c^*}, \alpha^{H})^{k-k(c^*)-1}\norm{J^{\mu_{k(c^*)}}-J_{k(c^*)}}_\infty}_{ \text{ finite-time component }}+ \underbrace{\frac{2\alpha^H \frac{b_{c^*}}{1-a_{c^*}}+\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}}_{ \text{ asymptotic component }}. \label{eq: fin time jmu nk}
\end{align}
Using Proposition \ref{prop1}, we can upper bound $\norm{J^{\mu_{k(c^*)}}-J_{k(c^*)}}_\infty$ as follows:
$$\norm{J^{\mu_{k(c^*)}}-J_{k(c^*)}}_\infty \leq \prod_{i=1}^{k(c^*)} a_{i}\norm{J_0-J^{\mu_0}}_\infty + \sum_{j=1}^{k(c^*)} b_j \prod_{i=j+1}^{k(c^*)} a_i,$$ where $a_k$ and $b_k$ are defined in Proposition \ref{prop1}.
Taking limits on both sides of \eqref{eq: fin time jmu nk}, we get the following:
\begin{align}
\limsup_{k\to \infty} \norm{J^{\mu_{k}} - J^*}_\infty
\nonumber\leq \frac{2\alpha^H \frac{b_{c^*}}{1-a_{c^*}}+\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}.
\end{align}
Since the above holds for all $c^* > 0$, we get Theorem \ref{theorem 3 or}.
\Halmos \endproof
Looking through the steps of our proof, one can also obtain finite-time bounds for the case where $\eta_k$ is increasing. However, the algorithm consists of two loops: one corresponding to policy iteration and the other corresponding to gradient descent within each policy iteration step. It is hard to compare the relative complexities of each step within these loops, so a finite-time analysis does not shed much light on the amount of computation needed to execute the algorithm with an increasing sequence $\eta_k.$ However, it is interesting to note that, for all $k>k(c^*),$ the algorithm converges exponentially fast in the number of policy iteration steps even though the number of gradient descent steps within each policy iteration step is increasing.
\section{Conclusion}
Practical RL algorithms that deal with large state spaces implement some form of approximate policy iteration. In traditional analyses of approximate policy iteration, for example in \cite{bertsekas2019reinforcement}, it is assumed that there is an error in the policy evaluation step and an error in the policy improvement step. In this paper, we seek to understand the role of function approximation in the policy evaluation step and the associated changes that one has to make to the approximate policy iteration algorithm (such as lookahead) to counteract the effect of function approximation. Our main conclusion is that lookahead provides two benefits: (i) it mitigates the effects of errors due to function approximation, rollout, and the choice of specific feature vectors, and (ii) from a theoretical perspective, it improves the upper bound on the error in approximate policy iteration from $1/(1-\alpha)^2$ to $1/((1-\alpha^{H})(1-\alpha)).$
Possible directions for future work include the following:
\begin{itemize}
\item For problems with a terminal state, it would be interesting to consider cases where the value function of a given policy is estimated using a full rollout which provides an unbiased estimate as in \cite{tsitsiklis2002convergence}.
\item In game playing applications, gradient descent is commonly used to estimate the value function, but temporal-difference learning is used in other applications. It would be interesting to extend our results to the case of TD learning-based policy evaluation.
\item While neural networks are not linear function approximators, recent results on the NTK analysis of neural networks suggest that they can be approximated as linear combinations of basis functions \cite{jacot2018neural,du2018gradient,arora2019fine,ji2019polylogarithmic, cao2019generalization}. Thus, to the extent that the NTK approximation is reasonable, our results can potentially shed light on why the combination of the representation capability of neural networks and tree-search methods work well in practice, although further work is necessary to make this connection precise.
\end{itemize}
\begin{APPENDICES}
\section{Proof of Lemma \ref{mainGDLemma}} \label{appendix:mainGDLemmaProof}
\proof{Proof of Lemma \ref{mainGDLemma}}
\begin{align*}
& \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty \\
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
&\leq\norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k}_\infty \norm{w_k}_\infty \\ \allowdisplaybreaks
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE} \\ \allowdisplaybreaks
&= \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1} - \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_k T_{\mu_k}^m T^{H-1} J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_k}_\infty \norm{ T_{\mu_k}^m T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^{\mu_k}}_\infty + \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m\norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^* + J^* - J^{\mu_k}}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m\norm{\scriptM_k}_\infty \norm{T^{H-1} J_{k-1} - J^*}_\infty + \alpha^m\norm{\scriptM_k}_\infty \norm{J^* - J^{\mu_k}}_\infty + \delta_{app}+ \delta_{FV} \eps_{PE} \\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} - J^*}_\infty + \frac{\alpha^m}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} -J^{\mu_{k-1}} + J^{\mu_{k-1}}- J^*}_\infty + \frac{\alpha^m}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^{m+H-1}\norm{\scriptM_k}_\infty \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\norm{\scriptM_k}_\infty +\delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^{m+H-1} \delta_{FV} \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}.
\end{align*} \Halmos
\endproof
\section{Proof of Theorem \ref{mainAPI}}
\label{appendix:thm1}
\begin{remark}
The proof of Theorem \ref{mainAPI} is somewhat simpler than that of Theorems \ref{theorem 2 or}-\ref{theorem 3 or} but uses many of the same ideas.
The proof of Theorem \ref{mainAPI} skips Steps 1 and 2 in the proof of Proposition \ref{prop1} and instead uses Lemma \ref{mainGDLemma} to obtain an analogous result to Step 2. The rest of the proof of Proposition \ref{prop1} also applies to the proof of Theorem \ref{mainAPI}.
\end{remark}
\proof{Proof of Theorem \ref{mainAPI}}
Consider policies $\mu_k$ and $\mu_{k+1},$ where $\mu_0$ can be taken to be any arbitrary policy. We have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H}J^{\mu_k} &\leq -T_{\mu_{k+1}}T^{H-1}J^{\mu_k} - T_{\mu_{k+1}}T^{H-1}J_k \nonumber+ T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\overset{(a)}\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T_{\mu_{k+1}}T^{H-1}J_k \nonumber\\
&\leq \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k + \eps_{LA} e \nonumber\\
&= \alpha^{H} \norm{J^{\mu_k}-J_k}_\infty e - T^H J_k - T^H J^{\mu_k} + T^H J^{\mu_k} + \eps_{LA} e \nonumber\\
&\overset{(b)}\leq 2\alpha^H \norm{J^{\mu_k}-J_k}_\infty e - T^{H}J^{\mu_k} + \eps_{LA} e, \label{eq: *}
\end{align}
where $e$ is the vector of all $1$s, the first inequality follows from monotonicity (since $T^{H}J^{\mu_k} \geq T^{H-1}J^{\mu_k}$), and (a) and (b) follow from the contraction property of $T^H$ and $T_{\mu_{k+1}}$.
Since $\norm{J^{\mu_k}-J_k}_\infty= \norm{\scriptM_k (T_{\mu_k}^m T^{H-1} J_{k-1}+w_k)- J^{\mu_k}}_\infty$ from \eqref{eq:iterateAPI}, we can further bound $\norm{J^{\mu_k}-J_k}_\infty$ using Lemma \ref{mainGDLemma}. We now rewrite our recursion from Lemma \ref{mainGDLemma}:
Suppose that we define $\beta \in (0, 1)$ as follows:
\begin{align}
\beta := \alpha^{m+H-1} \delta_{FV}, \label{eq:defbeta}
\end{align} where we note from our assumption in Theorem \ref{mainAPI} that $\alpha^{m+H-1} \delta_{FV}<1.$ Furthermore, we denote $\tau$ as follows:
\begin{align}
\tau := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}. \label{eq:defdelta3}
\end{align}
Then, our bound from Lemma \ref{mainGDLemma} can be rewritten as the following:
\begin{align}
\norm{J^{\mu_k}-J_k}_\infty
&\leq \beta \norm{J_{k-1} -J^{\mu_{k-1}}}_\infty + \tau.\label{eq: iteration or}
\end{align}
Iterating over $k$, we get that for any $k$,
\begin{align}
\norm{J^{\mu_k}-J_k}_\infty \leq \underbrace{\beta^k \norm{J^{\mu_0}-J_0}_\infty + \sum_{i=0}^{k-1} \beta^i \tau}_{=: f_k}. \label{eq: pound2}
\end{align}
Putting (\ref{eq: *}) and (\ref{eq: pound2}) together, we have the following:
\begin{align}
-T_{\mu_{k+1}}T^{H}J^{\mu_k}
&\leq 2\alpha^H f_k e - T^{H}J^{\mu_k} + \eps_{LA} e.
\end{align}
The rest of the proof is similar to the arguments in \cite{bertsekastsitsiklis}, \cite{bertsekas2019reinforcement}; however, we have to explicitly incorporate the role of lookahead ($H$) in the remaining steps of the proof.
Suppose that we apply the $T_{\mu_{k+1}}$ operator $\ell$ times. Then, due to monotonicity and the fact that $T_\mu(J+ce)=T_\mu(J)+\alpha ce,$ for any policy $\mu,$ we have the following:
\begin{align*}
-T_{\mu_{k+1}}^{\ell+1} T^{H}J^{\mu_k}
&\leq \alpha^\ell (2\alpha^{H} f_k +\eps_{LA})e - T_{\mu_{k+1}}^{\ell}T^{H}J^{\mu_k}.
\end{align*}
Using a telescoping sum, we get the following inequality:
\begin{align*}
T_{\mu_{k+1}}^j T^{H} J^{\mu_k} - T^{H}J^{\mu_k}
&\geq -\sum_{\ell = 1}^{j} \alpha^{\ell-1} (2\alpha^{H} f_k +\eps_{LA})e.
\end{align*}
Taking the limit as $j\rightarrow\infty$ on both sides, we have the following:
\begin{align*}
J^{\mu_{k+1}} - T^{H}J^{\mu_k} \geq -\frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha}e.
\end{align*}
Rearranging terms and subtracting $J^*$ from both sides, we get the following:
\begin{align*}
\alpha^{H} \norm{J^* - J^{\mu_k}}_\infty e + \frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha}e \geq J^* - T^{H}J^{\mu_k} + \frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha}e \geq J^* - J^{\mu_{k+1}} \geq 0,
\end{align*}
which implies that
\begin{align}
\norm{J^{\mu_{k+1}} - J^*}_\infty &\leq \alpha^{H} \norm{J^{\mu_k}-J^*}_\infty + \frac{2\alpha^{H} f_k +\eps_{LA}}{1-\alpha},
\end{align} since $J^* \geq J^{\mu}$ for all policies $\mu$.
We iterate over $k>0$ to get the following:
\begin{align*}
\norm{J^{\mu_{k}} - J^*}_\infty &\leq \alpha^{kH} \norm{J^{\mu_0}-J^*}_\infty + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)H}\frac{2\alpha^{H} f_\ell +\eps_{LA}}{1-\alpha}\\
&\leq \frac{\alpha^{kH} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)H}\frac{2\alpha^{H} f_\ell +\eps_{LA}}{1-\alpha},
\end{align*}
where $$f_\ell := \beta^\ell \norm{J^{\mu_0}-J_0}_\infty + \sum_{i=0}^{\ell-1} \beta^i \tau,$$ and $\beta:= \alpha^{m+H-1} \delta_{FV}$ and $\tau := \frac{\alpha^m + \alpha^{m+H-1}}{1-\alpha}\delta_{FV} + \delta_{app}+ \delta_{FV} \eps_{PE}$.
We will now obtain finite-time bounds of $\norm{J^{\mu_{k}} - J^*}_\infty$ using the above results. The following holds:
\begin{align}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \frac{\alpha^{kH} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)H}\frac{2\alpha^{H} f_\ell +\eps_{LA}}{1-\alpha} \nonumber\\
&\leq \frac{\alpha^{kH} }{1-\alpha} + \sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)H}\frac{2\alpha^{H} \Big(\beta^\ell \norm{J^{\mu_0}-J_0}_\infty + \frac{\tau}{1-\beta}\Big) +\eps_{LA}}{1-\alpha} \nonumber\\
&\leq \frac{\alpha^{kH} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}\sum_{\ell=0}^{k-1} \alpha^{(k-\ell-1)H} \beta^\ell +\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}\nonumber\\
&\leq \frac{\alpha^{kH} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}\sum_{\ell = 0}^{k-1}\max(\alpha^{H},\beta)^{k-1} +\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}\nonumber\\
&\leq \frac{\alpha^{kH} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}k\max(\alpha^{H},\beta)^{k-1} +\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)} \label{eq: *** or},
\end{align}
where the second inequality uses $\sum_{i=0}^{\ell-1}\beta^i \leq \frac{1}{1-\beta}$ and the second-to-last inequality uses $\alpha^{(k-\ell-1)H}\beta^{\ell} \leq \max(\alpha^{H},\beta)^{k-1},$ both of which hold since $\beta<1$ under the assumption in Theorem \ref{mainAPI}.
\Halmos \endproof
\section{A Modified Least Squares Algorithm}\label{appendix:thm2}
Suppose Step \ref{step 3 alg} of Algorithm \ref{alg:LSalg} is changed to $\hat{J}^{\mu_{k+1}}(i) = T_{\mu_{k+1}}^m (J_k)(i)+w_{k+1}(i)$ for $i \in \scriptD_k$. Then, it is still possible to get bounds on the performance of the algorithm when $m$ is sufficiently large. With this modification to the algorithm, we have the following:
\begin{proposition}\label{mainAPISecond}
Suppose that $m$ satisfies $m>\log (\delta_{FV})/\log (1/\alpha),$ where
$$\delta_{FV} := \sup_k \norm{\scriptM_k}_\infty = \sup_k\norm{\Phi(\Phi_{\scriptD_{k}}^\top \Phi_{\scriptD_{k}} )^{-1} \Phi_{\scriptD_{k}}^\top \scriptP_k}_\infty.$$
Then, under Assumptions \ref{assume 1 or}-\ref{assume 3 or}, the following holds:
\begin{align*}
\norm{J^{\mu_{k}} - J^*}_\infty
&\leq \underbrace{\frac{\alpha^{kH} }{1-\alpha} + \frac{2\alpha^{H}\norm{J^{\mu_0}-J_0}_\infty}{1-\alpha}k\max(\alpha^{H},\beta')^{k-1}}_{ \text{ finite-time component }} +\underbrace{\frac{2\alpha^H \frac{\tau'}{1-\beta'} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}}_{ \text{ asymptotic component }},
\end{align*}
where $$\beta' := \alpha^m \delta_{FV},$$
$$\tau' :=\frac{\alpha^m \delta_{FV}}{1-\alpha} + \delta_{app} + \delta_{FV} \eps_{PE},$$
and
$$
\delta_{app} := \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty.
$$
\end{proposition}
\begin{proof}{Proof of Proposition \ref{mainAPISecond}}
The proof of Proposition \ref{mainAPISecond} is identical to the proof of Theorem \ref{mainAPI} except for the iteration in \eqref{eq: iteration or}. We thus give the following iteration, which can be substituted into our proof of Theorem \ref{mainAPI}:
\begin{align*}
\norm{J_k-J^{\mu_k}}_\infty &=\norm{\scriptM_k (T_{\mu_k}^m J_{k-1}+w_k)- J^{\mu_k}}_\infty \\
&\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k w_k}_\infty \\
&\leq\norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \norm{\scriptM_k}_\infty \norm{w_k}_\infty \\ \allowdisplaybreaks
&\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE} \\ \allowdisplaybreaks
&= \norm{\scriptM_k T_{\mu_k}^m J_{k-1} - \scriptM_k J^{\mu_k} + \scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty + \delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_k T_{\mu_k}^m J_{k-1} - \scriptM_k J^{\mu_k}}_\infty + \norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \sup_k \norm{\scriptM_k}_\infty \norm{ T_{\mu_k}^m J_{k-1} - J^{\mu_k}}_\infty + \sup_{k, \mu_k}\norm{\scriptM_k J^{\mu_k}- J^{\mu_k}}_\infty+ \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \delta_{FV} \norm{ J_{k-1} - J^{\mu_k}}_\infty + \delta_{app} + \delta_{FV} \eps_{PE}\\
&=\alpha^m \delta_{FV} \norm{ J_{k-1} -J^{\mu_{k-1}}+J^{\mu_{k-1}}- J^{\mu_k}}_\infty + \delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \delta_{FV} \norm{ J_{k-1} -J^{\mu_{k-1}}}_\infty+\alpha^m \delta_{FV}\norm{J^{\mu_{k-1}}- J^{\mu_k}}_\infty + \delta_{app} + \delta_{FV} \eps_{PE}\\
&\leq \alpha^m \delta_{FV} \norm{ J_{k-1} -J^{\mu_{k-1}}}_\infty+\frac{\alpha^m \delta_{FV}}{1-\alpha} + \delta_{app} + \delta_{FV} \eps_{PE}.
\end{align*}
Substituting $$\beta':= \alpha^m \delta_{FV}$$ and $$\tau' :=\frac{\alpha^m \delta_{FV}}{1-\alpha} + \delta_{app} + \delta_{FV} \eps_{PE},$$ in place of $\beta$ and $\tau$, respectively, in Theorem \ref{mainAPI} and the proof of Theorem \ref{mainAPI}, we obtain Proposition \ref{mainAPISecond}.
\end{proof}
\section{Bounds on Iterates In Algorithm \ref{alg:LSalg}}\label{appendix:prop1}
In the following proposition, we present a bound on the difference between $J_k$ and $J^*.$
\begin{proposition} \label{IterAPITheorem}When $\alpha^{m+H-1} \delta_{FV} <1,$
\begin{align*}
\limsup_{k\to\infty} \norm{J_{k} - J^*}_\infty &\leq \frac{ \big( 1+\delta_{FV} \alpha^m \big) \Big[\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)} \Big] + \delta_{app}+\delta_{FV} \eps_{PE}}{1-\delta_{FV} \alpha^{m+H-1}},
\end{align*}
where $\beta$ and $\tau$ are defined in Theorem \ref{mainAPI}.
\end{proposition}
The proof is as follows.
\begin{proof}{Proof of Proposition \ref{IterAPITheorem}}
\begin{align*}
\norm{J_{k+1} - J^*}_\infty &= \norm{J_{k+1} -J^{\mu_{k+1}} + J^{\mu_{k+1}}- J^*}_\infty \\
&\leq \norm{J_{k+1} -J^{\mu_{k+1}}}_\infty + \norm{J^{\mu_{k+1}}- J^*}_\infty \\
&\leq \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&= \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -\scriptM_{k+1} J^{\mu_{k+1}} + \scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_{k+1} T_{\mu_{k+1}}^m T^{H-1} J_k -\scriptM_{k+1} J^{\mu_{k+1}}}_\infty + \norm{\scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&\leq \norm{\scriptM_{k+1}}_\infty \norm{T_{\mu_{k+1}}^m T^{H-1} J_k - J^{\mu_{k+1}}}_\infty + \norm{\scriptM_{k+1} J^{\mu_{k+1}} -J^{\mu_{k+1}}}_\infty \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&\leq \delta_{FV} \alpha^m \norm{T^{H-1} J_k - J^{\mu_{k+1}}}_\infty + \delta_{app} \\&+\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&= \delta_{FV} \alpha^m \norm{T^{H-1} J_k - J^* + J^* - J^{\mu_{k+1}}}_\infty + \delta_{app} +\norm{J^{\mu_{k+1}}- J^*}_\infty +\delta_{FV} \eps_{PE}\\
&\leq \delta_{FV} \alpha^m \norm{T^{H-1} J_k - J^*}_\infty + \delta_{FV} \alpha^m \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_{app} \\&+ \norm{J^{\mu_{k+1}}- J^*}_\infty+\delta_{FV} \eps_{PE}\\
&\leq \delta_{FV} \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \delta_{FV} \alpha^m \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_{app} \\&+ \norm{J^{\mu_{k+1}}- J^*}_\infty+\delta_{FV} \eps_{PE}\\
&= \delta_{FV} \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_{FV} \alpha^m \big) \norm{J^* - J^{\mu_{k+1}}}_\infty + \delta_{app}+\delta_{FV} \eps_{PE}.
\end{align*}
From Theorem \ref{mainAPI}, we have that
\begin{align*}
\limsup_{k\to\infty} \norm{J^{\mu_k}-J^*}_\infty \leq \frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}.
\end{align*}
Thus, for every $\eps'>0,$ there exists a $k(\eps')$ such that for all $k>k(\eps')$,
\begin{align*}
\norm{J^{\mu_k}-J^*}_\infty \leq \frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}+\eps'.
\end{align*}
Thus, for all $k>k(\eps')$, we have:
\begin{align*}
\norm{J_{k+1} - J^*}_\infty &\leq \delta_{FV} \alpha^{m+H-1} \norm{J_k - J^*}_\infty + \big( 1+\delta_{FV} \alpha^m \big) \Big[\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}+\eps' \Big] + \delta_{app}+\delta_{FV} \eps_{PE}.
\end{align*}
Iterating over $k$ gives us:
\begin{align*}
\limsup_{k\to\infty} \norm{J_{k} - J^*}_\infty &\leq \frac{ \big( 1+\delta_{FV} \alpha^m \big) \Big[\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)}+\eps' \Big] + \delta_{app}+\delta_{FV} \eps_{PE}}{1-\delta_{FV} \alpha^{m+H-1}}.
\end{align*}
Since the above holds for all $\eps'$:
\begin{align*}
\limsup_{k\to\infty} \norm{J_{k} - J^*}_\infty &\leq \frac{ \big( 1+\delta_{FV} \alpha^m \big) \Big[\frac{2\alpha^H \frac{\tau}{1-\beta} +\eps_{LA}}{(1-\alpha^{H})(1-\alpha)} \Big] + \delta_{app}+\delta_{FV} \eps_{PE}}{1-\delta_{FV} \alpha^{m+H-1}}.
\end{align*}
\end{proof}
\section{Counterexample} \label{appendix:counterexAppendix}
Even though, in practice, $J^{\mu_k}$ is what we are interested in, the iterates $J_k$ computed as part of our algorithm should not go to $\infty$, since the algorithm would otherwise be numerically unstable. In Appendix \ref{appendix:prop1}, we provide a bound on $\norm{J_k-J^*}_\infty$ when $m+H-1$ is sufficiently large as in Theorem~\ref{mainAPI}. In this appendix, we show that, when this condition is not satisfied, $J_k$ can become unbounded.
The example we use is depicted in Figure \ref{fig:TsitVanRoyIm}.
\begin{figure}
\centering
\subfloat[\centering $\mu^a$]{\includegraphics[width=2.5cm]{Role of Lookahead and FA/Images/counter1.png} }%
\qquad
\subfloat[\centering $\mu^b$]{\includegraphics[width=2.5cm]{Role of Lookahead and FA/Images/counter2.png} }%
\caption{An example illustrating the necessity of the condition in Theorem~\ref{mainAPI}}%
\label{fig:TsitVanRoyIm}%
\end{figure}
There are two policies, $\mu^a$ and $\mu^b,$ and the transitions are deterministic under both policies. The rewards are deterministic and only depend on the states. The rewards associated with states are denoted by $r(x_1)$ and $r(x_2),$
with $r(x_1)>r(x_2)$. Thus, the optimal policy is $\mu^a$. We assume scalar features $\phi(x_1)=1$ and $\phi(x_2)=2.$
We fix $H=1$.
The lookahead policy selects $\mu^a$ when:
\begin{align*}
&J_k(x_1) > J_k(x_2), \quad \text{i.e., } \theta_k > 2\theta_k,
\end{align*} which holds only if $\theta_k < 0.$ Thus, as long as $\theta_k>0,$ the lookahead policy will be $\mu^b.$
We will now show that $\theta_k$ increases at each iteration when $\delta_{FV} \alpha^{m+H-1}>1.$ We assume that $\theta_0>0$ and $\scriptD_k = \{1, 2\}$ $\forall k.$ A straightforward computation shows that $\delta_{FV}=\frac{6}{5}.$
At iteration $k+1,$ since $\theta_k > 0$ implies $\mu_{k+1}=\mu^b,$ the targets $\hat{J}^{\mu_{k+1}}(i)$ for $i = 1, 2$ are as follows:
\begin{align*}
\hat{J}^{\mu_{k+1}}(1) =r(x_1)+\sum_{i=1}^{m-1} r(x_1) \alpha^i + 2 \alpha^m \theta_k, \quad
\hat{J}^{\mu_{k+1}}(2) = r(x_2) +\sum_{i=1}^{m-1} r(x_2)\alpha^i + 2 \alpha^m \theta_k.
\end{align*}
Thus, from Step \ref{step 4 alg} of Algorithm \ref{alg:LSalg}:
\begin{align*}
&\theta_{k+1} = \arg \min_\theta \sum_{i =1}^2 \Big( (\Phi \theta)(i) - \hat{J}^{\mu_{k+1}}(i) \Big)^2 \\
&\implies \theta_{k+1} = \frac{\sum_{i=0}^{m-1} \alpha^i r(x_1)}{5} + \frac{2 \sum_{i=0}^{m-1} \alpha^i r(x_2)}{5} + \frac{6 \alpha^m \theta_k}{5}\\
&\implies \theta_{k+1}> \frac{6}{5} \alpha^{m}\theta_k.
\end{align*}
Thus, since $\theta_0 > 0$ and $H=1$, when $ \frac{6}{5} \alpha^{m+H-1} =\delta_{FV} \alpha^{m+H-1} >1,$ $\theta_{k}$ goes to $\infty.$
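The divergence is easy to reproduce numerically. The sketch below iterates the scalar update derived above; the reward values and the initial condition are illustrative choices of ours:
\begin{verbatim}
# phi(x1)=1, phi(x2)=2, H=1, policy mu^b, D_k = {1,2}:
#   theta_{k+1} = (G r1 + 2 G r2)/5 + (6/5) alpha^m theta_k,
# where G = sum_{i=0}^{m-1} alpha^i.
r1, r2, theta0 = 1.0, 0.5, 1.0   # illustrative values with r1 > r2 > 0

def run(alpha, m, iters=60):
    G = (1 - alpha**m) / (1 - alpha)
    theta = theta0
    for _ in range(iters):
        theta = (G * r1 + 2 * G * r2) / 5 + (6 / 5) * alpha**m * theta
    return theta

print(run(alpha=0.99, m=1))  # (6/5)alpha^m > 1: theta_k blows up
print(run(alpha=0.90, m=5))  # (6/5)alpha^m < 1: theta_k stays bounded
\end{verbatim}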
\section{Numerical Results} \label{appendix:numerical}
In this appendix, we test our algorithms on a grid world problem, using the same grid world problem as in \cite{efroni2018} and \cite{efroni2019combine}.
For our simulations, we assume a deterministic grid world problem played on an $N \times N$ grid. The states are the squares of the grid and the actions are $\{ \text{'up', 'down', 'right', 'left', and 'stay'} \}$, which move the agent in the prescribed direction, if possible. In each experiment, a goal state is chosen uniformly at random to have a reward of 1, while each other state has a fixed reward drawn uniformly from $[-0.1, 0.1]$. Unless otherwise mentioned, for the duration of this section, $N=25$ and $\alpha = 0.9$.
In order to perform linear function approximation, we prescribe a feature vector for each state. In this section, we focus on three particular choices, sketched in code after the following list:
\begin{enumerate}
\item Random feature vectors: each entry of the matrix $\Phi$ is an independent $\mathcal{N}(0, 1)$ random variable
\item Designed feature vectors: the feature vector for a state with coordinates $(x, y)$ is $[x, y, d, 1]^T$, where $d$ is the number of steps required to reach the goal from state $(x, y)$
\item Indicator vectors: the feature vector for each state $i$ is an $N^2$-dimensional indicator vector where only the $i$-th entry is nonzero
\end{enumerate}
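The following sketch constructs the three feature matrices; we take $d$ to be Manhattan distance to the goal, which is one natural reading of ``the number of steps required to reach the goal'' in this deterministic grid world, and the dimension of the random features is an arbitrary illustrative choice:
\begin{verbatim}
import numpy as np

def feature_matrices(N, goal, num_random_features=4, seed=0):
    """Random, designed, and indicator features for the N x N grid."""
    rng = np.random.default_rng(seed)
    S = N * N
    random_phi = rng.standard_normal((S, num_random_features))
    designed = np.array(
        [[x, y, abs(x - goal[0]) + abs(y - goal[1]), 1.0]
         for x in range(N) for y in range(N)])
    indicator = np.eye(S)
    return random_phi, designed, indicator
\end{verbatim}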
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Role of Lookahead and FA/Images/h-m-error-plots.png}
\caption{(Top) For random feature vectors, as $m$ and $H$ increase, the value $J_k$ eventually stops diverging. (Bottom) For designed feature vectors, a smaller amount of lookahead and $m$-step return are needed to prevent $J_k$ from diverging. }
\label{fig:hm_plots}
\end{figure}
Recall that our theorems suggest that the amount of lookahead and return depends on the choice of the feature vectors. Our experiments support this observation as well. The amount of lookahead and $m$-step return required is high (often over $30$) for random feature vectors, but we are able to significantly reduce the amount required by using the designed feature vectors which better represent the states.
We test Algorithm \ref{alg:LSalg} in each of our experiments, using the initialization $\theta_0 = 0$ (so that $J_0 = 0$). All plots in this section graph an average over 20 trials, where each trial has a fixed random choice of $\mathcal{D}_k$, the set of states used for policy evaluation. Error bars show the standard deviation of the mean. The code used to produce these graphs is included in the Supplementary Material.
\subsection{The effect of $m$ and $H$ on convergence}
In Figure \ref{fig:hm_plots}, we show how $H$ and $m$ affect convergence of the iterates $J_k$ to $J^*$. When $m$ and $H$ are small, the value of $J_k$ sometimes diverges. If the value diverges for even one trial, then the average over trials of $\|J_k - J^*\|_\infty$ also increases exponentially with $k$. However, if the average converges for all trials, then the plot is relatively flat. The $m$ or $H$ required for convergence depends on the parameter $\delta_{FV}$ defined in Theorem~\ref{mainAPI}. Over 20 trials, the average values of $\delta_{FV}$ for each of our choices of feature vectors are $30.22, 16.29$, and $1.0$, respectively. As shown through a counter-example in the main body of the paper, in general, one needs $m + H - 1 > \log(\delta_{FV}) / \log(1/\alpha)$ for convergence. However, in specific examples, convergence is possible for smaller values of $m+H.$ For example, in our grid world model, $\frac{\log(16.29)}{\log(1/0.9)} \approx 26.5$, but we will observe that such a large amount of $m + H$ is not required for convergence.
From Figure \ref{fig:hm_plots} alone, it is difficult to see how $H$ and $m$ affect the probability of divergence across the random choices of representative states to be sampled. Therefore, we introduce Figure \ref{fig:probability_of_convergence}. These plots show the proportion of trials in which the distance $\|J_{k} - J^*\|_\infty$ exceeded $10^5$ after 30 iterations of our algorithm. As expected, the algorithm never diverges for indicator vectors, as our algorithm is then equivalent to the tabular setting. The designed feature vectors clearly require a much smaller amount of lookahead or $m$-step return, well below the amount predicted by the average $\delta_{FV}$ of $16.29$. However, no matter the choice of feature vectors, a large enough value of $H + m$ eventually prevents our algorithm from diverging.
\begin{figure}
\centering
\subfloat[\centering Varying $H$]{\includegraphics[width=0.45\linewidth]{Images/prob_divergence_hs} }%
\qquad
\subfloat[\centering Varying $m$]{\includegraphics[width=0.45\linewidth]{Images/prob_divergence_ms} }%
\caption{We plot the probability that $\|J_k - J^*\|_\infty$ diverges as a function of $H$ and $m$. For the first plot, $m=3$ and for the second plot, $H=3$. In both cases, the algorithm never diverges once $H+m$ is large enough, though a smaller amount of lookahead or $m$-step return are needed for the designed feature vectors.}%
\label{fig:probability_of_convergence}%
\end{figure}
\begin{figure}
\centering
\subfloat[\centering Varying $H$]{\includegraphics[width=0.45\linewidth]{Images/final_policy_hs} }%
\qquad
\subfloat[\centering Varying $m$]{\includegraphics[width=0.45\linewidth]{Images/final_policy_ms} }%
\caption{We plot the final value of $\|J^{\mu_k} - J^*\|_\infty$ after 30 iterations. For the first plot, $m=3$ and for the second plot, $H=3$. As $H$ increases, the final policy improves. With large enough $H$, we obtain the optimal policy. However, past a certain point, increasing $m$ is not helpful for finding a better policy.}%
\label{fig:final_policy_value}%
\end{figure}
\subsection{Convergence to the optimal policy}
In Theorem~\ref{mainAPI}, we show that as $H$ increases, we converge to a policy $\mu_k$ that is closer to the optimal policy. In this section, we experimentally investigate the role of $m$ and $H$ on the final value of $\|J^{\mu_k} - J^*\|_\infty$. The results can be found in Figure \ref{fig:final_policy_value}. As predicted by theory, we do get closer to the optimal policy as $H$ increases. However, increasing $m$ does not help past a certain point, which is also consistent with the theory. Indeed, although $\mu_k$ is approaching the optimal policy $\mu^*$ as $H$ increases, the iterates $J_k$ are not converging to $J^*$ due to error induced by function approximation. Increasing $m$ improves the policy evaluation, but cannot correct for this inherent error from approximating the value function.
\end{APPENDICES}
\ACKNOWLEDGMENT{The research presented here was supported in part by a grant from Sandia National Labs and the NSF Grants CCF 1934986, CCF 2207547, CNS 2106801, ONR Grant N00014-19-1-2566, and ARO Grant W911NF-19-1-0379. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology \& Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
}
Several studies leverage the computational power of FPGAs to implement the feed-forward propagation of \acp{cnn}. A complete review of these studies can be found in~\cite{Lacey2016}. In most approaches, CNN-based applications are implemented by mapping a limited subset of processing elements onto the target device, multiplexing the processing elements in time and processing data in an SIMD fashion. This is the case, for instance, in~\cite{Qiu2016}, where the authors describe a \ac{cnn} accelerator implemented on a Zynq ZC706 board.
The dataflow-based implementation of \acp{cnn} is investigated in~\cite{Farabet2012}, where the authors describe NeuFlow, an acceleration engine for \acp{cnn} relying on a dataflow execution model. The \ac{cnn} graph is transformed into a set of dataflow instructions, where each instruction is described as a hardware configuration of 2D-processing elements called \emph{Processing tiles (PTs)}. The execution of the graph is carried out by sequencing the instructions on an FPGA.
The approaches mentioned above require an external memory to store intermediate results, which in turn, even with the help of a DMA, limits the final speedup. The study in~\cite{Venieris2016} features a partitioning of the \ac{cnn} graph with one bit-stream per subgraph, so that only on-chip memory is needed to store intermediate results. This, however, requires the reconfiguration of the FPGA whenever data has to enter a different subgraph, which adds a substantial reconfiguration time overhead. By contrast, the \ac{dhm} approach introduced in the present paper performs all processing \emph{on the fly} and does not require any external memory to store intermediate results. Throughput is therefore not influenced by off-chip memory bandwidth.
\section{\ac{cnn} Computation} \label{sec:overview}
A typical \ac{cnn} structure
performs a succession of convolutions interspersed with sub-sampling layers. The last layers of a \ac{cnn} are fully connected layers performing classification. Convolutional layers are the most computationally intensive layers and are commonly responsible for more than 90\% of the \ac{cnn} execution time~\cite{Cong2014}. As a consequence, we focus in this paper on the implementation of convolutional layers.
A convolutional layer $(l)$ extracts $N$ feature maps from $C$ input channels by performing $N$ convolutions of size $K \times K$ on each input. This filtering is followed by the addition of a bias term $b_n$ and the application of a non-linear activation function $act$ to each set of features. As shown in equation~\ref{convLayerProc}, $N \times C$ convolutions (involving $N \times C \times K \times K$ multiplications per output pixel) are required to process a given layer.
\begin{align}
\label{convLayerProc}
\forall l = &1:L \nonumber \text{\textit{ (Number of layers)}}\\
\forall n &=1:N^{(l)} \nonumber \text{\textit{ (Number of output feature maps)}}\\
& \bm{f_n}^{(l)} = \mbox{act} \left[ \bm{b_n}^{(l)} + \sum_{c=1}^{C^{(l)}} conv(\bm{\phi_c^{(l)}}, \bm{w_{nc}^{(l)}}) \right]
\end{align}
where
$\bm{f_n}^{(l)}$ is the $n^{th}$ output feature map of layer $(l)$,
$\bm{\phi_c^{(l)}}$ is the $c^{th}$ input feature map and
$\bm{w_{nc}^{(l)}}$ is a pre-learned filter.
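As a reference point for the discussion of hardware parallelism that follows, a minimal NumPy sketch of the layer computation in equation~\ref{convLayerProc} is given below; the 'valid' boundary handling is a simplifying assumption:
\begin{verbatim}
import numpy as np
from scipy.signal import convolve2d

def conv_layer(phi, w, b, act=np.tanh):
    # phi: (C, H, W) input feature maps; w: (N, C, K, K)
    # pre-learned filters; b: (N,) biases.
    # Returns the N output feature maps of the layer.
    N, C = w.shape[0], phi.shape[0]
    return np.stack([
        act(b[n] + sum(convolve2d(phi[c], w[n, c], mode="valid")
                       for c in range(C)))
        for n in range(N)])
\end{verbatim}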
The computation described in Equation~\ref{convLayerProc} exhibits four sources of concurrency. First, \acp{cnn} have a feed-forward hierarchical structure consisting of a succession of data-dependent layers. Layers can therefore be executed in a \emph{pipelined} fashion by launching layer $(l)$ before ending the execution of layer $(l-1)$. Second, each neuron of a layer can be executed independently from the others, meaning that each of the $N^{(l)}$ element of equation~\ref{convLayerProc} can be computed in parallel. Third, all of the convolutions performed by a single neuron can also be evaluated simultaneously by computing concurrently the $C^{(l)}$ elements of equation~\ref{convLayerProc}. Finally, each 2D image convolution can be implemented in a pipelined fashion~\cite{Shoup1994} computing the $K \times K$ multiplications concurrently.
\section{Direct Hardware Mapping of CNNs} \label{sec:dhm}
A \ac{cnn} can be modeled by a dataflow process network (DPN) where nodes correspond to processing actors and edges correspond to communication channels. Each actor follows a purely data-driven execution model where execution (firing) is triggered by the availability of input operands~\cite{Lee1995}. The \ac{dhm} approach consists of \emph{physically} mapping the whole graph of actors onto the target device. Each actor then becomes a computing unit with its specific instance on the \ac{fpga} and each edge becomes a signal.
This approach fully exploits \ac{cnn} concurrency. All neurons in a layer are mapped on the device to take advantage of inter-neuron parallelism (Fig.~\ref{dhm_entities}-a). In neurons, each convolution is mapped separately (Fig.~\ref{dhm_entities}-b) and finally, within a convolution engine, each multiplier is instantiated separately (Fig~\ref{dhm_entities}-c). As an example, Fig.~\ref{dhm_layer} illustrates how a convolution layer C1 ($C=3, N=5, K=3$) extracts 5 features from a 3-feature input pixel flow. In this example, 15 convolutions and 5 activation blocks are mapped onto the \ac{fpga} as a result of the layer graph transformation, which corresponds to 135 multiplications, 20 sums and 5 activations. \added{DHM of pooling layers is also performed, but its lowest-level implementation details are kept out of the scope of this paper.}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.55\textwidth]{./img/dhm_entities.eps}
\caption{The 3 levels of \ac{dhm} use on \ac{cnn} entities: (a) in the convolution layers, (b) in the neurons, (c) in the convolution engines.}
\label{dhm_entities}
\end{figure}
The \emph{direct hardware mapping} approach exemplified above makes external memory accesses unnecessary, while classical FPGA implementations store intermediate results or parameters on external memory. The processing is then performed \emph{on-the-fly} on \emph{streams} of feature maps. Moreover, due to the fully pipelined execution model, the global throughput is only limited by the maximum clock frequency.
These advantages come at the cost of a high resource consumption since the whole graph has to be mapped onto the physical resources of the \ac{fpga}. This resource consumption could make \ac{dhm} impractical. It is therefore crucial for \ac{dhm} to explore tactics that efficiently translate CNN actors into hardware. The most important issues to solve are those related to the representation of numbers and the implementation of multiplications.
\begin{figure}[!ht]
\centering
\input{tikz/layerSW.tex}
\caption{Applying the 3 levels of DHM (Fig.~\ref{dhm_entities}) to a convolutional layer C1 (N=5, C=3, K=3): 15 separate convolution engines (135 Multipliers and 15 adders) plus 5 adders and 5 activation blocks are required to process the fully parallel layer (bias omitted).}
\label{dhm_layer}
\end{figure}
\subsection{Approximate fixed-point data representations}
Several studies have demonstrated that \acp{cnn}, and more generally deep learning applications, usually tolerate approximate computing with short fixed-point arithmetic. Frameworks such as Ristretto~\cite{Gysel2016} fine-tune a \ac{cnn} data representation to support fixed-point numerical representations with variable data lengths. The \ac{dhm} approach advocated in this paper takes advantage of data and parameter quantization to reduce the amount of hardware resources by first inferring the minimal required precision and then \emph{deriving} the hardware resources that exactly match this precision.
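As an illustration, a uniform fixed-point quantizer of the kind applied by such frameworks can be sketched as follows (the function and its saturating behavior are our assumptions, not Ristretto's actual API):
\begin{verbatim}
import numpy as np

def quantize(x, n_bits, frac_bits):
    # Round x to a signed fixed-point grid with n_bits total
    # bits, frac_bits of which are fractional, saturating at
    # the representable range limits.
    scale = 2.0 ** frac_bits
    lo, hi = -2 ** (n_bits - 1), 2 ** (n_bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale
\end{verbatim}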
\subsection{Implementing Multiplications with Logic Elements}
\label{subsec:le_arith}
Convolutions require many multiplications. If these multiplications are implemented using hardwired \ac{dsp} blocks within the target \ac{fpga}, they become the bottleneck limiting the size of the implemented CNN. For instance, the second layer of the LeNet5 network~\cite{Lecun1998} ($C=6, N=16, K=5$) requires $2400$ multipliers, exceeding the number of multipliers provided by the DSP blocks of most FPGAs, and especially of embedded FPGAs. We overcome this problem by systematically forcing the synthesis tool to implement multiplications with logical elements instead of DSP blocks, leading the resulting implementations to rely on AND gates and trees of half-adders~\cite{Altera04}.
In addition, we take advantage of the fact that the convolution kernels -- and hence one operand of each multiplication -- are constants derived from an offline training stage. Multipliers can thus be specialized to their constants. While this approach limits the flexibility of the system because it requires to re-synthesize the VHDL design whenever parameters values are changed, it delegates to the synthesis tool the task to perform low-level area and performance optimization. More particularly, multiplications by 0 (\textit{resp} 1) are removed (\textit{resp.} replaced by a simple signal connection) and multiplications by a power of 2 are transformed into shift registers.
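These specialization rules can be summarized by the following sketch, which classifies the hardware inferred for multiplication by a given constant (the function and its labels are illustrative; the actual optimization is delegated to the synthesis tool as described above):
\begin{verbatim}
def specialize_mult(w):
    # w is a fixed-point integer constant from a trained kernel.
    if w == 0:
        return "removed (output tied to zero)"
    if abs(w) == 1:
        return "wire (simple signal connection)"
    if (abs(w) & (abs(w) - 1)) == 0:    # |w| is a power of two
        return "shift by %d" % (abs(w).bit_length() - 1)
    return "LUT-based multiplier (AND gates, half-adder trees)"
\end{verbatim}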
\subsection{Automated Hardware Generation with Haddoc2}
The \textsc{Haddoc2} framework is a set of tools built upon the \ac{dhm} principle and upon the optimization tactics described in previous section. It generates a platform-independent hardware description of a \ac{cnn} from a Caffe model~\cite{Jia2014}.
\ac{cnn} layers in \textsc{Haddoc2} are described using a small number of basic predefined actors written in structural VHDL. These actors follow dataflow execution semantics. The output can be synthesized for any FPGA device with tools supporting VHDL~93. The \textsc{Haddoc2} framework and the library of \ac{cnn} IP-cores supporting the \ac{dhm} approach are open-source and available\footnote{https://github.com/KamelAbdelouahab/haddoc2.}.
\section{Experimental Results with Haddoc2} \label{sec:res}
As proofs of concept, FPGA-based accelerators for three benchmark CNNs are implemented with \textsc{Haddoc2}: LeNet5~\cite{Lecun1998}, SVHN~\cite{Netzer2011} and CIFAR10~\cite{Krizhevsky2009}. Table~\ref{res:setup} details the topology of these CNNs, where \emph{mpool} refers to the pooling layer that reduces the dimensionality of each feature map and \emph{tanh} is the hyperbolic tangent activation function. The Cifar10 and SVHN \acp{cnn} share the same topology with different kernel values, which is useful to study the impact of kernel proportions on a \ac{dhm}-based implementation. For each network, the fixed-point representation is chosen so as to preserve classification accuracy, as a result of the exploration shown in Fig.~\ref{acc_vs_nbits}. The study of quantization effects on \acp{cnn} is beyond the scope of this paper and can be found, for instance, in~\cite{Suyog2015,Gysel2016}. In our case, a 3-bit representation is chosen for the LeNet5 network and a 6-bit representation for SVHN and CIFAR10\footnotemark[2]. The shares of zero-valued, one-valued, and power-of-two-valued parameters are evaluated and reported in Table~\ref{res:setup}. In all cases, these represent, by far, more than 90\% of the parameters.
\footnotetext[2]{Similarly to~\cite{Gysel2016}, a fine tuning of the CNN parameters has been performed after selecting the bit-width, which increases the classification accuracy of the quantized CNN.}
\begin{figure}[!ht]
\centering
\input{tikz/nbits_vs_tpr.tex}
\caption{Evolution of classification accuracy vs bit-width for the studied CNNs. The dashed lines refer to the accuracy of the baseline 32-bit floating-point models.}
\label{acc_vs_nbits}
\end{figure}
\small
\begin{table}[!ht]
\centering
\caption{Topology of the convolutional layers of the studied \acp{cnn}.}
\begin{tabular}{l|ccc|ccc|ccc|}
\cline{2-10}
& \multicolumn{3}{|c|}{LeNet5~\cite{Lecun1998}} & \multicolumn{3}{|c|}{Cifar10~\cite{Krizhevsky2009} } & \multicolumn{3}{|c|}{SVHN~\cite{Netzer2011}} \\ \hline
\multicolumn{1}{|l|}{Input Patches} & \multicolumn{3}{c|}{28 x 28} & \multicolumn{3}{c|}{32 x 32 x 3} & \multicolumn{3}{c|}{32 x 32 x3} \\ \hline \hline
\multicolumn{1}{|l|}{Layer parameters} & $N$ & $C$ & $K$ & $N$ & $C$ & $K$ & $N$ & $C$ & $K$ \\ \hline
\multicolumn{1}{|l|}{conv1+mpool+tanh} & $20$ & $1$ & $5$ & $32$ & $3$ & $5$ & $32$ & $3$ & $5$ \\ \hline
\multicolumn{1}{|l|}{conv2+mpool+tanh} & $50$ & $20$ & $5$ & $32$ & $32$ & $5$ & $32$ & $32$ & $5$ \\ \hline
\multicolumn{1}{|l|}{conv3+mpool+tanh} & $-$ & $-$ & $-$ & $64$ & $32$ & $5$ & $64$ & $32$ & $5$ \\ \hline \hline
\multicolumn{1}{|l|}{accuracy float (\%)}& \multicolumn{3}{c|}{$98.96$} & \multicolumn{3}{c|}{$76.63$} & \multicolumn{3}{c|}{$87.54$} \\ \hline
\multicolumn{1}{|l|}{selected bit-width} & \multicolumn{3}{c|}{$3$} & \multicolumn{3}{c|}{$6$} & \multicolumn{3}{c|}{$6$} \\ \hline
\multicolumn{1}{|l|}{acc. bit-width (\%)}& \multicolumn{3}{c|}{$98.32$} & \multicolumn{3}{c|}{$73.05$} & \multicolumn{3}{c|}{$86.03$} \\ \hline \hline
\multicolumn{1}{|l|}{zero parameters (\%)}& \multicolumn{3}{c|}{$88.59$} & \multicolumn{3}{c|}{$33.78$} & \multicolumn{3}{c|}{$37.14$} \\ \hline
\multicolumn{1}{|l|}{one parameters (\%)} & \multicolumn{3}{c|}{$6.31$} & \multicolumn{3}{c|}{$45.32$} & \multicolumn{3}{c|}{$46.50$} \\ \hline
\multicolumn{1}{|l|}{pow2 parameters (\%)}& \multicolumn{3}{c|}{$0.05$} & \multicolumn{3}{c|}{$16.40$} & \multicolumn{3}{c|}{$13.62$} \\ \hline
\multicolumn{1}{|l|}{other (\%)} & \multicolumn{3}{c|}{$5.05$} & \multicolumn{3}{c|}{$4.50$} & \multicolumn{3}{c|}{$2.74$} \\ \hline
\end{tabular}
\label{res:setup}
\end{table}
In order to illustrate the impact of the developed tactics, Table~\ref{res:fit1} reports post-fitting results of a LeNet5 accelerator with a 5-bit precision on an embedded Intel Cyclone V 5CGXFC9E7 device, using 3 implementation strategies. In the first, only DSP blocks are used to map all CNN multiplications; the resulting hardware requires $72\times$ the available resources of the device. The second strategy implements multiplications with logic elements and requires $3.8\times$ the available logic. Using tailored multipliers reduces resources by a further factor of $8.6\times$, fitting the CNN accelerator onto an Intel Cyclone V embedded FPGA.
Tables~\ref{res:fit2}-a and~\ref{res:fit2}-b respectively detail post-fitting results on two embedded \ac{fpga} platforms: the Intel Cyclone V 5CGXFC9E7 and the Xilinx Kintex7 XC7Z045FBG (using the Intel Quartus 16.1 and Xilinx Vivado 2016.4 synthesis tools, respectively). To the best of our knowledge, these numbers are the first to demonstrate the applicability of a DHM-based approach for the implementation of \acp{cnn} on embedded FPGAs. The three hardware accelerators fit onto the embedded devices with no off-chip memory requirement. The reported memory footprint corresponds to the line buffers used by dataflow-based convolution engines~\cite{Shoup1994}, and both synthesis tools instantiate LUT-based memory blocks to implement these buffers. As expected when using \ac{dhm}, the logic utilization of the \ac{fpga} grows with the size of the \ac{cnn}. In addition, the proportion of null kernels affects the amount of logic needed to map a CNN graph.
Finally, Table~\ref{res:compare} compares Haddoc2 performance to implementations on FPGA, GPU and ASIC. For the Cifar10 CNN, we find that a direct hardware mapping approach grants a $2.63\times$ higher throughput on the same device when compared to fpgaConvNet, the state-of-the-art framework for mapping CNNs on FPGAs. For LeNet5, a $1.28\times$ acceleration is reported, which corresponds to a classification rate of 64.42 HD images/sec with a 3-scale pyramid. The GPU platform delivers the best performance in terms of computational throughput, but the price is a high power consumption, while ASIC technology gives the best throughput-per-Watt trade-off at the price of lower reconfigurability and higher production costs. \added{For deeper \ac{cnn} implementations, such as in~\cite{Qiu2016}, \ac{dhm} is infeasible on current embedded FPGAs because the logic elements required to derive the accelerators exceed the available hardware resources.}
\added{However, given the recent improvements of \acp{bnn} -- reported for instance in FINN~\cite{Umuroglu2017} -- the implementation of deeper CNNs can be addressed by leveraging \acp{bnn}. \acp{bnn} involve a rescheduling of the \ac{cnn} graph as well as retraining the network to perform operations using a single bit.}
\footnotetext[1]{Performance of the feature extractor}
\begin{table}[h]
\centering
\caption{Resource utilization by a \ac{dhm} LeNet5 CNN \newline with different implementation strategies for multipliers.}
\begin{tabular}{c|c|c|c|}
\cline{2-4}
& DSP-based & LE-based & LE-based + const. \\ \hline
\multicolumn{1}{|l|}{Logic Usage (ALM)} & NA & 433500 (381\%) & 50452 (44\%) \\ \hline
\multicolumn{1}{|l|}{DSP Block usage} & 24480 (7159 \%) & 0 (0\%) & 0 (0\%) \\ \hline
\end{tabular}
\label{res:fit1}
\end{table}
\begin{table}[h]
\centering
\caption{Resource Utilization of the three hardware accelerators: a- an Intel Cyclone V FPGA, b- a Xilinx Kintex 7 FPGA.}
\begin{tabular}{ll|c|c|c|}
\cline{3-5}
& & LeNet5~\cite{Lecun1998}& Cifar10~\cite{Krizhevsky2009}& SVHN~\cite{Netzer2011}\\ \hline
\multicolumn{1}{|l|}{\multirow{5}{*}{a}}& Logic Elements (ALMs) & 8067 (7\%) & 51276 (45\%) & 39513 (35\%) \\ \cline{2-5}
\multicolumn{1}{|l|}{} & DSP Blocks & 0 (0 \%) & 0 (0\%) & 0 (0\%) \\ \cline{2-5}
\multicolumn{1}{|l|}{} & Block Memory Bits & 176 (1\%) & 15808 (1\%) & 10878 (1\%) \\ \cline{2-5}
\multicolumn{1}{|l|}{} & Frequency & 65.71 MHz & 63.89 MHz & 63.96 MHz \\ \hline
\multicolumn{1}{|l|}{\multirow{5}{*}{b}}& Slices & 25031 (11\%) & 172219 (79\%) & 136675 (63\%)\\ \cline{2-5}
\multicolumn{1}{|l|}{} & DSP Blocks & 0 (0\%) & 0 (0\%) & 0 (0\%) \\ \cline{2-5}
\multicolumn{1}{|l|}{} & LUTs as Memory & 252 (1\%) & 1458 (2\%) & 1552 (1\%) \\ \cline{2-5}
\multicolumn{1}{|l|}{} & Frequency & 59.37 MHz & 54.17 MHz & 54.49 MHz \\ \hline
\end{tabular}\label{res:fit2}
\end{table}
\normalsize
\begin{table}[!ht]
\centering
\caption{Comparison to state-of-the-art implementations}
\label{res:compare}
\begin{tabular}{c|c|c|c|c|}
\cline{2-5}
& Publication & Workload & Throughput & Platform \\ \hline
\multicolumn{1}{|c|}{\multirow{7}{*}{FPGA}} & \multirow{3}{*}{Haddoc2} & 3.8 Mop & 318.48 Gop/s\footnotemark[1] & Cyclone V \\ \cline{3-5}
\multicolumn{1}{|c|}{} & & 24 Mop & 515.78 Gop/s\footnotemark[1] & Cyclone V \\ \cline{3-5}
\multicolumn{1}{|c|}{} & & 24.8 Mop & 437.30 Gop/s\footnotemark[1] & Zynq ZC706 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}fpgaConvNet\\ \cite{Venieris2016}\end{tabular}} & 3.8 Mop & 185.81 Gop/s\footnotemark[1] & Zynq ZC706 \\ \cline{3-5}
\multicolumn{1}{|c|}{} & & 24.8 Mop & 166.16 Gop/s\footnotemark[1] & Zynq ZC706 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & Qiu \textit{et al.}~\cite{Qiu2016} & 30.76 Gop & 187.80 Gop/s\footnotemark[1] & \added{Zynq ZC706} \\ \cline{2-5}
\multicolumn{1}{|c|}{} & FINN~\cite{Umuroglu2017} & 112.5 Mop & 2500 Gop/s\footnotemark[1] & Zynq ZC706 \\ \hline
\multicolumn{1}{|c|}{GPU} & cuDNN R3 & 1333 Mop & 6343 Gop/s & Titan X \\ \hline
\multicolumn{1}{|c|}{\multirow{3}{*}{ASIC}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Yoda NN\\ \cite{andri2016}\end{tabular}} & 24.8 Mop & 525.4 Gop/s & UMC 65 nm \\ \cline{3-5}
\multicolumn{1}{|c|}{} & & 23.4 Mop & 454.4 Gop/s & UMC 65 nm \\ \cline{2-5}
\multicolumn{1}{|c|}{} & NeuFlow~\cite{Farabet2012} & 350 Mop & 1280 Gop/s & IBM 45nm SOI \\ \hline
\end{tabular}
\end{table}
\section{Conclusion and Future work}\label{sec:concl}
This paper has investigated the feasibility of \emph{direct hardware mapping (DHM)} for the implementation of FPGA-based \ac{cnn} accelerators. We have demonstrated that current embedded \acp{fpga} provide enough hardware resources to support this approach. To demonstrate DHM, the \textsc{Haddoc2} tool has been introduced and used to automatically generate platform-independent \ac{cnn} hardware accelerators from high level CNN descriptions. Tactics are presented for optimizing the area and resource utilization of arithmetic blocks. DHM opens new opportunities in terms of hardware implementations of CNNs and can be extended to ASIC technologies as well as Binary Neural Networks.
\small
\bibliographystyle{unsrt}
\input{tables/overview}
As machine learning models are increasingly deployed in real-world scenarios, interpretable machine learning (ML) has developed as a research field with the goal of understanding ML models, performing model debugging, and using these insights to better inform the interaction between AI and humans in joint decision making~\cite{gilpin2018explaining,bhatt2020explainable,chen2022interpretable}. Recently, the promise of multimodal models for real-world representation learning in numerous applications such as multimedia~\cite{liang2021multibench,liang2018multimodal,1667983}, affective computing~\cite{liang2018ranking,PORIA201798}, robotics~\cite{kirchner2019embedded,lee2019making}, finance~\cite{doi:10.1177/0170840618765019}, dialogue~\cite{Pittermann2010}, human-computer interaction~\cite{dumas2009multimodal,obrenovic2004modeling}, and healthcare~\cite{xu2019multimodal} has invigorated research into multimodal machine learning, which brings unique challenges for both computational and theoretical research given the heterogeneity of various data sources and the difficulty of capturing correspondences between modalities~\cite{baltruvsaitis2018multimodal}. Among these core challenges is \textit{interpretable multimodal learning}, with the end goal of empowering various stakeholders by providing insights into multimodal learning, improving model design, or debugging models and datasets.
Recent work in interpretable multimodal learning has therefore focused on constructing interpretable multimodal models via careful model design~\cite{tsai2020multimodal,zadeh2018multimodal,park2018multimodal} or performing post-hoc explanations of black-box multimodal models~\cite{goyal2016towards,chandrasekaran2018explanations}. However, existing works typically focus on building interpretable models using suitable inductive biases, such as designing multimodal routing networks~\cite{tsai2020multimodal}, graph-based fusion~\cite{zadeh2018multimodal}, or multimodal explanation networks to highlight visual importance~\cite{park2018multimodal}. Some of these approaches also require the collection of specialized datasets annotated for visual explanations as intermediate steps in training interpretable models~\cite{park2018multimodal}. On the other hand, with the trend towards large-scale modeling or pre-training as an alternative over individual modality-specific or task-specific models~\cite{visualbert,liang2021multibench}, it is increasingly important to design general-purpose approaches that (1) are able to generate post-hoc explanations for arbitrary black-box models, and (2) does not assume anything about the modality or classification task itself.
As a step towards more fine-grained interpretations of general-purpose multimodal models across arbitrary tasks, we propose \textsc{DIME}, an interpretation method for black-box multimodal models. While existing work has been able to generate useful explanations to help humans understand model decision-making processes~\cite{chandrasekaran2018explanations}, they are often only performed at one step of the entire multimodal decision-making process. These singular steps typically include attributing feature importance~\cite{park2018multimodal,chandrasekaran2018explanations} or representation importance~\cite{tsai2020multimodal,zadeh2018multimodal}. The core idea in \textsc{DIME}\ is to provide more fine-grained interpretations by disentangling a multimodal model into unimodal contributions (\textbf{UC}) and multimodal interactions (\textbf{MI}). We show that this key insight enables more accurate and fine-grained analysis of multimodal models while maintaining generality across arbitrary modalities, model architectures~\cite{kamath2021mdetr,lxmert}, and tasks~\cite{balanced_vqa_v2,johnson2017clevr}.
Through a comprehensive suite of experiments on both synthetic and real-world multimodal tasks, we show that \textsc{DIME}\ is able to accurately perform disentanglement and generate reliable explanations for both UC and MI. Using \textsc{DIME}, we are able to gain a deeper understanding of model behavior on challenging multimodal tasks. For example, on VQA 2.0~\cite{goyal2017making}, we successfully use \textsc{DIME}\ to determine whether the model uses correct multimodal interactions to answer the questions, as shown in Figure~\ref{intro}. By providing these model explanations to a human annotator, they are able to gain additional insights on model behavior and better determine whether UC, MI, or both are the dominant factor behind the model's predictions on individual datapoints. Furthermore, \textsc{DIME}\ presents a step towards debugging and improving these models by systematically revealing certain undesirable behaviors.
\section{Related Work}
Interpretable machine learning as a research field aims to further our understanding of AI models, empower various stakeholders to build trust in AI models, perform model debugging, and use these insights to better inform the interaction between AI and humans in joint decision making~\cite{gilpin2018explaining,bhatt2020explainable,chen2022interpretable}. We cover related concepts in interpreting unimodal models and multimodal models.
\subsection{Interpreting Unimodal Models}
Related work has studied approaches for better understanding unimodal models used for vision, language, and audio modalities. These approaches can be roughly categorized into interpretable ML, which designs models that are understandable by construction, and explainable ML, which focuses on producing post-hoc explanations for black-box models~\cite{rudin2019stop}. In the former, methods such as Concept Bottleneck Models~\cite{koh2020concept} and fitting sparse linear layers~\cite{wong2021leveraging} or decision trees on top of deep feature representations~\cite{wan2020nbdt} have emerged as promising choices marrying the expressive power of deep features with the interpretable decision-making processes of linear models or decision trees. In the latter, approaches such as saliency maps~\cite{simonyan2013deep,smilkov2017smoothgrad}, using surrogate models to interpret local decision boundaries~\cite{lime}, feature visualizations~\cite{yosinski2015understanding,erhan2009visualizing}, and assigning semantic concepts~\cite{bau2017network} all aim to provide insight into model predictions for specific input instances. We refer the reader to~\citet{chen2022interpretable} for a survey and taxonomy of interpretable ML approaches, as well as~\citet{bhatt2020explainable} for an analysis of how interpretable and explainable ML tools can be used in the real world.
\subsection{Interpreting Multimodal Models}
Similar to the interpretation of unimodal models, recent work in interpretable multimodal learning can be categorized into two sections: (1) constructing interpretable multimodal models via careful model design~\cite{tsai2020multimodal,zadeh2018multimodal,park2018multimodal} or (2) performing post-hoc explanations of black-box multimodal models~\cite{goyal2016towards,chandrasekaran2018explanations}. In the former, multimodal routing networks~\cite{tsai2020multimodal}, graph-based fusion techniques~\cite{zadeh2018multimodal,liang2018computational}, multimodal explanation networks to highlight visual importance~\cite{park2018multimodal}, hard-attention~\cite{chen2017multimodal}, and neuro-symbolic reasoning methods~\cite{vedantam2019probabilistic,andreas2016neural} have emerged as strong design choices as a step towards more interpretable multimodal learning. These approaches individually focus on building interpretable components for either modality importance~\cite{park2018multimodal}, cross-modal interactions~\cite{tsai2020multimodal,zadeh2018multimodal,liang2018computational}, or the reasoning process on top of cross-modal interactions~\cite{vedantam2019probabilistic,andreas2016neural}. While these approaches provide reliable interpretations by virtue of model design, they are typically restricted to a certain set of modalities or tasks. On the other hand, we propose a more general approach that is able to generate post-hoc explanations for arbitrary black-box multimodal models, and does not assume anything about the modality or classification task itself.
In the latter section on post-hoc explainability of black-box multimodal models, related work has similarly gravitated towards aiming to understand either modality importance~\cite{goyal2016towards,chandrasekaran2018explanations,kanehira2019multimodal} or cross-modal interactions in pretrained language and vision transformer models~\cite{frank2021vision,cao2020behind,parcalabescu2021seeing,li2020does}. Perhaps most related to our work is~\citet{wang2021m2lens} proposing M2Lens, an interactive visual analytics system to visualize and explain black-box multimodal models for sentiment analysis through both unimodal and multimodal contributions. Our approach further disentangles the two types of contributions, which allows us to generate visualizations on each and gain insight into which input features are involved in multimodal interactions. Our approach is also not restricted to sentiment analysis.
\subsection{Representation Disentanglement}
Related to our work is the idea of learning disentangled data representations - mutually independent latent variables that each explain a particular variation of the data~\cite{Bengio:2013:RLR:2498740.2498889,locatello2018challenging}. Disentangled representation learning has been shown to improve both generative and discriminative performance in multimodal tasks~\cite{tsai2018learning}. If the factors of variation are known, many methods learn latent attributes that individually control each variation of data by supervised training~\cite{karaletsos2015bayesian,reed2014learning,cheung2014discovering}. If the factors are partially known or unknown, deep generative models can be used to impose an isotropic Gaussian prior on the latent variables~\cite{vae2013,rubenstein2018latent,Higgins2016VAELB}, maximize the mutual information between a subset of latent variables and the data~\cite{chen2016infogan}, or to encourage the distribution of representations to be factorial and hence independent~\cite{pmlr-v80-kim18b}. Particularly related to our work is empirical multimodally-additive function projection (EMAP)~\cite{hessel2020emap}, an approach for disentangling the effects of unimodal (additive) contributions from cross-modal interactions in multimodal tasks.
\subsection{Dataset and Model Biases}
One core motivation for interpretable ML is to enable a better understanding of the model's decision-making process so as to check whether model behavior is as intended. Using these tools, researchers have uncovered several biases existing in machine learning models and datasets. These biases include undesirable associations captured either in the data or the model, which do not reflect decision-making as one would expect. For example, a line of work in visualizing and understanding multimodal models has uncovered unimodal biases in the language modality of VQA tasks~\cite{jabri2016revisiting,agrawal2016analyzing,anand2018blindfold,cadene2019rubi}, which then inspired follow-up datasets to elevate the importance of visual understanding through VQA 2.0~\cite{goyal2017making}. Similar visualizations also led to improved performance on image captioning tasks by relying less on gender biases and spurious correlations~\cite{hendricks2018women}. Our approach towards better visualizing and understanding multimodal models is also inspired by these insights, and we believe that our fine-grained and general approach will motivate future work towards removing biases from a wider range of datasets and models beyond the prototypical language and vision tasks.
\section{Method: \textsc{DIME}}
Our approach, \textsc{DIME}\ (short for \textsc{DIsentangled Multimodal Explanations}), is primarily based on disentangling a multimodal model into unimodal contributions (\textbf{UC}) and multimodal interactions (\textbf{MI}), before performing fine-grained visualizations on each disentangled factor. In this section, we introduce precise definitions of unimodal contributions and multimodal interactions, before explaining how disentanglement and interpretations are performed.
\subsection{Unimodal Contributions and Multimodal Interactions}
Unimodal contributions $(\textsc{UC})$ represent information gained by only looking at one of the modalities without interacting with any other modalities, while multimodal interactions $(\textsc{MI})$ are information gained from cross-referencing inputs from multiple modalities~\cite{hessel2020emap}. Multimodal models make decisions using a combination of information from both unimodal contributions and multimodal interactions. For example, in Figure~\ref{intro}, the model assigns a high likelihood to ``glass'' because (1) just by looking at the image, there are many glass objects (unimodal contributions) and (2) by cross-referencing with text, the model focuses on the glass table and assigns a high likelihood to ``glass'' (multimodal interaction).
Therefore, to perform fine-grained interpretation of a multimodal model $M$, we first propose a new method to disentangle the model into two submodels:
\begin{equation}
M = \textsc{UC}(M) + \textsc{MI}(M),
\end{equation}
where $\textsc{UC}(M)$ represents the unimodal contributions within $M$ and $\textsc{MI}(M)$ represents the multimodal interactions within $M$. We can then run visualizations on each sub-model in order to generate human-interpretable visualizations of unimodal contributions and multimodal interactions (see Figure~\ref{fig:nonexistent} for an overview of \textsc{DIME}). To generate visual explanations, we choose LIME~\cite{lime}, a widely used interpretation method for black-box models.
\begin{figure*}
\vspace{0mm}
\includegraphics[width=1.0\textwidth]{figs/mainpic.pdf}
\caption{High level illustration of \textsc{DIME}: we disentangle the model $M$ into two: unimodal contributions (UC) and multimodal interactions (MI), before running visualizations on each sub-model (e.g., using LIME~\cite{lime}) in order to generate fine-grained human-interpretable visualizations of each.}
\label{fig:nonexistent}
\vspace{-3mm}
\end{figure*}
\subsection{Model Disentanglement}
\label{sec:proof}
Let $M$ be the multimodal model that we wish to disentangle into unimodal contributions and multimodal interactions. For simplicity, suppose $M$ takes in two modalities as input and produces pre-softmax logits on $C$ classes as output. Therefore, we can view $M$ as a function that maps two inputs $x_1,x_2$ from two modalities to a output logit vector $V$, i.e., $V = M(x_1,x_2)$. Our goal will be to disentangle the function $M$ into a sum of two functions, one representing unimodal contributions and one representing multimodal interactions.
Formally, we would like to write $M$ as $M(x_1,x_2)=g_1(x_1)+g_2(x_2)+g_{12}(x_1,x_2)$, where $g_1$ and $g_2$ are unimodal contributions from the two input modalities, respectively, and $g_{12}$ represents multimodal interactions. By definition of multimodal interactions, we require that $\mathbb{E}_{x_1}g_{12}(x_1,x_2)=0$ for all $x_2$ and $\mathbb{E}_{x_2}g_{12}(x_1,x_2)=0$ for all $x_1$ so that $g_{12}$ contains no unimodal contribution. We will show that under this definition, for each $M$ there will be a unique $g_{12}$ that satisfies these rules.
We will compute $g_1(x_1)+g_2(x_2)$ using a similar method to EMAP~\cite{hessel2020emap}. We define $\textsc{UC}(M)$ as
\begin{equation}
\textsc{UC}(M(x_1,x_2)) = \mathbb{E}_{x_1}(M(x_1,x_2)) + \mathbb{E}_{x_2}(M(x_1,x_2)) - \mathbb{E}_{x_1,x_2}(M(x_1,x_2)).
\end{equation}
\textbf{Theorem 1} below (equations 3-5, proof in Appendix) states that $\textsc{UC}(M)$ indeed represents $g_1+g_2$.
\begin{eqnarray}
&& \textsc{UC}(M(x_1,x_2)) \\
&=& \mathbb{E}_{x_1}(M(x_1,x_2))+ \mathbb{E}_{x_2}(M(x_1,x_2)) - \mathbb{E}_{x_1,x_2}(M(x_1,x_2)) \\
&=& g_1(x_1)+g_2(x_2).
\end{eqnarray}
Thus, we can compute $g_{12}(x_1,x_2)$ by subtracting $\textsc{UC}(M(x_1,x_2))$ from $M(x_1,x_2)$, which we name $\textsc{MI}(M)$. Formally,
\begin{eqnarray}
&& \textsc{MI}(M(x_1,x_2)) \\
&=& M(x_1,x_2) - \textsc{UC}(M(x_1,x_2)) \\
&=& g_{12}(x_1,x_2).
\end{eqnarray}
This also shows that $g_{12}$ can be uniquely determined.
In practice, to compute $\textsc{UC}(M(x_1,x_2))$ and $\textsc{MI}(M(x_1,x_2))$, we use a sampling method similar to~\cite{hessel2020emap}, where we sample $N$ datapoints $x^{(i)}=(x^{(i)}_1,x^{(i)}_2)$, including the point we want to explain $x=(x_1,x_2)$ as one of them, and approximate each expectation in $\textsc{UC}(M(x_1,x_2))$ by the corresponding sample mean:
\begin{align}
\mathbb{E}_{x_1}(M(x_1,x_2)) &\approx \frac{1}{N}\sum_{i \in [N]} M(x^{(i)}_1,x_2), \\
\mathbb{E}_{x_2}(M(x_1,x_2)) &\approx \frac{1}{N}\sum_{i \in [N]} M(x_1,x^{(i)}_2), \\
\mathbb{E}_{x_1,x_2}(M(x_1,x_2)) &\approx \frac{1}{N^2}\sum_{i \in [N]}\sum_{j \in [N]} M(x^{(i)}_1,x^{(j)}_2).
\end{align}
Figure~\ref{fig:illusdisent} illustrates this disentanglement process.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/disent.pdf}
\caption{An illustration of the disentangling process of \textsc{DIME}. We disentangle a model into two: $\textsc{UC}(M) = g_1 + g_2$ and $\textsc{MI}(M) = g_{12}$, corresponding to unimodal contributions and multimodal interactions respectively.}
\label{fig:illusdisent}
\end{figure}
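A minimal sketch of these sampling estimates is given below, assuming $M$ is exposed as a callable returning a logit vector (the function name and interface are ours):
\begin{verbatim}
import numpy as np

def uc_mi(M, x1s, x2s, i):
    # x1s, x2s hold the N sampled inputs per modality; i indexes
    # the point (x1s[i], x2s[i]) being explained.
    N = len(x1s)
    L = np.stack([np.stack([M(a, b) for b in x2s])
                  for a in x1s])          # (N, N, C) logit table
    e1 = L[:, i].mean(axis=0)   # E_{x1} M(x1, x2) at x2 = x2s[i]
    e2 = L[i, :].mean(axis=0)   # E_{x2} M(x1, x2) at x1 = x1s[i]
    e12 = L.mean(axis=(0, 1))   # E_{x1, x2} M(x1, x2)
    uc = e1 + e2 - e12          # UC(M) at the i-th point
    return uc, L[i, i] - uc     # (UC, MI) logit vectors
\end{verbatim}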
However, to compute $\textsc{UC}(M(x_1,x_2))$ and $\textsc{MI}(M(x_1,x_2))$, we will need to run forward passes through the model a total of $N^2$ times. In section~\ref{sec:fast} we will show an algorithm that computes this more efficiently by amortizing across multiple datapoints.
\subsection{Interpreting Disentanglement}
Now that we have disentangled the model into two, we will generate human-interpretable explanations on each modality using LIME~\cite{lime}. LIME works by subdividing the input into distinct features, and then randomly perturbing the features $S$ times to see how the perturbations on the features affect the model output logits of a specific class $c$. LIME then fits a linear model mapping the perturbations on each feature to the logits of $c$. The linear model weights on each feature gives the explanation of that feature: if the weight is positive, it means that this feature supports the decision of class $c$; if the weight is negative, it means that this feature is against the decision of class $c$; the larger the weight's absolute value, the stronger the contribution is. Visually, the weights can also be used to generate a human-interpretable visualization: for images, each feature is typically a part of the image, so the parts with the highest absolute weights can be highlighted in green for positive and red for negative contributions. For text, each feature is typically a word, so the explanation can be summarized as a histogram of weights of each word (see Figure~\ref{fig:nonexistent} for an example).
When running LIME on multimodal inputs, we run LIME on one modality at a time, treating the inputs to all other modalities as constant and only perturbing the inputs to that one modality. We denote the generated explanation on model $M$, datapoint $(x_1,x_2)$, and modality $i$ as $\textsc{LIME}_i(M(x_1,x_2))$. After disentanglement into unimodal contributions $\textsc{UC}(M(x_1,x_2))$ and multimodal interactions $\textsc{MI}(M(x_1,x_2))$, our approach enables the generation of four fine-grained explanations (a usage sketch follows the list):
\begin{itemize}
\item $\textsc{UC}_1 = \textsc{LIME}_1(\textsc{UC}(M(x_1,x_2)))$, the explanation of modality 1's unimodal contributions.
\item $\textsc{UC}_2 = \textsc{LIME}_2(\textsc{UC}(M(x_1,x_2)))$, the explanation of modality 2's unimodal contributions.
\item $\textsc{MI}_1 = \textsc{LIME}_1(\textsc{MI}(M(x_1,x_2)))$, the explanation of modality 1's contribution to multimodal interactions.
\item $\textsc{MI}_2 = \textsc{LIME}_2(\textsc{MI}(M(x_1,x_2)))$, the explanation of modality 2's contribution to multimodal interactions.
\end{itemize}
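Concretely, these four explanations can be produced with the \texttt{lime} package along the following lines; the classifier functions \texttt{uc\_image\_fn}, \texttt{mi\_image\_fn}, \texttt{uc\_text\_fn}, and \texttt{mi\_text\_fn} are assumed wrappers that expose the disentangled $\textsc{UC}(M)$ and $\textsc{MI}(M)$ logits while perturbing only the named modality:
\begin{verbatim}
from lime.lime_image import LimeImageExplainer
from lime.lime_text import LimeTextExplainer

S = 1000  # LIME sample size
UC_V = LimeImageExplainer().explain_instance(
    image, uc_image_fn, num_samples=S)
MI_V = LimeImageExplainer().explain_instance(
    image, mi_image_fn, num_samples=S)
UC_T = LimeTextExplainer().explain_instance(
    text, uc_text_fn, num_samples=S)
MI_T = LimeTextExplainer().explain_instance(
    text, mi_text_fn, num_samples=S)
\end{verbatim}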
\subsection{Improving Efficiency}
\label{sec:fast}
Since running LIME on a black-box model usually requires running the model many times (equal to the LIME sample size $S$), it can be costly to treat $\textsc{UC}(M)$ or $\textsc{MI}(M)$ as black-box models and run LIME on them directly: running $\textsc{UC}(M)$ involves computing $\mathbb{E}_{x_1,x_2}(M(x_1,x_2))$, which requires $N^2$ forward passes where $N$ is the number of samples used for EMAP, so the total procedure of running \textsc{DIME}\ on one datapoint can take $O(SN^2)$ runs of $M$.
In order to make the process faster, we use the following algorithmic trick: we fix $N$ datapoints from the dataset, run $M$ on all $N^2$ combinations of the two modalities amongst the $N$ points, and store the resulting logits in an $N\times N\times C$ array $L$ (where $C$ is the number of classes in this task). When we want to run \textsc{DIME}\ on any one of those $N$ points (say, the $i$-th point), for each perturbed LIME sample (without loss of generality, suppose we are running LIME on modality 1, so modality 1 is perturbed in the LIME sample), we make a deep copy of $L$ called $L'$, re-run $M$ on the combinations of the perturbed modality-1 input and all $N$ modality-2 inputs, replace the values in the $i$-th row of $L'$ with the results, and compute $\textsc{UC}(M)$ on this LIME sample with the updated table $L'$. Using this trick, after amortizing the one-time initial $O(N^2)$ runs of $M$, each follow-up \textsc{DIME}\ run on any of the $N$ points only takes $O(SN)$ runs of $M$. See details in Algorithm 1 in the Appendix; a sketch of the per-sample update is given below.
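A minimal sketch of the amortized update for the MI side follows (the UC case is analogous); the interface is our assumption:
\begin{verbatim}
import numpy as np

def mi_on_perturbed(M, L, x1_pert, x2s, i):
    # L is the precomputed N x N x C logit table; x1_pert is one
    # perturbed modality-1 LIME sample for the i-th point.
    Lp = L.copy()
    Lp[i] = np.stack([M(x1_pert, b) for b in x2s])  # refresh row i
    uc = (Lp[:, i].mean(axis=0) + Lp[i].mean(axis=0)
          - Lp.mean(axis=(0, 1)))
    return Lp[i, i] - uc     # MI logits fed to LIME's linear fit
\end{verbatim}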
\section{Experiments}
In this section, we will perform a set of experiments to fully evaluate the reliability and usefulness of \textsc{DIME}\ in interpreting multimodal models. We will be using 3 datasets: a synthetic dataset, CLEVR~\cite{johnson2017clevr}, and VQA 2.0~\cite{balanced_vqa_v2}, and with one corresponding state-of-the-art model for each: MLP, MDETR~\cite{kamath2021mdetr} and LXMERT~\cite{lxmert}. When dealing with datasets involving image and text modalities, we will refer to the two modalities as $(V,T)$ respectively (e.g., $\textsc{UC}_V$ would refer to the \textsc{DIME}\ explanation on image unimodal contribution). Our experiments are designed to illustrate the following takeaway messages of using \textsc{DIME}\ to analyze multimodal models:
\begin{enumerate}
\item Our method can reliably disentangle the model and generate accurate explanations for both UC and MI, correlating highly with their respective ground truths (section~\ref{rq1}).
\item In more difficult tasks such as CLEVR and VQA, and with more complex models, \textsc{DIME}\ can still disentangle the model reliably. We show that changing the text input affects $\textsc{UC}_V$ (explanation on image unimodal contribution) little but affects $\textsc{MI}_V$ (explanation on multimodal interactions from the image side) significantly (section~\ref{rq1}).
\item \textsc{DIME}\ gives additional insight into understanding multimodal model behavior by answering whether the model relies mostly on UC, MI, or both in making the prediction (section~\ref{rq2}).
\item \textsc{DIME}\ also enables human users to debug and improve models by identifying which input features are used in MI and revealing undesirable behavior in models (section~\ref{rq3}).
\end{enumerate}
Following these results, we will discuss limitations and future works (section~\ref{limit}).
\subsection{Setup}
\subsubsection{Datasets}
We will use three datasets: a synthetic dataset to enable controlled variations between unimodal and multimodal interactions, as well as two large-scale multimodal datasets: CLEVR, and VQA 2.0.
The \textbf{synthetic dataset $D$} is designed to model a task that requires both unimodal (additive) contributions and multimodal interactions to solve correctly. According to prior work~\cite{hessel2020emap}, the dot product of two modalities requires non-additive cross-modal interaction, while the sum of two vectors is additive. Therefore, we design a synthetic dataset $D$ by randomly generating two $10$-dimensional vectors following $N(0,1)$ independently for each element, and then computing the sum of all elements in both vectors plus the dot product of the two vectors. If the result's absolute value is below $0.01$, we discard this point; otherwise, we assign a $0/1$ label based on the sign of the result. We generate $100,000$ points to form $D$ and divide it into train/valid/test splits by $8/1/1$ ratio.
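A sketch of this construction, with the stated dimensionality, threshold, and dataset size, is:
\begin{verbatim}
import numpy as np

def make_point(rng, eps=0.01):
    # Two independent N(0, 1) 10-d vectors; the label is the sign
    # of (sum of all elements of both vectors + their dot product).
    while True:
        d1, d2 = rng.standard_normal(10), rng.standard_normal(10)
        s = d1.sum() + d2.sum() + d1 @ d2
        if abs(s) >= eps:          # discard near-boundary points
            return d1, d2, int(s > 0)

rng = np.random.default_rng(0)
data = [make_point(rng) for _ in range(100000)]  # split 8/1/1
\end{verbatim}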
\textbf{CLEVR}~\cite{johnson2017clevr} is a diagnostic dataset designed for language and visual reasoning. The dataset consists of synthesized images of 3D shapes of various colors, sizes, and materials on a gray background. For each image, there are several questions about the shapes' attributes, positions, and numbers. This dataset has been widely used for diagnostic purposes to find model weaknesses.
\textbf{VQA 2.0}~\cite{balanced_vqa_v2} is a dataset containing various questions on real-world images. It is designed to force multimodal interactions, especially incorporating the visual aspect, by sometimes having the same question with two different answers on two different images. This dataset is interesting because models have been shown to occasionally ``guess'' correct answers purely from unimodal contributions or with the wrong visual grounding~\cite{cadene2019rubi,anand2018blindfold}. \textsc{DIME}\ will enable us to study how often models rely on undesirable unimodal biases and further understand the model's decision-making process.
\subsubsection{Models}
For synthetic dataset $D$, we train a \textbf{4-layer MLP} (with input size $20$ and hidden layer sizes $100,200,10,2$ respectively) on $D$ that reaches $97.3\%$ accuracy on the test split.
For the CLEVR dataset, we will be using a pretrained \textbf{MDETR}~\cite{kamath2021mdetr} that achieves $99.7\%$ test accuracy.
For VQA 2.0, we will be using pretrained \textbf{LXMERT}~\cite{lxmert}, one of the best models on the dataset, with a $72.5\%$ test accuracy.
\subsection{Research Questions and Results}
\input{tables/synth}
\input{tables/relia}
\subsubsection{\textbf{RQ1:} Can \textsc{DIME}\ reliably disentangle a model into unimodal contributions and multimodal interactions and generate accurate explanations for both UC and MI in practice?}
\label{rq1}
\
In section~\ref{sec:proof}, we have theoretically shown that \textsc{DIME}\ can disentangle a model into unimodal contributions and multimodal interactions. To show that this also holds in practice (when expectation computations are replaced by sampling), we will run \textsc{DIME}\ on our trained model $M$ using $1,000$ randomly selected datapoints in the test split of our synthetic dataset $D$, on label $1$ (i.e., that the sum of all elements of both vectors plus the dot product of the two vectors is positive).
For each point $(d_1,d_2)$ in $D$, since we are classifying whether the sum of all elements in $d_1$ and $d_2$ plus the dot product of $d_1$ and $d_2$ is positive, the ground truth UC explanation on each modality will be $d_1$ and $d_2$ respectively, and the ground truth MI explanation will be the element-wise product $d_1*d_2$. Therefore, for each generated explanation on input data $(d_1,d_2)$, we will compute the Pearson correlation between the explanation weights of the $10$ features and the values of the $10$ features of $d_1$, the values of the $10$ features of $d_2$, and the $10$ features of the element-wise product of $d_1$ and $d_2$. In addition to \textsc{DIME}, we also run LIME under the same settings as an ablation and compute average correlations; the per-point computation is sketched below.
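The per-point computation referenced above reduces to a few calls; the weight arrays \texttt{uc1\_w} and \texttt{mi1\_w} are assumed to hold the $10$ explanation weights ordered by feature index:
\begin{verbatim}
from scipy.stats import pearsonr

r_uc, _ = pearsonr(uc1_w, d1)       # vs. ground-truth UC of d1
r_mi, _ = pearsonr(mi1_w, d1 * d2)  # vs. the element-wise product
\end{verbatim}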
The results are shown in Table~\ref{tab:synth}. We found that within each datapoint ($d_1$,$d_2$), there is a strong correlation between each \textsc{DIME}-generated unimodal explanation ($\textsc{UC}_1, \textsc{UC}_2$) and the corresponding ground truth UC explanation, but there is neither correlation between $\textsc{UC}_1$/$\textsc{UC}_2$ and ground truth UC explanation of a different modality, nor correlation between $\textsc{UC}_1$/$\textsc{UC}_2$ and ground truth multimodal interaction explanations. This shows that \textsc{DIME}-generated UC explanations indeed capture unimodal contributions only. Moreover, we found that both \textsc{DIME}-generated multimodal interaction explanations ($\textsc{MI}_1, \textsc{MI}_2$) indeed correlate with the ground truth MI explanation, but not with either ground truth UC explanation. This shows that \textsc{DIME}-generated multimodal interaction explanation indeed captures explanations on just the multimodal interactions (i.e., the dot-product), and not any of the unimodal contributions. Meanwhile, running the original LIME on either modality just gives an explanation that weakly correlates with ground truth unimodal contributions and multimodal interactions, so the original LIME without disentangling is unable to give an accurate explanation of either unimodal contributions or multimodal interactions.
In addition to using a synthetic dataset, we show that \textsc{DIME}\ can also disentangle more complex models on multimodal tasks, such as MDETR on CLEVR and LXMERT on VQA (the latter model is far from perfect in performance). As a measure of disentanglement, we check how \textsc{DIME}-generated explanations would be different given the same image but different questions. From each dataset, we randomly select $100$ points and generate their \textsc{DIME}\ explanations on the correct label. Then, for each point, we swap out the question with another different question on the same image and generate their \textsc{DIME}\ explanations on the same label (i.e., correct label before the swap). We compute cosine distance between the explanation weights from $\textsc{UC}_V$ before/after the swap, as well as cosine distance between the weights from $\textsc{MI}_V$ before/after the swap, and report average cosine distances on each dataset in Table~\ref{tab:relia}. We can see that swapping text has almost no effect on $\textsc{UC}_V$ but affects $\textsc{MI}_V$ significantly. Therefore, \textsc{DIME}\ is able to correctly disentangle a model into unimodal contributions and multimodal interaction for more complex models and tasks.
\subsubsection{\textbf{RQ2:} Can \textsc{DIME}\ help researchers gain additional insight in whether unimodal contributions or multimodal interactions are the dominant factors behind a model's prediction?}
\label{rq2}
\
Disentangling the model into UC and MI and generating visualizations for each should provide additional insights into whether UC or MI is the main factor in the model's prediction. In the following experiments, we show that \textsc{DIME}\ can uncover which factor is dominant in a model's prediction process both across all points in the dataset (``global'') and on each individual datapoint (``local'').
\textbf{Global interpretation:} The CLEVR dataset is designed to force multimodal interactions, and MDETR has a $99.7\%$ accuracy on CLEVR, so we expect that MDETR will be heavily reliant on multimodal interactions. To verify this, we run \textsc{DIME}\ on MDETR for $100$ randomly sampled datapoints from the validation split of CLEVR, and compute the average absolute weight of the top-5 features in \textsc{DIME}\ explanations. As shown in Table~\ref{tab:weight}, the $\textsc{MI}_V$ and $\textsc{MI}_T$ weights are indeed significantly larger than the $\textsc{UC}_V$ and $\textsc{UC}_T$ weights. Note that unimodal text does still give some useful information in CLEVR, such as the answer type (yes/no, attribute, or number), which explains why $\textsc{UC}_T$ still has a weight of about $60\%$ that of $\textsc{MI}_T$. The average weight for $\textsc{MI}_V$, however, is over $4$ times higher than that of $\textsc{UC}_V$. Therefore, using \textsc{DIME}, we confirmed that MDETR indeed relies mostly on multimodal interactions to solve the task.
\input{tables/weight}
\textbf{Local interpretation:} In most datasets and models, models will not be near-perfect, and they will have different dominating factors from datapoint to datapoint. In this case, a global analysis will not suffice, and it will be necessary to look into which factor contributes more to the model's prediction on individual datapoints. We perform the following experiment to show that \textsc{DIME}\ can help users determine whether a model makes a prediction on a datapoint where (1) unimodal text is dominant, (2) unimodal image is dominant, (3) multimodal interactions are dominant, and (4) both UC and MI have significant contributions to the answer. We will use LXMERT on VQA since LXMERT is not close to perfect and often relies on different factors when predicting different datapoints.
We gave five human annotators (who have some background knowledge in machine learning but do not have any knowledge about \textsc{DIME}) the same set of $52$ datapoints from VQA, as well as the prediction from LXMERT. For each datapoint, each human annotator is first given the LIME explanations without disentanglement as a baseline, and they are asked to categorize this point into one of the four categories above, while also rating how confident they are on their decision on a scale from one (least confident) to five (most confident). The human annotators are then presented with \textsc{DIME}\ explanations, and again they are asked to categorize each point as well as rate their confidence.
The results are shown in Table~\ref{tab:newanno}. We can see that human annotators have significantly higher average confidence scores when presented with \textsc{DIME}\ explanations as compared to the baseline. Moreover, \textsc{DIME}\ result shows significantly higher Krippendorff's alpha score~\cite{krippendorff2011computing}, which measures inter-annotator agreements, so annotators also tend to agree a lot more on their categorizations. Therefore, \textsc{DIME}\ is able to help researchers more confidently determine whether UC or MI (or both) is the dominant factor behind the model's prediction, and thus help researchers gain additional insight into model behavior.
\input{tables/anno}
\subsubsection{\textbf{RQ3}: Can \textsc{DIME}\ help us better assess the qualities of the model and gain insights on how to debug or improve model performance?}
\label{rq3}
\
When trying to debug or improve a model on a task involving challenging reasoning, such as VQA, one important question researchers often ask is: do we know if our model actually learns to do the task ``the intended way'' (i.e., go through the same logical reasoning process as a human would to perform the task)? How often does our model perform as intended? Therefore, we conduct the following experiment to show that \textsc{DIME}\ may help answer this question.
We use \textsc{DIME}\ explanations to categorize the model's behavior on each datapoint into one of the following categories:
When the model answers correctly,
\begin{itemize}
\item (1) The model fully identifies the necessary parts of the image to answer the question logically through MI.
\item (2) The model only partially identifies the parts of the image that are necessary to answer the question logically through MI, and got it right with the help of unimodal contributions.
\item (3) The model did not correctly identify any of the parts of the image that are necessary to answer the question logically through MI. It got it right purely by unimodal contributions or by chance.
\end{itemize}
And when the model answers incorrectly,
\begin{itemize}
\item (4) The model fully identifies the necessary parts of the image to answer the question logically through MI, but still gets the answer wrong because the model does not fully understand a concept or because the question is too difficult (even for a human being).
\item (5) The model only partially identifies the parts of the image that are necessary to answer the question logically through MI, thus missing some of the key parts of the image resulting in an incorrect answer.
\item (6) The model did not correctly identify any of the parts of the image that are necessary to answer the question logically through MI, and thus the model fails to answer the question correctly.
\end{itemize}
In Figure~\ref{fig:veryinterestingexamples}, we show examples of datapoints, model predictions, and explanations that were annotated into each of the above categories. As shown in the examples, in most cases, there will be enough evidence to categorize a datapoint just by looking at the multimodal interaction explanations from the image side ($\textsc{MI}_V$), but sometimes other \textsc{DIME}\ explanations (e.g., explanations of text interactions) will be needed to gain additional understanding of the model's decision-making process.
\input{tables/vqa}
The results of this human study are shown in Table~\ref{tab:vqa}. With \textsc{DIME}, we were able to categorize $118$ points with evidence, out of a total of $140$ points $(84\%)$. This shows that \textsc{DIME}\ is able to highlight which input features are aligned or recognized by MI. We observe that, even though the model can fully identify the correct parts of the image that are relevant to the question over half of the time $(69/118)$, there is still a significant portion of datapoints where the model correctly aligns text and image but relies on unimodal contributions instead. This highlights several shortcomings of the model's decision-making process despite answering the question correctly. Therefore, the information gained from performing \textsc{DIME}\ can help researchers identify weaknesses in their models and debug or improve these models accordingly.
\begin{figure*}
\vspace{2mm}
\includegraphics[width=1.0\textwidth]{figs/newfig6.pdf}
\caption{Here we present examples of using \textsc{DIME}\ to categorize and explain why LXMERT makes certain predictions on datapoints in VQA 2.0. We present one example from each category. In most cases, only looking at the multimodal interaction explanations from the image side ($\textsc{MI}_V$) is sufficient to explain and categorize the model, but in certain cases, additional information from $\textsc{UC}_V$, $\textsc{UC}_T$, or $\textsc{MI}_T$ is needed as well. \textsc{DIME}\ enables researchers to gain understanding of the model's decision-making process which presents a step towards debugging and improving these models.}
\label{fig:veryinterestingexamples}
\end{figure*}
We also observe that, when the model makes a wrong prediction, it is more likely to have failed to fully identify the correct regions of the image, which is expected.
In addition, we also found the following interesting observations when looking at the \textsc{DIME}\ explanations of the $118$ points:
\begin{itemize}
\item LXMERT often relies too heavily on unimodal text contributions: for example, in a question involving ``car'', unimodal contributions in text will prompt the model to answer ``street'' even if the model is unable to find ``street'' in the image. Sometimes, even when the model is able to interactively identify the correct regions of the image, unimodal text contributions can still dominate over the multimodal interaction (such as the fourth example in Figure~\ref{fig:veryinterestingexamples}, where the model answered ``glove'' due to unimodal text contributions even though the model was able to interactively identify the bat).
\item The model sometimes interactively identifies the wrong object that happens to share the same properties in question as the correct object (such as the third example in Figure~\ref{fig:veryinterestingexamples}, where instead of the dog's paws, the model identified the nearby cat which also happens to be white). This coincidence happens more often than we expected, as there are $8$ such cases amongst the $118$ examples $(7\%)$.
\item When asked about the color of an object that has two colors, LXMERT will only pick out one of the colors. \textsc{DIME}\ analysis shows that this is often due to LXMERT only identifying subregions of the object in one color while ignoring other parts of the object that are in a different color. For example, in Figure~\ref{fig:hydrant}, the model thinks that the hydrant is not ``silver and red'' because it did not classify the red tip as part of the hydrant.
\end{itemize}
These additional observations may guide future research in improving LXMERT (and other similar models) or designing inductive biases to avoid these undesirable behaviors.
\begin{figure}
\centering
\vspace{2mm}
\includegraphics[width=0.44\textwidth]{figs/hydrant.pdf}
\caption{In this example, the model was unable to answer correctly because it did not recognize the red part in the image as part of the hydrant. As shown by the $\textsc{MI}_V$ explanation, the model actually counted the red region as evidence \emph{against} the answer ``silver and red'', indicating that it did not consider this region part of the hydrant.}
\label{fig:hydrant}
\end{figure}
\subsection{Limitations and Future Directions}
\label{limit}
Despite the ability of \textsc{DIME}\ to interpret and debug multimodal models, there remain several directions for future work:
\textbf{1. Models with discrete outputs:} Even though \textsc{DIME}\ is designed to work for any black-box classification models, it requires the model to produce a continuous logit for each answer choice. \textsc{DIME}\ does not work well on the Neural-Symbolic VQA model~\cite{Mao2019NeuroSymbolic} since it only produces one discrete output instead of a continuous logit. Even when we tried to convert its outputs to logits by assigning its answer a logit of 1 and all other answer choices a logit of $-1$, \textsc{DIME}\ often fails to produce any meaningful explanation since the perturbations are unable to change the discrete answer of the model, thus having no effect on the assigned logits.
\textbf{2. Number of modalities:} In all experiments, \textsc{DIME}\ was applied to tasks with 2 modalities. Disentangling a model across even 3 modalities can be very costly, as we will need to run the model $N^3$ times to compute unimodal contributions. Another challenge lies in interpreting the multimodal interaction, which would consist of bi-modal interactions between each pair of modalities as well as tri-modal interactions across all 3 modalities. Future work should tackle these challenges and try to expand \textsc{DIME}\ for high-modality scenarios.
\textbf{3. Diverse modalities:} Even though the disentangling method in \textsc{DIME}\ theoretically works on any modality, our experiments have focused on image+text datasets (except the synthetic dataset experiment). This is because LIME-generated visualized explanations are relatively intuitive for image and text; it can be much harder for a human annotator to look at the results of explanations on other modalities (such as time-series of vectors) and try to make sense of them. In the future, we would like to design additional experiments to show that \textsc{DIME}\ can also be used to gain additional insight into model behavior in tasks involving modalities other than image and text.
\textbf{4. Using these insights to improve models:} Since \textsc{DIME}\ is able to reveal several hidden undesirable behaviors in multimodal models, future work should aim to propose targeted solutions to these highlighted biases as a step towards improving multimodal models. For example, according to insights gained on VQA in RQ3, LXMERT can be improved by encouraging less reliance on unimodal text contribution, where insights from~\citet{cadene2019rubi} (which studies this research question for non-pretrained models) could be useful. Furthermore, future work could also design new training objectives which penalize models that associate wrong objects with words in MI, despite getting the correct answer.
\section{Conclusion}
In conclusion, \textsc{DIME}\ presents a new way to help users understand multimodal models by disentanglement into unimodal contributions and multimodal interactions before generating visual explanations for each. \textsc{DIME}\ can generate accurate disentangled explanations, help researchers and developers gain a deeper understanding of model behavior, and presents a step towards debugging and improving these models. We hope that \textsc{DIME}\ inspires the design of multimodal models that are more trustworthy, reliable, and robust for real-world applications.
\section*{Acknowledgements}
This material is based upon work partially supported by the National Science Foundation (Awards \#1722822 and \#1750439) and National Institutes of Health (Awards \#R01MH125740, \#R01MH096951, and \#U01MH116925).
PPL is partially supported by a Facebook PhD Fellowship and a Carnegie Mellon University's Center for Machine Learning and Health Fellowship. RS is partially supported by NSF IIS1763562 and ONR Grant N000141812861.
Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, National Institutes of Health, Facebook, Carnegie Mellon University's Center for Machine Learning and Health, or Office of Naval Research, and no official endorsement should be inferred. We are extremely grateful to Gunjan Chhablani, Martin Ma, Chaitanya Ahuja, Volkan Cirik, Peter Wu, Amir Zadeh, Alex Wilf, Victoria Lin, Dong Won Lee, and Torsten W\"{o}rtwein for helpful discussions and feedback on initial versions of this paper. Finally, we would also like to acknowledge NVIDIA's GPU support.
\section{Introduction}
Transverse-momentum-dependent (TMD) parton distribution functions (PDF)s
(abbreviated in what follows by the term ``TMD'')
accumulate information about the intrinsic 3-dimensional motion of
partons in a hadron \cite{S79}.
They depend, therefore, on the longitudinal ($x = k^+/p^+$), as well
as on the transverse $({\bm k}_\perp)$ momentum fractions of a
given parton.
Trying to work out a consistent operator definition of TMDs, one
encounters the puzzle of emergent divergences \cite{CS82, Col08}.
These, being hidden in the case of collinear PDFs, become visible in
the TMDs and jeopardize, in particular, their renormalizability
\cite{CS07, CS08, CS09, SC09, CKS10}.
In the present work, we explore the issue of extra rapidity divergences
in the TMDs in leading $\alpha_s$-order and describe a consistent
method to take care of them.
We start from the definition of a TMD (of a quark with flavor $i$ in
a hadron $h$) that respects gauge invariance and collinear factorization
on the tree-level \cite{CSS89, Col03, JY02, BJY03, BMP03, BR05},
but has no concern with any singularities --- as these arise only in the
one-loop corrections:
\begin{eqnarray}
&& {\cal F}_{i/h}^{\rm tree} \left(x, {\bm k}_\perp\right)
=
\frac{1}{2}
\int \frac{d\xi^- d^2{\bm \xi}_\perp}{2\pi (2\pi)^2}
{\rm e}^{-ik^{+}\xi^{-} + i {\bm k}_\perp
\cdot {\bm \xi}_\perp}
\left\langle
h |\bar \psi_i (\xi^-, {\bm \xi}_\perp)
[\xi^-, {\bm \xi}_\perp;
\infty^-, {\bm \xi}_\perp]^\dagger
\right. \nonumber \\
&& \left. \times
[\infty^-, {\bm \xi}_\perp;
\infty^-, {\bm \infty}_\perp]^\dagger
\gamma^+[\infty^-, {\bm \infty}_\perp;
\infty^-, \mathbf{0}_\perp]
[\infty^-, \mathbf{0}_\perp; 0^-,\mathbf{0}_\perp]
\psi_i (0^-,\mathbf{0}_\perp) | h
\right\rangle \ \, .
\label{eq:tmd_naive}
\end{eqnarray}
Tree-level gauge invariance is ensured by the inserted gauge links
(path-ordered Wilson-line operators) having the generic form
\begin{equation}
{ [y,x|\Gamma] }
=
{\cal P} \exp
\left[-ig\int_{x[\Gamma]}^{y}dz_{\mu} {\cal A}^\mu (z)
\right] \ ,
\label{eq:link}
\end{equation}
where ${\cal A} \equiv t^a A^a$.
The transverse gauge links, extending to light-cone infinity
\cite{JY02, BJY03, BMP03}, are also included in (\ref{eq:tmd_naive}).
Beyond the tree level, the function (\ref{eq:tmd_naive}) will be shown
to be dependent on the renormalization scale $\mu$ and the rapidity
cutoff $\eta$.
We assume that any soft and collinear singularities can be properly
factorized out and treated by means of the standard procedure,
so that we need not consider them further.
Thus, we only concentrate on the unusual divergences which are
specific to the TMD case.
It was shown in \cite{CS08} that in the light-cone gauge, the
anomalous divergent term containing the overlapping
(${\rm UV} \otimes {\rm rapidity}$) singularity stems from the
virtual-gluon contribution
\begin{equation}
\Sigma^{\rm LC}_{\rm virt.}
=
- \frac{\alpha_s}{\pi} C_{\rm F} \ \Gamma(\epsilon)\
\left[ 4 \pi \frac{\mu^2}{-p^2} \right]^\epsilon\
\delta (1-x) \delta^{(2)} (\bm k_\perp)\ \int_0^1\!
du \frac{(1-u)^{1-\epsilon}}{u^\epsilon [u]_\eta} \ ,
\label{eq:sigma-lc}
\end{equation}
where the UV divergence is treated within the dimensional-regularization
$\omega = 4 - 2\epsilon$ approach, while the rapidity divergence in the
gluon propagator in the light-cone gauge is regularized by the parameter
$\eta$, entailing the following regularization of the last integral in
Eq.\ (\ref{eq:sigma-lc}):
\begin{equation}
\frac{1}{[u]_{\eta}^{\rm Ret. / Adv. / P.V.} }
 = \left[ \frac{1}{u + i \eta}\ , \quad\quad
          \frac{1}{u - i \eta}\ , \quad\quad
          \frac{1}{2}\left(\frac{1}{u+i \eta}
          +\frac{1}{u - i \eta}\right) \right]\ .
\end{equation}
Within this approach, one can extract from Eq.\ (\ref{eq:sigma-lc})
the UV-divergent part and obtain the overlapping singularity in the
logarithmic form
\begin{equation}
{\Sigma}^{\rm LC}_{\rm virt.}
=
{
- \frac{\alpha_s}{\pi}C_{\rm F} \ \frac{1}{\epsilon}
\left[- \frac{3}{4} - \ln \frac{\eta}{p^+} + \frac{i\pi}{2}
\right] } + [{\rm UV \ finite\ part}]\ ,
\end{equation}
where the contribution of the transverse link is taken into account,
while the mirror diagram is omitted (see \cite{CS08} for technical
details).
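To see schematically where the rapidity logarithm originates, one may set
$\epsilon = 0$ and keep the leading small-$\eta$ behavior of the last
integral (an illustrative estimate for the advanced prescription, not the
full calculation):
\begin{equation}
\int_0^1 \frac{du}{u - i\eta}
 = \ln (1 - i\eta) - \ln (-i\eta)
 \ \simeq \ - \ln \eta + \frac{i\pi}{2} \ ,
\end{equation}
so the $\eta$-regularization trades the endpoint (rapidity) divergence at
$u = 0$ for the logarithm and the imaginary part appearing in the
UV-divergent bracket above; the scale $p^+$ enters through the definition
of the cutoff and renders the argument of the logarithm dimensionless.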
The exact form of the overlapping singularity hints at the
form of the additional soft factor which must be introduced into the
definition of the TMD (\ref{eq:tmd_naive}), if one wants to extend it
beyond the tree level in order to render it renormalizable and free
of undesirable divergences --- at least at one loop \cite{CS07, CS08}.
Hence, a generalized renormalization procedure has been formulated
\cite{KR87} in terms of a soft factor supplementing the tree-level
TMD, i.e.,
\begin{equation}
{\cal F}^{\rm tree}(x, {\bm k}_\perp)
\to
{\cal F} (x, {\bm k}_\perp; \mu, \eta, \epsilon) \times R^{-1}
(\mu, \eta, \epsilon) \ ,
\end{equation}
so that the above expression is free of overlapping divergences and can
be renormalized by means of the standard $R$-operation.
Within this framework, the introduction of the small parameter $\eta$
allows one to keep the overlapping singularities under control and
treat the extra term in the UV-divergent part via the cusp anomalous
dimension, which in turn determines the specific form of the gauge
contour in the soft factor $R$.
It is worth comparing the result obtained in the light-cone gauge with
the calculation in covariant gauges.
In Ref.\ \cite{CS82}, it was shown that the virtual-gluon exchange
between the quark line and the light-like gauge link
(this graph is obviously absent in the light-cone gauge) yields
(in the dimensional regularization)
\begin{equation}
\Sigma_{\rm virt.}^{\rm cov.}
=
- \frac{\alpha_s}{\pi} C_{\rm F} \Gamma (\epsilon) \left[4\pi \frac{\mu^2}{-p^2} \right]^{\epsilon} \
\delta (1-x) \delta^{(2)} (\bm k_\perp)\ \int_0^1\! du \ \frac{u^{1-\epsilon}}{(1-u)^{1+\epsilon}} \ .
\label{eq:sigma-cov}
\end{equation}
This expression contains the double pole $1/\epsilon^2$, which is not
compensated by the real counterpart in the TMD case, while in collinear
PDFs, such a compensation does indeed take place.
Going back to our Eq.\ (\ref{eq:sigma-lc}), we observe that without
the $\eta$-regularization of the last integral and after a trivial change
of variables it is reduced to Eq.\ (\ref{eq:sigma-cov}) and reads
\begin{equation}
\Sigma_{\rm virt.}^{\rm cov.} (\epsilon)
=
\Sigma_{\rm virt.}^{\rm LC} (\epsilon, \eta = 0) \ .
\end{equation}
The latter result allows us to conclude that the generalized
renormalization procedure, described above, is gauge invariant and
regularization-independent: in principle, one can use dimensional
regularization to take care of the overlapping singularities as well.
However, in the latter case the structure of extra divergences is much
less transparent and one cannot infer the specific
form of the soft factor.
Let us note that the applicability of dimensional regularization
for a consistent treatment of the divergences arising in the path-dependent
gauge-invariant two-quark correlation function was studied in
Ref.\ \cite{Ste83}.
A further obstacle arises in the soft factor.
Evaluating the one-loop graphs in the light-cone gauge, one finds the
expression
\begin{equation}
\Sigma^{\rm LC}_{\rm soft}(\epsilon, \eta)
=
i g^2 \mu^{2\epsilon} C_{\rm F}2 p^+ \ \int\! \frac{d^\omega q}{(2\pi)^\omega}
\frac{1}{q^2 (q^- \cdot p^+ - i0) [q^+]_\eta} \ ,
\label{eq:soft}
\end{equation}
which contains a new singularity that cannot be circumvented by
dimensional regularization or by the $\eta$-cutoff, leading to
\begin{equation}
\Sigma^{\rm LC}_{\rm soft}(\epsilon, \eta)
=
- \frac{\alpha_s}{\pi} C_{\rm F} \left[\frac{4\pi \mu^2}{\lambda^2}\right]^\epsilon
\Gamma(\epsilon) \
\int_0^1 dx \frac{x}{x^2 [ x-1 ]_\eta} \ ,
\label{eq:soft_int}
\end{equation}
where $\lambda$ is the IR regulator.
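To make the nature of this divergence explicit, note that near the lower
endpoint the integrand in Eq.\ (\ref{eq:soft_int}) behaves as
\begin{equation}
\frac{x}{x^2 [ x-1 ]_\eta} \ \stackrel{x \to 0}{\longrightarrow} \ - \frac{1}{x} \ ,
\end{equation}
so the integral diverges logarithmically at $x = 0$ for any value of
$\eta$, which regulates only the opposite endpoint $x = 1$, while the
dimensional regulator acts only on the transverse integration.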
In our previous paper \cite{CS08}, we argued that this divergence
is irrelevant, since it does not affect the rapidity evolution.
However, for the sake of completeness, we propose here a procedure
which allows one to remove this divergence in a proper way.
Taking into account that the extra singularity is cusp-independent,
we conclude that it represents the self-energy of the Wilson line,
evaluated along a ``straightened'' path, i.e., assuming that the cusp
angle becomes very small: $p^+ \to \eta$.
Subtraction of this self-energy part is presented graphically in
Fig.\ 1.
Note that there is no need to introduce additional parameters in this
subtraction.
Moreover, it has a clear physical interpretation: only an irrelevant
contribution due to the self-energy of the light-like gauge links is
removed, which is merely part of the unobservable background.
\begin{figure}[h]
\includegraphics[scale=0.7,angle=90]{soft_renorm}
\caption{Subtraction of the Wilson-line self-energy contribution in
the soft factor.}
\end{figure}
Therefore, the ``completely subtracted'' generalized definition of the
TMD reads
\begin{eqnarray}
&& {\cal F} \left(x, {\bm k}_\perp; \mu, \eta\right)
=
\frac{1}{2}
\int \frac{d\xi^- d^2{\bm \xi}_\perp}{2\pi (2\pi)^2}
{\rm e}^{-ik^{+}\xi^{-} +i {\bm k}_\perp
\cdot {\bm \xi}_\perp}
\left\langle
h |\bar \psi_i (\xi^-, {\bm \xi}_\perp)
[\xi^-, {\bm \xi}_\perp;
\infty^-, {\bm \xi}_\perp]^\dagger \right. \nonumber \\
&& \left.
\times
[\infty^-, {\bm \xi}_\perp;
\infty^-, {\bm \infty}_\perp]^\dagger
\gamma^+[\infty^-, {\bm \infty}_\perp;
\infty^-, \mathbf{0}_\perp]
[\infty^-, \mathbf{0}_\perp; 0^-,\mathbf{0}_\perp]
\psi_i (0^-,\mathbf{0}_\perp) | h
\right\rangle
R^{-1} \ ,
\label{eq:general}
\end{eqnarray}
\begin{eqnarray}
&& R^{-1}(\mu, \eta)
= \nonumber \\
&& \frac{\langle 0
| \ {\cal P}
\exp\Big[ig \int_{\mathcal{C}_{\rm cusp}}\! d\zeta^\mu
\ {\cal A}^\mu (\zeta)
\Big] \cdot
{\cal P}^{-1}
\exp\Big[- ig \int_{\mathcal{C'}_{\rm cusp}}\! d\zeta^\mu
\ {\cal A}^\mu (\xi + \zeta)
\Big]
{| 0
\rangle } }
{ \langle 0
| \ {\cal P}
\exp\Big[ig \int_{\mathcal{C}_{\rm smooth}}\! d\zeta^\mu
\ {\cal A}^\mu (\zeta)
\Big] \cdot
{\cal P}^{-1}
\exp\Big[- ig \int_{\mathcal{C'}_{\rm smooth}}\! d\zeta^\mu
\ {\cal A}^\mu (\xi + \zeta)
\Big]
| 0
\rangle } \ ,
\end{eqnarray}
where the cusped and smooth contours are presented in Fig.\ 1.
To conclude, we have demonstrated that the generalized definition of
the TMD (\ref{eq:general}) is completely gauge- and
regularization-invariant, renormalizable and free of any kind of
emergent overlapping divergences, including those produced by the
artifacts of the soft factor --- at least in leading $\alpha_s$-order.
For completeness, one has yet to prove that this definition is part of
a TMD factorization theorem (see \cite{JMY04} for an example of such an
explicit proof in covariant gauges with gauge links shifted from the
light-cone, and the discussion in Ref.\ \cite{CRS07}),
and to clarify the relationship of our approach (in particular, the
precise form of the soft factors, which might vary between different
schemes) to other approaches to the operator definitions of
TMDs (e.g., Refs.\ \cite{CM04, CH00}).
This issue is left for future work.
\paragraph{Acknowledgments}
I.O.Ch. is grateful to the Organizers of the Workshop ``Diffraction 2010''
for the invitation and to the INFN for financial support.
\subsection*{Abstract}
When genetic variants in a gene cluster are associated with a disease outcome, the causal pathway from the variants to the outcome can be difficult to disentangle. For example, the chemokine receptor gene cluster contains genetic variants associated with various cytokines. Associations between variants in this cluster and stroke risk may be driven by any of these cytokines. Multivariable Mendelian randomization is an extension of standard univariable Mendelian randomization to estimate the direct effects of related exposures with shared genetic predictors. However, when genetic variants are clustered, a Goldilocks dilemma arises: including too many highly correlated variants in the analysis can lead to ill-conditioning, but pruning variants too aggressively can lead to imprecise estimates or even lack of identification. We propose multivariable methods that use principal component analysis to reduce many correlated genetic variants into a smaller number of orthogonal components that are used as instrumental variables. We show in simulations that these methods result in more precise estimates that are less sensitive to numerical instability due to both strong correlations and small changes in the input data. We apply the methods to demonstrate that the most likely causal risk factor for stroke at the chemokine gene cluster is monocyte chemoattractant protein-1.
\vspace{3mm}
\noindent \noindent \textbf{Key words:} Mendelian randomization, gene cluster, correlated variants, dimension reduction, causal inference.
\clearpage
\section*{Introduction}
Genetic variants associated with molecular and phenotypic traits can provide evidence on the causal pathways linking the associated trait with a disease outcome \citep{burgess2017gwas}. Various analytical approaches, including Mendelian randomization and colocalization, have been proposed that synthesize data on genetic associations to assess the nature of the relationship between a trait and a disease. In Mendelian randomization, it is assumed that the only pathway by which selected genetic variants influence the outcome is via the associated trait \citep{lawlor2007}. Formally, genetic variants are assumed to satisfy the assumptions of an instrumental variable \citep{didelez2007}. If multiple independent variants associated with the same trait show a consistent pattern of associations with the outcome, then it is plausible that the trait has a causal effect on the outcome \citep{bowden2015median, lawlor2016triangulation}.
We here consider an extension to standard Mendelian randomization known as multivariable Mendelian randomization, which allows genetic variants to be associated with multiple related traits \citep{burgess2014pleioaje}. For instance, it is difficult to find specific genetic predictors of fat-free mass that are not also associated with fat mass. Multivariable Mendelian randomization can be implemented by fitting a multivariable regression model using genetic associations with each of the traits as predictors \citep{sanderson2018}. Coefficients from this model represent direct effects; that is, the effect of varying one of the traits in the model while keeping other traits constant \citep{burgess2017summnetwork, carter2019}. Such investigations have suggested that fat mass rather than fat-free mass influences cardiovascular disease risk \citep{larsson2020body}, and that, amongst a set of lipid traits, apolipoprotein B is the primary driver of coronary heart disease risk \citep{zuber2021}.
Often, Mendelian randomization investigations are conducted using genetic variants from a single genetic region, an approach known as cis-Mendelian randomization \citep{schmidt2020}. This approach is particularly common when the risk factor is a gene product, such as gene expression or circulating levels of a protein. Such analyses are somewhat fragile, as the evidence is based on a single genetic region and so it is not possible to assess heterogeneity of findings across multiple genetic regions that represent independent datapoints \citep{burgess2020guidelines}. However, if the function of the gene is well-understood, these analyses can be particularly insightful into the impact of intervening on a specific biological pathway. In some cases, the function of the gene may correspond to the action of a pharmacological agent, and hence the analysis is informative about a druggable pathway \citep{gill2021wellcome}. Examples include the use of variants in the \emph{HMGCR} gene region to inform about the impact of statin drugs \citep{ference2015}, and variants in the \emph{IL6R} gene region about the impact of interleukin-6 receptor inhibitors, such as tocilizumab \citep{sarwar2012}.
However, some genetic regions contain multiple genes (referred to as a gene cluster), and so are associated with multiple gene products. For example, the \emph{FADS} locus contains genetic predictors of various fatty acids \citep{lattka2010}, and the \emph{IL1RL1}--\emph{IL18R1} locus (the interleukin-1 receptor cluster) contains protein quantitative trait loci (pQTLs) for several proteins \citep{sun2017}. While variants in the interleukin-1 cluster are associated with several autoimmune diseases \citep{timms2004, zhu2008}, it is difficult to determine which of the proteins are causal risk factors \citep{reijmerink2010}. Although a multivariable cis-Mendelian randomization approach has been proposed to disentangle complex gene regions and identify the causal drivers of disease \citep{porcu2019}, authors of this approach suggest pruning genetic variants to near independence ($r^2 < 0.1$) to avoid potential problems of collinearity. However, it may not be possible to find sufficient near-independent variants for the multivariable regression model to give precise estimates for each trait. While it is possible to prune at a less strict threshold, we have previously shown that under-pruning can result in ill-conditioning \citep{burgess2017fine}. This represents a Goldilocks dilemma: too much pruning and we get imprecision or even lack of identification; too little pruning and we can get results that are highly sensitive to small changes in the estimated correlation matrix, and can be nonsensical.
We propose two methods for multivariable cis-Mendelian randomization that perform dimension reduction on the available genetic variants at a locus using principal component analysis (PCA). These methods reduce information on large numbers of highly correlated variants into a smaller number of orthogonal components, allowing efficient multivariable analyses to be implemented that are not so sensitive to high correlations between variants or small changes in the inputs. We demonstrate the superiority of these methods over pruning methods in a simulation study, and illustrate the methods in a applied analysis investigating effects on stroke risk of three cytokines associated with a gene cluster on chromosome 17.
\section*{Methods}
\subsection*{Overview of the approach}
Multivariable Mendelian randomization takes genetic variants that are each associated with at least one of a set of related exposure traits, and satisfy the instrumental variable assumptions for multivariable Mendelian randomization:
\begin{itemize}
\item[i.] each variant is associated with one or more of the exposures,
\item[ii.] each exposure is associated with one or more of the genetic variants,
\item[iii.] each variant is not associated with the outcome via a confounding pathway, and
\item[iv.] each variant does not affect the outcome directly, only possibly indirectly via one or more of the exposures.
\end{itemize}
Although the approach was originally developed for use with individual-level data using the established two-stage least squares method \citep{burgess2014pleioaje}, equivalent estimates can be obtained using summarized genetic association data that are typically reported by genome-wide association studies (GWAS) \citep{sanderson2018}. We use summarized genetic association data \citep{bowden2017}, and denote the genetic association of variant $j$ with exposure trait $k$ as $\hat{\beta}_{Xjk}$; this is the beta-coefficient from univariable regression of the trait on the variant. We denote the genetic association of variant $j$ with the outcome as $\hat{\beta}_{Yj}$ and its standard error as $\se(\hat{\beta}_{Yj})$; again, this is obtained from regression of the outcome on the variant.
\subsection*{Inverse-variance weighted method}
If the genetic variants are uncorrelated, then multivariable Mendelian randomization estimates can be obtained by fitting a multivariable model using weighted linear regression:
\begin{equation}
\hat{\beta}_{Yj} = \theta_{1} \; \hat{\beta}_{Xj1} + \theta_{2} \; \hat{\beta}_{Xj2} + \ldots + \theta_{K} \; \hat{\beta}_{XjK} + \epsilon_{j} \qquad \epsilon_{j} \sim \mathcal{N}(0, \se(\hat{\beta}_{Yj})^2)
\end{equation}
for variants $j = 1, 2, \ldots, J$, where $K$ is the total number of traits ($J > K$), and the error terms $\epsilon_{j}$ have independent normal distributions \citep{burgess2014pleioajeappendix}. The parameter $\theta_k$ is an estimate of the direct effect of the $k$th trait on the outcome (that is, the effect of intervening on that trait while keeping all other traits unchanged) \citep{carter2019}. We refer to this method as the multivariable inverse-variance weighted (MV-IVW) method, as it is an extension of the univariable IVW method \citep{burgess2013genepi} to the multivariable setting.
If the genetic variants are correlated, then we allow the error terms to be correlated and use generalized weighted linear regression:
\begin{equation}
\mathbf{\hat{\boldsymbol\beta}_{Y}} = \theta_{1} \; \mathbf{\hat{\boldsymbol\beta}_{X1}} + \theta_{2} \; \mathbf{\hat{\boldsymbol\beta}_{X2}} + \ldots + \theta_{K} \; \mathbf{\hat{\boldsymbol\beta}_{XK}} + \mathbf{\boldsymbol\epsilon} \qquad \mathbf{\boldsymbol\epsilon} \sim \mathcal{N}(\boldsymbol0, \Sigma)
\end{equation}
where bold face represents vectors, and $\Sigma$ is a variance-covariance matrix with elements $\Sigma_{j_1, j_2} = \se(\hat{\beta}_{Yj_1}) \; \se(\hat{\beta}_{Yj_2}) \; \rho_{j_1, j_2}$, with $\rho_{j_1, j_2}$ representing the correlation between the $j_1$th and $j_2$th variants. This method was advocated by Porcu \emph{et al} \citep{porcu2019} for the analysis of summarized genetic association data on gene expression traits with shared genetic predictors. Estimates are obtained as:
\begin{equation}
\mathbf{\hat{\boldsymbol\theta}_{MV-IVW}} = (\mathbf{\hat{\boldsymbol\beta}_{X}}^T \Sigma^{-1} \mathbf{\hat{\boldsymbol\beta}_{X}})^{-1} \mathbf{\hat{\boldsymbol\beta}_{X}}^T \Sigma^{-1} \mathbf{\hat{\boldsymbol\beta}_{Y}}
\end{equation}
where $\mathbf{\hat{\boldsymbol\beta}_{X}}$ is the $J$ by $K$ matrix of genetic associations with the traits, and $\mathbf{\hat{\boldsymbol\beta}_{Y}}$ is the $J$ by $1$ vector of genetic associations with the outcome.
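As a minimal numerical sketch of this estimator (not the authors' code; the input arrays \texttt{bx}, \texttt{by}, \texttt{se\_by}, and \texttt{rho} are hypothetical stand-ins for the summarized data described above):
\begin{verbatim}
import numpy as np

def mv_ivw(bx, by, se_by, rho):
    # bx:    (J, K) genetic associations with the K exposure traits
    # by:    (J,)   genetic associations with the outcome
    # se_by: (J,)   standard errors of by
    # rho:   (J, J) correlation matrix of the genetic variants
    Sigma = np.outer(se_by, se_by) * rho      # variance-covariance matrix
    Sigma_inv = np.linalg.inv(Sigma)          # unstable if rho is near-singular
    A = bx.T @ Sigma_inv @ bx                 # (K, K)
    theta = np.linalg.solve(A, bx.T @ Sigma_inv @ by)
    se_theta = np.sqrt(np.diag(np.linalg.inv(A)))
    return theta, se_theta
\end{verbatim}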
This calculation requires inversion of the variance-covariance matrix $\Sigma$, which can lead to numerical instability if the matrix of correlations between genetic variants is near-singular. This occurs when there is a set of genetic variants that is close to being linearly dependent. If a set of genetic variants is linearly dependent (that is, there is at least one variant that can be perfectly predicted based on the other variants), then the correlation matrix will be exactly singular, and so cannot be inverted. If a set of genetic variants is almost but not exactly linearly dependent, then the correlation matrix can be inverted, but some elements of the matrix inverse will be very large in magnitude. This results in an ill-conditioned problem, meaning that small changes in the inputs can lead to large changes in the estimates. The condition number of a matrix is a measure of ill-conditioning; for a positive-definite symmetric matrix, this can be calculated as the ratio of the largest to the smallest eigenvalue. While there are no universal thresholds, condition numbers over 100 are cause for concern, particularly if the genetic associations are known to a limited degree of precision.
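Continuing the sketch above, the conditioning of the hypothetical matrix \texttt{Sigma} can be checked directly:
\begin{verbatim}
import numpy as np

# Sigma as constructed in the mv_ivw sketch above;
# eigvalsh returns the eigenvalues of a symmetric matrix in ascending order
eigvals = np.linalg.eigvalsh(Sigma)
condition_number = eigvals[-1] / eigvals[0]  # largest over smallest
# Values much above ~100 warn that estimates may be numerically unstable
\end{verbatim}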
To implement the proposed PCA method, we first consider the matrix $\Psi$ where:
\begin{equation}
\Psi_{j_1, j_2} = \sum_k |\hat{\beta}_{Xj_1k}| \; \sum_k |\hat{\beta}_{Xj_2k}| \; \se(\hat{\beta}_{Yj_1})^{-1} \; \se(\hat{\beta}_{Yj_2})^{-1} \; \rho_{j_1, j_2}.
\end{equation}
This is a weighted version of the variance-covariance matrix, with weights taken as the sum of the absolute values of the genetic associations with the traits. Obtaining principal components of this matrix ensures that the top principal components assign greater weight to variants having larger associations with the traits and more precisely estimated associations with the outcome. Considering the PCA decomposition $\Psi = W \Lambda W^T$, where $W$ is the matrix of eigenvectors (or loadings) and $\Lambda$ is the diagonal matrix with the eigenvalues $\lambda_1 > \ldots > \lambda_J$ on the diagonal, let $W_k$ be the matrix constructed of the first $k$ columns of $W$. Then we define:
\begin{align*}
\mathbf{\tilde{\boldsymbol\beta}_X} &= W_k^T \mathbf{\hat{\boldsymbol\beta}_{X}} \mbox{ as the matrix of transformed genetic associations with the exposure traits} \\
\mathbf{\tilde{\boldsymbol\beta}_Y} &= W_k^T \mathbf{\hat{\boldsymbol\beta}_{Y}} \mbox{ as the vector of transformed genetic associations with the outcome} \\
\tilde{\Sigma} &= W_k^T \Sigma W_k \mbox{ as the transformed variance-covariance matrix.}
\end{align*}
The multivariable inverse-variance weighted principal component analysis (MV-IVW-PCA) estimate is given by:
\begin{equation}
\mathbf{\hat{\boldsymbol\theta}_{MV-IVW-PCA}} = (\mathbf{\tilde{\boldsymbol\beta}_{X}}^T \tilde{\Sigma}^{-1} \mathbf{\tilde{\boldsymbol\beta}_{X}})^{-1} \mathbf{\tilde{\boldsymbol\beta}_{X}}^T \tilde{\Sigma}^{-1} \mathbf{\tilde{\boldsymbol\beta}_{Y}}
\end{equation}
This is an adaptation of the MV-IVW method using transformed genetic instruments that represent linear weighted scores comprised of the original genetic variants, where the weights of the scores are the eigenvectors from the PCA decomposition. As the principal components are orthogonal, the transformed variance-covariance matrix should not be near-singular. The standard errors of these estimates are:
\begin{equation}
\se(\hat{\theta}_{k, MV-IVW-PCA}) = \sqrt{(\mathbf{\tilde{\boldsymbol\beta}_{X}}^T \tilde{\Sigma}^{-1} \mathbf{\tilde{\boldsymbol\beta}_{X}})^{-1}_{kk}}.
\end{equation}
In our investigations, we set the number of principal components to explain 99\% of the variance in the matrix $\Psi$.
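A sketch of this procedure in code (same hypothetical inputs as the earlier MV-IVW sketch):
\begin{verbatim}
import numpy as np

def mv_ivw_pca(bx, by, se_by, rho, var_explained=0.99):
    Sigma = np.outer(se_by, se_by) * rho
    # Weighted matrix Psi: summed absolute trait associations over se_by
    w = np.sum(np.abs(bx), axis=1) / se_by
    Psi = np.outer(w, w) * rho
    lam, W = np.linalg.eigh(Psi)               # ascending eigenvalues
    lam, W = lam[::-1], W[:, ::-1]             # reorder: largest first
    # Smallest k whose components explain the target share of variance
    k = int(np.searchsorted(np.cumsum(lam) / lam.sum(), var_explained)) + 1
    Wk = W[:, :k]
    bx_t, by_t = Wk.T @ bx, Wk.T @ by          # transformed associations
    Sigma_t = Wk.T @ Sigma @ Wk                # transformed covariance
    Sigma_t_inv = np.linalg.inv(Sigma_t)
    A = bx_t.T @ Sigma_t_inv @ bx_t
    theta = np.linalg.solve(A, bx_t.T @ Sigma_t_inv @ by_t)
    se_theta = np.sqrt(np.diag(np.linalg.inv(A)))
    return theta, se_theta
\end{verbatim}
Because the transformed instruments are orthogonal components, \texttt{Sigma\_t} is far better conditioned than \texttt{Sigma} itself.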
\subsection*{Limited information maximum likelihood method}
An alternative method for instrumental variable analysis is the limited information maximum likelihood (LIML) method \citep{baum2007}. The estimate from the LIML method minimizes the Anderson--Rubin statistic \citep{anderson1949}, which is a measure of heterogeneity of the estimates based on the different genetic variants. Both the IVW and LIML methods are part of a larger family of methods, known as the generalized method of moments (GMM) \citep{hansen1982}. We here derive a multivariable analogue of the LIML method that can be implemented using summarized genetic association data for correlated variants. We refer to this method as the multivariable limited information maximum likelihood (MV-LIML) method.
For the MV-LIML method, we additionally require knowledge of the $K \times K$ correlation matrix of the exposures, which we denote $\Phi$. In the simulation study, we set this matrix to be the identity.
We let $\mathbf{g}(\boldsymbol\theta)=\mathbf{\hat{\boldsymbol\beta}_{Y}}-\mathbf{\hat{\boldsymbol\beta}_{X}} \; \boldsymbol\theta$, where
$\boldsymbol\theta=(\theta_{1},\ldots,\theta_{K})^{T}$. Under the assumption that all $J$ variants are valid instruments, setting $\mathbf{g}(\boldsymbol\theta)=\boldsymbol0$ provides a set of $J$ estimating equations for the $K$ unknown parameters $\theta_k$. When $J>K$, if the genetic variants are linearly independent, it is generally not possible to find an estimator $\mathbf{\hat{\boldsymbol\theta}}$ that can set $\mathbf{g}(\hat{\boldsymbol\theta})=\boldsymbol0$. Thus, LIML-based methods take $K$ linear combinations of the $J$ estimating equations, where the weights in the linear combination are chosen to minimize the variance of the resulting estimator $\mathbf{\hat{\boldsymbol\theta}}$.
In particular, the MV-LIML estimator is given by
\begin{equation}
\mathbf{\hat{\boldsymbol\theta}_{MV-LIML}}=\arg\min_{\boldsymbol\theta}\hat{Q}(\boldsymbol\theta)
\end{equation}
where $Q(\boldsymbol\theta)=\mathbf{g}(\boldsymbol\theta)^{T} \; \Omega(\boldsymbol\theta)^{-1} \; \mathbf{g}(\boldsymbol\theta)$,
and $\Omega(\boldsymbol\theta)$ is a $J\times J$ matrix with its $(j_{1},j_{2})$th element given by
\begin{equation}
\Omega_{j_{1},j_{2}}(\theta)=\big(\text{se}(\hat{\beta}_{Y_{j_{1}}})\text{se}(\hat{\beta}_{Y_{j_{2}}})\rho_{j_{1},j_{2}}\big)+\sum_{k=1}^{K}\sum_{l=1}^{K}\sqrt{\text{se}(\hat{\beta}_{X_{j_{1}k}})\text{se}(\hat{\beta}_{X_{j_{2}k}})}\sqrt{\text{se}(\hat{\beta}_{X_{j_{1}l}})\text{se}(\hat{\beta}_{X_{j_{2}l}})}\rho_{j_{1},j_{2}}\Phi_{k,l}\theta_{k}\theta_{l}.
\end{equation}
For exposure $k$, the MV-LIML estimator of $\theta_{k}$ is given by the $k$th element of $\mathbf{\hat{\boldsymbol\theta}_{MV-LIML}}$,
and its standard error is given by $\sqrt{\hat{V}_{k,k}}$, where $\hat{V}_{k,k}$ is the $k$th diagonal element of the $K \times K$
matrix $\hat{V}=(\mathbf{\hat{\boldsymbol\beta}_{X}}^{T} \; \Omega(\mathbf{\hat{\boldsymbol\theta}_{MV-LIML}})^{-1} \; \mathbf{\hat{\boldsymbol\beta}_{X}})^{-1}$.
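A numerical sketch of the MV-LIML estimator (hypothetical inputs as before, plus \texttt{se\_bx}, the $(J, K)$ standard errors of the trait associations, and \texttt{Phi}; the starting value \texttt{theta0} could be the MV-IVW estimate):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def mv_liml(bx, by, se_by, se_bx, rho, Phi, theta0):
    Sigma = np.outer(se_by, se_by) * rho
    s = np.sqrt(se_bx)
    K = bx.shape[1]

    def omega(theta):             # J x J weighting matrix Omega(theta)
        Om = Sigma.copy()
        for k in range(K):
            for l in range(K):
                Om += (np.outer(s[:, k], s[:, k]) * np.outer(s[:, l], s[:, l])
                       * rho * Phi[k, l] * theta[k] * theta[l])
        return Om

    def Q(theta):                 # Anderson--Rubin-type objective
        g = by - bx @ theta
        return g @ np.linalg.solve(omega(theta), g)

    theta_hat = minimize(Q, theta0, method="Nelder-Mead").x
    V = np.linalg.inv(bx.T @ np.linalg.solve(omega(theta_hat), bx))
    return theta_hat, np.sqrt(np.diag(V))
\end{verbatim}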
Theoretical results suggest LIML provides robust estimation when using many weak instruments \citep{chao2005}. For univariable cis-Mendelian randomization analyses, simulation evidence has further highlighted the low bias properties of LIML-based estimators in finite samples \citep{patel2020}.
We consider a version of the LIML method using PCA to perform dimension reduction on the set of genetic variants by replacing:
\begin{align}
\mathbf{\tilde{g}(\boldsymbol\theta)} &= W_k^T \mathbf{g}(\boldsymbol\theta) \\
\tilde{\Omega}(\boldsymbol\theta) &= W_k^T \Omega(\boldsymbol\theta) W_k \notag \\
\tilde{Q}(\boldsymbol\theta) &= \mathbf{\tilde{g}}(\boldsymbol\theta)^{T} \; \tilde{\Omega}(\boldsymbol\theta)^{-1} \; \mathbf{\tilde{g}}(\boldsymbol\theta) \notag
\end{align}
and minimizing $\tilde{Q}(\boldsymbol\theta)$, with $W_k$ defined as previously. We refer to this method as MV-LIML-PCA. As per the MV-IVW-PCA method, we set the number of principal components to explain 99\% of the variance in the matrix $\Psi$.
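The PCA variant only requires projecting the moments and the weighting matrix before minimization; continuing the previous sketch (with \texttt{Wk} computed as in \texttt{mv\_ivw\_pca} and \texttt{omega} the function from the \texttt{mv\_liml} sketch):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def mv_liml_pca(bx, by, theta0, Wk, omega):
    # omega(theta) is the J x J matrix from the mv_liml sketch above
    def Q(theta):
        g = Wk.T @ (by - bx @ theta)         # transformed moments
        Om = Wk.T @ omega(theta) @ Wk        # transformed Omega
        return g @ np.linalg.solve(Om, g)
    return minimize(Q, theta0, method="Nelder-Mead").x
\end{verbatim}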
\subsection*{Simulation study}
We perform a simulation study to assess whether the proposed PCA methods are able to detect which out of a set of traits with shared clustered genetic predictors influences an outcome, and whether the causal effects of the traits can be estimated reliably.
We consider a scenario with three traits and 100 correlated genetic variants. We generate 10\thinspace000 simulated datasets according to the following data-generating model for 20\thinspace000 individuals indexed by $i$:
\begin{align}
A_{j_1, j_2} &\sim \mbox{Uniform}(-0.3, 1) \mbox{ for } j_1, j_2 = 1, \ldots, 100 \notag \\
B &= \cor(A \; A^T) \notag \\
\mathbf{G_i} &\sim \mathcal{N}_J(\boldsymbol0, B) \notag \\
\alpha_j &\sim \mathcal{N}(0.08, 0.01^2) \mbox{ for } j = 1, \ldots, 15 \notag \\
X_{i1} &= \sum_{j=1}^5 \alpha_j G_{ij} + U_{i1} + \epsilon_{Xi1} \notag \\
X_{i2} &= \sum_{j=6}^{10} \alpha_j G_{ij} + U_{i2} + \epsilon_{Xi2} \notag \\
X_{i3} &= \sum_{j=11}^{15} \alpha_j G_{ij} + U_{i3} + \epsilon_{Xi3} \notag \\
Y_{i} &= 0.4 X_{i1} - 0.6 X_{i3} + U_{i1} + U_{i2} + U_{i3} + \epsilon_{Yi} \notag \\
U_{i1}, U_{i2}, U_{i3}, \epsilon_{Xi1}, \epsilon_{Xi2}, \epsilon_{Xi3}, \epsilon_{Yi} &\sim \mathcal{N}(0, 1) \mbox{ independently} \notag
\end{align}
The genetic variants $G_j$ are simulated from a multivariable normal distribution with mean zero and variance-covariance matrix $B$. The traits $X_1$, $X_2$, and $X_3$ are simulated such that 5 variants influence the first trait, the next 5 influence the second trait, the next 5 influence the third trait, and the remaining 85 do not influence any trait. However, due to the moderately large correlations between the genetic variants (which typically range from around $-0.1$ to $+0.6$ with an interquartile range from around $+0.2$ to $+0.4$), typically each of the 100 variants is associated with all three traits at a genome-wide level of significance. The outcome $Y$ is influenced by traits $X_1$ and $X_3$, with the true effect of $X_1$ set at $+0.4$ and the true effect of $X_3$ set at $-0.6$. The associations between the traits and the outcome are affected by confounders $U_1$, $U_2$, and $U_3$.
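A compact sketch of one draw from this data-generating model (illustrative only; variable names are ours, and $\cor(A A^T)$ is read as normalizing $A A^T$ to unit diagonal):
\begin{verbatim}
import numpy as np

def simulate_dataset(n=20000, J=100, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.uniform(-0.3, 1.0, size=(J, J))
    M = A @ A.T                                # covariance-like matrix
    d = np.sqrt(np.diag(M))
    B = M / np.outer(d, d)                     # normalize to correlations
    G = rng.multivariate_normal(np.zeros(J), B, size=n)
    alpha = rng.normal(0.08, 0.01, size=15)
    U = rng.normal(size=(n, 3))                # confounders
    X = np.empty((n, 3))
    X[:, 0] = G[:, 0:5]   @ alpha[0:5]   + U[:, 0] + rng.normal(size=n)
    X[:, 1] = G[:, 5:10]  @ alpha[5:10]  + U[:, 1] + rng.normal(size=n)
    X[:, 2] = G[:, 10:15] @ alpha[10:15] + U[:, 2] + rng.normal(size=n)
    Y = 0.4 * X[:, 0] - 0.6 * X[:, 2] + U.sum(axis=1) + rng.normal(size=n)
    return G, X, Y
\end{verbatim}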
We estimate genetic associations with the exposures on the first 10\thinspace000 individuals, and genetic associations with the outcome on the subsequent 10\thinspace000 individuals. Correlations between the genetic variants are estimated on the first 10\thinspace000 individuals. This represents a two-sample scenario, where genetic associations with the exposures and outcome are obtained on non-overlapping samples \citep{pierce2013}. The mean instrument strength based on the 5 causal variants for each trait is $R^2 = 3.5\%$, corresponding to a mean univariable F statistic (on 5 and 19\thinspace994 degrees of freedom) for each trait of around 145, and a conditional F statistic (on 15 and 19\thinspace984 degrees of freedom) for each trait of around 22 \citep{sanderson2018}.
We compare four different methods: the MV-IVW and MV-LIML methods with various choices of genetic variants as inputs, and the MV-IVW-PCA and MV-LIML-PCA methods described above. For the MV-IVW and MV-LIML methods, we consider pruning the variants at thresholds of $|\rho| < 0.4$, $|\rho| < 0.6$, and $|\rho| < 0.8$ (equivalent to $r^2 < 0.16$, $r^2 < 0.36$, and $r^2 < 0.64$). We note that pruning at $|\rho| < 0.1$ would often result in 3 or fewer variants being available for analysis, which would not allow multivariable Mendelian randomization to be attempted, as the number of genetic variants needs to be greater than the number of traits. Pruning is performed by first selecting the variant with the lowest p-value for association with any of the traits, and then excluding from consideration all variants more strongly correlated with the selected variant than the threshold value. We then select the variant amongst those remaining with the lowest p-value, and exclude variants more correlated with that variant than the threshold value. We repeat until all variants have either been selected or excluded. We also consider an oracle choice of variants, in which only the 15 genetic variants that truly influence the traits are used as instruments.
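The greedy pruning procedure just described might be sketched as follows (hypothetical inputs: \texttt{pvals}, each variant's minimum association p-value across the traits, and \texttt{rho} as before):
\begin{verbatim}
import numpy as np

def greedy_prune(pvals, rho, threshold):
    remaining = list(np.argsort(pvals))   # most significant variant first
    selected = []
    while remaining:
        j = remaining.pop(0)              # pick the lowest remaining p-value
        selected.append(j)
        # drop variants correlated with j beyond the threshold
        remaining = [i for i in remaining if abs(rho[j, i]) < threshold]
    return selected
\end{verbatim}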
In addition to the main simulation study, we also consider the performance of methods with other parameter settings: 1) weaker instruments: we generate the $\alpha_{j}$ parameters from a normal distribution with mean 0.05 (corresponding to a mean instrument strength of $R^2 = 1.4\%$, mean univariable F = $57$, mean conditional F = $9.4$); 2) stronger correlations: we generate elements of the $A$ matrix from a uniform distribution on $+0.1$ to $+1.0$, resulting in correlations which typically range from around $+0.5$ to $+0.85$ with an interquartile range from around $+0.65$ to $+0.75$; 3) stronger causal effects, with $\theta_1 = +0.8$ and $\theta_3 = +1.0$; and 4) two alternative approaches for generating a correlation matrix taken from the R package \emph{clusterGeneration} \citep{joe2006}. For each scenario, 10\thinspace000 datasets were generated.
Further, we consider two variations to the main simulation study that are reflective of potential problems that may arise in applied practice. First, we estimate the variant correlation matrix based on an independent sample. This reflects the common occurrence that the correlation matrix is obtained from a reference sample rather than the dataset under analysis, and assesses robustness of the methods to variability in the correlation matrix. Secondly, we round the genetic associations and their standard errors to three decimal places. This reflects the common occurrence that genetic associations are obtained from a publicly-available source, and hence are not known to absolute precision. Again, this assesses robustness of the methods to variability in the data inputs.
\subsection*{Applied example: chemokine gene cluster and risk of stroke}
We illustrate our methods using data on genetic associations with three cytokines and stroke risk. Previous research has implicated monocyte chemoattractant protein-1 (MCP-1), which is also called chemokine (C-C motif) ligand 2 (CCL2), in the pathophysiology of stroke \citep{georgakis2019, georgakis2021basic, georgakis2021epi}. However, the \emph{CCL2} gene that encodes this cytokine is located in a cluster that also includes the genes \emph{CCL7} and \emph{CCL11}. Variants in this genetic region are associated with multiple cytokines other than MCP-1, including MCP-3 (also called CCL7) and eotaxin-1 (also called CCL11). Hence, it is not clear from univariable Mendelian randomization (that is, analyses with a single exposure trait) which of these proteins is driving stroke risk.
We conduct a multivariable cis-Mendelian randomization analysis to disentangle the effects of these cytokines. We take variants from the \emph{CCL2} gene region (GRCh38/hg38, chr17:34\thinspace255\thinspace218\thinspace-\thinspace34\thinspace257\thinspace203) plus 500 kilobasepairs either side, genetic associations with the cytokines from a re-analysis of data on three Finnish cohorts by Ahola-Olli \emph{et al} \citep{ahola2017} that did not adjust for body mass index \citep{kalaoja2021}, and genetic associations with all stroke and cardioembolic stroke from the MEGASTROKE consortium \citep{malik2018}. Cardioembolic stroke was chosen as genetic associations were stronger with this stroke subtype than for all stroke in a motivating Mendelian randomization analysis that included variants from throughout the genome \citep{georgakis2019}. Correlations between variants were estimated in 376\thinspace703 individuals of European ancestries from UK Biobank.
\section*{Results}
\subsection*{Simulation study}
Results from the simulation study are shown in Table~\ref{tab:main}. For each method, we display the mean estimate of each parameter, the standard deviation of estimates, the mean standard error, and the empirical power of the 95\% confidence interval, which represents the proportion of confidence intervals that exclude the null. For $\theta_2$, the empirical power is the Type 1 error rate, and should be close to 5\%.
While the MV-IVW and MV-LIML methods perform well under the oracle setting, power to detect a causal effect is substantially reduced when pruning variants at 0.4. Although increasing the pruning threshold to 0.6 increases power, it also results in Type 1 error inflation. For the MV-IVW method, Type 1 error rates increase to 9.1\%, and for the MV-LIML method, to 20.2\%. When increasing the pruning threshold to 0.8, estimates are completely unreliable, with mean estimates in the MV-IVW method for $\theta_1$ and $\theta_3$ having the wrong sign.
In contrast, the MV-IVW-PCA and MV-LIML-PCA methods perform well throughout, with Type 1 error rates similar to those of the oracle methods, and greater power to detect a causal effect than the methods that rely on pruning. For the MV-IVW-PCA method, the variability and mean standard errors of estimates are similar to those of the oracle methods, although estimates of $\theta_1$ and $\theta_3$ are attenuated. This is a result of weak instrument bias \citep{burgess2016overlap}. For the MV-LIML-PCA method, estimates are less attenuated, but slightly more variable with greater mean standard errors. Compared with the MV-IVW-PCA method, power to detect a causal effect is slightly increased, although the Type 1 error rate was slightly higher (8.7\% versus 7.1\%).
Similar findings were observed when considering weaker instruments (Supplementary Table~\ref{tab:weak}), stronger correlations (Supplementary Table~\ref{tab:strong}), stronger causal effects (Supplementary Table~\ref{tab:strongeff}), and alternative correlation matrices (Supplementary Tables~\ref{tab:vine} and \ref{tab:onion}). Although the power varied between simulation settings, in each case the PCA methods outperformed the pruning methods in terms of power and precision at a threshold of 0.4, whereas at a threshold of 0.6 the MV-IVW and MV-LIML methods had inflated Type 1 error rates. Type 1 error rates for the PCA methods were generally well controlled, although the Type 1 error for the MV-LIML-PCA method was slightly inflated with weaker instruments (6.7\% for MV-IVW-PCA, 13.9\% for MV-LIML-PCA), and the Type 1 error for the MV-IVW-PCA method was slightly inflated with stronger causal effects (12.8\% for MV-IVW-PCA, 8.2\% for MV-LIML-PCA).
We also considered two additional variations to the main simulation reflective of potential problems in applied practice. Table~\ref{tab:diffcorr} shows results in which the variant correlation matrix was obtained from an independent sample of 10\thinspace000 individuals. Results were similar, except that the Type 1 error rate for the MV-IVW method at a pruning threshold of 0.6 was slightly higher at 11.2\%. When obtaining the correlation matrix from an independent sample of 1000 individuals, Type 1 error rates for the MV-IVW method were higher still at 18.7\% (Supplementary Table~\ref{tab:diffcorr2}). Table~\ref{tab:round} shows results in which the genetic association estimates were rounded to 3 decimal places. Again, results were similar, except that the Type 1 error rate for the MV-IVW method at a pruning threshold of 0.6 was notably higher at 15.1\%. In contrast, results from the PCA methods were not sensitive to changes in the variant correlation matrix or rounding of the genetic association estimates.
\subsection*{Applied example: chemokine gene cluster and risk of stroke}
Genetic associations with each of the cytokines and all stroke were available for 2922 variants, and with cardioembolic stroke for 2904 variants. We compare results from the MV-IVW-PCA method to those from the MV-IVW method at a pruning threshold of $|\rho| < 0.1$ (equivalent to $r^2 < 0.01$), $|\rho| < 0.4$ (equivalent to $r^2 < 0.16$), and $|\rho| < 0.6$ (equivalent to $r^2 < 0.36$). In the MV-IVW-PCA method, we initially pruned at $|\rho| < 0.95$ to remove very highly correlated variants from the analysis. We also excluded variants not associated with any of the cytokines at $p<0.001$ from all analyses. Estimates for the three cytokines, which represent log odds ratios per 1 standard deviation increase in the cytokine, are provided in Table~\ref{tab:chemokine}.
For all stroke, at a pruning threshold of $0.1$, the MV-IVW method indicates that MCP-1 is the true causal risk factor. However, a researcher may be tempted to consider a less strict pruning threshold to obtain more precise estimates. But at a pruning threshold of $0.4$, the MV-IVW method indicates that eotaxin-1 is the true causal risk factor, and at a pruning threshold of $0.6$, the MV-IVW method again indicates that MCP-1 has the strongest evidence of being the true causal risk factor, but the causal estimate is in the opposite direction. In contrast, the MV-IVW-PCA method indicates that MCP-1 has the strongest evidence of being the true causal risk factor, similarly to the MV-IVW method at the most conservative pruning threshold. Compared with results at this threshold, estimates from the MV-IVW-PCA method have smaller standard errors, although the estimate for MCP-1 is slightly attenuated and so has a slightly higher p-value. At a pruning threshold of 0.4, the condition number for the variance-covariance matrix $\Sigma$ in the MV-IVW method is 1224, whereas the condition number for the transformed variance-covariance matrix $\tilde{\Sigma}$ in the MV-IVW-PCA method is 24.9; a larger number signals worse problems due to ill-conditioning. We would therefore trust results from the MV-IVW-PCA method and the MV-IVW method at a threshold of 0.1, which both suggest that the strongest evidence is for MCP-1 as the causal risk factor, and the effect is in the harmful direction. For cardioembolic stroke, estimates are more similar amongst the different implementations of the methods, with stronger evidence for MCP-1 as the causal risk factor at this locus, particularly from the MV-IVW-PCA method. These findings add to the existing body of basic science \citep{georgakis2021basic}, observational \citep{georgakis2021epi, georgakis2019epi}, and genetic evidence \citep{georgakis2019} implicating circulating MCP-1 in stroke risk.
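The condition numbers quoted above can be computed directly as the ratio of the largest to the smallest singular value of the relevant matrix. The following minimal Python sketch illustrates this; the matrix \texttt{Sigma} is hypothetical and for illustration only, not the $\Sigma$ or $\tilde{\Sigma}$ from our analysis.
\begin{verbatim}
import numpy as np

# Hypothetical variance-covariance matrix for three highly
# correlated genetic association estimates (illustrative only)
Sigma = np.array([[1.00, 0.95, 0.90],
                  [0.95, 1.00, 0.94],
                  [0.90, 0.94, 1.00]])

# Condition number: ratio of largest to smallest singular value.
# Large values indicate that inverting Sigma, as in the MV-IVW
# weighting, is numerically unstable.
print(np.linalg.cond(Sigma))
\end{verbatim}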
\section*{Discussion}
In this manuscript, we have introduced two methods for multivariable cis-Mendelian randomization, the MV-IVW-PCA and MV-LIML-PCA methods. Compared to existing methods that rely on pruning, these methods had superior performance: they outperformed pruning methods in terms of power to detect a causal effect, and they generally maintained close to nominal Type 1 error rates across a range of scenarios. They were also less sensitive than the pruning methods to variation in the variant correlation matrix, and to rounding of the genetic association estimates. We applied the MV-IVW-PCA method to disentangle the effects of three similar exposures with shared genetic predictors at a gene cluster; the method gave results that correspond to existing biological understanding of this pathway.
The approach of multivariable cis-Mendelian randomization has several potential applications. In our applied example, we considered proteins as exposures. Alternative potential applications could include expression of different genes as exposures, or expression of the same gene in different tissues. However, results from the latter case may be difficult to interpret if the genetic predictors of gene expression do not vary between tissues, or if data on variants affecting gene expression in all relevant tissues are not available. Another possible area of application is if there are different aspects of an exposure trait that could be considered as independent risk factors, such as concentration and isoform size of lipoprotein(a) \citep{saleheen2017}. To obtain estimates for the different exposures, it is not necessary to have genetic predictors that are uniquely associated with each exposure trait, but it is necessary to have some variants that associate more or less strongly with the different traits \citep{sanderson2018}.
An alternative approach for disentangling clusters of correlated variants associated with multiple traits is colocalization. Colocalization is a method that attempts to distinguish between two scenarios: a scenario in which two traits (which we here conceptualize as an exposure and an outcome) are influenced by distinct genetic variants, but there is overlap in the genetic associations due to correlation between the variants; and an alternative scenario in which the two traits are influenced by the same genetic variant \citep{wallace2012}. In the latter scenario, it is likely that the two traits are causally related, although the colocalization method is agnostic as to whether the exposure trait influences the outcome trait, the outcome influences the exposure, or the two are influenced by a common causal factor \citep{solovieff2013}. There are several conceptual differences between colocalization and cis-Mendelian randomization, although there are also similarities. Two specific advantages of the proposed cis-Mendelian randomization method are that it allows for the existence of multiple causal variants, in contrast to some colocalization methods \citep{giambartolomei2014, foley2019}, and it allows for the existence of multiple causal traits. Another feature is that it provides causal estimates, although the value of causal estimates in Mendelian randomization beyond indicating the direction of effect is disputed \citep{vanderweele2014}.
Although the PCA methods can be implemented with highly correlated variants, we would recommend a minimal level of pruning (say, $r^2<0.9$) before applying the methods in practice, as variants that are very highly correlated do not contribute independent information to the analysis, but can pose computational challenges. Additionally, we have assumed that the traits under analysis are causally independent. If a trait has a causal effect on the outcome that is fully mediated by one of the other traits in the analysis, then the estimate for that trait would be zero. Hence, the method identifies the proximal causal risk factors for the outcome \citep{grant2020mvmr}.
If there are large numbers of traits, the MV-IVW-PCA method could be combined with a Bayesian variable selection method that compares models with different sets of traits on the assumption of a sparse risk factor set \citep{zuber2018}.
There are several limitations to these methods, which are shared by other methods for Mendelian randomization using summarized data \citep{bowden2017, burgess2015scoretj}. Uncertainty in genetic associations with the exposure traits is not accounted for in the analysis. However, this is typically small compared with uncertainty in the genetic associations with the outcome, as variants selected for inclusion in the analysis are typically associated with at least one of the traits at a robust level of statistical significance. The effects of the exposure traits on the outcome are assumed to be linear. This is usually a reasonable assumption, given the small influence of genetic variants on traits, meaning that estimates reflect average causal effects for a small shift in the overall distribution of a trait \citep{burgess2014nonlin}. In our main simulation, all the exposure and outcome traits are continuous. For binary traits, the method can be implemented using genetic association estimates obtained from logistic regression. We have previously shown that multivariable Mendelian randomization methods are still able to make correct inferences in this setting \citep{burgess2014pleioaje, grant2020mvmr}, although the interpretation of estimates is obscured due to non-collapsibility \citep{burgess2012noncollapse}. Finally, estimates are subject to weak instrument bias. Whereas in univariable Mendelian randomization, weak instrument bias in a two-sample setting is towards the null \citep{pierce2012}, in multivariable Mendelian randomization, weak instruments can bias estimates in any direction \citep{zuber2018}. This is because weak instrument bias is analogous to measurement error, as the genetically-predicted values of the exposures are estimated with uncertainty, which in a multivariable regression model can lead to arbitrary bias \citep{thouless1939, phillips1991}. Hence it is important to balance the inclusion of several genetic variants in the analysis to achieve identification, with the inclusion of only variants strongly associated with the exposures to avoid weak instruments. The optimal balance will depend on the specifics of the analysis (such as the sample size available), but researchers should consider the conditional strength of instruments (via conditional F statistics) as well as the more conventional univariable F statistics \citep{sanderson2018}. Performance with weak instruments was mixed; mean estimates from the MV-LIML-PCA method were generally less affected than those from the MV-IVW-PCA method, although both methods had slightly elevated Type 1 error rates in one of the scenarios considered. We therefore recommend that both methods be applied when the instruments are weak, and caution be exercised if the methods give divergent results. Overall, we slightly prefer the MV-IVW-PCA method, as the Type 1 error rates from this method were generally slightly lower.
In summary, multivariable cis-Mendelian randomization can be used to disentangle the causal relationships of traits, such as proteins or gene expression measurements, that are influenced by a cluster of correlated genetic variants. The proposed PCA methods provide a compromise between loss of precision resulting from over-pruning and numerical instability resulting from under-pruning, to allow valid statistical tests that identify the causal traits influencing the outcome.
\section*{Data availability statement}
The summary statistics for genetic associations with the three cytokines that support the findings of this study are available through \citet{ahola2017}. The summary statistics for genetic associations with any stroke and with cardioembolic stroke are available from the MEGASTROKE consortium \citep{malik2018}; these data were derived from the public domain at [\url{www.megastroke.org}]. The variant correlation matrix was obtained from UK Biobank \citep{biobank2014uk} at [\url{www.ukbiobank.ac.uk}].
\bibliographystyle{apalike}
\section{Introduction}\label{sec1}
Orthogonal arrays (OAs) are widely used for designing experiments. One of
the most important criteria for assessing the usefulness of an array is the
generalized word length pattern (GWLP) as proposed by \mbox{\citet{XuWu01}}:
$A_{3}, A_{4}, \ldots$ are the numbers of (generalized) words of lengths
$3, 4, \ldots,$ and the design has resolution $R$, if $A_{i} = 0$ for all $i
< R$ and $A_{R} > 0$. Analogously to the well-known minimum aberration
criterion for regular fractional factorial designs [\citet{FriHun80}],
the quality criterion based on the GWLP is generalized minimum aberration
[GMA; \citet{XuWu01}]: a design $D_{1}$ has better generalized aberration
than a design $D_{2}$, if its resolution is higher or---if both designs
have resolution $R$---if its number $A_{R}$ of shortest words is smaller;
in case of ties in $A_{R}$, frequencies of successively longer words are
compared, until a difference is encountered.
The definition of the $A_{i}$ in Xu and Wu is very technical (see Section~\ref{sec2}). One of the key results of this paper is to provide a statistical
meaning for the number of shortest words, $A_{R}$: we will show that
$A_{R}$ is the sum of $R^{2}$ values from linear models with main effects
model matrix columns in orthogonal coding as dependent variables and full
models in $R -1$ other factors on the explanatory side. For arbitrary
factor coding, the ``sum of $R^{2}$'' interpretation cannot be upheld, but
it can be shown that $A_{R}$ is the sum of squared canonical correlations
[\citet{Hot36}] between a factor's main effects model matrix columns in
arbitrary coding and the full model matrix from $R -1$ other factors. These
results will be derived in Section~\ref{sec2}.
For regular fractional factorial 2-level designs, the GWLP coincides with
the well-known word length pattern (WLP). An important difference between
regular and nonregular designs is that factorial effects in regular
fractional factorial designs are either completely aliased or not aliased
at all, while nonregular designs can have partial aliasing, which can lead
to noninteger entries in the GWLP. In fact, the absence of complete
aliasing has been considered an advantage of nonregular designs [e.g.,
those by Plackett and Burman (\citeyear{PlaBur46})] for screening applications. \citet{DenTan99} and \citet{TanDen99} defined ``generalized resolution''
($\mathrm{GR}$) for nonregular designs with 2-level factors, in order to capture
their advantage over complete confounding in a number. For example, the 12
run Plackett--Burman design has $\mathrm{GR}=3.67$, which indicates that it is
resolution III, but does not have any triples of factors with complete
aliasing. \citet{Evaetal05} have made a useful proposal for
generalizing $\mathrm{GR}$ (called \textit{GRes} by them) for designs in
quantitative factors at 3 levels; in conjunction with \citet{CheYe04},
their proposal can easily be generalized to cover designs with quantitative
factors in general. However, there is so far no convincing proposal for
designs with qualitative factors. The second goal of this paper is to close
this gap, that is, to generalize Deng and Tang's/Tang and Deng's $\mathrm{GR}$ to
OAs for qualitative factors. Any reasonable generalization of $\mathrm{GR}$ has to
fulfill the following requirements: (i)~it must be coding-invariant, that
is, must not depend on the coding chosen for the experimental factors (this
is a key difference vs. designs for quantitative factors), (ii) it must be
applicable for symmetric and asymmetric designs (i.e., designs with a fixed
number of levels and designs with mixed numbers of levels), (iii) like in
the 2-level case, $R+1 > \mathrm{GR} \geq R$ must hold, and $\mathrm{GR} = R$ must be
equivalent to the presence of complete aliasing somewhere in the design,
implying that $R+1 > \mathrm{GR} > R$ indicates a resolution $R$ design with no
complete aliasing among projections of $R$ factors. We offer two proposals
that fulfill all these requirements and provide a rationale behind each of
them, based on the relation of the GWLP to regression relations and
canonical correlations among the columns of the model matrix.
The paper is organized as follows: Section~\ref{sec2} formally introduces
the GWLP and provides a statistical meaning to its number of shortest
words, as discussed above. Section~\ref{sec3} briefly introduces generalized
resolution by Deng and Tang (\citeyear{DenTan99})
and Tang and Deng (\citeyear{TanDen99})
and generalizes it in two meaningful ways. Section~\ref{sec4} shows weak
strength $R$ [in a version modified from \citet{Xu03} to imply strength
$R- 1$] to be sufficient for maximizing one of the generalized
resolutions in a resolution $R$ design. Furthermore, it derives an explicit
upper bound for the proposed generalized resolutions for two classes of
symmetric designs. Section~\ref{sec5} derives factor wise versions of both types of
generalized resolution and demonstrates that these provide useful
additional detail to the overall values. The paper closes with a discussion
and an outlook on future work.
Throughout the paper, we will use the following notation: An orthogonal
array of resolution $R = \mathrm{strength}\ R -1$ in $N$ runs with $n$ factors will
be denoted as $\operatorname{OA}(N, s_{1}, \ldots, s_{n}, R -1)$, with $s_{1}, \ldots,
s_{n}$ the numbers of levels of the $n$ factors (possibly but not
necessarily distinct), or as $\operatorname{OA}(N, s_{1}^{n_1},\ldots, s_{k}^{n_k}$, $R
-1$) with $n_{1}$ factors at $s_{1}$ levels, $\ldots, n_{k}$ factors at
$s_{k}$ levels ($s_{1}, \ldots, s_{k}$ possibly but not necessarily
distinct), whichever is more suitable for the purpose at hand. A subset of
$k$ indices that identifies a $k$-factor projection is denoted by
$\{u_{1},\ldots,u_{k} \} (\subseteq \{1,\ldots,n\})$. The unsquared letter
$R$ always refers to the resolution of a design, while $R^{2}$ denotes the
coefficient of determination.
\section{Projection frequencies and linear models}\label{sec2}
Consider an $\operatorname{OA}(N, s_{1}, \ldots,\break s_{n}, R -1)$. The resolution $R$
implies that main effects can be confounded with interactions among $R -1$
factors, where the extent of confounding of degree $R$ can be investigated
on a global scale or in more detail: Following \citet{XuWu01}, the
factors are coded in orthogonal contrasts with squared column length
normalized to $N$. We will use the expression ``normalized orthogonal
coding'' to refer to this coding; on the contrary, the expressions
``orthogonal coding'' or ``orthogonal contrast coding'' refer to main
effects model matrix columns that have mean zero and are pairwise
orthogonal, but need not be normalized. For later reference, note that for
orthogonal coding (whether normalized or not) the main effects model matrix
columns for an OA (of strength at least 2) are always uncorrelated.
We write the model matrix for the full model in normalized orthogonal
coding as
\begin{equation}
\mathbf{M} = (\mathbf{M}_{0}, \mathbf{M}_{1}, \ldots,
\mathbf{M}_{n}), \label{eq1}
\end{equation}
where $\mathbf{M}_{0}$ is a column of ``$+1$''s, $\mathbf{M}_{1}$ contains
all main effects model matrices, and $\mathbf{M}_{k}$ is the matrix of all
$ { n \choose
k}$ $k$-factor interaction model matrices, $k = 2,\ldots,n$.
The portion $\mathbf{X}_{u_1,\ldots, u_k}$ of $\mathbf{M}_{k} =
(\mathbf{X}_{1,\ldots, k},\ldots,\mathbf{X}_{n - k+1,\ldots, n})$ denotes the
model matrix for the particular $k$-factor interaction indexed by
$\{u_{1},\ldots,u_{k}\}$ and is obtained by all products from one main
effects contrast column each from the $k$ factors in the interaction. Note
that the normalized orthogonal coding of the main effects implies that all
columns of $\mathbf{M}_{k}$ have squared length $N$ for $k \leq R -1$. Now,
on the global scale, the overall number of words of length $k$ can be
obtained as the sum of squared column averages of $\mathbf{M}_{k}$, that
is, $A_{k} = \mathbf{1}_{N}^{\mathrm{T}} \mathbf{M}_{k}
\mathbf{M}_{k}^{\mathrm{T}} \mathbf{1}_{N} / N^{2}$. Obviously, this sum
can be split into contributions from individual $k$-factor projections for
more detailed considerations, that is,
\begin{equation}
A_{k} = \mathop{\sum_{\{ u_{1},\ldots,u_{k} \} }}_{\subseteq \{ 1,\ldots,n \}} \mathbf{1}_{N}^{\mathrm{T}}
\mathbf{X}_{u_{1},\ldots,u_{k}}\mathbf{X}_{u_{1},\ldots, u_{k}}^{\mathrm{T}}
\mathbf{1}_{N} / N^{2} =:\mathop{\sum_{ \{ u_{1},\ldots,u_{k} \}}}_{
\subseteq \{ 1,\ldots,n \}}
a_{k} ( u_{1},\ldots,u_{k} ),\label{eq2}
\end{equation}
where $a_{k}(u_{1},\ldots,u_{k})$ is simply the $A_{k}$ value of the
$k$-factor projection $\{u_{1},\ldots,\break u_{k}\}$. The summands
$a_{k}(u_{1},\ldots,u_{k})$ are called ``projection frequencies.''
\begin{example}\label{ex1}
For 3-level factors, normalized polynomial coding
has the linear contrast coefficients $- \sqrt{3 / 2}, 0, \sqrt{3 / 2}$ and
the quadratic contrast coefficients $\sqrt{1 / 2}, - \sqrt{2}, \sqrt{1
/ 2}$. For the regular design $\operatorname{OA}(9, 3^{3}, 2)$ with the defining relation
$\mathrm{C}=\mathrm{A}+\mathrm{B}$ (mod 3), the model matrix $\mathbf{M}$
has dimensions $9\times27$, including one column for $\mathbf{M}_{0}$, six for
$\mathbf{M}_{1}$, twelve for $\mathbf{M}_{2}$ and eight for
$\mathbf{M}_{3}$. Like always, the column sum of $\mathbf{M}_{0}$ is $N$
(here: 9), and like for any orthogonal array, the column sums of
$\mathbf{M}_{1}$ and $\mathbf{M}_{2}$ are 0, which implies $A_{0} =1$,
$A_{1} =A_{2} =0$. We now take a closer look at $\mathbf{M}_{3}$, arranging
factor A as (0 0 0 1 1 1 2 2 2), factor B as (0 1 2 0 1 2 0 1 2) and factor
C as their sum (mod 3), denoting linear contrast columns by the subscript
$l$ and quadratic contrast columns by the subscript $q$. Then
{\fontsize{8}{10}\selectfont{
\[
\hspace*{-6pt}\mathbf{M}_{3} = \pmatrix{ \mathrm{contrast} & \mathrm{A}_{
l}
\mathrm{B}_{ l}\mathrm{C}_{ l} & \mathrm{A}_{ q}
\mathrm{B}_{
l}\mathrm{C}_{ l} & \mathrm{A}_{ l}
\mathrm{B}_{ q}\mathrm{C}_{ l} & \mathrm{A}_{ q}
\mathrm{B}_{ q}\mathrm{C}_{ l} & \mathrm{A}_{ l}
\mathrm{B}_{
l}\mathrm{C}_{ q} & \mathrm{A}_{ q}
\mathrm{B}_{ l}\mathrm{C}_{ q} & \mathrm{A}_{ l}
\mathrm{B}_{ q}\mathrm{C}_{ q} & \mathrm{A}_{ q}
\mathrm{B}_{
q}\mathrm{C}_{ q} \vspace*{2pt}
\cr
\hline & -
\sqrt{\frac{27}{8}} & \sqrt{\frac{9}{8}} & \sqrt{\frac{9}{8}} & -
\sqrt{\frac{3}{8}} & \sqrt{\frac{9}{8}} & - \sqrt{\frac{3}{8}}
& - \sqrt{\frac{3}{8}} & \sqrt{\frac{1}{8}} \vspace*{2pt}
\cr
& 0 & 0
& 0 & 0 & 0 & 0 & - \sqrt{6} & \sqrt{2} \vspace*{2pt}
\cr
& - \sqrt{
\frac{27}{8}} & \sqrt{\frac{9}{8}} & - \sqrt{\frac{9}{8}} &
\sqrt{\frac{3}{8}} & - \sqrt{\frac{9}{8}} & \sqrt{\frac{3}{8}} &
- \sqrt{\frac{3}{8}} & \sqrt{\frac{1}{8}} \vspace*{2pt}
\cr
& 0 & 0 &
0 & 0 & 0 & - \sqrt{6} & 0 & \sqrt{2} \vspace*{2pt}
\cr
& 0 & 0 & 0 & \sqrt{6} & 0
& 0 & 0 & \sqrt{2} \vspace*{2pt}
\cr
& 0 & \sqrt{\frac{9}{2}} & 0 & \sqrt{
\frac{3}{2}} & 0 & - \sqrt{\frac{3}{2}} & 0 & - \sqrt{
\frac{1}{2}} \vspace*{2pt}
\cr
& - \sqrt{\frac{27}{8}} & - \sqrt{
\frac{9}{8}} & \sqrt{\frac{9}{8}} & \sqrt{\frac{3}{8}} & -
\sqrt{\frac{9}{8}} & - \sqrt{\frac{3}{8}} & \sqrt{\frac{3}{8}} &
\sqrt{\frac{1}{8}} \vspace*{2pt}
\cr
& 0 & 0 & \sqrt{\frac{9}{2}} &
\sqrt{\frac{3}{2}} & 0 & 0 & - \sqrt{\frac{3}{2}} & - \sqrt{
\frac{1}{2}} \vspace*{2pt}
\cr
& 0 & 0 & 0 & 0 & - \sqrt{\frac{9}{2}} &
- \sqrt{\frac{3}{2}} & - \sqrt{\frac{3}{2}} & - \sqrt{
\frac{1}{2}} \vspace*{2pt}
\cr
\hline
\mathrm{column} & -
\sqrt{\frac{243}{8}} & \sqrt{\frac{81}{8}} & \sqrt{\frac{81}{8}} &
\sqrt{\frac{243}{8}} & - \sqrt{\frac{81}{8}} & - \sqrt{\frac{243}{8}}
& - \sqrt{\frac{243}{8}} & \sqrt{\frac{81}{8}}
\vspace*{2pt}\cr
\mathrm{sum}&&&&&&&& }.
\]}}
\hspace*{-3pt}Four of the squared column sums of $\mathbf{M}_{3}$ equal $243/8$, and the other four equal $81/8$. This implies
$A_{3} = a_{3} (1,2,3) = (4\cdot 243/8+4\cdot 81/8)/81 = 2$.
\end{example}
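As a numerical check of this calculation (an illustration, not part of the derivation), the following Python sketch rebuilds $\mathbf{M}_{3}$ for this design from the contrasts given above and evaluates equation (\ref{eq2}):
\begin{verbatim}
import numpy as np

# OA(9, 3^3, 2) with defining relation C = A + B (mod 3)
A = np.repeat([0, 1, 2], 3)          # 0 0 0 1 1 1 2 2 2
B = np.tile([0, 1, 2], 3)            # 0 1 2 0 1 2 0 1 2
C = (A + B) % 3

# normalized orthogonal polynomial contrasts for 3 levels
lin = np.array([-np.sqrt(3 / 2), 0.0, np.sqrt(3 / 2)])
qua = np.array([np.sqrt(1 / 2), -np.sqrt(2), np.sqrt(1 / 2)])

# the 8 three-factor interaction columns of M_3
cols = [a * b * c
        for a in (lin[A], qua[A])
        for b in (lin[B], qua[B])
        for c in (lin[C], qua[C])]
M3 = np.column_stack(cols)           # 9 x 8

# A_3 = (sum of squared column sums of M_3) / N^2
print((M3.sum(axis=0) ** 2).sum() / 9 ** 2)   # 2.0
\end{verbatim}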
\begin{example}\label{ex2}
Table~\ref{tab1} displays the only $\operatorname{OA}(18, 2^{1}3^{2}, 2)$
that cannot be obtained as a projection from the L18 design that was
popularized by Taguchi (of course, this triple is not interesting as a
stand-alone design, but as a projection from a design in more factors only;
the Taguchi L18 is for convenience displayed in Table~\ref{tab4} below). The 3-level
factors are coded as in Example~\ref{ex1}; for the 2-level factor, normalized
orthogonal coding is the customary $-1/+1$ coding. Now, the model matrix
$\mathbf{M}$ has dimensions $18\times18$, including one column for
$\mathbf{M}_{0}$, five for $\mathbf{M}_{1}$, eight for $\mathbf{M}_{2}$ and
four for $\mathbf{M}_{3}$. Again, $A_{0} =1$, $A_{1} =A_{2} =0$. The
squared column sums of $\mathbf{M}_{3}$ are 9 ($1\times$), 27 ($2\times$) and 81 ($1\times$),
respectively. Thus, $A_{3} = a_{3} (1,2,3) = (9+2\cdot 27+81)/324 = 4/9$.
\end{example}
The projection frequencies $a_{k}(u_{1},\ldots,u_{k})$ from equation (\ref{eq2}) are
the building blocks for the overall $A_{k}$. The $a_{R}(u_{1},\ldots,u_{R})$
will be instrumental in defining one version of generalized resolution.
Theorem~\ref{th1} provides them with an intuitive interpretation. The proof is
given in the \hyperref[app]{Appendix}.
\begin{table}
\caption{A partially confounded $\operatorname{OA}(18, 2^{1}3^{2}, 2)$
(transposed)}\label{tab1}
\begin{tabular*}{305pt}{@{\extracolsep{\fill}}lcccccccccccccccccc@{}}
\hline
A &0 &1 &1 &0& 0 &1 &0& 1 &0& 1& 1& 0& 1& 0& 0& 1& 0& 1\\
B& 0& 0 &0& 0 &0& 0& 1 &1 &1 &1 &1& 1 &2 &2& 2& 2 &2 &2\\
C& 0& 1& 2& 1& 2& 0 &0 &2& 0& 1& 1& 2& 2& 1& 2& 0&1 &0\\
\hline
\end{tabular*} \vspace*{-3pt}
\end{table}
\begin{theorem}\label{th1}
In an $\operatorname{OA}(N, s_{1}, \ldots, s_{n}, R -1)$, denote
by $\mathbf{X}_{c}$ the model matrix for the main effects of a particular
factor $c \in \{u_{1},\ldots,u_{R} \} \subseteq \{1,\ldots,n\}$ in normalized
orthogonal coding, and let $\mathrm{C} = \{u_{1},\ldots,u_{R}\} \setminus
\{c\}$. Then $a_{R}(u_{1},\ldots,\break u_{R})$ is the sum of the $R^{2}$-values
from the $s_{c} -1$ regression models that explain the columns of
$\mathbf{X}_{c}$ by a full model in the factors from $\mathrm{C}$.
\end{theorem}
\begin{remark}\label{re1}
(i) Theorem~\ref{th1} holds regardless which factor is
singled out for the left-hand side of the model. (ii) The proof simplifies
by restriction to normalized orthogonal coding, but the result holds
whenever the factor $c$ is coded by any set of orthogonal contrasts,
whether normalized or not. (iii) Individual $R^{2}$ values are coding
dependent, but the sum is not. (iv) In case of normalized orthogonal coding
for all factors, the full model in the factors from $\mathrm{C}$ can be reduced to the
$R -1$ factor interaction only, since the matrix $\mathbf{X}_{c}$ is
orthogonal to the model matrices for all lower degree effects in the other
$R -1$ factors.
\end{remark}
\setcounter{example}{0}
\begin{example}[(Continued)]\label{ex1co} The overall $a_{3} (1,2,3) = 2$ is the sum
of two $R^{2}$ values which are 1, regardless of which factor is singled out
as the main effects factor for the left-hand sides of regression. This
reflects that the level of each factor is uniquely determined by the level
combination of the other two factors.
\end{example}
\begin{example}[(Continued)]\label{ex2co} The $R^{2}$ from regressing the single
model matrix column of the 2-level factor on the four model matrix columns
for the interaction among the two 3-level factors is $4/9$. Alternatively,
the $R^{2}$-values for the regression of the two main effects columns for
factor B on the AC interaction columns are $1/9$ and $3/9$, respectively, which
also yields the sum $4/9$ obtained above for $a_{3}(1,2,3)$. For factor B in
dummy coding with reference level 0 instead of normalized polynomical
coding, the two main effects model matrix columns for factor B have
correlation 0.5; the sum of the $R^{2}$ values from full models in A and C
for explaining these two columns is $1/3 + 1/3 = 2/3 \neq a_{3} (1,2,3) =
4/9$. This demonstrates that Theorem~\ref{th1} is not applicable if orthogonal
coding [see Remark~\ref{re1}(ii)] is violated.
\end{example}
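The regressions discussed in this example are easily reproduced. The sketch below (a minimal Python illustration, assuming the design of Table~\ref{tab1} and the codings introduced above) regresses the single main effects column of the 2-level factor A on the full model matrix in B and C, recovering the sum of $R^{2}$ values $a_{3}(1,2,3)=4/9$ of Theorem~\ref{th1}:
\begin{verbatim}
import numpy as np

# OA(18, 2^1 3^2, 2) of Table 1 (rows A, B, C)
A = np.array([0,1,1,0,0,1,0,1,0,1,1,0,1,0,0,1,0,1])
B = np.array([0,0,0,0,0,0,1,1,1,1,1,1,2,2,2,2,2,2])
C = np.array([0,1,2,1,2,0,0,2,0,1,1,2,2,1,2,0,1,0])

lin = np.array([-np.sqrt(3 / 2), 0.0, np.sqrt(3 / 2)])
qua = np.array([np.sqrt(1 / 2), -np.sqrt(2), np.sqrt(1 / 2)])

y = np.where(A == 0, -1.0, 1.0)   # factor A, orthogonal coding
mB = np.column_stack([lin[B], qua[B]])
mC = np.column_stack([lin[C], qua[C]])
inter = np.column_stack([mB[:, i] * mC[:, j]
                         for i in range(2) for j in range(2)])
F = np.column_stack([np.ones(18), mB, mC, inter])  # full model in B, C

beta, *_ = np.linalg.lstsq(F, y, rcond=None)
R2 = 1 - ((y - F @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(R2)    # 0.4444... = 4/9 = a_3(1,2,3)
\end{verbatim}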
\begin{corollary}\label{co1} In an $\operatorname{OA}(N, s_{1}, \ldots,s_{n}, R -1)$, let
$\{u_{1},\ldots,u_{R} \} \subseteq \{1,\ldots,\break n\}$, with $s_{\mathrm{min}}=
\mathrm{min}_{i=1,\ldots,R}(s_{u_i})$.
\begin{longlist}[(iii)]
\item[(i)] A factor $c \in \{u_{1},\ldots,u_{R}\}$ in $s_{c}$ levels is
completely confounded by the factors in
$\mathrm{C} = \{u_{1},\ldots,u_{R}\} \setminus \{ c\}$, if and only if
$a_{R}(u_{1},\ldots,u_{R}) = s_{c} -1$.
\item[(ii)] $a_{R}(u_{1},\ldots,u_{R}) \leq s_{\mathrm{min}} - 1$.
\item[(iii)] If several factors in $\{u_{1},\ldots,u_{R}\}$ have
$s_{\mathrm{min}}$ levels, either all of them are or none of them is completely
confounded by the respective other $R -1$ factors in
$\{u_{1},\ldots,u_{R}\}$.
\item[(iv)] A factor with more than $s_{\mathrm{min}}$ levels cannot be
completely confounded by the other factors in $\{u_{1},\ldots,u_{R}\}$.
\end{longlist}
\end{corollary}
Part (i) of Corollary~\ref{co1} follows easily from Theorem~\ref{th1}, as
$a_{R}(u_{1},\ldots,u_{R}) = s_{c} -1$ if and only if all $R^{2}$ values for
columns of the factor $c$ main effects model matrix are 100\%, that is, the
factor $c$ main effects model matrix is completely explained by the factors
in C. Part (ii) follows, because the sum of $R^{2}$ values is of course
bounded by the minimum number of regressions conducted for any single
factor $c$, which is $s_{\mathrm{min}}- 1$. Parts (iii) and (iv) follow
directly from parts (i) and (ii). For symmetric $s$-level designs, part
(ii) of the corollary has already been proven by \citet{XuCheWu04}.
\begin{table}
\caption{An $\operatorname{OA}(8, 4^{1}2^{2}, 2)$ (transposed)}\label{tab2}
\begin{tabular*}{150pt}{@{\extracolsep{\fill}}lcccccccc@{}}
\hline
A &0 &0 &0& 0 &1 &1 &1 &1
\\
B &0 &0& 1& 1 &0& 0 &1 &1
\\
C &0 &2& 1& 3& 3& 1& 2 &0
\\
\hline
\end{tabular*}
\end{table}
\begin{example}\label{ex3}
For the design of Table~\ref{tab2}, $s_{\mathrm{min}} = 2$, and
$a_{3} (1,2,3) = 1$, that is, both 2-level factors are completely
confounded, while the 4-level factor is only partially confounded. The
individual $R^{2}$ values for the separate degrees of freedom of the
4-level factor main effect model matrix depend on the coding (e.g., 0.2, 0
and 0.8 for the linear, quadratic and cubic contrasts in normalized
orthogonal polynomial coding), while their sum is 1, regardless of the
chosen orthogonal coding.
\end{example}
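The coding dependence of the individual $R^{2}$ values, together with the invariance of their sum [Remark~\ref{re1}(iii)], can be illustrated numerically. The following Python sketch regresses each orthogonally coded main effects column of the 4-level factor C on the full model in A and B, once with polynomial and once with Helmert contrasts (an illustration only; the contrast coefficients are unnormalized, which does not affect the individual $R^{2}$ values):
\begin{verbatim}
import numpy as np

# OA(8, 4^1 2^2, 2) of Table 2: A, B two-level; C four-level
A = np.array([0, 0, 0, 0, 1, 1, 1, 1])
B = np.array([0, 0, 1, 1, 0, 0, 1, 1])
C = np.array([0, 2, 1, 3, 3, 1, 2, 0])

a = np.where(A == 0, -1.0, 1.0)
b = np.where(B == 0, -1.0, 1.0)
F = np.column_stack([np.ones(8), a, b, a * b])  # full model in A, B

def r2(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()

# two orthogonal contrast codings for the 4-level factor C
poly = np.array([[-3, -1, 1, 3], [1, -1, -1, 1], [-1, 3, -3, 1]], float)
helm = np.array([[-1, 1, 0, 0], [-1, -1, 2, 0], [-1, -1, -1, 3]], float)

for name, contr in (("polynomial", poly), ("Helmert", helm)):
    vals = [r2(row[C], F) for row in contr]
    print(name, np.round(vals, 3), "sum:", round(sum(vals), 3))
# polynomial [0.2 0. 0.8] sum: 1.0; Helmert [0.5 0.167 0.333] sum: 1.0
\end{verbatim}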
\begin{theorem}\label{th2}
In an $\operatorname{OA}(N, s_{1}, \ldots, s_{n}, R -1)$, let
$\{u_{1},\ldots,u_{R} \} \subseteq \{1,\ldots,n\}$ with $s_{\mathrm{min}} =
\mathrm{min}_{i=1,\ldots,R}(s_{u_i})$. Let $c \in \{u_{1},\ldots,u_{R}\}$ with $s_{c}
= s_{\mathrm{min}}$, $\mathrm{C} = \{u_{1},\ldots,\break u_{R}\} \setminus \{c\}$.
Under normalized orthogonal coding denote by $\mathbf{X}_{c}$ the main
effects model matrix for factor $c$ and by $\mathbf{X}_{\mathrm{C}}$ the $R
-1$ factor interaction model matrix for the factors in $\mathrm{C}$.
If $a_{R}(u_{1},\ldots,u_{R}) = s_{\mathrm{min}} - 1$, $\mathbf{X}_{\mathrm{C}}$
can be orthogonally transformed (rotation and/or switching) such that
$s_{\mathrm{min}} -1$ of its columns are collinear to the columns of~$\mathbf{X}_{c}$.
\end{theorem}
\begin{pf}
$a_{R}(u_{1},\ldots,u_{R}) = s_{\mathrm{min}} - 1$ implies that
all $s_{\mathrm{min}} - 1$ regressions of the columns of $\mathbf{X}_{c}$ on
the columns of $\mathbf{X}_{\mathrm{C}}$ have $R^{2} =1$. Then, each of the
$s_{\mathrm{min}} - 1$ $\mathbf{X}_{c}$ columns can be perfectly matched by a
linear combination $\mathbf{X}_{\mathrm{C}} \mathbf{b}$ of the
$\mathbf{X}_{\mathrm{C}}$ columns; since all columns have the same length,
this linear transformation involves rotation and/or switching only. If
necessary, these $s_{\mathrm{min}} - 1$ orthogonal linear combinations can be
supplemented by further length-preserving orthogonal linear combinations so
that the dimension of $\mathbf{X}_{\mathrm{C}}$ remains intact.
\end{pf}
Theorems \ref{th1} and \ref{th2} are related to canonical correlation analysis, and the
redundancy index discussed in that context [\citet{SteLov68}]. In
order to make the following comments digestible, a brief definition of
canonical correlation analysis is included without going into any technical
detail about the method; details can, for example, be found in H\"{a}rdle and
Simar [(\citeyear{HarSim03}), Chapter~14]. It will be helpful to think of the columns of the
main effects model matrix of factor $c$ as the $Y$ variables and the
columns of the full model matrix in the $R -1$ other factors from the set $\mathrm{C}$
(excluding the constant column of ones for the intercept) as the $X$
variables of the following definition and explanation. As it would be
unnatural to consider the model matrices from experimental designs as
random variables, we directly define canonical correlation analysis in
terms of data matrices $\mathbf{X}$ and $\mathbf{Y}$ ($N$ rows each) and
empirical covariance matrices $\mathbf{S}_{xx} =
\mathbf{X}^{*\mathrm{T}} \mathbf{X}^{*} /(N - 1)$, $\mathbf{S}_{yy} =
\mathbf{Y}^{*\mathrm{T}} \mathbf{Y}^{*} /(N - 1)$, $\mathbf{S}_{xy}=
\mathbf{X}^{*\mathrm{T}} \mathbf{Y}^{*} /(N - 1)$ and $\mathbf{S}_{yx}
= \mathbf{Y}^{*\mathrm{T}} \mathbf{X}^{*} /(N - 1)$, where the superscript
$^{\ast}$ denotes columnwise centering of a matrix. We do not attempt a minimal
definition, but prioritize suitability for our purpose. Note that our
$\mathbf{S}_{xx}$ and $\mathbf{S}_{yy}$ are nonsingular matrices,
since the designs we consider have strength $R -1$; the covariance matrix
$(\mathbf{X}^{*} \mathbf{Y}^{*})^{\mathrm{T}}(\mathbf{X}^{*}
\mathbf{Y}^{*})/(N - 1)$ of the combined set of variables may, however, be
singular, which does not pose a problem to canonical correlation analysis,
even though some accounts request this matrix to be nonsingular.
\begin{definition}\label{de1}
Consider a set of $p X$-variables and $q
Y$-variables. Let the $N\times p$ matrix $\mathbf{X}$ and the
$N\times q$ matrix $\mathbf{Y}$ denote the data matrices of $N$
observations, and $\mathbf{S}_{xx}$, $\mathbf{S}_{yy}$,
$\mathbf{S}_{xy}$ and $\mathbf{S}_{yx}$ the empirical covariance
matrices obtained from them, with positive definite $\mathbf{S}_{xx}$
and $\mathbf{S}_{yy}$.
\begin{enumerate}[(ii)]
\item[(i)] Canonical correlation analysis creates $k = \mathrm{min}(p, q)$ pairs
of linear combination vectors $\mathbf{u}_{i} =\mathbf{Xa}_{i}$ and
$\mathbf{v}_{i} =\mathbf{Yb}_{i}$ with $p\times 1$ coefficient vectors
$\mathbf{a}_{i}$ and $q\times 1$ coefficient vectors $\mathbf{b}_{i}$, $i =
1,\ldots,k$, such that:
\begin{enumerate}[(a)]
\item[(a)] the $\mathbf{u}_{1},\ldots,\mathbf{u}_{k}$ are uncorrelated to
each other,
\item[(b)] the $\mathbf{v}_{1},\ldots,\mathbf{v}_{k}$ are uncorrelated to
each other,\vadjust{\goodbreak}
\item[(c)] the pair ($\mathbf{u}_{1}$, $\mathbf{v}_{1}$) has the maximum
possible correlation for any pair of linear combinations of the
$\mathbf{X}$ and $\mathbf{Y}$ columns, respectively,
\item[(d)] the pairs ($\mathbf{u}_{i}$, $\mathbf{v}_{i}$), $i=2,\ldots,k$
successively maximize the remaining correlation, given the constraints of
(a) and (b).
\end{enumerate}
\item[(ii)] The correlations $r_{i} = \operatorname{cor}(\mathbf{u}_{i},
\mathbf{v}_{i})$ are called ``canonical correlations,''
and the $\mathbf{u}_{i}$ and $\mathbf{v}_{i}$ are called ``canonical
variates.''
\end{enumerate}
\end{definition}
\begin{remark}\label{re2}
(i) If the matrices $\mathbf{X}$
and $\mathbf{Y}$ are centered, that is,
$\mathbf{X}=\mathbf{X}^{*}$ and
$\mathbf{Y}=\mathbf{Y}^{*}$, the
$\mathbf{u}$ and $\mathbf{v}$
vectors also have zero means, and the uncorrelatedness in~(a) and
(b) is equivalent to orthogonality of the vectors. (ii) It is well known
that the squared canonical correlations are the eigenvalues of the matrices
$\mathbf{Q}_{1}=\mathbf{S}_{xx}^{-1}\mathbf{S}_{xy}\mathbf{S}_{yy}^{-1}\mathbf{S}_{yx}$
and $\mathbf{Q}_{2}=\mathbf{S}_{yy}^{-1}\mathbf{S}_{yx}\mathbf{S}_{xx}^{-1}\mathbf{S}_{xy}$
[the first $\mathrm{min}({p},
{q})$ eigenvalues of both matrices are the
same; the larger matrix has the appropriate number of additional zeroes]
and the $\mathbf{a}_{i}$
are the corresponding eigenvectors of
$\mathbf{Q}_{1}$, the
$\mathbf{b}_{i}$ the
corresponding eigenvectors of
$\mathbf{Q}_{2}$.
\end{remark}
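Remark~\ref{re2}(ii) translates directly into a computation. The following Python sketch (a minimal illustration; the helper \texttt{squared\_cancors} is ours, not from the cited literature) computes the squared canonical correlations for the design of Table~\ref{tab1} as eigenvalues of $\mathbf{Q}_{1}$, with factor B, dummy coded, in the role of $Y$:
\begin{verbatim}
import numpy as np

# OA(18, 2^1 3^2, 2) of Table 1
A = np.array([0,1,1,0,0,1,0,1,0,1,1,0,1,0,0,1,0,1])
B = np.array([0,0,0,0,0,0,1,1,1,1,1,1,2,2,2,2,2,2])
C = np.array([0,1,2,1,2,0,0,2,0,1,1,2,2,1,2,0,1,0])

def squared_cancors(X, Y):
    # squared canonical correlations = eigenvalues of
    # Sxx^{-1} Sxy Syy^{-1} Syx; the (N - 1) factors cancel
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    Sxy = Xc.T @ Yc
    Q1 = np.linalg.solve(Xc.T @ Xc, Sxy) @ np.linalg.solve(Yc.T @ Yc, Sxy.T)
    ev = np.sort(np.linalg.eigvals(Q1).real)[::-1]
    return ev[:min(X.shape[1], Y.shape[1])]

# factor B in the role of Y, dummy coded with reference level 0
Y = np.column_stack([B == 1, B == 2]).astype(float)
a = np.where(A == 0, -1.0, 1.0)
mC = np.column_stack([(C == 2).astype(float) - (C == 0),  # -1, 0, 1
                      1.0 - 3.0 * (C == 1)])              #  1, -2, 1
X = np.column_stack([a, mC, a[:, None] * mC])   # full model in A and C
print(np.round(squared_cancors(X, Y), 4))  # [0.4444 0.], i.e. 2/3 and 0
\end{verbatim}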
According to the definition, the canonical correlations are nonnegative.
It can also be shown that $\mathbf{u}_{i}$ and $\mathbf{v}_{j}$, $i \neq
j$, are uncorrelated, and orthogonal in case of centered data matrices;
thus, the pairs ($\mathbf{u}_{i}$, $\mathbf{v}_{i}$) decompose the relation
between $\mathbf{X}$ and $\mathbf{Y}$ into uncorrelated components, much
like the principal components decompose the total variance into
uncorrelated components. In data analysis, canonical correlation analysis
is often used for dimension reduction. Here, we retain the full
dimensionality. For uncorrelated $Y$ variables like the model matrix
columns of $\mathbf{X}_{c}$ in Theorem~\ref{th1}, it is straightforward to see that
the sum of the $R^{2}$ values from regressing each of the $Y$ variables on
all the $X$ variables coincides with the sum of the squared canonical
correlations. It is well known that the canonical correlations are
invariant to arbitrary nonsingular affine transformations applied to the
$X$- and $Y$-variables, which translate into nonsingular linear
transformations applied to the centered $\mathbf{X}$- and
$\mathbf{Y}$-matrices [cf., e.g., H\"{a}rdle and Simar (\citeyear{HarSim03}),
Theorem~14.3].
For our application, this implies invariance of the canonical correlations
to factor coding. Unfortunately, this invariance property does not hold for
the $R^{2}$ values or their sum: according to Lazraq and Cl\'{e}roux
[(\citeyear{LazCle01}), Section~2] the aforementioned redundancy index---which is the average
$R^{2}$ value calculated as $a_{R}(u_{1},\ldots,u_{R})/(s_{c} -1)$ in the
situation of Theorem~\ref{th1}---is invariant to linear transformations of the
centered $\mathbf{X}$ matrix, but only to \textit{orthonormal}
transformations of the centered $\mathbf{Y}$ matrix or scalar multiples
thereof. For correlated $Y$-variables, the redundancy index contains some
overlap between variables, as was already seen for Example \ref{ex2}, where the sum
of the $R^{2}$ values from dummy coding exceeded $a_{3}(1,2,3)$; in that
case, only the average or sum of the squared canonical correlations yields
an adequate measure of the overall explanatory power of the $X$-variables
on the $Y$-variables. Hence, for the case of arbitrary coding, Theorem~\ref{th1}
has to be restated in terms of squared canonical correlations.
\begin{theorem}\label{th3}
In an $\operatorname{OA}(N, s_{1},\ldots, s_{n}, R -1)$, denote
by $\mathbf{X}_{c}$ the model matrix for the main effects of a particular
factor\vadjust{\goodbreak} $c \in \{u_{1},\ldots,u_{R}\}$ in arbitrary coding, and let
$\mathrm{C} = \{u_{1},\ldots,u_{R}\}\setminus \{c\}$. Then
$a_{R}(u_{1},\ldots,u_{R})$ is the sum of the squared canonical correlations
from a canonical correlation analysis of the columns of $\mathbf{X}_{c}$
and the columns of the full model matrix $\mathbf{F}_{\mathrm{C}}$ in the
factors from $\mathrm{C}$.
\end{theorem}
\setcounter{example}{0}
\begin{example}[(Continued)]
$s_{\mathrm{min}}=3$, $a_{3} (1,2,3) = 2$,
that is, the assumptions of Theorems \ref{th2} and \ref{th3} are fulfilled. Both canonical
correlations must be 1, because the sum must be 2. The transformation of
$\mathbf{X}_{\mathrm{C}}$ from Theorem~\ref{th2} can be obtained from the canonical
correlation analysis: For all factors in the role of $Y$, $\mathbf{v}_{i}
\propto \mathbf{y}_{i}$ (with $\mathbf{y}_{i}$ denoting the $i$th column
of the main effects model matrix of the $Y$-variables factor) can be used.
For the first or second factor in the role of $Y$, the corresponding
canonical vectors on the $X$ side fulfill
\begin{eqnarray}
\mathbf{u}_{1}& \propto& \mathrm{B}_{q}\mathrm{C}_{l} -
\mathrm{B}_{l}\mathrm{C}_{q} - \sqrt{3}\mathrm{B}_{l}\mathrm{C}_{l} -
\sqrt{3}\mathrm{B}_{q}\mathrm{C}_{q},\nonumber\\
\mathbf{u}_{2} &\propto& \sqrt{3}\mathrm{B}_{l}\mathrm{C}_{q} -
\sqrt{3}\mathrm{B}_{q}\mathrm{C}_{l} - \mathrm{B}_{l}\mathrm{C}_{l} -
\mathrm{B}_{q}\mathrm{C}_{q}\nonumber\\
\eqntext{\mbox{(or B replaced by A for the second factor in
the role of $Y$),}}
\end{eqnarray}
with the indices $l$ and $q$ denoting the
\textit{normalized} linear and quadratic coding introduced above.
For the third factor in the role of $Y$,
\begin{eqnarray*}
\mathbf{u}_{1}& \propto& - \sqrt{3}\mathrm{A}_{l}\mathrm{B}_{l} +
\mathrm{A}_{q}\mathrm{B}_{l} + \mathrm{A}_{l}\mathrm{B}_{q} +
\sqrt{3}\mathrm{A}_{q}\mathrm{B}_{q},\\
\mathbf{u}_{2}& \propto& -A_{l}\mathrm{B}_{l} -
\sqrt{3}\mathrm{A}_{l}\mathrm{B}_{q} - \sqrt{3}\mathrm{A}_{q}\mathrm{B}_{l}
+ \mathrm{A}_{q}\mathrm{B}_{q}.
\end{eqnarray*}
\end{example}
\setcounter{example}{0}
\begin{example}[(Now with dummy coding)]
When using the design of
Example~\ref{ex1} for an experiment with qualitative factors, dummy coding is much
more usual than orthogonal contrast\vadjust{\goodbreak} coding. This example shows how Theorem~\ref{th3} can be applied for arbitrary nonorthogonal coding: $\mathrm{A}_{1}$ is 1
for $\mathrm{A}=1$ and 0 otherwise, $\mathrm{A}_{2}$ is 1 for
$\mathrm{A}=2$ and 0 otherwise, B and C are coded analogously; interaction
matrix columns are obtained as products of the respective main effects
columns. The main effect and two-factor interaction model matrix columns in
this coding do not have column means zero and have to be centered first by
subtracting $1/3$ or $1/9$, respectively. As canonical correlations are
invariant to affine transformations, dummy coding leads to the same
canonical correlations as the previous normalized orthogonal polynomial
coding. We consider the first factor in the role of $Y$; the centered model
matrix columns $\mathbf{y}_{1} = \mathrm{A}_{1} -1/3$ and $\mathbf{y}_{2} =
\mathrm{A}_{2} -1/3$ are correlated, so that we must not choose both
canonical variates for the $Y$ side proportional\vspace*{1pt} to the original variates.
One instance of the canonical variates for the $Y$ side is $\mathbf{v}_{1}
= - \mathbf{y}_{1} / \sqrt{2}, \mathbf{v}_{2} = ( \mathbf{y}_{1} +
2\mathbf{y}_{2} ) / \sqrt{6}$; these canonical vectors are unique up
to rotation only, because the two canonical correlations have the same
size. The corresponding canonical vectors on the $X$ side are obtained from
the centered full model matrix
\begin{eqnarray*}
&&\mathbf{F}_{\mathrm{C}} = \bigl( \bigl( \mathrm{B}_{1} -
\tfrac{1}{3} \bigr), \bigl( \mathrm{B}_{2} - \tfrac{1}{3}
\bigr), \bigl( \mathrm{C}_{1} - \tfrac{1}{3} \bigr), \bigl(
\mathrm{C}_{2} - \tfrac{1}{3} \bigr), \bigl(
\mathrm{B}_{1}\mathrm{C}_{1} - \tfrac{1}{9} \bigr),
\bigl( \mathrm{B}_{2}\mathrm{C}_{1} - \tfrac{1}{9}
\bigr), \\
&&\hspace*{212pt}{}\bigl( \mathrm{B}_{1}\mathrm{C}_{2} -
\tfrac{1}{9} \bigr), \bigl( \mathrm{B}_{2}\mathrm{C}_{2}
- \tfrac{1}{9} \bigr) \bigr)\vadjust{\goodbreak}
\end{eqnarray*}
as $\mathbf{u}_{1} = ( - \mathbf{f}_{2} - \mathbf{f}_{3} +
\mathbf{f}_{5} + 2\mathbf{f}_{6} - \mathbf{f}_{7} + \mathbf{f}_{8} )
/ \sqrt{2}$ and $\mathbf{u}_{2} = ( 2\mathbf{f}_{1} + \mathbf{f}_{2} +
\mathbf{f}_{3} + 2\mathbf{f}_{4} - 3\mathbf{f}_{5} - 3\mathbf{f}_{7} -
3\mathbf{f}_{8} ) / \sqrt{6}$, with $\mathbf{f}_{j}$ denoting the
$j$th column of $\mathbf{F}_{\mathrm{C}}$.
Note that the canonical vectors $\mathbf{u}_{1}$ and $\mathbf{u}_{2}$ now
contain contributions not only from the interaction part of the model
matrix but also from the main effects part, that is, we do indeed need the
full model matrix as stated in Theorem~\ref{th3}.
\end{example}
\begin{example}[(Continued)]
$s_{\mathrm{min}} = 2$, $a_{3} (1,2,3) =
4/9$, that is, the assumption of Theorem~\ref{th2} is not fulfilled, the assumption
of Theorem~\ref{th3} is. The canonical correlation using the one column main
effects model matrix of the 2-level factor A in the role of $Y$ is $2/3$, the
canonical correlations using the main effects model matrix for the 3-level
factor B in the role of $Y$ are $2/3$ and 0; in both cases, the sum of the
squared canonical correlations is $a_{3} (1,2,3) = 4/9$. For any other
coding, for example, the dummy coding for factor B considered earlier, the
canonical correlations remain unchanged ($2/3$ and 0, resp.), since
they are coding invariant; thus, the sum of the squared canonical
correlations remains $4/9$, even though the sum of the $R^{2}$ values was
found to be different. Of course, the linear combination coefficients for
obtaining the canonical variates depend on the coding [see, e.g.,
H\"{a}rdle and Simar (\citeyear{HarSim03}), Theorem~14.3].
\end{example}
\begin{table}
\caption{Main effects matrix of factor A regressed on full model in factors
B and C for the 10 nonisomorphic GMA $\operatorname{OA}(32, 4^{3}, 2)$}
\label{tab3}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccccccc@{}}
\hline
\multicolumn{3}{c}{$\bolds{R^{2}}$ \textbf{values from}} &
\multicolumn{3}{c}{$\bolds{R^{2}}$ \textbf{values from}} &
\multicolumn{3}{c}{\textbf{Squared canonical}} & &
\\
\multicolumn{3}{c}{\textbf{polynomial coding}} &
\multicolumn{3}{c}{\textbf{Helmert coding}} &
\multicolumn{3}{c}{\textbf{correlations}} & &
\\[-6pt]
\multicolumn{3}{@{}l}{\hrulefill} &
\multicolumn{3}{c}{\hrulefill} &
\multicolumn{3}{c}{\hrulefill} & &
\\
\textbf{{L}} & \textbf{{Q}} & \textbf{{C}} &
\textbf{1} & \textbf{2} & \textbf{3} & \textbf{1} & \textbf{2} &
\textbf{3} & $\bolds{A_{3}}$ & \multicolumn{1}{c@{}}{\textbf{Designs}}\\
\hline
0.8 & 0\phantom{00.} & 0.2\phantom{00} & 0 & $2/3$ & $1/3$ & 1\phantom{000.} & 0\phantom{000.} & 0\phantom{00.} & 1 & 1\\
0.65 & 0\phantom{00.} & 0.35\phantom{0} & $1/8$ & $13/24$ & $1/3$ & 0.75\phantom{0} & 0.25\phantom{0} & 0\phantom{00.} & 1 & 2\\
0.5 & 0\phantom{00.} & 0.5\phantom{00} & $1/4$ & $5/12$ & $1/3$ & 0.5\phantom{00} & 0.5\phantom{00} & 0\phantom{00.} & 1 & 3, 6, 8, 10\\
0.45 & 0.25 & 0.3\phantom{00} & $1/4$ & $5/12$ & $1/3$ & 0.5\phantom{00} & 0.25\phantom{0} & 0.25 & 1 & 4, 5, 7\\
0.375 & 0.25 & 0.375 & $5/16$ & $17/48$ & $1/3$ & 0.375 & 0.375 & 0.25 & 1 & 9\\
\hline
\end{tabular*}
\end{table}
Canonical correlation analysis can also be used to verify that a result
analogous to Theorem~\ref{th2} cannot be generalized to sets of $R$ factors for
which $a_{R}(u_{1},\ldots,u_{R}) < s_{\mathrm{min}} - 1$. For this, note that
the number of nonzero\vadjust{\goodbreak} canonical correlations indicates the dimension of
the relationship between the $X$- and the $Y$-variables.
Table~\ref{tab3} displays the $R^{2}$ values from two different orthogonal codings
and the squared canonical correlations from the main effects matrix of the
first factor ($Y$-variables) vs. the full model matrix of the other two
factors \mbox{($X$-variables)} for the ten nonisomorphic GMA $\operatorname{OA}(32,4^{3},2)$
obtained from \citet{EenSch}. These designs have one
generalized word of length 3, that is, they are nonregular. There are cases
with one, two and three nonzero canonical correlations, that is, neither
is it generally possible to collapse the linear dependence into a
one-dimensional structure nor does the linear dependence generally involve
more than one dimension.
\section{Generalized resolution}\label{sec3}
Before presenting the new proposals for generalized resolution, we briefly
review generalized resolution for symmetric 2-level designs by \citet{DenTan99} and \citet{TanDen99}. For 2-level factors, each effect has
a single degree of freedom (df) only, that is, all the $\mathbf{X}$'s in
any $\mathbf{M}_{k}$ [cf. equation (\ref{eq1})] are one-column matrices. \citet{DenTan99} looked at the absolute sums of the columns of $\mathbf{M}$,
which were termed $J$-characteristics by \citet{TanDen99}.
Specifically, for a resolution $R$ design, these authors introduced $\mathrm{GR}$
as
\begin{equation}
\mathrm{GR} = R + 1 - \frac{\max J_{R}}{N},\label{eq3}
\end{equation}
where $J_{R} = |\mathbf{1}_{N}^{\mathrm{T}} \mathbf{M}_{R}|$ is the row
vector of the $J$-characteristics $|\mathbf{1}_{N}^{\mathrm{T}}
\mathbf{X}_{u_1,\ldots,u_R}|$ obtained from the ${n \choose R}$ $R$-factor interaction model columns
$\mathbf{X}_{u_1,\ldots, u_R}$. For 2-level designs, it is straightforward to
verify the following identities:
\begin{eqnarray}\label{eq4}
\mathrm{GR} &=& R + 1 - \sqrt{\max_{(u_{1},\ldots,u_{R})}a_{R}(u_{1},
\ldots,u_{R})}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
& = & R + 1 - \max_{(u_{1},\ldots,u_{R})}\bigl\llvert \rho
(X_{u_{1}},X_{u_{2},\ldots, u_{R}}) \bigr\rrvert,
\end{eqnarray}
where $\rho$ denotes the correlation; note that the correlation in (\ref{eq4}) does
not depend on which of the $u_{i}$ takes the role of $u_{1}$. Deng
and Tang [(\citeyear{DenTan99}), Proposition~2] proved a very convincing projection interpretation
of their $\mathrm{GR}$. Unfortunately, Proposition~4.4 of \citet{DieBed02}, in which a particular $\operatorname{OA}(18, 3^{3}, 2)$ is proven to be
indecomposable into two $\operatorname{OA}(9, 3^{3}, 2)$, implies that Deng and Tang's
result cannot be generalized to more than two levels.
The quantitative approach by Evangelaras et al. [(\citeyear{Evaetal05}), their equation (4)]
generalized the correlation version of (\ref{eq4}) by applying it to single df
contrasts for the quantitative factors. For the qualitative factors
considered here, any approach based on direct usage of single df contrasts
is not acceptable because it is coding dependent. The approach for
qualitative factors taken by Evangelaras et al. is unreasonable, as will be
demonstrated in Example~\ref{ex5}. \citet{PanLiu10} also proposed a generalized
resolution based on complex contrasts. For designs with more than 3 levels,
permuting levels for one or more factors will lead to different generalized
resolutions according to their definition, which is unacceptable for
qualitative factors. For 2-level designs, their approach boils down to
omitting the square root from
$\sqrt{\max_{(u_{1},\ldots,u_{R})}a_{R}(u_{1},\ldots,u_{R})}$ in (\ref{eq4}), which
implies that their proposal does not simplify to the well-grounded
generalized resolution of \citet{DenTan99}/\citet{TanDen99} for
2-level designs. This in itself makes their approach unconvincing. Example~\ref{ex5} will compare their approach to ours for 3-level designs. The results from
the previous section can be used to create two adequate generalizations of
$\mathrm{GR}$ for qualitative factors. These are introduced in the following two
definitions.
For the first definition, an $R$ factor projection is considered as
completely aliased, whenever all the levels of at least one of the factors
are completely determined by the level combination of the other $R -1$
factors. Thus, generalized resolution should be equal to $R$, if and only
if there is at least one $R$ factor projection with
$a_{R}(u_{1},\ldots,u_{R}) = s_{\mathrm{min}}-1$. The $\mathrm{GR}$ defined in
Definition~\ref{de2} guarantees this behavior and fulfills all requirements stated
in the \hyperref[sec1]{Introduction}:
\begin{definition}\label{de2}
For an $\operatorname{OA}(N, s_{1}, \ldots, s_{n}, R -1)$,
\[
\mathrm{GR} = R + 1 - \sqrt{\mathop{\max_{\{ u_{1},\ldots,u_{R}\} }}_{
\subseteq \{ 1,\ldots,n\}} \frac{a_{R} ( u_{1},\ldots,u_{R}
)}{\mathop{\mathrm{min}}\limits_{i = 1,\ldots,R}s_{u_{i}} - 1}}.
\]
In words, $\mathrm{GR}$ increases the resolution by one minus the square root of the
worst case average $R^{2}$ obtained from any $R$ factor projection, when
regressing the main effects columns in orthogonal coding from a factor with
the minimum number of levels on the other factors in the projection. It is
straightforward to see that (\ref{eq4}) is a special case of the definition, since
the denominator is 1 for 2-level designs. Regarding the requirements stated
in the \hyperref[sec1]{Introduction}, (i) $\mathrm{GR}$ from Definition~\ref{de2} is coding invariant because
the $a_{R}(\cdot)$ are coding invariant according to \citet{XuWu01}. (ii) The
technique is obviously applicable for symmetric and asymmetric designs
alike, and (iii) $\mathrm{GR} < R + 1$ follows from the resolution, $\mathrm{GR} \geq R$
follows from part (ii) of Corollary~\ref{co1}, $\mathrm{GR} = R$ is equivalent to complete
confounding in at least one $R$-factor projection according to part (i) of
Corollary~\ref{co1}.
\end{definition}
\setcounter{example}{3}
\begin{example}\label{ex4}
The $\mathrm{GR}$ values for the designs from Examples \ref{ex1} and
\ref{ex3} are $3 (\mathrm{GR}=R)$, the $\mathrm{GR}$ value for the design from Example~\ref{ex2} is $3 + 1 -
\sqrt{4 / 9} = 3.33$, and the $\mathrm{GR}$ values for all designs from Table~\ref{tab3} are
$3 + 1 - \sqrt{1 / 3} = 3.42$.
\end{example}
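The value $3.33$ for the design of Table~\ref{tab1} can be reproduced in a few lines; since that design has $n=R=3$ factors, $a_{3}(1,2,3)$ is its only projection frequency, and Definition~\ref{de2} reduces to a single evaluation. The following Python sketch (an illustration only, using the normalized codings from Section~\ref{sec2}) computes $a_{3}$ via equation (\ref{eq2}) and then $\mathrm{GR}$:
\begin{verbatim}
import numpy as np

# OA(18, 2^1 3^2, 2) of Table 1; GR = R + 1 - sqrt(a_3/(s_min - 1))
A = np.array([0,1,1,0,0,1,0,1,0,1,1,0,1,0,0,1,0,1])
B = np.array([0,0,0,0,0,0,1,1,1,1,1,1,2,2,2,2,2,2])
C = np.array([0,1,2,1,2,0,0,2,0,1,1,2,2,1,2,0,1,0])
N, R, s_min = 18, 3, 2

lin = np.array([-np.sqrt(3 / 2), 0.0, np.sqrt(3 / 2)])
qua = np.array([np.sqrt(1 / 2), -np.sqrt(2), np.sqrt(1 / 2)])
a = np.where(A == 0, -1.0, 1.0)   # normalized coding throughout

# M_3 for the projection {A, B, C}: 1 x 2 x 2 = 4 columns
M3 = np.column_stack([a * b * c
                      for b in (lin[B], qua[B])
                      for c in (lin[C], qua[C])])
a3 = (M3.sum(axis=0) ** 2).sum() / N ** 2   # 4/9

print(R + 1 - np.sqrt(a3 / (s_min - 1)))    # 3.333...
\end{verbatim}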
Now, complete aliasing is considered regarding individual degrees of
freedom (df). A coding invariant individual df approach considers a
factor's main effect as completely aliased in an $R$ factor projection,
whenever there is at least one pair of canonical variates with correlation
one. A projection is considered completely aliased, if at least one
factor's main effect is completely aliased in this individual df sense.
Note that it is now possible that factors with the same number of levels
can show different extents of individual df aliasing within the same
projection, as will be seen in Example~\ref{ex5} below.
\begin{definition}\label{de3}
For an $\operatorname{OA}(N, s_{1}, \ldots, s_{n}, R -1)$ and
tuples $(c, \mathrm{C})$ with $\mathrm{C} = \{u_{1},\ldots,u_{R}\} \setminus
\{c\}$,
\[
\mathrm{GR}_{\mathrm{ind}} = R + 1 - \max_{\{ u_{1},\ldots,u_{R}\} \subseteq
\{ 1,\ldots,n\}} \max
_{c \in \{ u_{1},\ldots,u_{R}\}} r_{1} ( \mathbf{X}_{c};
\mathbf{F}_{\mathrm{C}} )
\]
with $r_{1}(\mathbf{X}_{c}; \mathbf{F}_{\mathrm{C}})$ the largest canonical
correlation between the main effects model matrix for factor $c$ and the
full model matrix of the factors in $\mathrm{C}$.
\end{definition}
In words, $\mathrm{GR}_{\mathrm{ind}}$ is the worst case confounding for an individual
main effects df in the design that can be obtained by the worst case coding
(which corresponds to the $\mathbf{v}_{1}$ vector associated with the worst
canonical correlation). Obviously, $\mathrm{GR}_{\mathrm{ind}}$ is thus a stricter
criterion than $\mathrm{GR}$. Formally, Theorem~\ref{th3} implies that $\mathrm{GR}$ from Definition~\ref{de2} can be written as
\begin{equation}
\mathrm{GR} = R + 1 - \sqrt{\mathop{\max_{( u_{1},\ldots,u_{R} )\dvtx
}}_{\{ u_{1},\ldots,u_{R}\} \subseteq \{ 1,\ldots,n\}} \frac{\sum_{j = 1}^{s_{u_{1}} - 1} r_{j} (
\mathbf{X}_{u_{1}};\mathbf{F}_{ \{ u_{2},\ldots,u_{R} \}}
)^{2}} {\mathrm{min}_{i}s_{u_{i}} - 1}}.
\label{eq5}
\end{equation}
Note that maximization in (\ref{eq5}) is over tuples, so that it is ensured that
the factor with the minimum number of levels does also get into the first
position. Comparing~(\ref{eq5}) with Definition~\ref{de3}, $\mathrm{GR}_{\mathrm{ind}}\leq \mathrm{GR}$ is
obvious, because $r_{1}^{2}$ cannot be smaller than the average over all
$r_{i}^{2}$ (but can be equal, if all canonical correlations have the same
size). This is stated in a theorem.
\begin{theorem}\label{th4}
For $\mathrm{GR}$ from Definition~\ref{de2} and $\mathrm{GR}_{\mathrm{ind}}$ from
Definition~\ref{de3}, $\mathrm{GR}_{\mathrm{ind}} \leq \mathrm{GR}$.
\end{theorem}
\begin{remark}\label{re3}
(i) Under normalized orthogonal coding, the full
model matrix $\mathbf{F}_{\mathrm{C}}$ in Definition~\ref{de3} can again be
replaced by the $R -1$ factor interaction matrix~$\mathbf{X}_{\mathrm{C}}$.
(ii) Definition~\ref{de3} involves calculation of
$R {n \choose
R}$ canonical correlations ($R$ correlations for each $R$
factor projection). In any projection with at least one 2-level factor, it
is sufficient to calculate one single canonical correlation obtained with
an arbitrary 2-level factor in the role of $Y$, because this is necessarily
the worst case. Nevertheless, calculation of $\mathrm{GR}_{\mathrm{ind}}$ carries some
computational burden for designs with many factors.
\begin{table}[b]
\tabcolsep=0pt
\caption{The Taguchi L18 (transposed)}
\label{tab4}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccccccccccccccc@{}}
\hline
&\multicolumn{18}{c@{}}{\textbf{Row}}\\[-6pt]
&\multicolumn{18}{c@{}}{\hrulefill}\\
\textbf{Column}&
\textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} &
\textbf{7} & \textbf{8} & \textbf{9} & \textbf{10} & \textbf{11} &
\textbf{12} & \textbf{13} & \textbf{14} & \textbf{15} & \textbf{16} &
\textbf{17} & \textbf{18}\\
\hline
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
2 & 0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 2 & 0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 2\\
3 & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2\\
4 & 0 & 1 & 2 & 0 & 1 & 2 & 1 & 2 & 0 & 2 & 0 & 1 & 1 & 2 & 0 & 2 & 0 & 1\\
5 & 0 & 1 & 2 & 1 & 2 & 0 & 0 & 1 & 2 & 2 & 0 & 1 & 2 & 0 & 1 & 1 & 2 & 0\\
6 & 0 & 1 & 2 & 1 & 2 & 0 & 2 & 0 & 1 & 1 & 2 & 0 & 0 & 1 & 2 & 2 & 0 & 1\\
7 & 0 & 1 & 2 & 2 & 0 & 1 & 1 & 2 & 0 & 1 & 2 & 0 & 2 & 0 & 1 & 0 & 1 & 2\\
8 & 0 & 1 & 2 & 2 & 0 & 1 & 2 & 0 & 1 & 0 & 1 & 2 & 1 & 2 & 0 & 1 & 2 & 0\\
\hline
\end{tabular*}
\end{table}
Obviously, (\ref{eq4}) is a special case of $\mathrm{GR}_{\mathrm{ind}}$, since the average
$R^{2}$ coincides with the only squared canonical correlation for
projections of $R$ $2$-level factors. $\mathrm{GR}_{\mathrm{ind}}$ also fulfills all
requirements stated in the \hyperref[sec1]{Introduction}: (i) $\mathrm{GR}_{\mathrm{ind}}$ is coding
invariant because the canonical correlations are invariant to affine
transformations of the $X$ and $Y$ variables, as was discussed in Section~\ref{sec2}. (ii) The technique is obviously applicable for symmetric and asymmetric
designs alike, and (iii) $\mathrm{GR}_{\mathrm{ind}} < R+1$ again follows from the
resolution, $\mathrm{GR}_{\mathrm{ind}} \geq R$ follows from the properties of
correlations, and $\mathrm{GR}_{\mathrm{ind}} = R$ is obviously equivalent to complete
confounding of at least one main effects contrast in at least one $R$
factor projection, in the individual df sense discussed above.
\end{remark}
\begin{example}\label{ex5}
We consider the three nonisomorphic $\operatorname{OA}(18, 3^{3},
2)$ that can be obtained as projections from the well-known Taguchi L18 (see
Table~\ref{tab4}) by using columns 3, 4 and 5 $(D_{1})$, columns 2, 3 and 6
$(D_{2})$ or columns 2, 4 and 5 $(D_{3})$. We have $A_{3}(D_{1}) = 0.5$,
$A_{3}(D_{2}) = 1$ and $A_{3}(D_{3}) = 2$, and consequently $\mathrm{GR}(D_{1}) =
3.5$, $\mathrm{GR}(D_{2}) = 3.29$ and $\mathrm{GR}(D_{3}) = 3$. For calculating
$\mathrm{GR}_{\mathrm{ind}}$, the largest canonical correlations of all factors in the
role of $Y$ are needed. These are all 0.5 for $D_{1}$ and all 1 for
$D_{3}$, such that $\mathrm{GR}_{\mathrm{ind}} = \mathrm{GR}$ for these two designs. For
$D_{2}$, the largest canonical correlation is 1 with the first factor (from
column 2 of the L18) in the role of $Y$, while it is $\sqrt{0.5}$ with
either of the other two factors in the role of $Y$; thus, $\mathrm{GR}_{\mathrm{ind}} =
3 < \mathrm{GR} = 3.29$. The completely aliased 1 df contrast of the first factor is
the contrast of the third level vs. the other two levels, which is apparent
from Table~\ref{tab5}: the contrast $\mathrm{A} = 2$ vs. $\mathrm{A} \in \{0,1\}$ is fully aliased
with the contrast of one level of B vs. the other two, given a particular
level of C. Regardless of factor coding, this direct aliasing is reflected
by a canonical correlation ``one'' for the first canonical variate of the
main effects contrast matrix of factor A.
\end{example}
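The quantities in this example are easy to reproduce computationally. The
following minimal sketch (an illustration added for concreteness, not part
of the original analysis; it assumes Python with numpy and the column data
transcribed from Table~\ref{tab4}) computes $A_{3}$, $\mathrm{GR}$ and
$\mathrm{GR}_{\mathrm{ind}}$ for $D_{1}$, $D_{2}$ and $D_{3}$:
\begin{verbatim}
import numpy as np

# columns of the Taguchi L18, copied from Table 4
L18 = {
    2: [0,0,0,1,1,1,2,2,2,0,0,0,1,1,1,2,2,2],
    3: [0,1,2]*6,
    4: [0,1,2,0,1,2,1,2,0,2,0,1,1,2,0,2,0,1],
    5: [0,1,2,1,2,0,0,1,2,2,0,1,2,0,1,1,2,0],
    6: [0,1,2,1,2,0,2,0,1,1,2,0,0,1,2,2,0,1],
}

# normalized orthogonal coding for a 3-level factor
C3 = np.array([[-1., 1.], [0., -2.], [1., 1.]]) * np.sqrt([1.5, 0.5])

def mm(col):                  # main-effect model matrix (N x 2)
    return C3[np.array(col)]

def interaction(cols):        # column-wise products of main effects
    X = mm(cols[0])
    for c in cols[1:]:
        M = mm(c)
        X = np.hstack([X[:, [i]] * M for i in range(X.shape[1])])
    return X

def a_R(cols):                # building block of GWLP (Theorem 1)
    colsums = interaction(cols).sum(axis=0)
    return float(colsums @ colsums) / len(cols[0])**2

def r1(Y, X):                 # first canonical correlation
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    Qx, _ = np.linalg.qr(X - X.mean(0))
    return np.linalg.svd(Qy.T @ Qx, compute_uv=False)[0]

for name, cs in [("D1", (3,4,5)), ("D2", (2,3,6)), ("D3", (2,4,5))]:
    cols = [L18[c] for c in cs]
    A3 = a_R(cols)
    worst = max(r1(mm(cols[i]),
                   interaction([cols[j] for j in range(3) if j != i]))
                for i in range(3))
    print(name, A3, 4 - np.sqrt(A3/2), 4 - worst)
# prints A3 = 0.5, 1, 2; GR = 3.5, 3.29, 3; GR_ind = 3.5, 3, 3
\end{verbatim}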
\begin{table}
\caption{Frequency table of columns $2\ (=\mathrm{A})$, $3\ (=\mathrm{B})$ and $6\ (=\mathrm{C})$ of the
Taguchi L18}\label{tab5}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}ccc@{}}
$\mathbf{C \bolds{=} 0}$&$\mathbf{C \bolds{=} 1}$&
$\mathbf{C \bolds{=} 2}$\\
\begin{tabular}{ccccc}
&\textbf{B}&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}\\
&\textbf{0}& 1& 0 &1\\
&\textbf{1}& 1 &0 &1\\
&\textbf{2}& 0& 2& 0
\end{tabular}
&
\begin{tabular}{ccccc}
&\textbf{B}&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}\\
&\textbf{0}& 1& 1 &0\\
&\textbf{1}& 1 &1&0\\
&\textbf{2}& 0& 0& 2
\end{tabular}
&
\begin{tabular}{ccccc}
&\textbf{B}&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}\\
&\textbf{0}& 0& 1 &1\\
&\textbf{1}& 0 &1 &1\\
&\textbf{2}& 2& 0& 0
\end{tabular}
\end{tabular*}
\end{table}
Using this example, we now compare the $\mathrm{GR}$ introduced here to proposals by
\citet{Evaetal05} and \citet{PanLiu10}: The \textit{GRes} values
reported by \citet{Evaetal05} for designs $D_{1}$, $D_{2}$ and
$D_{3}$ in the qualitative case are 3.75, 3.6464, 3.5, respectively;
especially the 3.5 for the completely aliased design $D_{3}$ does not make
sense. Pang and Liu reported values 3.75, 3.75 and 3, respectively; here,
at least the completely aliased design $D_{3}$ is assigned the value~``3.''
Introducing the square root, as was discussed in connection with equation
(\ref{eq4}), their generalized resolutions become 3.5, 3.5 and 3, respectively,
that is, they coincide with our $\mathrm{GR}$ results for designs $D_{1}$ and
$D_{3}$. For design $D_{2}$, their value 3.5 is still different from our
3.29 for the following reason: our approach considers $A_{3} =
a_{3}(1,2,3)$ as a sum of two $R^{2}$-values and subtracts the square root
of their average or maximum ($\mathrm{GR}$ or $\mathrm{GR}_{\mathrm{ind}}$, resp.), while
Pang and Liu's approach considers it as a sum of $2^{3} =8$ summands,
reflecting the potentially different linear combinations of the three
factors in the Galois field sense, the (square root of the) maximum of
which they subtract from $R+1$.
\section{Properties of GR}\label{sec4}
Let G be the set of all runs of an $s_{1} \times \cdots \times s_{n}$ full
factorial design, with $\llvert \mathrm{G} \rrvert = \prod_{i = 1}^{n}
s_{i}$ the cardinality of G. For any design $D$ in $N$ runs for $n$ factors
at $s_{1}, \ldots, s_{n}$ levels, let $N_{\mathbf{x}}$ be the number of
times that a point $\mathbf{x} \in\mathrm{G}$ appears in $D$. $\bar{N} = N /
\llvert \mathrm{G} \rrvert $ denotes the average frequency for each point of
G in the design $D$. We can measure the goodness of a fractional factorial
design $D$ by the uniformity of the design points of $D$ in the set of all
points in G, that is, the uniformity of the frequency distribution
$N_{\mathbf{x}}$. One measure, suggested by \citet{Tan01} and \citet{AiZha04}, is the variance
\[
\operatorname{V} (D) = \frac{1}{\llvert \mathrm{G} \rrvert }\sum_{\mathbf{x}
\in \mathrm{G}}
( N_{\mathbf{x}} - \bar{N} )^{2} = \frac{1}{\llvert \mathrm{G} \rrvert }\sum
_{\mathbf{x} \in \mathrm{G}} N_{\mathbf{x}}^{2} - \bar{N}^{2}.
\]
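For a small design, this measure is immediate to evaluate. The following
minimal sketch (an added illustration, assuming Python; not part of the
original development) computes $\operatorname{V}(D)$ directly from the run list:
\begin{verbatim}
from collections import Counter
from itertools import product

def V(design, levels):
    """design: list of runs (tuples); levels: (s_1, ..., s_n)."""
    G = 1
    for s in levels:
        G *= s                           # |G| of the full factorial
    N = len(design)
    Nbar = N / G
    counts = Counter(map(tuple, design))
    return sum((counts.get(x, 0) - Nbar)**2
               for x in product(*map(range, levels))) / G

# a full factorial is perfectly uniform:
print(V(list(product(range(2), range(3))), (2, 3)))   # 0.0
\end{verbatim}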
Let $N = q |\mathrm{G}| + r$ with nonnegative integers $q$ and $r$ and $0
\leq r < |\mathrm{G}|$ (often $q=0$), that is, $r = N \operatorname{mod} |\mathrm{G}|$ is the remainder of
$N$ when divided by $|\mathrm{G}|$. Note that $\sum_{\mathbf{x} \in \mathrm{G}}
N_{\mathbf{x}} = N$, so $\mathrm{V}(D)$ is minimized if and only if each
$N_{\mathbf{x}}$ takes the value $q$ or $q + 1$ for every $\mathbf{x} \in \mathrm{G}$.
When $r$ points in $\mathrm{G}$ appear $q + 1$ times and the remaining $|\mathrm{G}| -
r$ points appear $q$ times, $\mathrm{V}(D)$ reaches the minimal value
$r(|\mathrm{G}| - r) / |\mathrm{G}|^{2}$. \citet{AiZha04} showed that
$\mathrm{V}(D)$ is a function of GWLP. In particular, if $D$ has strength
$n - 1$, their result implies that $\mathrm{V}(D) = \bar{N}^{2}A_{n}(D)$. Combining
these results, and using the following definition, we obtain an upper bound
for $\mathrm{GR}$ for some classes of designs and provide a necessary and sufficient
condition under which this bound is achieved.
\begin{definition}[{[Modified from \citet{Xu03}]}]\label{de4}
(i) A design $D$ has maximum $t$-balance, if and only if the
possible level combinations for all projections onto $t$ columns occur as
equally often as possible, that is, either $q$ or $q+1$ times, where $q$ is
an integer such that $N = q|\mathrm{G}_{\mathrm{proj}}| + r$ with $\mathrm{G}_{\mathrm{proj}}$ the set
of all runs for the full factorial design of each respective
$t$-factor-projection and $0 \leq r < | \mathrm{G}_{\mathrm{proj}}|$.
(ii) An $\operatorname{OA}(N, s_{1}, \ldots, s_{n}, t -1)$ with $n \geq t$ has
weak strength $t$ if and only if it has maximum $t$-balance. We denote weak
strength $t$ as $\operatorname{OA}(N,s_{1}, \ldots, s_{n}, t^{-})$.
\end{definition}
\begin{remark}\label{re4}
\citet{Xu03} did not require strength $t-1$ in the definition of weak
strength $t$, that is, the \citet{Xu03} definition of weak strength $t$
corresponds to our definition of maximum $t$-balance. For the frequent
case, for which all $t$-factor projections have $q=0$, or $q=1$ and
$r=0$, in Definition~\ref{de4}(i), maximum $t$-balance is equivalent to
the absence of repeated runs in any projection onto $t$ factors. In that
case, maximum $t$-balance implies maximum $k$-balance for $k > t$, and
weak strength $t$ is equivalent to strength $t-1$ with absence of
repeated runs in any projection onto $t$ or more factors.
\end{remark}
\begin{theorem}\label{th5}
Let $D$ be an $\operatorname{OA}(N, s_{1}, \ldots, s_{R}, R - 1)$.
Then $A_{R} ( D ) \ge \frac{r ( \prod_{i = 1}^{R} s_{i} - r
)}{N^{2}}$, where $r$ is the remainder when $N$ is divided by
$\prod_{i = 1}^{R} s_{i}$. The equality holds if and only if $D$ has weak
strength $R$.
\end{theorem}
As all $R$ factor projections of any $\operatorname{OA}(N, s_{1}, \ldots, s_{n}, R^{-})$
fulfill the necessary and sufficient condition of Theorem~\ref{th5}, we have the
following corollary.
\begin{corollary}\label{co2}
Suppose that an $\operatorname{OA}(N, s_{1},\ldots, s_{n}, R)$
does not exist. Then any $\operatorname{OA}(N, s_{1}, \ldots, s_{n}, R^{-})$ has maximum
$\mathrm{GR}$ among all $\operatorname{OA}(N, s_{1},\ldots, s_{n},\break R - 1)$.
\end{corollary}
\begin{corollary}\label{co3}
Suppose that an $\operatorname{OA}(N, s^{n}, R)$ does not
exist. Let $D$ be an $\operatorname{OA}(N, s^{n}, R -1)$. Then $\mathrm{GR}(D) \le R + 1 -
\sqrt{\frac{r ( s^{R} - r )}{N^{2} ( s - 1 )}}$, where
$r$ is the remainder when $N$ is divided by $s^{R}$. The equality holds if
and only if $D$ has weak strength $R$.
\end{corollary}
\begin{example}\label{ex6}
(1) Any projection onto three 3-level columns from
an $\operatorname{OA}(18, 6^{1}3^{6}, 2)$ has 18 distinct runs
($q=0$, $r=N=18$) and is an
OA of weak strength 3, so it has $A_{3} =1/2$ and $\mathrm{GR} =
4 - \sqrt{18
\cdot 9 / ( 18^{2} \cdot 2 )}=3.5$. (2) Any projection onto
three or more $s$-level columns from an $\operatorname{OA}(s^{2}, s^{s+1}, 2)$ has $\mathrm{GR} =
3$, since $N = r = s^{2}$, so that the upper limit from the corollary
becomes $\mathrm{GR} = R = 3$.
\end{example}
Using the following lemma according to \citet{MukWu95}, Corollary~\ref{co3}
can be applied to a further class of designs.
\begin{lemma}[{[\citet{MukWu95}]}]\label{le1} For a saturated $\operatorname{OA}(N,
s_{1}^{n_1}s_{2}^{n_2}, 2)$ with $n_{1}(s_{1} -1) + n_{2}(s_{2}
-1) = N - 1$, let $\delta_{i}(a, b)$ be the number of coincidences of two
distinct rows $a$ and $b$ in the $n_{i}$ columns of $s_{i}$ levels, for $i
= 1, 2$. Then
\[
s_{1} \delta_{1}(a, b) + s_{2}
\delta_{2}(a, b) = n_{1} + n_{2} - 1.
\]
\end{lemma}
Consider a saturated $\operatorname{OA}(2s^{2}, (2s)^{1} s^{2s}, 2)$, where $r = N =
2s^{2}, s_{1} = 2s, s_{2} = s, n_{1} = 1, n_{2} = 2s$. From Lemma~\ref{le1}, we have $2\delta_{1}(a, b) + \delta_{2}(a, b) = 2$, so that
$\delta_{2}(a, b) \leq 2$, that is, two distinct rows coincide in at most
two of the $s$-level columns. Hence, any projection onto three or more
$s$-level columns has no repeated runs, and
thus it achieves the upper limit $\mathrm{GR} = 4 - \sqrt{ ( s - 2 ) /
( 2s - 2 )}$ according to Corollary~\ref{co3}.
\begin{corollary}\label{co4}
For a saturated $\operatorname{OA}(2s^{2}, (2s)^{1} s^{2s},
2)$, any projection onto three or more $s$-level columns has $\mathrm{GR} = 4 -
\sqrt{ ( s - 2 ) / ( 2s - 2 )}$, which is optimum
among all possible OAs in $2s^{2}$ runs.
\end{corollary}
\begin{example}\label{ex7}
Design 1 of Table~\ref{tab3} is isomorphic to a projection
from a saturated $\operatorname{OA}(32,
8^{1}4^{8}, 2)$.
$A_{3}$ attains the lower bound
from Theorem~\ref{th5} $(32\cdot (64-32)/32^{2}= 1)$,
and thus $\mathrm{GR}$ attains the upper bound
$4-\sqrt{1/3} = 3.42$ from the
corollary.
\end{example}
Because of Theorem~\ref{th4}, any upper bound for $\mathrm{GR}$ is of course also an upper
bound for $\mathrm{GR}_{\mathrm{ind}}$, that is, Corollaries \ref{co3} and \ref{co4} also provide upper
bounds for $\mathrm{GR}_{\mathrm{ind}}$. However, for $\mathrm{GR}_{\mathrm{ind}}$ the bounds are not
tight in general; for example, $\mathrm{GR}_{\mathrm{ind}} = 3$ for the design of
Example~\ref{ex7} (see also Example~\ref{ex9} in the following section).
\citet{But05} previously showed that all projections onto $s$-level columns
of $\operatorname{OA}(s^{2}, s^{s+1}, 2)$ or $\operatorname{OA}(2s^{2}, (2s)^{1} s^{2s}, 2)$ have GMA
among all possible designs.
\section{Factor wise GR values}\label{sec5}
In Section~\ref{sec3}, two versions of overall generalized resolution were defined:
$\mathrm{GR}$ and $\mathrm{GR}_{\mathrm{ind}}$. These take a worst case perspective: even if a
single projection in a large design is completely confounded---in the case
of mixed level designs or $\mathrm{GR}_{\mathrm{ind}}$ affecting perhaps only one factor
within that projection---the overall metric takes the worst case value
$R$. It can therefore be useful to accompany $\mathrm{GR}$ and $\mathrm{GR}_{\mathrm{ind}}$ by
factor specific summaries. For the factor specific individual df
perspective, one simply has to omit the maximization over the factors in
each projection and has to use the factor of interest in the role of $Y$
only. For a factor specific complete confounding perspective, one has to
divide each projection's $a_{R}(\cdot)$ value by the factor's df rather than the
minimum df, in order to obtain the average $R^{2}$ value for this
particular factor. This leads to the following definition.
\begin{definition}\label{de5}
For an $\operatorname{OA}(N, s_{1},\ldots, s_{n}, R -1)$,
define
\begin{longlist}[(ii)]
\item[(i)] $\mathrm{GR}_{\mathrm{tot}(i)} = R + 1 - \sqrt{\max_{ \{
u_{2},\ldots,u_{R} \} \subseteq \{ 1,\ldots,n
\} \setminus \{ i \}}
\frac{a_{R} (
i,u_{2},\ldots,u_{R} )}{s_{i} - 1}}$,
\item[(ii)] $\mathrm{GR}_{\mathrm{ind}(i)} = R + 1 - \max_{\{ i,u_{2},\ldots,u_{R}\}
\subseteq \{ 1,\ldots,n\}} r_{1} (
\mathbf{X}_{i};\mathbf{X}_{u_{2},\ldots, u_{R}} )$, with $\mathbf{X}_{i}$
the model matrix of factor $i$ and $\mathbf{X}_{u_2,\ldots, u_R}$ the $R -1$
factor interaction model matrix of the factors in $\{u_{2},\ldots,u_{R}\}$
in normalized orthogonal coding, and $r_{1}(\mathbf{Y};\mathbf{X})$ the
first canonical correlation between matrices $\mathbf{X}$ and
$\mathbf{Y}$.
\end{longlist}
\end{definition}
It is straightforward to verify that $\mathrm{GR}$ and $\mathrm{GR}_{\mathrm{ind}}$ can be
calculated as the respective minima of the factor specific $\mathrm{GR}$ values from
Definition~\ref{de5}.
\begin{theorem}\label{th6}
For the quantities from Definitions \ref{de2}, \ref{de3} and \ref{de5}, we
have
\begin{longlist}[(ii)]
\item[(i)] $\mathrm{GR} = \min_{i} \mathrm{GR}_{\mathrm{tot}(i)}$,
\item[(ii)] $\mathrm{GR}_{\mathrm{ind}} = \min_{i} \mathrm{GR}_{\mathrm{ind}(i)}$.
\end{longlist}
\end{theorem}
\begin{example}\label{ex8}
The Taguchi L18 has $\mathrm{GR} = \mathrm{GR}_{\mathrm{ind}} = 3$, and
the following $\mathrm{GR}_{\mathrm{ind}(i)}$ and $\mathrm{GR}_{\mathrm{tot}(i)}$ values
($\mathrm{GR}_{\mathrm{ind}(i)} = \mathrm{GR}_{\mathrm{tot}(i)}$ for all $i$): 3.18, 3, 3.29, 3,
3, 3.29, 3.29, 3.29. When omitting the second column, the remaining seven
columns have $\mathrm{GR} = \mathrm{GR}_{\mathrm{ind}} = 3.18$, again with $\mathrm{GR}_{\mathrm{ind}(i)}
= \mathrm{GR}_{\mathrm{tot}(i)}$ and the value for all 3-level factors at 3.42. When
omitting the fourth column instead, the then remaining seven columns have
$\mathrm{GR} = 3.18$, $\mathrm{GR}_{\mathrm{ind}} = 3$, $\mathrm{GR}_{\mathrm{tot}(i)}$ values 3.18, 3.29,
3.29, 3.42, 3.29, 3.29, 3.29 and $\mathrm{GR}_{\mathrm{ind}(i)}$ values the same,
except for the second column, which has $\mathrm{GR}_{\mathrm{ind}(2)} = 3$.
\end{example}
\begin{table}
\caption{Largest canonical correlations, $\mathrm{GR}_{\mathrm{ind}(i)}$ and
$\mathrm{GR}_{\mathrm{ind}}$ values for the GMA
$\operatorname{OA}(32, 4^{3}, 2)$}
\label{tab6}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccccccc@{}}
\hline
& $\bolds{r_{1}(1;23)}$ & $\bolds{r_{1}(2;13)}$ &
$\bolds{r_{1}(3;12)}$ & $\bolds{\mathrm{GR}_{\mathrm{ind}(1)}}$ &
$\bolds{\mathrm{GR}_{\mathrm{ind}(2)}}$ & $\bolds{\mathrm{GR}_{\mathrm{ind}(3)}}$ &
$\bolds{\mathrm{GR}_{\mathrm{ind}}}$\\
\hline
\phantom{0}1 & 1.000 & 1.000 & 1.000 & 3.000 & 3.000 & 3.000 & 3.000\\
\phantom{0}2 & 0.866 & 0.866 & 0.866 & 3.134 & 3.134 & 3.134 & 3.134\\
\phantom{0}3 & 0.707 & 0.707 & 1.000 & 3.293 & 3.293 & 3.000 & 3.000\\
\phantom{0}4 & 0.707 & 0.707 & 0.866 & 3.293 & 3.293 & 3.134 & 3.134\\
\phantom{0}5 & 0.707 & 0.707 & 0.791 & 3.293 & 3.293 & 3.209 & 3.209\\
\phantom{0}6 & 0.707 & 0.707 & 0.707 & 3.293 & 3.293 & 3.293 & 3.293\\
\phantom{0}7 & 0.707 & 0.707 & 0.707 & 3.293 & 3.293 & 3.293 & 3.293\\
\phantom{0}8 & 0.707 & 0.707 & 0.707 & 3.293 & 3.293 & 3.293 & 3.293\\
\phantom{0}9 & 0.612 & 0.612 & 0.612 & 3.388 & 3.388 & 3.388 & 3.388\\
10 & 0.707 & 0.707 & 0.707 & 3.293 & 3.293 & 3.293 & 3.293\\
\hline
\end{tabular*}
\end{table}
$\mathrm{GR}$ from Definition~\ref{de2} and $\mathrm{GR}_{\mathrm{ind}}$ from Definition~\ref{de3} are not the
only possible generalizations of (\ref{eq4}). It is also possible to define a
$\mathrm{GR}_{\mathrm{tot}}$, by declaring only those $R$ factor projections as
completely confounded for which all factors are completely confounded. For
this, the factor wise average $R^{2}$ values for each projection---also
used in $\mathrm{GR}_{\mathrm{tot}(i)}$---need to be considered. A projection is
completely confounded, if these are all one, which can be formalized by
requesting their minimum or their average to be one. The average appears
more informative, leading to
\begin{equation}
\mathrm{GR}_{\mathrm{tot}} = R + 1 - \sqrt{\max_{\{
u_{1},\ldots,u_{R} \} \subseteq \{ 1,\ldots,n
\}}
\frac{1}{R}\sum_{i = 1}^{R}
\frac{a_{R}( u_{1},\ldots,u_{R}
)}{s_{u_{i}} - 1}}. \label{eq6}
\end{equation}
It is straightforward to see that $\mathrm{GR}_{\mathrm{tot}} \geq \mathrm{GR}$, and that
$\mathrm{GR}_{\mathrm{tot}} = \mathrm{GR}$ for symmetric designs. The asymmetric design of Table~\ref{tab2}
(Example~\ref{ex3}) has $\mathrm{GR} = 3$ and $\mathrm{GR}_{\mathrm{tot}} = 3 + 1 - \sqrt{ ( 1 + 1 + 1 /
3 ) / 3} = 3.12> 3$, in spite of the fact that two of its factors are
completely confounded. Of course, mixed level projections can never be
completely confounded according to (\ref{eq6}), which is the main reason why we
have not pursued this approach.
The final example uses the designs of Table~\ref{tab3} to show that $\mathrm{GR}_{\mathrm{ind}}$
and the $\mathrm{GR}_{\mathrm{ind}(i)}$ can introduce meaningful differentiation
between GMA designs.
\begin{example}\label{ex9}
All designs of Table~\ref{tab3} had $A_{3} =1$ and $\mathrm{GR}=3.42$.
The information provided in Table~\ref{tab3} is insufficient for determining
$\mathrm{GR}_{\mathrm{ind}}$. Table~\ref{tab6} provides the necessary information: the largest
canonical correlations are the same regardless which variable is chosen as
the $Y$ variable for seven designs, while they vary with the choice of the
$Y$ variable for three designs. There are five different $\mathrm{GR}_{\mathrm{ind}}$
values for these 10 designs that were not further differentiated by $A_{3}$
or $\mathrm{GR}$, and in combination with the $\mathrm{GR}_{\mathrm{ind}(i)}$, seven different
structures can be distinguished.
\end{example}
The differentiation achieved by $\mathrm{GR}_{\mathrm{ind}}$ is meaningful, as can be
seen by comparing frequency tables of the first, third and ninth design
(see Table~\ref{tab7}). The first and third design have $\mathrm{GR}_{\mathrm{ind}}=3$, which is
due to a very regular confounding pattern: in the first design,
dichotomizing each factor into a $0/1$ vs. $2/3$ design yields a regular
resolution III 2-level design (four different runs only), that is, each
main effect contrast $0/1$ vs. $2/3$ is completely confounded by the two-factor
interaction of the other two $0/1$ vs. $2/3$ contrasts; the third design shows
this severe confounding for factor C only, whose $0/1$ vs. $2/3$ contrast
is likewise completely confounded by the interaction between factors A and
B. Design 9 is the best of all GMA designs in terms of $\mathrm{GR}_{\mathrm{ind}}$. It
does not display such a strong regularity in behavior. $\mathrm{GR}_{\mathrm{ind}}$
treats designs 1 and 3 alike, although design 1 is clearly more severely
affected than design 3, which can be seen from the individual
$\mathrm{GR}_{\mathrm{ind}(i)}$. However, as generalized resolution has always taken a
``worst case'' perspective, this way of handling things is appropriate in
this context.
\begin{table}\tabcolsep=4pt
\caption{Frequency tables of designs 1, 3 and 9
from Table~\protect\ref{tab6}}\label{tab7}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}cccc@{}}
$\mathbf{C \bolds{=} 0}$&$\mathbf{C \bolds{=} 1}$&
$\mathbf{C \bolds{=} 2}$&$\mathbf{C \bolds{=} 3}$\\
\multicolumn{4}{@{}l}{Design 1}\\
\begin{tabular}{@{}cccccc@{}}
&\textbf{B}&&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}&\textbf{3}\\
&\textbf{0}& {1}& {1} &{0}&{0}\\
&\textbf{1}& {1} &{1} &{0}&{0}\\
&\textbf{2}& {0}& {0}& {1}&{1}\\
&\textbf{3}&{0}&{0}&{1}&{1}
\end{tabular}
&
\begin{tabular}{@{}cccccc@{}}
&\textbf{B}&&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}&\textbf{3}\\
&\textbf{0}& {1}& {1} &{0}&{0}\\
&\textbf{1}& {1} &{1} &{0}&{0}\\
&\textbf{2}& {0}& {0}& {1}&{1}\\
&\textbf{3}&{0}&{0}&{1}&{1}
\end{tabular}
&
\begin{tabular}{@{}cccccc@{}}
&\textbf{B}&&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}&\textbf{3}\\
&\textbf{0}& {0}& {0} &{1}&{1}\\
&\textbf{1}& {0} &{0} &{1}&{1}\\
&\textbf{2}& {1}& {1}& {0}&{0}\\
&\textbf{3}&{1}&{1}&{0}&{0}
\end{tabular}
&
\begin{tabular}{@{}cccccc@{}}
&\textbf{B}&&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}&\textbf{3}\\
&\textbf{0}& {0}& {0} &{1}&{1}\\
&\textbf{1}& {0} &{0} &{1}&{1}\\
&\textbf{2}& {1}& {1}& {0}&{0}\\
&\textbf{3}&{1}&{1}&{0}&{0}
\end{tabular}\\[6pt]
\multicolumn{4}{@{}l}{Design 3}\\
\begin{tabular}{@{}cccccc@{}}
&\textbf{B}&&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}&\textbf{3}\\
&\textbf{0}& {1}& {1} &{0}&{0}\\
&\textbf{1}& {1} &{0} &{1}&{0}\\
&\textbf{2}& {0}& {1}& {0}&{1}\\
&\textbf{3}&{0}&{0}&{1}&{1}
\end{tabular}
&
\begin{tabular}{@{}cccccc@{}}
&\textbf{B}&&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}&\textbf{3}\\
&\textbf{0}& {1}& {1} &{0}&{0}\\
&\textbf{1}& {1} &{0} &{1}&{0}\\
&\textbf{2}& {0}& {1}& {0}&{1}\\
&\textbf{3}&{0}&{0}&{1}&{1}
\end{tabular}&
\begin{tabular}{@{}cccccc@{}}
&\textbf{B}&&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}&\textbf{3}\\
&\textbf{0}& {0}& {0} &{1}&{1}\\
&\textbf{1}& {0} &{1} &{0}&{1}\\
&\textbf{2}& {1}& {0}& {1}&{0}\\
&\textbf{3}&{1}&{1}&{0}&{0}
\end{tabular}&
\begin{tabular}{@{}cccccc@{}}
&\textbf{B}&&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}&\textbf{3}\\
&\textbf{0}& {0}& {0} &{1}&{1}\\
&\textbf{1}& {0} &{1} &{0}&{1}\\
&\textbf{2}& {1}& {0}& {1}&{0}\\
&\textbf{3}&{1}&{1}&{0}&{0}
\end{tabular}\\[6pt]
\multicolumn{4}{@{}l}{Design 9}\\
\begin{tabular}{@{}cccccc@{}}
&\textbf{B}&&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}&\textbf{3}\\
&\textbf{0}& {1}& {1} &{0}&{0}\\
&\textbf{1}& {1} &{0} &{1}&{0}\\
&\textbf{2}& {0}& {0}& {1}&{1}\\
&\textbf{3}&{0}&{1}&{0}&{1}
\end{tabular}&
\begin{tabular}{@{}cccccc@{}}
&\textbf{B}&&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}&\textbf{3}\\
&\textbf{0}& {1}& {0} &{1}&{0}\\
&\textbf{1}& {0} &{1} &{0}&{1}\\
&\textbf{2}& {1}& {1}& {0}&{0}\\
&\textbf{3}&{0}&{0}&{1}&{1}
\end{tabular}
&
\begin{tabular}{@{}cccccc@{}}
&\textbf{B}&&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}&\textbf{3}\\
&\textbf{0}& {0}& {1} &{0}&{1}\\
&\textbf{1}& {1} &{0} &{0}&{1}\\
&\textbf{2}& {0}& {1}& {1}&{0}\\
&\textbf{3}&{1}&{0}&{1}&{0}
\end{tabular}&
\begin{tabular}{@{}cccccc@{}}
&\textbf{B}&&&&\\
\textbf{A}&& \textbf{0} &\textbf{1} &\textbf{2}&\textbf{3}\\
&\textbf{0}& {0}& {0} &{1}&{1}\\
&\textbf{1}& {0} &{1} &{1}&{0}\\
&\textbf{2}& {1}& {0}& {0}&{1}\\
&\textbf{3}&{1}&{1}&{0}&{0}
\end{tabular}\\
\end{tabular*}
\end{table}
\section{Discussion}\label{sec6}
We have provided a statistically meaningful interpretation for the building
blocks of GWLP and have generalized the resolution of \citet{DenTan99} and \citet{TanDen99} in two meaningful ways for qualitative
factors. The complete confounding perspective of $\mathrm{GR}$ of Definition~\ref{de2}
appears to be more sensible than the individual df perspective of
$\mathrm{GR}_{\mathrm{ind}}$ as a primary criterion. However, $\mathrm{GR}_{\mathrm{ind}}$ provides an
interesting new aspect that may provide additional understanding of the
structure of OAs and may help in ranking tied designs. The factor wise
values of Section~\ref{sec5} add useful detail. It will be interesting to pursue
concepts derived from the building blocks of $\mathrm{GR}_{\mathrm{tot}(i)}$ and
$\mathrm{GR}_{\mathrm{ind}(i)}$ for the ranking of mixed level designs. As was
demonstrated in Section~\ref{sec5}, $\mathrm{GR}$ from Definition~\ref{de2} and $\mathrm{GR}_{\mathrm{ind}}$ from
Definition~\ref{de3} are not the only possible generalizations of (\ref{eq4}) for
qualitative factors. The alternative given in equation (\ref{eq6}) appears too
lenient and has therefore not been pursued. The concept of weak strength
deserves further attention: For symmetric designs with weak strength $t$
according to Definition~\ref{de4}, Xu [(\citeyear{Xu03}), Theorem~3] showed that these have
minimum moment aberration (MMA), and consequently GMA (as MMA is equivalent
to GMA for symmetric designs) if they also have maximum $k$-balance for $k
= t+1,\ldots,n$. In particular, this implies that an $\operatorname{OA}(N, s^{n},
t^{-})$ with $N \leq s^{t}$ has GMA, because of Remark~\ref{re4}. Here, we showed
that designs of the highest possible resolution $R$ maximize $\mathrm{GR}$ if they
have weak strength $R$. It is likely that there are further beneficial
consequences from the concept of weak strength.
\begin{appendix}\label{app}
\section*{Appendix: Proof of Theorem~\texorpdfstring{\lowercase{\protect\ref{th1}}}{1}}
Let $\mathbf{M}_{\mathrm{C}} = (\mathbf{1}_{N} ,
\mathbf{M}_{1;\mathrm{C}}, \ldots, \mathbf{M}_{R -1;\mathrm{C}})$, with
$\mathbf{M}_{k;\mathrm{C}}$ the model matrix for all $k$-factor interactions,
$k=1,\ldots,R -1$. The assumption that the resolution of the array is $R$
and the chosen orthogonal contrasts imply $\mathbf{X}_{c}^{\mathrm{T}}
\mathbf{M}_{k;\mathrm{C}} = \mathbf{0}$ for $k < R-1$, with $\mathbf{X}_{c}$
as defined in the theorem. Denoting the $R -1$-factor interaction matrix
$\mathbf{M}_{R -1;\mathrm{C}}$ as $\mathbf{X}_{\mathrm{C}}$, the predictions
for the columns of $\mathbf{X}_{c}$ can be written as
\[
\hat{\mathbf{X}}_{c} = \mathbf{X}_{\mathrm{C}}\bigl (
\mathbf{X}_{\mathrm{C}}^{\mathrm{T}}\mathbf{X}_{\mathrm{C}} \bigr)^{ -
1}\mathbf{X}_{\mathrm{C}}^{\mathrm{T}}\mathbf{X}_{c} =
\frac{1}{N}\mathbf{X}_{\mathrm{C}}\mathbf{X}_{\mathrm{C}}^{\mathrm{T}}
\mathbf{X}_{c},
\]
since $\mathbf{X}_{\mathrm{C}}^{\mathrm{T}} \mathbf{X}_{\mathrm{C}} = N
\mathbf{I}_{\mathrm{df}(\mathrm{C})}$. As the column averages of $\hat{\mathbf{X}}_{c}$
are 0 because of the coding, the numerators for the $R^{2}$ values are the
diagonal elements of the matrix
\[
\hat{\mathbf{X}}_{c}^{\mathrm{T}}\hat{\mathbf{X}}_{c} =
\frac{1}{N^{2}}\mathbf{X}_{c}^{\mathrm{T}}\mathbf{X}_{\mathrm{C}}
\mathbf{X}_{\mathrm{C}}^{\mathrm{T}}\mathbf{X}_{\mathrm{C}}
\mathbf{X}_{\mathrm{C}}^{\mathrm{T}}\mathbf{X}_{c} =
\frac{1}{N}\mathbf{X}_{c}^{\mathrm{T}}
\mathbf{X}_{\mathrm{C}}\mathbf{X}_{\mathrm{C}}^{\mathrm{T}}
\mathbf{X}_{c},
\]
where $\mathbf{X}_{\mathrm{C}}^{\mathrm{T}}\mathbf{X}_{\mathrm{C}} =
N\mathbf{I}_{\mathrm{df}(\mathrm{C})}$ has been used again.
Analogously, the corresponding denominators are the diagonal elements of
\[
\mathbf{X}_{c}^{\mathrm{T}}\mathbf{X}_{c} = N
\mathbf{I}_{\mathrm{df}(c)},
\]
which are all identical to $N$. Thus, the sum of the $R^{2}$ values is the
trace of
$\frac{1}{N^{2}}\mathbf{X}_{c}^{\mathrm{T}}\mathbf{X}_{\mathrm{C}}\mathbf{X}_{\mathrm{C}}^{\mathrm{T}}\mathbf{X}_{c}$,
which can be written as
\begin{equation}
\operatorname{tr} \biggl( \frac{1}{N^{2}}\mathbf{X}_{c}^{\mathrm{T}}
\mathbf{X}_{\mathrm{C}}\mathbf{X}_{\mathrm{C}}^{\mathrm{T}}
\mathbf{X}_{c} \biggr) = \frac{1}{N^{2}}\operatorname{vec} \bigl(
\mathbf{X}_{\mathrm{C}}^{\mathrm{T}}\mathbf{X}_{c}
\bigr)^{\mathrm{T}}\operatorname{vec} \bigl( \mathbf{X}_{\mathrm{C}}^{\mathrm{T}}
\mathbf{X}_{c} \bigr),\label{eq7}
\end{equation}
where the vec operator stacks the columns of a matrix on top of each other,
that is, generates a column vector from all elements of a matrix [see,
e.g., \citet{Ber09} for the rule connecting trace to vec]. Now, realize
that
\[
\operatorname{vec} \bigl( \mathbf{X}_{\mathrm{C}}^{\mathrm{T}}\mathbf{X}_{c}
\bigr)^{\mathrm{T}} = \operatorname{vec} \Biggl( \Biggl( \sum_{i = 1}^{N}
\mathbf{X}_{\mathrm{C}(i,f)}\mathbf{X}_{c(i,g)} \Biggr)_{(f,g)}
\Biggr)^{\mathrm{T}} = \mathbf{1}_{1 \times
N}\mathbf{X}_{u_{1},\ldots,u_{R}},
\]
where an index pair $(i,f)$ or $(i,g)$ stands for the $i$th row and the
$f$th or $g$th column, respectively, and the columns in $\mathbf{X}_{u_{1},\ldots,u_{R}}$ are assumed
to appear in the order that corresponds to that in $\operatorname{vec} (
\mathbf{X}_{\mathrm{C}}^{\mathrm{T}}\mathbf{X}_{c} )^{\mathrm{T}}$
(w.l.o.g.). Then (\ref{eq7}) becomes
\[
\frac{1}{N^{2}}\mathbf{1}_{1 \times
N}\mathbf{X}_{u_{1},\ldots,u_{R}}
\mathbf{X}_{u_{1},\ldots,u_{R}}^{\mathrm{T}}\mathbf{1}_{1
\times N}^{\mathrm{T}}
= a_{R} ( u_{1},\ldots,u_{R} ),
\]
which proves the assertion.
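The trace/vec step in (\ref{eq7}) can also be checked numerically; the
following small sketch (an added sanity check assuming Python with numpy,
independent of the proof) verifies the identity for random matrices:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
Xc, XC = rng.standard_normal((2, 18, 4))
lhs = np.trace(Xc.T @ XC @ XC.T @ Xc)
cross = (XC.T @ Xc).ravel(order="F")   # vec of X_C^T X_c
print(np.isclose(lhs, cross @ cross))  # True
\end{verbatim}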
\end{appendix}
\section{Introduction}
\input{sections/introduction}
\section{Spin-boson model and iterative QuAPI method}
\label{Spin_Boson_And_IQUAPI}
\input{sections/spin_bosonModelAndIterativeQuAPIMethod}
\section{Continuation of the Path Segments}
\label{Sec_Paths}
\input{sections/continuationOfThePathSegments}
\section{Differential Equations for Path Integrals}
\label{sec:quapi_pde}
\input{sections/differentialEquationForPathIntegral}
\section{Numerical Experiments}
\label{Sec_Numerical_Experiments}
\input{sections/numericalExperiments}
\section{Conclusion and discussion}
\label{Discussion_Conclusion}
\input{sections/discussionAndConclusion}
\section*{Appendix. Derivation of i-QuAPI Method}
\input{sections/appendix}
\bibliographystyle{abbrv}
\subsection{Strang Splitting}
Strang splitting can be used in
the numerical scheme for
the partial differential equation \eqref{PDEoriginal}.
Consider a differential equation with the form
\begin{equation}
\frac{\partial}{\partial t} u = L_1 u + L_2 u
\end{equation}
with $L_1$ and $L_2$ being two operators.
The solution to the differential equation is
\begin{equation}
u(t+\Delta t) = \e^{(L_1+L_2)\Delta t} u(t).
\end{equation}
Strang splitting suggests approximating the solution by
\begin{equation}
u(t+\Delta t) \approx \e^{L_1 \Delta t/2} \e^{L_2 \Delta t} \e^{L_1 \Delta t/2} u(t).
\end{equation}
With the splitting, the PDE \eqref{PDEoriginal} is separated into
two equations.
\begin{align}
\frac{\partial}{\partial t} A_{(s^+,s^-)}^{D,\mathrm{sgn}}(t,[\tau_1,\cdots,\tau_D])
=& \frac{\partial}{\partial \tau_1}
A_{(s^+,s^-)}^{D,\mathrm{sgn}}(t,[\tau_1,\cdots,\tau_D]);
\label{eq_advection}\\
\begin{split}
\frac{\partial}{\partial t} A_{(s^+,s^-)}^{D,\mathrm{sgn}}(t,[\tau_1,\cdots,\tau_D])
=& -W_{(s^+,s^-)}^{D,\mathrm{sgn}}\left([\tau_1,\cdots,\tau_D]\right)
A_{(s^+,s^-)}^{D,\mathrm{sgn}}\left(t,[\tau_1,\cdots,\tau_D]\right) \\
&+ A_{(s^+,\hat{s}^-)}^{D+1,[-,\mathrm{sgn}]}
\left(t,[0,\tau_1,\cdots,\tau_D]\right)
+ A_{(\hat{s}^+,s^-)}^{D+1,[+,\mathrm{sgn}]}
\left(t,[0,\tau_1,\cdots,\tau_D]\right).
\end{split}
\label{eq_source}
\end{align}
In this way, we can focus on the advection term
and the source term respectively.
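Before applying the splitting to \eqref{PDEoriginal}, its accuracy can be
illustrated on a toy problem. The sketch below (an added illustration
assuming Python with numpy and scipy; the matrices are random stand-ins
for $L_1$ and $L_2$) shows the expected $O(\Delta t^3)$ local error of the
Strang approximation for non-commuting operators:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
L1, L2 = rng.standard_normal((2, 4, 4))   # two non-commuting operators
u0 = rng.standard_normal(4)

for dt in [0.1, 0.05, 0.025]:
    exact = expm((L1 + L2)*dt) @ u0
    strang = expm(L1*dt/2) @ expm(L2*dt) @ expm(L1*dt/2) @ u0
    print(dt, np.linalg.norm(exact - strang))   # shrinks like dt**3
\end{verbatim}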
\subsection{Finite difference method}
A simple idea to solve this equation numerically
is to apply the finite difference method.
A uniform mesh is used
in the discretization of the simplices.
We first fix the spatial step $h_s = \frac{T}{N}$,
where $N$ is some positive integer.
For the simplex \eqref{simplex},
the grid points are chosen to be
\begin{equation}
\mathcal{P}_D = \left\{ \left(m_1 h_s,\cdots,m_D h_s\right); m_k \in \mathbb{Z}_{\geq 0}, \sum_{k=1}^D m_k < N \right\},
\qquad D = 0,\cdots,D_{\max}.
\end{equation}
\end{equation}
Note that when $D=0$, $\mathcal{P}_0$ has one element,
which is an empty list.
For example, \Cref{fig:demonstrationOfGridPoints}
shows the grid points on the two-dimensional (triangle)
simplex for $N=5$ and on the three-dimensional (tetrahedron)
simplex for $N=4$.
We do not sample at the hypotenuse since
the values on the hypotenuse are determined
by the boundary condition
as discussed in \Cref{sec:boundaryCondition}.
\begin{figure}
\centering
\begin{tikzpicture}
\draw[->] (-0.5,0) -- (5.5,0);
\draw[->] (0,-0.5) -- (0,5.5);
\draw[line width = 1.5pt] (0,0) -- (5,0) -- (0,5) -- cycle;
\filldraw[black] (5.5,0) circle node[anchor=north] {$\tau_1$};
\filldraw[black] (0,5.5) circle node[anchor=east] {$\tau_2$};
\filldraw[black] (5,0) circle node[anchor=north] {$T$};
\filldraw[black] (0,5) circle node[anchor=east] {$T$};
\filldraw[black] (0,0) circle node[anchor=north east] {$0$};
\filldraw[red] (0,0) circle (2pt);
\filldraw[red] (1,0) circle (2pt);
\filldraw[red] (2,0) circle (2pt);
\filldraw[red] (3,0) circle (2pt);
\filldraw[red] (4,0) circle (2pt);
\filldraw[red] (0,1) circle (2pt);
\filldraw[red] (1,1) circle (2pt);
\filldraw[red] (2,1) circle (2pt);
\filldraw[red] (3,1) circle (2pt);
\filldraw[red] (0,2) circle (2pt);
\filldraw[red] (1,2) circle (2pt);
\filldraw[red] (2,2) circle (2pt);
\filldraw[red] (0,3) circle (2pt);
\filldraw[red] (1,3) circle (2pt);
\filldraw[red] (0,4) circle (2pt);
\draw[->] (9,2) -- (12.5,2);
\draw[->] (9,2) -- (9,5.5);
\draw[->] (9,2) -- (7.3,0);
\draw[line width = 1.5pt] (9,2) -- (12,2) -- (9,5) -- cycle;
\draw[line width = 1.5pt] (9,2) -- (7.725,0.5) -- (12,2);
\draw[line width = 1.5pt] (9,5) -- (7.725,0.5);
\filldraw[black] (7.3,0) circle node[anchor=north] {$\tau_1$};
\filldraw[black] (12.5,2) circle node[anchor=north] {$\tau_2$};
\filldraw[black] (9,5.5) circle node[anchor=east] {$\tau_3$};
\filldraw[black] (12,2) circle node[anchor=north] {$T$};
\filldraw[black] (9,5) circle node[anchor=east] {$T$};
\filldraw[black] (7.725,0.5) circle node[anchor=east] {$T$};
\filldraw[red] (9,2) circle (2pt);
\filldraw[red] (9,2.75) circle (2pt);
\filldraw[red] (9,3.5) circle (2pt);
\filldraw[red] (9,4.25) circle (2pt);
\filldraw[red] (9.75,2) circle (2pt);
\filldraw[red] (9.75,2.75) circle (2pt);
\filldraw[red] (9.75,3.5) circle (2pt);
\filldraw[red] (10.5,2) circle (2pt);
\filldraw[red] (10.5,2.75) circle (2pt);
\filldraw[red] (11.25,2) circle (2pt);
\filldraw[blue] (9-0.31875,2-0.375) circle (2pt);
\filldraw[blue] (9-0.31875,2.75-0.375) circle (2pt);
\filldraw[blue] (9-0.31875,3.5-0.375) circle (2pt);
\filldraw[blue] (9.75-0.31875,2-0.375) circle (2pt);
\filldraw[blue] (9.75-0.31875,2.75-0.375) circle (2pt);
\filldraw[blue] (10.5-0.31875,2-0.375) circle (2pt);
\filldraw[green] (9-2*0.31875,2-2*0.375) circle (2pt);
\filldraw[green] (9-2*0.31875,2.75-2*0.375) circle (2pt);
\filldraw[green] (9.75-2*0.31875,2-2*0.375) circle (2pt);
\filldraw[purple] (9-3*0.31875,2-3*0.375) circle (2pt);
\end{tikzpicture}
\caption{The grid points on the two-dimensional simplex (red points) with $N=5$ and three dimensional simplex (points of all colors) with $N=4$.}
\label{fig:demonstrationOfGridPoints}
\end{figure}
For a $D$-dimensional simplex with $N$ nodes on each edge,
there are ${N+D \choose D}$ grid points in one simplex.
For different choices of initial states $(s^+,s^-)$
and different choices of $\mathrm{sgn}$,
in the spin-boson model,
there are in total $4\times 2^D$ $D$-dimensional simplices.
Therefore, there are in total
$4\sum_{D=0}^{D_{\max}} 2^D {D+N \choose D}$
nodes in the method.
With the grid mentioned above,
a numerical scheme can be designed
to evolve the whole system.
\begin{itemize}
\item For each set of parameters
$D = 0,\cdots,D_{\max}$, $(s^+,s^-) = (\pm1,\pm1)$
and $p\in\mathcal{P}_D$
compute
$A_{(s^+,s^-)}^{D,\mathrm{sgn}}(0,p)$
according to \eqref{initialCondition}.
\item Assume that we already have all values of $A_{(s^+,s^-)}^{D,\mathrm{sgn}}(t,p)$ for $D = 0,\cdots,D_{\max}$,
all possible choices of $(s^+,s^-)$ and $p\in\mathcal{P}_D$
at a specific time $t$.
We evolve the system to obtain the corresponding values at time $t+\Delta t$ by the following steps.
\begin{itemize}
\item Evolve the system according to \eqref{eq_advection} for half time step $\Delta t/2$ by upwind scheme.
If points on the hypotenuse (where the components sum to $T$) are involved,
the boundary condition \eqref{eq_boundaryCondition} is applied.
\item Evolve the system according to \eqref{eq_source} for full time step $\Delta t$.
If points on the $(D_{\max}+1)$-dimensional simplices are involved,
the method to close the system in \Cref{ClosureOfTheSystem}
is applied.
\item Evolve the system according to \eqref{eq_advection} for another half time step $\Delta t/2$ by upwind scheme.
\item Compute the density matrix and observable at time $t+\Delta t$ according to \eqref{densityMatrixContinuous}
and $O = \tilde{\rho}_{(+1,+1)}-\tilde{\rho}_{(-1,-1)}$.
\end{itemize}
\item Repeat the previous step to evolve the system iteratively.
\end{itemize}
Note that the evolution according to \eqref{eq_advection} and \eqref{eq_source}
can be written as matrix-vector multiplications.
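As a schematic illustration of this remark (an added sketch assuming
Python with numpy and scipy; \texttt{Adv} and \texttt{Src} are placeholder
sparse matrices standing in for the discretized advection and source
operators, and explicit Euler substeps stand in for the upwind scheme),
the splitting loop reads:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

n = 1000                                 # total number of grid values
Adv = sp.random(n, n, density=1e-3, random_state=0, format="csr")
Src = sp.random(n, n, density=1e-3, random_state=1, format="csr")
u = np.zeros(n); u[0] = 1.0              # stands in for the initial data

dt = 1e-3
for step in range(100):
    u = u + (dt/2) * (Adv @ u)           # half advection step
    u = u + dt * (Src @ u)               # full source step
    u = u + (dt/2) * (Adv @ u)           # half advection step
\end{verbatim}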
\subsection{Spin-boson model}
\input{sections/subsections/spin_boson_model}
\subsection{Iterative QuAPI method}
\input{sections/subsections/iterativeQuAPIMethod}
\subsection{Representation of the paths}
\label{RepresentationOfPaths}
\input{sections/subsections/representationOfThePaths}
\subsection{Continuous limit of the propagator}
\label{LimitOfPropagator}
\input{sections/subsections/continuousLimitOfThePropagator}
\subsection{Continuous limit of $A$'s}
\label{ContinuityOfA}
\input{sections/subsections/continuousLimitOfAs_version2}
\subsection{Density Matrix and Observable} \label{sec:DensityMatrix}
\input{sections/subsections/densityMatrixAndObservable}
\subsection{Formulation of the Differential Equations}
\input{sections/subsections/formulationOfTheDifferentialEquations}
\subsection{Boundary Conditions}
\label{sec:boundaryCondition}
\input{sections/subsections/boundaryCondition}
\subsection{Truncation and Closure of the System}
\label{ClosureOfTheSystem}
\input{sections/subsections/truncationOfTheDimensionAndClousureOfTheSystem}
\subsubsection{Experiments with different coupling intensities}
In order to check the validity of our method, we first study the following parameters, which have been considered in \cite{Kelly2013efficient,cai2020inchworm}:
\begin{displaymath}
\Delta=1, \quad \omega_c = 2.5\Delta, \quad \beta = 5/\Delta, \quad \epsilon=0.
\end{displaymath}
The numerical results for $\xi = 0.2$ and $0.4$ are given in \Cref{fig_result_data0113}. The results of our method are compared with the i-QuAPI results, and the parameters used for both methods are given in the caption. The results correctly show that when the bath-system coupling is stronger, the fluctuation of the observable damps faster. In both cases, the evolutions of the observable agree well with each other, showing the reliability of our approach.
\begin{figure}[ht]
\subfloat[$\xi=0.2$]{\includegraphics[width= 0.45\textwidth]{figures/data01.eps}}\qquad
\subfloat[$\xi=0.4$]{\includegraphics[width= 0.45\textwidth]{figures/data13.eps}}
\caption{Both methods use truncation time $T=1.5$. For i-QuAPI method, $\Delta k$ is chosen to be 10. For c-QuAPI, $D_{\max}=8$ and $N=8$.}
\label{fig_result_data0113}
\end{figure}
According to \eqref{eq:ndof}, it can be calculated that the DEBPI method needs to store $N_{\mathrm{dof}} = 17444860$ different values of $A(t,h)$ for each time step.
In comparison, the i-QuAPI method, which has a smaller grid size, only requires storing $2^{2\times 10} = 1048576$ different values. To understand such a difference, we note that each discrete path segment in the i-QuAPI method can also be represented using the initial state $(r^+, r^-)$, the number of spin flips $D$, the branches of the spin flips $\mathrm{sgn}$, and the time difference between each pair of spin flips $(\tau_1, \cdots, \tau_D)$. However, for the i-QuAPI method, there is never more than one spin flip occurring at the same time on the same branch. In other words, if $\mathrm{sgn}_k = \mathrm{sgn}_{k+1}$, then $\tau_{k+1}$ must be nonzero. For example, when $D = 2$ and $\mathrm{sgn}_1 = \mathrm{sgn}_2$, the five points located on the vertical axis of \Cref{fig:2d} are not considered in the i-QuAPI method, leading to a lower memory cost than that of our discretization. For larger $D$, this will cause a more significant difference in the memory usage even if the grid sizes for both methods are the same. This difference can be eliminated by using a smarter PDE solver, which will be studied in future work.
Based on our current approach, the memory cost grows exponentially with $D_{\max}$. Therefore, if the problem setting allows a lower value of $D_{\max}$, the DEBPI method will show its advantage. Moreover, if $D_{\max}$ can be chosen small, we can use a small grid size in the discretization of $\triangle_T^{(D)}$ to achieve better accuracy, while for the i-QuAPI method, the memory cost grows exponentially with $\Delta k$, so that it is prohibitive to choose small grid sizes. In the following two subsections, we will show several cases with small spin-flipping frequency such that our method can achieve a lower memory cost.
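The degree-of-freedom counts quoted above are easy to reproduce; a short
sketch (an added illustration, assuming Python 3.8+ for \texttt{math.comb}):
\begin{verbatim}
from math import comb

def n_dof_debpi(D_max, N):
    return 4 * sum(2**D * comb(D + N, D) for D in range(D_max + 1))

print(n_dof_debpi(8, 8))   # 17444860 values for DEBPI
print(2**(2*10))           # 1048576 values for i-QuAPI with Delta k = 10
\end{verbatim}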
\subsubsection{Experiments with different biases} \label{sec:bias}
We now consider the following set of parameters:
\begin{displaymath}
\xi = 0.2, \quad \Delta=0.2, \quad \omega_c = 5\Delta, \quad \beta = 5/\Delta,
\end{displaymath}
where the spin flipping frequency $\Delta$ is smaller so we expect that a smaller $D_{\max}$ can be adopted for our DEBPI approach. Three different biases $\epsilon$ are chosen, and the results are given in
\Cref{fig_result_data101112}. For $D_{\max} = 5$, the curves for the DEBPI method almost coincide with the i-QuAPI results. Since the parameter $\epsilon$ denotes the difference between the energies of the two states, the three figures correctly show that when $\epsilon$ is larger, the state of the spin is more likely to be found as $\ket{-1}$, leading to the downshift of the curve in the second and third subplots in \Cref{fig_result_data101112}.
\begin{figure}[ht]
\subfloat[$\epsilon=0$]{\includegraphics[width= 0.3\textwidth]{figures/data10.eps}} \quad
\subfloat[$\epsilon=0.5\Delta$]{\includegraphics[width= 0.3\textwidth]{figures/data11.eps}} \quad
\subfloat[$\epsilon=\Delta$]{\includegraphics[width= 0.3\textwidth]{figures/data12.eps}}
\caption{Both methods use truncation time $T=4$. For i-QuAPI method, $\Delta k$ is chosen to be 10 and for c-QuAPI, $D_{\max}=5$ and $N=10$.}
\label{fig_result_data101112}
\end{figure}
In these tests, DEBPI needs
$\displaystyle\sum_{D=0}^5 4\times 2^D \begin{pmatrix} D+10 \\ D \end{pmatrix} = 458748$ numbers to store the numerical solution, while
the i-QuAPI scheme needs $2^{2\times 10} = 1048576$. Thus the memory cost of our method is about half that of i-QuAPI. Note that further reducing $D_{\max}$ may cause significant error in the numerical solution.
\Cref{differentDmax} shows the numerical results
for $D_{\max} = 3, 4, 5$ when $\epsilon = 0$, indicating that $D_{\max} = 5$ is a proper choice to guarantee the quality of the solution.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/Dmax.eps}
\caption{$\epsilon=0$ and $D_{\max}=3,4,5$.}
\label{differentDmax}
\end{figure}
In general, the parameter $D_{\max}$ can be considered as a factor controlling the trade-off between the memory cost and the numerical accuracy.
In order that $D_{\max}$ can be chosen small, we need both the memory time and the spin flipping frequency to be relatively small, meaning that within the path segment $T$,
the spin is unlikely to flip many times.
Under such circumstances, our method is more likely to outperform the method of i-QuAPI in terms of the memory cost.
\subsubsection{Experiments with different temperatures}
For the last set of tests, we choose the parameters
\begin{displaymath}
\xi=0.2, \quad \Delta = 0.1, \quad \omega_c = 2.5\Delta, \quad \epsilon=0,
\end{displaymath}
and we let the inverse temperature of the bath $\beta$ vary from $0.2/\Delta$ to $5/\Delta$.
The results and the numerical parameters are given in \Cref{fig_result_data151409}. As reflected in the numerical results, in the case of higher temperature, stronger quantum dissipation leads to faster reduction to the state with equal probabilities on both spin states.
\begin{figure}[ht]
\subfloat[$\beta=0.2/\Delta$]{\includegraphics[width= 0.3\textwidth]{figures/data15.eps}} \quad
\subfloat[$\beta=1/\Delta$]{\includegraphics[width= 0.3\textwidth]{figures/data14.eps}} \quad
\subfloat[$\beta=5/\Delta$]{\includegraphics[width= 0.3\textwidth]{figures/data09.eps}} \quad
\caption{Both methods use truncation time $T=4$. For i-QuAPI method, $\Delta k$ is chosen to be 10 and for c-QuAPI, $D_{\max}=3$ and $N=15$.}
\label{fig_result_data151409}
\end{figure}
Compared with the examples in \Cref{sec:bias}, we use the same memory length $T = 4$, while the spin flipping frequency $\Delta$ is reduced by half. Thus it can be expected that the value of $D_{\max}$ can also be reduced by about a half. Therefore, choosing $D_{\max} = 3$ is sufficient to capture the behavior of the evolution processes here.
In these examples, due to the small value of $D_{\max}$, the memory cost of the DEBPI approach is much lower even if $N$ is chosen to be greater than $\Delta k$. In detail, the DEBPI approach only requires to store $\displaystyle\sum_{D=0}^3 4\times 2^D \begin{pmatrix} D+15 \\ D \end{pmatrix} = 28420$ values, while the i-QuAPI scheme requires $2^{2\times 10} = 1048576$.
\subsection{Numerical Method}
\label{sec:num_method}
\input{sections/subsections/numerialMethod}
\subsection{Numerical Results}
\label{sec:num_res}
\input{sections/subsections/numericalResults}
\section{Introduction}
Studies of non-leptonic decays of charmed mesons constitute a primary method of investigations into direct CP-violation
in that system. Even though the experimental precision for studying $D$ decays has steadily improved over the past decade,
theory calculations have faced severe challenges. Precise numerical predictions of CP-violating observables are not possible
at the moment due to large non-perturbative contributions from strong interactions affecting weak-decay amplitudes. A way out
in such a situation involves phenomenological fits of decay amplitudes to experimentally measured decay widths of charmed
mesons. If the number of fit parameters is smaller than the number of experimentally measured observables, then predictions
are possible. Such fits require a defined procedure on how to parametrize complex-valued decay amplitudes \cite{Petrov:2021idw}.
One way to approach the problem is to note that the light-quark operators in the weak effective Hamiltonian governing heavy-quark decays,
as well as the initial and final states form product representations of a flavor $SU(3)_F$ group. These product representations can be
reduced with the help of the Wigner-Eckart theorem. This way, a basis is chosen, in which all decay amplitudes can be expanded in terms
of the reduced matrix elements. Such an approach was applied to both $B$-decays \cite{Savage:1989ub,Grinstein:1996us} and
$D$-decays \cite{Quigg:1979ic,Kaeding:1995vq,Savage:1991wu}. In the limit of exact $SU(3)_F$ symmetry, all decays of a triplet of
$D$-mesons, $D^0, D^+,$ and $D^+_s$, into two light octet meson states can be parametrized in terms of five independent complex
parameters \cite{Quigg:1979ic}. We will refer to this approach as the ``$SU(3)_F$ matrix-elements approach.''
Alternatively, a topological flavor-flow approach can be used. Developed in the study of $B$-decays \cite{Gronau:1994rj,Buras:1998ra,Imbeault:2006ss}, it has been applied to the charm sector \cite{Bhattacharya:2012pc,Bhattacharya:2012ah}. The flavor-flow approach postulates a basis of universal flavor
topologies for various decay amplitudes.\footnote{We remind the reader that while the flavor-flow diagrams do resemble Feynman graphs, they are not computed in perturbative field theory due to large non-perturbative QCD effects.} $SU(3)_F$ symmetry can be used to relate
decay amplitudes, as both the light-quark final states and initial $D$-mesons transform under it. These universal topological amplitudes can be
fitted to the existing experimental data. Due to long-distance effects, particularly rescattering among hadronic final states, often multiple flavor-flow
diagrams contribute to the same process. A subset of {\it linear combinations} of flavor-flow amplitudes can then be identified as the basis set for the
flavor-flow approach.
The two approaches described above are equivalent if the number of reduced matrix elements in the $SU(3)_F$ matrix-elements approach is
equal to the number of diagrammatic combinations in the flavor-flow approach, both describing the same set of decay amplitudes. Such
an equivalence has been shown in the limit of exact $SU(3)_F$ symmetry \cite{Gronau:1994rj,Zeppenfeld:1980ex}, as well as when first-order
$SU(3)_F$-breaking corrections are taken into account \cite{Muller:2015lua}. Here we revisit the question of equivalence of the two descriptions
and discuss the fit quality of the available data in both approaches.
Non-leptonic decays of charmed mesons can be additionally classified according to the rate of suppression of the (leading-order) weak-decay amplitudes by the Wolfenstein parameter, $\lambda = \sin\theta \simeq 0.2$ \cite{Wolfenstein:1983yz}, where
$\theta$ is the Cabibbo angle. Such amplitudes may contain zero, one, or two powers of $\lambda$. Weak-hadronic decays of charm are, therefore, categorized into Cabibbo-favored (CF) decays (${\cal A} \propto V^*_{cs}V^{}_{ud} \sim {\cal O}(1)$), singly-Cabibbo-suppressed (SCS) decays (${\cal A} \propto V^*_{cq}V^{}_{uq} \sim {\cal O}(\lambda)$ where $q = d, s$), and doubly-Cabibbo-suppressed (DCS) decays
(${\cal A}\propto V^*_{cd}V^{}_{us} \sim {\cal O}(\lambda^2))$. Since such classification is external to QCD, both the flavor $SU(3)_F$ and topological flavor-flow approaches can, in principle, be used to parametrize all CF, SCS, and DCS amplitudes. This can be considered as an advantage, as some fit parameters can be obtained from the CF and/or DCS transitions and then used to predict CP-violating asymmetries in SCS decays
\cite{Golden:1989qx,Pirtskhalava:2011va,Cheng:2012wr,Cheng:2012xb,Feldmann:2012js}.
This is so because the quark-level transitions for CF ($c\to su{\bar d}$) and DCS ($c\to du{\bar s}$) modes involve four distinct quark flavors that belong to the first two generations and therefore do not generate CP-asymmetries in the Standard Model at leading order in $\lambda$. To execute this program one needs to control $SU(3)_F$-breaking corrections in both
approaches \cite{Kaeding:1995vq,Savage:1991wu,Muller:2015lua,Hiller:2012xm,Falk:2001hx}.
Here we take a different look at the equivalency of the flavor $SU(3)_F$ and the topological flavor-flow approaches. Since the Wolfenstein parameter $\lambda$ is external to any QCD-based parametrization of decay amplitudes, one can, theoretically, dial any value for it. In particular, setting $\lambda=0$ would only leave CF decays as experimental data for a fit. It is interesting to note that in this case, in the $SU(3)_F$ limit, we would be left with {\it three} irreducible $SU(3)_F$ amplitudes and {\it four} topological flavor-flow amplitudes. In this paper we explore the equivalency of the phenomenological descriptions of CF charmed-meson decays in light of this discrepancy.
This paper is organized as follows. In Section \ref{sec:fits} we review both the flavor $SU(3)_F$ and topological flavor-flow
approaches to CF charm decays. We extend the discussion by including CF decays with the real $\eta$ and $\eta^\prime$ states
and present associated fits. In Section \ref{sec:connections}, we discuss the connections between those two approaches.
We conclude in Section \ref{sec:conclusions}.
\section{Cabibbo-favored decays in light of flavor-$SU(3)$ symmetry }
\label{sec:fits}
$SU(3)_F$ symmetry plays a prominent role in both the $SU(3)_F$ matrix-elements and topological flavor-flow approaches. Both methods use the fact that the
initial and final states transform under some product representations of the $SU(3)_F$ group. In particular, the initial state $D$-mesons,
$\ket{D^0}, \ket{D^+}, \ket{D^+_s}$, form a triplet of $SU(3)_F$, while the nine pseudoscalar mesons ($\pi^\pm,\pi^0,K^\pm,K^0,\overline{K}^0,\eta,\eta'$)
contain both an octet and a singlet. The two approaches differ by the choice of the ``basis'' parameters, which we discuss below.
In what follows, we employ physical $\eta$ and $\eta'$ states that are constructed from the $SU(3)$ octet $\eta_8$ and the singlet $\eta_1$ states using octet-singlet mixing,
\begin{eqnarray}
\eta &=& -\cos\theta\,\eta_8 - \sin\theta\,\eta_1 ,
\nonumber \\
\eta' &=& -\sin\theta\,\eta_8 + \cos\theta\,\eta_1,
\end{eqnarray}
where the octet $\eta_8$ and the singlet $\eta_1$ states are defined as
\begin{eqnarray}
\eta_8 &=& (u{\bar u} + d{\bar d} - 2\,s{\bar s})/\sqrt{6},
\nonumber \\
\eta_1 &=& (u{\bar u} + d{\bar d} + s{\bar s})/\sqrt{3} ,~
\end{eqnarray}
and $\theta$ is the $\eta-\eta'$ mixing angle. This mixing angle has nothing to do with weak decays of heavy flavors and can be fixed externally, for example from $B$-meson decays \cite{Datta:2001ir} or radiative decays of $J/\psi$ \cite{Petrov:1997yf} into $\eta$ and $\eta^\prime$ final states. Thus, we do not consider it a fit parameter. Oftentimes, we will use $\theta = \arcsin(1/3)$.
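This choice is particularly convenient: for $\sin\theta = 1/3$ one has
$\cos\theta = 2\sqrt{2}/3$, and the mixing formulas above give the simple
quark-flavor content
\begin{eqnarray}
\eta &=& -\frac{1}{\sqrt{3}}\left(u\bar{u} + d\bar{d} - s\bar{s}\right),
\nonumber \\
\eta' &=& \frac{1}{\sqrt{6}}\left(u\bar{u} + d\bar{d} + 2\,s\bar{s}\right).
\end{eqnarray}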
\subsection{$SU(3)_F$ matrix-elements approach}
The $SU(3)_F$ matrix-elements approach uses the fact that the Hamiltonian governing $D$ decays into the light mesons also transforms as a product representations of $SU(3)_F$. The quark-level Hamiltonian for CF $D$-meson decays can be written as
\begin{equation}
{{\cal H}}_\text{CF}~=~\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*(\bar ud)(\bar sc) + {\rm h.c.}
\label{eq:HCF}
\end{equation}
We begin by considering the Wigner-Eckart decompositions of the CF $D\to PP$ amplitudes using $SU(3)_{F}$ symmetry. An element of $SU(3)$ can be
represented using the state $\ket{\bm{r}YII_3}$ where $\bm{r}$ is the irreducible
representation (irrep) of the state, $Y$ is its hypercharge, while $I$ and $I_3$ stand for the isospin and its
third component, respectively. Under $SU(3)_{F}$ symmetry, the light quarks $u$, $d$, and $s$ (and the respective antiquarks)
transform as the fundamental triplet (anti-triplet) represented by,
\begin{eqnarray}
\ket{u}~=~\ket{\bm{3},\frac{1}{3},\frac{1}{2},\frac{1}{2}}\,,\quad
\ket{d}~=~\ket{\bm{3},\frac{1}{3},\frac{1}{2},-\frac{1}{2}}\,,\quad
\ket{s}~=~\ket{\bm{3},-\frac{2}{3},0,0}\,, \\
\ket{\overline{u}}~=~-\ket{\bm{\overline{3}},-\frac{1}{3},\frac{1}{2},-\frac{1}{2}}\,,\quad
\ket{\overline{d}}~=~ \ket{\bm{\overline{3}},\frac{1}{3},\frac{1}{2},\frac{1}{2}}\,,\quad
\ket{\overline{s}}~=~\ket{\bm{\overline{3}},\frac{2}{3},0,0}\,.
\end{eqnarray}
The charm quark (and anti-quark) is heavy and transforms as an $SU(3)_{F}$ singlet represented by $\ket{\bm{1},0,0,0}$.
Using this notation one can show that the CF Hamiltonian in Eq.~(\ref{eq:HCF}) contains the irreps
$\bm{\overline{15}}$ and $\bm{6}$ \cite{Quigg:1979ic,Savage:1991wu,Kaeding:1995vq} and can be represented by,
\begin{equation}
\mathcal{H}_\text{CF}~=~ -~\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*\(A\,{{\cal O}}^{(\bm{\overline{15}})}_{\frac{2}{3},1,-1} + C\,{{\cal O}}^{(\bm{6})}_{\frac{2}{3},1,-1}\) + {\rm h.c.}\,,
\end{equation}
where we have used the notation ${\cal O}^{(\bm{r})}_{Y,I,I_3}$ to denote the $SU(3)$ operators, whereas $A$ and $C$ represent
their respective coefficients.
As mentioned previously, the final states transform under a product representation of $SU(3)_F$. Since the $\ket{PP}$ final states must respect
Bose symmetry, we only consider symmetrized products of $SU(3)_F$ irreps,
\begin{eqnarray}
\[(\bm{8} + \bm{1}) \times (\bm{8} + \bm{1})\]_{\rm sym} &=& (\bm{8}\times\bm{8})_{\rm sym}
+ (\bm{8}\times\bm{1})_{\rm sym} + \bm{1} \,,~
\nonumber \\
&=& {\bm 27} + {\bm 8}_{{\bm 8}\times{\bm 8}} + {\bm 8}_{{\bm 8}\times{\bm 1}} + \bm{1}_{{\bm 8}\times{\bm 8}}
+ \bm{1}\,.
\label{eq:FinStDec}
\end{eqnarray}
Note that there are two octets in Eq.~(\ref{eq:FinStDec}): one from the octet-octet final state and the other from the
octet-singlet one.
Now, of the above irreps only the $\bm{27}$ and $\bm{8}$ appear in the products $\bm{\overline{15}}\times\bm{3}$
and $\bm{6}\times\bm{3}$ needed to construct the states ${\cal H}\ket{D}$. Furthermore, $\bm{\overline{15}}\times\bm{3}$
contains both a $\bm{27}$ and an $\bm{8}$, while $\bm{6}\times\bm{3}$ contains only an $\bm{8}$. Therefore, it appears
that $D\to PP$ amplitudes can be represented using the following three independent reduced matrix elements.
\begin{equation}
A_{27} ~=~ \Braket{\bm{27}|{\cal O}^{\bm{\overline{15}}}|\bm{3}}\,, \quad
A_8 ~=~ \Braket{\bm{8}|{\cal O}^{\bm{\overline{15}}}|\bm{3}}\,, \quad
C_8 ~=~ \Braket{\bm{8}|{\cal O}^{\bm{6}}|\bm{3}}\,.
\label{eq:SU3me}
\end{equation}
These reduced matrix elements depend on five real parameters -- three magnitudes and two relative strong phases (one overall phase can be ignored).
The amplitudes for the CF $D\to PP$ processes can be constructed using these reduced matrix elements. As there are two different octets in
Eq.~(\ref{eq:FinStDec}), in general this would imply two additional reduced matrix elements for the ${\cal O}^{\bm{\overline{15}}}$ and ${\cal O}^{\bm{6}}$ operators,
$A_8^{(1)}$ and $C_8^{(1)}$ respectively. In Section \ref{sec:connections} we will show that indeed in order to get a complete description of these decays
one must include these additional matrix elements that correspond to the $SU(3)_F$-singlet final state.
Assuming these to be equal, $A_8^{(1)}=A_8$ and $C_8^{(1)}=C_8$ (an assumption that can be motivated by nonet symmetry), the final states containing the physical
$\eta$ and $\eta^\prime$ involve an admixture of singlet and octet $SU(3)_F$ amplitudes. The decay amplitudes into these final states can be written as
shown in Table~\ref{tab:CFeta}.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|c|}\hline
Decay&Representation\\
\hline
$D^0\to \bar{K^0}\eta$&$\frac{1}{10\sqrt{3}}\left[(3A_{27}-2A_8+\sqrt{10}C_8)\cos\theta+2(-2\sqrt{5}A_8+5\sqrt{2}C_8)\sin\theta\right]$\\
$D^0\to\bar{K^0}\eta^\prime$&$\frac{1}{10\sqrt{3}}\left[(3A_{27}-2A_8+\sqrt{10}C_8)\sin\theta+2(2\sqrt{5}A_8-5\sqrt{2}C_8)\cos\theta\right]$\\
\hline
$D_s^+\to\pi^+\eta$&$\frac{1}{5\sqrt{3}}\left[(3A_{27}-2A_8-\sqrt{10}C_8)\cos\theta+(2\sqrt{5}A_8+5\sqrt{2}C_8)\sin\theta\right]$\\
$D_s^+\to\pi^+\eta^\prime$&$-\frac{1}{5\sqrt{3}}\left[(-3A_{27}+2A_8+\sqrt{10}C_8)\sin\theta+(2\sqrt{5}A_8+5\sqrt{2}C_8)\cos\theta\right]$\\
\hline
\end{tabular}
\caption{$\eta-\eta^\prime$ decay-amplitude representations with $A_8^{(1)}=A_8$ and $C_8^{(1)}=C_8$
in the $SU(3)_F$ matrix-elements approach.}
\label{tab:CFeta}
\end{center}
\end{table}
Assuming, for simplicity, $\theta = \arcsin(1/3)$, all CF decay amplitudes can be written in terms of only three complex parameters of Eq.~(\ref{eq:SU3me}).
We provide a representation of the decay amplitudes in terms of those parameters in Table~\ref{tab:CF}.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|c|}\hline
Decay & $SU(3)_F$ Amplitude \\\hline
$D^0\to K^-\pi^+$ & $\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\frac{1}{5}\left(\sqrt{2}A_{27}+\sqrt{2}A_8-\sqrt{5}C_8\right)$ \\
$D^0\to \overline{K}^0\pi^0$ & $\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\frac{1}{10}\left(3A_{27}-2A_8+\sqrt{10}C_8\right)$ \\
$D^0\to \overline{K}^0\eta$ & $\frac{G_F}{\sqrt{2}} V_{ud}V_{cs}^* \, \frac{1}{45} \left(3\sqrt{6}A_{27} - 2(\sqrt{6}+\sqrt{15}) A_8+(5\sqrt{6}+2\sqrt{15})C_8\right)$ \\
$D^0\to\overline{K}^0\eta'$ & $\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^* ~\frac{1}{30\sqrt{3}} \left(3A_{27}+(8\sqrt{10}-2)A_8+(\sqrt{10}-40) C_8\right)$ \\ \hline
$D^+\to \overline{K}^0\pi^+$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\frac{1}{\sqrt{2}}A_{27}$ \\ \hline
$D^+_s\to\overline{K}^0 K^+$ & $\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\frac{1}{5}\left(\sqrt{2}A_{27}+\sqrt{2}A_8+\sqrt{5}C_8\right)$ \\
$D^+_s\to\pi^+\eta$ & $\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^* ~\frac{1}{45}\left(6\sqrt{6}A_{27} - 2 (2\sqrt{6}-\sqrt{15}) A_8 + (5\sqrt{6}-4\sqrt{15})C_8\right)$ \\
$D^+_s\to\pi^+\eta'$&$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\frac{1}{15\sqrt{3}}\left(3A_{27}-2(1+2\sqrt{10})A_8-(20+\sqrt{10}) C_8\right)$ \\ \hline
\end{tabular}
\end{center}
\caption{$SU(3)_F$ matrix-elements representation of Cabibbo-favored decays in the Standard Model. Note that
the $\eta-\eta^\prime$ mixing angle is set to $\theta = \arcsin(1/3)$.}
\label{tab:CF}
\end{table}
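As a quick consistency check, the fixed-angle entries of Table~\ref{tab:CF} can be reproduced from the general-angle expressions of Table~\ref{tab:CFeta} by substituting $\sin\theta=1/3$, $\cos\theta=2\sqrt{2}/3$. A minimal \texttt{sympy} sketch for the $D^0\to\overline{K}^0\eta$ row (the remaining rows follow the same pattern):
\begin{verbatim}
import sympy as sp

A27, A8, C8 = sp.symbols('A27 A8 C8')
s, c = sp.Rational(1, 3), 2*sp.sqrt(2)/3   # sin(theta) = 1/3, cos(theta) = 2*sqrt(2)/3

# General-theta expression for D0 -> K0bar eta (weak prefactor stripped)
general = ((3*A27 - 2*A8 + sp.sqrt(10)*C8)*c
           + 2*(-2*sp.sqrt(5)*A8 + 5*sp.sqrt(2)*C8)*s) / (10*sp.sqrt(3))

# Fixed-angle expression quoted in Table tab:CF
fixed = (3*sp.sqrt(6)*A27 - 2*(sp.sqrt(6) + sp.sqrt(15))*A8
         + (5*sp.sqrt(6) + 2*sp.sqrt(15))*C8) / 45

print(sp.simplify(general - fixed))        # prints 0
\end{verbatim}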
These matrix elements can be fit to experimentally-measured branching ratios.
The measured branching ratios, ${\cal B}$, for the CF $D\to PP$ decays are given in Table~\ref{tab:CFBR}. The absolute value of each
decay amplitude can be determined from the measured branching ratios using,
\begin{equation}
|{\cal A}_{D\to PP}| ~=~ \sqrt{\frac{8\pi\hbar\,m^2_D\,{\cal B}_{D\to PP}}{\tau_D\,p^*}} ,~
\end{equation}
where $p^*$ refers to the magnitude of the three-momentum of each final-state pseudoscalar in the $D$-meson rest frame.
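For orientation, a minimal numerical sketch of this conversion follows; the constants, masses, and lifetime are illustrative PDG-like inputs, and the final division by the weak prefactor $G_F|V_{ud}V_{cs}^*|/\sqrt{2}$, which we add here purely for illustration, brings the result to the ${\rm GeV}^3$ normalization of the fitted parameters quoted below.
\begin{verbatim}
import numpy as np

HBAR = 6.582e-25            # GeV s
GF   = 1.1664e-5            # GeV^-2
VUDVCS = 0.974 * 0.973      # approximate |V_ud V_cs*|

def p_star(mD, m1, m2):
    """Final-state three-momentum in the D rest frame (two-body kinematics)."""
    return np.sqrt((mD**2 - (m1 + m2)**2) * (mD**2 - (m1 - m2)**2)) / (2.0 * mD)

def amp_from_BR(BR, mD, tau, pst):
    """|A| from the branching ratio, using the formula above."""
    return np.sqrt(8.0 * np.pi * HBAR * mD**2 * BR / (tau * pst))

# Example: D0 -> K- pi+ (masses in GeV, lifetime in seconds)
mD0, mK, mpi, tauD0 = 1.8648, 0.4937, 0.1396, 4.10e-13
pst = p_star(mD0, mK, mpi)
A = amp_from_BR(0.0395, mD0, tauD0, pst)
print(pst, A, A / (GF / np.sqrt(2) * VUDVCS))   # the last number is in GeV^3
\end{verbatim}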
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Meson & Decay & Branching Ratio (\%) \\ \hline
$D^0$ &$K^-\pi^+$ &$3.950\pm0.031$ \\
&$\overline{K}^0\pi^0$ &$2.48\pm0.044$ \\
&$\overline{K}^0\eta$ &$1.018\pm0.012$ \\
&$\overline{K}^0\eta'$ &$1.898\pm0.064$ \\ \hline
$D^+$ &$\overline{K}^0\pi^+$ &$3.124\pm0.062$ \\ \hline
$D^+_s$ &$\overline{K}^0 K^+$ &$2.95\pm0.14$ \\
&$\pi^+\eta$ &$1.70\pm0.09$ \\
&$\pi^+\eta'$ &$3.94\pm0.25$ \\ \hline
\end{tabular}
\end{center}
\caption{Experimental branching ratios for CF $D$ decays taken from~\cite{Zyla:2020zbs}.
\label{tab:CFBR}}
\end{table}
Since there are eight measured $D\to PP$ branching ratios that depend on five real parameters (three magnitudes and two
relative phases of three reduced matrix elements), a $\chi^2$-minimization fit can be employed to determine the parameters.
Such a fit has three degrees of freedom. We perform a fit by constraining the $C_8$ amplitude to be purely real and find,
\begin{eqnarray}
\chi^2_{\rm min}/{\rm dof} &=& 5440/3 ,~ \nonumber\\
A_{27} &=& \(0.288 \pm 0.003 \)\,e^{(-108 \pm 2)^\circ i}\,{\rm GeV}^3 ,~ \nonumber\\
A_8 &=& \(0.25 \pm 0.02 \)\,e^{(-102 \pm 2)^\circ i}\,{\rm GeV}^3 ,~ \nonumber\\
C_8 &=& \(0.429 \pm 0.008 \)\,{\rm GeV}^3 .~
\end{eqnarray}
Clearly, the fit is very poor. This leads us to believe that the above description of CF $D\to PP$ decays in terms of
the minimum number of $SU(3)_F$ reduced matrix elements is incomplete and needs to be modified.
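To illustrate how such a fit can be set up numerically, the sketch below encodes the coefficients of $(A_{27},A_8,C_8)$ from Table~\ref{tab:CF} (weak prefactor stripped) and minimizes a $\chi^2$ with \texttt{scipy}. The ``data'' here are synthetic magnitudes generated from an assumed parameter point, purely for illustration; in practice one would use the $|{\cal A}|$ values extracted from Table~\ref{tab:CFBR} via the conversion above.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

s2, s3, s5, s6, s10, s15 = map(np.sqrt, (2, 3, 5, 6, 10, 15))

# Rows: coefficients of (A27, A8, C8) for the eight CF modes of Table tab:CF
COEFF = np.array([
    [s2/5,       s2/5,                 -s5/5              ],  # D0 -> K- pi+
    [3/10,       -2/10,                 s10/10            ],  # D0 -> K0bar pi0
    [3*s6/45,    -2*(s6+s15)/45,       (5*s6+2*s15)/45    ],  # D0 -> K0bar eta
    [3/(30*s3),  (8*s10-2)/(30*s3),    (s10-40)/(30*s3)   ],  # D0 -> K0bar eta'
    [1/s2,       0.0,                   0.0               ],  # D+ -> K0bar pi+
    [s2/5,       s2/5,                  s5/5              ],  # Ds -> K0bar K+
    [6*s6/45,    -2*(2*s6-s15)/45,     (5*s6-4*s15)/45    ],  # Ds -> pi+ eta
    [3/(15*s3),  -2*(1+2*s10)/(15*s3), -(20+s10)/(15*s3)  ],  # Ds -> pi+ eta'
])

def model(p):
    a27, a8, c8, ph27, ph8 = p          # C8 chosen real (overall phase convention)
    z = np.array([a27*np.exp(1j*ph27), a8*np.exp(1j*ph8), c8])
    return np.abs(COEFF @ z)

# Synthetic "data" from an assumed parameter point (illustration only)
p_true = [0.29, 0.25, 0.43, np.deg2rad(-108.0), np.deg2rad(-102.0)]
amp_exp = model(p_true)
sig_exp = 0.02 * amp_exp

chi2 = lambda p: np.sum(((model(p) - amp_exp) / sig_exp)**2)
res = minimize(chi2, x0=[0.3, 0.2, 0.4, -2.0, -2.0],
               method='Nelder-Mead', options={'maxiter': 10000})
print(res.fun, res.x)
\end{verbatim}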
In the next section we discuss another parametrization of the same matrix elements, in terms of the topological
flavor-flow amplitudes. We again identify the minimal set of basis amplitudes to describe the CF decays in the flavor-$SU(3)$ limit.
This minimal set appears to work better, seemingly providing an adequate description of CF decays, including those
with the $\eta$ and $\eta^\prime$ mesons in the final state.
\subsection{Topological flavor-flow approach}
The eight CF $D\to PP$ decays can also be described in terms of topological flavor-flow diagrams using
$SU(3)_F$ symmetry, as discussed in Ref.~\cite{Bhattacharya_2010}. Based on the Hamiltonian in Eq.~(\ref{eq:HCF}), the amplitudes
for the CF $D\to PP$ decays can be represented using four flavor topologies shown in Fig.~\ref{fig:SMdiagrams}. The basis of the topological
amplitudes is obtained by identifying the color-favored tree ($T$), color-suppressed tree ($C$), exchange ($E$), and annihilation ($A$) amplitudes.
These four topological amplitudes depend on seven real parameters, four magnitudes and three relative phases (once again one overall
phase is ignored).
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{SMTdiagram.pdf}
\caption{Color-favored tree ($T$)}
\end{subfigure} \hspace{0.3in}
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{SMCdiagram.pdf}
\caption{Color-suppressed tree ($C$)}
\end{subfigure} \vskip 0.3in
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{SMEdiagram.pdf}
\caption{Exchange ($E$)}
\end{subfigure} \hspace{0.3in}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{SMAdiagram.pdf}
\caption{Annihilation ($A$)}
\end{subfigure}
\caption{Topological flavor-flow diagrams used to describe CF $D\to PP$ decays.}
\label{fig:SMdiagrams}
\end{figure}
The amplitudes for CF decays with at least one $\eta$ or $\eta'$ meson in the final state have explicit dependence
on the $\eta-\eta'$ mixing angle. The topological flavor-flow representations of these decays are given in Table~\ref{tab:arbanglediag}.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|}\hline
Decay & Representation\\\hline
$D^0\to \overline{K}^0\eta$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\left(\frac{\cos\theta}{\sqrt{6}}(C-E)-\frac{\sin\theta}{\sqrt{3}}(C+2E)\right)$\\
$D^0\to\overline{K}^0\eta'$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\left(\frac{\sin\theta}{\sqrt{6}}(C-E)+\frac{\cos\theta}{\sqrt{3}}(C+2E)\right)$\\
\hline
$D_s^+\to\pi^+\eta$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\left(\frac{2\cos\theta}{\sqrt{6}}(T-A)-\frac{\sin\theta}{\sqrt{3}}(T+2A)\right)$\\
$D_s^+\to\pi^+\eta'$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\left(\frac{2\sin\theta}{\sqrt{6}}(T-A)+\frac{\cos\theta}{\sqrt{3}}(T+2A)\right)$\\
\hline
\end{tabular}
\caption{Amplitudes for $\eta-\eta'$ CF decays in the topological-diagram representation with the assumption that the octet and singlet diagrams are equal.}
\label{tab:arbanglediag}
\end{table}
Once again employing $\theta = \arcsin(1/3)$, in Table~\ref{tab:diagrep}, we express all CF $D\to PP$ decays in terms
of flavor-topological diagrams.
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|}\hline
Decay & Diagrammatic Amplitude \\\hline
$D^0\to K^-\pi^+$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~(T+E)$ \\
$D^0\to\overline{K}^0\pi^0$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\frac{1}{\sqrt{2}}(C-E)$ \\
$D^0\to\overline{K}^0\eta$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\frac{1}{\sqrt{3}}C$ \\
$D^0\to\overline{K}^0\eta'$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~-\frac{1}{\sqrt{6}}(C+3E)$ \\ \hline
$D^+\to\overline{K}^0\pi^+$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~(C+T)$ \\ \hline
$D^+_s\to\overline{K}^0 K^+$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~(C+A)$ \\
$D^+_s\to\pi^+\eta$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\frac{1}{\sqrt{3}}(T-2A)$ \\
$D^+_s\to\pi^+\eta'$ &$\frac{G_F}{\sqrt{2}}V_{ud}V_{cs}^*~\frac{2}{\sqrt{6}}(T+A)$ \\ \hline
\end{tabular}
\caption{Amplitudes for CF $D$ decays expressed in terms of $SU(3)_F$ flavor-topological diagrams.
\label{tab:diagrep}}
\end{table}
The above diagrammatic description of the CF $D\to PP$ processes leads to a parametrization of the eight decay modes in terms of seven real parameters.
This time, with one remaining degree of freedom, a $\chi^2$-minimization fit can once again be performed. We perform such a fit by constraining $T$ to be purely real and find,
\begin{eqnarray}
\chi^2_{\rm min}/{\rm dof} &=& 1.37/1 ~, \nonumber\\
T &=& (0.352 \pm 0.003)\,{\rm GeV}^3 ~,\nonumber\\
C &=& (0.286 \pm 0.002)\,e^{i (-150.0 \pm 0.4)^\circ}~{\rm GeV}^3~, \nonumber\\
E &=& (0.193 \pm 0.003)\,e^{i ( 119.3 \pm 0.8)^\circ}~{\rm GeV}^3~, \nonumber\\
A &=& (0.04 \pm 0.01 )\,e^{i (63 \pm 9)^\circ}~{\rm GeV}^3~.
\end{eqnarray}
This fit appears to be excellent, suggesting that the diagrammatic representation of CF $D\to PP$ decays aligns well with
experimental measurements. Note that the diagrammatic approach has one additional complex-valued amplitude (i.e.\ two additional real-valued parameters) compared to the $SU(3)_F$ matrix-elements approach. We observe a significant decrease in the minimum value of $\chi^2$ even though the diagrammatic description is still overdetermined, i.e. there are more observables than parameters.
In the following section we investigate the differences between the two approaches and present an argument for greater consistency between the two parametrizations.
\section{Connections between flavor-flow and matrix elements in $SU(3)_F$}
\label{sec:connections}
An obvious difference between the two approaches presented in the previous section is that, even in the limit of exact $SU(3)_F$ symmetry, the minimal bases contain
different numbers of complex independent parameters: three in the matrix-elements approach and four in the flavor-flow approach. Yet, we expect an equivalence between
the two approaches \cite{Gronau:1994rj,Zeppenfeld:1980ex}. The implication is either that the $SU(3)_F$ matrix-elements approach described above is incomplete or
that the diagrammatic approach has too many parameters. A key observation is that a one-to-one correspondence between group theory and diagrams can be
achieved by treating the decays involving only octets separately from those also involving singlets. In order to demonstrate these separate correspondences,
in Table \ref{tab:CFeta8eta1}, we have listed the $SU(3)_F$ matrix-elements and flavor-flow representations side-by-side.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|}\hline
Decay & Matrix Elements & Diagrams \\ \hline
\multicolumn{3}{|c|}{$SU(3)_F$ octet-octet final states} \\ \hline
$D^0\to K^-\pi^+$ &$\frac{1}{5}\(\sqrt{2}A_{27}+\sqrt{2}A_8-\sqrt{5}C_8\)$ & $T + E$ \\
$D^0\to\overline{K}^0\pi^0$ &$\frac{1}{10}\(3A_{27}-2A_{8}+\sqrt{10}C_{8}\)$ & $(C - E)/\sqrt{2}$ \\
$D^0\to\overline{K}^0\eta_8$ &$-\frac{1}{10\sqrt{3}}\left(3A_{27}-2A_{8}+\sqrt{10}C_{8}\right)$& $-(C - E)/\sqrt{6}$ \\
$D^+\to\overline{K}^0\pi^+$ &$\frac{1}{\sqrt{2}}A_{27}$ & $T + C$ \\
$D^+_s\to\overline{K}^0 K^+$ &$\frac{1}{5}\(\sqrt{2}A_{27}+\sqrt{2}A_{8}+\sqrt{5}C_{8}\)$ & $C + A$ \\
$D^+_s\to\pi^+\eta_8$&$\frac{1}{5\sqrt{3}}\(-3A_{27}+2A_{8}+\sqrt{10}C_{8}\)$ & $-2(T - A)/\sqrt{6}$ \\ \hline
\multicolumn{3}{|c|}{$SU(3)_F$ octet-singlet final states} \\ \hline
$D^0\to \overline{K}^0\eta_1$ &$\frac{1}{\sqrt{3}}\(\frac{2}{\sqrt{5}}A_8-\sqrt{2}C_8\)$ & $(C + 2E)/\sqrt{3}$ \\
$D^+_s\to\pi^+\eta_1$&$-\frac{1}{\sqrt{3}}\(\frac{2}{\sqrt{5}}A_{8}+\sqrt{2}C_{8}\)$ & $(T + 2A)/\sqrt{3}$ \\ \hline
\end{tabular}
\end{center}
\caption{Amplitudes for CF $D\to PP$ processes using the $SU(3)_F$ matrix-elements and flavor-flow representations. We have separated decays to final states
involving octets only and those also involving singlets. Overall factors containing $G_F$ and $V_{\rm CKM}$, that are identical in both representations, have
been left out for brevity. \label{tab:CFeta8eta1}}
\end{table}
Focusing our attention, first, on the octet-octet final-state amplitudes in Table \ref{tab:CFeta8eta1}, we see that there are six decay amplitudes that depend on three
$SU(3)_F$ reduced matrix elements. Therefore, there must be three relationships between these amplitudes. They are,
\begin{eqnarray}
{\cal A}(D^0\to K^-\pi^+) + \sqrt{2}{\cal A}(D^0\to\overline{K}^0\pi^0) &=& {\cal A}(D^+\to\overline{K}^0\pi^+) ,~ \\
{\cal A}(D^0\to\overline{K}^0\pi^0) + \sqrt{3}{\cal A}(D^0\to\overline{K}^0\eta_8) &=& 0 ,~ \\
\sqrt{2}\,{\cal A}(D^+\to\overline{K}^0\pi^+) + \sqrt{3}\,{\cal A}(D^+_s\to\pi^+\eta_8) &=& \sqrt{2}\,{\cal A}(D^+_s\to\overline{K}^0 K^+) .~
\end{eqnarray}
Of these relationships, the first is a consequence of isospin symmetry, while the other two originate from the full $SU(3)_F$ symmetry. Note that these relationships are
satisfied by both matrix elements and diagrams. Although there are still four diagrams in play, every amplitude can be written in terms of three distinct linear combinations of them.
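These sum rules can be verified symbolically from the diagram column of Table~\ref{tab:CFeta8eta1}; a short \texttt{sympy} check:
\begin{verbatim}
import sympy as sp

T, C, E, A = sp.symbols('T C E A')
s2, s3, s6 = sp.sqrt(2), sp.sqrt(3), sp.sqrt(6)

amp = {                                    # octet-octet diagram amplitudes
    'Km_pip':   T + E,
    'K0_pi0':   (C - E)/s2,
    'K0_eta8': -(C - E)/s6,
    'K0_pip':   T + C,
    'K0_Kp':    C + A,
    'pip_eta8': -2*(T - A)/s6,
}

print(sp.simplify(amp['Km_pip'] + s2*amp['K0_pi0'] - amp['K0_pip']))         # 0
print(sp.simplify(amp['K0_pi0'] + s3*amp['K0_eta8']))                        # 0
print(sp.simplify(s2*amp['K0_pip'] + s3*amp['pip_eta8'] - s2*amp['K0_Kp']))  # 0
\end{verbatim}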
One can establish a one-to-one correspondence between the combinations of these diagrams and matrix elements as follows. The $SU(3)_F$ reduced matrix elements
can be expressed in terms of the flavor-flow diagrams using
\begin{equation}\label{eq:octets}
\[\begin{matrix} A_{27} \\ A_{8} \\ C_{8} \end{matrix}\] ~=~
\[\begin{matrix} 0 & \sqrt{2} & 0 \\
\frac{5\sqrt{2}}{4} & -\sqrt{2}&\frac{5\sqrt{2}}{4} \\
-\frac{\sqrt{5}}{2} & 0 &\frac{\sqrt{5}}{2} \end{matrix}\]\,
\[\begin{matrix} T + E \\ T + C \\ C + A \end{matrix}\] .~
\end{equation}
Since the transformation matrix has a non-zero determinant, it is invertible, thus establishing a one-to-one correspondence. Next, we turn our attention to the octet-singlet final state
amplitudes in Table \ref{tab:CFeta8eta1}. Here, there are two decay amplitudes that depend on two $SU(3)_F$ reduced matrix elements and two combinations of flavor-flow diagrams.
Once again, the reduced matrix elements can be expressed in terms of diagrammatic amplitudes using
\begin{equation}\label{eq:singlets}
\[\begin{matrix} A_{8} \\ C_{8} \end{matrix}\] ~=~
\[\begin{matrix} -\frac{\sqrt{5}}{4} & -\frac{\sqrt{5}}{4} \\
\frac{\sqrt{2}}{4} & -\frac{\sqrt{2}}{4} \end{matrix}\]\,
\[\begin{matrix} C + 2 E \\ T + 2 A \end{matrix}\] .~
\end{equation}
Here too, we see that the transformation matrix is invertible and a one-to-one correspondence exists.
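Both statements are immediate to check numerically:
\begin{verbatim}
import numpy as np

s2, s5 = np.sqrt(2), np.sqrt(5)

M_octet = np.array([[0.0,      s2,   0.0    ],
                    [5*s2/4,  -s2,   5*s2/4 ],
                    [-s5/2,    0.0,  s5/2   ]])
M_singlet = np.array([[-s5/4, -s5/4],
                      [ s2/4, -s2/4]])

print(np.linalg.det(M_octet))    # -5*sqrt(5)/2, non-zero
print(np.linalg.det(M_singlet))  # sqrt(10)/8, non-zero
\end{verbatim}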
Although Eqs.~(\ref{eq:octets}) and (\ref{eq:singlets}) establish a one-to-one correspondence between matrix elements and sets of flavor-flow amplitudes, it is easy to see that the
correspondences are not the same. On the $SU(3)_F$ matrix-elements side this can be traced back to the definitions: while the $\bm{27}$ appears only in
the $\bm{8}\times\bm{8}$ final states, the $\bm{8}$ appears in both $\bm{8}\times\bm{8}$ and $\bm{8}\times\bm{1}$. In principle, these final state octets are different and the
corresponding amplitudes should be treated as such. On the side of topological flavor-flow amplitudes, similarly, this implies distinct diagrams for octet-octet and octet-singlet
final states. In order to make these distinctions clear for the matrix elements, we use the following (re)definitions:
\begin{eqnarray}
A_{27} ~=~ \Braket{\bm{27}|{\cal O}^{\bm{\overline{15}}}|\bm{3}}\,, \quad
A_8 ~=~ \Braket{\bm{8}_{\bm{8}\times\bm{8}}|{\cal O}^{\bm{\overline{15}}}|\bm{3}}\,, \quad
C_8 ~=~ \Braket{\bm{8}_{\bm{8}\times\bm{8}}|{\cal O}^{\bm{6}}|\bm{3}}\,, \label{eq:revSU3me8} \\
A^{(1)}_8 ~=~ \Braket{\bm{8}_{\bm{8}\times\bm{1}}|{\cal O}^{\bm{\overline{15}}}|\bm{3}}\,, \quad
C^{(1)}_8 ~=~ \Braket{\bm{8}_{\bm{8}\times\bm{1}}|{\cal O}^{\bm{6}}|\bm{3}}\,.~~~~~~~~~~~~~~ \label{eq:revSU3me1}
\end{eqnarray}
For diagrams, we simply add the subscript $1$ to represent the octet-singlet final states. Since these changes affect only the octet-singlet final states part of Table \ref{tab:CFeta8eta1},
we have listed the changes in Table \ref{tab:revCFeta8eta1}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|}\hline
Decay & Matrix Elements & Diagrams \\ \hline
\multicolumn{3}{|c|}{$SU(3)_F$ octet-singlet final states} \\ \hline
$D^0\to \overline{K}^0\eta_1$ &$\frac{1}{\sqrt{3}}\(\frac{2}{\sqrt{5}}A^{(1)}_8-\sqrt{2}C^{(1)}_8\)$ & $(C_1 + 2E_1)/\sqrt{3}$ \\
$D^+_s\to\pi^+\eta_1$&$-\frac{1}{\sqrt{3}}\(\frac{2}{\sqrt{5}}A^{(1)}_{8}+\sqrt{2}C^{(1)}_{8}\)$ & $(T_1 + 2A_1)/\sqrt{3}$ \\ \hline
\end{tabular}
\end{center}
\caption{Amplitudes for CF $D\to PP$ with octet-singlet final states using $SU(3)_F$ matrix-elements and diagrams. Overall factors containing $G_F$ and $V_{\rm CKM}$, that are identical in both representations, have been left out for brevity. \label{tab:revCFeta8eta1}}
\end{table}
Let us now reconsider the $\chi^2$ minimization fits presented in Section \ref{sec:fits} in light of the newly-defined amplitudes. The $SU(3)_F$ matrix-elements approach for the fits involved three complex-valued amplitudes ($A_{27}, A_8$, and $C_8$), rather than the five defined here ($A_{27}, A_8, C_8, A_8^{(1)}$, and $C_8^{(1)}$). The implicit assumptions in the fit were,
\begin{equation}
A_8^{(1)} ~=~ A_8~, \quad {\rm and} \quad C_8^{(1)} ~=~ C_8~.~
\end{equation}
The results of the fit were poor, showing that the matrix elements for the octet-octet and octet-singlet final states may not be identical. On the other hand, the diagrammatic approach involved four complex-valued amplitudes ($T, C, E$, and $A$), as opposed to eight ($T, C, E, A, T_1, C_1, E_1$, and $A_1$). The diagrammatic fits, therefore, assumed $X ~=~ X_1$ where $X = T, C, E$, and $A$.
In either scenario, matrix elements or diagrams, the parametrizations established in this section are insufficient by themselves to produce a meaningful fit.
As established above, the two parametrizations are equivalent, and each involves five complex-valued amplitudes corresponding to nine real-valued
parameters (five magnitudes and four relative phases). With only eight measured branching ratios, the fit is underdetermined: additional
input is necessary before a fit can be performed.
On the $SU(3)_F$ matrix-elements side a fit was made possible by the assumption that the reduced matrix elements for octet-octet and octet-singlet final states
were the same. These assumptions impose two complex constraints, reducing the number of fit parameters to five. The resulting fit was rather poor. On the other hand,
the flavor-flow side assumption that individual diagrams corresponding to octet-octet and octet-singlet final states are the same led to four complex constraints
reducing the number of fit parameters to seven. The resulting fit was good.
Due to the established equivalence between the $SU(3)_F$ matrix-elements and topological flavor-flow approaches, one naturally inquires about the consequence
of either set of assumptions on the alternate parametrization. The $SU(3)_F$ matrix-elements side assumptions, $A_8^{(1)}=A_8$ and $C_8^{(1)}=C_8$, lead to the
following relationships on the topological flavor-flow side,
\begin{eqnarray}
\sqrt{2}\[(T + C) + 5(E + A)\] + \sqrt{5}\[(T_1 + C_1) + 2(E_1 + A_1)\] &=& 0 ~,~~ \label{eq:toprel1} \\
\sqrt{10}\(T - C + E - A \) - \[T_1 - C_1 + 2(E_1 - A_1)\] &=& 0 ~.~~ \label{eq:toprel2}
\end{eqnarray}
Similarly, the flavor-flow side assumptions that $X ~=~ X_1$ where $X = T, C, E$, and $A$, lead to the number of reduced matrix elements being greater than that of the flavor-flow amplitudes. Then, the relations of Eqs.~(\ref{eq:octets}) and (\ref{eq:singlets}) lead to the following phenomenological relationship on the $SU(3)_F$ matrix-elements side,
\begin{equation}
3\,A_{27} + 8\,A_8 + 4\sqrt{10}\,A_8^{(1)} ~=~ 0 ~.~ \label{eq:SU3rel}
\end{equation}
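Indeed, substituting the diagrammatic expressions for $A_{27}$ and $A_8$ from Eq.~(\ref{eq:octets}) and for $A_8^{(1)}$ from Eq.~(\ref{eq:singlets}), with $X=X_1$, the combination above vanishes identically; a \texttt{sympy} check:
\begin{verbatim}
import sympy as sp

T, C, E, A = sp.symbols('T C E A')
s2, s5, s10 = sp.sqrt(2), sp.sqrt(5), sp.sqrt(10)

# Matrix elements in terms of diagrams, Eqs. (octets) and (singlets), with X = X_1
A27 = s2*(T + C)
A8  = sp.Rational(5, 4)*s2*(T + E) - s2*(T + C) + sp.Rational(5, 4)*s2*(C + A)
A81 = -(s5/4)*((C + 2*E) + (T + 2*A))

print(sp.simplify(3*A27 + 8*A8 + 4*s10*A81))   # prints 0, i.e. Eq. (SU3rel)
\end{verbatim}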
A fit performed with $A_{27}, C_8, A_8^{(1)}$, and $C_8^{(1)}$ as the available matrix elements, but with $A_8$ constrained
through the relationship in Eq.~(\ref{eq:SU3rel}), yields identical results as the diagrammatic fit assuming an equivalence between octet-octet and octet-singlet final states. We find,
\begin{eqnarray}
\chi^2_{\rm min}/{\rm dof} &=& 1.37/1 ~, \nonumber\\
A_{27} &=& (0.257 \pm 0.003)\,{\rm GeV}^3 ~,\nonumber\\
C_8 &=& (0.659 \pm 0.008)\,e^{i (97 \pm 1)^\circ}~{\rm GeV}^3~, \nonumber\\
A_8^{(1)} &=& (0.16 \pm 0.02 )\,e^{i (26 \pm 2)^\circ}~{\rm GeV}^3~, \nonumber\\
C_8^{(1)} &=& (0.313 \pm 0.005)\,e^{i(134 \pm 2)^\circ}~{\rm GeV}^3~.
\end{eqnarray}
Note that neither Eqs.~(\ref{eq:toprel1}) and (\ref{eq:toprel2}), nor Eq.~(\ref{eq:SU3rel}) automatically imply underlying relationships between the related parameters at a fundamental level. However, the fact that the fit on the diagrammatic side is far better than on the matrix-elements side indicates a phenomenological preference for Eq.~(\ref{eq:SU3rel}).
\begin{table}[!htbp]
\begin{centering}
\begin{tabular}{|c|c|} \hline
Input relationship & $\chi^2_{\rm min}$ \\ \hline
$A_{27} ~=~ 0$ & 9230 \\
$|A_8^{(1)}| ~=~ |A_8|$ and
$|C_8^{(1)}| ~=~ |C_8|$ & 419 \\
$|A_8^{(1)}| ~=~ |C_8^{(1)}|$ and
$|A_8| ~=~ |C_8|$ & 181 \\
$\rm arg(A_8^{(1)}) ~=~ \rm arg(C_8^{(1)})$ and
$\rm arg(A_8) ~=~ \rm arg(C_8)$ & 132 \\
$A_8 ~=~ 0$ & 130 \\
$C_8^{(1)} ~=~ C_8$ & 99.6 \\
$\rm arg(A_8^{(1)}) ~=~ \rm arg(A_8)$ and
$\rm arg(C_8^{(1)}) ~=~ \rm arg(C_8)$ & 93.4 \\
$C_8 ~=~ 0$ & 71.2 \\
$A_8^{(1)} ~=~ 0$ & 9.87 \\
$C_8^{(1)} ~=~ 0$ & 8.26 \\
$3\,A_{27} + 8\,A_8 + 4\sqrt{10}\,
A_8^{(1)} ~=~ 0$ & 1.37 \\
$A_8^{(1)} ~=~ A_8$ & 0.56 \\ \hline
\end{tabular}
\caption{Input relationships between $SU(3)_F$ matrix elements used to perform $\chi^2$ minimization fits, listed in descending
order of minimum $\chi^2$ value obtained in a fit. Each input relationship adds two real-valued constraints. The corresponding fits each
have one degree of freedom. $\rm arg(X)$ refers to the phase of the matrix element $X$.}\label{tab:input}
\end{centering}
\end{table}
For the sake of completeness, we performed additional seven-parameter $\chi^2$-minimization fits to the data,
each time changing the input relationship between the $SU(3)_F$ matrix elements. The minimum values of $\chi^2$
obtained in these fits are listed in Table~\ref{tab:input}. We see that all but one of the fits appear worse than the fit
with octet-octet and octet-singlet diagrams set equal to each other. The fit that has a smaller minimum $\chi^2$ is
one where we imposed the relationship $A_8^{(1)} = A_8$. For this fit, we find,
\begin{eqnarray}
\chi^2_{\rm min}/{\rm dof} &=& 0.56/1 ,~
\nonumber \\
A_{27} &=& (0.243 \pm 0.003) {\rm GeV}^3 ,~
\nonumber \\
A_8 &=& (0.13 \pm 0.01 )\,e^{i(-115 \pm 4)^\circ}~{\rm GeV}^3 ,~
\nonumber \\
C_8 &=& (0.617 \pm 0.007)\,e^{i ( 82 \pm 1)^\circ}~{\rm GeV}^3 ,~
\nonumber \\
C_8^{(1)}&=& (0.344 \pm 0.007)\,e^{i (165 \pm 2)^\circ}~{\rm GeV}^3 .~
\end{eqnarray}
Diagrammatically, this input relationship is equivalent to Eq.~(\ref{eq:toprel1}) on the flavor-flow side.
We conclude this section with the following observation. Since in the most general case the number of
basis decay parameters exceeds the number of experimentally-measured CF decay modes, additional
assumptions must be employed to extract individual reduced matrix elements or flavor-flow amplitudes
presented above. Yet, the relations such as Eqs.~(\ref{eq:octets}) and (\ref{eq:singlets}) are rather
general. This allows us to make a comment regarding hadronic final state interactions (FSI) in charm.
In the $SU(3)_F$ limit FSI cannot change the values of the reduced matrix elements. In other words,
action of the strong interaction $S$-matrix on the basis of the $SU(3)_F$ reduced matrix elements
leaves this basis invariant. This is not necessarily so for the individual flavor-flow amplitudes. Yet,
the {\it combinations} of these amplitudes are preserved under strong FSI. Extraction of the magnitudes
and phases of the individual amplitudes is only possible with additional assumptions.
\section{Conclusions}
\label{sec:conclusions}
Nonleptonic decays of charmed mesons provide a plethora of interesting information about QCD
dynamics in its nonperturbative regime. In this paper we discussed two phenomenological
parametrizations of those decay amplitudes based on $SU(3)_F$ symmetry, which have been proven
equivalent in the decays of B-mesons.
We argue that the application of such parametrizations to charm decays requires care due to the insufficient number of
experimentally-measured decay modes and the presence of
final state interactions. Noting that the Wolfenstein parameter $\lambda$ is external to any QCD-based
parametrization of decay amplitudes, the equivalency of the flavor $SU(3)_F$ and the topological flavor-flow
approaches must be separately realized for the Cabibbo-favored decays of charmed mesons. Including
decays to the physical $\eta$ and $\eta^\prime$ mesons in our description, we find relationships between
the basis parameters of the flavor-flow amplitudes. This can be interpreted from the point of view that quark
rescatterings imply that only certain linear combinations of flow diagrams can contribute to the decay amplitudes.
We presented extractions of the basis amplitudes in two approaches under various assumptions.
\bigskip
{\bf Acknowledgments}: This work was financially supported in part by NSF Grants No.~PHY2013984 (BB) and No.~PHY1915142
(AD \& JW), and by the U.S. Department of Energy under contract DE-SC0007983 (AAP). BB thanks J.~L. Rosner for useful conversations.
\bibliographystyle{JHEP}
\section*{Appendix}
\subsection*{The Ornstein-Zernike equation and liquid state structure}
The Ornstein-Zernike (OZ) equations for the total correlation functions $h_{ij}(r)$ of a binary fluid mixture are:
\begin{equation}
h_{ij}(r)=c_{ij}(r)+\sum_{p=b,s}\rho_{0,p}\int d\textbf{r}' c_{ip}(|\textbf{r}-\textbf{r}'|)h_{pj}(\textbf{r}'),
\label{eq:OZ}
\end{equation}
where $c_{ij}(r)$ are the pair direct correlation functions and $\rho_{0,i}$ for $i=b,s$ are the bulk fluid densities of the two species \cite{hansen}. The radial distribution functions are related to the total correlation functions via $g_{ij}(r)=1+h_{ij}(r)$. These coupled equations must be solved in conjunction with the following (exact) closure relations
\begin{equation}
c_{ij}(r)=-\beta\phi_{ij}(r)+h_{ij}(r)-\ln(1+h_{ij}(r))+B_{ij}(r),
\label{eq:closure}
\end{equation}
where $B_{ij}(r)$ are the so-called bridge-functions, $\phi_{ij}(r)$ are the pair potentials and $\beta=1/k_BT$ \cite{hansen}. The hypernetted chain (HNC) approximation consists of setting $B_{ij}(r)=0$ for all $r$. Due to the convolutions in \eqref{eq:OZ}, on Fourier transforming we obtain the following set of algebraic equations
\begin{equation}
\hat{h}_{ij}(k)=\hat{c}_{ij}(k)+\sum_{p=b,s}\rho_{0,p}\hat{c}_{ip}(k)\hat{h}_{pj}(k),
\label{eq:OZ_FT}
\end{equation}
where $\hat{h}_{ij}(k)$ and $\hat{c}_{ij}(k)$ are the Fourier transforms of $h_{ij}(r)$ and $c_{ij}(r)$, respectively. The partial static structure factors are related to these as follows \cite{hansen, dispersion_relation}:
\begin{equation}
\begin{split}
S_{bb}(k)&=1+\rho_{0,b}\hat{h}_{bb}(k),\\
S_{ss}(k)&=1+\rho_{0,s}\hat{h}_{ss}(k),\\
S_{bs}(k)&=\sqrt{\rho_{0,b}\rho_{0,s}}\hat{h}_{bs}(k).
\end{split}
\end{equation}
From \eqref{eq:OZ_FT} we obtain
\begin{equation}
\hat{h}_{ij}(k)=\frac{N_{ij}(k)}{D(k)},
\end{equation}
with the numerators given by
\begin{equation}
\begin{split}
N_{bb}(k)&=\hat{c}_{bb}(k)+\rho_{0,s}\left[\hat{c}_{bs}^2(k)-\hat{c}_{bb}(k)\hat{c}_{ss}(k)\right],\\
N_{ss}(k)&=\hat{c}_{ss}(k)+\rho_{0,b}\left[\hat{c}_{bs}^2(k)-\hat{c}_{bb}(k)\hat{c}_{ss}(k)\right],\\
N_{bs}(k)&=\hat{c}_{bs}(k).
\end{split}
\end{equation}
and the common denominator
\begin{equation}\label{D_of_k}
D(k)\equiv \left[1-\rho_{0,b}\hat{c}_{bb}(k)\right]\left[1-\rho_{0,s}\hat{c}_{ss}(k)\right]-\rho_{0,b}\rho_{0,s}\hat{c}_{bs}^2(k).
\end{equation}
For the stable liquid, $D(k)>0$ for all $k$. However, if this is not the case, then the liquid state is unstable. Thus, we can determine the stability threshold for the uniform liquid from solving for the locus in the phase diagram where a solution to the equation $D(k)=0$ appears.
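A minimal numerical sketch of this stability check follows, assuming (purely for illustration) a random-phase-type closure $\hat{c}_{ij}(k)=-\beta\hat{\phi}_{ij}(k)$ with Gaussian pair potentials in place of the HNC direct correlation functions used in the main text; the densities and potential parameters are likewise illustrative.
\begin{verbatim}
import numpy as np

def c_hat(k, eps, R, beta=1.0):
    """3D Fourier transform of -beta*phi with phi(r) = eps*exp(-(r/R)^2)."""
    return -beta * eps * np.pi**1.5 * R**3 * np.exp(-(k * R)**2 / 4.0)

k = np.linspace(1e-3, 20.0, 2000)
rho_b, rho_s = 1.0, 0.5                       # illustrative bulk densities
cbb = c_hat(k, eps=1.0, R=1.0)
css = c_hat(k, eps=1.0, R=0.7)
cbs = c_hat(k, eps=1.4, R=0.85)

D = (1.0 - rho_b*cbb) * (1.0 - rho_s*css) - rho_b*rho_s*cbs**2
print("uniform liquid linearly stable:", bool(np.all(D > 0)))
print("min D(k) =", D.min(), "at k =", k[D.argmin()])
\end{verbatim}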
\subsection*{Density functional theory for binary mixtures}
The density profiles $\rho_i(\mathbf{r})$ are obtained using classical density functional theory (DFT). The grand potential of the system is \cite{hansen, evans1979nature}
\begin{equation}
\Omega[\rho_b,\rho_s]=\mathcal{F}[\rho_b, \rho_s]+\sum_{i=b,s}\int d\textbf{r}\left(V_{i}^{\textrm{ext}}(\textbf{r})-\mu_{i}\right)\rho_{i}(\textbf{r}),
\end{equation}
where $\mathcal{F}$ is the intrinsic Helmholtz free energy functional, $V_{i}^{\textrm{ext}}(\mathbf{r})$ is the one-body external potential acting on species $i$ (here we set $V_{i}^{\textrm{ext}}(\textbf{r})\equiv 0$ for $i=b,s$, in order to study bulk phases) and $\mu_{i}$ are the chemical potentials. The intrinsic Helmholtz free energy can be split into two terms
\begin{equation}\label{helmholtz}
\mathcal{F}[\rho_b,\rho_s]=\mathcal{F}^{\textrm{id}}[\rho_b,\rho_s]+\mathcal{F}^{\textrm{ex}}[\rho_b,\rho_s],
\end{equation}
where the first term is the ideal gas contribution,
\begin{equation}
\mathcal{F}^{\textrm{id}}[\rho_b,\rho_s]=k_B T\sum_{i=b,s}\int d\textbf{r} \rho_{i}(\textbf{r})\left[\ln\left(\Lambda_{i}^d\rho_{i}(\textbf{r})\right)-1\right],
\end{equation}
where $\Lambda_{i}$ is the (irrelevant) thermal de Broglie wavelength and $d$ is the dimensionality of the system. The second term in Eq.~\eqref{helmholtz} is the excess Helmholtz free energy, arising from the interactions between the particles. Following Ramakrishnan and Yussouff \cite{ramakrishnan1979first}, the approximation we use here is to expand this functional around the homogeneous fluid state in a functional Taylor expansion and truncate at second order, giving
\begin{eqnarray}
\mathcal{F}^{\textrm{ex}}[\rho_b,\rho_s]=\mathcal{F}^{\textrm{ex}}[\rho_{0,b},\rho_{0,s}]
+\sum_{i=b,s}\int d\textbf{r}\mu^{\textrm{ex}}_{i}\delta\rho_{i}(\textbf{r})\nonumber\\
-\frac{1}{2\beta}\sum_{\substack{i=b,s\\ j=b,s}}\int d\textbf{r}\int d\textbf{r}'\,\delta\rho_{i}(\textbf{r})c_{ij}(\mid \textbf{r}-\textbf{r}'\mid)\delta\rho_{j}(\textbf{r}'),
\end{eqnarray}
where $\delta\rho_{i}(\textbf{r})=\rho_{i}(\textbf{r})-\rho_{0,i}$ and $\mu^{\textrm{ex}}_{i}=\mu_{i}-k_B T\ln\left(\rho_{0,i}\Lambda_{i}^d\right)$ are the excess chemical potentials. We further approximate the pair direct correlation functions $c_{ij}(r)$ via those obtained from the HNC theory. The equilibrium density profiles are those which minimise the grand potential $\Omega$ and which therefore satisfy the following pair of coupled Euler-Lagrange equations
\begin{equation}
\frac{\delta\Omega[\rho_b,\rho_s]}{\delta\rho_{i}}=0,
\end{equation}
for $i=b,s$.
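These equations can be solved by standard Picard iteration: with $V^{\textrm{ext}}_i=0$, the functional above gives $\rho_{i}(\textbf{r})=\rho_{0,i}\exp\big(\sum_{j}\int d\textbf{r}'\,c_{ij}(|\textbf{r}-\textbf{r}'|)\delta\rho_{j}(\textbf{r}')\big)$, which can be iterated with linear mixing, the convolutions being evaluated by FFT. Below is a one-dimensional sketch, with illustrative Gaussian direct correlation functions standing in for the HNC ones.
\begin{verbatim}
import numpy as np

L, N, alpha = 40.0, 1024, 0.05
x = np.linspace(0.0, L, N, endpoint=False)
kv = 2.0 * np.pi * np.fft.fftfreq(N, d=L/N)
rho0 = [1.0, 0.5]                              # illustrative bulk densities

def c_hat_1d(k, a, R):                         # FT of c(x) = a*exp(-(x/R)^2)
    return a * np.sqrt(np.pi) * R * np.exp(-(k * R)**2 / 4.0)

chat = [[c_hat_1d(kv, -1.0, 1.0),  c_hat_1d(kv, -1.4, 0.85)],
        [c_hat_1d(kv, -1.4, 0.85), c_hat_1d(kv, -1.0, 0.7)]]

rng = np.random.default_rng(1)
rho = [rho0[i] * (1.0 + 0.01 * rng.standard_normal(N)) for i in range(2)]
for _ in range(2000):                          # Picard iteration, linear mixing
    new = []
    for i in range(2):
        conv = sum(np.fft.ifft(chat[i][j] * np.fft.fft(rho[j] - rho0[j])).real
                   for j in range(2))
        new.append((1.0 - alpha) * rho[i] + alpha * rho0[i] * np.exp(conv))
    rho = new
print([r.mean() for r in rho], [r.std() for r in rho])
\end{verbatim}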
\subsection*{Dynamics: the growth or decay of small amplitude density perturbations}
When the equations of motion of the particles can be approximated by stochastic Brownian equations of motion, dynamical density functional theory (DDFT) shows that the non-equilibrium density distributions of the two species of particles, $\rho_i(\mathbf{r},t)$, are described by \cite{archer2004dynamical, Marconi:TarazonaJCP1999, mt00, ar04}:
\begin{equation}
\frac{\partial \rho_i}{\partial t} =
\nabla\cdot\left(\gamma_i\rho_i\nabla
\frac{\delta \Omega[\rho_s,\rho_b]}{\delta\rho_i}\right),
\label{eq:DDFT}
\end{equation}
where the mobility coefficient $\gamma_i=\beta D_i$ and where $D_i$ is the diffusion coefficient of species $i$. Note that if instead the particles evolve according to Newton's equations of motion, then the equations for the time evolution of the density profiles are more complicated, but in dense systems one can argue that Eq.~\eqref{eq:DDFT} still governs the long time (on diffusive timescales) behaviour \cite{Archer05}. If we consider the growth or decay of small amplitude density perturbations around the bulk value of the form $\delta\rho_{i}(\textbf{r},t)=\rho_{i}(\textbf{r},t)-\rho_{0,i}$, then we can expand Eqs.~\eqref{eq:DDFT} to obtain \cite{evans1979nature, archer2004dynamical, archer2012solidification, dispersion_relation, archer2016generation}:
\begin{eqnarray}
\frac{\partial \delta\rho_i(\textbf{r},t)}{\partial t} =
D_i\nabla^2\delta\rho_i(\textbf{r},t)\hspace{4.5cm}\nonumber\\
-D_i\rho_{0,i}\sum_{j=b,s}\nabla^2\int d\textbf{r}'\delta\rho_{j}(\textbf{r}',t)c_{ij}(\mid \textbf{r}-\textbf{r}'\mid)\nonumber\\
+O(\delta \rho_i^2).\hspace{5.2cm}
\label{eq:DDFT_linear}
\end{eqnarray}
Linearising this equation and then Fourier transforming, we obtain
\begin{equation}
\frac{\partial \hat{\rho}_i(\mathbf{k},t)}{\partial t} =
-k^2D_i\hat{\rho}_i(\mathbf{k},t)+k^2D_i\rho_{0,i}\sum_{j=b,s}\hat{\rho}_{j}(\mathbf{k},t)c_{ij}(k),
\label{eq:DDFT_linear_FT}
\end{equation}
where $\hat{\rho}_i(\mathbf{k},t)$ is the Fourier transform of $\delta\rho_i(\mathbf{r},t)$ and $k=|\mathbf{k}|$. Assuming $\hat{\rho}_i(\mathbf{k},t)\propto\exp(\omega(k)t)$, Eq.~\eqref{eq:DDFT_linear_FT} becomes \cite{dispersion_relation}:
\begin{equation}
\textbf{1}\omega(k)\hat{\boldsymbol{\rho}}=\textbf{L}\hat{\boldsymbol{\rho}},
\label{eq:matrix_eq}
\end{equation}
where $\hat{\boldsymbol{\rho}}=(\hat{\rho}_b,\hat{\rho}_s)$ and the matrix $\textbf{L}=\textbf{M}\textbf{E}$, where the two matrices $\textbf{M}$ and $\textbf{E}$ are defined as
\begin{equation}\label{eq:M_matrix}
\textbf{M}=-k^2\begin{pmatrix}
D_b\rho_{0,b} & 0\\
0 & D_s\rho_{0,s}
\end{pmatrix}
\end{equation}
and
\begin{equation}
\textbf{E}=\begin{pmatrix}
\left[\frac{1}{\rho_{0,b}}-\hat{c}_{bb}(k)\right] & -\hat{c}_{bs}(k)\\
-\hat{c}_{sb}(k) & \left[\frac{1}{\rho_{0,s}}-\hat{c}_{ss}(k)\right]
\end{pmatrix}.
\end{equation}
Solving Eq.~\eqref{eq:matrix_eq} for the dispersion relation $\omega(k)$, one obtains two branches of solutions, $\omega_{\pm}(k)$. These are given by
\begin{equation}\label{dispersion_relation_eq}
\omega_{\pm}(k)=\frac{1}{2}\textrm{Tr}(\textbf{M}\textbf{E})\pm \sqrt{\frac{1}{4}\textrm{Tr}(\textbf{M}\textbf{E})^2-\textrm{det}(\textbf{M}\textbf{E})}.
\end{equation}
Further details of this derivation can be found in Ref.~\cite{dispersion_relation}. Note that the equation $\textrm{det}(\textbf{E})=0$ is entirely equivalent to solving $D(k)=0$, from Eq.~\eqref{D_of_k}.
It is worth recalling that the values of the diffusion coefficients $D_b$ and $D_s$ do not ever determine which structure is the thermodynamic equilibrium state, i.e.\ the minimum of the free energy. Therefore, the values of $D_b$ and $D_s$ are not involved in determining the phase diagram in Fig.~4 of the main text. Nor do the values of $D_b$ and $D_s$ determine the locations of the linear stability threshold lines in the phase diagram, i.e.\ the lines in Fig.~4 where either $\omega_+(k_1)=0$ or $\omega_+(k_2)=0$ or $\omega_+(k_{s})=0$. This is because these lines come from solving the equation $\textrm{det}(\textbf{E})=0$, whilst the values of the diffusion coefficients only enter the mobility matrix $\textbf{M}$ in Eq.~\eqref{eq:M_matrix}. That said, the precise value of the ratio $D_b/D_s$ does influence the dispersion relation curves, but does not affect where the peaks occur (i.e.\ does not change $k_1$ or $k_2$). Thus, the value of the ratio $D_b/D_s$ is only relevant to the non-equilibrium dynamics of the system. However, since here we are solely ultimately interested in the equilibrium phase behaviour of the system, which does not depend on $D_b/D_s$, we therefore set this ratio equal to 1, i.e.\ we set $D_b=D_s=D$.
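For completeness, a sketch of evaluating $\omega_{+}(k)$ as the larger eigenvalue of $\textbf{M}\textbf{E}$, reusing the illustrative Gaussian $\hat{c}_{ij}(k)$ model introduced in the stability sketch above:
\begin{verbatim}
import numpy as np

def c_hat(k, eps, R, beta=1.0):
    return -beta * eps * np.pi**1.5 * R**3 * np.exp(-(k * R)**2 / 4.0)

rho = np.array([1.0, 0.5])                    # (rho_b, rho_s), illustrative
Dd  = np.array([1.0, 1.0])                    # (D_b, D_s); here D_b = D_s = D
k = np.linspace(1e-3, 20.0, 2000)
omega_plus = np.empty_like(k)
for n, kk in enumerate(k):
    c = np.array([[c_hat(kk, 1.0, 1.0),  c_hat(kk, 1.4, 0.85)],
                  [c_hat(kk, 1.4, 0.85), c_hat(kk, 1.0, 0.7)]])
    M = -kk**2 * np.diag(Dd * rho)
    E = np.diag(1.0 / rho) - c
    omega_plus[n] = np.linalg.eigvals(M @ E).real.max()
print("max omega_+ =", omega_plus.max(), "at k =", k[omega_plus.argmax()])
\end{verbatim}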
\subsection*{Note on the width of the coexistence region between the QC and the $b$-Hex phase}
In the main text we comment briefly on the fact that in the phase diagram in Fig.~4 the coexistence region between the QC and the $b$-Hex phase is fairly small. It is worth expanding on those comments here. That the coexistence region is narrow is important, because it implies that in large portions of the phase diagram (as displayed in Fig.~4), the QC is the thermodynamic equilibrium. In the main text we give the width of the coexistence region $\Delta\chi\approx0.04$ for $J=2$. For lower values of $J$ the coexistence region becomes a little broader (e.g.\ at $J=1.5$ the width of the coexistence region $\Delta\chi\approx0.06$) and for higher $J$ it is narrower. Other model systems where the coexistence gap between the QC and hexagonal phases is very narrow include the systems described in Refs.~\cite{andy_PRL_13, dan}, so based on our experience with those systems, the narrowness in the present system is perhaps not too surprising.
Another observation on this issue worth noting is the following: If one initiates the system in the QC state and then decreases $\chi$ in small steps, following the QC branch of solutions, one eventually falls off that branch onto the $b$-Hex phase branch of solutions. For example, for $J=1.5$ this occurs at $\chi\approx0.3$. Some authors would refer to this as the ``spinodal'' point for the QC phase. In other words, for $J=1.5$ and $\chi<0.3$ the QC state is no longer a stable solution to the model equations. In a similar way, if one initiates the system in the $b$-Hex state and then increases $\chi$ in small steps, following the $b$-Hex branch of solutions, one eventually falls off that branch onto a state that is a periodic approximant for the QC state. For $J=1.5$ this $b$-Hex spinodal point occurs at $\chi\approx0.37$. In other words, for $J=1.5$ and $\chi>0.37$ the $b$-Hex state is no longer a stable solution to the model equations. This fact that the system falls from $b$-Hex branch of solutions onto a branch related to the QC state is a very strong indicator that the QC is the thermodynamic equilibrium state. Moreover, the distance in the phase diagram between these two spinodal points $0.37-0.30=0.07$, is an upper bound for the coexistence region width $\Delta\chi$.